December 4, 2017

The Best of AI: New Articles Published This Month (November 2017)

10 data articles handpicked by the Sicara team, just for you

Welcome to the November edition of our best and favorite articles in AI that were published this month. We are a Paris-based company that does Agile data development. This month, we spotted articles about Hinton’s Capsule Networks, Generative Adversarial Networks and Deep Learning. We advise you to have a Python environment ready if you want to follow some tutorials :). Let’s kick off with the comic of the month:

Correlation doesn’t imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing ‘look over there’.

1 — Understanding Hinton’s Capsule Networks. Part I, II, III: How Capsules Work

OK, we missed the big news in the October edition of Best of AI: Hinton's Capsule Networks. Geoffrey Hinton, one of the fathers of deep learning, released a completely new type of neural network together with its training algorithm. In a nutshell, a capsule network lets the classifier incorporate translation and rotation information, which gives it a built-in understanding of 3D space. The first tests on the MNIST dataset are really promising. Thrilling!
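If you want a feel for the building blocks before diving into the series, below is a minimal sketch (not taken from the articles) of the "squash" non-linearity from Sabour, Frosst and Hinton's paper "Dynamic Routing Between Capsules": a capsule outputs a vector whose length encodes the probability that the entity it detects is present, so raw vectors are rescaled to a length below 1 without changing their direction. NumPy is used here purely for illustration.

```python
# Illustrative sketch of the capsule "squash" non-linearity (NumPy only).
# It shrinks a vector to a length in [0, 1) while preserving its direction,
# so the length can be read as a presence probability.
import numpy as np

def squash(s, eps=1e-8):
    squared_norm = np.sum(s ** 2, axis=-1, keepdims=True)
    scale = squared_norm / (1.0 + squared_norm)
    return scale * s / np.sqrt(squared_norm + eps)

v = squash(np.array([3.0, 4.0]))
print(v, np.linalg.norm(v))  # direction of [3, 4] kept, length squashed to ~0.96
```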

Read Understanding Hinton’s Capsule Networks — from Max Pechyonkin

2 — Software 2.0

Have you heard about Software 2.0? Well, it comes after Software 1.0, which is basically the classical software stack we are familiar with. Software 2.0 consists of writing software mostly with neural networks. Imagine getting your program just by specifying its behavior! It sounds like science fiction to me, but Andrej Karpathy details some very interesting pros and cons of this kind of software.

Read Software 2.0 — from Andrej Karpathy

3 — The State of ML and Data Science 2017

Mostly known for organizing data science competitions, Kaggle released a great survey about data science and machine learning jobs. Based on 16,000 responses from data scientists all over the world, the results are displayed in great interactive visualizations. You can apply filters to find exactly the piece of information you want. And if you do not like that analysis, the complete dataset is available for free. Kaggle even offers cash prizes for it!

Read The state of ML and Data Science 2017 — from Kaggle

4 — How Facebook's Oracular Algorithm Determines the Fates of Start-Ups

Have you heard of Casper, Dollar Shave Club, or Hubble? They are successful direct-to-consumer companies, and Facebook helped a lot in launching them. Selling everyday goods like mattresses, razors, or contact lenses used to require massive spending on radio or TV advertising campaigns, which made things really difficult for small companies before Facebook. Now, a company can reach exactly the right audience for a couple of dollars. This exhaustive article analyses the consequences of Facebook's services on the global economic fabric. If, like me, you like successful start-up stories, this article is made for you ;).

Read How Facebook's Oracular Algorithm Determines the Fates of Start-Ups — from Burt Helm

5 — Eager Execution in TensorFlow

This Google research post introduces eager execution for TensorFlow. It makes debugging and development with TensorFlow far more interactive, which is great news for a lot of programmers. Developers can now write TensorFlow code without having to structure their computations as a static graph. Eager code is not yet as performant as graph code, but it is now much easier to prototype computations in TensorFlow.
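To see what this changes in practice, here is a hedged sketch, assuming a TensorFlow 1.x build where eager execution is available in the core API (at the time of the announcement it still lived under tf.contrib.eager): operations return concrete values immediately instead of being compiled into a static graph and run in a session.

```python
# Illustrative sketch only: with eager execution enabled, TensorFlow ops
# run immediately and return values, with no graph construction or tf.Session.
import tensorflow as tf

tf.enable_eager_execution()  # must be called once, at program startup

x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
y = tf.matmul(x, x)

print(y)          # the result is available right away
print(y.numpy())  # or convert it to a NumPy array
```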

Read Eager Execution in TensorFlow — from Yaroslav Bulatov

6 — Fantastic GANs and Where To Find Them Part II

Generative Adversarial Networks (GANs) are an idea from Ian Goodfellow, who came up with them while he was a student at the University of Montreal (he is now at Google Brain). According to Yann LeCun, director of AI Research at Facebook, "this, and the variations that are now being proposed, is the most interesting idea in the last 10 years in Machine Learning, in my opinion". Well, if you enjoy machine learning you probably already know these networks ;). Just a quick reminder of the idea: two neural networks learn by competing against each other. They are useful for unsupervised learning and image generation. A lot of interesting models have been published, and this article analyses the best of them.
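As a refresher on those mechanics, here is a minimal, illustrative training loop written in PyTorch (the article itself is framework-agnostic; the tiny 1-D networks, the toy "real" distribution and the hyper-parameters below are arbitrary choices): the discriminator learns to separate real samples from generated ones, while the generator learns to fool it.

```python
# Toy GAN sketch: a generator and a discriminator trained against each other
# on a made-up 1-D Gaussian "real" distribution.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 1) * 0.5 + 2.0   # samples from the "real" distribution
    fake = generator(torch.randn(64, 8))    # samples from the generator

    # Discriminator step: label real samples 1 and generated samples 0.
    d_opt.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator label fakes as real.
    g_opt.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()
```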

Read Fantastic GANs and Where To Find Them Part II — from Guim Perarnau

7 — Feature Visualization

Feature visualization is a growing line of research. It aims to understand how neural networks detect features in order to classify a dataset of images. The procedure relies on optimization techniques: to understand what feature a layer or a neuron detects, researchers search for input images that maximize its activation. The resulting images are very interesting and very visual! Other great approaches are detailed in this article. This quest to interpret and understand neural networks seems very promising.
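For a flavour of that optimization procedure, here is a hedged sketch of the activation-maximization idea (not the article's code; the pretrained VGG16, the layer index, the channel and the hyper-parameters are arbitrary illustrative choices): start from random noise and follow the gradient that makes one channel of one layer fire as strongly as possible.

```python
# Illustrative activation maximization: optimize an input image so that one
# channel of an intermediate VGG16 layer produces a high activation.
import torch
import torchvision.models as models

features = models.vgg16(pretrained=True).features.eval()
for p in features.parameters():
    p.requires_grad_(False)

target_layer, target_channel = 10, 42          # arbitrary layer/channel to visualize
image = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    x = image
    for i, layer in enumerate(features):
        x = layer(x)
        if i == target_layer:
            break
    loss = -x[0, target_channel].mean()        # maximize the channel's mean activation
    loss.backward()
    optimizer.step()

# `image` now (roughly) shows the pattern this channel responds to.
```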

Read Feature Visualization — from Chris Olah et al.

8 — Humans Still Better Than AI (StarCraft)

You have probably heard about Alphabet's AlphaGo defeating the Go world champion Lee Sedol. That victory was a significant breakthrough for machine learning. The next step for game-playing AI is real-time strategy games such as League of Legends or StarCraft. This article details the results of the latest StarCraft competition between human players and AIs. Humans won.

Read Humans Still Better Than AI (StarCraft) — from Yoochul Kim et al.

9 — This AI Learns Your Fashion Sense and Invents Your Next Outfit

Numerous domains have been impacted by machine learning, and now it is fashion's turn. Researchers from the University of California, San Diego, and Adobe can now talk about "predictive fashion": they trained an AI to learn a person's style and to generate artificial images of clothes and shoes that match it. Rather than an invasion by AI, this simply reflects that fashion is an arena providing a lot of data, the resource that feeds neural networks. So it is not that surprising that AI is starting to push this border.

Read This AI Learns Your Fashion Sense and Invents Your Next Outfit — from Jackie Snow

10 — Causal Inference With pandas.DataFrames

Adam Kelleher is developing a new pandas-based package that aims to simplify causality analysis. Have you heard about Simpson's paradox? Causality is often difficult to establish: we can plot x-y graphs or compute correlations, but we often forget hidden confounding factors that skew the results. A very good example of such a situation is described in this article. Using Robins' g-formula and machine learning techniques, Adam Kelleher details an elegant strategy for tackling these problems.
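To see why those hidden factors matter so much, here is a small, self-contained illustration of Simpson's paradox using plain pandas (made-up numbers, and not the package or the example from the article): the treatment has a lower recovery rate overall, yet a higher rate within every severity subgroup, because severe cases were treated far more often.

```python
# Made-up data illustrating Simpson's paradox with pandas group-bys.
import pandas as pd

def patients(group, treated, recovered, n):
    return [{"group": group, "treated": treated, "recovered": recovered}] * n

df = pd.DataFrame(
    patients("mild",   1, 1, 19) + patients("mild",   1, 0,  1) +   # treated, mild:   95% recover
    patients("mild",   0, 1, 70) + patients("mild",   0, 0, 10) +   # untreated, mild: 88% recover
    patients("severe", 1, 1, 40) + patients("severe", 1, 0, 40) +   # treated, severe:   50% recover
    patients("severe", 0, 1,  4) + patients("severe", 0, 0, 16)     # untreated, severe: 20% recover
)

print(df.groupby("treated")["recovered"].mean())             # overall: treatment looks worse
print(df.groupby(["group", "treated"])["recovered"].mean())  # per group: treatment looks better
```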

Read Causal Inference With pandas.DataFrames — from Adam Kelleher

We hope you’ve enjoyed our list of the best new articles in AI this month.

Thanks to Flavian Hautbois. 
