Welcome to the November edition of our best and favorite articles in AI published this month. We are a Paris-based company doing Agile data development. This month we spotted articles about Hinton’s Capsule Networks, Generative Adversarial Networks and Deep Learning. We advise you to have a Python environment ready if you want to follow some of the tutorials :). Let’s kick off with the comic of the month:
Ok, we missed the big news in the Best of AI October: Hinton’s Capsule Networks. Geoffrey Hinton, one of the fathers of deep learning, released a completely new type of neural network along with its training algorithm. In a nutshell, capsule networks let the classifier incorporate translation and rotation information, which provides a built-in understanding of 3D space. The first tests on the MNIST dataset are really promising. Thrilling!
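If you want a feel for the math, the paper’s “squash” nonlinearity is small enough to try in a few lines of NumPy. Only the formula comes from the paper; the toy input vector and the `eps` guard against division by zero are our own illustration:

```python
import numpy as np

# The "squash" nonlinearity from the capsule networks paper:
# it rescales a capsule's output vector so its length lies in [0, 1)
# while preserving its direction. The length is then read as the
# probability that the entity the capsule represents is present.
def squash(s, eps=1e-9):
    norm = np.linalg.norm(s)
    return (norm ** 2 / (1 + norm ** 2)) * s / (norm + eps)

v = squash(np.array([3.0, 4.0]))  # input vector of length 5
print(np.linalg.norm(v))          # ≈ 0.96 -- long inputs saturate toward 1
```

Short vectors are squashed toward length 0, long ones toward (but never reaching) 1, which is what lets a vector length act as a probability.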
Have you heard about Software 2.0? Well, it comes after Software 1.0, which is basically the classical software stack we are familiar with. Software 2.0 consists of writing software using mostly neural networks. Imagine getting your program just by specifying its behavior! It sounds like science fiction to me, but Andrej Karpathy details some very interesting pros and cons of such software.
Mostly known for organizing data science competitions, Kaggle released a great survey about Data Science and Machine Learning jobs. Based on 16,000 responses from data scientists all over the world, the results are displayed in great interactive visualizations. You can apply filters to drill down to exactly the information you want. And if you do not like that analysis, the complete dataset is available for free. Kaggle even offers cash prizes for it!
Have you heard about Casper? Dollar Shave Club? Hubble? They are successful direct-to-consumer companies, and Facebook helped a lot to launch them. Selling everyday goods like mattresses, razors or lenses used to require massive spending on radio or TV advertising campaigns, which made things really difficult for small companies before Facebook. Now, a company can reach exactly the right audience for a couple of dollars. This exhaustive article analyses the consequences of Facebook’s services on the global economic fabric. If, like me, you enjoy start-up success stories, this article is made for you ;).
This Google research paper introduces eager execution for TensorFlow, which makes debugging and development with TensorFlow far more interactive. That’s great news for a lot of programmers: developers can now write TensorFlow code without having to structure their computations as a static graph. Eager code is not yet as performant as graph code, but it is now much easier to prototype computations in TensorFlow.
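To see why this matters, here is a toy contrast between the two styles in plain Python. This is only a sketch of the idea, not TensorFlow’s actual API:

```python
# Graph style: describe the computation first, execute it later.
# Intermediate values only exist inside run(), so they are hard to inspect.
graph = [lambda x: x * 2, lambda x: x + 1]

def run(x):
    for op in graph:
        x = op(x)
    return x

print(run(3))  # 7

# Eager style: each operation executes immediately, like ordinary Python,
# so intermediate results can be printed or debugged on the spot.
x = 3
x = x * 2
print(x)  # 6 -- inspectable right here
x = x + 1
print(x)  # 7
```

In graph mode you debug a runtime that replays your description; in eager mode you debug your own lines with ordinary tools, which is the interactivity the paper is after.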
Generative Adversarial Networks (GANs) are an idea Ian Goodfellow came up with when he was a student at the University of Montreal (he is now at Google Brain). According to Yann LeCun, director of AI Research at Facebook, “this, and the variations that are now being proposed is the most interesting idea in the last 10 years in Machine Learning, in my opinion”. Well, if you enjoy Machine Learning you probably already know these networks ;). Just a quick reminder of the idea: two neural networks learn by competing against each other. They are useful for unsupervised learning and image generation. A lot of interesting models have been published, and this article analyses the best of them.
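The competition boils down to two opposing losses. Here is a NumPy sketch of them on toy 1-D data; the logistic “discriminator” and the sample distributions are made up purely for illustration, not taken from any real model:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical discriminator D: maps a sample to the probability of
# it being "real". Here just a fixed logistic model on 1-D data.
def D(x, w=2.0, b=-1.0):
    return 1.0 / (1.0 + np.exp(-(w * x + b)))

real = rng.normal(1.0, 0.1, size=100)  # "real" data
fake = rng.normal(0.0, 0.1, size=100)  # generator samples (untrained)

# The discriminator maximizes E[log D(real)] + E[log(1 - D(fake))],
# i.e. it minimizes this loss:
d_loss = -(np.mean(np.log(D(real))) + np.mean(np.log(1 - D(fake))))

# The generator tries to fool D, maximizing E[log D(fake)]:
g_loss = -np.mean(np.log(D(fake)))

print(d_loss, g_loss)
```

Training alternates gradient steps on these two losses; the generator improves exactly when it manages to push `g_loss` down against the best discriminator the other player can find.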
Read Fantastic GANs and Where To Find Them Part II, by Guim Perarnau
Feature visualization is a growing thread of research. It aims to understand how neural networks detect the features they use to classify a dataset of images. The procedure uses optimization techniques: to understand what feature a layer or a neuron detects, researchers search for input images that maximize its activation. The resulting images are very interesting and very visual! Other great approaches are detailed in this article. This quest to interpret and understand neural networks looks very promising.
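That search is just gradient ascent on the input. The sketch below uses a hypothetical linear “neuron” so the gradient is trivial; real feature visualization backpropagates through a deep network instead, and adds regularization so the images stay natural-looking:

```python
import numpy as np

# A made-up linear "neuron": it responds strongly to the pattern w.
w = np.array([1.0, -2.0, 0.5])

def activation(x):
    return float(w @ x)

# Start from a blank input and ascend the activation's gradient
# with respect to the input (for a linear neuron, that gradient is w).
x = np.zeros(3)
lr = 0.1
for _ in range(50):
    x += lr * w

# x has drifted toward the pattern the neuron detects.
print(activation(x))  # ≈ 26.25
```

The optimized input ends up proportional to `w`, i.e. it literally depicts the feature the unit is tuned to, which is why the resulting images are so interpretable.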
You probably heard about Alphabet’s AlphaGo defeating the Go world champion Lee Sedol. That victory was a significant breakthrough in Machine Learning. The next step for gaming AI is real-time strategy games, such as League of Legends or StarCraft. This article details the results of the latest StarCraft competition between human players and AI. Humans won.
Numerous domains have been impacted by Machine Learning, and now it is fashion’s turn. Researchers from the University of California, San Diego, and Adobe can now talk about “predictive fashion”: they succeeded in training an AI to learn a person’s style and to generate artificial images of clothes and shoes matching that style. Far from an AI invasion, fashion is simply an arena providing a lot of data, the resource that feeds neural networks. So it is not that surprising that AI is starting to push that border.
A new pandas-based package is being developed by Adam Kelleher. It aims to simplify causal analysis. Have you heard about Simpson’s paradox? Causality is often difficult to establish: we can plot x–y graphs or compute correlations, but we often forget hidden confounding factors that skew the results. A very good example of such a situation is described in this article. Using Robins’ g-formula and Machine Learning techniques, Adam Kelleher details an elegant strategy to tackle these problems.
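Simpson’s paradox itself is easy to reproduce with synthetic data: within each group the trend is positive, but pooling the groups flips its sign. The numbers below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hidden groups. Within each group, y rises with x, but the group
# with larger x values has a much lower baseline, so the pooled trend
# points the other way.
x_a = rng.uniform(0, 1, 200)
y_a = x_a + 3.0 + rng.normal(0, 0.05, 200)
x_b = rng.uniform(2, 3, 200)
y_b = x_b - 2.0 + rng.normal(0, 0.05, 200)

def corr(x, y):
    return float(np.corrcoef(x, y)[0, 1])

within_a = corr(x_a, y_a)  # positive
within_b = corr(x_b, y_b)  # positive
pooled = corr(np.concatenate([x_a, x_b]),
              np.concatenate([y_a, y_b]))  # negative!
print(within_a, within_b, pooled)
```

Ignoring the group variable here reverses the conclusion entirely, which is exactly the kind of hidden-confounder trap that causal tools like the g-formula are built to handle.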
We hope you’ve enjoyed our list of the best new articles in AI this month.
A Progressive Web Application with Vue JS, Webpack & Material Design [Part 1]
This tutorial walks through creating a basic but complete progressive web application with VueJS and Webpack, from scratch.
Custom Maps on react-native-maps And react-google-maps
Using Open Data Shapefiles
Python: How to Train your Own Model with NLTK and Stanford NER Tagger?
This guide shows how to perform NER tagging with NLTK and the Stanford NER tagger in Python.