Welcome to the March edition of our best and favorite articles in AI that were published this month. We are a Paris-based company that does Agile data development. This month, we spotted articles about Neural Networks, Neural Architecture Search and more. We advise you to have a Python environment ready if you want to follow some tutorials :). Let’s kick off with the comic of the month:
Neural networks currently provide the best solutions for image-related problems: image classification, object detection… But until now, it has been hard to understand how the decision process works across layers.
OpenAI and Google researchers present Activation Atlases, a new way to dive into neural nets and visualize how the decision-making process works. It shows the various concepts each layer learns before the network finally classifies an image.
You can even start making your own activation atlases using the Jupyter notebooks they provide. I know I will.
Seeing AI is a free app created by Microsoft for the blind and low-vision community. Based on object and scene recognition algorithms, it narrates a description of a photo when you tap on it. It can also describe a person and their mood.
Even cooler, you can now get a description of the objects in an image and how they are related, which makes the scene easier to understand.
— by Devin Coldewey
The ‘gestalt effect’ is the idea that the human brain can perceive a whole image from fragments of it. The theory dates back to the early 20th century, but the current question is: can neural nets do the same?
Researchers from Google Brain discovered that when a neural network is trained to recognize complete triangles, it can also classify an illusory triangle (image A) as a complete one.
This result shows how neural nets imitate the brain once again and it may be ‘a first step into a new field of machine psychology’.
On March 21st, Google published its first AI-powered Doodle to celebrate the well-known composer and musician Johann Sebastian Bach.
Based on the Coconet ML model, this Doodle can harmonize any melody you create in Bach’s style. Briefly, to train the model they used a dataset of 306 chorale harmonizations by Bach. For each piece, they randomly erase some notes and let the model predict the missing notes to restore the music.
A detailed explanation of how and why the model works is published by Google Magenta.
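The erase-and-predict training setup is easy to sketch. Everything below (the array shapes, the -1 sentinel, the toy chorale fragment) is invented for illustration and is not taken from the Coconet codebase:

```python
import numpy as np

def mask_piece(piece, mask_rate=0.3, rng=None):
    """Randomly erase notes from a piece, Coconet-style.

    `piece` is a (timesteps, voices) array of MIDI pitches.
    Returns the corrupted input (erased notes set to -1), the
    boolean mask of erased positions, and the original targets.
    """
    rng = rng or np.random.default_rng(0)
    mask = rng.random(piece.shape) < mask_rate
    corrupted = piece.copy()
    corrupted[mask] = -1  # -1 marks an erased note
    return corrupted, mask, piece

# A toy 4-voice fragment (rows = timesteps, cols = voices).
piece = np.array([[60, 64, 67, 72],
                  [62, 65, 69, 74],
                  [64, 67, 71, 76]])
corrupted, mask, targets = mask_piece(piece)

# During training, the model is scored only on the erased positions:
# loss = cross_entropy(model(corrupted)[mask], targets[mask])
```

At harmonization time the same trick applies in reverse: your melody is the un-erased part, and the model fills in the other voices.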
One of the trending research areas in artificial intelligence is automatically building neural networks, known as Neural Architecture Search (NAS). It is a very promising field, but it demands a lot of resources: it took Google 48,000 GPU hours to build a single convolutional neural network.
The big news is that MIT researchers presented an algorithm that can build a CNN 200 times faster than the previous state-of-the-art method. This is a huge step for the field, since it makes NAS accessible to far more people.
Two main innovations led to this huge speedup: path-level binarization and pruning, and hardware-aware optimization. Find out more about them in the article!
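To give a feel for what NAS actually searches over, here is a random-search baseline on a made-up search space. This is not the MIT method (which uses gradient-based search with path-level binarization); it is only a sketch of the general problem, with a fake analytic score standing in for the expensive train-and-validate step that makes real NAS so costly:

```python
import random

# Hypothetical search space; nothing here comes from the MIT paper.
SEARCH_SPACE = {
    "depth":  [4, 8, 12],
    "width":  [16, 32, 64],
    "kernel": [3, 5, 7],
}

def evaluate(config):
    """Stand-in for training + validating a candidate network
    (the expensive step). Here: a fake analytic score."""
    return (config["depth"] * 0.1
            + config["width"] * 0.01
            - abs(config["kernel"] - 5) * 0.2)

def random_search(n_trials=10, seed=0):
    """Sample architectures at random and keep the best one."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}
        score = evaluate(config)
        if score > best_score:
            best, best_score = config, score
    return best, best_score
```

Every call to `evaluate` in a real setting means training a network, which is why cutting the number (or cost) of evaluations, as the MIT work does, matters so much.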
One of the biggest stories of the month is the announcement of the prestigious Turing Award winners: Yoshua Bengio, Geoffrey Hinton and Yann LeCun. Often called the “Nobel Prize of Computing”, the award comes with a $1 million prize.
The ‘pioneers of AI’ believed in an approach to artificial intelligence based on neural networks, and they’ve been proved right! In 2012, results showed that neural nets are remarkably good at image recognition, and since then they’ve been used everywhere.
In response to their customers’ needs, AWS has created new containers tailored to Deep Learning projects.
You can choose the Docker image that suits you based on the framework (TensorFlow or MXNet), the environment (CPU or GPU) and a few other parameters.
Plus, these containers are easy to use and you can find a detailed explanation on how to use them in the article.
It’s always good to see AI’s potential for humanity, which is why I picked a newly published paper on improving radiologists’ performance in breast cancer screening.
Researchers at New York University trained a deep convolutional neural network on over a million images of mammogram exams. This model achieves an AUC of 0.895 in predicting the presence of a malignant breast tumor.
A reader study was conducted with 14 radiologists to validate the model, and the results show that “a hybrid model, averaging probability of malignancy predicted by a radiologist with a prediction of our neural network, is more accurate than either of the two separately”.
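The hybrid model’s averaging rule is simple enough to sketch directly. Only the averaging idea comes from the paper; the probabilities and labels below are invented for illustration, and the AUC is computed with the standard rank-based (Mann–Whitney) formula:

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via the rank-based formula:
    the fraction of (positive, negative) pairs ranked correctly."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    correct = ((pos[:, None] > neg[None, :]).sum()
               + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return correct / (len(pos) * len(neg))

# Invented malignancy probabilities for six exams (1 = malignant).
y = np.array([1, 0, 1, 0, 1, 0])
radiologist = np.array([0.80, 0.40, 0.30, 0.20, 0.70, 0.60])
network     = np.array([0.60, 0.50, 0.70, 0.10, 0.40, 0.30])

# The hybrid model: average the two predicted probabilities.
hybrid = (radiologist + network) / 2
```

In this toy example the two sources make different mistakes, so their average separates the classes better than either one alone, which is the intuition behind the paper's result.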
If you want to know more about the code and the best-performing models, you can find them on Github.
The robustness of a system has always been a concern for software engineers, and practices were established to ensure a bug-free program before deployment. Yet machine learning systems call for new techniques to avoid failures.
- Testing consistency with specifications: designing and using an adversary to detect even small failures and uncover strange behaviors.
- Training specification-consistent models: training models that are robust to the first technique (adversarial testing), to avoid overestimating their consistency.
- Formal verification: limiting the output space of a model by computing and refining geometric bounds.
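Here is a minimal example of the first idea, adversarial testing, using the Fast Gradient Sign Method on a toy logistic-regression “model”. The weights and inputs are made up, and real adversarial testing targets deep networks, but the mechanics are the same: perturb the input along the loss gradient and check whether the prediction survives:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    """Fast Gradient Sign Method: move x by eps (in the infinity
    norm) in the direction that increases the loss for label y."""
    p = sigmoid(w @ x + b)    # model's probability of class 1
    grad_x = (p - y) * w      # gradient of cross-entropy w.r.t. x
    return x + eps * np.sign(grad_x)

# A toy linear classifier; all numbers are invented for illustration.
w = np.array([1.0, -2.0, 0.5])
b = 0.1
x = np.array([0.2, -0.4, 0.3])   # clean input, true label y = 1
y = 1

p_clean = sigmoid(w @ x + b)       # confidently class 1
x_adv = fgsm(x, w, b, y, eps=0.5)
p_adv = sigmoid(w @ x_adv + b)     # the same point, adversarially nudged
# The adversarial test flags the model when p_adv falls below 0.5
# even though the perturbation stays inside the allowed eps-ball.
```

The specification here is "small input changes must not flip the prediction"; the adversary searches for the change most likely to violate it.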
This is one of my favorites!
NVIDIA Research presented a new app this month that transforms rough sketches into masterpieces. Called GauGAN, it is based on Generative Adversarial Networks (obviously) and creates lifelike landscapes with ease.
Draw in a pond, and nearby elements like trees and rocks will appear as reflections in the water. Swap a segment label from “grass” to “snow” and the entire image changes to a winter scene, with a formerly leafy tree turning barren.
If you want to see how it works check out this video.
NeurIPS (prev. NIPS) Papers Selection
My favorite research articles from NeurIPS (previously NIPS) 2018.
Automate AWS Tasks Thanks to Airflow Hooks
This article is a step-by-step tutorial that shows you how to upload a file to an S3 bucket thanks to an Airflow ETL (Extract Transform Load) pipeline.
Enhance Your Loopback Models with Custom mixins
This article sheds light on the very useful mixins option of your model.json declaration file.