Welcome to the June edition of our best and favorite articles in AI that were published this month. We are a Paris-based company that does Agile data development. This month, we spotted articles about Football, Climate Change, Reinforcement Learning, and other hot topics. We advise you to have a Python environment ready if you want to follow some tutorials :). Let’s kick off with the comic of the month:
Football is in the air these days with the FIFA Women’s World Cup that took place in France this month. Google also seems to have the world’s most popular sport on its mind. The tech giant released Google Research Football, a reinforcement learning environment based on popular video games. Its purpose is to train agents to master football. The environment enables users to test different reinforcement learning algorithms, study the effect of randomness, and investigate curriculum learning research ideas. This open-source platform strikes me as a unique way to pit RL agents against an immense community of opponents, both human and artificial.
If you’re wondering when the heatwave Europe is going through will end, you’re not alone. Sadly, this weather seems to be becoming the new norm due to global warming.
However, some of the field’s best-known thinkers published a research paper on how AI could be a powerful tool to fight this phenomenon. The MIT Technology Review provides an easily accessible synthesis of this research paper. They dive into 10 of the most high-impact recommendations, which tackle issues such as agriculture, transportation, and energy consumption. One proposal I found particularly innovative is the extraction of building footprints thanks to Computer Vision, which will help to estimate a city’s consumption.
While AI might be a solution to fight global warming, it might also be part of what is causing the problem. New research sheds light on the considerable environmental impact of deep learning.
Training the most recent NLP deep learning algorithms such as Transformer or BERT can emit as much carbon as five cars in their lifetimes. The necessary computational resources are responsible for these emissions. Thankfully, the human brain does not need so much energy to perform the same tasks. It may be time to hold AI efficiency to the same standard as AI accuracy.
Have you ever looked at a food dish and wondered “How did they manage to make this?” If you have, Facebook AI may have the answer. Researchers developed a neural network that takes a food image as input and outputs a recipe containing ingredients and cooking instructions.
Their method starts by pre-training an image encoder and an ingredients decoder which predicts a set of ingredients. They then train the ingredient encoder and the instruction decoder, which generate the instructions. These instructions leverage a state-of-the-art sequence generation model.
This accomplishment could have wider implications by enabling current computer vision systems to go beyond the merely visible.
The foundation of neural networks is their ability to learn complex non-linear relationships. These relationships are then represented by their weight parameters. However, the success of a neural network for a given task also strongly depends on its architecture. In a recent publication, two Google Brain researchers question the importance of a neural network’s weight parameters compared to its architecture.
They manage to find minimal neural network architectures that, using random weights, achieve accuracy well above chance on MNIST. These architectures can even perform reinforcement learning tasks without any weight training.
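To make the evaluation idea concrete, here is a minimal numpy sketch on a toy sign-prediction task. The task, topology, and weight range are my own illustrative assumptions, not the paper’s setup: a fixed architecture is scored across many random values of a single shared weight, and it solves the task no matter which weight is drawn.

```python
import numpy as np

def forward(x, w):
    # Fixed topology: input -> tanh hidden unit -> linear output,
    # with every connection reusing the same shared weight w.
    h = np.tanh(w * x)
    return w * h  # the prediction is the sign of this output

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=1000)
labels = (x > 0).astype(int)  # toy task: predict the sign of the input

# Evaluate the *architecture* over many random shared weights.
accs = []
for _ in range(20):
    w = rng.uniform(-2, 2)
    if abs(w) < 1e-3:  # skip degenerate near-zero weights
        continue
    preds = (forward(x, w) > 0).astype(int)
    accs.append((preds == labels).mean())

print(np.mean(accs))  # ≈ 1.0: this topology solves the task for any nonzero w
```

The trick is structural: tanh is odd, so `w * tanh(w * x)` always has the sign of `x` regardless of the shared weight’s value — the architecture, not the weights, encodes the solution.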
Reinforcement learning was a hot topic this month. Another example of new research in this field is Google AI’s off-policy classification method.
This novel method evaluates the performance of agents from past data by treating evaluation as a classification problem: actions are labeled as either potentially leading to success or guaranteed to result in failure. This type of policy evaluation could prove particularly interesting for robotics tasks such as vision-based robotic grasping, where it is common to use simulated data and transfer learning.
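As a rough illustration of the idea — not Google AI’s implementation; the toy environment, features, and plain logistic-regression classifier below are my own stand-ins — one can label logged experience as success or failure, fit a classifier P(success | state, action), and score a candidate policy by its average predicted success on the logged states:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical logged experience: in state s, action +1/-1 succeeds
# when it matches the sign of s.
s = rng.uniform(-1, 1, 2000)
a = rng.choice([-1.0, 1.0], size=2000)
success = (a * s > 0).astype(float)

def features(s, a):
    return np.stack([s, a, s * a, np.ones_like(s)], axis=1)

# Fit a logistic-regression classifier P(success | s, a) by gradient descent.
X, y = features(s, a), success
w = np.zeros(X.shape[1])
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / len(y)

def opc_score(policy):
    """Score a policy from logged states only: average predicted success."""
    acts = policy(s)
    p = 1 / (1 + np.exp(-features(s, acts) @ w))
    return p.mean()

good = opc_score(lambda s: np.sign(s))   # picks the matching action
bad = opc_score(lambda s: -np.sign(s))   # picks the opposite action
print(good, bad)  # the classifier ranks the good policy higher
```

The appeal for robotics is that no new rollouts are needed: both policies are ranked purely from the old logged data.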
Speaking of robotics, have you ever felt frustrated with the lack of tools to enable AI robotics R&D? Well, your struggles have been heard. Facebook AI collaborated with researchers to build PyRobot.
PyRobot is an open-source framework that makes it possible to get up and running with a robot very fast. It also spares you from worrying about hardware or other details. The goal behind this release is to accelerate AI robotics research by providing a scalable tool for research and education. PyRobot both encourages collaboration amongst researchers and facilitates the integration of ML and AI in robotics.
Communication, tactics, and teamwork are fundamentally human capabilities. They manifest themselves particularly in team sports and video games such as Capture the Flag. Such tasks have always represented a critical challenge for AI research.
DeepMind recently published a paper in which they present new developments in reinforcement learning. Their research enabled agents to cooperate with both artificial and human teammates. This cooperation led to human-level performance in a Capture the Flag video game. These new methods are generating excitement in the field since they seem to produce promising results in other contexts and video games.
Medical imaging is one of the industries in which AI is particularly active — and produces life-saving results. One major pain point of medical imaging is data labeling, which requires expert doctors. This struggle is especially acute when dealing with brain scans.
A group of MIT researchers addressed this issue in their latest paper. They describe a system that uses CNNs to generate a large dataset of distinct training examples from a single labeled scan plus a set of unlabeled scans. Such data augmentation techniques could be crucial for addressing some very uncommon brain conditions in child patients.
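Here is a heavily simplified numpy sketch of the augmentation idea: apply the same random spatial transform to the one labeled scan and to its segmentation mask, plus some appearance jitter, to synthesize new labeled pairs. (The paper learns its spatial and appearance transforms from the unlabeled scans; the random shifts, flips, and noise below are crude hypothetical stand-ins.)

```python
import numpy as np

rng = np.random.default_rng(3)

# One labeled "scan" and its segmentation mask (toy 2-D stand-ins).
scan = rng.normal(size=(32, 32))
mask = (scan > 0.5).astype(int)

def synthesize(scan, mask, n=10):
    """Create n new labeled pairs by applying the SAME random spatial
    transform to the scan and to its mask, then jittering appearance."""
    pairs = []
    for _ in range(n):
        dy, dx = rng.integers(-4, 5, size=2)
        new_scan = np.roll(scan, (dy, dx), axis=(0, 1))
        new_mask = np.roll(mask, (dy, dx), axis=(0, 1))
        if rng.random() < 0.5:  # random left-right flip
            new_scan, new_mask = new_scan[:, ::-1], new_mask[:, ::-1]
        # Appearance jitter touches only the image, never the labels.
        new_scan = new_scan + 0.05 * rng.normal(size=scan.shape)
        pairs.append((new_scan, new_mask))
    return pairs

dataset = synthesize(scan, mask)
print(len(dataset))  # 10 synthetic labeled examples from one labeled scan
```

The key invariant is that spatial transforms move the image and the mask together, so every synthetic pair stays correctly labeled.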
While training deep learning models or reading research papers, you’ve probably come across the use of batch normalization. You may also have struggled to understand what it really is.
Myrtle.ai just published a 7-step tutorial explaining how to train a ResNet. Amongst these steps is a thorough explanation of batch normalization. This tutorial on batch norm is the best I’ve found so far. It goes into batch normalization’s mathematical definition, how it helps optimization, and its drawbacks.
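As a quick companion to the tutorial, here is batch normalization’s forward pass in plain numpy — training-mode statistics only; the running averages used at inference, and the backward pass, are omitted:

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Batch normalization forward pass (training mode).

    x: activations of shape (batch, features)
    gamma, beta: learnable per-feature scale and shift, shape (features,)
    """
    mean = x.mean(axis=0)                    # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mean) / np.sqrt(var + eps)  # normalize to zero mean, unit variance
    return gamma * x_hat + beta              # learnable scale and shift

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(64, 8))
out = batch_norm(x, gamma=np.ones(8), beta=np.zeros(8))
print(out.mean(axis=0))  # ≈ 0 for every feature
print(out.std(axis=0))   # ≈ 1 for every feature
```

With `gamma=1` and `beta=0` the layer is a pure whitening step; in practice gamma and beta are trained, which lets the network undo the normalization where that helps optimization.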
3 Steps to Improve the Data Quality of a Data lake
From Customising Logs in the Code to Monitoring in Kibana
Bokeh vs Dash — Which is the Best Dashboard Framework for Python?
This article compares Bokeh and Dash (by Plotly), two Python alternatives to R’s Shiny framework, using the same example.
Few-Shot Image Classification with Meta-Learning
Here is how you can teach your model to learn quickly from a few examples.