Welcome to the October edition of our best and favorite articles in AI that were published this month. We are a Paris-based company that does Agile data development. This month, we spotted articles about AI that can solve physics problems, paint portraits, judge criminals, play video games and even recognize smells! Let’s start, as usual, with the comic of the month:
DeepMind’s bot AlphaStar managed to enter the Grandmaster league in StarCraft II, the highest of the game’s seven ranked leagues. The developers had three different versions of the agent play against real players on Battle.net. The most advanced version ranked, on average, in the top 0.15% of all players.
By the way, if you want to develop your own StarCraft II bot, you can, just like DeepMind, use Blizzard’s official API client, which provides full external control of the game. If you want to take a look at the official research paper, together with DeepMind’s pseudocode, detailed architecture and datasets of game replays to train your bots, they are all available here.
OpenAI trained a robot hand that is capable of manipulating a Rubik’s cube. To be clear, this achievement is not really about solving the cube, but about developing an extremely dexterous agent that can interact with its environment with a high degree of precision.
The whole training process took place in a simulated environment. The new method, called Automatic Domain Randomization (ADR), generated harder and harder environments as the agent trained. To raise the bar further, the developers introduced various perturbations during the robustness tests with a real robot hand. My favorite one is a cute plush giraffe curiously poking the cube with the tip of its nose! Others include throwing a blanket over the robot hand or making it solve the cube while wearing a rubber glove.
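The core loop of ADR is easy to convey in a few lines. Here is a minimal sketch of the idea (not OpenAI’s implementation — the class name, thresholds and step sizes below are all illustrative): each environment parameter is sampled from a range that starts at its nominal value and automatically widens while the agent keeps succeeding.

```python
import random

# Minimal sketch of the idea behind Automatic Domain Randomization (ADR).
# An environment parameter (e.g. cube size or friction) is sampled from a
# range [low, high] that widens when the agent's recent success rate is
# high, and narrows back toward the nominal value when it is low.

class ADRParameter:
    def __init__(self, nominal, step=0.02, expand_at=0.8, shrink_at=0.3):
        self.low = self.high = nominal  # start with no randomization at all
        self.nominal = nominal
        self.step = step
        self.expand_at = expand_at      # success rate needed to widen the range
        self.shrink_at = shrink_at      # success rate below which it narrows

    def sample(self):
        # Draw the parameter value used for the next training episode.
        return random.uniform(self.low, self.high)

    def update(self, success_rate):
        if success_rate >= self.expand_at:
            self.low -= self.step
            self.high += self.step
        elif success_rate <= self.shrink_at:
            self.low = min(self.nominal, self.low + self.step)
            self.high = max(self.nominal, self.high - self.step)

friction = ADRParameter(nominal=1.0)
for _ in range(10):                      # the agent keeps succeeding...
    friction.update(success_rate=0.9)    # ...so the range keeps widening
print(friction.low, friction.high)       # roughly 0.8 and 1.2
```

In the real system this happens per-parameter across dozens of physics and vision parameters at once, which is what makes the final policy robust to giraffes and rubber gloves alike.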
Neural nets are ubiquitous. But what really happens inside them? That remains a mystery, even to their developers. This new project takes you on a journey through a mesmerizing world of weirdly satisfying loss landscapes. Some of the visualizations produced by the Loss Landscape project are:
LR Coaster that lets you ride along the minimizer during the learning rate stress test,
Sentinel that explores the optimization process of a convolutional net,
WALTZ-RES that shows the difference between two ResNet networks, with and without skip connections,
and many more!
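Under all the eye candy, these pictures come down to evaluating the loss along chosen directions in parameter space. A classic one-dimensional version (a standard trick, not the Loss Landscape project’s actual code — the toy model and data here are made up) is to interpolate linearly between two parameter vectors and plot the loss along the line:

```python
# Minimal sketch of a 1-D loss-landscape slice: evaluate a toy
# mean-squared-error loss along the straight line between two parameter
# vectors, loss((1 - t) * theta_a + t * theta_b) for t in [0, 1].

def mse_loss(theta, data):
    # Toy model: y = w * x + b, parameters theta = (w, b).
    w, b = theta
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]  # generated by y = 2x + 1
theta_a = (0.0, 0.0)                          # an arbitrary starting point
theta_b = (2.0, 1.0)                          # the exact minimizer here

for i in range(6):
    t = i / 5
    theta = tuple((1 - t) * a + t * b for a, b in zip(theta_a, theta_b))
    print(f"t={t:.1f}  loss={mse_loss(theta, data):.3f}")
```

Replace the toy loss with a real network’s loss and the line with a 2-D plane, and you get exactly the kind of surface these visualizations render.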
Streamlit is a new open-source Python framework built for machine learning engineers. As the developers promise on their website, it is “The fastest way to build custom ML tools”. Using Streamlit, you can build sleek web apps to serve your models in just a few lines of Python code!
Here are the core principles of the framework:
Scripts are awesome: every Streamlit app is a stateless Python script.
No callbacks: every widget is a variable!
Information reuse: data and computations are cached in Streamlit’s data store that lets it safely persist information.
Try it now and see for yourself!
COMPAS is an algorithm used in the US courts. It looks at the defendant’s criminal history and outputs a “risk score”. This score reflects how likely the person under trial is to become a recidivist.
It turned out that the algorithm is racially biased, even though the score doesn’t take race into account. This piece lets you tweak the algorithm’s parameters and make it fairer!
Read Can you make AI fairer than a judge? Play our courtroom algorithm game — By MIT Technology Review
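The fairness question at the heart of the COMPAS debate is easy to state in code. Here is a toy illustration with made-up records (not the real COMPAS data or algorithm): among defendants who did *not* reoffend, how often does each group get flagged “high risk”?

```python
# Toy fairness check: compare false positive rates between two groups.
# A "false positive" is a defendant flagged as high risk (score >= threshold)
# who did not in fact reoffend. Records are (risk_score 1-10, reoffended?).

def false_positive_rate(records, threshold):
    negatives = [(s, r) for s, r in records if not r]   # did not reoffend
    flagged = [s for s, r in negatives if s >= threshold]
    return len(flagged) / len(negatives)

group_a = [(2, False), (3, False), (8, False), (9, True), (7, True)]
group_b = [(6, False), (7, False), (8, False), (9, True), (3, False)]

for threshold in (5, 7, 9):
    fpr_a = false_positive_rate(group_a, threshold)
    fpr_b = false_positive_rate(group_b, threshold)
    print(f"threshold={threshold}: FPR A={fpr_a:.2f}, FPR B={fpr_b:.2f}")
```

Even with race nowhere in the inputs, the two groups can end up with very different false positive rates — and, as the MIT piece shows interactively, moving the threshold to equalize one fairness metric usually breaks another.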
The three-body problem is a classic physics problem of calculating the trajectories of three bodies given their initial positions and velocities. The first specific version of this problem, formulated in the 17th century, involved calculating the motion of the Earth, the Sun and the Moon.
It turned out to be an extremely hard problem to solve, since the resulting dynamical system is chaotic except for a small number of special cases. So far, no closed-form solution has been found. Therefore, solutions are generally calculated numerically, requiring enormous computational resources.
Researchers from the University of Edinburgh trained a neural network on the solutions produced by the state-of-the-art solver named Brutus. As a result, this network is able to accurately predict the motion of three bodies up to 100 million times faster than the solver.
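What the numerical solvers grind through is just Newton’s law of gravitation, integrated step by tiny step. Here is a minimal sketch of that (nothing like Brutus, which uses arbitrary-precision arithmetic; units, masses and the initial triangle below are all arbitrary choices):

```python
import math

# Integrate Newtonian gravity for three planar bodies with a leapfrog
# (velocity Verlet) scheme. G = 1 and all masses are 1 for simplicity.

def accelerations(pos):
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(3):
        for j in range(3):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += dx / r3          # G = m = 1
            acc[i][1] += dy / r3
    return acc

def step(pos, vel, dt):
    acc = accelerations(pos)
    for i in range(3):                    # half-kick, then drift
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]
        pos[i][0] += dt * vel[i][0]
        pos[i][1] += dt * vel[i][1]
    acc = accelerations(pos)
    for i in range(3):                    # second half-kick
        vel[i][0] += 0.5 * dt * acc[i][0]
        vel[i][1] += 0.5 * dt * acc[i][1]
    return pos, vel

# Three bodies at rest at the corners of an equilateral triangle.
pos = [[0.0, 0.0], [1.0, 0.0], [0.5, math.sqrt(3) / 2]]
vel = [[0.0, 0.0] for _ in range(3)]
for _ in range(100):
    pos, vel = step(pos, vel, dt=1e-3)

# With symmetric pairwise forces, total momentum stays (numerically) zero.
px = sum(v[0] for v in vel)
py = sum(v[1] for v in vel)
print(px, py)
```

Accurate long-horizon trajectories require tiny steps like this by the billions — which is exactly the cost the trained network amortizes away by predicting the outcome directly.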
We’re no longer surprised by AI models that can see and hear things. But what about other senses? Google came up with a model that is able to figure out how different things smell by predicting smell descriptors from molecules. It can distinguish smells like vanilla, chocolate or citrus, but also more complicated ones such as spicy, beefy or creamy.
Further research in this area could make it possible to develop digital scents and to create molecules with completely new smells. It would also be incredibly useful to help those who can’t smell appreciate scents like everyone else.
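The underlying approach treats a molecule as a graph — atoms as nodes, bonds as edges — and lets a graph neural network pass information along the bonds. Here is a heavily simplified toy sketch of that message-passing idea (not Google’s model; the features and the ethanol-like graph are made up for illustration):

```python
# Toy message passing on a molecular graph: each atom starts with a feature
# vector and repeatedly averages in its bonded neighbours' features; a mean
# over all atoms then gives one vector per molecule to feed a classifier
# that would predict smell descriptors.

def message_passing_round(features, bonds):
    updated = {}
    for atom, vec in features.items():
        neighbours = [features[b] for a, b in bonds if a == atom] + \
                     [features[a] for a, b in bonds if b == atom]
        pooled = vec[:]
        for n in neighbours:
            pooled = [p + x for p, x in zip(pooled, n)]
        updated[atom] = [p / (len(neighbours) + 1) for p in pooled]
    return updated

# Ethanol-ish toy graph C-C-O, one feature per atom: [is_carbon, is_oxygen].
features = {"C1": [1.0, 0.0], "C2": [1.0, 0.0], "O": [0.0, 1.0]}
bonds = [("C1", "C2"), ("C2", "O")]

for _ in range(2):
    features = message_passing_round(features, bonds)

# Molecule-level readout: mean over atoms.
readout = [sum(f[i] for f in features.values()) / 3 for i in range(2)]
print(readout)
```

After a couple of rounds, each atom’s vector reflects not just what it is but what it is bonded to — which is how the real model can tell a “creamy” molecule from a “beefy” one.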
You may already be familiar with generative adversarial networks that create photorealistic high-resolution images. Human drawings, on the other hand, are rarely photorealistic, and yet we’re able to tell what’s in the picture, which means that they somehow capture the “essence” of objects. This “essence” is a high-level representation that incorporates human knowledge and structure.
SPIRAL++ is a GAN framework that learns how to paint like a human artist. With a limited number of brush strokes and without supervision, the algorithm learns to draw objects that are clearly recognizable by humans. This article lets you click on any image painted by the generator network and see the whole process of its creation, stroke by stroke.
Read Unsupervised Doodling and Painting with Improved SPIRAL — By John F. J. Mellor et al.
Have you ever felt lost looking at some formula containing multidimensional tensor operations and trying to figure out what it does? You’re not alone. Tensor operations can be difficult to wrap your head around.
But don’t be discouraged! Here’s a beautiful technique — called factor graphs — that produces powerful visualizations and helps us understand what’s happening when we work with multi-dimensional arrays of data.
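The diagrams translate directly into code: each tensor is a node, each shared index is an edge, and contracting an edge means summing over that index. NumPy’s `einsum` is the textual form of the same picture, so here is a small example of reading a diagram as a contraction (the arrays are arbitrary):

```python
import numpy as np

# Each repeated subscript letter in einsum is an "edge" that gets summed
# away; the remaining letters are the free edges of the diagram.

A = np.arange(6).reshape(2, 3)        # indices: i, j
B = np.arange(12).reshape(3, 4)       # indices: j, k

# Diagram: A --j-- B, with free edges i and k. Contracting j is matmul.
C = np.einsum("ij,jk->ik", A, B)
assert np.array_equal(C, A @ B)

# A three-tensor chain contracts the same way, one shared edge at a time.
D = np.arange(8).reshape(4, 2)        # indices: k, l
E = np.einsum("ij,jk,kl->il", A, B, D)
assert np.array_equal(E, A @ B @ D)
```

Once you can draw the graph, even a five-index monster formula becomes a picture of boxes and wires — and a one-line `einsum` call.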
The Microsoft Research Artist in Residence program developed Ada, the first AI-powered pavilion that can sense our emotions and change its colors and lighting in response.
Named after Ada Lovelace, Ada is a two-story photo-luminescent structure created using cutting-edge fabrication techniques such as 3D digital knitting. It is able to pick up on our voice tones, choice of words and facial expressions and use that information to infer our mood in real time. Whether or not it actually understands our feelings, it sure looks fascinating!
3 Steps to Improve the Data Quality of a Data lake
From Customising Logs in the Code to Monitoring in Kibana
5 Mistakes I Made When Doing Custom Data Visualization With D3.js
Basics in R Programming
About to start a project in R? Before you watch any tutorial, read these basic standards.