May was full of interesting research and advances connecting the brain and hardware – seeing terms like “neuromorphic” and “neuromodulation” in scientific papers (and not in a sci-fi novel) is exciting indeed.
The recurring theme in May was the brain – the way it adapts to save energy and the flexibility it achieves in handling different tasks.
Neuromorphic hardware brings energy savings
Researchers from Graz University of Technology, together with Intel Labs, have delivered an experiment showing that neuromorphic hardware can reduce the energy consumption of neural networks by up to sixteen times.
The research was conducted as part of the Human Brain Project, which unites more than 500 scientists and engineers across Europe to study the human brain. The neuromorphic chips delivered by Intel aim not to provide more computing power, as traditional chips do, but to give neural networks a more brain-like structure, including the way the brain organizes itself and its architecture. The energy efficiency of the brain is currently beyond the reach of hardware designers – the whole brain consumes no more power than an energy-saving lightbulb, while training even a simple neural network can require hours of processing.
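The article doesn’t detail the chip’s design, but neuromorphic hardware typically runs spiking neural networks, which save energy by computing only when events (spikes) occur instead of on every clock cycle. A minimal leaky integrate-and-fire neuron sketch illustrates the idea – this is purely illustrative, not how Intel’s silicon is programmed:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch.
# Illustrative only: real neuromorphic chips implement this
# dynamic in silicon, not in Python.

def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Return a spike train (0/1) for a sequence of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # fire a spike ...
            potential = 0.0    # ... and reset the membrane potential
        else:
            spikes.append(0)   # silent step: no downstream work triggered
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 1.2]))
```

Because downstream neurons only do work when a spike arrives, most time steps cost nothing – which is where the energy savings come from.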
The approach was tested on natural language processing tasks and proved equally effective while significantly improving the energy efficiency of the process.
More details about the research can be found on the Graz University of Technology website.
And the brain… again
Building infrastructure that resembles a human brain as closely as possible is a goal for scientists worldwide. A group of researchers from the Korea Advanced Institute of Science and Technology (KAIST) approached the challenge of building brain-based hardware for neural networks with the aim of lowering energy consumption.
The group implemented a “stashing system” – an approach based on the neuromodulation observed in the natural brain. It imitates the natural, constant changes in neural topology that support the operations performed by the neural network, reducing energy consumption by up to 40%.
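The article doesn’t spell out the mechanism, but the general idea – temporarily deactivating (“stashing”) part of the network’s connections so less computation is spent per pass, then restoring them when needed – can be sketched as follows. This is an illustrative toy, not the actual KAIST implementation:

```python
import random

# Illustrative sketch only - not the actual KAIST "stashing system".
# The idea: temporarily "stash" (deactivate) a fraction of connections,
# imitating transient topology changes in the brain, then restore them
# when the task requires the full network again.

def stash_weights(weights, fraction, seed=0):
    """Zero out a random fraction of weights; return the weights and a stash."""
    rng = random.Random(seed)
    stash = {}
    for i in rng.sample(range(len(weights)), int(len(weights) * fraction)):
        stash[i] = weights[i]   # remember the original value
        weights[i] = 0.0        # connection is temporarily inactive
    return weights, stash

def restore_weights(weights, stash):
    """Bring stashed connections back."""
    for i, value in stash.items():
        weights[i] = value
    return weights

w = [0.2] * 10
w, stash = stash_weights(w, 0.4)   # 4 of 10 connections go quiet
w = restore_weights(w, stash)      # full topology is back
```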
More details about the novel approach can be found on the KAIST website.
AI accurately predicts a patient’s race from MRI scans
A study delivered by the Massachusetts Institute of Technology shows that an AI algorithm is capable of accurately predicting a patient’s race from images that show no obvious racial features, such as an X-ray of an arm or a chest CT scan. This ability is far beyond human capabilities, and there are little to no hints as to how the algorithm detects these features.
The researchers attempted to mislead the model by removing or obscuring certain features of the images – for example, applying a filter that blurs the coloring of the bones to prevent the model from gaining information about bone mineral density. Yet none of the attempts had a significant impact on the predictions.
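The general testing pattern here – degrade a candidate feature and check whether the model’s predictions change – can be sketched in a few lines. This is a hedged illustration of the ablation idea, not the MIT study’s actual pipeline; `blur_image` is a hypothetical helper:

```python
import numpy as np

# Illustrative feature-ablation sketch (not the MIT study's pipeline):
# degrade a candidate feature in the input and measure how often the
# model's prediction changes. If it rarely changes, the model is
# likely relying on some other signal.

def blur_image(image, kernel_size=5):
    """Crude box blur: average each pixel over a sliding window."""
    pad = kernel_size // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean()
    return out

def prediction_shift(model, images, ablation):
    """Fraction of predictions that change after ablating a feature."""
    original = [model(img) for img in images]
    ablated = [model(ablation(img)) for img in images]
    return sum(o != a for o, a in zip(original, ablated)) / len(images)
```

A shift close to zero (as the researchers observed for every feature they ablated) means the degraded feature was not what the model depended on.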
This research is highly significant for understanding the biases hidden in algorithms and the ways to tackle them. If a machine can detect race, it may be using this factor in its decision process, and achieving maximum transparency about this aspect is the only way forward.
Details about the research can be found on the MIT website.
AI solves crossword puzzles
The crossword puzzle is a popular way to kill some time by delivering a challenge to overcome, yet without great commitment. Or is it?
Depending on the level of difficulty, crossword puzzles can be extremely demanding for humans and computers alike. The clues are usually obfuscated behind a pun or non-obvious wording, and discovering an answer also depends on the letters revealed by solving the other words.
A Berkeley-devised model uses a state-of-the-art open-domain question-answering approach and combines it with a system that checks whether a word actually fits the required space and the already uncovered letters.
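The constraint-checking half of that combination is the simpler part, and can be sketched as follows – a minimal illustration of filtering a QA model’s candidate answers against the grid, with names and data of my own invention rather than the Berkeley system’s code:

```python
def fits_grid(candidate, length, known_letters):
    """Check a candidate answer against the grid constraints.

    known_letters maps 0-based positions to already-uncovered letters,
    e.g. {0: "c", 1: "r"} for a pattern like "cr___".
    """
    if len(candidate) != length:
        return False
    return all(candidate[i] == letter for i, letter in known_letters.items())

def filter_candidates(candidates, length, known_letters):
    """Keep only the QA model's candidates that fit the grid."""
    return [c for c in candidates if fits_grid(c, length, known_letters)]

print(filter_candidates(["codes", "clues", "cross", "puzzle"],
                        5, {0: "c", 1: "r"}))  # -> ['cross']
```

In the full system the surviving candidates would then be re-ranked, and each letter they place becomes a new constraint for the crossing words.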
A demo of the system can be tried at berkeleycrosswordsolver.com.
Gato AI brings AI closer to the human level
One of the key concepts of modern AI is “narrow AI” – a term referring to the fact that AI is usually trained to perform a single task: recommending a movie on Netflix, optimizing ad spend on Google, or finding cancerous cells in a histopathology scan when used with Virtum.
A neural network that excels at one particular task is useless at others. There is no point in testing a movie recommendation engine by playing chess, just as there is no point in testing an oven’s ability to be a dishwasher – these are different tools, designed to deliver different outcomes. General intelligence – where the same network can learn to play chess, perform surgical operations, drive a car, and recognize particular songs on the radio – is human-specific, with some traits of it shown by animals.
Gato, recently delivered by DeepMind, is a generalist agent – a single neural network capable of dealing with multiple different tasks. In this particular case, that means playing different Atari games, manipulating objects, and generating text.
The key to achieving this was to train the model on multiple datasets of the kind usually used to train narrow agents. The approach was derived from the one used to train complex language processing models.
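The mixing idea – one model seeing interleaved examples from several task-specific datasets during training – can be sketched like this. The task names and sampling scheme are illustrative assumptions, not Gato’s actual recipe:

```python
import random

# Hedged sketch of multi-dataset training: interleave examples from
# several task-specific datasets so a single model sees them all.
# Task names and proportions are illustrative, not Gato's recipe.

def mixed_batches(datasets, n_batches, seed=0):
    """Yield (task_name, example) pairs sampled across the datasets."""
    rng = random.Random(seed)
    names = list(datasets)
    for _ in range(n_batches):
        task = rng.choice(names)              # pick a task ...
        example = rng.choice(datasets[task])  # ... and an example from it
        yield task, example

datasets = {
    "atari": ["frame_0", "frame_1"],
    "robotics": ["grasp_a", "grasp_b"],
    "text": ["sentence_a", "sentence_b"],
}
for task, example in mixed_batches(datasets, 4):
    print(task, example)
```

The point of the interleaving is that the model never gets to overfit to a single task’s distribution – each gradient step may come from a different domain.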
This brings the model closer to the concept of general intelligence. More details regarding the approach can be found in DeepMind’s research.