Tooploox CS and AI news #12

In the dynamic and ever-changing world of AI development, it is not surprising that natural processes inspire the next big research topics. But this time, we also gain a deeper understanding of our own brains, derived from the field of Artificial Intelligence.

October has delivered a variety of interesting, inspiring, and thought-provoking research. The second story is even more surprising, transporting us into the realm of ancient oracles – but this time, instead of inhaling hallucinogenic fumes in a temple of stone, the artificial intelligence is inhaling Reddit.

Introducing Deep Evolutionary Reinforcement Learning

Mimicking the human brain was the key idea behind research on artificial neural networks. While the concept itself dates back to 1943, shortly before the end of World War II, the recent explosion in its use and development was powered by newly available computing power. But all of this was about building an artificial brain to power a particular system or agent.

In a recent paper published in Nature, researchers from Stanford University deliver a new approach to reinforcement learning, in which an agent (the neural network controlling an entity in a simulated environment) can influence not only its own behavior but also its physical form.

What Is Reinforcement Learning?

To understand the research, a brief primer on reinforcement learning is needed. Reinforcement Learning (RL) is a paradigm in which AI learns not from data provided by a data scientist, but from the interaction between an agent and its environment. In the most common setup, the RL agent operates in a simulated environment.

The role of the data scientist is to deliver a set of rewards and punishments for different behaviors. A good example comes from the automotive industry, where an RL agent can be used to control an autonomous vehicle in a simulated environment. It gets rewards for sticking to the traffic rules and driving safely, while being punished for speeding or running red lights.
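To make the reward-and-punishment idea concrete, below is a minimal sketch of what such a reward function could look like for a simulated driving agent. The state fields and numeric values are invented for illustration and are not taken from any particular framework or paper.

```python
# Hypothetical reward function for a simulated driving agent.
# The state fields and numeric values are illustrative only.

def driving_reward(state):
    """Return a scalar reward for one simulation step."""
    reward = 0.0

    # Reward safe, rule-abiding behavior.
    if state["within_speed_limit"]:
        reward += 1.0
    if state["kept_safe_distance"]:
        reward += 0.5

    # Punish dangerous behavior.
    if state["ran_red_light"]:
        reward -= 10.0
    if state["speeding"]:
        reward -= 5.0

    return reward


# Example: one step where the agent kept its distance but was speeding.
step = {
    "within_speed_limit": False,
    "kept_safe_distance": True,
    "ran_red_light": False,
    "speeding": True,
}
print(driving_reward(step))  # -4.5
```

The agent never sees these rules directly; it only observes the resulting scores and gradually adjusts its behavior to maximize them.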

In the traditional model, the RL agent does not influence the form it was provided with. In the Deep Evolutionary Reinforcement Learning (DERL) model, this changes: the agent can not only adapt its behavior but also modify its form.

This approach is based on the fact that the brain has evolved in parallel with the body, and there are countless life forms using different means to operate in various environments and to fulfill various goals – from humans having nimble fingers to operate tools, to cats being able to retract their claws to sneak through shadows while hunting prey, to birds having sophisticated eyes capable of spotting the trails of rodents from high above.

To simulate the process of evolution, the researchers gave each agent a lifespan and the ability to modify the next generation of agents, with the aim of maximizing goal-related performance. As the next generation inherits the form modified by its “parents,” the learning process restarts.
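A rough sketch of such a generational loop is shown below. The morphology representation, mutation rule, and fitness function here are toy placeholders, not the actual DERL implementation.

```python
import random

# Toy sketch of an evolutionary loop over agent morphologies.
# A "morphology" is just a list of limb lengths here – a stand-in for
# the richer body plans evolved in the actual DERL work.

def mutate(morphology):
    """Return a slightly perturbed copy of a parent's body plan."""
    return [max(0.1, limb + random.gauss(0, 0.05)) for limb in morphology]

def lifetime_learning(morphology):
    """Placeholder for an agent learning to use its body during its lifespan.

    Returns a fitness score; in DERL this corresponds to the reward the
    agent achieves after reinforcement learning in the simulated environment.
    """
    return sum(morphology) - 0.5 * max(morphology)  # arbitrary toy fitness

population = [[random.uniform(0.5, 1.5) for _ in range(4)] for _ in range(8)]

for generation in range(10):
    # Each agent lives one "lifespan": it learns, then is scored.
    scored = sorted(population, key=lifetime_learning, reverse=True)

    # The fittest parents pass on mutated copies of their form,
    # and the learning process restarts for the new generation.
    parents = scored[: len(scored) // 2]
    population = [mutate(p) for p in parents for _ in range(2)]

print("Best evolved morphology:", max(population, key=lifetime_learning))
```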

This line of research opens the door to new approaches to robotics and artificial intelligence. The researchers also found that faster learners tended to evolve faster and more effectively, despite the fact that there was little to no direct reward for the learning process itself.

The full paper can be read in Nature.

AI getting close to the way humans process natural language

Mimicking the way the natural brain operates is one goal. But oftentimes it is all about delivering a solution that simply works. When it comes to Natural Language Processing, the models bore little to no resemblance to the actual human brain in how they operate – the design was purely functional, based on the probability of the next word in a string of words.
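As a toy illustration of that purely functional, next-word-probability view (a deliberately simplified sketch – real language models are vastly more sophisticated), even a simple bigram counter captures the core idea:

```python
from collections import Counter, defaultdict

# Toy next-word model: estimate the most likely continuation of a word
# from raw counts. The principle – predicting the most probable next
# word – is the same one modern language models scale up.

corpus = "the cat sat on the mat the cat slept on the sofa".split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def most_probable_next(word):
    """Return the most frequent continuation seen after `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(most_probable_next("the"))  # 'cat' – seen twice after 'the'
```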

Apparently, this result-driven approach has ended up producing a human brain-like architecture. Recent research from the Massachusetts Institute of Technology shows that the human brain processes language in a manner similar to a neural network, using a hidden layer responsible for deriving meaning from the sum of words.
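The idea of a hidden layer deriving meaning from the sum of words can be sketched as pooling word vectors into one representation and passing it through a single layer. The embeddings and weights below are random placeholders; this only illustrates the shape of the computation, not the models used in the MIT study.

```python
import numpy as np

# Hypothetical illustration: a "hidden layer" that derives one meaning
# vector from the sum (here: mean) of word vectors in a sentence.

rng = np.random.default_rng(0)
vocab = {"dogs": 0, "chase": 1, "cats": 2}
embeddings = rng.normal(size=(len(vocab), 8))   # one 8-dim vector per word
hidden_weights = rng.normal(size=(8, 8))

def sentence_meaning(words):
    """Pool word vectors, then pass them through a hidden layer."""
    pooled = np.mean([embeddings[vocab[w]] for w in words], axis=0)
    return np.tanh(pooled @ hidden_weights)     # hidden representation

print(sentence_meaning(["dogs", "chase", "cats"]).shape)  # (8,)
```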

The full research can be found on the MIT News website.

Ask Delphi – AI-powered social experiment

While a full-fledged social experiment in itself, this news also brings thought-provoking insights into internet communities. The Allen Institute for AI has delivered “Ask Delphi” – an oracle-like artificial intelligence trained on Reddit archives, where countless internet users ask about various matters in their lives – from marriage issues to work-related dilemmas, from raising children to training their dogs – you name it.

The oracle can process any question given, yet the answers are terse – “it is good” or “it is bad” at best. No explanation is provided, so unlike the ancient Greeks, the modern user gets no poem about the future to come, nor the reasoning behind a particular judgment.

The oracle can be accessed through the Ask Delphi website, yet the researchers take no responsibility for the outcome – contrary to the ancient priests, who tuned the Oracle’s responses to be as foggy and unclear as possible, making them easier to reinterpret later in light of how events actually unfolded.

NATO launches $1B in funds for AI strategy

The North Atlantic Treaty Organization will be the next global organization to adopt an Artificial Intelligence strategy. The 30-nation security pact built around the North Atlantic aims to make itself “future-proof.”

As a part of this strategy, the alliance will also launch a $1 billion fund to support tech advancements. According to NATO Secretary General Jens Stoltenberg, the wars of the future will be fought not only with bombs and guns but also with bytes and big data, so keeping up with technological advancements is critical for the future of national security.

The document calls for establishing accountability and transparency in AI-powered solutions used by member states, as well as ensuring reliability and avoiding bias.

A summary of the document can be found on the NATO webpage.