Let the robots move the couch
AI is supporting researchers in discovering new drugs and businesspeople in driving profitability. But a concern arises: are we trusting AI-generated results a bit too much?
AI-based solutions are currently at their hottest, reforging academic work into practical business results. But that hasn't stopped researchers from finding new ways to use their knowledge to improve people's lives or to solve problems once deemed unsolvable.
AI spending on the rise
According to a recent IDC report, spending on AI-based solutions is increasing: it is estimated to reach $342 billion by the end of 2021, and the $500 billion mark is predicted to be broken by 2025. The most popular AI-related categories on the rise are AI platforms and AI application development, implying that companies are increasingly willing to invest in tailored products or solutions they can implement in-house.
The third most popular category in IDC's classification is AI ERM (enterprise resource management), which is predicted to be outpaced by AI CRM (customer relationship management) applications by 2025. The full report can be found on IDC's website.
Experts tend to overtrust AI
To err is human, but that doesn't mean machines make no mistakes at all – or does it? According to a recent study conducted by IBM, the Georgia Institute of Technology, and Cornell University, even experts in the field appear to trust AI-based systems to a surprising degree, without ever digging deeper into the reasoning behind a particular result.
The experiment itself took an interesting, game-like form: each participant played a space explorer whose food and oxygen supplies are about to be engulfed by lava, and whose only possible savior is an AI-controlled robot that moves boulders to stop the flow.
This scenario was presented in three ways. The first group saw the robot deliver only a description of its current action, like "I am standing still." The second group got a plain-English description of the robot's motivation and intent, while the third got binary code as the robot's "state of mind."
Interestingly, the participants who got only the numerals, while knowing nothing about their real meaning, trusted deeply in the logic of the machine, while those provided with explanations started to attribute emotional intelligence to the robot – something it obviously lacked.
This fallacy, although not directly measured or examined before, lends strong backing to regulators' push toward explainable AI. The full paper can be found on arXiv.
Let the robots move the couch
The case of moving a couch is interesting from an organizational point of view: two fully autonomous agents need to cooperate, yet stay independent and make their own decisions about the task they have to fulfill.
When it comes to robotics, the general approach to this problem is usually comparable either to a single hero or to the hive mind behind a swarm of robotic workers. Getting a group of independent robots to complete a single task together is a relatively new approach, yet it mirrors the cooperation seen widely in our day-to-day lives, from moving a couch to painting walls and hundreds of other tasks.
Researchers from the University of Cincinnati managed to deliver robots essentially able to move a couch (as their virtual task), even in an unfamiliar environment. This was accomplished thanks to a fuzzy logic approach: instead of operating on binary values, the robots perceived their surroundings in varying degrees of "good" or "bad" and chose the best available option.
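To give a flavor of the difference, here is a minimal sketch of fuzzy-logic action selection; the membership functions, thresholds, and candidate moves are all hypothetical illustrations, not taken from the paper.

```python
# A minimal sketch of fuzzy-logic action selection (hypothetical membership
# functions and numbers, not the researchers' actual controller). Instead
# of a binary blocked/clear decision, each candidate move gets a degree of
# "goodness" in [0, 1], and the robot picks the move with the highest degree.

def clearance_goodness(distance_m: float) -> float:
    """How 'good' a move is given the distance to the nearest obstacle:
    0.0 below 0.2 m, 1.0 above 2.0 m, with a smooth ramp in between
    rather than a hard threshold."""
    if distance_m <= 0.2:
        return 0.0
    if distance_m >= 2.0:
        return 1.0
    return (distance_m - 0.2) / (2.0 - 0.2)

def progress_goodness(heading_error_rad: float) -> float:
    """How well a move points toward the goal: 1.0 when aimed straight
    at it, falling off linearly to 0.0 when pointing the opposite way."""
    return max(0.0, 1.0 - abs(heading_error_rad) / 3.14159)

def goodness(distance_m: float, heading_error_rad: float) -> float:
    # Fuzzy AND via minimum: a move is only as good as its worst property.
    return min(clearance_goodness(distance_m),
               progress_goodness(heading_error_rad))

# Candidate moves: (name, obstacle distance in m, heading error in rad).
candidates = [("forward", 0.4, 0.0), ("left", 1.5, 0.8), ("right", 2.5, 1.6)]
best = max(candidates, key=lambda c: goodness(c[1], c[2]))
print(best[0])  # "left": good clearance AND decent progress beats "forward"
                # (too close to an obstacle) and "right" (wrong direction)
```

A binary controller would have to call 0.4 m either "blocked" or "clear"; the fuzzy version keeps the nuance and trades it off against progress toward the goal.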
Interestingly, the robots didn't share their strategy or consult each other in any way before the test – it was simply about moving the couch. The full text can be found on ScienceDaily.
Algorithm teaches drones to avoid obstacles
One of the most challenging elements of controlling drones is avoiding crashes. While a drone remains stable and relatively easy to control at low speed, it loses stability as it gains speed. Moving faster also increases the chance of overlooking an obstacle until it is too late. That's why crashes are so common during drone races.
The challenge of avoiding obstacles at high speed limits drone usage in emergencies – for example, delivering supplies or searching for survivors.
The new algorithm tackles this problem by drawing on a set of experiments designed to produce better predictions of aerodynamics and the drone's behavior at high speed.
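As a rough illustration of why aerodynamics matters here, consider the sketch below; it uses an assumed quadratic-drag model and made-up numbers, and is not the actual MIT planner.

```python
# Rough illustration (assumed quadratic-drag model, hypothetical numbers).
# Drag grows with the square of airspeed and eats into the thrust budget,
# so a speed that looks safe in a drag-free model may leave no margin
# for an avoidance maneuver.

MASS_KG = 1.0        # hypothetical quadrotor mass
MAX_THRUST_N = 20.0  # hypothetical total motor thrust
DRAG_COEFF = 0.05    # hypothetical drag coefficient (N per (m/s)^2)
G_MPS2 = 9.81        # gravitational acceleration

def can_swerve(speed_mps: float, swerve_accel_mps2: float) -> bool:
    """Can the drone hold this speed and still pull the given lateral
    acceleration to dodge an obstacle? Crude scalar force budget."""
    drag_n = DRAG_COEFF * speed_mps ** 2     # grows with speed squared
    weight_n = MASS_KG * G_MPS2              # must always be supported
    swerve_n = MASS_KG * swerve_accel_mps2   # force needed for the dodge
    return drag_n + weight_n + swerve_n <= MAX_THRUST_N

for v in (5.0, 10.0, 15.0):
    print(f"{v:4.1f} m/s -> {can_swerve(v, swerve_accel_mps2=5.0)}")
# A drag-free budget would accept all three speeds; with drag included,
# 15 m/s no longer leaves enough thrust for the avoidance maneuver.
```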
The full article describing the research can be found on the MIT website.