
Why do so many AI projects fail?

Tooploox CS and AI News #15

This edition of Tooploox CS and AI news deals with venture capital (VC) firms using Artificial Intelligence (AI) in their decision-making processes. We also provide our readers with information on why so many AI projects fail and what to do to avoid this all too common fate.

March 2021 has delivered new research not only in the field of AI but also in those that help us understand the landscape surrounding it. And it seems that issues regarding privacy, ethics, and success rates in implementation have been on the rise. 

Gartner predicts that 75% of VCs will be AI-driven by 2025

VCs and startups form an ecosystem that has, until now, been slow to adopt innovations of its own. While venture capitalists have risk built into their business model, in the end it is all about earning more money and reducing losses. More on the matter can be found in our blog posts about venture capital during Covid and venture capital during the recession.

While human investors come with knowledge and experience, AI-powered algorithms can chew through gigabytes of data in mere seconds, outperforming many humans in the accuracy of their predictions. 

Thus, it is no surprise that Gartner predicts that up to 75% of all VCs will be using AI-based solutions in the investment process by 2025, up from 5% today. To be more precise, AI will perform early-stage filtering, in effect deciding whether a particular startup or founder ever meets with a human analyst.

So after years of funding AI-based solutions, VCs have decided to grab hold of the very tools they’ve helped to build.

The full story can be found in the Wall Street Journal.

Poor data quality behind most AI project failures 

According to the latest State of the Cloud Report, 87% of employees claim that poor data quality is the main reason why AI projects in their organizations fail. The study also found that only 8% of data-related professionals believe that AI-based solutions are used widely in their organization.

The report stands in contrast to the dominant narrative of companies widely adopting AI-based solutions. Yet there is little information on how these solutions are actually used day to day after implementation, and whether they prove a success or a failure.

Adversarial training not secure enough to be used in robots

Convolutional Neural Networks are currently the top-tier technology for image recognition. The technique delivers impressive results across multiple tasks, with semantic segmentation and scene recognition being just the first examples that come to mind.

But what works in a virtual environment can prove challenging in a far more complicated reality. And the core of the problem lies in the nature of neural networks themselves.

The network scans an image in search of statistically repetitive elements that are used to make further generalizations in the task of object recognition. An adversarial image is one with a layer of noise added that makes it look different from the neural network’s point of view. The human brain is either not precise enough to spot the difference, or too good at generalization to even notice the noise layer.

But for the neural network, an adversarial image can be utterly confusing. Adversarial attacks have already been carried out on autonomous vehicles, where black and white stickers placed on a road sign made it effectively invisible to the vehicle. Robots are especially prone to this type of attack because they operate in real space, where any malicious actor or prankster can place adversarial stickers to confuse their systems.
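To make the noise-layer idea concrete, here is a minimal sketch using the fast gradient sign method (FGSM), one common way of crafting adversarial images; it assumes PyTorch, and the fgsm_adversarial helper and epsilon value are illustrative rather than taken from the paper referenced below.

```python
# Minimal FGSM-style sketch (assuming PyTorch): the "noise layer" is the sign
# of the loss gradient with respect to the input pixels, scaled by a small
# epsilon so the change stays invisible to the human eye.
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, image, label, epsilon=0.01):
    """Return a perturbed copy of `image` that tends to fool `model`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Add the noise layer: a tiny step in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```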

This problem has been further described in a research paper published on arXiv.

More information about the issues with robotic vision can be found in our blog post about computer vision for autonomous vehicles.

…but probably not for long

The problem of autonomous vehicles being vulnerable to adversarial attacks has not been set aside. Researchers are aware that, beyond deliberate adversarial attacks, data disturbances are simply part of real-world environments.

That’s why a team from MIT decided to design a “digital sceptic”: an algorithm that helps an autonomous vehicle navigate the imperfect and overcomplicated real world. The system uses a reinforcement learning approach.

The advantage of reinforcement learning is that this method does not rely on labels; instead, it is trained through a system of rewards and penalties tied to the agent’s interactions with the environment. Thus, when the main network encounters a new situation or an entity it does not recognize, a second network can evaluate its response. When paired with the multiple sensors in autonomous vehicles and backed by lidar point cloud data analytics, safety can be boosted even further.

This can be considered a kind of artificial common sense. More details can be found in the MIT publication.
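As a toy illustration of the reward-and-penalty idea (not the MIT system itself), here is a tabular Q-learning sketch in Python: an agent driving down a tiny two-lane road is penalized for hitting an obstacle and rewarded for reaching the end. All names and numbers are made up for the example.

```python
import random

ROAD_LENGTH = 6
OBSTACLES = {(2, 0), (4, 1)}           # (position, lane) cells to avoid
ACTIONS = [0, 1]                       # which lane to drive in next
q_table = {(pos, lane, a): 0.0
           for pos in range(ROAD_LENGTH) for lane in ACTIONS for a in ACTIONS}

def step(pos, lane, action):
    """Advance one cell; penalize crashes, reward reaching the end of the road."""
    nxt_pos, nxt_lane = pos + 1, action
    if (nxt_pos, nxt_lane) in OBSTACLES:
        return nxt_pos, nxt_lane, -1.0, True    # crash: penalty, episode ends
    if nxt_pos == ROAD_LENGTH - 1:
        return nxt_pos, nxt_lane, +1.0, True    # goal: reward, episode ends
    return nxt_pos, nxt_lane, 0.0, False

alpha, gamma, eps = 0.1, 0.9, 0.2
for _ in range(2000):
    pos, lane, done = 0, 0, False
    while not done:
        # Explore occasionally; otherwise pick the action with the best learned value.
        if random.random() < eps:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q_table[(pos, lane, a)])
        nxt_pos, nxt_lane, reward, done = step(pos, lane, action)
        best_next = 0.0 if done else max(q_table[(nxt_pos, nxt_lane, a)] for a in ACTIONS)
        q_table[(pos, lane, action)] += alpha * (reward + gamma * best_next
                                                 - q_table[(pos, lane, action)])
        pos, lane = nxt_pos, nxt_lane
```

Over many episodes, the learned values steer the agent around the obstacle cells without anyone ever labeling a single image, which is the property that makes this approach attractive as a “digital sceptic” running alongside a perception network.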

Blurring faces does not reduce the performance of trained neural networks

Building a large and consistent dataset is a cornerstone of any AI-based solution. At the same time, concerns about the privacy and safety of the people depicted in such images have been rising.

ImageNet is currently the gold standard when it comes to image datasets used in AI training. Yet, with numerous images of people, it has struggled with proper labeling, and one major scandal emerged just last year when racial slurs were found among its labels. The problem is severe: countless models have been trained on a dataset that depicts particular, existing people under derogatory labels.

To mitigate privacy concerns regarding the identity of people in the images, the team behind the dataset decided to blur all faces. The researchers found that the impact of this change on image recognition performance is minimal.
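For intuition, here is a rough sketch of what face blurring can look like in practice, assuming OpenCV and its bundled Haar cascade face detector; this is not the pipeline the ImageNet team used, and the blur_faces helper and file names are hypothetical.

```python
# Rough sketch (assuming OpenCV): detect faces with a Haar cascade and replace
# each detected region with a heavy Gaussian blur, leaving the rest of the
# training image untouched.
import cv2

def blur_faces(path, out_path):
    image = cv2.imread(path)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
        face = image[y:y + h, x:x + w]
        image[y:y + h, x:x + w] = cv2.GaussianBlur(face, (51, 51), 0)
    cv2.imwrite(out_path, image)

blur_faces("photo.jpg", "photo_blurred.jpg")
```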

More details can be found in the official announcement. And more about the issues regarding the responsible usage of data can be found in our blog post about datasets and AI ethics.  
