Part 1
Artificial Intelligence (AI) is considered a societal and economic revolution comparable to the steam engine and the introduction of the PC into offices during the first and third industrial revolutions, respectively. At the same time, the hype is strong, and filtering the meaningful trends out of the noise is getting increasingly challenging.
The bold claim of naming AI technologies the fourth industrial revolution was coined by Klaus Schwab, the Founder and Executive Chairman of the World Economic Forum. The technologies of previous revolutions have either been fully adopted (the Internet, PCs) or have faded away, replaced by more sophisticated successors (as in the case of steam engines). The fourth revolution, by contrast, is still on the move.
The key driver of the fourth revolution is big data, with AI being one of the key technologies in this area. While the concept is not new – the first papers were published in 1940 and 1943 – the shift from academia to business and public availability has happened only recently. According to the Artificial Intelligence Index report, the volume of peer-reviewed AI papers grew by more than 300% between 1998 and 2018.
And the revolution is not going to slow down. According to PwC estimates, AI will add $15.7 trillion to the global economy and deliver a 26% boost to local economies, both by 2030.
What is an AI trend?
To spot trends in the heterogeneous AI world, it is necessary to understand what a trend really is – a term that lacks a formal definition. For the sake of this article, it is reasonable to assume that a trend is a larger set of events and technologies, not a single use case.
This means that there is a larger set of items in the category, with a major, leading thread that connects all the dots. This approach led us to the AI and ML trends listed below:
- Part 1
AI Trend 1: Responsible AI
AI Trend 2: Edge AI
AI Trend 3: Conditional Computation
- Part 2
AI Trend 4: Synthetic Media
AI Trend 5: Large Language Models and Natural Language Processing
AI Trend 6: Continual Learning
AI Trend 7: AI Cybersecurity
Responsible AI and Machine Learning
According to the McKinsey global survey on AI, Artificial Intelligence has become a key element of modern business and is more common than it appears at first glance. The report highlighted that the share of companies using AI to support at least one business function grew from 50% in 2020 to 56% in 2021. That means the majority of consumers have probably encountered or used an AI-based system, whether consciously or not.
The largest tech companies, like Google, Amazon, Netflix, and Apple, are just the tip of the iceberg. And the increasing adoption of these solutions has boosted the need to regulate them and to deliver them responsibly.
AI Regulations, Guidelines, And Good Practices
AI is a cutting-edge technology, and the companies operating today are effectively field-testing their models. While the tech brings advantages and profits, there are also situations where the machines behave unpredictably. Uber's face-detection tool failing to recognize the faces of drivers from ethnic minorities and Amazon's AI-based recruiting tool automatically rejecting female coders are just two examples that come to mind.
The European Union is one of the first major players aiming to deliver a legal framework that protects the rights of users while allowing companies to leverage the power of the new tech.
The European Commission (EC) delivered the first proposal of the Artificial Intelligence Act (AIA) in April 2021, introducing not only a tier system for classifying AI-based systems but also a set of severe fines. We covered the details of the proposed law, as well as its possible effects and implications, in our text The Artificial Intelligence Act in the EU – what's likely to change.
Interpretability
Another challenge regarding responsible AI is interpretability. A common perception of AI is that the system is a black box, delivering results from nobody knows where.
This is sometimes the case, but not always. When AI is used in more sensitive matters, like healthcare, the system needs to be interpretable – able to provide the user (a physician, for example) with information about the factors influencing a particular result. That turns out to be more challenging.
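As a minimal illustration of what interpretability tooling can look like – a generic sketch, not tied to any system mentioned here, with a placeholder model and input – the snippet below computes an input-gradient saliency map in PyTorch, highlighting which input pixels most influenced a classifier's prediction:

```python
import torch
import torchvision.models as models

# A pretrained classifier standing in for any differentiable model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# A random tensor standing in for a preprocessed input image.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass; pick the class the model is most confident about.
logits = model(image)
top_class = logits.argmax(dim=1).item()

# Backpropagate that class score down to the input pixels.
logits[0, top_class].backward()

# Per-pixel gradient magnitude is a crude saliency estimate:
# large values mark pixels that most influenced the prediction.
saliency = image.grad.abs().max(dim=1).values  # shape: (1, 224, 224)
print(saliency.shape)
```

Production systems typically rely on more robust attribution techniques, but the principle – tracing a prediction back to the input features that drove it – is the same.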
Edge AI
One of the key challenges with neural networks is the fact that both training and running them require huge amounts of computing power. While training can usually be offloaded to the cloud, running the network on-device can be more challenging.
But this doesn't stop the edge AI market from developing dynamically. According to the MarketsAndMarkets report, the global edge AI software market is predicted to reach $1,835 million by 2026. This development is fueled mostly by the growing demand for smart appliances and applications, as well as the rollout of 5G telecommunications.
While web-based technologies can lean on the cloud, edge devices like cameras, smartphones, IoT sensors, or autonomous vehicles need neural networks scaled down enough to run on less sophisticated hardware.
Edge AI was also mentioned in AI Trends 2021, proving that our team is skilled at filtering the long-lasting trends out of the noise.
Reducing The Model Size
This can be done by downscaling a large model to a size that fits the requirements of the edge hardware while preserving as much of the model's performance as possible. NeurIPS 2021 featured interesting work on the matter, including:
- Only Train Once: A One-Shot Neural Network Training And Pruning Framework – the paper describes a way to reduce the time and effort required to downscale a model. The approach can bring significant savings in the development of edge technologies; a simplified pruning sketch follows below.
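Only Train Once folds training and pruning into a single pass; as a simpler, generic illustration of the pruning idea itself (our own example, not the paper's method, applied to a placeholder model), the sketch below uses PyTorch's built-in magnitude-pruning utilities:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A placeholder model standing in for a trained network.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Zero out the 60% of weights with the smallest absolute value
# in every Linear layer (unstructured magnitude pruning).
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.6)
        prune.remove(module, "weight")  # make the pruning permanent

# Count the surviving (non-zero) parameters.
nonzero = sum((p != 0).sum().item() for p in model.parameters())
total = sum(p.numel() for p in model.parameters())
print(f"{nonzero}/{total} weights remain after pruning")
```

In practice the pruned model is usually fine-tuned afterward to recover accuracy, and the sparse weights are paired with hardware or runtimes that can actually exploit the sparsity.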
Reducing The Power And Computational Demand Of The Model
This can be done by delivering architectural and structural tweaks that enable the network to use resources more efficiently. A good example comes from Zero Time Waste: Recycling Predictions in Early Exit Neural Networks, research delivered in collaboration with Tooploox. It enriches the neural network with early-exit heads whose otherwise wasted predictions are reused to support the work of subsequent layers, effectively reducing the computational cost of image classification by up to 50%.
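To make the early-exit idea concrete, here is a deliberately simplified sketch (our own toy example, not the Zero Time Waste implementation): an intermediate head returns a prediction early when it is confident enough, and when it is not, its logits are recycled into the final prediction instead of being thrown away:

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Toy two-stage network with an intermediate classifier head."""

    def __init__(self, threshold: float = 0.9):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        self.head1 = nn.Linear(64, 10)   # early-exit head
        self.stage2 = nn.Sequential(nn.Linear(64, 64), nn.ReLU())
        self.head2 = nn.Linear(64, 10)   # final head
        self.threshold = threshold

    def forward(self, x):  # assumes a single example (batch size 1)
        h = self.stage1(x)
        early_logits = self.head1(h)
        confidence = early_logits.softmax(dim=-1).max().item()
        if confidence >= self.threshold:
            # Confident enough: skip the rest of the network entirely.
            return early_logits
        # Otherwise keep computing; folding the early logits back in
        # here is the "recycling" idea in miniature.
        late_logits = self.head2(self.stage2(h))
        return (early_logits + late_logits) / 2

model = EarlyExitNet()
prediction = model(torch.rand(1, 32)).argmax(dim=-1)
```

Easy inputs exit at the first head and never pay for the later stages, which is where the computational savings come from.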
Conditional Computation
One of the key limitations of AI is the fact that models are narrow – their adaptability and flexibility are severely limited. A particular network has little to no way to perform any task other than the one it was built for.
This means that in the traditional approach, a neural network is limited to the particular data it was trained on – and it has proven difficult to infuse a model with additional information to boost its performance.
But this is rapidly changing as well. Interesting research on the matter – Plugin Networks for Inference under Partial Evidence – was delivered by Tooploox. The research extends a neural network with sub-networks that feed known contextual factors (partial evidence) into the main network, influencing its outcomes.
For example, if an autonomous car is driving through California, there is no need to compare encountered road signs with European ones. The same goes for spotting a McDonald's on Mars or a Star Destroyer in the kitchen. With the number of possible objects to detect in a particular context significantly reduced, the power required to perform the computation drops significantly.
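The sketch below is a simplified illustration of that idea (our own toy example, not the paper's architecture): a small plugin network consumes a hypothetical context vector – the partial evidence – and additively adjusts the main classifier's logits, suppressing classes that are implausible in the given context:

```python
import torch
import torch.nn as nn

NUM_CLASSES = 100  # hypothetical number of detectable object classes

# The main network, trained once and kept frozen at inference time.
backbone = nn.Sequential(
    nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, NUM_CLASSES)
)

# A small plugin network mapping partial evidence (here a toy
# 16-dimensional context vector, e.g. an encoded location) to an
# additive adjustment of the main network's class logits.
plugin = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, NUM_CLASSES)
)

features = torch.rand(1, 512)  # image features (placeholder)
context = torch.rand(1, 16)    # partial evidence (placeholder)

# Classes implausible in this context receive large negative
# adjustments, effectively shrinking the candidate set.
logits = backbone(features) + plugin(context)
prediction = logits.argmax(dim=1)
```

Because only the lightweight plugin depends on the context, new kinds of partial evidence can be trained and attached without retraining the main network.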
Summary
This was just the first part of our two-part summary of the AI trends of 2022 – trends that will shape the near future and have made their debut in recent days. Follow us to stay informed about the upcoming second part!