AI summarizing books and talking about the weather

This edition begins with AI systems that aim to reduce the unpredictability of weather forecasts around the world by delivering precise, to-the-hour predictions of upcoming changes. But interest in artificial neural networks in the realm of our climate is not limited to predicting rainfall.
And, surprisingly enough, artificial neural networks are also increasingly effective at summarizing books – so real, human brains will not need to spend as much time reading them.
Researchers from Bloomberg and AWS claim to have delivered an algorithm that provides reliable estimates of greenhouse gas emissions
Climate change is one of the hottest (no pun intended) topics of the decade, with companies and states slowly transforming their operations towards low-emission or zero-emission modes. The core document behind the world’s movement toward reducing the greenhouse effect is the Paris Agreement. In 2015, representatives of 196 countries signed a document that obliged them to reduce emissions in order to keep the global temperature from rising more than 2 degrees Celsius above pre-industrial levels.
The key challenge in sticking to the Paris Agreement is the reliable measurement of emissions – the companies responsible for the highest levels of greenhouse gases can hide, or simply fail to disclose, what they truly produce. To tackle this challenge, Bloomberg Quant Research and Amazon Web Services have delivered a neural network that accurately estimates the emissions of companies and institutions that have refused to disclose them.
The model was trained on a dataset containing 24,052 rows of data on income, investments, locations, and multiple other factors, combined with the overall emissions of particular companies. By combining these aspects, the neural network is able to spot patterns and correlations between the performance data, the publicly available information, and the greenhouse gas emissions.
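To give a rough idea of the setup (an illustrative sketch only – the actual architecture, features, and figures used by Bloomberg and AWS are not reproduced here), estimating emissions from tabular company data can be framed as supervised regression on disclosed examples. Every feature name and value below is hypothetical:

```python
# Hypothetical sketch: emissions estimation as supervised regression on
# tabular company data. Feature names and numbers are made up for illustration;
# the Bloomberg/AWS model uses a far richer feature set and its own architecture.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Toy training rows: [revenue ($M), capex ($M), employees, energy use (GWh)]
X_train = np.array([
    [1200.0,  80.0,  5000,  310.0],
    [ 450.0,  20.0,  1200,   90.0],
    [9800.0, 600.0, 42000, 2100.0],
])
# Disclosed emissions (kt CO2e) for the companies above
y_train = np.array([150.0, 40.0, 980.0])

# A small feed-forward network standing in for the paper's model
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=5000, random_state=0),
)
model.fit(X_train, y_train)

# Estimate emissions for a company that has not disclosed them
X_new = np.array([[2300.0, 150.0, 9000, 520.0]])
print(model.predict(X_new))  # estimated kt CO2e
```

Once the patterns between financial and operational data and reported emissions are learned, the same model can produce estimates for companies that never published their numbers.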
The paper is available on arXiv.
DeepMind delivers an AI that predicts rain with to-the-hour accuracy
Weather is not only uncontrollable but also largely unpredictable – with the butterfly effect being one of the major factors contributing to the difficulty. According to National Oceanic and Atmospheric Administration (NOAA) data, a weather forecast has up to 90% accuracy over a five-day period and about 80% over seven days. Yet in the longer run, the predictions get steadily less reliable.
But your typical weather forecast delivers only the most general data on whether it is going to rain or not – rarely with exact locations and an accurate time frame.
To address this challenge, DeepMind, a research facility owned by Alphabet, the company behind Google, has delivered a machine learning model capable of predicting rain conditions over the upcoming 90 minutes, derived from data covering the past 20 minutes.
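In broad strokes, nowcasting of this kind maps a short sequence of recent radar frames onto a longer sequence of predicted frames. The sketch below only illustrates that input/output framing with a trivial persistence baseline – it is not DeepMind’s model, and the grid size and 5-minute frame cadence are assumptions:

```python
# Illustrative nowcasting setup (not DeepMind's model):
# map the last 20 minutes of radar frames to the next 90 minutes,
# assuming radar frames arrive every 5 minutes.
import numpy as np

FRAME_MINUTES = 5
past_frames   = 20 // FRAME_MINUTES   # 4 observed frames
future_frames = 90 // FRAME_MINUTES   # 18 predicted frames
H = W = 256                           # hypothetical radar grid size

# Fake "observed" radar reflectivity for the past 20 minutes
observed = np.random.rand(past_frames, H, W).astype(np.float32)

def naive_nowcast(frames: np.ndarray, steps: int) -> np.ndarray:
    """Persistence baseline: repeat the latest frame for every future step.
    A learned nowcasting model instead captures the motion and the growth or
    decay of precipitation rather than simply copying the last observation."""
    last = frames[-1]
    return np.stack([last] * steps)

forecast = naive_nowcast(observed, future_frames)
print(forecast.shape)  # (18, 256, 256): one frame per 5 minutes, 90 minutes ahead
```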
More details about the model and its methodologies can be found in DeepMind’s blog post.
China delivers its own AI regulation
The need for regulations in the AI landscape is on the rise along with the ever-increasing usage of machine learning-based solutions. One of the first organizations to build (or at least propose) a legal framework for ML was the European Union. Among other things, the proposed regulation delivers a broad definition of how and where such solutions may be used, along with penalties for failure to comply. You can find more details, as provided by Tooploox, regarding the Artificial Intelligence Act in the EU here.
This need was also spotted in China, where officials have proposed their own AI regulations. According to VentureBeat, the regulation reflects broader thinking about algorithms and the need for oversight in this space, particularly the push to make the rules behind a system’s inner workings more transparent. The regulation also obliges companies to regularly audit the algorithms they use in their daily operations – including checking datasets and outputs.
More detail on the regulations can be found at the Cyberspace Administration of China (source in Chinese).
OpenAI delivers a book summarization engine
Summarizing a book is a challenging task, requiring all the relevant information to be delivered in a brief and easily digestible format. It is also a great test of the alignment problem – the term describes keeping an AI system’s behavior aligned with human goals. In other words, it is all about ensuring that the AI is doing what it was designed to do.
A book summary needs to stay as close to the story as it can, cutting out everything one might consider unnecessary. OpenAI has delivered a model that performs this task by building on three foundations:
- It is derived from the GPT-3 model, which is currently the state of the art when it comes to Natural Language Processing (NLP) tasks.
- It breaks the task into easier, smaller chunks. Instead of summarizing an entire book all at once, it delivers summaries of chapters and then summarizes these further until a final text is produced (a rough sketch of this idea follows the list).
- It has a reinforcement learning component. The model also leverages human feedback, particularly on whether a summary is considered good or bad.
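As a rough illustration of the second point – recursive decomposition – the sketch below summarizes fixed-size chunks and then summarizes the concatenated summaries until the text is short enough. It is not OpenAI’s code; `summarize_chunk` is a hypothetical stand-in for the learned, human-feedback-tuned model and merely truncates text so the example runs:

```python
# Hypothetical sketch of recursive summarization by chunks.
# `summarize_chunk` stands in for a learned language model fine-tuned with
# human feedback; here it just truncates, to keep the example runnable.
def summarize_chunk(text: str, max_chars: int = 200) -> str:
    return text[:max_chars]

def recursive_summary(text: str, chunk_size: int = 1000, target_len: int = 500) -> str:
    # Base case: the text is already short enough to summarize in one pass.
    if len(text) <= chunk_size:
        return summarize_chunk(text, target_len)
    # Split into chunks, summarize each, then summarize the joined summaries.
    chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    partial = " ".join(summarize_chunk(c) for c in chunks)
    return recursive_summary(partial, chunk_size, target_len)

book = "Once upon a time... " * 2000   # stand-in for a full book
print(len(recursive_summary(book)))    # a summary no longer than target_len
```

The human-feedback part then scores summaries like these, and the model is further trained to prefer the ones people judge as good.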
More details on the model can be found in this arXiv paper.
Building a world with synthetic data
One of the key costs in building AI-based solutions is acquiring data and preparing it so the model can digest it. Building a dataset is not only about collecting all the available information, but also about labeling it accordingly and correctly so that the model can learn from it.
Data brings multiple other challenges as well – models trained on datasets that carry a hidden bias perform even worse when dealing with underrepresented cases. There are also further issues regarding datasets, including ethics and copyright.
To tackle these challenges, companies have come up with synthetic data. The concept is pretty straightforward – instead of collecting real data samples, a company can use neural networks to produce new data that is indistinguishable from reality. Artificial human faces are a perfect example, which anyone can test simply by visiting the https://thispersondoesnotexist.com/ website.
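The faces on that site come from a generative adversarial network (GAN). The sketch below shows only the bare adversarial idea – a generator turning random noise into samples and a discriminator judging real versus generated ones – on toy two-dimensional data, which is nowhere near a production face generator:

```python
# Minimal GAN sketch (illustration only, not the large image models behind
# photorealistic faces): a generator maps noise to samples, a discriminator
# learns to tell real samples from generated ones, and they train adversarially.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2   # toy sizes; real image GANs are far larger

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # toy "real" data cluster
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Discriminator: push real samples toward label 1, generated ones toward 0
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: try to make the discriminator label its samples as real
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, fresh noise yields new, fully synthetic samples
print(generator(torch.randn(5, latent_dim)))
```

The same adversarial recipe, scaled up and specialized, is what makes generated data hard to tell apart from real samples.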
Apparently, synthetic data delivers big business opportunities, as 89% of tech execs consider this technology crucial to staying ahead of the competition. More about synthetic data can be found in this report.