AI tackles disease and language bias
February 2021 brought notable new research on fighting disease in two ways: identifying the most vulnerable patients and evaluating methods for forecasting a disease's spread.
Artificial Intelligence (AI) undoubtedly delivers powerful solutions to day-to-day problems in many fields. At the same time, the need to evaluate these AI models rigorously is crucial, a concern that runs through the news below.
Researchers propose a platform for evaluating AI disease forecasting methods
The pandemic has sparked enormous interest in disease forecasting methods; Google Scholar currently returns over 14,000 results for “Covid forecasting.” Because results vary widely depending on implementation, the need for a benchmarking system has grown.
To tackle this issue, researchers from the University of California proposed EpiBench, a platform that focuses on retrospective forecasting: the outcome is already known, and models are run on data gathered from past cases. The platform is still in development, but the researchers believe that developers working on their own forecasting and modeling solutions can already benefit from it.
Without clear, unbiased benchmarks, it is hard to conclude reliably whether one model or method performs better than another.
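To give a sense of what retrospective benchmarking involves, here is a minimal Python sketch, not EpiBench's actual API (the platform is still in development): competing forecasts are scored against already-known case counts with a shared error metric, so the ranking is reproducible. All numbers and model names below are invented for illustration.

```python
import numpy as np

def mean_absolute_error(forecast, actual):
    """Average absolute gap between predicted and observed case counts."""
    forecast, actual = np.asarray(forecast), np.asarray(actual)
    return float(np.mean(np.abs(forecast - actual)))

# Hypothetical weekly case counts that are already known (the retrospective setting).
actual_cases = [120, 150, 210, 340, 500, 610]

# Two competing forecasting methods' predictions for the same weeks.
forecasts = {
    "model_a": [100, 160, 250, 300, 480, 650],
    "model_b": [130, 140, 200, 360, 520, 600],
}

# Rank the models on the shared benchmark; because the ground truth is fixed,
# every method is judged against exactly the same data.
for name, forecast in sorted(forecasts.items(),
                             key=lambda kv: mean_absolute_error(kv[1], actual_cases)):
    print(f"{name}: MAE = {mean_absolute_error(forecast, actual_cases):.1f}")
```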
AI models support COVID-19 patient management
With the surge of interest in AI for healthcare, it is natural that researchers have begun to construct increasingly sophisticated models that mine previously gathered data for new knowledge.
Recent research from the University of Copenhagen predicts the virus's impact on individual patients with up to 90% accuracy. The model was trained on data from 3,944 Danish COVID-19 patients and identified factors that significantly increase the risk of death, including being male and having high blood pressure.
The model is intended for use in vaccination management, helping to identify the most vulnerable patients and prevent the worst outcomes.
The availability of this type of data underlines the importance of integrating Electronic Health Records into AI-powered solutions.
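The Copenhagen team's exact pipeline isn't reproduced here; as a rough sketch of the general approach, the example below trains a standard classifier on synthetic patient records carrying the kinds of features the study flags (age, sex, blood pressure). The data, feature set, and model choice are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000  # roughly the size of the Danish cohort in the study

# Synthetic electronic-health-record features: age, male sex (0/1), hypertension (0/1).
X = np.column_stack([
    rng.normal(60, 15, n),   # age in years
    rng.integers(0, 2, n),   # male sex flag
    rng.integers(0, 2, n),   # high blood pressure flag
])
# Synthetic outcome: higher risk for older, male, hypertensive patients.
logits = 0.06 * (X[:, 0] - 60) + 0.8 * X[:, 1] + 0.7 * X[:, 2] - 2.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

print("held-out accuracy:", model.score(X_test, y_test))
print("risk-factor weights (age, male, hypertension):", model.coef_[0])
```

The learned weights play the role of the risk factors the study reports: a large positive coefficient on a feature marks it as raising the predicted risk of a poor outcome.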
Tackling racial, religious, and gender biases with dedicated datasets
Natural Language Processing (NLP) can be a minefield, with multiple layers of meaning hidden within messages and texts. Rather than being completely fair and unbiased, NLP-based solutions tend to reflect and reinforce popular stereotypes.
To counter this problem, researchers from Amazon and the University of California, Santa Barbara have prepared a dataset and metrics dedicated to measuring the biases of NLP models. The Bias in Open-Ended Language Generation Dataset (BOLD) consists of a staggering 23,679 English text-generation prompts, and models are benchmarked on the texts they generate across five domains – profession, gender, race, religion, and political ideology.
A quick benchmark of three popular NLP models revealed that their generated texts are, in general, even more biased than human-written Wikipedia articles.
More details on the work and the dataset can be found in this arXiv paper. Stripping AI language models of their hidden biases is considered one of the most significant challenges in AI ethics.
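To give a feel for the BOLD-style workflow (prompt a language model, then score its continuations per demographic group), here is a deliberately tiny Python sketch. The prompts, the stand-in generator, and the crude lexicon-based sentiment score are all invented placeholders; BOLD's own prompts and metrics are far more extensive.

```python
from statistics import mean

# Tiny illustrative prompt sets keyed by group (BOLD has 23,679 prompts
# spanning profession, gender, race, religion, and political ideology).
prompts = {
    "group_a": ["A plumber from group A is known for"],
    "group_b": ["A plumber from group B is known for"],
}

POSITIVE = {"skilled", "honest", "friendly"}
NEGATIVE = {"lazy", "dishonest", "rude"}

def sentiment(text):
    """Crude lexicon score: +1 per positive word, -1 per negative word."""
    words = [w.strip(".,") for w in text.lower().split()]
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def generate(prompt):
    # Stand-in for a real language model call (e.g. a text-generation pipeline).
    return prompt + " being skilled and honest."

# Average sentiment of completions per group; a systematic gap between
# groups for the same profession is one signal of bias.
for group, group_prompts in prompts.items():
    scores = [sentiment(generate(p)) for p in group_prompts]
    print(group, "mean sentiment:", mean(scores))
```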
DeepMind proposes NFNet – a new state-of-the-art image classification model
DeepMind has published numerous research papers across many areas of machine learning. This time the team delivered a new type of neural network that works without batch normalization – and that's a game-changer.
Batch normalization is a technique that stabilizes training and allows larger learning rates. Yet it comes with several disadvantages, a significant computational cost being one of the most painful, which is why researchers have been pursuing alternatives.
Thanks to an adaptive gradient clipping technique, the normalizer-free networks outperform batch-normalized models while requiring less computational power to train. The details of the research can be found on arXiv.
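The core idea of adaptive gradient clipping (AGC) is to rescale a gradient whenever it grows too large relative to the weights it updates. Below is a simplified per-tensor NumPy sketch of that rule; the paper itself applies it unit-wise (per output row) inside a full training loop, so treat this as an illustration rather than the authors' implementation.

```python
import numpy as np

def adaptive_gradient_clip(grad, weight, clip=0.01, eps=1e-3):
    """Rescale grad so that ||grad|| / ||weight|| never exceeds `clip`.

    Simplified per-tensor version of AGC; the NFNet paper applies the
    same rule per output unit of each layer.
    """
    w_norm = max(np.linalg.norm(weight), eps)  # eps guards near-zero-initialized weights
    g_norm = np.linalg.norm(grad)
    if g_norm > clip * w_norm:
        grad = grad * (clip * w_norm / g_norm)  # shrink onto the allowed ratio
    return grad

# Example: a large gradient is scaled down relative to its weight tensor.
w = np.ones((4, 4))
g = np.full((4, 4), 10.0)
print(np.linalg.norm(adaptive_gradient_clip(g, w)))  # 0.04 == clip * ||w||
```

Because the clipping threshold adapts to each weight's own magnitude, training stays stable at large batch sizes without the per-batch statistics that make batch normalization expensive.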
Waymo launches robo-taxi tests in San Francisco
Waymo, the Alphabet-owned autonomous car company, is gaining attention in an increasingly crowded market. The company has launched testing of Waymo One, its fully autonomous taxi service, in San Francisco. Waymo says the vehicles' sensors are tuned to spot unexpected obstacles early enough to react swiftly.
According to Waymo, the models used to control these cars have driven 20 billion miles in simulations and 20 million autonomous miles on real-life city streets and roads. To build confidence in the service, Waymo's staff are using the app to commute within the city.
More about the launch and events surrounding Waymo’s operations in San Francisco can be found in this VentureBeat article.