February keeps generative AI at the top of the headlines while bringing an answer to one of the key questions regarding the development of the whole internet: will Google consider AI-generated content equal to human-written content?
Beyond the future of internet communication, the month also saw the first steps toward regulating AI-based policies and automated agents, the implementation of reinforcement learning-based agents in computer gaming, and efforts to tackle climate change with AI models.
NASA partners with IBM to build AI-powered climate change models
NASA gathers a gargantuan amount of data from Earth-observing satellites – according to VentureBeat, the dataset has grown to over 70 petabytes. Until now, the organization has been building its own models to process the gathered information. From now on, the agency will use IBM technology to process its data and extract knowledge from it.
The collaboration is founded on the application of large language models (LLMs) to both language and non-language challenges. The former applies to building the single largest dataset of scientific publications about Earth and the climate, which can then be easily processed and searched. The latter concerns using LLMs to process the data gathered by satellites, bringing a better understanding of that data and easing the extraction of insights.
More about the collaboration can be found in this Press Release.
Google accepts AI-generated content – as long as it is of high quality
According to the official statement of the most popular search engine in the world, there is no need to worry about AI-generated content – as long as it is of high quality. AI-based tools enable individuals and organizations to produce content at a rate never seen before, generating thousands of words in a matter of minutes. With content being the lifeblood of the internet, the threat of flooding it with non-human works emerges.
Google compares this to the content explosion seen some time ago – the threat, according to the company, lies in low-quality content, spam, and search-over-optimized texts rather than in their source, be it a human or an AI-powered writer. As long as the content follows the guidelines, the search engine is not interested in the method used to deliver it.
The full statement can be found on Google’s blog.
AI models tackle Super Bowl preparations
In the shadows of training halls and prep rooms, coaches and analysts spend hours upon hours annotating video recordings of football games. The goal is to spot patterns and break the codes of opposing teams.
To make this tedious work more efficient, researchers from Brigham Young University delivered an algorithm that spots the players on the field, recognizes their names and positions, and labels them. This automates the analysis process and enables the coach to gather more information faster.
According to the researchers, the algorithm recognizes formations with up to 99.5% accuracy. More information about the work can be found on the Brigham Young University webpage.
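The researchers' actual pipeline is not published here, but the final step it describes – deciding which formation a set of detected player positions represents – can be sketched as a nearest-template match. Everything below is an illustrative assumption: the formation names, the template coordinates, and the five-player toy size are made up for the example, not taken from the BYU work.

```python
from itertools import permutations
import math

# Illustrative formation templates: normalized (x, y) player positions.
# A real system would use full 11-player templates learned from annotated film.
FORMATIONS = {
    "trips_right": [(0.0, 0.0), (0.2, 0.1), (0.8, 0.3), (0.9, 0.3), (1.0, 0.3)],
    "spread":      [(0.0, 0.0), (0.1, 0.3), (0.4, 0.3), (0.7, 0.3), (1.0, 0.3)],
}

def matching_cost(detected, template):
    """Best-case total distance between detected players and template slots.

    Brute-force over assignments is fine for this toy size; a real system
    would use an assignment solver (e.g. the Hungarian algorithm) for 11 players.
    """
    return min(
        sum(math.dist(d, t) for d, t in zip(perm, template))
        for perm in permutations(detected)
    )

def classify_formation(detected):
    """Return the name of the template the detected positions fit best."""
    return min(FORMATIONS, key=lambda name: matching_cost(detected, FORMATIONS[name]))
```

Feeding it five positions close to the "spread" template returns "spread"; the per-formation costs could also be kept as a confidence signal for the annotating coach.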
PwC names 11 ChatGPT and generative AI security concerns in 2023
The hype around generative AI is not slowing down, with new services and use cases emerging on a daily basis. ChatGPT reached one million users in under a week of launch, a record-setting pace of adoption.
Yet the spread of generative AI comes with multiple security risks, named by PwC experts during interviews conducted by VentureBeat. These include:
- Malicious AI usage
- The need to protect AI training and output
- Setting generative AI usage policies
- Modernizing security auditing
- Greater focus on data hygiene and assessing bias
- Keeping up with expanding risks and mastering the basics
- Creating new jobs and responsibilities
- Leveraging AI to optimize cyber investments
- Enhancing threat intelligence
- Threat prevention and managing compliance risk
- Implementing a digital trust strategy
More details can be found on the VentureBeat website.
Colorado takes the first step toward regulating AI
The State of Colorado has delivered a first draft of the Algorithm and Predictive Model Governance regulation, which aims to provide a legal framework for the use of AI-based solutions in the insurance business – at least for now. The draft imposes requirements on Colorado-licensed life insurance companies and was issued to prevent discrimination based on gender, race, or other factors in issuing insurance.
More details can be found in the bill.
An amateur player has beaten AI at Go
Go is a traditional Chinese board game, comparable to checkers or chess, where players aim to surround more territory than their opponent using stones of different colors. In a famous 2016 match between AlphaGo and Lee Sedol, one of the world's top Go players, the artificial intelligence agent triumphed, beating the human champion in four out of five games. The match is comparable to the famous Kasparov vs. Deep Blue match, where a computer beat a reigning chess world champion for the first time.
Mr. Sedol was so shocked by the way the computer played, and by the mere fact that a machine could show such finesse in the game, that he retired from playing, calling AI “an entity that cannot be defeated” in contrast to human players who, no matter how brilliant, eventually die.
Amateur player Kellin Pelrine proved Lee Sedol wrong. In a match with the supposedly invincible AI he managed to win 14 out of 15 games without direct support from a computer. The key was a strategy forged by gathering knowledge about his adversary.
The California-based company FAR AI delivered software that probes an AI system in search of weaknesses and hidden mistakes. After over a million games, the system spotted a vulnerability – by distracting the AI with stones placed in a single corner, the human player can encircle the AI's stones and win the game.
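FAR AI's adversarial-policy training is far more sophisticated, but its core loop – play huge numbers of games against a frozen opponent and keep the lines of play that beat it – can be sketched on a toy game. Everything here is an illustrative assumption: the game is single-pile Nim rather than Go, and `flawed_ai` is a deliberately weak stand-in opponent, not FAR AI's actual target system.

```python
import random

def flawed_ai(pile):
    """A fixed 'AI' opponent for single-pile Nim (take 1-3; taking the
    last stone wins). It grabs the pile when it can win outright, but
    otherwise always takes 1 - the blind spot the probe can discover."""
    return pile if pile <= 3 else 1

def probe(start_pile=13, games=100_000, seed=0):
    """Play random games against the fixed opponent and tally which
    first moves ended up in wins - a crude search for exploits."""
    rng = random.Random(seed)
    wins = {}
    for _ in range(games):
        pile, first = start_pile, None
        while True:
            take = rng.randint(1, min(3, pile))   # our random probing move
            first = take if first is None else first
            pile -= take
            if pile == 0:                         # we took the last stone: win
                wins[first] = wins.get(first, 0) + 1
                break
            pile -= flawed_ai(pile)               # opponent replies
            if pile == 0:                         # opponent took the last stone
                break
    return wins
```

The returned tally points at which openings most often lead to wins against this particular opponent; a real adversarial search would then refine those lines instead of sampling uniformly.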
More about the match and the preparations that preceded it can be found in the Financial Times.
Reinforcement learning-based AI debuts in PlayStation's Gran Turismo
An artificial intelligence trained to control opponents in the Gran Turismo 7 game has been implemented by Sony AI and Polyphony Digital to enrich the gaming experience and provide players with a more demanding environment.
The game is renowned for its focus on faithfully depicting the racing experience, with realistic physics and the impact of environmental factors on driving conditions.
Some form of artificial intelligence has been present in computer gaming since its very beginning. In racing games, it has mostly been concerned with making the opponent's car follow an optimal trajectory and approach corners and turns at an appropriate speed.
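That classic scripted approach – a pre-computed racing line plus a per-corner speed limit – can be sketched with a curvature-based speed target. This is a generic illustration, not Sony's or Polyphony's method, and the grip coefficient, gravity, and top-speed constants are made-up assumptions.

```python
import math

MU, G, V_MAX = 1.2, 9.81, 80.0   # assumed grip coefficient, gravity (m/s^2), top speed (m/s)

def curvature(p0, p1, p2):
    """Curvature (1/radius) of the circle through three consecutive waypoints."""
    a, b, c = math.dist(p1, p2), math.dist(p0, p2), math.dist(p0, p1)
    # Twice the triangle area, via the cross product of the two edge vectors.
    area2 = abs((p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0]))
    return 0.0 if area2 == 0 else (2 * area2) / (a * b * c)

def target_speed(p0, p1, p2):
    """Fastest speed the car can carry through the corner at waypoint p1,
    from the lateral-grip limit v = sqrt(mu * g / curvature)."""
    k = curvature(p0, p1, p2)
    return V_MAX if k == 0 else min(V_MAX, math.sqrt(MU * G / k))
```

On a straight (zero curvature) the scripted car holds top speed; the tighter the corner, the lower the target it slows to – which is exactly the behavior the learned Gran Turismo agent is meant to surpass.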
More information can be found in this Venture Beat release.