May has seen the introduction of new AI-powered features in popular products such as LinkedIn, TikTok, and Gmail, as well as a major deal between The New York Times and Amazon.
The month has also produced interesting new models and research papers worth reading, along with new problems tackled by AI.
1 May 2025 Wikipedia uses generative AI
The Wikimedia Foundation will add generative AI tools to help Wikipedia editors. AI will handle research, translation, and volunteer onboarding, freeing humans up for content review. The plan keeps human control, open models, and transparency. Wikipedia already uses AI for vandalism detection and translation, but this is its first editor-facing rollout. The foundation is also creating a structured Wikipedia dataset for machine learning to curb bot scraping, which has raised bandwidth use by half.
More can be found in The Verge.
3 May 2025 Google introduces Gemini AI for children
Google will let children under Family Link use Gemini AI on Android. Kids can ask for homework help or stories. Google says their data will not train AI but warns Gemini can err and surface content parents dislike. Parents can disable access and get first-use alerts. Google urges parents to explain that Gemini is not human and that kids should keep personal details private.
More can be found in The Verge.
5 May 2025 Pinterest taps into visual search
Pinterest is adding a visual language model that tags fashion Pins with style terms and “vibe” words. Users can click tags to search and adjust details such as color, fabric, or occasion. The tool launches today for women’s fashion in the US, Canada, and the UK, with more categories and regions planned later.
More can be found in The Verge.
6 May 2025 OpenAI restructures to keep its non-profit soul
OpenAI’s CEO, Sam Altman, says dropping the capped-profit model lets the new Public Benefit Corporation issue ordinary stock and seek “hundreds of billions of dollars” to scale AGI worldwide. Dividends will fund nonprofit programs in health, education, and science. Altman frames three aims: raise massive capital, grow the nonprofit into a global social-good engine, and ship safe, “democratic” AGI with continued open-sourcing and red-team safety work.
More can be found in Artificial Intelligence News.
7 May 2025 LinkedIn launches Generative AI-powered job hunting tool
LinkedIn has added a generative-AI search bar that returns job listings from plain-language prompts like “entry-level brand manager in fashion” or “analyst roles focused on sustainability.” The tool bypasses manual filters, matching openings to a user’s skills and interests. It is live now for English-language Premium subscribers and will reach all members who use Global English by week’s end.
More can be found in The Verge.
8 May 2025 Alibaba delivers “ZeroSearch” approach, cutting LLM training costs
Alibaba researchers propose “ZeroSearch,” a reinforcement-learning framework that lets large language models learn search skills in a simulated environment instead of querying real search engines. By removing hundreds of thousands of API calls, the method cuts training cost, improves scalability, and gives developers more control over how models retrieve information.
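The core idea can be illustrated with a toy sketch. Everything below is a hypothetical stand-in: in the actual ZeroSearch setup, a fine-tuned LLM plays the role of the simulated search engine, and a curriculum gradually degrades document quality to harden the policy; here a canned corpus and a noise parameter stand in for both.

```python
import random

# Toy knowledge base standing in for a simulated search engine.
CORPUS = {
    "capital of france": "Paris is the capital of France.",
    "tallest mountain": "Mount Everest is the tallest mountain.",
}

def simulated_search(query: str, noise: float, rng: random.Random) -> str:
    """Return a relevant document, or a distractor with probability `noise`.
    A curriculum raises `noise` over training to toughen the policy."""
    if rng.random() < noise:
        return "Lorem ipsum, an irrelevant distractor document."
    return CORPUS.get(query.lower(), "No result.")

def answer(query: str, document: str) -> str:
    """Stand-in policy: extract an answer if the document looks relevant."""
    return document.split(" is ")[0] if " is " in document else "unknown"

def train(episodes: int = 100) -> float:
    """Reward the policy for correct answers; no real API calls are made."""
    rng = random.Random(0)
    rewards = []
    for step in range(episodes):
        noise = step / episodes * 0.5      # curriculum: degrade result quality
        query = "capital of france"
        doc = simulated_search(query, noise, rng)
        reward = 1.0 if answer(query, doc) == "Paris" else 0.0
        rewards.append(reward)             # a real setup would update the policy here
    return sum(rewards) / len(rewards)
```

The point of the design is visible even in miniature: because the "search engine" is local and controllable, training never issues a paid API request, and the difficulty of retrieved documents becomes a tunable knob.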
More can be found in VentureBeat.
12 May 2025 ChatGPT’s Deep Research now supports PDF export
OpenAI’s Deep Research tool now lets users download their reports as fully formatted PDFs that keep tables, images, and linked citations. The option appears under the share menu for both new and existing reports. It’s live for Plus, Team, and Pro plans, while Enterprise and Education tiers will get it “soon.” The upgrade underscores OpenAI’s push to attract enterprise research customers.
More can be found in VentureBeat.
13 May 2025 TikTok introduces AI Alive
TikTok’s AI Alive, found in the Story Camera, lets a user choose one photo and supply a text prompt; the system then renders a seconds-long video with movement and effects in a few minutes, though some prompts still miss the mark. TikTok’s moderation checks the image, prompt, and result before preview and again before posting, tags the clip as AI-generated, and embeds C2PA metadata. The feature is rolling out to give users access to video creation without the need for manual editing.
More can be found in The Verge.
14 May 2025 DeepMind introduces AlphaEvolve, an AI Coding agent
Google DeepMind’s AlphaEvolve links Gemini Flash for breadth and Gemini Pro for depth to automated evaluators inside an evolutionary loop that writes, tests, and refines code. Candidates are scored against an internal set of metrics, and the best-performing programs seed the next generation of edits.
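The propose-evaluate-select loop can be sketched in miniature. This is a toy caricature, not AlphaEvolve itself: a fixed token list stands in for the Gemini models that propose code edits, and a tiny test suite stands in for the automated evaluators.

```python
import random

def evaluator(program: str) -> float:
    """Automated evaluator: score a candidate program on held-out tests.
    The toy task is to write a body for f(x) that doubles its input."""
    try:
        f = eval("lambda x: " + program)
        return sum(1.0 for x in range(5) if f(x) == 2 * x) / 5
    except Exception:
        return 0.0

def mutate(best: str, rng: random.Random) -> str:
    """Stand-in proposer: a real system would have an LLM edit `best`;
    here we just sample from a fixed pool of candidate bodies."""
    pool = ["x + x", "x * 2", "x + 1", "x * 3", "x - 1"]
    return rng.choice(pool)

def evolve(generations: int = 50, seed: int = 0) -> str:
    rng = random.Random(seed)
    best = "x + 1"                         # weak starting program
    for _ in range(generations):
        candidate = mutate(best, rng)
        if evaluator(candidate) > evaluator(best):
            best = candidate               # keep the fitter program
    return best
```

Even this skeleton shows why the evaluator matters as much as the proposer: the loop can only climb toward solutions the metrics can measure.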
More information can be found on DeepMind’s blog.
14 May 2025 AI supports early spotting of dyslexia and dysgraphia
University at Buffalo scientists show that AI handwriting analysis can screen children for dyslexia and dysgraphia. Their SN Computer Science study describes training models on pen-and-tablet samples to spot spelling mistakes, poor letter shapes, and layout cues tied to the disorders.
More can be found in ScienceDaily.
14 May 2025 MIT: Vision-language models struggle with negations
MIT researchers show vision-language models often ignore negation words such as “no” and “not,” causing them to mis-retrieve images, for example selecting scans that include and exclude the same findings. On tests with captions containing negations, model accuracy dropped to mere chance. The team built a new dataset pairing images with negated captions and, after retraining, saw gains in recall and multiple-choice accuracy, yet they warn that deeper fixes are needed before deploying these systems in tasks like triage or quality control.
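The failure mode is easy to reproduce with a deliberately crude retrieval stand-in. Real vision-language models use learned embeddings, not bag-of-words counts, but the toy below (with invented captions) shows the same mechanism the study points at: a single negation token barely moves a similarity score that is dominated by surface overlap.

```python
from collections import Counter
import math

def bow_vector(text: str) -> Counter:
    """Bag-of-words token counts, a caricature of an embedding
    that under-weights negation."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

query     = "a chest scan with no signs of pneumonia"
opposite  = "a chest scan with signs of pneumonia"   # opposite meaning
unrelated = "a knee x-ray after surgery"             # unrelated content

sim_opposite  = cosine(bow_vector(query), bow_vector(opposite))
sim_unrelated = cosine(bow_vector(query), bow_vector(unrelated))
# The caption with the OPPOSITE meaning scores far higher: the lone
# token "no" is one word among eight, so it barely shifts the score.
```

In a triage setting this is exactly the dangerous case: the retriever confidently returns the scan that contradicts the query.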
More can be found in ScienceDaily.
15 May 2025 OpenAI rolls out GPT-4.1
OpenAI is rolling out GPT-4.1 and GPT-4.1 mini in ChatGPT. Paid tiers – Plus, Pro, and Team – can now pick the full GPT-4.1 model. GPT-4.1 mini replaces GPT-4o mini as the default model for every account, including free users. Both new versions run faster, are tuned for coding and instruction following, and expand the context window to one million tokens, far beyond GPT-4o’s 128,000.
More can be found in The Verge.
16 May 2025 OpenAI launches Codex
OpenAI has launched a research preview of Codex, a cloud-based software-engineering agent powered by codex-1, a variant of the o3 model tuned for programming. Pro, Team, and Enterprise users can open a ChatGPT sidebar, describe a feature, bug fix or question, and click “Code” or “Ask.” Codex spins up an isolated sandbox preloaded with the repository, edits files, runs linters, tests and other commands, and iterates until tests pass, usually in 1–30 minutes.
More can be found on the OpenAI blog.
18 May 2025 China builds space supercomputer
China has placed the first 12 satellites of the “Three-Body Computing Constellation” into orbit, the first step in a plan for 2,800 AI satellites that process data in space instead of sending it to ground stations. Built by ADA Space with Zhejiang Lab, each satellite runs an 8-billion-parameter model at 744 TOPS; together they already reach about 5 POPS and link by laser at up to 100 Gbps, sharing 30 TB of storage.
More can be found in The Verge.
20 May 2025 Google releases preview of Gemma 3n model
Google has released a preview of Gemma 3n, an open on-device model built with Qualcomm, MediaTek, and Samsung. The 5B- and 8B-parameter versions use Per-Layer Embeddings to cut RAM to about 2–3 GB and answer roughly 1.5 times faster than Gemma 3 4B. Gemma 3n processes text, audio, images and video, runs offline, supports more languages and lets developers drop to a nested 2B submodel for lower latency. The same architecture will power the next Gemini Nano for Android and Chrome later this year, while the preview is available to developers now.
More can be found on the Google blog.
21 May 2025 Volvo partners with Google for Gemini in their cars
Volvo is extending its Google partnership to embed the Gemini AI assistant in all models running Android Automotive. Drivers will soon be able to make conversational requests – for translation, navigation, place finding, and even queries about the user manual – without fixed voice commands. Gemini arrives on Android Auto within the coming weeks, while cars with Google built-in receive it later this year.
More can be found in The Verge.
22 May 2025 Amazon tests AI voiceovers
Amazon is testing “Hear the highlights,” an app button that plays a short AI-generated chat between two voices about a product’s specs, user-review themes and web facts. The feature appears on a few U.S. mobile listings—such as Shokz headphones and the Ninja blender—and opens with a note that the clip is AI-made. Amazon says wider rollout to more items and customers will follow in the months ahead.
More can be found in The Verge.
22 May 2025 AI may consume nearly half of datacenter power by year’s end
According to various estimates, AI workloads already draw about 20 percent of global datacenter electricity and could approach 49 percent by the end of 2025. Using chip power ratings from Nvidia, AMD, and others, it is projected that AI demand could hit 23 GW – roughly double the Netherlands’ total consumption – within the year, versus the 415 TWh the IEA attributes to all datacenter activity in 2024.
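The 49 percent figure follows from straightforward arithmetic, assuming the projected 23 GW of AI demand runs continuously for a full year:

```python
# Back-of-the-envelope check of the "approach 49 percent" claim.
HOURS_PER_YEAR = 8760                  # 24 * 365
ai_power_gw = 23                       # projected AI demand
datacenter_total_twh_2024 = 415        # IEA figure for all datacenter activity

ai_energy_twh = ai_power_gw * HOURS_PER_YEAR / 1000   # GWh -> TWh
share = ai_energy_twh / datacenter_total_twh_2024
# ai_energy_twh comes to about 201.5 TWh, so share is roughly 0.49
```

The comparison mixes a projected power figure (GW) with last year’s energy total (TWh), so treating the 23 GW as a constant draw is the generous reading of the estimate.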
More can be found in The Guardian.
27 May 2025 Salesforce buys Informatica for $8 Billion
Salesforce will acquire Informatica for $8 billion in cash, paying $25 per share for stock it does not already own. Informatica, founded in 1993 to supply ETL software and later expanded into data quality, MDM, security, and the cloud-based Intelligent Data Management Cloud, has recently added generative-AI features. Salesforce plans to fold these data-management tools into its AI and agent platforms.
More can be found on Google Developers blog.
28 May 2025 Mistral introduces code embedding model
Mistral has introduced Codestral Embed, a code-focused embedding model priced at $0.15 per million tokens. On benchmarks such as SWE-Bench and GitHub’s Text2Code it retrieves code more accurately than OpenAI’s Text Embedding 3 Large, Cohere Embed v4.0, and Voyage Code 3. The model outputs variable-size embeddings, letting users trade dimension for storage cost while still leading rivals at 256-dim int8. Mistral targets four tasks: retrieval-augmented generation, semantic code search, similarity search, and code analytics.
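Variable-size embeddings are typically produced by ordering dimensions so that any prefix is itself a usable embedding (Matryoshka-style training); callers keep a prefix and optionally quantize it. The sketch below is a generic illustration of that trade, not Mistral’s actual implementation:

```python
import math
import random

def truncate_and_quantize(embedding, dims):
    """Keep the first `dims` components (Matryoshka-style ordering puts
    the most informative ones first), re-normalize, then map each value
    into int8 range, cutting storage 4x versus float32."""
    small = embedding[:dims]
    norm = math.sqrt(sum(v * v for v in small))
    return [max(-127, min(127, round(v / norm * 127))) for v in small]

rng = random.Random(0)
full = [rng.gauss(0, 1) for _ in range(1024)]       # stand-in full embedding
compact = truncate_and_quantize(full, 256)          # 256 bytes as int8 vs 4 KB float32
```

This is the regime the benchmark claim refers to: even at 256 dimensions stored as int8, a well-trained model can retain most of its retrieval quality while the index shrinks by an order of magnitude.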
More can be found in VentureBeat.
29 May 2025 New York Times strikes deal with Amazon on generative AI
Amazon and The New York Times have signed a multi-year deal that lets Amazon use articles from The Times, The Athletic, and NYT Cooking for Alexa summaries and for training its AI models. Financial terms were not disclosed. The agreement comes after the Times’ 2023 copyright suit against Microsoft and OpenAI and amid other publishers’ lawsuits and licensing deals.
More can be found in The Verge.
30 May 2025 Gmail delivers smart summaries automatically
Gmail in Workspace will now auto-generate Gemini summaries for email threads with multiple replies. The summaries appear at the top of English messages on mobile and update as replies arrive. Rollout may take up to two weeks. Users can still request a summary manually or disable all AI features by turning off “Smart features.”
More can be found in The Verge.