Tooploox CS and AI news 25

Scope: AI, Artificial Intelligence

Date: January 10, 2023 · Author: Konrad Budek · 3 min read

The last month of 2022 was undoubtedly dominated by the effects of releasing ChatGPT – arguably the most widely used conversational AI to date. Yet AI-powered solutions have also shown their ability to negotiate in another form. 

Apart from that, it has become obvious that despite the increasing conversational and diplomatic skills of AI-powered solutions, being overly cheerful is not a good strategy for non-human speakers. 

ChatGPT launch – CEO admits risks

Technically, ChatGPT's launch was November news, yet December was the month of ChatGPT. The internet exploded with experiments, use cases, tests, and, sometimes, interesting glitches. 

The interest was so intense that OpenAI's CEO Sam Altman shared his remarks on the system's limitations on his Twitter account.

“ChatGPT is incredibly limited but good enough at some things to create a misleading impression of greatness. It’s a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness,” he wrote. 

The text ChatGPT delivers has proven incredibly convincing – consistent, strikingly human-like, and often accurate in its answers. The tool has been tested by various media, from tech-focused outlets to mainstream journals and even Fox Weather, where the team asked ChatGPT about the weather yet to come. 

Attitudes toward this technology vary from amazement to terror, with all shades of gray in between. OpenAI’s CEO delivers a highly needed and significant voice in that discussion.

Artists can opt out of the next Stable Diffusion training

The issue of copyright regarding visual data used to train AI image-generation models remains a hot discussion. The problem is especially visible when it comes to training generative models, where copyrighted images are not only used to train the neural network but are sometimes reproduced by the network in its outputs – which, in fact, constitutes a copyright violation. 

To deliver a fairer approach to training and to build a healthier relationship between artists and the AI development community, artists can now opt out of being included in the datasets used to train Stable Diffusion models. The process will be facilitated by an organization called Stability AI. You can complete the opt-out process using the https://haveibeentrained.com/ website. 

AI for board game Diplomacy

Diplomacy is a cult-classic board game that simulates the negotiations and tensions between European states just before World War I. The game has no dice rolling or randomness – all outcomes are determined by the rules, and the key driver of the game is negotiation. Players can make truces, scheme, form alliances, and betray each other at will. 

What's interesting is that there is no rule-based way to enforce diplomatic agreements on military movements on the board. Thus, trust is the only currency players can earn or lose through their actions. As a game of nerves, poker faces, and bluffs, Diplomacy is far removed from chess and requires strong communication skills in addition to computing power. 

Using this battlefield, DeepMind has tested the performance of communicating agents (AI-controlled players) against non-communicating ones. They've also tested various approaches to truces (from everlasting loyalty to guaranteed betrayal) and to conflicts (from the forgetful angel of mercy to a no-mercy grudge bearer). 

The results show that cooperating agents overwhelmingly outperform non-communicating ones. Full information about the Diplomacy-playing agents can be found in a research paper published in Nature Communications.

Cheerful chatbots don’t necessarily improve customer service

One of the key traits of AI-powered customer service agents is that they have no emotions. There is no way for them to feel hurt or lose their tempers. They are tireless and possess endless patience toward clients. 

The question is – should an emotionless automaton appear cheerful in any way? According to research published by the Georgia Institute of Technology, this is simply undesired. While a human employee who says he or she is “delighted” to help is seen positively, the same phrase has no measurable effect when said by a machine. Things got even worse when customers were purely transaction-oriented, wanting only to complete their purchase and be done with it. 

The full text can be found on the Georgia Tech website.
