Sarcastic machines that don’t get analogies and a bit of magic

Tooploox CS and AI News #15

May came bearing news of interesting AI research on problems like predicting breakthrough research and detecting sarcasm in social media. While armed with increasingly sophisticated language skills, machines remain in their infancy when it comes to parsing analogies.

This edition of Tooploox CS and AI news also comes with a little twist of magic – one shaped by one of the most renowned sci-fi minds that ever existed.

It’s a kind of magic

The complicated problem of human-tech relations has dazzled and haunted even the most renowned sci-fi writers. Humans using technology without fully comprehending it was one of the core concepts behind Isaac Asimov’s “Foundation”, and Arthur C. Clarke summarized the problem explicitly in one of his famous laws:

Any sufficiently advanced technology is indistinguishable from magic.

Behind the inspiring statement and the brilliant works of literature lies the real problem of understanding technology and interpreting AI-enabled outcomes. Closer to the here and now: according to a FICO study, up to 65% of execs cannot explain how specific AI decisions and predictions are made. Moreover, only one in five (20%) monitor the models they use for fairness and ethics.

Thus, decisions are being made by robotic minds that human minds do not fully understand. In our blog post about the five main challenges with datasets and AI ethics, we describe ways to prevent the ethics of AI-based solutions from breaking bad.

Spotting the pearl in the mud

Some technologies are historic game-changers. The steam engine and gunpowder quickly jump to mind. With thousands of research papers being published every year, spotting the breakthroughs gets increasingly troublesome. 

Researchers around the world currently use citations as a measure of the impact of a particular paper. While this might be the best approach we have today, inconsistency and ease of manipulation are among the issues that make it far from perfect.  

To tackle the challenge of filtering the most valuable research out of the flood of information, researchers at MIT devised DELPHI (Dynamic Early-warning by Learning to Predict High Impact), a framework that leverages AI to spot potentially impactful or breakthrough research.

The system was trained on a database of research papers in a time-structured manner and then tested on biotechnology research published between 1980 and 2014. It proved highly effective, spotting 19 of 20 seminal biotechnologies described in papers from that period.

More about the research can be found in the published paper.

What mud?

A good analogy can be a portal to knowledge for a student, with great examples found as early as in ancient philosophers’ works. Plato’s famous cave is a great example of how a simple analogy can shed great light on the complicated process of enlightenment and education.

Plato’s cave is a short story of slaves chained in a cave, able to see only the shadows of objects cast on the wall they are forced to watch. One of the slaves manages to escape and is briefly blinded by the sun – an illustration of how challenging the process of gaining knowledge can be.

Having seen the real world and real objects, he desires to return to the cave to tell his friends about it. But they are unwilling to break their chains and choose to watch shadows for the rest of their lives. What transpires between the enlightened escapee and his cave-dwelling mates is an allegory of being a teacher.

Making the unknown known and reinforcing the knowledge with familiar examples is common in the human process of understanding. But machines are clearly not humans and understanding an analogy can be a Gordian knot. Or a hornets’ nest. Or a can of worms. 

A paper by researchers from Cardiff University in the United Kingdom examines machines’ ability to spot and understand analogies. The results show that machines are somewhat up to the task, yet when a word pairing is more uncommon, they can find it incomprehensible.

And why would one expect mud in the muddy waters of the Internet?

While machines struggle to understand analogies that are a cinch for a human to grasp, an even greater challenge looms on the horizon.

The problem with sarcasm, whether in social media or wielded to such great effect in memes, is that it is not easy even for humans to recognize, and it is all but incomprehensible for machines. The problem gets even worse when the communication happens online.

In the real world, we can change our tone of voice, roll our eyes, raise our eyebrows or rely on countless other physical cues to reinforce our intentions or state of mind. These non-verbal means of communication are unavailable on the Internet, and thus sarcasm should be less present in the digital world.

But it is not, as a study published in the Journal of Language and Social Psychology shows: researchers found that sarcasm is not only present online, but used even more frequently than in the non-digital world.

Spotting a sarcastic-positive review among earnestly positive ones poses a great challenge for both users and monitoring software, with irony proving a major factor limiting the accuracy of sentiment analysis systems.
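
As a toy illustration of why this happens (the lexicon and the reviews below are invented for this example, not taken from the research), a naive lexicon-based sentiment scorer assigns a sarcastic complaint the same positive score as a sincere compliment:

```python
# Toy lexicon-based sentiment scoring: count positive words minus
# negative words. Sarcasm reuses positive vocabulary with negative
# intent, so the naive score comes out wrong.
POSITIVE = {"great", "love", "wonderful", "perfect"}
NEGATIVE = {"bad", "hate", "terrible", "awful"}

def sentiment(text: str) -> int:
    words = text.lower().replace(".", " ").replace(",", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

sincere = "I love this phone, the camera is great."
sarcastic = "Oh great, the screen cracked on day one. I just love paying for repairs."

print(sentiment(sincere))    # 2 – correctly positive
print(sentiment(sarcastic))  # 2 – scored positive despite the obvious complaint
```

Both reviews score identically because the scorer sees only the words, not the intent behind them – which is exactly the gap sarcasm-detection models try to close.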

The problem has been tackled by a group of researchers from the University of Florida, who designed a model that analyzed examples of sarcastic online statements and uncovered patterns in their construction. These patterns were then used to determine whether a statement can be considered sarcastic.

The details can be found in the research published in Entropy.
