This month brings fresh news about changes and new endeavors in the world of Artificial Intelligence. Unsurprisingly, April has been yet another month dominated by the generative AI buzz.
Yet this issue also covers the potential harm done by technology and the steps being taken today to mitigate it, including things as surprising as copyrighting one’s own face.
New tools to verify the knowledge of AI tools
Generative AI, like ChatGPT, is a revolutionary technology in business and society. With slight tweaking, the system can leverage knowledge not only from the whole internet but also from a company’s internal databases, making it extremely effective for knowledge harvesting and management.
On the other hand, such a system may digest too much information, leaving the organization vulnerable to information breaches or at risk of losing control over access to sensitive information.
To mitigate this risk, a group of researchers from the University of Surrey has developed a tool that can verify how much information an AI-based system has harvested from a company’s database.
The first case of copyrighting an AI likeness
Metaphysic is a company that delivers authorized “deep fakes” for media and entertainment usage. The company’s CEO, Tom Graham, has created an AI-powered avatar of himself and submitted it for copyright registration in the U.S. Copyright Office.
According to his statement, the copyrighted AI version of his image would make it much easier to pursue people who produce AI deep fakes of him within the current legal framework. It also highlights the rising need to protect people’s privacy and mitigate the risks of malicious use of deep fakes.
More about this news can be found in the VentureBeat coverage.
Biden says the US needs to address potential AI risks
The news above can be easily linked with information about Joe Biden’s goal to address the risks associated with AI system development. The President has said that, “AI can help deal with some very difficult challenges, like disease and climate change, but we also have to address the potential risks to our society, to our economy, to our national security.”
One way to tackle this issue is to create a legal framework that allows new tools to be delivered with respect for people’s privacy while also serving the general society’s needs and concerns.
More on the topic can be found in a transcript of the Meeting with the President’s Council of Advisors on Science and Technology.
HuggingFace launches open-source version of ChatGPT
HuggingChat aims to be an open-source alternative to ChatGPT, offering a similar chat experience. The underlying model was developed by Open Assistant, a project organized by LAION, the nonprofit behind the dataset used to train Stable Diffusion.
According to Hugging Face CEO Clement Delangue, there is a need for open-source alternatives to AI tools, where transparency and inclusivity can be higher than in closed-source environments.
HuggingChat can be accessed on the Hugging Face website.
Microsoft releases Copilot for Viva
Microsoft has released Copilot for Viva, an employee engagement and experience platform. The tool is designed to provide employees with the information they need to deliver their work, set goals, and see their impact in the bigger picture of the organization, helping them stay engaged and motivated to deliver better results.
Copilot in Viva serves as a general assistant that boosts employee productivity. For example, it can summarize existing documents, prepare ideas for company intranet pages, and deliver reports on employee feedback gathered via Viva Glint.
More can be found in the announcement published on Yahoo Finance.
PwC plans to invest $1 billion in generative AI capabilities
The company has announced plans to invest up to $1 billion over the next three years in its generative AI capabilities, mostly to augment and rearrange its business workflows, particularly its tax, audit, and consulting services. The company will do so in partnership with Microsoft, harnessing the power of GPT-4 models as well as the Azure platform.
More information can be found in the company’s announcement.
Apparently, ChatGPT can be more empathetic than a human being
Research conducted by a team of academics and published in the Journal of the American Medical Association (JAMA) has shown that an empathic response does not require empathy at all. The study was performed on Reddit, where a set of questions was gathered and then answered both by a trained physician and by ChatGPT (with the latter’s answers reviewed by a professional to weed out misinformation). Afterwards, evaluators were asked to rate the empathy and quality of each response.
According to the results, the evaluators preferred the chatbot’s answers, rating them as empathetic up to 9.8 times more often than the physicians’ responses.
More details can be found in the research paper.