GenAI recap – 5 takeaways from the Generative AI conference

Date: March 16, 2023 Author: Konrad Budek 6 min read

Leverage generative AI tools during the creation process and leave the fine-tuning to human experts who take the generated output to the next level – according to GenAI experts, this vision applies not only to marketing and software development but to nearly all industries and businesses.

Generative Artificial Intelligence is undoubtedly the main theme of 2023. The last quarter of 2022 and the first months of 2023 brought an enormous buzz around generative AI and its possible business applications, with ChatGPT and image generation apps like Dall-e dominating the headlines.

According to Google Trends, interest in these tools is rising. They currently outperform interest in the latest movies (like Avatar: The Way of Water) and popular celebrities (such as Kim Kardashian).

The buzz around generative AI was boosted by the availability of tools like Midjourney and Dall-e, which have made it easy for anyone to create high-quality, realistic images with minimal effort and expense. Both tools were seen as time-and-effort savers, and both raised concerns regarding copyright.

These challenges were addressed during the GenAI conference, arguably the first event to bring the generative AI ecosystem and business together.

GenAI by Jasper – key takeaways

Over 1200 people attended the GenAI conference in San Francisco. It was a place for business and generative AI experts and enthusiasts to meet and discuss use cases, new ideas, and ways to utilize the new tools for the benefit of their companies. 

The conference was opened by three speakers:

  • Zach King – an internet celebrity, known for creating short videos of digital illusions, who told his story using AI-generated images.
  • Harry Mack – a beatboxer and improvisational artist who has been featured on The Tonight Show Starring Jimmy Fallon, America’s Got Talent, and The Ellen DeGeneres Show, and who delivered a hip-hop interpretation of generative AI output. 
  • Aleah Bradshaw – educator and slam poet, currently working with Youth Speaks, who posed questions about the role of art in “being human.” 

These speeches were later followed by more business and company-oriented presentations that included interviews with Nat Friedman, former CEO of GitHub, and Peter Welinder, VP of Product and Partnerships at OpenAI. These panels provided interesting insight into the applications of generative AI in today’s companies. 

Key takeaway 1: the future of marketing (and beyond)

Conference experts stated that generative AI models can produce realistic visuals and content that marketing teams can use to save time and effort, allowing companies to focus more on creativity, research, and strategy.

Following the use of tools like Jasper.ai in marketing, the conference highlighted the role of Copilot and comparable coding-support tools in software development. Using these, software engineers can focus on solving complex problems instead of getting bogged down writing endless, often repetitive, lines of code.

The future, in general, is about shifting from repetitive and uncreative tasks toward refined ones that require distinctly human abilities, such as synthetic thinking and the creation of genuinely new content that is not necessarily based on what already exists. All AI text and image generators work from pre-existing content, so when the goal is to create something new (or something underrepresented in the data), an AI art generator or AI text generator can prove insufficient.

Key takeaway 2: the human touch is crucial for staying afloat

Despite Artificial Intelligence (AI) thought leaders highlighting the role of augmenting the workforce rather than replacing it with AI and automated systems, the concern about job losses remains.

According to a report from the World Economic Forum, up to one-third of all jobs may be at risk of automation during the upcoming decade. On the other hand, the development of technology is expected to create several million more jobs than it will displace. 

The process is comparable to the introduction of ATMs. At first glance, a machine that lets customers deposit or withdraw cash at any moment, works 24/7, and never shows any sign of tiredness would be expected to cause massive tech-induced unemployment among bank staff. But that was never the case – in fact, the demand for bank tellers increased because ATMs reduced the overall cost of running a branch.

According to the experts who spoke at the conference, the same applies to professionals working in fields awaiting automation – the human touch will always be necessary, and professional skill will be what transforms the mediocre into the outstanding.

Key takeaway 3: models will get both larger and smaller

AI-based systems can be power-hungry at levels unseen before. According to estimates by Tom Goldstein, associate professor at the University of Maryland, running ChatGPT costs about $100k per day, or roughly $3 million each month – more than many companies spend on their entire operations, let alone on a single solution. Much of this goes to electricity bills, and with a large share of the world’s power still produced from fossil fuels, training and running AI models contributes significantly to climate change. As such, the cost is not only monetary but also environmental.

On the other side of the spectrum, there are narrow, focused models deployed on local machines or connected devices. 

Both types of models will continue to be delivered, and the two technologies will eventually follow different paths. Narrow models will find their way toward precise use cases, facilitating cost-effectiveness in particular applications. For example, a model of this kind can support depth estimation in a camera while running on a chipset comparable to a Raspberry Pi.
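
For illustration only, the minimal sketch below shows how a narrow model of this kind might run on a Raspberry Pi-class device. It assumes a hypothetical pre-exported ONNX file named depth_model.onnx and the onnxruntime package; it is not a specific product discussed at the conference.

```python
# Minimal sketch: running a small, narrow model locally on an edge device.
# "depth_model.onnx" is a hypothetical pre-exported depth-estimation model.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("depth_model.onnx")  # hypothetical model file
input_name = session.get_inputs()[0].name

# A dummy 256x256 RGB frame standing in for a real camera capture.
frame = np.random.rand(1, 3, 256, 256).astype(np.float32)

# Inference happens entirely on the device - no cloud, no internet required.
depth_map = session.run(None, {input_name: frame})[0]
print("Estimated depth map shape:", depth_map.shape)
```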

The second category – models that cost millions of dollars to run – will continue to power automated assistants and versatile general-purpose tools, which require a stable internet connection and huge computing power at their disposal in order to operate.

Key takeaway 4: models need to stop hallucinating 

Amid the optimistic buzz around ChatGPT’s use cases and performance, a few voices could be heard highlighting the fact that the model hallucinates from time to time – behavior users are not accustomed to seeing from computers.

What is model hallucination?

AI model hallucination is a phenomenon in which an AI model confidently delivers credible-sounding information to the user, even though that information is not justified by the model’s training data – for example, the model makes up facts it simply does not have. Due to the way the models are built, they also tend to stick to their responses even when corrected by humans.

The phenomenon is named after hallucination in the psychological sense – just as a hallucinating person sees things that do not exist in the real world, the model “sees” things that were never in the data it was trained on.

A hallucinating model can behave in ways ranging from funny to harmful, delivering complete nonsense or misleading users. ChatGPT has been caught red-handed by scientists making up entire scientific articles, complete with fabricated bibliographies.

This needs to be brought under control, especially if such models are to be applied in areas that significantly affect people’s lives.

Information retrieval will be a crucial element in building the models of the future, and the process of gaining more control over a model’s outputs has already begun.
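
As a rough illustration of the idea (not a method presented at the conference), the sketch below grounds a prompt in retrieved text before a model is asked to answer. The document snippets and the commented-out send_to_model call are hypothetical placeholders.

```python
# Minimal retrieval-grounding sketch: pick the most relevant document and
# prepend it to the prompt so the model answers from known text instead of
# hallucinating. Documents and send_to_model() are illustrative placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available Monday to Friday, 9am to 5pm CET.",
]
question = "How long do customers have to return a product?"

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([question])

# Select the document most similar to the question.
best_doc = documents[cosine_similarity(query_vector, doc_vectors).argmax()]

prompt = f"Answer using only this context:\n{best_doc}\n\nQuestion: {question}"
# send_to_model(prompt)  # hypothetical call to whichever model is in use
print(prompt)
```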

Key takeaway 5: Integration is everything

ChatGPT, Midjourney, and Dall-e have found popularity mostly because they are available in a convenient form: nothing more than a browser is needed to use them. For the more tech-savvy, it is easy to build a solution on top of an API and launch a product based on a large model.
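
As an example of what such API-based integration can look like, here is a minimal sketch that calls OpenAI’s chat completions endpoint with the requests library; the prompt and the OPENAI_API_KEY environment variable are just illustrative choices.

```python
# Minimal sketch of API-based integration: send a prompt to a hosted
# large language model and read back the generated text.
import os
import requests

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "user", "content": "Draft a tagline for an AI writing assistant."}
        ],
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```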

The experts highlighted the value of infusing workflows and procedures with AI support. The more flexible and versatile the models become, the greater the chance that some form of automation – small or large – will be applied to a given system.

This will be further supported by the emergence of open source models that can be run and accessed on a local machine, bringing better security and faster response times. Running locally also solves the problem of missing or slow internet connections, for example on cargo ships, remote drilling platforms, or industrial installations far from human settlements.
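
A minimal sketch of this local setup, assuming the Hugging Face transformers package and the small open-source GPT-2 model purely as an example (any locally runnable open model would do):

```python
# Minimal sketch: running a small open-source language model entirely on a
# local machine - no external API calls once the model has been downloaded.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # example open model
result = generator(
    "Maintenance checklist for an offshore drilling platform:",
    max_new_tokens=40,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```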

Summary

Generative AI is a game changer for nearly all businesses and industries. And this is only the beginning, with ChatGPT sparking interest and tools like Jasper.ai, the host of the conference, taking the first steps toward a future that touches basically everything.

“I enjoyed attending GenAI mainly due to the positive vibe around generative AI. I am a research scientist who has been working with generative models for a long time, and I am delighted to see how these kinds of solutions will change the world in the near future,” commented Maciej Zięba, ML Researcher at Tooploox and Associate Professor at Wroclaw University of Science and Technology, who attended the conference.

The conference took place in San Francisco, with attendees from all over the world. It was the first edition of the event, and the keen interest was a clear sign of high demand in the market.

Tooploox is one of the world’s AI leaders, with unparalleled experience in building AI-based solutions. Drawing on our experience in delivering generative AI, we support our clients in enriching their workflows with cutting-edge technologies.

If you wish to know more about the ways Generative AI can reinvent your business today, don’t hesitate to contact us now!