Generative AI is revolutionizing the world of work and media. But how do we deal with the flood of machine-generated content? And how can we ensure that AI is used responsibly? Armin Schroeder, Managing Director & Partner at Crossmedia Düsseldorf, argues for a careful examination of the new technology – both in terms of its potential and its limitations.
In recent years, we have seen remarkable developments in the field of artificial intelligence (AI), especially in terms of generative AI. Generative AI models like GPT-3 (Generative Pre-trained Transformer 3) have revolutionized the way we can generate text, images, and even music.
This text is not mine. It is from ChatGPT. These are the first sentences of an article the program spat out in response to the prompt “Write an essay about developments in generative AI.” Each of these sentences is true. The following paragraphs of the 500-word article are spot on, too. All essential aspects of the topic are dealt with correctly.
What would take a human several hours to write, ChatGPT has produced within a few seconds – and done so with astonishing quality in terms of content and language. The software is the most important protagonist of a new generation of super-intelligent learning machines. And it is about to change our private and working lives from the ground up.
A Star is Born
We know the day that changed our working world forever: November 30, 2022. On this day, the Californian company OpenAI made its new AI freely available on the internet. Five days later, more than one million users had already registered worldwide. At the beginning of February, two months later, 100 million people worldwide were already using the program. No technology has ever spread faster.
The day ChatGPT came into the world is a watershed in technology history – comparable only to January 9, 2007, when Steve Jobs, wearing a black turtleneck sweater, unveiled the first iPhone on a stage at the Moscone Center in San Francisco. With one difference: this new AI generation is transforming our daily lives even more strongly and quickly than the smartphone revolution 16 years ago.
That’s because ChatGPT, Google’s rival product Bard and other so-called generative AI programs are entering a new dimension. They relate to earlier operational AI generations in much the same way that early human cave paintings relate to Michelangelo’s frescoes in the Sistine Chapel.
It is not without reason that scientists speak of the beginning of the “Generative Age”. These AI programs are creative. Like their operational predecessors, they process immense amounts of data – but no longer with the sole purpose of optimizing or automating processes. Generative AI creates entirely new content based on machine learning: in the form of programming code, images or texts. Operational AI created tools to simplify work. Generative AI has the potential to take over this work itself.
Generative artificial intelligence has long been part of our everyday lives. In many cases, its products are not even recognized as such. Brands such as Coca-Cola, Beck’s and Prada have long had AIs generate images for advertising campaigns. The fast-food chain Burger King even launched the “Cheeseburger Nugget” recently – a dish created solely by the imagination of the image generator Midjourney. Universities and schools are desperately looking for ways to track down essays and term papers written by ChatGPT. And people without any programming skills are using AI to create complex software.
AI is permeating all fields and industries. But nowhere are these changes raising more questions than in the media.
Machine-generated images, texts and videos are already widely available. In the near future, however, they might no longer be the exception, but the rule. Experts predict that as early as 2026, up to 90 percent of all online content will no longer be created by humans, but by machines. Not only does this open up fascinating prospects; it harbors risks, too.
On the one hand, artificial intelligence can make media better, more efficient and more relevant. Newsrooms have already been using copywriting programs for years to create standardized formats such as simple news stories or sports reports. AI helps journalists analyze large amounts of data. Without this support, for example, major revelations such as the reports on the Panama Papers would not have been possible at all.
The downside: programs like Dall-E, Midjourney, Stable Diffusion or GPT-4 create images and texts that even experts can hardly recognize as machine-made. This touches on a core question of journalism: What is the truth?
Machines Can Lie
In March, millions of people saw posts commemorating a tsunami disaster off the Oregon coast in 2001. Thousands of people had died, the posts claimed, and hundreds of thousands had lost their homes. On the social media platform Reddit, one user posted pictures of crying families and of then-President George W. Bush visiting the disaster area. But this disaster never happened. It is a product of generative machine fantasy. Digital illusory worlds like this can become a problem for professional media as well. The question is: how do media brands remain credible when the digital public is increasingly flooded with such fakes?
Another danger: artificial intelligence has no ethical compass per se. Rather, the self-learning algorithms skillfully reproduce what they find and what they are told – including prejudices and discrimination. Thus, in the worst case, this technology can amplify unconscious bias or even conspiracy theories – the very things the media should critically question. The web is already full of such examples.
The professional use of the new AI technology requires responsibility. Not only from media houses themselves, but also from advertising companies. The progressive automation of the media system is also taking place on a second front. Not only are machines increasingly creating media content. Artificial intelligence is also increasingly deciding how much advertising money is to be invested in which areas.
Digital ad spaces are now traded in real time in much the same way as stocks. Here, too, AI can further amplify problematic decision-making patterns. As a result, in the so-called programmatic advertising ecosystem, brands’ ads do not automatically end up in the best, most valuable places. Rather, algorithms often place them in the channels that cost the fewest advertising dollars. This touches on another highly relevant issue: media financing.
In two crucial areas of the media system, decisions are thus delegated to algorithms and machines according to the principle of maximum effectiveness: in the production of content and in its financing. This poses the risk that everything will settle down to the lowest common denominator. The result would be a gigantic sea of sameness. Ultimately, quality suffers.
Why are these questions so explosive, especially for the media industry? Because it’s not an industry like any other. Independent media and independent journalism are essential foundations for a democratic society. Media that lose credibility and quality, and that can no longer be refinanced, are not just a problem for the companies concerned. Ultimately, they are a problem for the democratic community.
Therefore, responsibility and transparency are key concepts in the use of generative AI that companies have to address. To align with these benchmarks, it helps to ask a core question: does AI function as a co-pilot? In other words, does it support the human being, who still continues to hold the steering wheel and determine the course? Or is it a substitute that takes over the leading role in the cockpit?
Here’s a small example from our agency’s everyday work. We have developed a tool based on GPT-4 and fed it with the knowledge of our industry: all the award-winning campaigns of the renowned Effie marketing prize. Our goal: additional creative inspiration for our clients’ media at the push of a button. The result: the majority of the suggestions are not usable; some are even sheer nonsense. Most of the time, however, there is also a really good approach among the suggestions. This means AI is helpful for inspiration, much like an additional colleague joining the brainstorming. To handle the entire process of brainstorming, evaluating and elaborating, however, far more complex applications are needed – and a further level of maturity in the AI models.
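The pattern behind such a tool can be sketched in a few lines: retrieve the most relevant case knowledge for a client brief, then hand it to a language model as context. This is only an illustrative sketch – the campaign summaries are invented, and a naive keyword overlap stands in for the real GPT-4 setup described above:

```python
# Illustrative sketch of a retrieve-then-prompt inspiration tool.
# The case summaries below are toy stand-ins, and the word-overlap
# scoring replaces the real GPT-4 pipeline described in the text.

CASES = [
    "Retail brand won an Effie with a real-time weather-triggered ad campaign.",
    "Beverage brand ran user-generated content across social channels.",
    "Automotive brand paired out-of-home ads with a mobile AR experience.",
]

def retrieve(brief: str, cases: list[str], top_k: int = 2) -> list[str]:
    """Rank case summaries by naive word overlap with the client brief."""
    brief_words = set(brief.lower().split())
    scored = sorted(cases,
                    key=lambda c: len(brief_words & set(c.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_prompt(brief: str, cases: list[str]) -> str:
    """Combine the brief and the retrieved cases into one model prompt."""
    context = "\n".join(f"- {c}" for c in retrieve(brief, cases))
    return (f"Client brief: {brief}\n"
            f"Relevant award-winning cases:\n{context}\n"
            "Suggest three creative media approaches.")

print(build_prompt("social campaign for a beverage brand", CASES))
```

In a production version, the keyword match would be replaced by semantic retrieval and the assembled prompt sent to a model such as GPT-4 – which is where, as described above, a human still has to sort the good suggestions from the nonsense.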
Preventing a Sea of Sameness
Colleague AI can thus support us in our thinking. However, the results must be critically evaluated. Anyone who uses artificial intelligence professionally should know exactly the standards by which it works, in addition to being familiar with its data sources. The phrase that can be read on the wall in the entrance area of our agency still applies: thinking for yourself makes you smart.
The use of super-smart programs will fundamentally change the way we work. Algorithms will do more and more busywork that previously required a great deal of effort. Thus, used responsibly, they can create capacity for jobs that will be all the more skilled.
In the media industry, this is true to an even greater degree. If machines can lie, all the more depends on the people who set the right standards: on media brands and journalists who stand for credibility and truth. This is the only way to effectively distinguish fakes and prejudices from facts. Or, as Adrian Kreye writes in the Süddeutsche Zeitung: the engineers have not yet been able to teach AI the “core skill of journalism”: “critical thinking”. Because “it can’t be calculated.”
However, the question of how to deal with generative AI does not only arise in the media sector. All industries and companies face similar questions and challenges.
Deal With the Topic Now
So how should we handle this new dimension of artificial intelligence? The essential advice is: engage with it in detail. And do it now. The topic may still seem abstract and far away, both in terms of content and time. Believing this would be a mistake. Even by the standards of the highly accelerated digital transformation, AI technology is developing at a speed we have never seen before. The consequences are not dreams of the future. They already affect the here and now.
The practical understanding of generative AI is still highly underdeveloped in many cases. A few weeks ago, we asked a client to describe his office to the image generation AI Midjourney. The image the program generated in response did not bear the slightest resemblance to the real space. The reason: our client did not know how to talk to the program. To achieve the best result, you have to know exactly how to formulate the input – the so-called prompts. And it is precisely this vocabulary for interacting with the current chat AIs that everyone should try out for themselves. Only if you ask smart questions will you get smart answers.
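The difference between a vague and a well-formulated image prompt can be sketched as follows. The attribute names, the example office description, and the use of Midjourney’s `--ar` aspect-ratio flag are illustrative assumptions, not a fixed recipe:

```python
# Illustrative sketch: a vague prompt versus a structured one built
# from concrete attributes. The attributes and the Midjourney-style
# "--ar" aspect-ratio flag are used for illustration only.

def build_image_prompt(subject: str, details: list[str],
                       style: str, aspect_ratio: str = "3:2") -> str:
    """Join subject, concrete details and style into one prompt string."""
    return ", ".join([subject, *details, style]) + f" --ar {aspect_ratio}"

# A vague input like this is what produced the unrecognizable office:
vague = "our office"

# A structured prompt names the concrete details the model cannot guess:
specific = build_image_prompt(
    subject="open-plan agency office in Duesseldorf",
    details=["exposed brick walls", "long wooden desks",
             "floor-to-ceiling windows", "afternoon light"],
    style="photorealistic, wide-angle shot",
)
print(specific)
```

The point is not this particular helper function, but the habit it encodes: spelling out subject, details, style and format instead of leaving the model to guess.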
Another piece of advice: find the aspects of AI use that are relevant and hold potential for your company today – no matter how small they may be. Don’t delegate the topic to individual experts and digital managers alone. If possible, take the entire organization along on this journey. At Crossmedia, we established a horizontal AI team at the beginning of the year to identify and tap into the potential of this technology for our business at an early stage. The five-member working group covers a broad spectrum of our agency: from experts to interested trainees who bring a fresh perspective – all from different departments.
By all means, think the topic of AI through to its extremes. This technology carries great potential for the complete disruption of long-standing business models. The leading question about the co-pilot function of AI is important. In practice, however, it will not be easy to draw a precise boundary between humans and machines. It will probably shift much further than we thought possible not long ago. Only a corporate culture that allows the familiar world to be radically challenged will be truly equipped for the coming changes.
As such: the use of artificial intelligence requires both courage and openness to new things, but also critical thinking and responsibility. At the end of the day, learning machines are great amplifiers: deployed thoughtlessly, they magnify prejudices and problematic decision-making patterns; used responsibly, they are highly efficient supporters that have our backs, so that we can focus on the essentials.
By the way, in principle, our colleague ChatGPT has a similar opinion. To conclude, let’s hear what it has to say:
Current developments in generative AI are undoubtedly fascinating and have the potential to transform many aspects of our lives. From creative industries to medical research, these technologies offer numerous opportunities for improvement and innovation. However, we must not lose sight of the ethical and social implications. Responsible development and application of generative AI requires careful consideration of the opportunities and risks to ensure that the benefits to society are maximized and the negative impacts are minimized.
This article was first published on onlythebest.de on 09.09.2023.