27.06.2023 / Knud Wassermann

AI – Where is the Journey Heading? (Part 1)

Since the end of November 2022, the topic of "Artificial Intelligence" has been on everyone's lips. Thanks to new, easy-to-use tools, AI is now genuinely available to everyone: ChatGPT writes texts on demand, while Stable Diffusion and Midjourney generate realistic images. What's in store for the creative, media and print industries?
 
AI has already been in use in very different areas for many years. It has become particularly widespread in process optimization – autonomous printing, as promoted by individual manufacturers, is not possible at all without AI. In publishing, AI is used, for example, to tag images and graphics. Algorithms polish images to a high gloss, and in many cases the inspection and rejection of printed sheets is already handled by AI.
 
For these applications, the term "machine learning" has become established, which describes very well what this kind of AI is all about. However, AI has also penetrated far into our private lives. We use facial recognition to unlock our cell phones and to carry out banking transactions without giving it much thought. So why all the fuss?
 
AI: a data-driven algorithm
Behind every AI there is a data-driven algorithm. The internet provides the data in abundance, and on e-commerce and social media platforms this happens de facto on the fly. Databases operating in the background feed the algorithms in a very targeted manner until, over time, they grow into powerful tools.
 
Amazon, for example, created an algorithm years ago that predicts what customers will buy next. With OpenAI's ChatGPT chatbot and image generators such as Stable Diffusion or Midjourney, the AI world is now opening up to everyone, without a single line of code having to be written.
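To make the idea of a data-driven algorithm concrete, here is a minimal, purely illustrative Python sketch of next-purchase prediction based on simple co-occurrence counting. The data and the method are toy assumptions for this blog – a deliberately naive illustration, not Amazon's actual system.

```python
from collections import Counter, defaultdict

# Toy purchase histories: each list is one customer's orders in sequence.
# Purely illustrative data -- not real e-commerce records.
histories = [
    ["printer", "toner", "paper"],
    ["printer", "paper", "toner"],
    ["laptop", "mouse", "laptop bag"],
    ["printer", "toner", "toner"],
]

# Count how often item B is bought right after item A across all customers.
followers = defaultdict(Counter)
for history in histories:
    for current_item, next_item in zip(history, history[1:]):
        followers[current_item][next_item] += 1

def predict_next(last_item: str):
    """Return the item most often bought right after `last_item`, if any."""
    candidates = followers.get(last_item)
    if not candidates:
        return None
    return candidates.most_common(1)[0][0]

print(predict_next("printer"))  # -> "toner" on this toy data
```

The more purchase histories such a system is fed, the sharper its predictions become – which is exactly the point made above about data feeding the algorithm.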
 
AI as co-pilot
It is astonishing how quickly the development and market penetration of individual AI solutions have proceeded. Microsoft, a major investor in the OpenAI platform, brought ChatGPT to market within two and a half years. It went live on November 30, 2022. Within five days, one million users had registered – two months later, there were already 100 million. By comparison, Facebook took ten months to reach its first million users.
 
Satya Nadella, CEO of Microsoft, emphasized at the World Economic Forum 2023 in Davos how much AI applications will support people in their everyday lives and take on the role of a co-pilot. He cited an Indian company as an example: it has already integrated ChatGPT into its platform and uses it to help people submit applications to public authorities, taking care not only of the wording but also of the corresponding translation. In a country with 22 official languages, such a solution is very helpful. The AI solution is to become a fixed component of the Microsoft world this year and to be integrated into various applications such as Word and Teams and, above all, into the company's own browser.
 
Unresolved questions remain
Basically, the question arises whether the term Artificial Intelligence is appropriate at all. Wikipedia provides the following definition: "Intelligence can be described as the ability to perceive or infer information, and to retain it as knowledge to be applied towards adaptive behaviors within an environment or context."
 
AI, however, only tries to simulate these abilities and appear as human as possible. But that has nothing to do with human decision-making ability, which is characterized by emotion, empathy, imagination, reason and much more. "We must not make the mistake of assuming that AI has human understanding," emphasizes AI expert Marnus Flatz. Perhaps the term technical intelligence (TI) would be a better choice.
 
Another problem is the foundation on which AI is built: Where does the data come from? What data are the algorithms fed with? How do AI solutions map diversity, for example? These are certainly not the only questions that remain unanswered. Where do the copyrights lie – with the company that developed the AI solution or with the user of the AI solution?
 
The image agency Getty Images has already filed a lawsuit against Stability AI (https://stability.ai), the company behind the Stable Diffusion image generator, in order to bring clarity here. Data protection must also be re-examined, because AI applications mostly run in the cloud – and the issue of sustainability has so far been ignored altogether.
 
Who will take responsibility?
ChatGPT and all the other AI tools will not remain free for all users in the long term – they are obviously too expensive to operate. Experts say that the energy consumption will be similar to, if not higher than, that of the blockchain. According to the Bitcoin Electricity Consumption Index, 107 terawatt hours were consumed in 2022 for Bitcoin mining alone, which is roughly equivalent to the electricity consumption of the Netherlands.
 
If the volume of data continues to grow at such a massive rate – and we can assume it will – future AI applications will require a different IT infrastructure. When GPT-3 jumped to GPT-4, the data volume increased ten-thousand-fold from the original 500 GB. Quantum computing could provide the solution here, but that will take a while yet.
 
ChatGPT can certainly be used in a professional environment – for example, as a support tool in online stores, where chatbots are already in use today, but also in customer service, in marketing for the creation of social media posts, or in building product pages, especially for the wording of description texts. In these areas, time-consuming manual tasks can be streamlined through the use of AI. However, the responsibility for checking and validating the generated information clearly lies with the user – which means, conversely, that you cannot blindly trust an AI.
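As a hedged illustration of how such a description text might be drafted – a sketch, not a definitive implementation – the following Python snippet uses OpenAI's API as it looked in mid-2023. The model choice, prompt wording and product details are assumptions, and the output still has to be reviewed by a human, exactly as stated above.

```python
# pip install openai   (sketch based on the openai Python library as of mid-2023)
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

def draft_product_description(name: str, facts: list) -> str:
    """Ask the chat model for a first draft of a product description.

    The draft is only a starting point -- a human must check facts,
    tone and claims before anything is published.
    """
    prompt = (
        f"Write a short, factual product description for '{name}'. "
        "Use only the following details and do not invent features:\n- "
        + "\n- ".join(facts)
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",          # model choice is an assumption
        messages=[{"role": "user", "content": prompt}],
        temperature=0.4,                # lower temperature = fewer flights of fancy
    )
    return response.choices[0].message.content

# Hypothetical product, invented for this example.
draft = draft_product_description(
    "EcoPrint A3 office printer",
    ["prints up to 30 pages per minute", "duplex printing", "uses recycled toner cartridges"],
)
print(draft)  # review and edit before it goes into the shop
```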
 
The perfect image or display
With Midjourney or DALL-E, you get a usable result very quickly, provided you enter the appropriate prompts (keywords). The more detailed the prompts, the better the images – and all without typing a single line of code. Whether the result is good or bad is, as always, in the eye of the beholder. The platform www.looka.com helps to create logos and various corporate design elements. The whole thing is also available for video, of course, and then we very quickly arrive at the subject of "deepfakes". The term sums it up pretty well: new images or video sequences are created that don't necessarily have anything to do with reality.
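To show how much the level of detail in a prompt matters, here is a small Python sketch against OpenAI's DALL-E image endpoint as exposed by the pre-1.0 openai library – an assumption for illustration; Midjourney and Stable Diffusion are driven through their own interfaces, and the prompts themselves are made up.

```python
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumes the key is set in the environment

vague_prompt = "a printing press"
detailed_prompt = (
    "a modern sheet-fed offset printing press in a bright press hall, "
    "operator checking a colour proof, shallow depth of field, photorealistic"
)

# Generate one image per prompt; the detailed prompt yields a far more specific result.
for prompt in (vague_prompt, detailed_prompt):
    response = openai.Image.create(prompt=prompt, n=1, size="1024x1024")
    print(prompt, "->", response["data"][0]["url"])
```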
 
Creating public awareness
Despite the many unanswered questions that have emerged and will continue to emerge, AI-powered solutions will free us from many repetitive tasks. The economic benefits will be huge – especially when you think about the combination of AI, robotics and 3D printing. All levels of society could benefit from this; it is just that the framework conditions are currently lacking. A quote from physicist Stephen Hawking fits the bill: "Success in creating effective Artificial Intelligence could be the greatest event in the history of our civilization. Or the worst."
 
With this in mind, the goals of these very broad algorithms, such as ChatGPT, need to be disclosed. The issue cannot simply be left to the tech giants, who are essentially pursuing only economic interests – even if OpenAI assures us that it is working on open, generally accessible AI solutions from which all of humanity should benefit. Fundamentally, technology is neither good nor evil, but how AI is used requires clear legal rules. Every drug needs to be approved, and a similar approach is needed in the field of generative AI.
 
Weighing positive and negative effects
In this context, critics point out that such regulations inhibit innovation. But hand on heart: given all the profound innovations of the past decades, it might not be a bad thing to take a little steam out and weigh the possible positive and negative effects a little more carefully. Even Elon Musk recently signed a petition calling for a six-month development freeze on large AI projects such as ChatGPT – yet in parallel, Musk has founded the company "X.AI", which in turn undermines his credibility.
 
And finally, this note: Artificial or Technical Intelligence is not comparable to human intelligence. This realization helps to defuse fears that the machine will replace the human. The strength of humans lies in their empathy, creativity, imagination and ability to interpret complex relationships. Technology is not good or bad per se – it is ultimately society that decides what to do with it.
 
In the second AI blog post, in a week's time in this space, we will take a look at how AI is already being used in the publishing and printing industry today!
 
Yours
Knud Wassermann,
Editor-in-Chief "Graphische Revue"