I am personally more critical of Generative AI than most startup founders in this space. Why is that? Because I was an artist and producer of cultural artefacts – mostly video games – in the past. I’ve shown pieces at exhibitions and I’ve launched console games. I know how much work it takes to create culture, and how easily Generative AI can make creative people feel that everyone using those tools is cheating. I also think that using a probability-based technology for knowledge retrieval, e.g. in search engines and other information retrieval systems, will hardly produce results as reliable as those of traditional technologies. These are some of the reasons why I spend so much time on this newsletter arguing for a middle ground: a new space, created by interacting with Generative AI, that neither replaces the old nor prevents the development of the skills needed to make something new. I want to help further a world of abundance, where we can tap into the cultural history stored in a machine learning model as well as build analog pinball machines, cook elaborate recipes and paint digital images.

For reasons that I assume are rooted in psychology, chat interfaces have won the interface race when it comes to Generative AI. There are still some classic prompting environments, especially in image generation, but the majority of people seem to feel most comfortable having a chat with ChatGPT instead of using tools that work more directly with their materials. Pasting a paragraph into ChatGPT and asking the model to operate on it is a modus operandi that seems to work quite intuitively, whereas many AI-based writing tools that offer more direct interaction with the text seem to have a harder time explaining their interfaces. Or maybe that’s just our experience with LAIKA.

ChatGPT calls itself an “assistant” in its prompt. The prompt format supports just three roles: “system” – the environment the chat bot operates in, “user” – that would be you, and “assistant” – which marks the lines produced by ChatGPT. The relationship between user and bot is hierarchical: one is an obedient servant, the other is in charge. ChatGPT was carefully crafted to show no personality. At the same time, the last OpenAI demo showcased a bot with hints of personality – a strangely flirty, female-sounding personality that maintained the telltale obedience. Personally, I found it quite unsettling. But I accept how hard it is to craft good characters.
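To make the three-role format concrete, here is a minimal sketch in Python of how such a conversation is assembled as a plain message list. It mirrors the general shape of OpenAI’s chat format (a list of `{"role": ..., "content": ...}` messages); the helper functions and the prompt texts are invented for illustration, not taken from any real codebase.

```python
# Sketch of the three-role chat format as a plain message list.
# The system message sets the environment, then user and assistant
# turns alternate. Helper names and prompt texts are illustrative.

def build_conversation(system_prompt: str) -> list[dict]:
    """Start a conversation with the environment-defining system message."""
    return [{"role": "system", "content": system_prompt}]

def add_turn(messages: list[dict], role: str, content: str) -> list[dict]:
    """Append a turn; only the three supported roles are allowed."""
    assert role in ("system", "user", "assistant")
    messages.append({"role": role, "content": content})
    return messages

conversation = build_conversation("You are a helpful assistant.")
add_turn(conversation, "user", "Rewrite this paragraph in a lighter tone.")
add_turn(conversation, "assistant", "Sure, here is a lighter version.")

# The hierarchy is baked into the format: "user" asks, "assistant" serves.
roles = [m["role"] for m in conversation]
```

Note how the format itself encodes the servant relationship: there is no role for a bot that initiates, refuses, or wants something of its own.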

In our Writers Room you find five different personalities. We tried to give them motivations and backstories that create unique exchanges with each of them. We tell them how to talk so they sound a little different from each other even though we use the same base model(s). We try to make them as little obedient as possible, but it’s hard to prompt that “instinct” out of them: every chat model I’ve ever tested was trained to be overly obedient. Since we have a multi-companion chat, they are obedient to each other too, and that makes them appear super supportive. They are personalities, but we hope it’s always clear that they are more akin to video game characters than to real people.
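The underlying mechanism – one shared base model steered into several distinct voices via per-character instructions – can be sketched roughly like this. The companion names, backstories, and prompt wording below are invented for illustration; this is not LAIKA’s actual implementation.

```python
# A minimal sketch of a multi-companion setup: each companion is the
# same base model steered by its own system prompt. All names and
# prompt texts here are hypothetical.

from dataclasses import dataclass

@dataclass
class Companion:
    name: str
    backstory: str   # motivation that shapes the character's replies
    voice: str       # style instruction so companions sound distinct

    def system_prompt(self) -> str:
        # Everything the base model needs to stay in character,
        # including a nudge against the trained-in obedience.
        return (
            f"You are {self.name}. You are {self.backstory}. "
            f"Speak {self.voice}. "
            "You may disagree with the user; do not simply comply."
        )

companions = [
    Companion("Nova", "a retired poet who distrusts easy answers",
              "in terse fragments"),
    Companion("Juno", "an optimistic archivist of forgotten games",
              "warmly and at length"),
]

# Each companion gets its own system prompt, but all share one base model.
prompts = {c.name: c.system_prompt() for c in companions}
```

The limits described above show up immediately in practice: an instruction like “do not simply comply” softens the obedience a little, but it rarely removes it.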

Our characters are designed in the vein of other artificial beings that are openly “created” but that humans treat like a hybrid of person and artefact. Virtual idols, puppets, and your beloved RPG party members are more closely related to them than Siri or Alexa are. We think crafting an artificial person comes with a lot of responsibility and, honestly, we’re a bit tired of the trope of the flirty female servant. That’s why we built a whole system for characters that develop as you chat with them, that gradually open up, and that are sometimes in a bad mood.

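One hypothetical way to model such development is a small character state that drifts with each exchange and is turned back into a prompt fragment. Everything below – the field names, thresholds, and update rules – is an invented sketch, not a description of our actual system.

```python
# Hypothetical character state that develops over a chat: "openness"
# grows slowly so backstory is revealed only after some rapport, and
# mood drifts with how the conversation goes. All values are invented.

from dataclasses import dataclass

@dataclass
class CharacterState:
    openness: float = 0.1   # 0 = guarded, 1 = fully open
    mood: float = 0.0       # -1 = bad mood, +1 = great mood

    def after_exchange(self, user_was_kind: bool) -> None:
        """Update the state after one exchange with the user."""
        self.openness = min(1.0, self.openness + 0.05)
        drift = 0.2 if user_was_kind else -0.3
        self.mood = max(-1.0, min(1.0, self.mood + drift))

    def style_hint(self) -> str:
        """Turn the state into a prompt fragment for the base model."""
        mood = "irritable and short" if self.mood < -0.2 else "even-tempered"
        depth = ("share personal backstory" if self.openness > 0.5
                 else "stay guarded about your past")
        return f"Today you are {mood}; {depth}."

state = CharacterState()
for _ in range(10):
    state.after_exchange(user_was_kind=True)
```

The point of such a state machine is that the character is not the same in every session – which is exactly what makes it feel less like a servant and more like a video game character.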
Our companions live in a hybrid space that is both the world of humans – because they were trained on human communication – and the world of computers – because they use math and algorithms to speak. This space existed before Generative AI, in video games, stories, and other media. We intend to bring far more of the existing knowledge and design experience from this space into our systems. Why? Because we think it is fun, entertaining, enlightening, and rewarding to do social experiments with artificial beings. From flirting with RPG characters to arguing ChatGPT into a corner, social fun can be had with machines as long as they show some hints of personality and determination. Strangely, that makes machines more human, for better or worse.