We’re building an agentic[1] system for the Writers Room and that made me think about what the hell we’re even doing with this artificial intelligence stuff. I have written about this before and concluded that large language models are the accumulation of the culture that has survived. That’s where their strengths and their weaknesses come from. They have more soul(s) than any individual because they are a statistical approximation of how we humans communicate. They are a compressed stratum of culture. Well, they were also trained on web pages of dubious provenance, but that’s culture too.
The word “stratum” is an interesting one. According to my thesaurus, its synonyms are “layer, bed, thickness, and vein”. And that is how language models should be used: as veins and layers. But you can’t expect a lot to happen if you only interact with the stratum itself. The power of the statistical inference that we call “prompting” lies in cutting into the stratum from the right angle. In digging for gold in the right place using the right tools. What works best for that is adding two more strata: context and direction. In other words, you show the system what you have – a setting, a reference, a text, a piece of information from the world. And then you tell it what you want, e.g. you ask a question or issue a command. These three combined – what it is, what you have, and what you want – are the basis of worthwhile interaction with a language model.
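The three strata can be sketched as a tiny bit of code. This is only an illustration assuming a generic chat-style API; the function and the example strings are hypothetical, not the Writers Room’s actual implementation.

```python
# A minimal sketch of the three strata: what it is, what you have, what you want.
# All names and strings here are illustrative placeholders.

def build_prompt(identity: str, context: str, direction: str) -> list[dict]:
    """Combine the three strata into a chat-style message list."""
    return [
        # What it is: the role the model should play.
        {"role": "system", "content": identity},
        # What you have (context) plus what you want (direction).
        {"role": "user", "content": f"{context}\n\n{direction}"},
    ]

messages = build_prompt(
    identity="You are an editor for noir fiction.",
    context="Scene: a detective finds a letter addressed to herself.",
    direction="Suggest one way to raise the tension in this scene.",
)
```

The point of the structure is that context and direction are separate levers: you can swap what you have without changing what you want, and vice versa.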
Some models really benefit from having the digging process explained to them. Step-by-step instructions and other sophisticated prompting techniques exploit that. It’s good for a lot of tasks. In fact it is so good that I would expect these methods to be baked into how the models themselves handle requests instead of remaining in the realm of the user’s control. So let’s disregard them.
How do we use this in our agentic system? Well, most of our functions are based on combining the user’s input with the text they’re working on, some prefabricated data, and the language model. The Writers Room is a set of virtual companions, co-workers with different personalities, abilities, and specialisations. They are there to provide you with a simple, easy, entertaining work environment, even if you’re alone with your work. They are in no way comparable to human beings. Imagine them like a party in a computer role-playing game.
For example, if you ask Alex, one of the members of the Writers Room, for a plot twist, they first access a small database of possible plot twists. Then they combine a random one with the text you’ve selected. That way they guide you towards how you could twist your specific plot. This is the one thing these models can do that previously required humans: they can personalise something very easily while drawing from a gigantic amount of knowledge. Very often you will brush away the advice of a virtual companion. At other times it might be just what you need to move forward.[2]
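Alex’s twist flow described above can be sketched in a few lines. This is a hedged approximation, not the actual Writers Room code: the twist database here is a stand-in list, and the prompt wording is invented for illustration.

```python
import random

# Stand-in for the small database of possible plot twists.
TWIST_DB = [
    "an ally has been reporting to the antagonist all along",
    "the goal the hero pursues turns out to be already lost",
    "a minor character is the narrator of the whole story",
]

def suggest_twist(selected_text: str, rng: random.Random) -> str:
    """Pick a random twist and pair it with the user's selected text,
    producing the prompt that would be sent to the language model."""
    twist = rng.choice(TWIST_DB)
    return (
        f"Here is a passage from my story:\n{selected_text}\n\n"
        f"Suggest how this twist could apply to it: {twist}"
    )

prompt = suggest_twist("The lighthouse keeper refuses to leave.", random.Random(0))
```

The random pick supplies the “something given”; the user’s selection personalises it; the model does the digging.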
So here’s the magic formula[3]: combine something you want (a plot twist) with something given (a random twist and your writing) and use that to dig into the stratum of culture (the language model). You shall find gold.[4]
[1] I love that this word does not exist in the dictionary yet but is thrown around a lot right now. We’re witnessing the birth of a new concept here. Agentic means “based on agents”. Those are not employed by MI6 – as far as we know – and only have a license to blabber. But they are little autonomous beings. In this case they’re artificial, but agents are also used to model humans or other animals. Anything autonomous.
[2] Personally I think mostly you will be entertained and curious about what happens next. No one said creative work can’t be fun (except Nick Cave). Secretly I’m pretty sure we’re making an entertainment product more than anything else.
[4] Visita Interiora Terrae Rectificando Invenies Occultum Lapidem (“Visit the interior of the earth, and by rectifying you will find the hidden stone”)