Why AI won’t work without content design
There are a lot of articles and talks circulating within the UX design community about how artificial intelligence (AI) will take our jobs - and, more positively, how to harness the output of Large Language Models (LLMs) like ChatGPT.
But as an experienced senior content designer - I am currently working with Caution Your Blast Ltd (CYB) on the recently launched Foreign, Commonwealth and Development Office (FCDO) service, the FCDO's first use of LLMs in a public-facing government service - I’ve realised that in actual fact it’s the other way around: the success of AI is absolutely dependent on good UX. Specifically, good content design.
A primer, for those who need catching up: in essence, LLMs take loads and loads and loads of content, process it, and churn it out again. OpenAI haven’t been all that clear about which websites they’ve trained their models on, but the amount of content required to train LLMs is, well, vast.
When you’re creating a chatbot, the best way to ensure the LLM doesn’t drag up inaccurate content from elsewhere is to confine its source of information to one area. It’s still been trained to think and work and process data using other sources - but then you tell it to only look at a specific set of data when providing answers. This is called Retrieval-Augmented Generation (RAG).
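To make the RAG idea concrete, here is a minimal, illustrative Python sketch. Everything in it is hypothetical - the example data, the keyword-overlap retrieval (real systems use vector embeddings), and the prompt wording - and the actual LLM call is omitted. The point is the shape of the pattern: retrieve from a confined knowledge base, then instruct the model to answer only from what was retrieved.

```python
# Sketch of the RAG pattern: retrieve relevant documents from a fixed
# knowledge base, then confine the model's answer to that context.

def retrieve(question: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Score each document by word overlap with the question and return
    the top_k matches. Real systems use embeddings, not word overlap."""
    q_words = {w.strip("?.!,") for w in question.lower().split()}
    scored = sorted(
        documents,
        key=lambda d: len(q_words & {w.strip("?.!,") for w in d.lower().split()}),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(question: str, context: list[str]) -> str:
    """Instruct the model to answer ONLY from the supplied sources."""
    joined = "\n".join(f"- {doc}" for doc in context)
    return (
        "Answer using ONLY the sources below. If the answer is not "
        "in them, say you don't know.\n"
        f"Sources:\n{joined}\n"
        f"Question: {question}"
    )

# Hypothetical knowledge base - in practice, the curated content estate.
knowledge_base = [
    "You can renew a UK passport online or by post.",
    "Consular staff cannot give legal advice overseas.",
    "The BBC website launched in 1997.",
]

question = "How do I renew my passport?"
context = retrieve(question, knowledge_base)
prompt = build_prompt(question, context)
# The prompt would then be sent to the LLM in place of a raw question.
```

Notice that the quality of the answer is bounded by the quality of `knowledge_base` - which is exactly why the curation work described below matters.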
I could use any large content estate to provide this case study, but I’m going to talk about the BBC website. I worked there as a content producer for over 10 years, and even with hundreds of trained staff working in content production, I can assure you that a vast amount of the content live on bbc.co.uk is out of date.
This isn’t me being rude - it’s a fact. Large organisations have been publishing content on the internet for at least 25 years now - the BBC website launched in 1997 - and when you’re publishing hundreds of pages a day for 25 years, who really has time to go back and delete/archive old content? We all do our best, but publishing tends to either be left to tiny central web teams or is decentralised. When setting budgets, auditing and removing obsolete content tends to be at the bottom of the pile.
Worse, despite inaccessible PDFs now being unethical and even, sometimes, illegal unless they’re accompanied by HTML versions, I have worked on a great many projects where there simply hasn’t been enough time or money to replace these documents with accessible HTML. In BBC Bitesize’s case, we started replacing downloadable PDF worksheets with HTML versions well over a decade ago, but this is sadly not the case for every website. We don't know if AI is scanning content in downloadable documents, so any information contained within them may not be included when the AI is trained.
With so much out of date information going into AI, it is absolutely natural that what comes out is often unpredictable, inaccurate, or hallucinated (and I’ll save the consequences of hallucination and model collapse for another day).
So how do we fix this? Right now, the only answer I can see to creating accurate chatbots is for us to fix the information we feed them - and content designers are absolutely perfect for this job. It is us who track and audit information; it is us who write clearly, accurately and concisely. We fact-check, we find the source of truth, and we work out what people really need when they’re looking for answers.
Will AI take our jobs? Maybe. It may well change them, but not so much in the case of content designers - because we’re already skilled at doing exactly what AI needs. If AI is to work properly, content designers and UX writers are absolutely key to its success. So while others may rightfully ask questions about the future of working in technology, I feel pretty safe for now.