There is a lot of chatter among editorial professionals about the risks and benefits of artificial intelligence (AI) tools. This article gives a brief overview of how popular AI tools work, offers ideas for how to use them, and looks at the concerns that are emerging – not least AI’s impact on the creative industries, particularly publishing, and its unseen impact on the environment.
More than 125 years ago, innovators and entrepreneurs unleashed what was to become one of the most influential inventions of the modern era: the motor car. At the time, motor cars were regarded as both exciting developments and potential scourges on society. As with any new technology, there were early adopters (mostly wealthy individuals) who loved the feeling of freedom and the open road (think Wind in the Willows’ Mr Toad, poop-pooping[1] around the English countryside at great speed in 1908), but it wasn’t until after the Second World War that the global market properly took off.
Safety concerns initially focused on the risks to the individual driving the vehicle (and any passengers), but as the car grew in popularity the focus switched to the risks to other road users, and it wasn’t long before governments stepped in with laws to control the use (mainly the speed) of vehicles.
What few (if any) people worried about in the early days were the unseen risks that were slowly building up as a result of the chemicals emitted by the petrol-powered engines: instead, the main priority was securing access to fuel, with efforts throughout the 1920s by European countries and the USA to gain control of world oil supplies[2] – and we all know how that has turned out! And even with our twenty-first-century scientific knowledge of the environmental impact of vehicles powered by fossil fuels, we are still dithering over their eradication.
The parallels with AI are obvious, except that the development of AI-powered tools has been compressed into just a few decades. With some of the tools now freely available and others almost unavoidable, it is not Luddite to say, “Hold on, let’s think where this is taking us”. If we don’t do that now, who knows where we may be heading.
For instance, a recent BBC radio discussion programme highlighted the early adoption of AI in weapons systems that are already being deployed in “remote” warfare.[3] Equally, there are many reports of high-level science using AI for innumerable positive purposes, from sifting through massive datasets to improve climate models, to drug discovery and innovation.[4]
What is different about the growth in use of AI tools compared with the growth of the motor industry is that some AI tools are being provided “free of charge” to the general public. Note my use of scare quotes: just because we aren’t asked to pay a usage fee doesn’t mean there isn’t a cost, whether that’s providing our personal data, or further contributing to climate change courtesy of the vast computing power required to keep AI systems running.
I’m of the generation that grew up with unreliable cars that needed to be tinkered with (mostly by dads) to keep them running smoothly. That required a decent level of understanding of how a motor car works, plus a well-stocked toolbox and overalls to save your clothes from grease stains.[5] When we bought our first car in 1989, I took myself off to our local night school (anyone remember those?[6]) to learn basic car maintenance.
So when AI started to show up in conversations among editorial colleagues I did the same, but in the modern way – from the comfort of my desktop.
Understanding the basics
An excellent (free to members) webinar from the European Association of Science Editors (EASE) started my learning journey, with an overview of ChatGPT, machine learning and large language models (LLMs) by Phill Jones and Avi Staiman.[7] Even though this event was back in May 2023, the basics haven’t changed, so it’s worth sharing the key points here, starting with the terminology.
- What is “GPT”? The answer is in the initialism: Generative Pre-trained Transformer. The P – the pre-training – is the part of the process that has been implicated in copyright violations (see LLM, below).
- Who is behind the widely used ChatGPT tool? It emerged from a collaboration that began in 2015 with the creation of the research company OpenAI by, among others, Amazon Web Services, Infosys, Sam Altman (CEO) and Elon Musk (who resigned in 2018). OpenAI comprises the non-profit OpenAI Inc. and the profit-making arm, OpenAI Global LLC, which has Microsoft as a major shareholder (approx. 49%; USD 13 billion invested).
- What is an LLM? A large language model is, in simple terms, a computer program created by assessing a massive amount of published text (often content gleaned, for free and without acknowledgement) to identify common word combinations known as “n-grams” (e.g. a two-word combination is a 2-gram). With around 40,000 commonly used words in the English language, there are about 1.6 billion 2-grams, while the number of 20-grams “would exceed the number of particles in the universe”![8] In other words, an LLM is a probability machine. The LLMs used for AI text generation have been pre-trained[9] and then fine-tuned to predict n-grams (longer chains of connected words) – see the sketch after this list.
- What is NLP? It stands for natural language processing: the broad field of getting computers to handle human language. The transformer (the T in GPT) is the technique used to turn words into numbers for easier analysis. That makes it possible to predict sentences using rules about semantics and word types/combinations and, crucially, about which word combinations (the n-grams) are more likely than others in a given piece of text. This clever technique gives GPT its ability to simulate conversational and even high-level research text: it can “see” long-range relationships between words.
- What can a GPT be used for? This is where we get to the G. A GPT tool generates an answer to a question (known as a prompt) from a user, using its “knowledge” of the way language works. Crucially, it does not “understand” the prompt: it analyses the words that make up the prompt, then assembles a statistically plausible answer.
In summary: ChatGPT is a probability machine. What’s so worrying about that?
Well, if you have ever stood in a queue to buy a ticket for a lottery you may have heard people throwing around unmathematical reasons for their choice of numbers. We humans (including me) are just not great at understanding probability. Which is a worrying failure of our mathematics education, considering the huge role probability now plays in all our lives – from weather forecasts to the spread of pandemics.
Approach with caution
As things stand, my take is that we should approach free AI tools with caution, for the reasons implied above. In fact, AI told me so! (See screenshot, below.)
From that tiny example it’s easy to see why logging in to ChatGPT is so tempting. It has been programmed to talk like a human. (See what I did there?)
There are numerous examples of it getting things wrong, or simply making things up (known as hallucinating).[10] But we could say the same for people …
We editorial professionals (copyeditors, proofreaders, copywriters et al.) have additional concerns because of the potential for AI to take over some of the functions we offer to clients. (Note the word potential – it isn’t ready yet!)
A cautionary tale comes via a New Scientist report.[11] Chen Liang and colleagues at the University of Connecticut School of Business ran an experiment that involved asking users of Prolific (an online platform that pays people for participating in research projects) to complete paid-for tasks with and without the aid of ChatGPT. They observed that the writers were willing to accept lower pay if they had used the AI tool. Commenting on the findings, Mike Katell of the Turing Institute in London told New Scientist that the study shows how using LLMs can “make creative writing less specialist and thereby less valuable overall, and will drive down the demand power for any writer, regardless of whether they use an LLM.”
The similarities to the editorial community are obvious.
Of course, ours is not the only industry sector concerned about the use and misuse of public-facing AI tools such as ChatGPT. In 2023 the European Union agreed its AI Act, and the European Commission began to set up an AI Office to enforce and supervise the rules surrounding these LLM-based technologies, particularly the legal and ethical aspects of their use. Unfortunately, as an article in Nature pointed out, the Act assumes that the public-facing AIs are “low to no risk”, and that the AI developers will be permitted to self-assess the risks of their products. This is worrying. As Nature put it:
Governments need to support innovation, but they also have a duty to protect citizens from harm and ensure that people’s rights are not violated. Lessons learnt from the regulation of existing technologies, from medicines to motor vehicles, include the need for maximum possible transparency, for example, in data and models. Moreover, those responsible for protecting people from harm need to be independent of those whose role it is to promote innovation.[12] (My italics)
Independent assessment and regulation are the cornerstones of successful technologies and businesses, even though they don’t get everything right (as anyone involved in the construction industry over recent decades will know only too well). The fact that the tech sector has adopted the “agile” approach to product development – creating a tool, launching it as quickly as possible, then relying on users to feed back about problems – only makes matters worse. But I digress.
Tools you can (probably) trust
There are early adopters among the editorial community who have used AI-driven tools far more than I have. Apart from anything else, I would be concerned about the confidentiality of client materials (no matter what the AI platform may tell me).
I have tried out ChatGPT to see how I might use it. I’ve found it handy as a search tool if there’s a word stuck on the tip of my tongue: it’s good to be able to refine a search on the fly to pinpoint the precise nuance I’m seeking. I’ve also let it have a shot at proofreading/editing text I’ve written, just out of curiosity. (It tends to tell me I’m verbose, which is rude: I hadn’t asked it to analyse my personality!)
I don’t see ChatGPT as a tool I would use in my workflow (see Scarier thoughts, below), but I have sat in on several webinars[13] about systems developed by publishing-focused businesses that you might want to investigate:
- Free AIs such as Mistral and Perplexity (recommended as “better than ChatGPT” by Michael Wooldridge[14] of Oxford University’s Department of Computer Science) and Gemini (formerly Bard, the Google AI). Note that the latter is up-front about its modus operandi, including that it doesn’t just generate text, it can also fetch it from regular searches (a plus); and that it learns “from your prompts, responses and feedback” (a minus).
- Curie is an academic editing tool promoted by the editorial services provider American Journal Experts (AJE; part of the Springer Nature group) in collaboration with MPS, a publishing software developer. Adrian Wallwork, an author and lecturer on English for Academic Purposes who has assessed the tool,[15] found it useful for correcting verb forms, replacing pronouns with the correct noun and flagging passive constructions. He also noted that it “corrects spelling of authors with English names” (but unfortunately he didn’t use it to clarify the ambiguity of that final point).
- Intelligent Editing, the makers of the popular consistency-checking tool, PerfectIt, have created their own AI product, Draftsmith. Both Curie and Draftsmith show suggested changes as fully tracked, like PerfectIt. The Draftsmith developers are very clear that it’s a tool for writers, not a writing tool (it isn’t generative, even though it is based on Microsoft’s Azure OpenAI Service). Intelligent Editing describe Draftsmith as a “writing ally”, and have incorporated features targeting people who use English as an additional language.
- ProWritingAid has been widely promoted among editors in recent months, even though, like Curie and Draftsmith, it is mainly intended as a tool for writers. I had a chat with ProWritingAid’s representative at the CIEP annual conference in September, who was aiming to impress editors with its customisability, and I later joined an ACES webinar that demonstrated some of this power while comparing the tool against Grammarly’s AI offering.[16] My impression is that ProWritingAid – with its “CamelCase” name reminiscent of the NaNoWriMo writing trend, and the fairyland cartoon graphics on its website – is firmly targeting the fiction market, about which I know only a little. Suffice to say that both ProWritingAid and Grammarly AI are about twice the price of Curie and Draftsmith (monthly billing).
Annoyances
Over the past six months you will have noticed that AI tools have been invading many other aspects of our online lives. My iPhone’s predictive text is now AI powered; Microsoft’s Office 365 incorporates AI (but it’s fairly simple to turn it off); and – most infuriating – Adobe’s free Acrobat Reader has an AI tool constantly twinkling from the top right corner of the page that seems impossible to switch off.
Although these tools purport to be user benefits, you have to ask yourself why the software companies are adding these features free of charge. Someone somewhere is paying for this, and most likely – as with Facebook, Twitter et al – it is we users who are the commodity being sold.
Scarier thoughts
As Michael Wooldridge explained, a significant problem with ChatGPT and the like is that they were trained by feeding in huge amounts of text: 500 billion words from the World Wide Web, drawn from sources ranging from some 45 million long novels to the entire content (at the time) of Reddit, with no quality-control checks at the input phase. That immediately opens up the risk of an AI becoming a self-perpetuating “garbage in, garbage out” machine.
Although the tools I’ve listed above emphasise that they give users the choice to accept or reject changes, that relies on users having confidence in their own English – the lack of which is the very reason they turned to a tool in the first place!
I even heard one webinar presenter say that “readability is more important than grammar” … which is quite a misunderstanding of the nature of grammar.
And then we come to the hallucinations. Alice Grundy of The Australia Institute Press (aka @alicektg), speaking at a webinar given by Australia’s Institute of Professional Editors (IPEd), stressed the importance of writing good prompts – citing the example of a student using AI to write an essay on Pride and Prejudice that referred to non-existent scenes, although this now seems trivial compared with inaccurate news alerts (see note 10).
Worse, AI is now creating material for publication on the web that appears to be human-made but is actually entirely fake. Known as “slop”, this content misleads search engines and takes the unwary reader off to sites where the owners gain advertising revenue.[17] To this murky world we can also add AI-generated books (untouched by human editors), deliberately manipulated or entirely fabricated photographs, stolen voices used for deepfakes[18] and the resulting general lack of trust in … well, many public and political institutions.
Most worrying of all is the humongous amount of energy being used to drive these online tools and their behind-the-curtain databases and neural networks, as well as the human brainpower needed to cut through the growing torrents of slop. How have we come to this?
In conclusion: There may be some useful nuggets swimming around in this primordial “content soup”, but I think I’ll wait a little longer before dipping my toes in.
P.S.
This article was written by a human. It was proofread by a human, who also used Word’s Editor functions, PerfectIt and ChatGPT as supporting tools. (Email me via the link above if you would like to see ChatGPT’s feedback on this article.)
Further reading
Adobe (2024) “Conversational AI to Trillions of PDFs with the New AI Assistant in Reader and Acrobat”. Press release. 20 February 2024. See https://news.adobe.com/news/news-details/2024/adobe-brings-conversational-ai-to-trillions-of-pdfs-with-the-new-ai-assistant-in-reader-and-acrobat.
Battersby, Matilda (2024) “Penguin Random House underscores copyright protection in AI rebuff”. The Bookseller. 18 October 2024. See https://www.thebookseller.com/news/penguin-random-house-underscores-copyright-protection-in-ai-rebuff.
Burkeman, Oliver (2024) “It’s the human connection, stupid”. Blog post: The Imperfectionist. 2 May 2024. See https://ckarchive.com/b/d0ueh0h4gdvw6fk4xx64otmx96444hl.
EASE (2024) “Quality and safety of artificial intelligence generated health information”. EASE Digest, European Association of Science Editors. See https://ease.org.uk/2024/05/quality-and-safety-of-artificial-intelligence-generated-health-information/.
Kearns, Bernadette (2023) “A creative sea of troubles: Generative AI and the legal battle for creativity”. Blog post: Association of Freelance Editors, Proofreaders & Indexers of Ireland (AFEPI). 19 December 2023. See https://afepi-ireland.com/blog/a-creative-sea-of-troubles-generative-ai-and-the-legal-battle-for-creativity/.
Tobitt, Charlotte (2024) “Who’s suing AI and who’s signing: Publisher deals vs lawsuits with generative AI companies”. Press Gazette. 6 December 2024. See https://pressgazette.co.uk/platforms/news-publisher-ai-deals-lawsuits-openai-google/.
Vincent, James (2024) “Do Chatbots Dream of AI Poetry? Calvino, Madness and Machine Literature”. Blog post: Faber. 17 May 2024. See https://www.faber.co.uk/journal/do-chatbots-dream-of-ai-poetry-calvino-madness-and-machine-literature/.
Notes
[1] Back then, that was how author Kenneth Grahame described the sound made by the vehicle’s horn.
[2] Source: https://www.vam.ac.uk/articles/the-car-petrol-power-pollution.
[3] “Sideways: The Age of Digital Warfare”, presented by Matthew Syed. BBC Radio Four, Monday 6 January 2025: https://www.bbc.co.uk/sounds/play/m0026nj9.
[4] See, for example, this story about AI’s role in preserving the Amazon rainforest: https://news.microsoft.com/source/latam/features/ai/amazon-ai-rainforest-deforestation/.
[5] Just a bit of fun: https://www.youtube.com/watch?v=ifJ1fvXjcFc.
[6] See https://premium.oxforddictionaries.com/definition/english/night-school.
[7] Phill Jones, co-founder of the MoreBrains Cooperative (https://www.morebrains.coop/) and Avi Staiman of Academic Language Experts (https://www.aclang.com/about-us/).
[8] Phill Jones (see note 7).
[9] AI developers such as OpenAI have used online information sources to train their tools without paying for the privilege. Numerous copyright violation cases have been lodged, including by some major publishers. Source: “News media versus AI: What if we win?” by Dominic Young, Press Gazette, 15 February 2024.
[10] As I write, Apple’s news AI is in trouble for making up BBC news stories: https://www.bbc.co.uk/news/articles/cge93de21n0o (6 January 2025).
[11] “Writers accept lower pay when they use AI to help with their work”, by Chris Stokel-Walker. New Scientist. 7 June 2024. See https://www.newscientist.com/article/2434307-writers-accept-lower-pay-when-they-use-ai-to-help-with-their-work/.
[12] “There are holes in Europe’s AI Act – and researchers can help fill them”, Nature, Vol. 625, 11 January 2024, p.216.
[13] Now, there’s a tech that has proved its worth in recent years.
[14] “The road to conscious machines: AI from Eliza to ChatGPT and beyond”, a New Scientist webinar presented by Michael Wooldridge of Oxford University’s Department of Computer Science, 4 June 2024.
[15] AJE presentation on AI and Curie, 16 June 2024.
[16] I’m an Advanced Professional Member of the Chartered Institute of Editing and Proofreading (CIEP). I am also a member of ACES, the American Society for Editing.
[17] “Spam, junk … slop? The latest wave of AI behind the ‘zombie internet’”, by Alex Hern and Dan Milmo. The Guardian. 19 May 2024. See https://www.theguardian.com/technology/article/2024/may/19/spam-junk-slop-the-latest-wave-of-ai-behind-the-zombie-internet.
[18] Another day, another worrying development: https://www.theguardian.com/commentisfree/2025/jan/07/ai-clone-voice-far-right-fake-audio.