Reasons to be careful: Using generative AI in 2024
A disclaimer
First off, it is worth pointing out that this document is primarily concerned with Generative AI (genAI) tools, such as ChatGPT, that are aimed at originating content. There are many useful roles that AI-powered solutions can play in the overall content workflow, from analysing headline effectiveness to SEO, ideation, and increasingly powerful grammar checking and advice. And, of course, even automatic transcription apps and services are increasingly AI-driven.
This is not a Luddite manifesto advocating a scorched-earth policy and the removal of all AI tools from content creation. Nor does it look in detail at the visual uses of genAI in applications such as Midjourney and Sora. Rather, it takes a critical look at the capabilities and consequences of using genAI as the primary source of written content for corporate communications, content creation, and journalism in 2024.
Quality vs quantity
One thing that AI is very, very good at is generating large amounts of copy in a short time, whether blog posts, Amazon reviews, news articles, or anything else. As a result, many companies are pursuing a quantity-first approach, flooding their sites with content in a bid to drive up traffic.
This is, however, a scattergun approach that sacrifices quality for sheer volume. In that respect, it is an extension of the largely discredited SEO-first approach to content creation that saw companies cram keywords into their content without proper consideration of their context within a web page. It is slightly more sophisticated in that the content is usually more readable, but it is also often elliptical, meandering, padded, and devoid of references and external URLs (a lack that will often see a webpage marked down by search engines).
Furthermore, in the rapidly evolving and arcane world of SEO, it currently looks as if content that is perceived to be generated by genAI is marked down by Google and other search engines.
A reader may end up on such a webpage, but they will not stay there long, nor will they be minded to investigate the company offering the information any further. The bounce rate will be high, and clicks on calls to action (CTAs) will be marginal.
If this is true for consumer-oriented companies, it is even more so for B2B content, which is targeted at an expert audience. While traffic is always a good thing, it is the high-quality content that can attract business leaders that truly counts.
Think of it like a tradeshow. Footfall is always welcome, but in the ideal world, you want a significant percentage of the visitors to your booth to be C-level decision-makers and not students looking for freebies.
This is not to say that genAI will never be able to write closely argued, carefully targeted, fully referenced, topical content that, in the words of Lord Reith, both entertains and informs. But it certainly is unable to do so in 2024.
It’s not just a lack of capability, however, that should limit the use of genAI. There are also several reasons to be wary of its usage that have both legal and reputational ramifications.
Accuracy and hallucinations
One of the major Achilles' heels of genAI is its tendency to hallucinate and simply make up information in order to follow the instructions of a given prompt. This is well documented and has afflicted the technology from the early days of public genAI: Google's Bard hallucinated a fact about exoplanets as part of its launch materials, wiping an astonishing $100bn off parent company Alphabet's stock valuation as a result.
Such reputational damage can be difficult to recover from, especially if you are not Google.
The problem intensifies around technical subjects that go into depth about operations, workflows, and more. If a company is making an important point about a new product, it needs that information to be accurate. The difficulty with genAI is that not only can it lie, it can lie extremely convincingly. The result is that every word generated has to be carefully checked by humans as an essential part of any content creation workflow. Facts, figures, and individual nuances of meaning all have to be carefully assessed for veracity, a task made all the more complex by many genAI tools' lack of transparency and of clear references to source material.
IP infringement and copyright
The problems relating to source material are another deeply serious issue with AI. Many of the most popular genAIs, including ChatGPT, have been trained on the open web. That means they have been trained on copyrighted content at some point in their life cycle, a matter that is currently keeping a lot of lawyers very busy.
This is a complex issue, and it’s perhaps worth quoting Variety here, which has created several very detailed guides to the likely impact of genAI on the creative industries and some of the issues surrounding its usage.
———————————————————
When can enterprises or individuals use AI-generated content without liability risk?
AI-generated outputs might themselves infringe copyright, if copyrighted data was used to train models that produced it. So far, lawsuits alleging copyright infringement have been brought against the AI companies, not end users of AI systems. However, until this question is settled, a possibility exists that content creators or companies could become liable.
Some tech companies including Google, Adobe and Microsoft have offered indemnification clauses and funds to protect enterprise customers who use their gen AI tools against liability and cover claims if they’re sued. Even so, media companies may want to restrict or sanction artists or creative teams to certain “ethical” gen AI tools that train exclusively on owned or licensed material.
———————————————————
Ethical tools include, for example, Adobe's Firefly image generation tools, which have been trained solely on content that the company has already licensed. Shutterstock has a similar genAI toolset.
A related issue is whether companies using genAI can themselves copyright AI-generated content. The current guidance from the US Copyright Office is that entirely AI-generated works are not copyright-protectable, but that the human-authored elements of AI-assisted works are. The interesting legal challenge, of course, will be separating the two.
This could be a consequential issue, especially when it comes to the creation of high-value gated content such as white papers.
Questions of security
Many of the larger companies that are either using genAI explicitly in their workflows or assuming that staff are doing so on an ad hoc basis have issued internal guidelines on its use. These usually echo the disclaimers found in many genAI offerings bundled into existing tools: users should avoid putting sensitive data or information directly into AI channels.
The problem is that AI is trained on massive amounts of data and does not discriminate between sensitive company information and publicly shared content. The result can be significant data leakage. And while OpenAI says it is planning to roll out memory controls so that ChatGPT users can specify when it 'forgets' data entered into it, it is likely not until we reach a new generation of on-device AIs that we will be able to talk about data security and genAI with any confidence.
In the meantime, one of the latest surveys on the subject suggested that 31% of employees using genAI acknowledged having entered sensitive data into these tools. And, of course, as genAI's use spreads, it becomes an increasingly attractive target for bad actors.
———————————————————
"Already in 2023, X-Force observed over 800,000 posts on AI and GPT across Dark Web forums, reaffirming these innovations have caught cybercriminals attention and interest.
“Enterprises should also recognise their existing underlying infrastructure is a gateway to their AI models that doesn't require novel tactics from attackers to target -- highlighting the need for a holistic approach to security in the age of generative AI.”
Source: IBM Newsroom
———————————————————
Relationships and expertise
Lastly, it is important to consider one of the more ephemeral metrics: relationships.
The media industry remains very much a people business. Relationships are built, often over decades, and reaffirmed at trade shows around the world — and even on Zoom calls — on a regular basis. There is a reason that networking is seen as such a key part of any industry get-together; people do business with other people that they a) like and b) trust. And in an era of increasing commoditisation and virtualisation of products, it is often these factors that can give a company an edge over its rivals.
Writers, PR companies, production companies, event companies, and more form an intricate, meshed network of contacts that stretches around the globe. And while these connections never appear on a balance sheet, they are vital for achieving successful outcomes at many levels of the business and are likely to remain so for many years to come.
I recently spoke with a PR company that had spent well over a year trying to repair a broken relationship between a large vendor and a press outlet. After plenty of hard work behind the scenes, not only was a détente reached but a positive relationship was formed, with the media company starting to turn regularly to the vendor's CEO for comment and thought leadership.
Even if genAI could clear all the content creation hurdles already mentioned (and, to be clear, it currently can't), it would still require human intervention and human relationships to work effectively.
In conclusion, tools and users
There’s a phrase that’s common in the film industry: a tool is only as good as its user. It’s probably common in a lot of other industries as well, but given the sheer pace of progress in the media world over the past decade or so, it seems particularly relevant here.
Basically, it means that if you stick a $50,000 RED camera in the hands of someone who has never learned to frame a shot or light a scene, you're not going to get anything better out of them than if you'd handed them a consumer video camera from Walmart. The footage might be higher resolution and better exposed, but it will still betray the lack of experience and expertise behind it.
This is where we are with genAI at the moment. As a behind-the-scenes tool, it can play an important role in optimising content and, in some circumstances, help people do more with less. But on the whole, it is far from ready for anything that could be considered front of house just yet. In the final hours of writing this document, Google said that it had had to stop its newly minted Gemini genAI from creating images of people while it addressed a problem with how it depicted ethnicity in historical images. Such issues persist across genAI. In 2016, Microsoft had to turn off its AI chatbot Tay after 24 hours when it became, in The Verge's words, “a racist asshole.” A cynic would say that the main difference between then and now is that the researchers actually turned it off rather than simply trying to dial down the racism so as not to offend the shareholders.
So, what can companies do? The answer is to turn to the people who can use the tools in the right way. If genAI is going to lead us into a race to the bottom and flood the web with low-grade content, then the way to stand out is not to create more of the same but to produce high-quality content that people actually want to read and watch. It's about making your leadership spokespeople stand out, highlighting their personal expertise and how they can help your customers. It's about interviews, it's about videos, it's about emphasising the human experience and human contacts. Arguably, we might be heading back to a point where the trade show is even more important than it was pre-pandemic.
And yes, genAI and AI tools in general will most likely have a hand in creating all of this content. But for this year, and for the foreseeable future at least, genAI needs to be seen as a useful tool on the journey, not the destination itself.