Ever since ChatGPT burst onto the scene in late 2022, helpful AI chatbots for business productivity have quickly multiplied. AI-enabled tools range from the broadly useful (Microsoft Copilot) to niche tools created for a specific purpose (like a LinkedIn AI that helps HR managers answer frequently asked benefits questions).
For content marketers, AI represents a whole new area of opportunity. But it's important to recognize its limits as well. There are serious ethical considerations to keep in mind before you ask a chatbot to, say, create a white paper, article, or even a press release out of whole cloth.
To establish ethics and guidelines around their use, it's important to know how chatbots (aka generative AI, or GenAI) are "trained," and the issues that come with that process.
Chatbots in Training
The power behind a chatbot is a large language model (LLM), trained on vast stores of data. Think of the two parts as a front end and a back end: ChatGPT is the friendly, human-facing bot, while the LLM that feeds it information is a model such as GPT-4.
In most cases, LLMs are trained on specific parts of the internet, including public records, social media sites, enterprise websites, creative repositories and more. Sometimes they're simply turned loose on the entirety of the internet, which in some cases includes dark web sites and underground forums. And private companies are creating their own internal LLMs, trained on proprietary enterprise data for customized, specific use cases.
The idea behind all of these approaches is the same: have the LLM ingest a specified body of content, learn it, and then slice and dice it in ways the chatbot can deliver back to humans.
Then the second stage of training begins, which involves learning how users interact with the information on offer. What are they asking? What’s most relevant? What carries the most weight? What tasks are they using the chatbot for? All of that behavioral information is fed back into the model to help it get smarter and be more relevant over time.
AI Considerations for Content Marketers
That all sounds great, right? And it is—to an extent. But there's a catch: Models are only as accurate and fair as the information they're trained on. It's a classic case of garbage in, garbage out.
Given this, below are some considerations for content marketers when it comes to content generation.
- Erosion of Differentiation: Because LLMs are trained on vast repositories of data, the content that comes from that training tends to be “consensus-based.” The final product might give you the facts or even a passably grammatical and engaging rundown of the topic at hand—but it’s unlikely to provide unique insights, fresh angles, or expertise drawn from lived experiences. That kind of angle might not be documented within its training data. In short, if your marketing content leans into showcasing subject matter expertise, deep knowledge in the field, or any type of unique insight or voice, GenAI is not the author for you.
- Bias: LLMs that have been trained on imperfect data—such as blog content on the internet that reflects certain human biases, or decades-old commercial data that no longer reflects modern values—will surface results that can be distorted and potentially harmful. In the case of bias, an AI might think a user is older or younger than they are based on outdated stereotypes in the training data, leading to inappropriate content recommendations.
- Hallucinations: This refers to the phenomenon of an AI returning false, inaccurate, or completely made-up results due to gaps in its training data. It might make inferences to fill those gaps, which could be wrong, or draw from fictional sources. A well-known example is the lawyer who asked a chatbot to write a legal brief; the result cited several compelling court cases, none of which actually exist.
- Imperfect Guardrails: Most LLMs claim to have guardrails in place to prevent the use of AI for malicious purposes. You can’t, for example, ask ChatGPT to create a piece of malware, because it will simply tell you no. But guardrails can be patchy, or, in the case of more open models that also train on deep web sources, not there at all. That means that whatever answer you get to a question might be of dubious ethical provenance—or potentially even illegal.
- Inadvertent Data Exposure: Because models keep refining their training based on human input, information entered into a chatbot query goes back into the training data. When that includes proprietary organizational data or information on customers, it becomes fair game for chatbots to use when answering queries from users outside the organization. From a content marketer's perspective, your AI-written how-to, let's say, might contain other companies' proprietary data that is then accessible to outsiders.
- Subverting Creator Rights: Any content generation effort should respect digital rights licenses, copyrights, and the right of artists, writers, musicians, photographers and so on to get credit and compensation for their work. Many AI tools are trained on work created by others and will use it without permission. That opens up both legal liability and ethical exposure for organizations.
Great Marketing Applications for AI Use
Across the country, media newsrooms and creative agencies have been on the forefront of establishing AI content policies, but the reality is that every organization needs to start thinking about how it engages with this technology. Every organization should have an AI content policy!
At Calysto, we use AI in very specific ways, and never to create our unique content. Some of the best applications that we’ve found for AI tools so far include:
- Providing post-interview summaries or meeting minutes;
- Offering smart transcription that can differentiate between speakers and accurately capture the speech of English-as-a-second-language subject matter experts;
- Generating SWOT analyses;
- Offering snapshots of written materials to help prioritize market messaging.
Other applications that content marketers might want to consider include:
- Creating custom charts or other insights from their own first-party data;
- Producing website content like product overviews, or collecting reviews for comparison;
- Providing metrics and audience insights for content marketing campaigns;
- Executing campaigns at scale by automating core marketing tasks.
A note on this last point: Don't expect AI to replace content creators, social media strategists, or the marketing gurus in your midst. The key is to pinpoint specific "busywork" tasks AI can take over to bolster productivity and free your team up for more high-value work.
Bottom line? The genie is out of the proverbial bottle, and GenAI tools are destined to become ubiquitous. Creating an AI policy now, especially when it comes to creating and sharing high-value, engaging content, can save a lot of headaches down the line.
Want to learn more about AI for content marketing? Contact Calysto.