Picture this: It’s 2025. Your marketing intern used an AI tool to generate content for your biggest client, accidentally included hallucinated product features, and hit send before anyone could review it.
Gave you a chill, didn’t it?
As the creator economy races to adopt generative AI tools, pausing to build proper content governance should be the next move.
Lucky for us, Kate O’Neill, author of “What Matters Next” and founder and CEO of KO Insights, shared her wisdom on navigating the wild west of AI-powered content creation before your team faces its own content crisis.
This interview is part of G2’s Q&A series. For more content like this, subscribe to G2 Tea, a newsletter with SaaS-y news and entertainment.
To watch the full interview, check out the video below:
Inside the industry with Kate O’Neill
Your latest book, “What Matters Next,” addresses future-ready decision-making. Can you tell us how this applies specifically to content risk management?
I think future-ready decision-making is a concept, or a mindset, that involves a balance between business objectives and human values. This plays out in tech because the scale and scope of tech decision-making is so large, and a lot of leaders feel daunted by how complex the decision-making is.
Within content risk management, what we’re looking at is a need for governance and a kind of policy to be put in place. We’re also looking at a proactive approach that goes beyond just regulatory compliance.
The key is understanding what matters in your current reality while anticipating what could be important in the future, all guided by a clear understanding of what your organization is trying to accomplish and what defines your values.
I think the focus on developing strong internal frameworks will really benefit people when it comes to content risk. Those frameworks should be based on purpose and organizational values. It is very important to have a really clear understanding of what the organization is trying to accomplish and what defines its values.

Transform your AI marketing strategy.
Join industry leaders at G2’s free AI in Action Roadshow for actionable insights and proven strategies to reimagine your funnel. Register now.
Talking about content risks, what are the most significant hidden risks in content strategies that organizations typically overlook, and how can they be more mindful going forward?
When I worked for a large enterprise on the intranet team, our focus was not just on content dissemination but also on maintaining content integrity, managing regulations, and preventing duplication. For example, different departments often kept their own copies of documents, like the code of conduct. However, updating these documents could lead to inconsistent versions across departments, resulting in “orphaned” or outdated content.
Another classic example I’ve seen many times is some kind of work process getting instantiated and then codified into documentation. But that document reflects one person’s quirky preferences, which stay ingrained in the documentation even after that person leaves. This leads to maintaining non-essential information without a clear reason. These are the kinds of risks that are very low-key; they are low-harm risks, although they add up over time.
What we’re seeing at the higher-risk end is not having clarity or transparency across communications and not being able to understand which stakeholders are responsible for different pieces of content.
Also, with generative AI being used inside organizations, we see a lot of people producing their own versions of content and then sending that out on behalf of the company to clients or to outside-facing media organizations. And those aren’t necessarily sanctioned by the stakeholders within the organization who want to have some kind of governance over documentation.
A comprehensive content strategy that addresses these issues at the regulatory, compliance, and business engagement levels would go a long way toward mitigating these risks.
With content strategies becoming global, how have regulatory differences across global markets complicated content risk management, particularly with the emergence of generative AI? What specific compliance issues should organizations be most concerned about?
We see this a lot across many fields of AI. We’re seeing how generative AI, particularly because of its widespread use, is clashing with global regulations. Especially in regions like the U.S., where deregulation is prominent, companies face challenges in establishing effective internal governance frameworks. Such frameworks are crucial for ensuring resilience in global markets and for preventing issues like the dissemination of unrepresentative content that could misalign with a company’s values or positions, potentially compromising safety and security.
We need to think about resilience and future readiness from a company leadership standpoint. That means being able to say, “We need the best kind of procedures for us, for our organization.” And that is probably going to mean being adaptable to any market. If you do business globally, you need to be prepared for your content to be consumed or engaged with by global markets.
“I think focusing on developing value-driven frameworks that transcend specific regulations is the best way to go.”
Kate O’Neill
Founder and CEO of KO Insights
We need to think proactively about governance so that we can create the kind of competitive advantage and resilience that will help us navigate global markets and changing circumstances. Because as soon as any particular government changes to a different leader, we may see complete fluctuation in these regulatory states.
So, by focusing on long-term strategies, companies can protect their content, people, and stakeholders and stay prepared for shifts in governmental policies and global market dynamics.
I see that you’re very active on LinkedIn, and you talk about AI capabilities and human values intertwining. So, considering the balance between AI capabilities and human values, what framework do you recommend for ensuring that AI-powered content tools align with human-centric values and not vice versa?
Contrary to the belief that human-centric or values-driven frameworks stifle innovation, I believe they actually enhance it. Once you understand what your organization is trying to accomplish and how it benefits both internal and external stakeholders, innovation becomes easier within those well-defined guardrails.
I recommend using the “now-next continuum” framework from my book “What Matters Next.” This involves identifying your priorities now, engaging in scenario planning about likely future outcomes, defining your preferred outcomes, and working to close the gap between the likely and the preferred.
This exercise, applied through a human-centric lens, is the best thing I can think of to facilitate innovation because it lets you move quickly while also ensuring you’re not moving so quickly that you harm people. It creates a balance between technological capability and ethical responsibility that benefits both the business and the humans connected to it.
“Think about the balance between technological capability and ethical responsibility, and do that in a way that benefits the business and the humans who are inside and outside of the business at the same time.”
Kate O’Neill
Founder and CEO of KO Insights
Looking ahead, what skills should content teams develop now to be prepared for future content risks?
Content teams should focus on developing skills that combine technical understanding with ethical considerations until that integration becomes second nature. The other piece is proactive leadership: really thinking about how much uncertainty there is because of geopolitics, climate, AI, and numerous other topics.
And given the uncertainty of this time, I think there’s a tendency to feel very stuck. Instead, this is actually the best time to look ahead and do the integrative work of understanding what matters now and what will matter in the future, from one year to 100 years ahead.
The key is pulling those future considerations into your current decisions, actions, and priorities. This forward-looking integration is the essence of “What Matters Next” and represents the skills many people need right now.
If you enjoyed this insightful conversation, subscribe to G2 Tea for the latest tech and marketing thought leadership.
Follow Kate O’Neill on LinkedIn to learn more about AI ethics, content governance, and responsible tech.
Edited by Supanna Das