Adopting AI: Narrative Risk and the New Rules of Reputation
For Rosely Group’s latest executive marketing and comms roundtable, we dived deep into the intersection of AI, brand reputation and narrative, and leadership. At the very core of our discussion was how to adopt AI in a way that doesn't erode trust, quality, or the human edge that still matters.
To set the foundations, we opened with a keynote from Dr Stephen Anning exploring the tension between what AI can do and what it should do. To really dissect this dichotomy, we went back to basics: understanding how AI actually works, with a grounding in theories of knowledge. From there, we learned how to build better human-machine partnerships - and how to defend both their outputs and our own decision-making.
You can access Stephen’s slides here to refresh on his presentation.
It was from this knowledge sharing that we kicked off our marketing and comms leaders discussion - and a few key themes emerged:
The risk/reward tension
Building AI into your product offering can be a genuine competitive advantage, especially for organisations willing to move early. But there are real trade-offs. In regulated industries and sectors built on human relationships - as is true for many of the financial and professional services firms represented in the room - internal efficiency gains aren’t necessarily something you can translate into external messaging. You have to take your more risk-averse customers and regulators into consideration.
Start with the problem, not the tool
With pressure from above, whether investors or the board, to move fast with AI adoption, there's a real risk of reaching for a new tool first and then looking for problems to apply it to. The group agreed this needs to flip: start with the business challenge, then decide whether AI actually helps.
Successful adoption requires structured change
We are still early in the AI journey, with plenty of twists and turns ahead that may slow adoption. However, failing to implement a proper change programme with a vision for the future will be detrimental. Create alignment across the business by mapping where you are now against where you need to be - think about the tools that will enable that evolution, the policies and procedures, and the team and culture you need to build to champion that new state of being.
You can read more on this in the report from MIT on why generative AI pilots are failing.
Authenticity comes out on top
AI-generated content “slop” is under fire. We’ve all seen it across LinkedIn, cold sales emails and maybe even some of the content produced internally by your colleagues - it’s cringe. The reaction usually isn’t positive, and there's growing fatigue around it. The same scepticism came up around digital avatars, using the example of Reid AI - the digital avatar of Reid Hoffman, co-founder of LinkedIn. Who wouldn’t feel duped holding a conversation with an avatar instead of a face-to-face meeting?
In the same way, journalists and the big media companies are pushing back against AI-written press releases (see the Reach PLC example). They don’t want to run the risk of reporting false news stories, using hallucinated sources, or citing quotes that haven’t come directly from an industry expert.
For more on this, Con Franklin, Rosely Group Director, shared his thoughts on the value of authenticity in the AI-age in a recent blog post here.
Quality over volume
With AI agents able to produce content in a matter of seconds, it’s no wonder we are seeing content volumes surge. But we shouldn’t feel pressure to produce more content for its own sake. Content strategies need to be carefully crafted to deliver on your marketing strategy, and teams need to maintain oversight of and protect what goes out. This is especially pertinent in regulated industries, where established policies and content guidelines exist precisely so you don’t fall foul of regulation.
Regulation is still to come
There wasn't much consensus on regulatory input. Some organisations are waiting for clearer direction; others see an opportunity to move ahead and help shape what good looks like. The general feeling was that meaningful regulation will be hard until the technology is used more widely across more use cases - and until we have greater clarity on the power of this beast! There seems to be “a race between the good guys and the bad guys” to harness this technology, as one guest said in response to the news of Anthropic’s Mythos model this week.
Some guiding principles for your AI journey
While there is much to think about, debate, and forecast for the next stages of AI, there are a few evergreen practical tips we touched on in the session:
Start with low-risk use cases where AI saves time without compromising trust.
Keep humans involved in high-stakes decisions and relationships.
Be honest about whether AI is actually solving the problem, or just generating more noise and output.
Get in touch
If you want to discuss authenticity, brand narrative and reputation in more detail, or your PR strategy more generally, please reach out to Jonathan to schedule a call.