AI in the Workplace: Tackling Shadow Tools, Compliance Risks, and Building Smarter Governance

ORIGINALLY PUBLISHED ON LINKEDIN - July 28, 2025

Insights from the OXBY AI Exchange session with Robyn MacMillan and Daniela Rodrigues

Introduction

Delighted to share another article in my Intelligence and Insight series.

The growing presence of artificial intelligence in our daily workflows is no longer speculative — it’s silently embedded into the very platforms we rely on. According to new insights from the OXBY AI Exchange webinar, 35% of AI-enabled tools in the workplace are operating without users even being aware of their AI capabilities.

As someone passionate about the intersection of ethics, digital strategy, and innovation, I found this session to be both eye-opening and affirming.

Hosted by Robyn MacMillan and Daniela Rodrigues, the discussion laid bare the practical and psychological drivers behind unauthorized AI use—often referred to as “Shadow AI”—and why organizations need to shift from fear-based control to trust-based governance.

The Current State: Unseen AI and Growing Gaps

Modern platforms such as Microsoft 365, Salesforce, HubSpot, and Notion regularly roll out AI-powered features with minimal fanfare. These features, while valuable, create compliance and security risks when they’re not acknowledged or tracked.

  • Only a minority of organizations track AI usage across departments.

  • Most employees lack clear guidance on what constitutes acceptable AI use.

  • Shadow AI incidents, including internal leaks at Amazon and Samsung, show just how real — and how avoidable — these risks are.

Add to this the decentralized nature of AI tools — browser-based, free, and easily accessible — and you have a perfect storm of unmanaged risk.

What’s Driving Shadow AI?

The session revealed several contributing factors behind unapproved AI adoption:

  • 📈 Productivity pressure: Employees turn to AI to offload cognitive load and meet performance targets.

  • 🧩 Poor internal tooling: When legacy systems are rigid, staff will seek workarounds.

  • 🧠 Instant gratification: Tools like ChatGPT and Notion AI offer fast results, bypassing bottlenecks.

  • 😨 FOMO culture: Peer comparison and social media success stories create pressure to “keep up.” And yes, I will admit to looking up FOMO: Fear Of Missing Out.

Something I feel passionate about is integration. Indeed, ‘choose your weapon’: which AI platform do you ultimately go with, and from it, how do you centralise and manage your data in the most practically secure environment?

Recommendations for Responsible AI Governance

Here’s where the session’s guidance truly shines. Rather than defaulting to enforcement, the panel recommended a shift toward principle-based AI governance, with transparency, education, and cross-departmental collaboration at its core.

Whilst I believe that, ultimately, legislation is paramount for safeguards and for driving industry compliance, the key points from the session are a great ‘Starter for 5’ if you are embarking on this discovery.

✅ 1. Audit First, Not Last

Conduct regular cross-functional AI audits involving IT, operations, HR, security, and marketing. Identify all AI tools currently in use, whether approved or unsanctioned.

✅ 2. Build a Living AI Registry

Use accessible tools like Notion or Airtable to log AI tools, use cases, purpose, and data sensitivity. A “living” registry ensures tracking evolves with your tech stack.
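As a concrete sketch, a registry entry might capture the fields mentioned above: tool, use case, purpose, and data sensitivity. The Python model below is purely illustrative — the field names, example tools, and approval logic are my own assumptions, not the panel’s — but the same structure maps directly onto a Notion database or an Airtable base.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RegistryEntry:
    """One row in a living AI registry; field names are illustrative."""
    tool: str                # e.g. "Notion AI"
    owner: str               # accountable team or person
    use_case: str            # what the tool is actually used for
    data_sensitivity: str    # e.g. "public", "internal", "confidential"
    approved: bool = False   # has governance signed off?
    last_reviewed: date = field(default_factory=date.today)

# A minimal in-memory registry keyed by tool name.
registry: dict[str, RegistryEntry] = {}

def log_tool(entry: RegistryEntry) -> None:
    """Add or update a tool, sanctioned or shadow."""
    registry[entry.tool] = entry

def unapproved_tools() -> list[str]:
    """Surface shadow tools still awaiting governance review."""
    return sorted(t for t, e in registry.items() if not e.approved)

log_tool(RegistryEntry("ChatGPT", "Marketing", "copy drafting", "internal"))
log_tool(RegistryEntry("Copilot", "IT", "code assistance", "confidential",
                       approved=True))
print(unapproved_tools())  # → ['ChatGPT']
```

The point of the “living” part is the `last_reviewed` date and the `unapproved_tools` view: the registry is only useful if someone regularly revisits it as the tech stack changes.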

✅ 3. Replace Controls with Principles

Rigid rules often push AI use underground. Instead, create principle-based policies that guide usage while supporting innovation and autonomy.

✅ 4. Normalize AI Conversations

Encourage AI retrospectives in team meetings. Ask what tools are being used, what worked, and what didn’t. Open dialogue breaks stigma and builds collective intelligence.

✅ 5. Onboard With Clarity

New joiners should be informed of the organization’s AI expectations, including an approved tools list, risk considerations, and who to contact for advice.

A Better Way Forward

AI is not going away, but shadow use, secrecy, and fragmentation put both data and people at risk. This session made one thing clear: AI governance must evolve beyond compliance. It should educate, empower, and encourage ethical adoption.

As we move deeper into the age of AI-augmented work, let’s ask ourselves:

🔍 Is our organization promoting AI innovation safely—or merely reacting to it?

Final Thoughts

Transparency is not a weakness—it’s a strength. Let’s build a culture where AI is visible, accountable, and beneficial for everyone involved.

If your team, department, or organisation is navigating these waters, I’d love to connect and explore this journey with you.

It's all about AI working for us, not around us.

🔗 Please like, share, or comment if this resonates with your experience. I welcome your thoughts and stories around Shadow AI, governance, and responsible tech leadership.

#AIgovernance #ShadowAI #DigitalTransformation #ResponsibleAI #WorkplaceTech #TechEthics #InnovationLeadership #TyDaviesInsight #TDii #OXBYAIExchange

Tyrone Davies

Ty Davies Intelligence & Insight Ltd is a digital consultancy established to provide high-quality, strategic advisory services to public sector bodies, private enterprises, and third-sector organisations. With specialisms in AI implementation, Agile transformation, cloud migration, and digital strategy, the company leverages Ty Davies' 25+ years of leadership across the UK and the Isle of Man. Services will be provided on a freelance basis, with Ty as the sole director and employee.

https://TDii.co.uk