# AI Bootcamp Follow-up ## From Awareness to Agents Leo Pharma — Global Clinical Development February 27, 2026
# McKinsey now counts 25,000 AI agents as employees > "How big is McKinsey? 60,000 — but 40,000 humans and 25,000 agents." > — Bob Sternfels, CEO McKinsey, CES January 2026 ([Business Insider](https://www.businessinsider.com/mckinsey-workforce-ai-agents-consulting-industry-bob-sternfels-2026-1)) - 18 months ago: **3,000 agents**. Today: **25,000** - Every employee enabled by at least one agent - **1.5 million hours saved** → Let me show you what one actually looks like.
# AI agents reviewed every slide you'll see today
Torben → Reviewer → Leo Director → Designer (◀ next round)
**Reviewer** checks structure and sources · **Leo Director** reacts as a skeptical GCD director · **Designer** applies visual standards · **Torben** decides what stays. Human in the loop — agents analyze, draft, and refine. The human decides.
# A 150-line instruction turns AI into a skeptical GCD director **Persona (who the AI becomes):** - Director in GCD, 15+ years in pharma, manages 20-50 people - Results-oriented, time-poor, skeptical of hype, respects data - Submits to EMA and FDA — wants regulatory limits acknowledged > → **You don't write this from scratch.** Start with 5 lines — you'll build your own reusable instruction later today.
# The Leo Director reviews every slide through six lenses - "So What?" — What do I DO with this? - "Evidence" — Says who? Is this pharma-relevant? - "Already Tried That" — What's new since the bootcamp? - "Time" — Is this worth my 3.5 hours? - "My Team" + "Mirror" — How do I bring this back, and did he ask where WE are on this? > → **A skill is a reusable instruction for a specific recurring task — you'll write your own later today.**
# What does an agent look like? Let me show you Claude Code — an AI agent that builds working tools from a conversation *Live demo: we'll build something for GCD in real time*
# Agents live at level three — most teams start at level one ### 🔧 Level 1: Tooling — give people access Copilot · LEO AI — does every team member have access? ### 🏢 Level 2: Culture — embed AI in process Is "AI-assisted?" a standard step in your workflows? ### 🤖 Level 3: Agents — let AI act autonomously Copilot Studio · Claude Code — multi-step, process-integrated > → **Today we work on all three. Let's find out where GCD is.**
# Agenda and what we need to land today ### Morning - Where are we now? (data + discussion) - Value creation & how to measure it ### Before Lunch - Prompting hands-on - More live demos & AI leadership actions - Decide: 3 use cases we commit to **There's more here than we can cover together.** This presentation lives online — revisit any slide, follow the sources, go deeper where it matters for your team.
# Only 1 in 50 AI investments delivers transformational value - **Only 1 in 50 AI investments** delivers transformational value ([HBR, Feb 2026](https://hbr.org/2026/02/why-ai-adoption-stalls-according-to-industry-data)) - Only **1 in 5** delivers any measurable ROI ([HBR, Feb 2026](https://hbr.org/2026/02/why-ai-adoption-stalls-according-to-industry-data)) - **Only 25% of leaders** report achieving significant value from their AI investments ([BCG, Sep 2025](https://www.bcg.com/publications/2025/closing-the-ai-impact-gap)) - Just **4% have achieved repeatable business value** at scale ([Economist Impact, Feb 2026](https://impact.economist.com/technology-innovation/from-intent-to-action/report)) > → That's where the market is. **How many of your AI initiatives have delivered measurable results?**
# The question shifted: not *should we use AI* but *how do we get value* > "We're no longer asking *should we use AI?* > We're asking *how do we get value from it?*" - Awareness → **Adoption** → **Embedded processes** - Only 13% have deployed AI into actual workflows ([BCG, Sep 2025](https://www.bcg.com/publications/2025/closing-the-ai-impact-gap))
# Most AI value hides in tasks you repeat every week How do we find it? How do we measure it?
# AI saves hours on leadership tasks — email, reports, meeting prep - Study protocols: **25% time saved** on drafting with AI ([AstraZeneca, 2025](https://www.artificialintelligence-news.com/news/astrazeneca-ai-clinical-trials-2025/)) - Email drafting: ~15 min → ~2 min *(typical across engagements)* - Better first drafts — fewer revision cycles, more time for expert review > → **Which of these time sinks hits your team hardest? Where would 2 hours back per week matter most?**
# A 45-minute steering committee summary takes 10 minutes with AI ### Before (manual) Write from scratch — 45 min, 3 revision rounds ### After (AI-assisted) Paste notes → structured summary in 3 min → you review and refine — 10 min total > → **How many of your recurring reports follow this pattern — manual drafts, multiple revision rounds?** *Live demo: we'll do this together*
# The 4% that create real AI value share five traits Only 4% of companies achieve repeatable AI value at scale ([Economist Impact, Feb 2026](https://impact.economist.com/technology-innovation/from-intent-to-action/report)). What they do differently: - **Task-oriented:** Solve a specific, recurring pain point - **Measurable:** Before/after comparison is possible - **Low friction:** Integrated where people already work - **Champion-driven:** Someone owns it and shows others - **Domain-grounded:** AI + expert knowledge, not AI alone > → **Let's score GCD against these five: where do we do well? Where is the gap?**
# Embed AI in workflows — make it part of how you work ### ✅ What the 4% do differently - Add **"AI-assisted?"** as a step in existing process templates - Make AI output the **default first draft** — human review is the quality gate (non-negotiable for regulatory deliverables) - Run **monthly 10-min AI demos** at team meetings — normalize it ### ❌ What keeps the 96% stuck - AI is optional → no one prioritizes it - Only 38% have a skills budget, only 36% say training is sufficient ([Economist Impact](https://impact.economist.com/technology-innovation/from-intent-to-action/report) · [BCG](https://www.bcg.com/publications/2025/closing-the-ai-impact-gap)) > → **How many of your team processes have "AI-assisted" as an explicit step today?**
# Where is GCD's biggest gap between AI potential and daily usage? > **Quick pulse — hands up:** > How many of you have explicitly defined that AI should be part of your team's processes? > And in how many places is it still voluntary? - Where have you seen **early wins** worth scaling? - What's **blocking** broader adoption? - What would you **try this quarter** if you had 30 minutes of support?
# AI agents collaborate on tasks your teams still do manually - **What it is:** Multiple AI agents working together — one researches, one drafts, one reviews - **GCD example:** CRO status update → structured summary + risk flags + decision brief - By end 2026, **40% of enterprise apps** will have AI agents ([Gartner 2025](https://www.gartner.com/en/newsroom/press-releases/2025-08-26-gartner-predicts-40-percent-of-enterprise-apps-will-feature-task-specific-ai-agents-by-2026-up-from-less-than-5-percent-in-2025)) *Live demo — OpenClaw*
# Your expertise is the differentiator — AI just makes it faster Context + Prompting + Domain Knowledge
# Context + domain knowledge matters more than which model you use > 82% of enterprise leaders now use AI weekly — the gap is **how** they use it, not **which** tool ([Wharton/GBK, Oct 2025](https://knowledge.wharton.upenn.edu/special-report/2025-ai-adoption-report/)) - **Context:** Give AI the background it needs (trial phase, indication, stakeholders) - **Prompting:** Structure your request clearly - **Domain knowledge:** You are the expert — AI is the accelerator > → Quick hands up: **who uses AI daily? Weekly? Rarely?**
# One framework turns vague requests into expert-level AI output | Block | Question | GCD Example | |-------|----------|-------------| | 🎭 **Role** | Who should the AI be? | Senior clinical project manager | | 🌍 **Context** | What background does it need? | Phase III atopic dermatitis, EMA submission | | 📋 **Task** | What exactly should it do? | Summarize CRO status report | | 📐 **Format** | How should the output look? | Bullets, max 1 page, flag risks |
# A 30-second context investment transforms AI output quality ### ❌ Weak "Write a summary of the meeting." ### ✅ Strong "You are a senior clinical project manager. I had a 1-hour steering committee meeting about trial X. Here are my notes: [paste]. Write a structured summary with: decisions made, action items with owners, and open questions. Max 1 page, bullet points."
# Improve this prompt > "Help me write an email about the project delay." Rewrite it using the 4-block framework: **Role + Context + Task + Format** Use Copilot, Claude, or whichever AI tool you have open — compare before/after.
# A prompt solves one task. A reusable instruction solves it every time - **Prompt:** "Summarize this email" → generic output, every time - **Reusable instruction:** Paste 5 lines first → expert-level output, every time - You write it once, save it in a note, paste it at the start of any AI conversation - The agents that reviewed this presentation? Each one started as a short instruction like this > → **Think: which task do you do every week that follows the same pattern?**
# Copy this into your AI tool — then change the bold parts > Open Copilot or LEO AI. Paste this text. Change what's in **bold** to fit your task. You are a **senior director in clinical development**. You work with **CRO oversight and trial governance**. When I give you a **status email from a CRO**, always do this: 1. Summarize the **3 key points** in bullets 2. Flag any **risks, delays, or open questions** 3. Draft **2-3 decision points** I should raise at the next steering committee Format: Bullets, max 1 page. Always include next steps with owners. Quality rule: **Use plain language. Flag anything that could affect a regulatory timeline.** > Now paste a real email, report, or meeting note below it — and compare the output to what you'd normally write.
# ☕ Break — 15 minutes We're halfway. Grab coffee, check your phone — we'll pick up with leadership & demos.
# Leadership sets the pace — not the technology Visible use matters more than strategy decks
# Managers who must approve AI are the ones who face the biggest shift > "The very leaders asked to champion AI adoption face three identity threats: role compression, control shift, and span erosion" — [Harvard D3, Feb 2026](https://d3.harvard.edu/the-managers-ai-dilemma/) - **Role compression:** AI handles tasks that defined your expertise - **Control shift:** Teams use AI autonomously — less need for your approval - **Span erosion:** AI absorbs work your team once did, leaving smaller teams and fewer direct reports - **The paradox:** AI doesn't reduce work — it intensifies it ([HBR, Feb 2026](https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies-it)) > → **Which of these three threats feels most real in your role today?**
# How visible is AI in your daily leadership? > **Hands up:** > How many of you use AI visibly in front of your team — weekly? - Do your teams know **which AI tools** you use personally? - Have you ever **shared a prompt** in a meeting or email? - What's one **visible AI action** you could take at your next team meeting?
# Five leadership actions that accelerate AI adoption this quarter Five things that consistently make a difference: - **Use AI visibly** — share your screen, show your prompts, iterate in front of your team - **Change the template** — add AI as a standard step in reports and status updates - **Appoint a champion** — one person per team owns AI adoption, reports monthly - **Protect experiment time** — 30 min/week for AI is an investment, not a perk - **Track it** — add "AI-assisted deliverables" to your quarterly review
# Pharma peers already use AI for these tasks daily - **AstraZeneca:** 12,000 employees upskilled — 85-93% report productivity gains ([Source](https://www.clinicalresearchnewsonline.com/news/2025/03/04/astrazeneca-scaling-up-use-of-generative-ai-to-help-reach-2030-ambitions)) - **Pfizer:** AI saves scientists 16,000 hours/year, targeting $1.2B savings by 2027 ([AWS](https://aws.amazon.com/solutions/case-studies/pfizer-PACT-case-study/)) - **Roche:** 80-90% Phase I success for AI-developed candidates vs. ~40% traditional ([Roche](https://www.roche.com/stories/ai-revolutionising-drug-discovery-and-transforming-patient-care)) > → Those are peers. **What about GCD — any examples worth sharing?**
# Directors are the critical layer — you bridge vision and execution ### 🔝 EVP / Leadership Set the direction — strong alignment ### 🔄 Directors (YOU) The only layer that can translate strategy into daily practice — organizations must build "change fitness", not just deploy technology ([Harvard Business School, 2026](https://www.library.hbs.edu/working-knowledge/ai-trends-for-2026-building-change-fitness-and-balancing-trade-offs)) ### 👤 Individual Users Good understanding — need permission & tools from you > EMA and FDA now share 10 common principles for AI in drug development — regulators are ready. Are we? ([EMA/FDA, Jan 2026](https://www.ema.europa.eu/en/news/ema-fda-set-common-principles-ai-medicine-development-0)) · **Who in your team would you appoint as AI champion tomorrow?**
# Chat tools can't do what agents need — the tooling question matters - **Chat** (Copilot, LEO AI) — one question, one answer - **Integrated** (Copilot in Outlook/Teams) — AI where you already work - **Agentic** (orchestrated pipelines) — multi-step, process-integrated. Where value scales - Chat alone **can't** orchestrate multi-step workflows across systems - One-third of companies already deploy agentic AI systems ([MIT Sloan/BCG, Nov 2025](https://sloanreview.mit.edu/projects/the-emerging-agentic-enterprise-how-leaders-must-navigate-a-new-age-of-ai/)) > → **Worth considering:** Does Leo's current toolstack — Copilot, LEO AI — support what we just discussed? What would you need from IT?
# Leading companies deploy agents as infrastructure, not chat windows - **Pharma:** Moderna drafts regulatory docs across departments; J&J runs autonomous drug discovery ([Medable 2026](https://www.medable.com/knowledge-center/how-agentic-ai-is-transforming-life-sciences-discovery-and-operations)) - **73% of pharma companies** are already planning or piloting agentic AI ([Medable 2026](https://www.medable.com/knowledge-center/how-agentic-ai-is-transforming-life-sciences-discovery-and-operations)) - The model itself is less than 20% of the effort — **over 80% is integration, governance, and validation** ([MIT Sloan, Feb 2026](https://mitsloan.mit.edu/ideas-made-to-matter/5-heavy-lifts-deploying-ai-agents)) - Chat is the right **starting point** — agents need **API access + orchestration** > → **Open question:** What capabilities would Leo need to move beyond chat? Who should own that conversation?
# Start with micro-habits, not transformation projects Small daily actions beat big quarterly plans: - **"Summarize this email thread"** — 30 seconds - **"Draft this meeting agenda from my notes"** — 2 minutes - **"Review this document for inconsistencies"** — 1 minute - **"Prepare 3 questions for this stakeholder meeting"** — 1 minute Pick one. Try it this week.
# LEO AI already does this inside your own tools - Creating presentations that used to take hours - AI integrated where you already work - Available to you — ask your AI champion how to get started *Live demo*
# Which tools could your team start with this month? > **Quick round — 30 seconds each:** > Which demo resonated most with your team's daily work? - What would you **show your team** tomorrow if you had 10 minutes? - Which task in your department would benefit most from **AI + human review**? - Where is the **lowest-risk, highest-frequency** starting point?
# We leave with three decisions that make AI real for GCD From discussion to action
# Pick 3 use cases your teams can show results on this quarter > Think about your team's biggest time sinks. > Which ones could AI meaningfully improve *this quarter*? - Criteria from the top 4%: **High frequency** + clear before/after + **someone owns it** ([Economist Impact](https://impact.economist.com/technology-innovation/from-intent-to-action/report)) - Each use case needs an owner — let's agree on who
# Clear ownership turns good ideas into actual adoption ### Organizational setup - **AI as standing agenda item** at GCDLT — one champion per department reports monthly - **Dedicated AI skills budget** — even small. The top 4% treat it as a line item ### Track what matters (pick 3) - Hours saved per use case (before/after) - Active users per department (trend, not snapshot) - Number of processes with "AI-assisted" as standard step
# Thank You ### Let's make AI work for GCD in 2026 All slides, sources, and links: [leo-ai-followup.pages.dev](https://leo-ai-followup.pages.dev) — revisit anytime, share with your teams. Torben Madsen — Kvalitor Consulting [tm@kvalitor.dk](mailto:tm@kvalitor.dk)