# AI Bootcamp Follow-up

## From Awareness to Measurable Value

Leo Pharma — Global Clinical Development
February 27, 2026
# Three hours, four themes, one decision to agree on

### Morning
- Where are we now? (data + discussion)
- Value creation & how to measure it

### Before Lunch
- Prompting hands-on
- AI leadership actions & live demos
- Decide: 3 use cases we commit to

**There's more here than we can cover together.** This presentation lives online — revisit any slide, follow the sources, go deeper where it matters for your team.

*Hybrid link will be shared separately via Teams invite*
# 88% of companies use AI — but only 7% have scaled it

- **88% use AI, only 7% at scale** — and just 4% achieved repeatable business value ([McKinsey 2025](https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai) · [Economist Impact, Feb 2026](https://impact.economist.com/technology-innovation/from-intent-to-action/report))
- **90% of transformative use cases** are stuck in pilot ([McKinsey 2025](https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage))

> → That's where the market is. **How many of your AI initiatives have delivered measurable results?**
# The question shifted — not *should we use AI* but *how do we get value*

> "We're no longer asking *should we use AI?*
> We're asking *how do we get value from it?*"

- From awareness → **adoption** (e.g. AI-drafted steering committee summaries as standard)
- From experiments → **embedded processes** — 90% are stuck here ([McKinsey 2025](https://www.mckinsey.com/capabilities/quantumblack/our-insights/seizing-the-agentic-ai-advantage))
- From individual use → **organizational capability**
# Most AI value is hiding in tasks you repeat every week

How do we find it? How do we measure it?
# AI saves hours on tasks leaders do daily — email, reports, meeting prep

- Clinical study reports: **2-3 weeks → 3-4 days** ([McKinsey Pharma 2025](https://www.mckinsey.com/industries/life-sciences/our-insights/reimagining-life-science-enterprises-with-agentic-ai))
- Email drafting: ~15 min → ~2 min (typical)
- Meeting prep: automated briefs from notes
- Better first drafts — fewer revision cycles, more time for expert review
- Consistent language across documents and stakeholders

> → **Which of these time sinks hits your team hardest? Where would 2 hours back per week matter most?**
# A 45-minute steering committee summary takes 10 minutes with AI

### Before (manual)
Write from scratch — 45 min, 3 revision rounds

### After (AI-assisted)
Paste notes → structured summary in 3 min → you review and refine — 10 min total

> → **How many of your recurring reports follow this pattern — manual drafts, multiple revision rounds?**

*Live demo: we'll do this together*
# The 4% that create real AI value share five traits

Only 4% of companies achieve repeatable AI value at scale ([Economist Impact, Feb 2026](https://impact.economist.com/technology-innovation/from-intent-to-action/report)). What they do differently:

- **Task-oriented:** Solve a specific, recurring pain point
- **Measurable:** Before/after comparison is possible
- **Low friction:** Integrated where people already work
- **Champion-driven:** Someone owns it and shows others
- **Domain-grounded:** AI + expert knowledge, not AI alone

> → **Let's score GCD against these five: where do we do well? Where is the gap?**
# Embed AI in workflows — make it part of how you work

### ✅ What the 4% do differently
- Add **"AI-assisted?"** as a step in existing process templates
- Make AI output the **default first draft** — human review is the quality gate (non-negotiable for regulatory deliverables)
- Run **monthly 10-min AI demos** at team meetings — normalize it

### ❌ What keeps the 96% stuck
- AI is optional → no one prioritizes it
- No dedicated skills budget — only 38% have one ([Economist Impact](https://impact.economist.com/technology-innovation/from-intent-to-action/report))

> → **How many of your team processes have "AI-assisted" as an explicit step today?**
# Where is GCD's biggest gap between AI potential and daily usage?

> **Quick pulse — hands up:**
> How many of you have explicitly defined that AI should be part of your team's processes?
> And in how many places is it still voluntary?

- Where have you seen **early wins** worth scaling?
- What's **blocking** broader adoption?
- What would you **try this quarter** if you had 30 minutes of support?
# Two live demos — from idea to working solution

Claude Code · OpenClaw
# Claude Code turns ideas into working tools in minutes

- From idea to working solution — no IT ticket required
- Real-world example: building a GCD-relevant tool live
- McKinsey runs 25,000 AI agents alongside 40,000 humans ([Source](https://www.hrgrapevine.com/us/content/article/2026-01-15-mckinsey-goes-all-in-on-ai-with-interview-testing-workforce-of-20000-agents))

*Live demo*
# This presentation was built and reviewed by four AI agents

A skill is like activating a specialist — you describe who they are, what they know, and how they should work. Then they operate independently on your material.

| Agent | Role | What it does |
|-------|------|--------------|
| **Slide Reviewer** | Quality editor | Checks sources, headlines, density, structure |
| **Leo Director** | Simulated audience | Reacts as a skeptical GCD director — flags weak content |
| **Slide Designer** | Visual specialist | Applies typography, colors, accessibility standards |
| **Exesearch** | Research assistant | Finds fresh sources and data online |

> → **The concept is the same as your AI profiles from earlier — but for a specialized task instead of a general role.**
# What does a "skill" actually look like? Here's the Leo Director

The full instruction is ~150 lines of text. This is the core — the persona and the review lenses:

**Persona (who the AI becomes):**
- Director in GCD, 15+ years in pharma, manages 20-50 people
- Uses Copilot occasionally — knows AI matters but hasn't embedded it
- Results-oriented, time-poor, skeptical of hype, respects data
- Submits to EMA and FDA — wants regulatory limits acknowledged

**Six lenses it applies to every slide:**
- "So What?" — What do I DO with this?
- "Evidence" — Says who? Is this pharma-relevant?
- "Already Tried That" — What's new since the bootcamp?
- "Time" — Is this worth my 3.5 hours?
- "My Team" — How do I bring this back?
- "Mirror" — Did he ask where WE are on this?

> → **You've already built an AI profile today. A skill is the next step — a reusable profile for a specific recurring task.**
# AI agents collaborate on tasks your teams still do manually

- **What it is:** Multiple AI agents working together — one researches, one drafts, one reviews
- **GCD example:** CRO status update → structured summary + risk flags + decision brief
- By end of 2026, **40% of enterprise apps** will have AI agents ([Gartner 2025](https://www.gartner.com/en/newsroom/press-releases/2025-08-26-gartner-predicts-40-percent-of-enterprise-apps-will-feature-task-specific-ai-agents-by-2026-up-from-less-than-5-percent-in-2025))

*Live demo — OpenClaw*
# Your expertise is the differentiator — AI just makes it faster

Context + Prompting + Domain Knowledge
# Context + domain knowledge matter more than which model you use

> 88% already have access to AI tools — the gap is **how** they use them, not **which** tool ([McKinsey 2025](https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai))

- **Context:** Give AI the background it needs (trial phase, indication, stakeholders)
- **Prompting:** Structure your request clearly
- **Domain knowledge:** You are the expert — AI is the accelerator

> → Quick hands up: **who uses AI daily? Weekly? Rarely?**
# One framework turns vague requests into expert-level AI output

| Block | Question | GCD Example |
|-------|----------|-------------|
| 🎭 **Role** | Who should the AI be? | Senior clinical project manager |
| 🌍 **Context** | What background does it need? | Phase III atopic dermatitis, EMA submission |
| 📋 **Task** | What exactly should it do? | Summarize CRO status report |
| 📐 **Format** | How should the output look? | Bullets, max 1 page, flag risks |
# A 30-second context investment transforms AI output quality

### ❌ Weak
"Write a summary of the meeting."

### ✅ Strong
"You are a senior clinical project manager. I had a 1-hour steering committee meeting about trial X. Here are my notes: [paste]. Write a structured summary with: decisions made, action items with owners, and open questions. Max 1 page, bullet points."
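For teams that reach AI through a script or API rather than a chat window, the same four blocks can be captured once as a reusable template. A minimal Python sketch — the function and field names are illustrative, not part of Copilot, LEO AI, or any specific API:

```python
# Sketch: the Role + Context + Task + Format framework as a reusable template.
# All names are illustrative; adapt the blocks to your own recurring task.

def build_prompt(role: str, context: str, task: str, fmt: str) -> str:
    """Assemble the four blocks into one structured prompt string."""
    return (
        f"You are {role}.\n"
        f"Context: {context}\n"
        f"Task: {task}\n"
        f"Format: {fmt}"
    )

prompt = build_prompt(
    role="a senior clinical project manager",
    context="1-hour steering committee meeting about trial X; notes pasted below",
    task="write a structured summary: decisions, action items with owners, open questions",
    fmt="bullet points, max 1 page, flag risks",
)
print(prompt)
```

The point is not the code but the discipline: once the four blocks are named, the "strong" prompt above stops being an art and becomes a fill-in-the-blanks exercise.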
# Improve this prompt — 5 minutes

> "Help me write an email about the project delay."

Rewrite it using the 4-block framework: **Role + Context + Task + Format**

Use Copilot, Claude, or whichever AI tool you have open — compare before/after.
5:00
# Build once, reuse forever — AI profiles eliminate repetitive setup

- A **custom AI profile** = pre-loaded context for recurring tasks
- Instead of re-explaining every time, build a reusable profile
- Examples for GCD leaders:
  - CRO oversight email drafter (your CROs, your escalation thresholds)
  - Trial status summarizer (your portfolio, your stakeholders)
  - Decision brief generator (your metrics, your format)

> → **Which of your recurring tasks has the most repetitive setup? That's your first profile.**
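The "build once, reuse" idea is the same whether you paste the profile into a chat window or prepend it in a script. A hedged sketch — the profile text is a hypothetical example, not a prescribed format:

```python
# Sketch: a personal AI profile written once and prepended to every request,
# so the background never has to be re-explained. Profile text is hypothetical.

PROFILE = (
    "I am a director in Global Clinical Development, managing Phase III trials. "
    "I prefer concise, bullet-point answers with risks flagged explicitly. "
    "Assume EMA/FDA regulatory context unless told otherwise."
)

def with_profile(request: str) -> str:
    """Prepend the reusable profile to a one-off request."""
    return f"{PROFILE}\n\n{request}"

message = with_profile("Summarize this CRO status report: [paste]")
print(message)
```

In a chat tool the equivalent move is simply keeping the profile in a note and pasting it as the first message — the mechanism differs, the reuse is identical.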
# Build your personal AI profile — 7 minutes

> Create an AI profile that describes:
> who you are, what you do, what tone you prefer,
> and what kind of tasks you'll ask AI to help with.

Save it — paste it at the start of any AI conversation to test it.

Use Copilot, Claude, LEO AI, or whichever tool you have open.
7:00
# ☕ Break — 15 minutes

We're halfway. Grab coffee, check your phone — we'll pick up with leadership & demos.
# Leadership sets the pace — not the technology

Visible use matters more than strategy decks
# Managers who must approve AI are the ones who face the biggest shift

> "The very leaders asked to champion AI adoption face three identity threats: role compression, control shift, and span erosion" — [Harvard D3, Feb 2026](https://d3.harvard.edu/the-managers-ai-dilemma/)

- **Role compression:** AI handles tasks that defined your expertise
- **Control shift:** Teams use AI autonomously — less need for your approval
- **Span erosion:** Your span of control shrinks as AI absorbs team work
- **The paradox:** Leaders who thrive blend human depth with digital fluency ([McKinsey, Jan 2026](https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/building-leaders-in-the-age-of-ai))

> → **Which of these three threats feels most real in your role today?**
# How visible is AI in your daily leadership?

> **Hands up:**
> How many of you use AI visibly in front of your team — weekly?

- Do your teams know **which AI tools** you use personally?
- Have you ever **shared a prompt** in a meeting or email?
- What's one **visible AI action** you could take at your next team meeting?
# Five leadership actions that accelerate AI adoption this quarter

- **Use AI visibly** — share your screen, show your prompts, iterate in front of your team
- **Change the template** — add AI as a standard step in reports and status updates
- **Appoint a champion** — one person per team owns AI adoption, reports monthly
- **Protect experiment time** — 30 min/week for AI is an investment, not a perk
- **Track it** — add "AI-assisted deliverables" to your quarterly review
# Leaders at other companies already use AI for these tasks daily

- **McKinsey:** 25,000 AI agents alongside 40,000 humans — saved 1.5M hours in 2025 ([Source](https://www.hrgrapevine.com/us/content/article/2026-01-15-mckinsey-goes-all-in-on-ai-with-interview-testing-workforce-of-20000-agents))
- **Healthcare:** 85% of organizations say AI increased revenue, 80% reduced costs ([Nvidia Survey, Feb 2026](https://techhq.com/news/healthcare-ai-roi-business-model-2026/))
- **Pharma R&D:** 50% report faster time-to-target, 73% of leaders use AI for structure prediction ([Drug Discovery News, Feb 2026](https://www.drugdiscoverynews.com/the-2026-ai-power-shift-17020))

> → Those are external. **What about GCD — any examples worth sharing?**
# Directors are the critical layer — you bridge vision and execution

### 🔝 EVP / Leadership
Set the direction — strong alignment

### 🔄 Directors (YOU)
The only layer that can translate strategy into daily practice — most teams are waiting for your signal ([McKinsey, Jan 2026](https://www.mckinsey.com/capabilities/strategy-and-corporate-finance/our-insights/building-leaders-in-the-age-of-ai))

### 👤 Individual Users
Good understanding — need permission & tools from you

> EMA and FDA now share 10 common principles for AI in drug development — regulators are ready. Are we? ([EMA/FDA, Jan 2026](https://www.ema.europa.eu/en/news/ema-fda-set-common-principles-ai-medicine-development-0))

> **Think about it:** Who in your team would you appoint as AI champion tomorrow? Do they know?
# The best AI tool is the one that's already open

- Copilot in Outlook/Teams, LEO AI — **low friction, high frequency**
- AI that lives in your workflow beats a separate tool
- Start here. But know that **chat is just level one**

> → **Where do your teams actually spend their screen time? Is AI already in those tools?**
# Chat tools can't do what agents need — the tooling question matters

- **Chat** (Copilot, LEO AI) — one question, one answer
- **Integrated** (Copilot in Outlook/Teams) — AI where you already work
- **Agentic** (orchestrated pipelines) — multi-step, process-integrated. This is where value scales
- Chat **can't** access files, connect systems, or run workflows
- McKinsey built **ARK** to orchestrate 25,000 agents ([GitHub](https://github.com/mckinsey/agents-at-scale-ark))

> → **Worth considering:** Does Leo's current toolstack — Copilot, LEO AI — support what we just discussed? What would you need from IT?
# Leading companies deploy agents as managed infrastructure — not chat windows

- **McKinsey:** Open-sourced **ARK** (Agentic Runtime for Kubernetes) — agents as governed workloads ([GitHub](https://github.com/mckinsey/agents-at-scale-ark))
- **Pharma:** Moderna drafts regulatory docs across departments; J&J runs autonomous drug discovery ([Medable 2026](https://www.medable.com/knowledge-center/how-agentic-ai-is-transforming-life-sciences-discovery-and-operations))
- **73% of pharma companies** are already planning or piloting agentic AI ([Medable 2026](https://www.medable.com/knowledge-center/how-agentic-ai-is-transforming-life-sciences-discovery-and-operations))
- The model itself is **<20% of the effort** — **>80% is integration, governance, and validation** ([MIT, Feb 2026](https://mitsloan.mit.edu/ideas-made-to-matter/5-heavy-lifts-deploying-ai-agents))
- Chat is the right **starting point** — agents need **API access + orchestration**

> → **Open question:** What capabilities would Leo need to move beyond chat? Who should own that conversation?
# Start with micro-habits, not transformation projects

Small daily actions beat big quarterly plans:

- **"Summarize this email thread"** — 30 seconds
- **"Draft this meeting agenda from my notes"** — 2 minutes
- **"Review this document for inconsistencies"** — 1 minute
- **"Prepare 3 questions for this stakeholder meeting"** — 1 minute

Pick one. Try it this week.
# LEO AI already does this inside your own tools

- Creating presentations that used to take hours
- AI integrated where you already work
- Available to you — ask your AI champion how to get started

*Live demo*
# Which tools could your team start with this month?

> **Quick round — 30 seconds each:**
> Which demo resonated most with your team's daily work?

- What would you **show your team** tomorrow if you had 10 minutes?
- Which task in your department would benefit most from **AI + human review**?
- Where is the **lowest-risk, highest-frequency** starting point?
# Three decisions to bring home

From discussion to practice
# Pick 3 use cases your teams can show results on this quarter

> Think about your team's biggest time sinks.
> Which ones could AI meaningfully improve *this quarter*?

- Criteria from the top 4%: **high frequency** + clear before/after + **someone owns it** ([Economist Impact](https://impact.economist.com/technology-innovation/from-intent-to-action/report))
- Each use case needs an owner — let's agree on who
# Clear ownership turns good ideas into actual adoption

### Organizational setup
- **AI as standing agenda item** at GCDLT — one champion per department reports monthly
- **Dedicated AI skills budget** — even a small one. The top 4% treat it as a line item

### Track what matters (pick 3)
- Hours saved per use case (before/after)
- Active users per department (trend, not snapshot)
- Number of processes with "AI-assisted" as a standard step
# Thank You

### Let's make AI work for GCD in 2026

All slides, sources, and links: [leo-ai-followup.pages.dev](https://leo-ai-followup.pages.dev) — revisit anytime, share with your teams.

Torben Madsen — Kvalitor Consulting
[tm@kvalitor.dk](mailto:tm@kvalitor.dk)
# Syntax Reference

### YouTube embed
# Syntax: Mermaid Diagram
graph LR
  A[Identify Task] --> B[Build Prompt]
  B --> C[Run AI]
  C --> D{Good enough?}
  D -->|Yes| E[Use Output]
  D -->|No| B
# Syntax: All Features

- **Link:** [OpenAI](https://openai.com)
- **Image:** ``
- **QR code:** ``
- **Timer:** `5:00` — 5 min countdown
- **Fragment:** add `` after a bullet
- **Speaker notes:** add `Note:` block at end of slide (press **S**)
- **PDF export:** add `?print-pdf` to URL, then Ctrl+P
- **Slide background:** ``
- **Code highlight:** fenced code block with language