
Afterthoughts: AI Anxiety Session

AI experience in the room ran from a Level 1 ("I think") all the way to self-declared 8s. The emotional range was even wider: "cautiously optimistic," "nervous but excited," "divergently excited and worried," "optimistic but trying to be realistic." One participant put it plainly: "I would fall into a group that are anxious about AI." Another countered: "We have a tech company that fully harnesses AI for good. I'm divergently excited and worried." A third, in what might be the most honest self-assessment of the year: "7 on AI use… but that's based on what I know how to do now, likely not as high in terms of actual capabilities."
The Problem: AI Panic Without a Playbook

Generative AI presents a dual challenge for L&D teams. Like every other function, they face workflow-automation risk and the potential obsolescence of their current products and services; unlike other teams, they also carry the added responsibility of helping the broader workforce navigate AI-driven disruption.


Dr. Andrew Barrett opened the session by naming this tension directly. He drew on the concept of the "Jagged Technological Frontier," from research that found AI dramatically boosted consultant performance on tasks inside its capability zone (12.2% more tasks completed, 25.1% faster, 40% higher quality) while causing a 19-percentage-point drop in performance on tasks outside it. On this view, AI disruption is not a uniform wave but an uneven, unpredictable force. The session's core argument: the antidote to AI anxiety isn't prediction. It's a repeatable, structured method for evaluating your offerings under uncertainty, plus a shared vocabulary that helps teams make defensible decisions and articulate their relevance to senior leadership.


The Framework: CATS in Action

The heart of the session was the ScaleLearning CATS AI Strategic Framework, a two-axis matrix that maps any L&D product or service along two dimensions: (1) Human Advantage (how much humans still outperform AI in delivering this offering) and (2) Future Viability (whether demand for the offering is likely to grow, hold, or erode).

Placement into one of four quadrants — Capitalize, Assess, Transform, or Shift — carries distinct strategic implications: upper-right (high human advantage, high viability) is lowest risk; lower-left (low human advantage, low viability) is highest.


Each quadrant poses a specific strategic question:

  • Can you maximize and expand the value of this offering?

  • Can you win the competition to automate and scale it?

  • Can you reinvent it to increase resilience?

  • Or should you pivot toward something more future-proof?
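
To make the placement logic concrete, here is a minimal Python sketch of how an offering's two scores might map to a quadrant. This is not from the session materials: the 0–10 scales, the midpoint threshold of 5, and the assignment of Assess and Transform to the two off-diagonal quadrants are all assumptions for illustration.

```python
# Illustrative sketch only. The 0-10 scales, the threshold of 5, and the
# mapping of the off-diagonal quadrants are assumptions, not the official
# ScaleLearning CATS scoring method.

def cats_quadrant(human_advantage: float, future_viability: float) -> str:
    """Place an L&D offering on the CATS matrix.

    human_advantage: how much humans still outperform AI here (0-10).
    future_viability: whether demand will grow, hold, or erode (0-10).
    """
    high_advantage = human_advantage >= 5
    high_viability = future_viability >= 5
    if high_advantage and high_viability:
        return "Capitalize"   # maximize and expand the offering's value
    if not high_advantage and high_viability:
        return "Assess"       # can you win the race to automate and scale?
    if high_advantage and not high_viability:
        return "Transform"    # reinvent the offering to increase resilience
    return "Shift"            # pivot toward something more future-proof

# Hypothetical portfolio, echoing the session's hands-on exercise
portfolio = {
    "Leadership development community of practice": (8, 7),
    "Customer support e-learning": (3, 4),
}
for offering, (adv, via) in portfolio.items():
    print(f"{offering}: {cats_quadrant(adv, via)}")
```

The two diagonal corners follow the risk gradient described above: high advantage plus high viability lands in Capitalize (lowest risk), low on both lands in Shift (highest risk).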


Participants worked through a hands-on application using a shared L&D portfolio: plotting offerings like leadership development communities of practice and customer support e-learning onto the matrix, then using the quadrant questions to draft the beginnings of a one-page action plan. The framework is designed to help L&D leaders move beyond AI panic toward a clear roadmap for relevance, upskilling, and reskilling, and, crucially, to produce an artifact that senior leaders will actually engage with.


The Bigger Picture: Why This Matters Now

Andrew closed with a provocation that resonates well beyond the session itself: teams that can't articulate the resilience of their own offerings won't be trusted to help others navigate change. The CATS framework isn't a one-time exercise but a planning muscle; applied regularly, it positions L&D as a strategic function that leads with evidence rather than reacting out of urgency. This aligns with a broader argument in the field. As Wharton professor Ethan Mollick argues in Co-Intelligence, the key isn't to fear AI or embrace it uncritically, but to actively experiment, find where AI's uneven capabilities help or hurt your specific work, and make deliberate partnership choices. Barrett's framework operationalizes exactly that instinct for L&D teams: it translates AI uncertainty into portfolio strategy, and portfolio strategy into leadership-facing action plans.


📚 Further Reading, Listening & Watching

Recommendations from other experts to deepen and challenge the ideas from this session.


📚 Books

  • Co-Intelligence: Living and Working with AI (Ethan Mollick). Recommended by Andrew Barrett during the session.

  • The Leadership Pipeline (Charan, Drotter & Noel). Mentioned by Alex, who was reading it after attending an information session the prior week.

  • The Coming Wave (Mustafa Suleyman). Referenced by Andrew Barrett after Shad mentioned the Pi app: "I think it's called the upcoming wave or something like that, and I thought that was also a great read."

  • AI and the Octopus Organization (Jonathan Brill & Steve Wunker). Dropped into the chat by Rose Benedicks during the human advantage discussion as "an interesting read."

🎙️ Podcasts

  • The Amplifying Intelligence Podcast. Shared by Andrew Barrett in the pre-session chat as "a great one for L&D."

  • Gilded Age. Mentioned by Alex in the pre-session chat: "my favorite."

  • Huberman Lab, "Cultivating Awe" episode (Andrew Huberman). Mentioned by Urooj Mazhar in the pre-session chat as what she was listening to.

  • This is Love (Phoebe Judge). Mentioned by Tara Muir in the pre-session chat: "the podcast is excellent!"

  • The Secret World of Roald Dahl (Spotify). Mentioned by Louise Platiel in the pre-session chat: "His life story is fascinating!"

  • SPARK (CBC Radio 1). Recommended by Carroll during the AI ethics/researchers discussion: "always on the edge of sci-fi in the real world."

📄 Articles & Research Papers

  • "AI Workforce Change: Slower Than Headlines Suggest". Shared as a link pre-session by Danielle Wallace with the note "a lot of people are finding a lot of value in these insights."

  • "Navigating the Jagged Technological Frontier" (2023), Dell'Acqua, Mollick et al., Harvard Business School & BCG. Cited by Andrew Barrett as the core research underpinning the Jagged Frontier concept in the session.

  • METR research on AI task capability doubling (METR, Model Evaluation & Threat Research). Referenced by Andrew Barrett to support the claim that AI capabilities have been doubling every seven months.

🌐 Websites & Online Resources

  • ScaleLearning CATS AI Strategic Framework (Andrew Barrett). Available below.

  • Humanity's Last Exam / Safe AI benchmark. Shared by Ariel Harlap in response to Jennifer Cohen's question about AGI definitions.

  • AIGS (AI Governance & Safety). Raised by Robin Yap, Tara Muir, and Khushroo Garda during the ethics/researchers conversation as a relevant Canadian organization.

🏛️ Research Institutions & Organizations

  • AMII, the Alberta Machine Intelligence Institute (amii.ca). Mentioned by Ariel Harlap.

  • (name not captured in the source). Mentioned by Ariel Harlap as Canadian AI research with an ethics focus.

  • Vector Institute (Toronto). Mentioned by Ariel Harlap in the same context: "each of them have different and overlapping takes on AI safety/responsibility."

  • LawZero. Mentioned by Ariel Harlap as similar to AIGS, founded by Yoshua Bengio.

🤖 AI Tools & Apps Referenced

  • Pi (Personal Intelligence) by Inflection AI. Shared by Shad as a personal turning point.

  • Microsoft Copilot. Jennifer Cohen noted it was the only AI tool widely adopted at her company.

  • Google Gemini. Used by Khushroo Garda to look up AIGS during the session.

👤 People & Thinkers Named

  • Ethan Mollick, Wharton professor and AI researcher. Described by Andrew Barrett as "a prolific writer and communicator on every single platform about AI"; his book Co-Intelligence was recommended.

  • Tristan Harris, tech ethicist at the Center for Humane Technology. Raised by AD Blackwell in response to Tara's question about AI ethics experts.

  • Mustafa Suleyman, CEO of Microsoft AI and co-founder of Inflection AI. Referenced by Andrew Barrett in connection with the Pi app and The Coming Wave.

  • Yoshua Bengio, AI researcher and Turing Award winner. Named by Ariel Harlap as the founder of LawZero.


Additional Materials you might find useful


📖 Books

Competing in the Age of AI — Marco Iansiti & Karim Lakhani (Harvard Business Press, 2020). Co-authored by Karim Lakhani, one of the researchers behind the Jagged Frontier study, this book examines how AI is restructuring the operating model of organizations: a useful strategic complement to Barrett's portfolio-level framework.


Navigating the Jagged Technological Frontier — Dell'Acqua, Mollick et al. (HBS Working Paper, 2023). The foundational academic paper introducing the "Jagged Technological Frontier," showing that AI assistance improves performance on some tasks while worsening it on others, even within the same knowledge workflow and at seemingly similar difficulty levels. Free to access on SSRN. A 30-minute read that radically reframes how to think about AI capability mapping, and the empirical backbone of Barrett's Human Advantage axis.


🎙️ Podcasts

The Learning & Development Podcast — David James (360Learning) A fortnightly show hosted by David James, Chief Learning Officer at 360Learning and one of the top 10 global L&D influencers, with over 500,000 downloads — each episode debates topics affecting the profession alongside expert guests. Episodes like "How Generative AI is Transforming L&D Right Now" and "Rethinking L&D, Performance, and the Power of AI" (with Laura Overton and Donald Clark) directly extend the conversation Barrett started. One of the most credible practitioner-facing shows in the field.


Your Undivided Attention — Center for Humane Technology. A December 2025 episode featured both Ethan Mollick and Molly Kinder of the Brookings Institution, who led research with the Yale Budget Lab examining AI's real-time impact on the labor market: exactly the kind of macro-level evidence that contextualizes why Barrett's "Future Viability" axis matters. The show consistently brings rigorous, non-hype perspectives on AI's societal effects.


AI and the Future of Work — Dan Turchin (PeopleReign). Host Dan Turchin interviews thought leaders and technologists from industry and academia on artificial intelligence and what it means to be human in an era of AI-driven automation. Especially useful for CCCE members leading within organizations navigating workforce change, with strong episodes on reskilling, AI in HR, and the boundary between human judgment and algorithmic decision-making.


📺 YouTube / Video

Ethan Mollick — Co-Intelligence: An AI Masterclass (Stanford GSB, ~45 min) Mollick walks through the hype, fears, and potential of transformative AI, urging business leaders to experiment and discover rather than waiting on the sidelines. His practical "ten-hour rule" for building genuine AI fluency is a concrete counterpart to Barrett's call for strategic action over paralysis. Find it on the Stanford Graduate School of Business YouTube channel.


The PowerPoint and template provided at the session are here.


