Why You Can't Learn by Asking AI Everything
One of the best things I took away from my Systems Design Engineering degree—after almost failing first year—was learning how to learn complex topics quickly.
I see this same learning challenge with my clients. They need to master new M365 and SharePoint workflows quickly, but often the approach is just an hour-long 'corporate training' session with someone talking at them with slides.
This is exactly the opposite of how people actually learn.
"Learning is about creating the condition for people to teach themselves. It's not about lecturing to people."
— Seth Godin, "The Human Advantage"
This post explores what creates the right conditions for learning, especially now that AI is starting to change the game. I recently wrote about what's changing in learning; this complementary piece looks at how we as individuals can adapt in this new landscape.
We Don't Learn from a Lecture
It's easy to fall back on lectures and slides when you're tasked with teaching something.
A typical workplace example is the "lunch and learn" or training session: a one-hour presentation where someone talks at the audience, usually with slides.
So I flip this completely: a 5-minute intro that generates discussion, then the rest is hands-on practice in live SharePoint environments. Yes, it takes longer to set up—I have to create practice sites and clean up afterward—but people are more likely to retain what they learn. They're solving real problems, not just watching me click through screens.
I'm also thinking about incorporating Copilot or other AI tools into the training, having people use them to work through problems, then share their results with everyone else for analysis.
This is about creating conditions of safety and fear at the same time.
There's productive tension because people have to figure out problems in real time and share their approach with the group. There's no hiding behind taking notes.
I think great teachers make learning interactive. They ask provocative questions that spark discussion or tension. They give people the space to practice a concept, but are there if things go off the rails.
This isn't just better pedagogy—it's what actually creates those conditions for learning that Godin talks about.
The "Why Bother?" Paralysis
If we have calculators and phones, why bother learning math?
I had another "why bother?" moment with AI. Now that AI is here, always available and ready to help me work out any answer, do we even need to learn anything? Plus it's better than me in so many domains: coding, writing, marketing plans, and more.
This happened while I was building out a study plan to learn AI agents, with AI's help. I told ChatGPT what my learning goals were, and it created a comprehensive AI Agents Study Plan. I looked at this in-depth learning plan, which was, by the way, better than anything I'd have created myself, and I thought: why bother? AI is smarter than me, it knows this stuff, it can build the agents; I don't even need to learn this.
It feels like being in an intellectual arms race I can't win. Previous technology disruptions weren't like this: Microsoft Excel didn't make accountants obsolete, but AI feels fundamentally different, more threatening to human intelligence itself.
But then I reminded myself of something crucial: my job is to have conversations with decision makers in the real world about this stuff.
When a CTO asks me whether they should implement AI agents, they're not looking for a technical deep-dive. They want to know:
Will this solve our actual problem?
What could go wrong?
How do we measure success?
Most importantly: what will this cost us if it doesn't work?
That's the conversation AI can't have, at least not easily.
As ChatGPT told me:
"You don't need to outsmart AI. You need to be the person who understands where to use it, when not to, and how to explain it in plain terms. That's the part most people can't do, and it's not going away."
For now, I’m choosing to believe it! :)
Why We Still Need to Learn Things
You're the Context Provider
AI knows facts, but you know which facts matter for your specific situation. AI can explain machine learning beautifully, but you know whether it's right for your company's actual data problems. You understand the politics, constraints, and real-world messiness that AI can't see.
You're the Quality Filter
AI can generate ten solutions, but you know which one won't get shot down in the meeting with your boss. You can spot when AI is being confidently wrong about your domain. You understand the crucial difference between "technically correct" and "actually useful."
You're the Translator
You can explain complex AI concepts to your peers and managers in language they understand. You know how to bridge the gap between what's possible and what's practical. You can communicate the risks and limitations that AI won't tell you about.
Practical Strategies for Learning with AI
The "AI as Research Assistant" Approach
Use AI to create your learning roadmap, then dive deeper on the parts that matter most. Let AI handle the "what" so you can focus on the "why" and "when."
For instance, AI can give you the technical explanation of implementing agents with Azure AI Foundry, but you have to figure out the business implications and real-world applications.
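To make that concrete, below is roughly the kind of snippet an AI-generated study plan hands you. Treat it as a minimal sketch assuming the preview azure-ai-projects Python SDK: the environment variable, model, agent name, and instructions are placeholders I've invented, and exact method names have shifted between SDK versions.

```python
# Minimal sketch of creating an agent in Azure AI Foundry.
# Assumes the preview azure-ai-projects SDK; method names vary by
# version, so treat this as illustrative rather than copy-paste ready.
import os

from azure.ai.projects import AIProjectClient
from azure.identity import DefaultAzureCredential

# Connect to a Foundry project (the connection string is a placeholder).
project = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],
)

# An agent is essentially a model plus instructions that shape its behavior.
agent = project.agents.create_agent(
    model="gpt-4o",
    name="support-triage-agent",
    instructions="Classify incoming tickets and draft a first response.",
)
print(f"Created agent {agent.id}")
```

AI will happily generate this snippet and twenty variations of it. What it won't tell you is whether ticket triage is actually your bottleneck, or what it costs the business when the agent drafts the wrong response. That part is yours.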
The "Teaching Test"
If you can explain something to a colleague without AI's help, you've truly learned it.
Use the Feynman technique: if you can't explain it simply, you don't understand it yet.
When you get a pile of AI output, practice translating it into your own words and frameworks. Think about how you'd teach it to someone else.
Strategic Ignorance
Accept that you don't need to know everything AI can tell you. This is liberating. Free up some of those "little grey cells!"
When learning with AI, focus on understanding principles and frameworks rather than memorizing details.
Part of this is learning to ask better questions rather than just accumulating more answers.
For example, when learning about AI agents, I don't need to memorize every API endpoint or configuration option. But I do need to understand the core principles: how agents make decisions, when they fail, and what types of problems they're actually good at solving. The details I can always look up; the judgment I have to develop through practice.
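As a toy illustration of that first principle, here's a sketch of the decision loop at the core of most agents: choose an action, observe the result, then stop, retry, or escalate. Every name here is hypothetical, invented for illustration; no vendor SDK is involved.

```python
# Toy agent decision loop (illustrative only; all names are made up).
# The core principle: an agent repeatedly picks an action, observes the
# result, and decides whether to stop, retry, or hand off to a human.

def choose_tool(task: str) -> str:
    """Naive decision making: route the task by keyword."""
    if "lookup" in task:
        return "search"
    if "sync" in task:
        return "flaky_api"  # Simulates a tool that can fail.
    return "ask_human"      # Knowing when to escalate is part of the design.

def run_tool(tool: str, task: str) -> str | None:
    """Execute a tool; None models a failed or empty result."""
    if tool == "search":
        return f"found 3 results for {task!r}"
    return None  # The flaky tool (and anything unknown) returns nothing.

def run_agent(task: str, max_steps: int = 3) -> str:
    for _ in range(max_steps):
        tool = choose_tool(task)
        if tool == "ask_human":
            return "escalated: no tool fits, hand off to a person"
        result = run_tool(tool, task)
        if result is not None:
            return result  # Success: stop the loop.
    return "gave up after max_steps"  # An explicit stop rule prevents spinning.

print(run_agent("lookup SharePoint retention policies"))  # search succeeds
print(run_agent("sync the legacy CRM"))                   # fails, then gives up
print(run_agent("draft the quarterly strategy"))          # escalates to a human
```

The judgment is everything this sketch leaves out: deciding which tasks deserve automation at all, and where the hand-off-to-a-human line belongs.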
Keep Putting the Effort In!
Here's the key: if the learning or studying you're doing is low effort or surface level, it's not really learning.
If it's just skimming what ChatGPT outputs, that's not learning.
You have to take notes, sometimes by hand. You have to review those notes, keep going back to the material.
Learning takes concerted practice, like studying engineering. You can't just read the formulas and attend lectures; you need time to problem-solve, make mistakes, and repeat concepts to yourself.
The AI age hasn't eliminated the need for deep learning. If anything, it's made the ability to learn deeply and think critically more valuable than ever. We just need to be smarter about how we do it.
Creating Your Own Learning Conditions
To wrap up, let’s go back to Seth Godin's insight: learning is about creating the right conditions for people to teach themselves.
In the AI age, this means being intentional about the environment we create for our own learning. It means understanding why we're learning something before we dive in, using AI as a tool rather than a crutch, and putting in the real effort that deep understanding requires.
The fundamentals of learning haven't changed, but the context has completely shifted. We still need curiosity, effort, and practice. What's changed is that we now have an incredibly powerful research assistant that can help us get to the good stuff faster.
Whether I'm helping teams adopt new collaboration tools or executives understand AI strategy, the principle remains the same: you can't shortcut the learning process. You can make it more efficient, more targeted, and more practical—but you still have to put in the work.