
You Don’t Want AI to Happen to You: Reflections on AI Literacy in Practice

by Stella Lee | Jul 10, 2025

I recently had the pleasure of being interviewed by Gelareh Keshavarz, a fellow educator and alum of the AI Literacy mini-course I developed in partnership with Athabasca University. Our conversation ranged from the myths that hold professionals back to the urgent need to embed ethics, equity, and intentionality into how we engage with AI.

Here are some highlights from our fireside chat—raw, real, and rooted in the belief that AI literacy is not just a tech skill. It’s a leadership imperative.

Q1: Why is AI literacy no longer optional, and how can we empower professionals to engage with AI critically, ethically, and creatively?

AI is no longer something that sits quietly in the background of our professional lives—it now actively shapes how we work, learn, make decisions, and interact with each other. AI cuts across every sector and domain. There’s no avoiding it.

That doesn’t mean everyone needs to become a computer scientist. But it does mean we all need a baseline level of understanding—enough to ask the right questions, spot potential risks, and engage with AI systems in a meaningful and informed way. 

Ultimately, AI literacy is not a one-time event or a standalone course. It’s a continuous, evolving capability that needs to be woven into professional development, organizational strategy, and conversations about the future of work. The goal isn’t to master AI. It’s to ensure we can work with it—critically, ethically, and creatively.

Q2: You’ve said ‘AI is not magic.’ What’s one myth about AI that’s holding professionals back—and how does it distort ethical decision-making?

One of the biggest myths is that AI is some kind of magical, all-knowing force—this idea that it’s almost godlike in its intelligence or decision-making ability. And I’ll be honest—I hate the term “artificial intelligence” for that very reason. It gives people unrealistic expectations and creates a kind of blind trust in the outputs, even when they’re flawed or biased.

Research has shown that the less people understand about AI, the more likely they are to see it as magical or mysterious. That’s a huge problem. Because when we treat AI as magic, we stop questioning it. We stop asking where the data came from, how the algorithm was trained, or who it was designed for. And that’s where ethical decision-making gets really distorted.

Another issue is the tendency to talk about AI as if it’s one single thing. It’s not. AI is a collection of techniques—machine learning, natural language processing, computer vision, recommendation engines—and each of these comes with its own capabilities and limitations. Lumping them all together makes it harder for professionals to critically evaluate which tool is appropriate, or even whether AI is needed at all.

So part of being ethically informed is learning to demystify AI. That means understanding its parts, recognizing its constraints, and remembering that behind every AI tool is a human—making choices, setting parameters, and defining success.

Q3: You also said ‘we don’t need tech people anymore.’ For educators who feel anxious about not knowing the backend, how can we shift from fear to capability?

Let me clarify—when I say “we don’t need tech people anymore,” what I really mean is you don’t have to be one to belong in this AI space. We’ve reached a point where non-technical professionals—educators, instructional designers, learning leaders—are not just relevant, they’re essential.

With the rise of no-code and low-code platforms, you no longer need to know how to write Python or build models from scratch. But that doesn’t mean you’re off the hook entirely. We do need to understand the logic behind these tools—the basics of how they work, what kind of data they rely on, and where bias or error might creep in. That way, we can ask the right questions, evaluate tools critically, and make informed decisions.

And remember: you don’t have to do this alone. Shifting from fear to capability often starts with collaboration. Partner with someone in IT, data analytics, or another tech-savvy colleague. Create cross-functional teams. Ask questions. Stay curious.


Q4: If we position AI literacy as the new digital literacy, how do we embed ethics and equity as non-negotiables in that learning?

I see AI literacy as an evolution—a continuum—of digital literacy. The foundational skills we’ve built around digital fluency, critical thinking, and responsible tech use still hold. But AI brings new complexities that require us to expand that literacy: things like algorithmic bias, automated decision-making, and data ethics.

When it comes to embedding ethics and equity, I believe these shouldn’t be “add-ons”—they should be baked in from the beginning. They are design choices, not afterthoughts. And honestly, I don’t think it’s difficult to do, but it does require intention and thoughtful design.

One methodology I talk about in the course is Ethics as Design. This is about making ethical considerations an active part of your development process. It frames ethics as a set of challenges with multiple possible solutions—not as a binary checklist of right or wrong. 

Another great framework is Privacy by Design, championed by Dr. Ann Cavoukian. It emphasizes embedding privacy at the architectural level—from the outset of system design—not patching it in later. This kind of proactive thinking can be extended to equity and inclusivity as well.

So, if we’re serious about positioning AI literacy as the new digital literacy, we must also be serious about building a culture where ethics and equity are standard features—not just compliance exercises.


Q5: What does thinking with AI really mean? How can we move beyond passive use to critical co-agency in practice?

Thinking with AI means we’re using these tools not as replacements, but as augmentations—to enhance our work, our thinking, and our creativity. The goal isn’t to hand over our agency or decision-making; it’s to become better, sharper, and more impactful versions of ourselves.

Yes, there’s a lot of hype out there about what AI can do, and we need to cut through that noise. Many claims are inflated or oversimplified. But there are also real, transformative possibilities—like using AI to process massive datasets, spot patterns we’d never see on our own, or track how trends evolve over time across complex systems. These are things that were simply not humanly possible before.

To move from passive use—where we just take AI’s outputs at face value—to critical co-agency, we need to actively engage with the tools. That means:

  • Understanding what AI can and cannot do, or where it’s less effective, so we know when to trust it, when to challenge it, and when to supplement it with human judgment.

  • Asking better questions: framing problems thoughtfully, defining what “good” or “useful” outcomes look like, and continuously iterating alongside the AI.

  • Keeping human values at the center: as we collaborate with AI, we must maintain oversight, ethical responsibility, and the creativity that only we can bring.

Q6: What happens when AI literacy is unevenly distributed, especially in communities or institutions already facing systemic inequities?

When AI literacy is unevenly distributed, it deepens the digital divide that’s already been with us for decades. And the consequences are profound—because AI is rapidly becoming a gatekeeper to participation in the digital economy, access to education, and competitiveness in the job market.

What worries me most is that many advanced AI tools are now paywalled or subscription-based. Those who can afford tools like ChatGPT, Perplexity, or other premium platforms gain a significant edge—not just in productivity, but in learning how to work with AI, developing fluency and confidence that others miss out on.

If we’re not intentional, we risk creating a world where the benefits of AI amplify privilege and wealth, leaving under-resourced students, educators, and communities further behind.

That’s why educational institutions, public libraries, and community organizations need to get ahead of this challenge. One practical step is to provide access to essential AI tools as part of institutional support—just like many universities already offer free licenses for Microsoft Office or Adobe Creative Cloud. But beyond access, we need to embed AI literacy programs that prioritize inclusivity, culturally relevant pedagogy, and community engagement, so we’re not just handing out tools, but equipping everyone to use them meaningfully.

If we don’t address this proactively, the equity gap won’t just persist—it will widen.


Q7: What’s one mindset shift you’d offer to someone who still thinks AI is ‘too technical’ to be part of their everyday practice?

I’d say this: you don’t want AI to happen to you.

If you stay on the sidelines because it feels “too technical,” you’re letting decisions be made for you—by systems you don’t understand and by people who may not share your values or priorities. That’s not a neutral position. That’s giving up your voice in a space that will shape the future of your work, your learners, and your profession.


The mindset shift is this: AI isn’t just about code—it’s about context, judgment, ethics, and creativity. And those are things educators and learning professionals already excel at. We bring the human lens. We know how to ask better questions, frame learning in context, and support meaningful engagement.

So instead of thinking, “I’m not technical enough,” start thinking, “My voice is needed here.” Whether it’s reviewing tools, shaping policy, advocating for inclusion, or designing thoughtful learning experiences—there’s a seat at the table for you. You don’t need to know everything. You just need to show up and stay curious.