March 1, 2026

AI Made Practical for Teachers and Designers with Sairam Sundaresan

AI is not a magic wand, and that clarity might be the most empowering place to begin. Throughout our conversation with AI engineering leader and author Sairam Sundaresan, we challenged the common belief that artificial intelligence can be thrown at any problem with guaranteed results. Instead, we explored the difference between artificial narrow intelligence and the still-hypothetical general intelligence, grounding the conversation in concrete examples that educators and instructional designers face daily. From chatbots that excel at highly scoped tasks to voice assistants that reliably follow simple commands, the strength of today’s AI lies in specificity. That lens helps teams plan better, write clearer prompts, and align expectations with what the technology can actually deliver. For listeners who have felt frustration when a model promises the world and delivers a messy spreadsheet, we unpack why the misalignment happens and how to steer around it with tighter scoping and iterative prompts.

One core metaphor anchored the episode: treat AI like a new hire. A first response from a model is rarely the final answer—just as a new teammate needs context, examples, and feedback to ramp up. When we onboard AI with process checklists, exemplar outputs, and constraints, the quality of results improves quickly. This approach reframes prompting as management: define roles, share resources, chunk complex tasks into steps, and stage deliverables for review. We discussed practical strategies for educators, such as giving the model a template for a curriculum map, feeding it a small subset of syllabi first, and asking for incremental outputs rather than a single monolithic result. That shift from “do everything” to “do step one well” reduces friction, surfaces errors earlier, and builds a reusable workflow you can hand off or refine across courses and cohorts. It also preserves critical thinking, because the human sets the direction and performs quality control while AI shoulders parts of the execution.
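As a rough sketch of that “onboard AI like a new hire” workflow, the pattern can be captured in a small prompt builder. Everything here is illustrative, not from the episode: the function name `build_staged_prompt`, the example steps, and the constraint wording are all assumptions about how one might structure such prompts.

```python
# Sketch of the "onboard AI like a new hire" workflow: give the model a
# role, reference materials, constraints, and ONE step at a time, then
# review before continuing. All names and steps are illustrative.

def build_staged_prompt(role, resources, constraints, step):
    """Compose a scoped prompt that asks for one deliverable, not everything."""
    parts = [
        f"You are acting as: {role}.",
        "Reference materials:",
        *[f"- {r}" for r in resources],
        "Constraints:",
        *[f"- {c}" for c in constraints],
        f"Your ONLY task right now: {step}",
        "Stop after this step so I can review before we continue.",
    ]
    return "\n".join(parts)

# Chunk a big job ("build a curriculum map") into reviewable steps.
steps = [
    "Fill in the provided curriculum-map template for Week 1 only.",
    "List learning objectives that Week 1 does not yet cover.",
    "Propose practice questions targeting those gaps.",
]

prompts = [
    build_staged_prompt(
        role="instructional design assistant",
        resources=["Week 1 syllabus excerpt", "curriculum-map template"],
        constraints=[
            "cite the syllabus section for every claim",
            "flag anything you are unsure about instead of guessing",
        ],
        step=s,
    )
    for s in steps
]

print(prompts[0])
```

Each prompt in the list is a reviewable deliverable, which mirrors the episode’s shift from “do everything” to “do step one well”: errors surface after one chunk, not after a monolithic output.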

From there, we moved to high-impact, classroom-facing use cases. Personalized learning is the headline benefit: a 24/7, infinitely patient tutor that adapts to a learner’s level, preferred media, and pace. With careful design, AI can generate targeted practice based on weak spots, vary question types to reinforce transfer, and scaffold complex tasks with hints rather than answers. We highlighted how designers can use AI to identify slide-level confusion, spot patterns in assessment performance, and propose faster revisions between sessions rather than waiting for the next term. The time saved enables richer live interactions where teachers focus on nuance—reading the room, noticing motivation, and responding to nonverbal cues. Pairing those qualitative signals with quantitative traces from AI tutors produces a fuller picture of learning, supporting better interventions and more equitable experiences. For practitioners, this means quicker iterations, more inclusive materials, and the flexibility to meet learners where they are without burning out.

None of this works without ethics. We examined three pillars: content provenance, model behavior, and responsible usage. First, respect copyright and the lineage of materials; consent and compensation matter when training or remixing content. Second, tackle hallucinations head-on with verification steps: ask models to cite sources, cross-check claims, and avoid over-reliance when stakes are high. We discussed the very real risk of fabricated citations and how to design assignments that require learners to interrogate outputs, not just submit them. Third, consider environmental impacts—these systems consume significant energy, so align usage with meaningful learning goals instead of novelty. We also addressed emerging misuse, from AI-assisted impersonation in interviews to fully outsourced coursework. The message was clear: establish guardrails, teach tool literacy, and position AI as an assistant rather than a mask. Ethics is not a footnote; it’s a design constraint that strengthens outcomes and public trust.

Looking forward, we highlighted why this moment is so exciting for educators and designers: rapid prototyping turns ideas into feedback within hours, not months. Tools like NotebookLM, Canva, Gamma, and interactive platforms such as Genially allow fast exploration of formats, translations, and interactive elements with AI co-pilots. The result is a new cadence of experimentation where teams can test multiple directions, discard what doesn’t work, and double down on what resonates. To prepare, we encouraged hands-on play with a small toolset, building AI literacy without getting lost in hype. Start with a real problem you care about, document the steps, and replace pieces with AI where it makes sense. Over time, you’ll develop a mental model for when to lean on automation and when to lean in as a human—especially for judgment, empathy, and creative synthesis.

🔗 Website and Social Links:

Please visit Sairam Sundaresan’s website to subscribe to his newsletter.

Sairam Sundaresan’s Website

📢 Call-to-Action: Want to explore AI in a way that feels clear and approachable? Connect with Sairam Sundaresan and check out his book AI for the Rest of Us. You’ll find practical insights, real-world examples, and guidance on how to use AI responsibly in work, learning, and life. Visit Sairam’s website to learn more and access resources designed to help you confidently navigate the AI era.

Photo by Google DeepMind: https://www.pexels.com/photo/an-artist-s-illustration-of-artificial-intelligence-ai-this-illustration-depicts-language-models-which-generate-text-it-was-created-by-wes-cockx-as-part-of-the-visualising-ai-project-l-18069693/