April 22, 2026

AI Guardrails 101: Policies and Permissions


AI isn’t just a time-saver anymore; it’s a trust decision. When instructional designers paste the wrong thing into the wrong tool, the risk isn’t abstract: it can touch learner privacy, employee data, internal documents, proprietary processes, and even regulated content. In this episode, Jackie shares a simple way to stop guessing and start using AI with calm, clear guardrails you can actually follow.

We walk through three practical AI risk tiers with real examples: Tier 1 public and low risk, Tier 2 internal and sensitive, and Tier 3 regulated and personal data. Then we match those tiers to three AI tool types: public chatbots, enterprise-approved AI tools, and closed internal systems. The big takeaway is simple but powerful: the same prompt can be safe or unsafe depending on the tool and the data you feed it, which is why policies and permissions matter more than ever for responsible learning design.
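If it helps to see that matching laid out, here is a minimal sketch of the tier-to-tool pairing as a simple lookup table. The tier and tool labels paraphrase the episode; the function and its names are illustrative assumptions, not an official checklist from the show.

```python
# A minimal sketch of the episode's tier-to-tool matching, expressed as a
# lookup table. Labels paraphrase the episode; the code is illustrative.

ALLOWED_TOOLS = {
    "tier1_public":    {"public_chatbot", "enterprise_approved", "internal_closed"},
    "tier2_internal":  {"enterprise_approved", "internal_closed"},  # subject to policy
    "tier3_regulated": {"internal_closed"},  # secured, approved systems only
}

def is_safe_pairing(tier: str, tool: str) -> bool:
    """True if this data tier may be used with this tool type."""
    return tool in ALLOWED_TOOLS.get(tier, set())

# The same prompt can be safe or unsafe depending on the pairing:
print(is_safe_pairing("tier1_public", "public_chatbot"))    # True
print(is_safe_pairing("tier2_internal", "public_chatbot"))  # False
```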

To make this usable in the moment, Jackie teaches the "AI Paste Test": three fast questions to ask before you paste anything into an AI tool. She also shares a safer prompting workaround that keeps the speed benefits of AI while protecting confidentiality, plus a quick weekly challenge to build the habit. You’ll leave with practical AI governance language you can use with stakeholders and a clearer path to building trustworthy AI workflows in instructional design.
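For readers who think in code, here is a small, hypothetical sketch of the Paste Test as a pre-prompt checklist, together with the placeholder idea behind the safer prompting workaround. The three questions are quoted from the episode; the function names and the email-scrubbing pattern are illustrative assumptions, not a substitute for an approved tool or a human review.

```python
import re

# The three Paste Test questions, quoted from the episode.
QUESTIONS = (
    "Would I paste this into a public website?",
    "Would I feel okay if this showed up in tomorrow's meeting?",
    "Does this include names, IDs, internal processes, or anything proprietary?",
)

def passes_paste_test(public_ok: bool, meeting_ok: bool, sensitive: bool) -> bool:
    """Safe for a public tool only if the first two answers are yes and the
    third is no. Treat any 'not sure' as a failure (tier two or tier three)."""
    return public_ok and meeting_ok and not sensitive

def redact_for_prompting(text: str) -> str:
    """Crude illustration of the placeholder workaround: swap emails and an
    assumed company name for neutral tokens. Real redaction needs a human pass."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)
    return text.replace("Company X", "[COMPANY]")

# An internal SOP with names and screenshots fails the test:
print(passes_paste_test(public_ok=False, meeting_ok=True, sensitive=True))  # False
print(redact_for_prompting("Contact jane@companyx.com at Company X."))
```

If the test fails, the episode's workaround applies: describe the scenario generically and let placeholders stand in for anything identifying.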

If you found this helpful, follow or subscribe, share it with a designer friend, and leave a review so more educators and instructional designers can build with AI safely and confidently.

🔗 Episode Links

Please check out the resource mentioned in the episode. Enjoy!

The AI Guardrails Compass

Send Jackie a Text

Join PodMatch!
Use the link to join PodMatch, a place for hosts and guests to connect.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Support the show

💟 Designing with Love + lets you support the show, keeping the mic on and the ideas flowing. Click the link above to provide your support.

Buy Me a Coffee is another way you can support the show, either as a one-time gift or through a monthly subscription.

🗣️ Want to be a guest on Designing with Love? Send Jackie Pelegrin a message on PodMatch, here: Be a guest on the show

🌐 Check out the show's website here: Designing with Love

📱 Send a text to the show by clicking the Send Jackie a Text link above.

👍🏼 Please make sure to like and share this episode with others. Here's to great learning!


00:00 - Welcome & Series Setup

01:10 - AI Use Becomes A Trust Decision

02:44 - Three Risk Tiers For AI

04:06 - Matching Tools To Risk Levels

05:28 - The AI Paste Test

06:00 - Safe Prompt Workarounds & Example

07:13 - Weekly Challenge & Guardrails Resource

08:33 - Next Episode Preview & Closing

Welcome & Series Setup

Jackie Pelegrin

Hello, and welcome to the Designing with Love Podcast. I am your host, Jackie Pelegrin, and my goal is to bring you information, tips, and tricks as an instructional designer. Hello, instructional designers and educators. Welcome to episode 109 of the Designing with Love Podcast. As we continue through the 2026 lineup, we're also moving through the AI Ready Designer Series. Last time, we mapped what AI changes in ID so you can focus on where your value is growing. Today, I'm going to walk you through risk tiers, tool types, and a quick paste test so you can use AI confidently without guessing. So grab your notebook, a cup of coffee, and settle in, because we're going to make this feel doable. In this 12-episode AI Ready Designer Series, we'll move through five AI-ready checkpoints each time, so you always leave with something practical you can apply right away.

AI Use Becomes A Trust Decision

Alright, let's jump into checkpoint one. Here's the shift. AI tools have moved from cool experiment to everyday workflow, and that means policies and permissions matter more than ever. A few years ago, if you tried a tool on your own time, it probably didn't impact anyone else. Now AI can touch learner data, employee data, internal documents, proprietary processes, and regulated content. So the big shift is this: using AI isn't just a productivity choice, it's a risk and a trust decision. And when we handle risk well, we don't just protect the organization, we protect learners. Which brings us to what doesn't change. Even with AI, your role is still about protecting learner trust, designing responsibly, and aligning solutions to real-world context. Your stakeholder may ask you, can we use AI for this? But the better question is, should we use AI for this? And if yes, under what guardrails? Because when policies are unclear, people do one of two things: they avoid AI completely and fall behind, or they use it anyway, quietly, and hope it's fine. Your value is being the person who makes things safe, clear, and doable.

Three Risk Tiers For AI

Now let's get practical with risk tiers. Let's make this simple. I'll break this down into three risk tiers. Tier one, public and low risk. Examples include public blog posts, generic templates, brainstorming, and rewriting your own words. Generally safe? Yes, as long as you're not sharing confidential information. Tier two, internal and sensitive. Examples include internal standard operating procedures, drafting training materials, internal process documents, learner feedback, and project plans. Guardrails needed. This is where you need approved tools and clarity on what can be shared. Tier three, regulated and personal data. Examples include anything with personally identifiable information, HR data, student records, health information, legal and compliance information, and safety-critical procedures. Here's a default rule: don't paste into general AI tools. Use secured and approved systems only. Here's the important thing I want you to remember: most oops moments happen in tier two, because it doesn't feel super sensitive, but it often is.

Matching Tools To Risk Levels

So how do you quickly decide? That's where types of tools matter. This is where you level up. You stop being the person who says yes or no, and you become the person who says yes, with the right tool and the right inputs. Here are three types of tools in plain language. Tool type A, public AI tools. Here I'm referring to general chatbots where you don't control storage or training. These are better for tier one work, which is public and low risk. Make sure to avoid anything internal, sensitive, or personal. Tool type B, enterprise and approved AI tools. These are tools your organization explicitly approves, often with data protections. These are best for tier one and some tier two work, depending on policy. Tool type C, internal and closed systems. Here I'm referring to custom or secured internal AI systems or environments that don't expose data externally. These are best for tier two and tier three work when approved and configured properly. And here's the punchline: the same prompt can be safe or unsafe, depending on the tool and the data you feed it. That's why we need a quick check you can do in real time, known as the paste test.

The AI Paste Test

Here's the paste test. Before you paste anything into an AI tool, ask the following. Would I paste this into a public website? Would I feel okay if this showed up in tomorrow's meeting? Does this include names, IDs, internal processes, or anything proprietary? If the answer is no or I'm not sure, treat it as tier two or tier three, and don't paste it into a public tool.

Safe Prompt Workarounds & Example

Here's a quick safe prompt workaround. When you can't paste the real content, do this instead: describe the scenario generally, remove identifiers, and use placeholders. So let me share an example to bring this to life. Instead of, here's our internal onboarding doc for Company X, say, create a generic onboarding outline for a customer support role; include modules, practice, and a checklist. This keeps you moving without risking confidentiality. Here's a real-world moment that happens all the time. A designer is asked to make training from this internal document, and the easiest thing would be to paste the whole SOP into a chatbot. But instead, they run the paste test and realize this document includes internal process details and tool screenshots, which is tier two. So they do a safe workaround: summarize the process themselves in neutral bullet points with placeholders, then ask AI to generate a clean outline, practice scenarios, and a learner checklist. Same speed benefit, but way less risk.

Weekly Challenge & Guardrails Resource

Here's your checkpoint challenge for the week. Make a note on your desktop or in your notebook called AI Paste Test. Then write these three questions: Would I paste this on a public website? Would it be okay if this showed up in a meeting? And does this include private, personal, or proprietary information? Use it once this week before prompting. And if you want to go a step further, create a tiny tier-label habit: tier one, tier two, or tier three, before you use a tool. As we wrap up, I created a quick interactive resource for this episode called the AI Guardrails Compass. It's a simple guide you can click through in under five minutes to help you decide what's safe to share and what's not.

Next Episode Preview & Closing

If this episode helped you, please follow or subscribe and share it with a designer friend, because guardrails are how we keep AI useful and trustworthy. In the next episode, we'll keep building with episode 111, Data Literacy for IDs: The Basics You Need to Work Smarter. I'll break down the core data concepts instructional designers actually need so you can make better decisions, measure impact, and work more efficiently with AI and stakeholders. When you're clear on policies and permissions, AI becomes less stressful, because you stop guessing. You don't need perfect rules. You need simple guardrails you can actually follow. Before I conclude this episode, here's an inspiring quote by Benjamin Franklin: "An ounce of prevention is worth a pound of cure."
Thanks for spending time with me today. Until next time, keep it practical, keep it human, and keep designing with love. Thank you for taking some time to listen to this podcast episode today. Your support means the world to me. If you'd like to help keep the podcast going, you can share it with a friend or colleague, leave a heartfelt review, or offer a monetary contribution. Every act of support, big or small, makes a difference, and I'm truly thankful for you.