April 22, 2026

How to Use AI Safely in Instructional Design: 3 Risk Tiers Explained

In the rapidly evolving landscape of instructional design, the integration of AI tools has transitioned from a novel experiment to an essential component of everyday workflows. However, this shift brings significant responsibility. How can you harness the power of AI while protecting sensitive information and maintaining learner trust? In this post, I’ll outline the three risk tiers associated with using AI tools and provide practical guidance to ensure you can use these technologies confidently and responsibly.

Understanding the Shift: Why AI Risk Management Matters

AI tools now touch learner data, internal documents, and even proprietary processes, which makes using them a matter of risk management, not just productivity. As an instructional designer, your role is to protect learner trust and align AI solutions with real-world contexts. This post will help you categorize your AI usage and put effective guardrails in place.

The Three Risk Tiers of AI Usage

To navigate the complexities of AI use in instructional design, we can categorize AI tasks into three distinct risk tiers:

Tier One: Public and Low Risk

Tier one tasks involve low-risk activities that are generally safe for AI use. Examples include:

  • Public blog posts: Sharing general insights or knowledge.
  • Brainstorming: Generating ideas without sensitive context.
  • Generic templates: Creating frameworks that do not involve confidential information.

These tasks can be done confidently as long as you avoid sharing any confidential data. Always ask yourself: "Am I sharing information that could be sensitive or proprietary?"

Tier Two: Internal and Sensitive

Tier two tasks are more sensitive and require clear policies and guardrails. They involve:

  • Drafting training materials: Developing content based on internal processes.
  • Internal SOPs: Writing standard operating procedures that could reveal proprietary information.
  • Feedback collection: Handling learner feedback that includes personal data.

For tier two, use approved tools and be explicit about what can and cannot be shared. This is the tier where most mistakes happen, because the content often does not look sensitive at first glance.

Tier Three: Regulated and Personal Data

Tier three involves the highest level of sensitivity, where you must exercise extreme caution. Examples include:

  • Personally identifiable information (PII): Any data that can identify individuals.
  • HR data: Confidential employee information.
  • Health records: Any health-related information that is regulated by law.

For tasks in this tier, avoid using general AI tools. Only use secure and approved systems to protect sensitive information. A good rule of thumb is: if in doubt, don’t paste it into a public tool.
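
If your team wants to turn these tiers into a shared reference or a lightweight piece of tooling, a small sketch like the one below can help. The tier labels, examples, and guidance are taken from this post; the structure and names (RISK_TIERS, guidance_for) are purely illustrative, not a standard.

```python
# A minimal sketch of the three-tier checklist as a lookup table a team could
# adapt for its own intake form or review tooling. Tier labels and examples
# follow this post; the structure itself is illustrative.

RISK_TIERS = {
    1: {
        "label": "Public and low risk",
        "examples": ["public blog posts", "brainstorming", "generic templates"],
        "guidance": "Generally safe for AI use; share no confidential data.",
    },
    2: {
        "label": "Internal and sensitive",
        "examples": ["draft training materials", "internal SOPs", "learner feedback"],
        "guidance": "Approved tools only; be explicit about what can be shared.",
    },
    3: {
        "label": "Regulated and personal data",
        "examples": ["PII", "HR data", "health records"],
        "guidance": "Secure, approved systems only; never a public tool.",
    },
}

def guidance_for(tier: int) -> str:
    """Return the handling guidance for a given risk tier (1, 2, or 3)."""
    info = RISK_TIERS[tier]
    return f"Tier {tier} ({info['label']}): {info['guidance']}"

if __name__ == "__main__":
    for tier in RISK_TIERS:
        print(guidance_for(tier))
```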

The Paste Test: A Quick Check for Safety

Before using AI tools, implement the "paste test" to determine whether the information can be shared safely. Ask yourself:

  • Would I paste this into a public website?
  • Would I feel comfortable if this information were shared in a meeting?
  • Is this content free of sensitive or proprietary information?

If the answer to any of these is no, or you are not sure, treat the content as tier two or three and keep it out of general AI tools. Instead, describe the scenario in general terms, remove identifiers, and use placeholders for sensitive content. For example, instead of pasting an internal onboarding document, you might say, "Create a generic onboarding outline for a customer support role."
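
For teams that automate part of their prompt workflow, here is an illustrative Python sketch of that "remove identifiers, use placeholders" step. The email pattern and the scrub function are assumptions for demonstration only; a simple regex is no substitute for an approved redaction tool, especially for tier three content.

```python
import re

# Illustrative only: swap obvious identifiers for placeholders before a prompt
# goes anywhere near a general-purpose AI tool. The email pattern and name list
# are assumptions; real PII detection belongs in an approved, secure system.

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrub(prompt: str, known_names: list[str]) -> str:
    """Replace email addresses and listed names with generic placeholders."""
    cleaned = EMAIL_PATTERN.sub("[EMAIL]", prompt)
    for name in known_names:
        cleaned = cleaned.replace(name, "[LEARNER]")
    return cleaned

raw = "Summarize feedback from Jordan Lee (jordan.lee@example.com) on the onboarding module."
print(scrub(raw, known_names=["Jordan Lee"]))
# -> Summarize feedback from [LEARNER] ([EMAIL]) on the onboarding module.
```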

Practical Application: Your Checkpoint Challenge

To reinforce these concepts, create a document called "AI Paste Test" and jot down the three questions above. Use that checklist at least once this week before prompting any AI tool, and build the habit of labeling each task as tier one, two, or three before you start.

Conclusion

Navigating the landscape of AI tools in instructional design requires a nuanced understanding of risk management. By categorizing your tasks into three risk tiers, implementing the paste test, and creating a systematic approach to using AI responsibly, you can harness the benefits of technology while safeguarding learner trust. 

For further exploration, check out the interactive resource, the AI Guardrails Compass, a quick guide for deciding what is safe to share with AI tools. Remember, the key to successful AI integration is clarity and responsibility in your approach.

🔗 Episode Links

Please check out the resource mentioned in the episode. Enjoy!

The AI Guardrails Compass