Dec. 31, 2025

How to Use Data to Improve Instructional Design

Good design starts with a clear goal and ends with a real-world result. We walk you through a practical, human-centered approach to data-driven instructional design that turns scattered metrics into confident, ethical decisions. From writing a sharp creative brief and instrumenting your learning ecosystem to analyzing patterns and testing targeted fixes, you'll get a repeatable playbook built to reduce risk and improve outcomes without burning time or trust.

Ready to turn evidence into impact? Follow the flow, try one small experiment this week, and tell us what you learn. If this guide helped, subscribe, share with a teammate, and leave a review so more designers can build learning that truly works.

🔗 Episode Links:

Please check out the resource mentioned in the episode. Enjoy!

How to Use Data to Improve ID Infographic

Send Jackie a Text

Join PodMatch!
Use the link to join PodMatch, a place for hosts and guests to connect.

Disclaimer: This post contains affiliate links. If you make a purchase, I may receive a commission at no extra cost to you.

Support the show

💟 Designing with Love + allows you to support the show by keeping the mic on and the ideas flowing. Click on the link above to provide your support.

☕ Buy Me a Coffee is another way you can support the show, either as a one-time gift or through a monthly subscription.

๐Ÿ—ฃ๏ธ Want to be a guest on Designing with Love? Send Jackie Pelegrin a message on PodMatch, here: Be a guest on the show

๐ŸŒ Check out the show's website here: Designing with Love

📱 Send a text to the show by clicking the Send Jackie a Text link above.

๐Ÿ‘๐Ÿผ Please make sure to like and share this episode with others. Here's to great learning!


00:00 - Framing Data-Driven Design

00:51 - Set The Brief And Define Success

02:15 - Collect The Right Data Ethically

03:35 - Diagnose With Patterns Then Causes

04:56 - Iterate Small And Test Fast

06:08 - Share Wins And Systemize Practice

07:17 - Client Case: Onboarding Redesign

09:13 - Your Next Sketch: Take Action

10:04 - Studio Recap And Resources

11:03 - Deming Quote And Human-Centered Close

11:35 - Gratitude And Support Options

WEBVTT

00:00:00.800 --> 00:00:04.240
Hello, and welcome to the Designing with Love Podcast.

00:00:04.240 --> 00:00:12.240
I am your host, Jackie Pelegrin, where my goal is to bring you information, tips, and tricks as an instructional designer.

00:00:12.240 --> 00:00:17.120
Hello, instructional designers and educators.

00:00:17.120 --> 00:00:22.079
Welcome to episode 77 of the Designing with Love Podcast.

00:00:22.079 --> 00:00:33.520
Today, we're diving into data-driven design decisions, how to use analytics and feedback to improve learning experiences with confidence, clarity, and care.

00:00:33.520 --> 00:00:44.159
By the end, you'll have a simple flow for defining success, collecting the right metrics, testing small changes, and sharing results with your team.

00:00:44.159 --> 00:00:51.280
So, grab your notebook, a cup of coffee, and settle in as we explore this topic together.

00:00:51.280 --> 00:00:57.359
Before we start sketching, every strong design begins with a clear creative brief.

00:00:57.359 --> 00:00:59.039
Let's write yours.

00:00:59.039 --> 00:01:00.560
Set the brief.

00:01:00.560 --> 00:01:03.359
Define success before you measure.

00:01:03.359 --> 00:01:06.719
Data only matters in relation to a goal.

00:01:06.719 --> 00:01:11.359
What change do we want to see in learner behavior or performance?

00:01:11.359 --> 00:01:12.719
What's the goal?

00:01:12.719 --> 00:01:18.480
Align metrics to meaningful learning and business outcomes, not vanity numbers.

00:01:18.480 --> 00:01:20.159
What to include?

00:01:20.159 --> 00:01:24.480
First, state your primary outcome in plain language.

00:01:24.480 --> 00:01:32.799
For example, increase task accuracy, reduce time to proficiency, or improve completion within a defined window.

00:01:32.799 --> 00:01:38.959
Next, set your minimum success criteria as the baseline win you must achieve.

00:01:38.959 --> 00:01:44.560
Then, set your stretch goal as the aspirational level you would love to reach.

00:01:44.560 --> 00:02:02.400
After that, choose two or three metrics at most that map directly to your outcome, such as completion rate, item level accuracy, time on task, evidence of on the job application, number of support tickets, or manager observations.

00:02:02.400 --> 00:02:08.400
Finally, remove any metric that does not inform a decision you are prepared to make.
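
For readers who want to capture their brief in a structured form, here is a minimal sketch of one way to encode it as plain data. The field names, the example outcome, and the thresholds are illustrative assumptions, not something prescribed in the episode.

```python
# A minimal sketch of a measurement brief as plain data.
# Field names, the example outcome, and thresholds are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class MeasurementBrief:
    primary_outcome: str    # the change you want, in plain language
    minimum_success: str    # the baseline win you must achieve
    stretch_goal: str       # the aspirational level
    metrics: list = field(default_factory=list)  # two or three at most

brief = MeasurementBrief(
    primary_outcome="Reduce time to proficiency for new hires",
    minimum_success="Median time to proficiency of 8 days or less",
    stretch_goal="Median time to proficiency of 7 days or less",
    metrics=["completion_rate", "item_level_accuracy", "time_to_proficiency"],
)

# Drop any metric that does not inform a decision you are prepared to make.
assert len(brief.metrics) <= 3, "trim to the metrics that map to the outcome"
```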

00:02:08.400 --> 00:02:15.520
With your brief in place, let's lay out the tools on your design table so you can actually see what's happening.

00:02:15.520 --> 00:02:17.360
Lay out the tools.

00:02:17.360 --> 00:02:19.520
Collect the right data.

00:02:19.520 --> 00:02:22.879
If it isn't instrumented, we're guessing.

00:02:22.879 --> 00:02:24.240
What's the goal?

00:02:24.240 --> 00:02:32.240
Capture reliable signals, such as behavioral, attitudinal, and qualitative data, while protecting learner privacy.

00:02:32.240 --> 00:02:34.080
What to include?

00:02:34.080 --> 00:02:47.360
First, track behavioral analytics by capturing learning management system events and experience API events, reviewing quiz item analysis, and monitoring time on task.
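
For the curious, here is a minimal sketch of what capturing one experience API (xAPI) event can look like. The statement shape (actor, verb, object) and the version header follow the xAPI specification; the endpoint URL, credentials, and course identifiers are placeholders, not a real system.

```python
# A minimal sketch of sending one xAPI statement to a Learning Record Store.
# The endpoint URL and credentials below are placeholders.
import requests  # assumes the 'requests' package is installed

statement = {
    "actor": {"mbox": "mailto:learner@example.com", "name": "Sample Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "id": "https://example.com/courses/onboarding/module-1",
        "definition": {"name": {"en-US": "Onboarding Module 1"}},
    },
}

response = requests.post(
    "https://lrs.example.com/xapi/statements",      # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},  # required by the xAPI spec
    auth=("lrs_user", "lrs_password"),              # placeholder credentials
    timeout=10,
)
response.raise_for_status()
```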

00:02:47.360 --> 00:02:57.919
Next, add feedback signals by using one-minute pulse surveys, short exit surveys, and a direct prompt that asks what was least clear.

00:02:57.919 --> 00:03:08.800
Then gather qualitative insights by running five quick think aloud usability tests or scheduling focused 15-minute learner interviews.

00:03:08.800 --> 00:03:22.800
After that, be explicit about ethics by explaining what you collect and why, minimizing personally identifiable information, aggregating results when possible, and storing data securely.

00:03:22.800 --> 00:03:34.879
Finally, maintain a data dictionary that lists each metric with its definition, its source system, its refresh cadence, and the person responsible for it.
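
A data dictionary does not need special tooling. Here is a minimal sketch of one as plain data, following the fields named above; the example metrics, sources, and owners are assumptions for illustration.

```python
# A minimal sketch of a data dictionary: one entry per metric, with the
# definition, source system, refresh cadence, and owner named in the episode.
data_dictionary = [
    {
        "metric": "completion_rate",
        "definition": "Percent of enrolled learners who finish the module",
        "source_system": "LMS event log",
        "refresh_cadence": "daily",
        "owner": "Instructional design team",
    },
    {
        "metric": "item_6_accuracy",
        "definition": "Percent of first attempts answering quiz item 6 correctly",
        "source_system": "LMS quiz report",
        "refresh_cadence": "weekly",
        "owner": "Assessment lead",
    },
]

for entry in data_dictionary:
    print(f"{entry['metric']}: {entry['definition']} "
          f"({entry['source_system']}, {entry['refresh_cadence']}, "
          f"owner: {entry['owner']})")
```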

00:03:34.879 --> 00:03:39.520
Now that the tools are out, it's time for a constructive critique.

00:03:39.520 --> 00:03:42.479
Let's review the draft with clear eyes.

00:03:42.479 --> 00:03:44.159
Critique the draft.

00:03:44.159 --> 00:03:47.680
Diagnose with simple analysis flow.

00:03:47.680 --> 00:03:50.400
Patterns first, then causes.

00:03:50.400 --> 00:03:51.599
What's the goal?

00:03:51.599 --> 00:03:56.159
Move from scattered data to clear, testable hypotheses.

00:03:56.159 --> 00:03:57.680
What to include?

00:03:57.680 --> 00:04:02.960
First, run the pattern, probe, then propose loop from start to finish.

00:04:02.960 --> 00:04:14.639
Next, in the pattern step, flag a notable trend, such as drop off at slide seven, or a quiz item that 62% of learners miss.

00:04:14.639 --> 00:04:24.319
Then in the probe step, validate your hunch with qualitative checks like screen recordings, learner comments, or a quick five user test.

00:04:24.319 --> 00:04:30.879
After that, in the propose step, craft a change that targets the most likely root cause.

00:04:30.879 --> 00:04:48.000
Finally, review item discrimination and common wrong-answer patterns to spot confusing stems or distractors, and scan quick visuals such as a start-to-finish funnel, click heat maps, and score distributions by cohort or role.
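
If you want to compute item statistics yourself, here is a minimal sketch of classical item analysis: difficulty (the share answering correctly) and a simple upper-versus-lower discrimination index. The learner scores are hypothetical.

```python
# A minimal sketch of classical item analysis: difficulty plus a simple
# upper-lower discrimination index. All scores below are hypothetical.
def item_analysis(total_scores, item_correct, group_frac=0.27):
    """total_scores: quiz totals per learner; item_correct: 1/0 per learner."""
    n = len(total_scores)
    difficulty = sum(item_correct) / n  # share answering the item correctly
    # Rank learners by total score, then compare top vs bottom groups.
    order = sorted(range(n), key=lambda i: total_scores[i], reverse=True)
    k = max(1, int(n * group_frac))
    upper = [item_correct[i] for i in order[:k]]
    lower = [item_correct[i] for i in order[-k:]]
    discrimination = sum(upper) / k - sum(lower) / k
    return difficulty, discrimination

# Hypothetical data: 10 learners' quiz totals and whether they got item 6 right.
totals = [9, 8, 8, 7, 6, 5, 5, 4, 3, 2]
item6  = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
p, d = item_analysis(totals, item6)
print(f"difficulty={p:.2f}, discrimination={d:.2f}")  # low d flags a confusing item
```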

00:04:48.000 --> 00:04:51.839
With the critique in hand, let's iterate the mock-up.

00:04:51.839 --> 00:04:55.920
Small, targeted edits beat a total redesign.

00:04:55.920 --> 00:04:57.600
Iterate the mock-up.

00:04:57.600 --> 00:05:00.079
Design small, test fast.

00:05:00.079 --> 00:05:02.480
Ship improvements in slices.

00:05:02.480 --> 00:05:03.920
What's the goal?

00:05:03.920 --> 00:05:08.879
Reduce risk and learn faster through small, measurable changes.

00:05:08.879 --> 00:05:10.639
What to include?

00:05:10.639 --> 00:05:23.839
First, start with low lift fixes by shortening an overlong video, clarifying instructions, chunking practice, adding a worked example, or rewriting a confusing distractor.

00:05:23.839 --> 00:05:35.199
Next, when you run an A/B split test, change only one variable, such as the title, the sequence, or the activity type, and keep everything else constant.
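
When you do compare two versions, a two-proportion z-test is one common way to judge whether a difference in completion rate is more than noise. Here is a minimal sketch using only the standard library; the counts are hypothetical.

```python
# A minimal sketch of checking an A/B split on completion rate with a
# two-proportion z-test. All counts below are hypothetical.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Version A: original sequence; version B: reordered sequence (one variable changed).
z, p = two_proportion_z(success_a=31, n_a=50, success_b=41, n_b=50)
print(f"z={z:.2f}, p={p:.3f}")  # a small p suggests the change, not chance, moved the metric
```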

00:05:35.199 --> 00:05:44.639
Then pilot your change with one team or with roughly 10 to 15 learners and measure before and after performance on your key metric.

00:05:44.639 --> 00:05:49.439
After that, set guardrails by defining a clear rollback rule.

00:05:49.439 --> 00:05:55.279
For example, if completion drops by more than 15%, revert the change.

00:05:55.279 --> 00:06:06.560
Finally, log each tweak in a design change log that records what changed, why you changed it, the expected impact, the owner, and the date.
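
Here is a minimal sketch that pairs the design change log with the rollback guardrail described above. The 15% threshold and the log fields come from the episode; the example entry and completion values are hypothetical.

```python
# A minimal sketch of a design change log plus a rollback guardrail.
from datetime import date

change_log = []

def log_change(what, why, expected_impact, owner):
    """Record what changed, why, the expected impact, the owner, and the date."""
    change_log.append({
        "what": what,
        "why": why,
        "expected_impact": expected_impact,
        "owner": owner,
        "date": date.today().isoformat(),
    })

def should_roll_back(baseline_completion, current_completion, max_drop=0.15):
    """Rollback rule from the episode: revert if completion drops by more than 15%."""
    return current_completion < baseline_completion * (1 - max_drop)

log_change(
    what="Split 9-minute video into three 3-minute clips",   # hypothetical entry
    why="High mid-video drop-off observed",
    expected_impact="Higher video completion",
    owner="J. Designer",
)
print(should_roll_back(baseline_completion=0.80, current_completion=0.64))  # True -> revert
```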

00:06:06.560 --> 00:06:07.759
Great.

00:06:07.759 --> 00:06:14.160
Now let's mount the revised piece and host a quick studio walkthrough so everyone can learn from it.

00:06:14.160 --> 00:06:15.759
Exhibit the work.

00:06:15.759 --> 00:06:19.920
Share wins, learn publicly, and systemize.

00:06:19.920 --> 00:06:22.879
Evidence is a team sport.

00:06:22.879 --> 00:06:24.480
What's the goal?

00:06:24.480 --> 00:06:30.959
Turn isolated fixes into repeatable, team-wide practice rooted in equity.

00:06:30.959 --> 00:06:32.560
What to include?

00:06:32.560 --> 00:06:43.680
First, publish a one-page learning brief that captures the problem, a concise data snapshot, the change you made, the outcome, and the next step.

00:06:43.680 --> 00:06:51.920
Next, host a 20-minute evidence roundup that highlights one success, one surprise, and one next experiment to try.

00:06:51.920 --> 00:06:59.519
Then, templatize your survey items, your analytics dashboard views, and your checklists for new builds.

00:06:59.519 --> 00:07:08.959
After that, run an equity check by comparing outcomes across cohorts, roles, and devices to ensure gains serve everyone.
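
An equity check can be as simple as grouping your results. Here is a minimal sketch with pandas, assuming the library is installed; the cohort, device, and outcome values are hypothetical.

```python
# A minimal sketch of an equity check: compare outcomes across cohorts and
# devices so an average "win" isn't hiding a gap. All data below is hypothetical.
import pandas as pd  # assumes pandas is installed

results = pd.DataFrame({
    "cohort": ["A", "A", "A", "B", "B", "B"],
    "device": ["desktop", "mobile", "mobile", "desktop", "desktop", "mobile"],
    "passed": [1, 1, 0, 1, 1, 1],
    "time_on_task_min": [12, 18, 22, 11, 13, 17],
})

by_group = results.groupby(["cohort", "device"]).agg(
    pass_rate=("passed", "mean"),
    avg_time=("time_on_task_min", "mean"),
    n=("passed", "size"),
)
print(by_group)  # large gaps between rows are a prompt to probe, not to celebrate
```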

00:07:08.959 --> 00:07:17.040
Finally, capture lessons learned in a shared repository so the practice continues even when team members change.

00:07:17.040 --> 00:07:22.240
Let's do a quick client review to see how this plays out on a real project.

00:07:22.240 --> 00:07:25.360
Real life example, client review.

00:07:25.360 --> 00:07:26.879
Here's the scenario.

00:07:26.879 --> 00:07:33.839
There's a new-hire software onboarding with one 45-minute module plus a 10-question quiz.

00:07:33.839 --> 00:07:35.199
What's the goal?

00:07:35.199 --> 00:07:44.160
Reduce time to first ticket resolution from 10 days to 7 days, with eight days set as your minimum success criteria.

00:07:44.160 --> 00:07:45.839
What to include?

00:07:45.839 --> 00:07:56.160
First, instrument the experience by tracking video completion, capturing item level quiz data, and monitoring help desk tickets during the first 30 days.

00:07:56.160 --> 00:08:01.279
Add a one-question exit poll that asks what was least clear.

00:08:01.279 --> 00:08:20.079
Next, read the signals by noting a 58% drop-off on a 9-minute API video, observing that quiz item 6 on authentication is missed 64% of the time with the same distractor, and collecting comments that point to excessive jargon.
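
Reading a drop-off signal like this usually starts with a simple start-to-finish funnel. Here is a minimal sketch; the step names and counts are hypothetical stand-ins for the numbers in the scenario.

```python
# A minimal sketch of a start-to-finish funnel from ordered step counts.
# Step names and counts are hypothetical stand-ins.
steps = [
    ("started_video", 120),
    ("reached_minute_5", 76),
    ("finished_video", 50),   # roughly a 58% drop-off from start, like the scenario
    ("passed_item_6", 41),
]

previous = steps[0][1]
for name, count in steps:
    print(f"{name:17} {count:4}  "
          f"{count / steps[0][1]:6.1%} of starters, "
          f"{count / previous:6.1%} of previous step")
    previous = count
```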

00:08:20.079 --> 00:08:34.559
Then make the change by splitting the API video into three three-minute clips with captions and a small inline glossary chip, inserting a worked example before item six, and rewriting the distractor.

00:08:34.559 --> 00:08:42.159
Next, pilot one cohort with 24 learners against a control group with 26 learners.

00:08:42.159 --> 00:08:51.519
After that, evaluate outcomes by aiming for higher completion, higher item accuracy, and lower time to first resolution.

00:08:51.519 --> 00:08:58.720
For example, 7.4 days in the pilot compared to 9.6 days in the control group.
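
To judge whether a pilot-versus-control gap like this is likely real, one option is Welch's t-test on the per-learner day counts. Here is a minimal sketch using scipy (assuming it is installed); the day values are hypothetical, sized like the 24-learner pilot and 26-learner control in the scenario.

```python
# A minimal sketch of comparing pilot vs control time-to-first-resolution with
# Welch's t-test. The per-learner day counts below are hypothetical.
from scipy import stats  # assumes scipy is installed

pilot_days = [7, 8, 6, 7, 9, 7, 8, 6, 7, 8, 7, 9,
              6, 7, 8, 7, 7, 8, 6, 9, 7, 8, 7, 7]          # 24 learners
control_days = [10, 9, 11, 9, 10, 8, 12, 9, 10, 11, 9, 10, 8,
                10, 9, 11, 10, 9, 12, 10, 9, 10, 11, 9, 10, 8]  # 26 learners

t_stat, p_value = stats.ttest_ind(pilot_days, control_days, equal_var=False)
print(f"pilot mean={sum(pilot_days)/len(pilot_days):.1f} days, "
      f"control mean={sum(control_days)/len(control_days):.1f} days, "
      f"p={p_value:.4f}")
```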

00:08:58.720 --> 00:09:08.080
Finally, close the loop by publishing a learning brief and planning the next test that compares a printable job aid with inline tips.

00:09:08.080 --> 00:09:10.879
Ready to put pencil to paper?

00:09:10.879 --> 00:09:12.799
Here's your next sketch.

00:09:12.799 --> 00:09:15.919
Call to action, your next sketch.

00:09:15.919 --> 00:09:17.759
Now it's your turn.

00:09:17.759 --> 00:09:21.039
Pick one active course or module this week.

00:09:21.039 --> 00:09:22.399
What's the goal?

00:09:22.399 --> 00:09:27.759
Move from listening to action with a single manageable experiment.

00:09:27.759 --> 00:09:29.919
What to include?

00:09:29.919 --> 00:09:33.279
First, write down one outcome you care about.

00:09:33.279 --> 00:09:37.679
Next, select two metrics that truly reflect that outcome.

00:09:37.679 --> 00:09:47.039
Then, run one small experiment, such as shortening a video, clarifying one quiz item, or adding a worked example.

00:09:47.039 --> 00:09:53.679
After that, record the change in your design change log and share a one-page learning brief with your team.

00:09:53.679 --> 00:09:59.279
Finally, send me your mini case so I can feature a few in a future episode.

00:09:59.279 --> 00:10:04.000
Before we close, here's a 30-second studio recap.

00:10:04.000 --> 00:10:05.440
Set the brief.

00:10:05.440 --> 00:10:10.799
Define the outcome, your minimum success criteria, and your stretch goal.

00:10:10.799 --> 00:10:21.039
Lay out the tools, capture LMS events, experience API events, quick surveys, and short interviews with ethics in mind.

00:10:21.039 --> 00:10:22.720
Critique the draft.

00:10:22.720 --> 00:10:28.320
Run the pattern, probe, then propose loop to find causes, not just symptoms.

00:10:28.320 --> 00:10:36.480
Iterate the mock-up, ship small edits, run an A/B split test on one variable, and set rollback rules.

00:10:36.480 --> 00:10:38.080
Exhibit the work.

00:10:38.080 --> 00:10:46.399
Publish a one-page learning brief, hold a monthly evidence roundup, and check equity so improvements lift everyone.

00:10:46.399 --> 00:10:56.399
To make this easy to use and incorporate into your projects, I've put together an interactive infographic that walks you through the exact flow step by step.

00:10:56.399 --> 00:11:02.720
You'll find it linked in the show notes and in the companion blog post on the Designing with Love website.

00:11:02.720 --> 00:11:07.200
As I conclude this episode, I would like to share an inspiring quote by W.

00:11:07.200 --> 00:11:16.720
Edwards Deming, a well-known statistician and quality pioneer who showed organizations how to use data for continuous improvement.

00:11:16.720 --> 00:11:21.200
In God We Trust, all others must bring data.

00:11:21.200 --> 00:11:24.720
Remember, behind every data point is a learner.

00:11:24.720 --> 00:11:28.559
We use evidence to serve people, not just dashboards.

00:11:28.559 --> 00:11:34.960
Until next time, keep your goals clear, your data clean, and your iterations small.

00:11:34.960 --> 00:11:39.120
Thank you for taking some time to listen to this podcast episode today.

00:11:39.120 --> 00:11:41.360
Your support means the world to me.

00:11:41.360 --> 00:11:50.159
If you'd like to help keep the podcast going, you can share it with a friend or colleague, leave a heartfelt review, or offer a monetary contribution.

00:11:50.159 --> 00:11:55.919
Every act of support, big or small, makes a difference, and I'm truly thankful for you.