Why We Built Lazuli

A Brief History

I’ve had the privilege of working with AI in education for over a dozen years (yes, it’s been around in edtech that long!). Along the way, I’ve collaborated with teams exploring how to grade essays, recommend personalized content, predict learning outcomes and intervene proactively, and power tutoring chatbots (remember IBM Watson?).

As data processing improved and transformers transformed (sorry, couldn’t resist) what was possible, long-imagined capabilities suddenly came within reach. Ideas I’d been chasing for years—once impractical—were finally achievable at scale.

That’s when my mind went to the learning design process itself. What if we could build an AI-powered engine to support learning designers, SMEs, and instructors in doing the work we’ve always known has the biggest impact on learners, but has traditionally been the hardest to do?

Too often, designers are constrained by the high cost of generating assessment items, media, or content revisions. As a result, they lack the time for high-quality up-front analysis, for applying research-based principles, or for truly connecting with and differentiating for learners.

So a couple of years ago, I decided to go step by step through the learning design process we teach students, drawing on amazing resources like The Systematic Design of Instruction and The Learning Engineering Toolkit, and started working through AI-based workflows for each step.


Early Lessons

A few things quickly became clear:

  • Granularity matters. If you break design tasks into small enough steps, LLMs can do an amazing job—though not for everything.

  • Examples fuel performance. Stronger prompts require better examples, which means sourcing open datasets and OERs wherever possible.

  • Context is king. A real system needs to manage the messy web of interviews, assessments, job tasks, brand/style requirements, and authoring guidelines. That meant building RAG databases and workflows that could surface the right material at the right time.

  • Evaluation must be explicit. At first, we had to show the system what “good” looked like by hand, but we knew this could evolve through feedback loops.

  • Adoption depends on fit. Whatever we built had to plug into existing learning design processes—or it simply wouldn’t scale.
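The retrieval pattern behind the "context is king" lesson can be sketched minimally. This is a toy illustration, not Lazuli's actual pipeline: the hand-made vectors stand in for a real embedding model, and the artifact names are hypothetical.

```python
# Hedged sketch of the retrieval step in a RAG workflow: given a query
# vector, surface the stored design artifacts most similar to it.
# Embeddings here are toy hand-made vectors standing in for a real
# embedding model; the artifact corpus is invented for illustration.
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Toy "database" of design artifacts with pretend embeddings.
artifacts = {
    "SME interview notes": [0.9, 0.1, 0.0],
    "Brand style guide":   [0.0, 0.8, 0.2],
    "Job task analysis":   [0.7, 0.2, 0.1],
}

def retrieve(query_vec: list[float], k: int = 2) -> list[str]:
    """Return the k artifact names most similar to the query vector."""
    ranked = sorted(
        artifacts,
        key=lambda name: cosine(query_vec, artifacts[name]),
        reverse=True,
    )
    return ranked[:k]
```

A real system would add chunking, metadata filters (project, brand, audience), and freshness rules on top of this similarity core — surfacing "the right material at the right time" is mostly about those layers, not the math.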

These insights shaped the prototypes we built and led us to take the next step.


Building Forward

Over the past two years, we’ve experimented with workflows, agents, evaluation methods, integrations, and interfaces. Each week, I’m blown away by what’s now possible—both in speed and in quality.

By focusing not just on content creation but on AI-first learning engineering, we’re aiming to make agile, research-based design possible at scale. The result is Lazuli: a tool we hope will help teams co-design faster, iterate based on evidence, and ultimately impact learning outcomes worldwide.


Why a Nonprofit?

My career has taken me through both philanthropy and venture capital, and I’ve seen the promise and pitfalls of each. When I started Lazuli, one thing was non-negotiable: access.

This tool needs to be available to every learning designer—regardless of resources, hardware, or geography. Our model is simple: those who can pay help us extend access to those who can’t.

Equally important, we believe success requires an alliance: researchers, developers, educators, governments, and learners working together. We’re committed to making our learnings public and free of the pressure to generate shareholder returns.

That freedom has shaped everything:

  • No gating features behind upsells.

  • Transparent pricing.

  • Long-term sustainability through product use, while grants fuel collaborative research.

  • A focus on helping designers retool for an AI-powered workforce—rather than be replaced by it.


What Makes Lazuli Different?

So how do we stand apart from generic content-generation tools or custom GPTs? A few things:

  • AI-to-widget design. Our system internalizes a design language, outputting interactions directly into a usable UI—true learning components, not just text.

  • LMS-ready delivery. Last-mile matters. We’re building integrations (currently via LTI, with cmi5 and others ahead) so designs flow directly into the systems you already use.

  • Competency-based data. Using a TLA-aligned architecture, Lazuli creates xAPI statements for interactions and updates mastery estimates using Bayesian inference—backed by auditable evidence.

  • In-design evaluation. Our agents assess alignment, accessibility, and learning principles as you build—no more burying analyses in drawers; alignment to the problem-to-be-solved stays visible throughout the development process.

  • Collaborative authoring. Teams can co-design, share drafts, comment, and track revisions—syncing with project management tools in the future.

  • Learning science at the core. We’re building datasets of principles, strategies, and patterns, alongside public datasets in education research.

  • Personalization (coming soon). Early results show strong promise for differentiated learner experiences, though we’re moving carefully to keep humans in charge of direction.

  • Truly agile design. With rapid, data-driven cycles of discovery and iteration, we’re unlocking a new pace of evidence-based learning design (more on that in future posts!).
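As a concrete illustration of the competency-based data bullet above, here is a classic Bayesian Knowledge Tracing update — one standard way to revise a mastery estimate from scored interaction evidence (such as an xAPI statement's `result.success`). The parameter values are illustrative defaults, not Lazuli's actual model.

```python
# Hedged sketch: a classic Bayesian Knowledge Tracing (BKT) update,
# showing how a mastery estimate can be revised from one scored
# interaction. Parameter values are illustrative, not Lazuli's model.

def bkt_update(p_mastery: float, correct: bool,
               p_slip: float = 0.1,   # P(wrong answer despite mastery)
               p_guess: float = 0.2,  # P(correct answer without mastery)
               p_learn: float = 0.15  # P(learning during the interaction)
               ) -> float:
    """Return the posterior P(mastery) after observing one response."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # Account for learning that may occur during the interaction itself.
    return posterior + (1 - posterior) * p_learn

# Example: a learner with prior P(mastery) = 0.3 answers correctly,
# and the estimate rises to roughly 0.71.
p = bkt_update(0.3, correct=True)
```

Because each update is a simple Bayes step over logged evidence, every mastery estimate can be traced back to the individual statements that produced it — which is what makes the evidence auditable.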


Join Us

That’s why we’re so excited to keep growing—designing and refining new agents, evaluators, interactives, and integrations.

If you share our vision of putting both data and heart at the center of learning’s future, come join our alliance.
