An ML System That Finds the
Right Path for Every Student.
Not Just the Average One.
Students with disabilities have always been handed the same curriculum paths as everyone else and told to make it work. We built a machine learning system that changes that. It reads each student's profile, their goals, their assessment scores, and their prior learning history, then prescribes the sequence most likely to get them to employment. Validated against real student data. Built to support continued federal funding.
Every student was getting
the same path.
None of them are the same student.
The educational system for students with disabilities has long operated on a fundamental assumption that does not hold up: that a standardized curriculum sequence, delivered the same way to every student, produces the best outcomes for all of them. The data says otherwise.
Students with disabilities present across an enormous range of profiles. Cognitive ability, prior educational history, assessment scores, career interests, learning preferences, and specific disability characteristics all interact in ways that determine which learning sequence will unlock employment-ready skills most effectively. A student who thrives on structured sequential learning needs a different path than one who builds understanding through contextual application. The same content delivered in the wrong order, at the wrong time, produces worse outcomes than no intervention at all.
The organization that received the NSF grant understood this. They had the content. They had the students. They had years of outcome data. What they needed was a system capable of reading all of it and prescribing the right path for each individual. At a scale no human advisor could manage. With a consistency no manual process could maintain.
"This was not an academic exercise. The model results would determine whether students got better outcomes and whether the organization received continued federal funding. Both depended on getting this right."
A recommendation
that knows
the whole student.
Most curriculum recommendation tools operate on a single dimension: assessment score, grade level, or subject interest. They miss the interaction effects between multiple factors that actually predict which learning sequence will work.
The system we built reads across every available dimension of a student's profile to understand not just where they are, but what kind of learner they are and which pathway is most likely to get them to employment-ready competency.
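For illustration, the profile dimensions described above could be gathered into a single structure along these lines. This is a minimal sketch only; the field names and types are hypothetical, not the organization's actual schema.

```python
from dataclasses import dataclass

@dataclass
class StudentProfile:
    """Hypothetical container for the profile dimensions the system reads."""
    assessment_scores: dict[str, float]    # e.g. {"reading": 0.72, "math": 0.64}
    prior_history: list[str]               # completed courses and modules
    career_interests: list[str]            # stated employment goals
    learning_preferences: list[str]        # e.g. ["sequential"] or ["contextual"]
    disability_characteristics: list[str]  # specific disability profile
    cognitive_measures: dict[str, float]   # cognitive-ability indicators
```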
We tested three architectures
against real student data.
The right ML approach for a problem like this is not obvious. It depends on the structure of the data, the size of the student population, and the nature of the outcome being predicted. We tested three architectures and selected the best-performing one based on rigorous validation against real student outcomes.
The first approach identifies the students in the historical dataset most similar to the current student across all profile dimensions, then recommends the learning sequence that produced the best employment outcomes for that peer group.
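That description is, in effect, nearest-neighbor peer matching. A minimal sketch of the idea, assuming a numeric profile matrix; all data, dimensions, and pathway labels below are placeholders, not the production feature set:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
profiles = rng.random((500, 12))        # placeholder: 500 students x 12 dimensions
outcomes = rng.random(500)              # placeholder employment-outcome scores
pathways = rng.integers(0, 8, 500)      # placeholder pathway labels

knn = NearestNeighbors(n_neighbors=25).fit(profiles)

def recommend(student_profile: np.ndarray) -> int:
    """Recommend the pathway with the best mean outcome among the peer group."""
    _, idx = knn.kneighbors(student_profile.reshape(1, -1))
    peers = idx[0]
    best, best_score = -1, -np.inf
    for p in np.unique(pathways[peers]):
        score = outcomes[peers][pathways[peers] == p].mean()
        if score > best_score:
            best, best_score = int(p), score
    return best

print(recommend(rng.random(12)))
```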
The second identifies latent patterns in student-curriculum interaction data to surface learning pathway recommendations. Think of it as a recommendation engine that finds pathways a student did not know to ask for but consistently connects with.
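The "latent patterns" framing suggests a collaborative-filtering model. One plausible shape for it, sketched with non-negative matrix factorization over a hypothetical student-by-module interaction matrix (the matrix contents here are synthetic placeholders):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Hypothetical interactions: rows are students, columns are curriculum modules,
# entries encode completion/engagement strength (0 = never interacted).
interactions = rng.random((500, 40)) * (rng.random((500, 40)) > 0.6)

nmf = NMF(n_components=10, init="nndsvda", max_iter=500)
student_factors = nmf.fit_transform(interactions)   # (500, 10) latent factors
module_factors = nmf.components_                    # (10, 40) latent factors

# Predicted affinity for modules each student has NOT taken: the high scores
# are the pathways the student "did not know to ask for".
scores = student_factors @ module_factors
scores[interactions > 0] = -np.inf                  # mask out seen modules
top5 = np.argsort(scores, axis=1)[:, ::-1][:, :5]   # top-5 unseen modules each
```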
The third treats the student-curriculum relationship as a graph problem. It models not just which students completed which modules, but the relationships between modules, the dependencies between skills, and how student profiles cluster in the latent space of learning behavior.
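A model like that would learn embeddings over the graph structure; as a toy illustration of the representation alone, here is what the underlying graph might look like. The node names, edge kinds, and two-hop traversal are all hypothetical:

```python
import networkx as nx

# Hypothetical graph: student nodes link to modules they completed, and
# module-to-module edges encode skill prerequisites.
G = nx.Graph()
G.add_edge("student_42", "module_algebra", kind="completed")
G.add_edge("student_17", "module_algebra", kind="completed")
G.add_edge("student_17", "module_statistics", kind="completed")
G.add_edge("module_algebra", "module_statistics", kind="prerequisite")

def candidate_modules(student: str) -> set[str]:
    """Modules adjacent to the student's completed work via prerequisite edges."""
    completed = set(G.neighbors(student))
    frontier: set[str] = set()
    for node in completed:
        frontier |= {n for n in G.neighbors(node) if n.startswith("module_")}
    return frontier - completed

print(candidate_modules("student_42"))   # -> {'module_statistics'}
```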
The NSF does not fund
interesting ideas.
It funds proven ones.
Getting an AI system to produce recommendations is the easy part. Getting those recommendations to hold up under NSF review, peer analysis, and real-world deployment is the hard part. We designed the validation methodology to meet federal research standards. Not bolted on at the end. Built in from day one.
Every architectural decision, every training split, every evaluation metric was selected to produce results that were reproducible, explainable, and defensible in a formal research context. The output was not a dashboard with numbers that looked good. It was a documented, peer-review-ready methodology showing statistically significant improvement over baseline curriculum delivery.
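The case study does not name the specific tests used, so the following is only one plausible shape for the significance claim: a paired, nonparametric comparison of per-student outcomes against the baseline sequence on a held-out split, using synthetic placeholder scores:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Synthetic placeholder scores: the same held-out students evaluated under the
# baseline sequence and under the model-recommended pathway.
baseline_outcomes = rng.uniform(0.3, 0.8, size=200)
model_outcomes = np.clip(baseline_outcomes + rng.normal(0.05, 0.10, 200), 0, 1)

# Paired one-sided test: is the model's improvement statistically significant?
stat, p_value = wilcoxon(model_outcomes, baseline_outcomes, alternative="greater")
print(f"Wilcoxon W={stat:.1f}, p={p_value:.4g}")
```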
That methodology is what convinced the NSF to continue funding. Not the idea. The proof.
A student who gets the right path gets
a job. A student who gets the wrong one
does not.
Working in a regulated
or federally funded
environment?
AI in grant-funded, compliance-driven, or research contexts requires more than a working model. It requires a documented methodology, defensible validation, and results that hold up under scrutiny. We have built systems in exactly these environments. Tell us about yours.
Rigorous validation is not a burden. It is the proof that makes the system worth funding.
Organizations that treat validation as a box to check produce systems that fail under scrutiny. Organizations that design for validation from day one produce systems that get renewed, expanded, and trusted by the people who depend on them. That distinction shapes every technical decision we make in high-stakes environments.

