From sorting → to stewardship 🧠🌱

This constellation is a set of interlinked, standalone platforms for educators who want learning to function like a studio + lab + quest hub: students create artifacts, revise through feedback, build agency, and practice futures-thinking — instead of surviving a sequence of pass/fail tests.

Mastery loops (revise → improve) · Quest-based creation · Inquiry with scaffolds · Self-regulated learning · Gamification without harm · Benevolent futures lens

Why this exists

Tests can measure — but when they become the *purpose*, school turns into a sorting machine. This constellation is built for the opposite: learning as capacity-building, where "failure" becomes data for iteration, not a label on a student's identity.

What changes when school stops being a verdict?
Replace "pass/fail" with "attempt → feedback → revise → demonstrate"
Working theory: People learn better when the environment signals psychological safety and growth over judgment. In practice, this looks like: low-stakes checks, fast feedback, revision, and multiple ways to demonstrate mastery.

In a mastery system, a student who didn't "get it" on Tuesday isn't "behind" — they're mid-iteration. The teacher's role shifts from gatekeeper to coach: diagnosing misconceptions, guiding practice, and helping students build the habit of improvement.

Mastery Loop:
1) Attempt (draft / solve / build)
2) Feedback (specific, actionable)
3) Revision (improve based on feedback)
4) Demonstration (show evidence of mastery)
5) Reflection (what changed? what worked?)
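In software terms (for example, inside a dashboard that tracks student work), the five steps above can be sketched as a tiny state machine. This is an illustrative sketch with assumed names, not the constellation's actual data model:

```javascript
// Hypothetical sketch: the mastery loop as an ordered state machine.
// Stage names mirror the five steps above; none of these identifiers
// come from the real platforms.
const STAGES = ["attempt", "feedback", "revision", "demonstration", "reflection"];

// A fresh artifact starts at the first stage, iteration 1.
function newArtifact(title) {
  return { title, stage: "attempt", iteration: 1 };
}

// Advance an artifact one stage; after "reflection" the next iteration
// starts back at "attempt" with the iteration counter bumped.
function advance(artifact) {
  const i = STAGES.indexOf(artifact.stage);
  if (i < STAGES.length - 1) {
    return { ...artifact, stage: STAGES[i + 1] };
  }
  return { ...artifact, stage: "attempt", iteration: artifact.iteration + 1 };
}
```

In this framing, a student who "didn't get it" on Tuesday is simply an artifact sitting at `stage: "feedback", iteration: 1`: mid-iteration, not behind.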

This is not "lowering standards." It's raising the standard from "performed once under pressure" to "can do reliably after refinement."

📚 Research grounding: This mastery loop aligns with Zimmerman's self-regulated learning cycle (forethought → performance → reflection), Hattie's meta-analysis showing formative evaluation (d=0.68) and feedback (d=0.70) among the highest-impact interventions, and Black & Wiliam's Assessment for Learning framework emphasizing that learning happens through the feedback-revision cycle, not the test itself.
Subject-specific mastery examples
What this looks like across disciplines

Mathematics: Instead of one test on quadratic equations, students work through a skill progression: identify vertex from graph → write equation from context → solve application problems. Each skill gets multiple attempts with targeted feedback. Mastery means demonstrating reliable competence across problem types, not one-shot performance.

ELA/Writing: Replace letter grades on essays with a writing studio model: draft → peer review → teacher feedback → revision → publication/portfolio. Rubrics focus on specific craft elements (thesis clarity, evidence integration, transitions). Students track their growth across multiple pieces rather than living with one bad grade.

Science: Lab work becomes iterative inquiry: initial hypothesis → experiment design → results analysis → redesign based on findings → final demonstration of understanding. Failed experiments aren't grade penalties—they're data that informs the next iteration. Portfolio includes both successes and productive failures.

Social Studies: Civic projects follow the mastery loop: research question → evidence gathering → draft argument → feedback from peers/teacher → revision → public presentation. Students might create policy proposals, historical documentaries, or community action plans—each going through multiple drafts before being considered "complete."

Arts: Natural fit for mastery—artists already work iteratively. Formalize this: technique practice with video self-review, peer critique sessions, artist statements that document growth, portfolios showing evolution from sketch to finished work. Assessment focuses on demonstrated skill development and creative risk-taking.

Design principles (the non-negotiables)
Autonomy, competence, belonging — plus a benevolent futures lens

This constellation uses a simple compass: students need choice (autonomy), visible progress (competence), and social safety (relatedness). When those are present, creation becomes sustainable — not a stress response.

  • Autonomy: meaningful choices (topics, formats, paths), not fake choices. Students choose *how* they demonstrate mastery while you maintain the *what* (standards).
  • Competence: clear criteria, exemplars, feedback cycles, "next steps." Progress must be visible and earned through effort, not innate "talent."
  • Relatedness: collaboration norms, non-shaming culture, shared purpose. Learning happens in community, not isolation.
  • Benevolent futures: consider downstream effects ("who benefits?" "what could go wrong?"). Ethics isn't a unit—it's a lens on all creation.
  • Zero-harm by design: avoid coercion loops, public ranking, scarcity shame, surveillance vibes. If it would make a struggling student feel watched or doomed, redesign it.
Practical test: If your "game" mechanic makes a struggling student feel watched, ranked, or doomed, it's not a learning tool — it's a stress engine. We build stress engines only for cardio, not children. 🫀
📚 Research grounding: These principles draw from Self-Determination Theory (Deci & Ryan), which has 40+ years of evidence showing that autonomy, competence, and relatedness predict intrinsic motivation, persistence, and wellbeing. When students experience all three, they engage more deeply, learn more effectively, and develop healthy learning identities. When any is missing, external rewards can't compensate for long.
Neurodiversity-friendly design
Universal Design for Learning principles

Good design for neurodivergent learners is good design for all learners. These patterns reduce cognitive load, honor different processing styles, and expand access:

Predictable routines with flexible outputs: Keep the structure consistent (every Monday = feedback day, every Friday = reflection) but allow students to show understanding in multiple ways (written, visual, oral, digital).

Sensory-friendly options: Quiet work spaces alongside collaborative zones. Fidget-friendly policies. Choice in lighting when possible. Movement breaks built into longer work sessions.

Explicit scaffolds that fade: Checklists, rubrics, templates, and exemplars aren't "cheating"—they're cognitive prosthetics that help working memory. As competence grows, students can choose whether to keep using them.

Time flexibility with clear boundaries: "This must be done by Friday, but you can work on it during any of these four work sessions" gives structure without rigidity. Accommodates both speedsters and deep processors.

Error-friendly environments: Mistakes are expected data, not identity threats. Draft work is celebrated. "I'm not there yet" is normalized language. Revision is built into the timeline, not tacked on as extra credit.

⚠️ Common pitfall: "Differentiation" that means "different kids get different worksheets" often just creates a two-tier system. True UDL means the design itself is flexible from the start—multiple means of engagement, representation, and expression baked into the core assignment.
Platforms map (interlinked, standalone)
Each card is its own index.html and can be deployed separately.

  • 01 · Mastery Over Testing (core): Replace "pass/fail" with mastery loops, revision rights, and proof-based proficiency.
  • 02 · Quest-Based Learning Studio (core): Turn curriculum into missions that produce artifacts: guides, prototypes, proposals, portfolios.
  • 03 · Formative Feedback Lab (core): Feedback as fuel: fast, specific, kind, and usable. Reduce grading as punishment.
  • 04 · Self-Regulated Learning Engine (skill): Teach students to plan, monitor, reflect, and adapt — the "learning how to learn" stack.
  • 05 · Inquiry & Creation Foundry (lab): Inquiry that ships: questions → methods → evidence → artifact. Guided → open inquiry.
  • 06 · Benevolent Futures Framework (lens): Make "who benefits?" and "what could go wrong?" a normal part of student creation.
  • 07 · Gamification Without Harm (overlay): Use game mechanics to support autonomy and belonging — avoid leaderboard shame.
  • 08 · Teacher Co-Design Toolkit (tools): Teachers as designers: templates, routines, launch plans, and scaling patterns.
  • 09 · Student Agency Dashboard (view): Make progress visible and personal: goals, choices, reflections, and evidence portfolios.
  • 10 · Evidence & Research Map (refs): Plain-language summaries, citations, and "what the research suggests (and doesn't)."
  • 11 · Policy & Systemic Shift (leaders): How schools adopt this safely: grading policy, schedules, communication, and rollout.
Local-only progress: Click "Mark explored" on any card to earn XP and light up nodes in the constellation. Nothing leaves your device. Ever.
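Under the hood, "Mark explored" boils down to a small, pure update on a locally stored progress object. A minimal sketch, assuming an XP value and field names that are not necessarily the page's actual ones:

```javascript
// Hypothetical sketch of the local-only progress record. The real page
// would persist this to browser storage; here it is a pure object so
// the logic is testable anywhere.
const XP_PER_PLATFORM = 10; // assumed value, not the page's actual tuning

function emptyProgress() {
  return { xp: 0, explored: new Set() };
}

// Marking a card "explored" is idempotent: re-clicking never farms XP,
// which keeps the mechanic a memory aid rather than a grind loop.
function markExplored(progress, platformId) {
  if (progress.explored.has(platformId)) return progress;
  const explored = new Set(progress.explored);
  explored.add(platformId);
  return { xp: progress.xp + XP_PER_PLATFORM, explored };
}
```

The idempotence is the zero-harm detail: there is no way to grind the mechanic, so there is nothing to compete over.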
Start here (teacher paths)
Choose the path that matches your reality
Path A: "I have 20 minutes and 0 bandwidth."

Start with 03 Formative Feedback Lab. Implement one feedback routine and one revision right. That's it. One loop. One win.

Path B: "My grading system is eating my soul."

Go to 01 Mastery Over Testing. Shift one unit to proof-based mastery + fewer grades + more revision.

Path C: "Students are bored. I want creation."

Jump to 02 Quest-Based Learning Studio. Convert one unit into an artifact quest with a rubric.

Path D: "I want future-building + ethics baked in."

Open 06 Benevolent Futures Framework. Add "downstream impact + risk mitigation" to every project reflection.

Subject-specific micro-paths
Discipline-focused entry points for quick wins
ELA: Writing studio + revision rights

Start: Pick one essay/narrative unit. Add mandatory draft stage with peer + teacher feedback before final submission. Use Platform 03 for feedback protocols.

Next step: Build a writing portfolio (cumulative, not one-shot). Track craft skill development across pieces.

Math: Mastery checks + spaced practice

Start: Replace one chapter test with skill-based mastery checks. Students demonstrate each skill individually with retake rights. Use Platform 01 for structure.

Next step: Add proof tasks—"explain your reasoning" problems that show conceptual understanding, not just procedural fluency.

Science: Inquiry foundry + evidence logs

Start: Convert one lab from cookbook → iterative inquiry: students design experiment, analyze results, redesign based on findings. Use Platform 05 for scaffolds.

Next step: Build evidence portfolios showing evolution of scientific thinking—mistakes included.

Social Studies: Civic quests + stakeholder analysis

Start: One unit becomes a civic project: students research a community issue, analyze stakeholder perspectives, propose action. Use Platform 02 for quest structure and Platform 06 for impact analysis.

Next step: Add public presentation component—city council, school board, community forum.

⚠️ Implementation tip: Don't try to overhaul your entire course at once. Pick one unit, one path, one term. Get that working well, gather student feedback, adjust, then expand. Incremental change is sustainable; total transformation burns out.
Research foundation & evidence base
What we know (and don't) from learning science

This constellation synthesizes decades of education research. Here are the foundational frameworks and their practical implications:

Mastery learning & formative assessment
Why feedback loops matter more than test scores
📚 Key findings:
  • Hattie's Visible Learning (2023 update): Formative evaluation shows effect size d=0.68, teacher clarity d=0.75, feedback d=0.70. These are among the highest-impact interventions available to teachers.
  • Black & Wiliam's Assessment for Learning: When students receive specific, actionable feedback without grades, learning improves significantly. Adding grades to feedback actually reduces its effectiveness—students focus on the score, not the guidance.
  • Bloom's mastery learning research: With appropriate time and support, ~80% of students can reach the same level of achievement as the top 20% in traditional instruction. The key: criterion-referenced assessment + corrective feedback loops.

What this means for practice: Stop grading every assignment. Use low-stakes checks + frequent feedback instead. Reserve grades for summative moments after students have had multiple attempts to demonstrate mastery. The grade is the endpoint, not the driver.

⚠️ What research doesn't say: Mastery learning doesn't mean "everyone passes eventually with enough time." It means most students can reach proficiency if misconceptions are diagnosed and addressed quickly, and if they get deliberate practice with feedback. Time alone doesn't teach—structured feedback does.
Self-Determination Theory (SDT)
Autonomy, competence, and relatedness as psychological needs
📚 Key findings:
  • Deci & Ryan (1985-2023): 40+ years of research across cultures showing that autonomy, competence, and relatedness predict intrinsic motivation, persistence, wellbeing, and internalization of values.
  • Autonomy-supportive teaching: Teachers who provide choice within structure, explain rationale for requirements, and acknowledge student perspectives see higher engagement and achievement—especially among marginalized students.
  • Competence feedback loops: Students need to experience their efforts causing improvement. "Growth mindset" interventions only work when paired with visible progress and specific feedback.

What this means for practice: Design assignments with meaningful choices. Make progress visible through portfolios, skill trackers, or before/after comparisons. Build collaboration into routines so learning happens in community. When students feel autonomous, competent, and connected, external rewards become unnecessary.

⚠️ Common misunderstanding: Autonomy doesn't mean "do whatever you want." It means choice within structure. The teacher still sets standards and learning goals—students choose the path to reach them.
Self-regulated learning (SRL)
Teaching students to manage their own learning process
📚 Key findings:
  • Zimmerman's cyclical model: Expert learners move through forethought (goal-setting, planning) → performance (self-monitoring, help-seeking) → reflection (self-evaluation, attribution). These skills can be taught explicitly.
  • Metacognition matters: Students who can accurately judge their own understanding learn more efficiently. Teaching calibration—"How well do you think you did? Let's check"—improves over time.
  • SRL interventions work: Meta-analyses show teaching SRL strategies has medium-to-large effects on achievement, particularly for students who struggle with executive function.

What this means for practice: Explicitly teach goal-setting, planning, self-monitoring, and reflection. Build these into assignment structures—not as add-ons but as core steps. Use think-alouds to model your own SRL processes. Have students track their strategy use and reflect on what works.

Universal Design for Learning (UDL)
Designing for variability from the start
📚 Key principles:
  • Multiple means of engagement: Recruit interest through choice, relevance, and appropriate challenge. Sustain effort through feedback and collaboration. Develop self-regulation through goal-setting and reflection.
  • Multiple means of representation: Present information in multiple formats (text, audio, visual, interactive). Support comprehension through scaffolds that fade as competence grows.
  • Multiple means of action & expression: Allow students to demonstrate understanding through varied modalities—not just written tests. Support executive function with tools like checklists and templates.

What this means for practice: Design assignments with flexibility built in, not bolted on. "Write an essay OR create a video OR build a model OR..." isn't extra work—it's recognizing that the goal is demonstrating understanding, and there are multiple valid ways to do that. Provide scaffolds (rubrics, exemplars, templates) that all students can access, and let them decide which supports they need.

What research doesn't tell us (yet)
Honest acknowledgment of uncertainty

Good design acknowledges what we don't know:

Long-term effects of mastery-based systems: Most studies are short-term (one semester, one year). We have less evidence on how students fare 5-10 years later in college or careers. Promising, but not definitive.

Optimal challenge levels: We know students need appropriate challenge, but "appropriate" varies wildly by individual, context, and prior experience. There's no formula—it requires ongoing calibration.

Gamification effects: Points/badges can increase engagement short-term, but long-term effects on intrinsic motivation are mixed. Some students love it, others find it infantilizing. Context and implementation quality matter enormously.

AI-enhanced feedback: Too new for robust research. Early studies are promising for certain types of feedback (grammar, procedural skills) but concerning for others (creative writing, complex reasoning). Watch this space.

Design implication: When evidence is thin, design for agency. Give students the choice to opt into or out of experimental features. Monitor effects locally. Share what you learn. We're all figuring this out together.

Want deeper dives? See Platform 10: Evidence & Research Map for annotated bibliographies, meta-analyses, and "what the research actually says vs. what edtech companies claim it says."

Constellation visualizer
A tiny "state map" of what you've explored (local-only)

How it works: each platform is a node. When you mark a platform "explored," the node brightens and links glow. It's a gentle visual memory aid — not a scoreboard.

Why it matters: education change fails when people feel alone. This map helps teams coordinate: "we've explored feedback + mastery… now let's add quests."
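The node-and-link logic described above fits in a few lines: a node is "lit" when its platform is explored, and a link "glows" once both of its endpoints are. The platform IDs and link list here are illustrative assumptions, not the page's real graph:

```javascript
// Hypothetical sketch of the constellation's render state. The real
// map would list all eleven platforms; two links are enough to show
// the rule.
const LINKS = [
  ["01-mastery", "03-feedback-lab"],
  ["02-quests", "06-futures"],
];

function renderState(exploredIds) {
  const explored = new Set(exploredIds);
  return {
    litNodes: [...explored],
    // A link glows only when both endpoint platforms are explored.
    glowingLinks: LINKS.filter(([a, b]) => explored.has(a) && explored.has(b)),
  };
}
```

Because the state is derived, not accumulated, there is nothing to rank or reset: the map is just a function of what you have explored so far.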

Preferences & safety (local-only)
Offline-first defaults, gentle limits, transparent storage

This page uses IndexedDB (browser storage) for preferences and your local progress. Nothing is sent anywhere. No auto-send. No hidden telemetry. No external dependencies.
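IndexedDB itself only exists in the browser, but the shape of a stored preference record and its defaults-merge can be sketched as plain functions. The key names below are assumptions, not the page's real schema:

```javascript
// Hypothetical preference record. Unknown or missing keys fall back to
// defaults, so a stale stored record survives schema additions.
const DEFAULT_PREFS = {
  reducedMotion: false,
  theme: "dark",
  soundOn: false,
};

// Merge a (possibly partial, possibly stale) stored record over the
// defaults, dropping any keys the current schema no longer knows.
function loadPrefs(stored) {
  const prefs = { ...DEFAULT_PREFS };
  for (const key of Object.keys(DEFAULT_PREFS)) {
    if (stored && key in stored) prefs[key] = stored[key];
  }
  return prefs;
}
```

Iterating over the schema's keys rather than the stored record's keys is what makes the merge safe: obsolete or unexpected keys in old data simply disappear.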

Zero-harm & anti-inversion guardrails
  • No coercive mechanics (no leaderboard shame, no forced streaks).
  • No surveillance features; no hidden analytics.
  • No auto-send functionality, ever.
  • Input is sanitized; actions are rate-limited gently to prevent abuse.
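The last bullet can be sketched as two tiny helpers: an HTML-escaping sanitizer and a fixed-window rate limiter. The limits and names are illustrative, not the page's actual implementation:

```javascript
// Hypothetical sketches of the two guardrails named above.

// Escape the characters that let pasted text become markup.
function sanitize(text) {
  return String(text)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Gentle fixed-window limiter: at most `max` actions per `windowMs`.
// "Gentle" because exceeding it just ignores the action (no lockout,
// no error banner). The `now` parameter is injectable for testing.
function makeLimiter(max, windowMs, now = Date.now) {
  let windowStart = now();
  let count = 0;
  return function allowed() {
    if (now() - windowStart >= windowMs) {
      windowStart = now();
      count = 0;
    }
    return ++count <= max;
  };
}
```

Escaping `&` first matters; doing it after the other replacements would double-escape the entities they produce.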
Local plan note (stored offline)
Write what you're doing next. Keep it small. Keep it kind.
Team mode pattern (optional)
How 2-3 teachers can pilot this together

Change is easier in small, supportive teams. Here's a lightweight structure for 2-3 teachers piloting mastery-based approaches:

Three roles (rotate each cycle):

  • Pilot teacher: Tries the new approach first. Documents what happens. Shares both wins and struggles.
  • Feedback buddy: Observes one class session or reviews artifacts. Asks clarifying questions. Offers specific, kind feedback.
  • Coordinator: Keeps the team on schedule. Handles logistics (meeting times, shared docs). Celebrates progress.

Two-week cycles: Week 1 = plan + launch. Week 2 = adjust + reflect. Then rotate roles and start the next cycle.

One non-grade metric: Don't track test scores during piloting—track something the research says matters: revision rate, quality of student self-reflections, number of students who ask for feedback voluntarily. Pick one, measure it simply, notice what changes.
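A metric like revision rate is cheap to compute from simple local records. A sketch, assuming a hypothetical record shape (one entry per artifact, with a submission flag and a revision count):

```javascript
// Hypothetical sketch: revision rate = share of submitted artifacts
// that went through at least one revision before final submission.
function revisionRate(artifacts) {
  const submitted = artifacts.filter((a) => a.submitted);
  if (submitted.length === 0) return 0;
  const revised = submitted.filter((a) => a.revisions > 0);
  return revised.length / submitted.length;
}
```

If that number climbs over a term, the mastery loop is taking hold, regardless of what the grade book says.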

Why this works: Small teams, short cycles, clear roles, meaningful metrics. You're not isolated, you're not overwhelmed, and you're learning together. When you hit a wall, you have people who understand the context and can help troubleshoot.