Investors

A Learning Intelligence Platform — outcome metrics for how students think

Pragnya Analytics measures, improves, and proves how well students think — not just what they study. We deliver outcome‑verified learning for schools (Grades 6–10) using learning intelligence diagnostics, mentor‑led cohorts, and retention/transfer measurement over time.

Not a video‑content platform. Not exam‑drill focused. Not self‑paced only. Not a generic AI tutor.

Discuss the round · Download deck
LIP: Category thesis
Pre/Post: Outcome measurement
Cohorts: Mentor-led delivery

Thesis: learning outcomes need a better unit of measurement

Why content-heavy edtech fails

Content and engagement are weak proxies. Schools need evidence of understanding, retention, and transfer — the capabilities that matter beyond exams.

Why now

With better diagnostics and responsible ML, we can model learning behavior at scale — but outcomes still require mentors who change how students think, not just systems that recommend content.

What Pragnya enables

Pre/post learning intelligence baselines, mentor-led cohorts, and retention checks that prove whether learning sticks and transfers.

Market Opportunity

Initial wedge: progressive schools (Grades 6–10) that want measurable thinking outcomes, not just marks. Expansion: parents of high-potential underperformers and broader education programs where outcome proof drives adoption.

Business Model

Paid school pilots (baseline → cohort → post checks), cohort programs, and annual subscriptions for learning intelligence measurement + reporting.

Traction

  • Pilot-ready delivery: diagnostics + cohorts + outcomes reporting
  • Early cohorts with mentors and feedback loops to refine interventions
  • Assessment library and learning intelligence framework under active iteration

Learning Intelligence metrics (definitions)

Our core innovation is not “AI tutoring”. It’s measuring learning quality, running interventions, and proving uplift with metrics that track durability and transfer.

Concept Transfer Score

A measure of whether a student can apply a concept in a novel context (new wording, constraints, or scenario), without being cued into the pattern.
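
For the technically minded, here is a minimal sketch of how such a score could be computed, assuming each assessment item is tagged with its concept and with whether it re-frames that concept in a novel context. The data shapes and names are illustrative, not our production pipeline:

```python
from dataclasses import dataclass

@dataclass
class ItemResult:
    concept: str         # concept being assessed, e.g. "ratios"
    novel_context: bool  # True if the item re-frames the concept (new wording, constraints, scenario)
    correct: bool

def concept_transfer_score(results: list[ItemResult], concept: str) -> float | None:
    """Fraction of novel-context items on `concept` answered correctly.

    Items that repeat the taught framing earn no credit here, so the score
    reflects transfer rather than cued pattern-matching.
    """
    novel = [r for r in results if r.concept == concept and r.novel_context]
    if not novel:
        return None  # no novel-context items administered for this concept
    return sum(r.correct for r in novel) / len(novel)
```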

Retention @ 30/60/90 days

A time-based check that answers: does the learning persist without re-teaching? Retention is treated as a first-class outcome, not a side effect.
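
A simplified sketch of the checkpoint logic: a re-check is matched to the nearest 30/60/90-day window and scored against the immediate post-test. The tolerance window is an illustrative assumption:

```python
from datetime import date

CHECKPOINTS = (30, 60, 90)  # days after the post-test
TOLERANCE = 7               # +/- days a re-check may drift from a checkpoint

def match_checkpoint(post_date: date, recheck_date: date) -> int | None:
    """Map a re-check to its 30/60/90-day checkpoint, or None if it fits none."""
    elapsed = (recheck_date - post_date).days
    for day in CHECKPOINTS:
        if abs(elapsed - day) <= TOLERANCE:
            return day
    return None

def retention_ratio(post_score: float, recheck_score: float) -> float:
    """Re-check performance relative to the immediate post-test.

    A ratio near 1.0 means the learning persisted without re-teaching;
    a low ratio flags the concept for a mentor-led refresh.
    """
    if post_score <= 0:
        raise ValueError("post_score must be positive")
    return recheck_score / post_score
```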

Problem‑Solving Depth Index

Tracks reasoning quality: planning, multi‑step structure, justification of steps, and the ability to explain “why”, not just reach an answer.
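
To make the idea concrete, a sketch of a rubric-weighted index, assuming mentors score four reasoning dimensions on a 0–3 scale. The dimensions follow the definition above; the weights are illustrative assumptions, not calibrated values:

```python
# Illustrative rubric: a mentor scores each dimension 0-3 on a piece of work.
RUBRIC_WEIGHTS = {
    "planning": 0.25,       # did the student plan before computing?
    "structure": 0.25,      # is multi-step work organized coherently?
    "justification": 0.30,  # are individual steps justified?
    "explanation": 0.20,    # can the student explain *why* it works?
}

def depth_index(scores: dict[str, int], max_score: int = 3) -> float:
    """Weighted average of rubric scores, normalized to the 0-1 range."""
    return sum(weight * (scores[dim] / max_score)
               for dim, weight in RUBRIC_WEIGHTS.items())

# Example: strong structure, weak explanation.
print(depth_index({"planning": 2, "structure": 3,
                   "justification": 2, "explanation": 1}))  # -> ~0.68
```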

Learning Velocity Uplift

Measures the rate of improvement after an intervention. It helps distinguish “more practice” from “better learning”, and guides cohort design.
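
One way this could be computed, sketched below: fit a per-session improvement rate before and after the intervention and take the difference. The pre/post framing and session granularity are illustrative assumptions; only the standard library is used:

```python
from statistics import linear_regression  # Python 3.10+

def learning_velocity(sessions: list[float], scores: list[float]) -> float:
    """Per-session rate of score improvement (least-squares slope)."""
    slope, _intercept = linear_regression(sessions, scores)
    return slope

def velocity_uplift(pre: tuple[list[float], list[float]],
                    post: tuple[list[float], list[float]]) -> float:
    """Change in learning rate after an intervention (post slope minus pre slope)."""
    return learning_velocity(*post) - learning_velocity(*pre)

# Example: +2 points/session before, +5 points/session after the intervention.
pre  = ([1, 2, 3, 4], [40.0, 42.0, 44.0, 46.0])
post = ([1, 2, 3, 4], [50.0, 55.0, 60.0, 65.0])
print(velocity_uplift(pre, post))  # -> 3.0
```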

Why this is defensible

Outcome metrics + mentor interventions generate a longitudinal dataset on how misconceptions change, what interventions work, and how retention/transfer evolves. This is a compounding advantage over content-first platforms.

Roadmap

0–6 months

  • Standardize the 4-step pilot playbook (baseline → cohorts → post checks → retention checks)
  • Operationalize mentor training and intervention protocols
  • Ship school-ready outcome reports (learning intelligence metrics)

6–24 months

  • Automate parts of diagnostics and reporting while preserving mentor quality
  • Scale pilots across schools with repeatable outcome uplift
  • Subscription model for ongoing measurement and improvement cycles

Team & Advisors

Sairam, Founder & Head of Curriculum
Sairam, Director of Data Science & Mentor
Sundar, Product & Partnerships

Full bios and collaborators are available on the Team page.

Fundraise

We’re raising pre-seed capital to productize learning intelligence measurement, expand mentor-led pilots with schools, and scale go-to-market. Detailed financials available on request under NDA.

Contact IR · Download deck