"A half hour into the lesson, it’s also unclear whether she understands the concept of place value. " Sums it all up so well! What a terrifying landscape.
Would love if you could also investigate the company behind IXL. My kid’s school uses that and it is such a source of anxiety for him. And it’s required. We have very similar complaints to what you’ve shared here about this platform.
Thank you. I'm not particularly familiar with IXL, outside of what I've learned on Emily Cherkin's substack about the FTC amicus brief in the 2025 IXL case that challenged schools' ability to consent to data sharing on behalf of kids/parents. When I'm able, I'll definitely look more into it!
I love that more folks are digging into the dubious research behind iReady and their dishonest marketing practices.
One way iReady does this is by invoking Kraft (2020) to justify their product while simultaneously claiming their absence of randomized controlled trials is a principled stance in line with mainstream practices.
Here’s that the problem with that:
1. On the evidence page for iReady, they state directly that, “ the i-Ready evidence base is not centered on randomized controlled trials, as education researchers have widely acknowledged that fully experimental data is too narrow a standard to evaluate daily classroom practice in America’s schools.”
Which researchers? Citations, please. RCTS are widely considered the gold standard of educational research. It’s how you establish causal evidence. It’s why ESSA has a tier 1.
2. They cite Kraft’s framework to show that their effect sizes are meaningful, “In education research, results in the .10 to .25 SD range are considered meaningful and are commonly associated with additional months of learning over the course of a school year. See Kraft, 2020 for more information.
Across multiple studies, i-Ready Personalized Instruction demonstrates impacts in this range when used as recommended. This indicates that consistent, targeted use can contribute to measurable gains in student progress.”
This is a ballsy and shady maneuver. Kraft’s whole framework was derived from a massive set of RCTs. The point of Kraft’s framework was that other kinds of studies artificially inflate effect sizes through various effects.
So…Curriculum Associates dismisses RCTs as “too narrow” while citing Kraft (2020) to validate their effect sizes. Yet Kraft’s benchmarks were derived entirely from 747 RCTs precisely because non-RCT designs inflate effects. His paper documents that developer-funded studies massively inflate effect sizes, quasi-experiments inflate 2x vs. RCTs, and proprietary outcome measures (e.g. using iReady’s diagnostic to assess iReady’s personalized learning software) inflate 2-4x. iReady’s research has all three! You cannot reject RCTs and then invoke a framework built on RCTs to validate your non-RCT evidence. That’s dishonest and should be clearly called out.
When Kraft’s inflation adjustments are applied to iReady’s evidence base, the effect sizes collapse close to zero, which is exactly what the independent studies show.
The problem is most people don’t take the time to understand the nuances, and this makes it easy to dazzle administrators with misleading claims.
There’s something fundamentally broken about companies being allowed to validate their own effectiveness using proprietary assessments they designed themselves.
That’s not independent accountability. That’s self-referential measurement.
It’s like asking me whether I’m a good driver. Of course I’ll say yes. Just don’t ask the cop who gave me the speeding ticket.
"A half hour into the lesson, it’s also unclear whether she understands the concept of place value. " Sums it all up so well! What a terrifying landscape.
Doesn’t it? Isn’t it?! Thanks for reading and commenting, I appreciate you!
Would love if you could also investigate the company behind IXL. My kid’s school uses that and it is such a source of anxiety for him. And it’s required. We have very similar complaints to what you’ve shared here about this platform.
Thank you. I'm not particularly familiar with IXL, outside of what I've learned on Emily Cherkin's substack about the FTC amicus brief in the 2025 IXL case that challenged schools' ability to consent to data sharing on behalf of kids/parents. When I'm able, I'll definitely look more into it!
Thanks for this excellent piece, Kelly. I hope your readers will join me & Nicki to learn more about the lawsuit at our Town Hall webinar on 5/21.
Thank you Lila, I hope so too! I look forward to joining!
I love that more folks are digging into the dubious research behind iReady and their dishonest marketing practices.
One way iReady does this is by invoking Kraft (2020) to justify their product while simultaneously claiming their absence of randomized controlled trials is a principled stance in line with mainstream practices.
Here’s the problem with that:
1. On the evidence page for iReady, they state directly that “the i-Ready evidence base is not centered on randomized controlled trials, as education researchers have widely acknowledged that fully experimental data is too narrow a standard to evaluate daily classroom practice in America’s schools.”
Which researchers? Citations, please. RCTs are widely considered the gold standard of educational research. They’re how you establish causal evidence. It’s why ESSA has a Tier 1.
2. They cite Kraft’s framework to show that their effect sizes are meaningful: “In education research, results in the .10 to .25 SD range are considered meaningful and are commonly associated with additional months of learning over the course of a school year. See Kraft, 2020 for more information.
Across multiple studies, i-Ready Personalized Instruction demonstrates impacts in this range when used as recommended. This indicates that consistent, targeted use can contribute to measurable gains in student progress.”
This is a ballsy and shady maneuver. Kraft’s whole framework was derived from a massive set of RCTs. The point of Kraft’s framework was that other kinds of studies artificially inflate effect sizes through various mechanisms.
So…Curriculum Associates dismisses RCTs as “too narrow” while citing Kraft (2020) to validate their effect sizes. Yet Kraft’s benchmarks were derived entirely from 747 RCTs precisely because non-RCT designs inflate effects. His paper documents that developer-funded studies massively inflate effect sizes, quasi-experiments inflate 2x vs. RCTs, and proprietary outcome measures (e.g. using iReady’s diagnostic to assess iReady’s personalized learning software) inflate 2-4x. iReady’s research has all three! You cannot reject RCTs and then invoke a framework built on RCTs to validate your non-RCT evidence. That’s dishonest and should be clearly called out.
When Kraft’s inflation adjustments are applied to iReady’s evidence base, the effect sizes collapse close to zero, which is exactly what the independent studies show.
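To make the arithmetic concrete, here’s a back-of-the-envelope sketch. The starting effect size of 0.20 SD is purely hypothetical (chosen from the top of the .10–.25 range iReady cites), and treating the quasi-experimental (2x) and proprietary-measure (2–4x) inflation factors as multiplicative is a simplifying assumption, not a claim about Kraft’s exact methodology:

```python
# Hypothetical illustration of compounding Kraft-style inflation adjustments.
# 0.20 SD is an invented example value, NOT a number from any iReady study.
reported_effect = 0.20  # SD units, hypothetical developer-reported result

quasi_experimental_inflation = 2.0        # quasi-experiments vs. RCTs (quoted above)
proprietary_measure_inflation = (2.0, 4.0)  # proprietary outcome measures (quoted above)

# Divide out both inflation sources (simplifying multiplicative assumption)
adjusted = [
    reported_effect / (quasi_experimental_inflation * m)
    for m in proprietary_measure_inflation
]
print(f"Adjusted effect size range: {adjusted[1]:.3f} to {adjusted[0]:.3f} SD")
# A headline 0.20 SD shrinks to roughly 0.025-0.050 SD under these assumptions
```

Even this rough sketch shows how quickly a "meaningful" effect falls below the bottom of Kraft's .10 benchmark once the documented inflation sources are discounted.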
The problem is most people don’t take the time to understand the nuances, and this makes it easy to dazzle administrators with misleading claims.
Absolutely
Wow this is fascinating! Thank you for sharing. I'm not surprised, but I learned something. I got a question about RCTs on a reddit thread (https://www.reddit.com/r/edtech/comments/1t4m4rn/comment/ok6kclo/?utm_source=share&utm_medium=web3x&utm_name=web3xcss&utm_term=1&utm_content=share_button) and I'd love to share this comment (with credit) in that convo.
Exactly.
There’s something fundamentally broken about companies being allowed to validate their own effectiveness using proprietary assessments they designed themselves.
That’s not independent accountability. That’s self-referential measurement.
It’s like asking me whether I’m a good driver. Of course I’ll say yes. Just don’t ask the cop who gave me the speeding ticket.
Yes - total science-washing closed-loop evidence system. And because the algorithms are held close as a trade secret, nothing can be audited. 🤯