A Seven-College Experiment Using Algorithms to Track Students: Impacts and Implications for Equity and Fairness

Researchers

* Peter Bergman | UT-Austin

Elizabeth Kopko | Teachers College, Columbia University

* Julio Rodríguez | Teachers College, Columbia University

Partners

State University of New York (SUNY)

Center for the Analysis of Postsecondary Readiness at the Community College Research Center (CCRC)


Key Findings

Algorithms that place incoming college students into courses based on predictions of their likelihood of success can reduce the share of students in remedial courses and increase placement in college-level courses, while increasing overall credits earned and without reducing pass rates. Compared with the most widely used placement tests, the algorithms placed 49% of students into a higher-level English course and 15% of students into a higher-level math course. The algorithms studied were more accurate, more equitable, and less discriminatory than the tests.

 

Other Findings

  • Algorithmic course placement increases placement into college-level English and math courses across demographic groups, with one exception: men in college-level math. This placement method also narrows gaps between some underrepresented demographic groups and their counterparts.

  • In assessing the disparate impact of both placement methods, the tests showed more instances and higher levels of discrimination than the algorithms. 

  • When the algorithms placed students in college-level courses but the placement tests would not have, pass rates were higher, indicating more accurate placement, than when the disagreement ran the other way (the tests placing students in college-level courses but the algorithms not).

  • With the reductions in remedial courses taken, algorithmic placement saved students $150 on average. 

  • Following the initial year of implementation, the estimated operational cost of the algorithmic placement method for colleges is $40 per student. [1]

Methodology & Data Highlights

  • Randomized field experiment across seven two-year community colleges 

  • 12,544 first-year community college students entering in fall and spring semesters over three school years

  • Measures overall impacts and demographic subgroup outcomes when students are tracked into (i.e., placed in) English and math courses by predictive algorithms (treatment) versus standard placement tests (control)

Summary

Remedial courses at two- and four-year colleges and universities in the U.S. aim to help students prepare for college-level courses. In the 2011-2012 academic year, 68% of students at two-year colleges had previously taken at least one remedial course.

Though remedial courses may improve outcomes for some students, a student placed into an unnecessary remedial course wastes time and money on classes that don’t count toward graduation requirements. Given that 71% of U.S. colleges use a single measure of knowledge (a test) to track students into either remedial or college-level courses, an important question is whether an evaluation that considers multiple measures can improve the accuracy of student placements. [2]

In a study that develops algorithms to predict a student’s likelihood of success in college-level courses based on multiple measures of academic potential, researchers find that the algorithms place many more students in college-level courses than standard placement tests do, without compromising course credits earned or pass rates. The study and its findings apply to course placement in English and math only.

Background on the experiment

Figure: Hypothetical placement predictions as presented to colleges. Based on Figure 1 of the working paper.

Working with staff at seven two-year community colleges in the State University of New York (SUNY) system, researchers developed an alternative, algorithmic course placement method to predict incoming students’ likelihood of succeeding in college-level English and math courses on the basis of multiple measures such as the following (a brief modeling sketch follows the list):

  • high school courses,

  • high school GPA, 

  • high school rank, 

  • diploma status, 

  • time since high school graduation, 

  • college readiness exam scores (e.g., SAT scores), and

  • standard course placement test scores.
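
The brief does not reproduce the study’s exact model specification, but the core idea, fitting a classifier on a college’s prior cohorts to predict each incoming student’s probability of passing a college-level course, can be sketched as follows. The feature names and synthetic data below are illustrative assumptions, not the study’s actual model or inputs.

```python
# Illustrative sketch only: predict the probability of passing a
# college-level course from multiple measures, as the study's algorithms
# do conceptually. Feature names and data here are hypothetical
# assumptions, not the study's actual model or inputs.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
prior_cohorts = pd.DataFrame({
    "hs_gpa": rng.uniform(1.5, 4.0, n),          # high school GPA
    "hs_rank_pct": rng.uniform(0, 100, n),       # high school rank percentile
    "years_since_hs": rng.integers(0, 10, n),    # time since graduation
    "placement_score": rng.uniform(20, 120, n),  # standard placement test score
})
# Synthetic outcome: 1 = passed the college-level course in a prior cohort
prior_cohorts["passed"] = (
    0.8 * prior_cohorts["hs_gpa"]
    + 0.01 * prior_cohorts["placement_score"]
    + rng.normal(0, 1, n)
    > 3.0
).astype(int)

# Fit on a college's prior cohorts, then score incoming students
model = LogisticRegression(max_iter=1000).fit(
    prior_cohorts.drop(columns="passed"), prior_cohorts["passed"]
)
incoming = prior_cohorts.drop(columns="passed").head(5)  # stand-in for new students
print(model.predict_proba(incoming)[:, 1])  # predicted pass probabilities
```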

The real-world experiment allowed researchers to evaluate the impact of the algorithms on student outcomes. However, because colleges have constraints and preferences of their own (e.g., course enrollment caps and pass rate goals), the researchers designed separate algorithms for each institution based on the college’s previous cohorts of students and gave each college some flexibility in how to implement the placement algorithms in their existing systems.

When choosing a “cut point” to determine which students would be recommended for a college-level course and which would be tracked into the remedial level, most colleges chose to hold the pass rates of college-level English and math courses constant. That is, they placed students in a way that would not change the status quo pass rates. (See figure for examples.)
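
The brief does not spell out each college’s exact procedure, but one minimal way to operationalize a pass-rate-preserving cut point is to place the largest group of students whose average predicted pass probability still meets the status quo target. The target value and probabilities below are hypothetical.

```python
# Illustrative sketch: choose a probability cut point so that the expected
# pass rate among students placed into the college-level course meets a
# status quo target. The target and probabilities below are hypothetical.
import numpy as np

def choose_cut_point(pass_probs, target_pass_rate):
    """Lowest threshold such that the mean predicted pass rate of all
    students at or above it still meets the target pass rate."""
    probs = np.sort(np.asarray(pass_probs))[::-1]  # highest probability first
    running_mean = np.cumsum(probs) / np.arange(1, len(probs) + 1)
    meets_target = np.nonzero(running_mean >= target_pass_rate)[0]
    if meets_target.size == 0:
        return None  # no cut point can hold the target pass rate
    return probs[meets_target.max()]  # admit the largest group meeting it

rng = np.random.default_rng(1)
predicted = rng.uniform(0.2, 0.95, 1000)  # hypothetical pass probabilities
cut = choose_cut_point(predicted, target_pass_rate=0.70)
print(f"place students with predicted pass probability >= {cut:.2f}")
```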

Experimentation with algorithms reveals chronic under-placement in college-level courses

Even holding pass rates constant, the algorithms resulted in major placement changes that suggested prior under-placement of many students. Relative to the course placement recommended by the colleges’ test score system, the algorithms changed 55% of English placements and 23% of math placements.

Relative to the placements recommended by the colleges’ test score systems, the algorithms placed:

  • 15% into a higher-level math course

  • 7% into a lower-level math course 

  • 49% into a higher-level English course

  • 6% into a lower-level English course

 

Based on Table 3 of the working paper.

 

Increases in college-level course enrollment and placement accuracy

About 81% of students placed by the algorithms enrolled in the recommended course. Among the entire group tracked by the algorithms, including those who did not comply with the recommendations, enrollment in college-level courses increased by 13.6 percentage points in English and 2.6 percentage points in math.

Students placed by the algorithms in either English or math earned 0.53 more college-level credits than students placed by the tests; students placed by the algorithms in both subjects earned 1.3 more college-level credits.

These increases in credits earned did not compromise overall pass rates. In fact, when the algorithms placed students in a college-level course but the test scores would have placed them in a remedial course, algorithmic placement led to higher course pass rates, indicating more accurate placement than when the two methods disagreed in the opposite direction.

When the algorithms placed students in college-level math but the tests did not, resulting pass rates were 10 percentage points higher than under the opposite disagreement (the tests placing students in college-level math but the algorithms not). Similarly, pass rates for college-level English were 12 percentage points higher when students were placed by the algorithms but not by the tests.
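
These comparisons rest on the two “disagreement” groups: students one method would place in a college-level course but the other would not. A minimal sketch of that computation, on synthetic data rather than the study’s, follows.

```python
# Illustrative sketch: pass rates in the two "disagreement" groups.
# The placement flags and outcomes are synthetic, not study data.
import pandas as pd

df = pd.DataFrame({
    "algo_college_level": [1, 1, 1, 0, 0, 1, 0, 1],
    "test_college_level": [0, 0, 0, 1, 1, 0, 1, 0],
    "passed":             [1, 1, 0, 0, 1, 1, 0, 1],
})

algo_only = df[(df["algo_college_level"] == 1) & (df["test_college_level"] == 0)]
test_only = df[(df["algo_college_level"] == 0) & (df["test_college_level"] == 1)]
print("pass rate, algorithm-only placements:", algo_only["passed"].mean())
print("pass rate, test-only placements:", test_only["passed"].mean())
```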

Closing key demographic gaps and mitigating discrimination

Implementing the algorithmic placement method increased student placement into college-level courses for cohorts overall. When analyzed by demographic subgroups, the algorithms proved useful in narrowing some of the gaps between underrepresented demographic groups and their counterparts:

  • Placement rates in college-level math increase for women relative to men. 

  • Placement rates in college-level English increase for Black students relative to white students.

  • Placement rates in college-level math increase for Hispanic students overall, but relative to white students, the gains are not as large.

  • In terms of remedial courses, lower-income students see larger decreases in these non-credit-bearing courses relative to higher-income students. [3]

Algorithms in the experiment did not use demographic characteristics (e.g., race, ethnicity, gender) as inputs, preventing discrimination in the form of disparate treatment. However, the researchers evaluated both the algorithmic models and the standard placement tests for disparate impact. Results show more frequent and much higher rates of disparate impact for the tests relative to the algorithms. For colleges, these measures are important when weighing the fairness, equity, and accuracy of the different methods.
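
The working paper’s exact disparate-impact metric is not reproduced in this brief; one widely used measure is the adverse impact ratio, the ratio of placement rates between the least- and most-favored groups, with values below 0.8 often flagged under the “four-fifths rule.” A sketch on synthetic data:

```python
# Illustrative sketch: an adverse impact ratio, a common disparate-impact
# measure. This is a generic metric, not necessarily the working paper's
# exact test; the group labels and placements are synthetic.
import pandas as pd

df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "placed_college_level": [1, 1, 1, 0, 1, 0, 0, 1],
})

rates = df.groupby("group")["placed_college_level"].mean()
ratio = rates.min() / rates.max()  # below 0.8 is a common red flag
print(rates.to_dict(), f"impact ratio = {ratio:.2f}")
```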

Cost savings for students and public funding sources

Improved course placements have the potential to improve not only academic outcomes but also college affordability. A detailed cost analysis shows that the reduction in remedial courses taken by students tracked by the algorithms leads to per-student savings of $150, on average. Per cohort, per college, these savings sum to an average of $145,200.
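
Taken together, the two reported averages imply a cohort of roughly 968 students per college, a quick check of the arithmetic:

```python
# Quick consistency check on the reported savings figures.
saving_per_student = 150      # dollars, reported average per student
saving_per_cohort = 145_200   # dollars, reported average per cohort per college
print(saving_per_cohort / saving_per_student)  # ≈ 968 students per cohort
```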

Implementing the algorithms had up-front costs for each institution ranging from $70 to $360 per student. Most of these costs pertained to manual entry of data from high school transcripts. Improvements in data collection and automation could reduce these costs. Beyond the initial costs, implementing the algorithms on an annual basis would cost an estimated $40 per student, and this, too, could fall with more efficient data processes.

Using algorithmic technology to improve student outcomes

Researchers note that embedding the algorithms into existing systems limited their modeling choices, and that the algorithms should be evaluated and improved in the future. Even with these limitations, the study shows how algorithmic technology can more effectively target the delivery of remedial education and improve key educational outcomes for students, especially those from underrepresented groups.


*Peter Bergman is the founder and director of Learning Collider. Julio Rodríguez is a Learning Collider affiliate.

Footnotes:

[1] Estimated costs for the initial year of implementing the algorithmic placement method range from $70 to $360 per student. This accounts for larger upfront fixed costs and the labor to manually enter data from high school transcripts. Data-transfer systems and process enhancements could reduce these data collection costs.

[2] Most colleges and universities use ACCUPLACER, a computer-based test offered by the College Board. Colleges choose a “cut” score and place students who score above it in college-level classes and students who score below it in remedial courses.

[3] The study uses Pell grant status (recipient or not) as a proxy for household income.
