by Dan Knox <Daniel.Knox@suny.edu> & Zach Pardos <pardos@berkeley.edu>

As the higher education sector grapples with the ‘new normal’ of the post-pandemic era, the structural issues of the recent past not only remain problematic but have been exacerbated by COVID-related disruptions throughout the education pipeline. Navigating the complexity of higher education has always been challenging for students, particularly at under-resourced institutions that lack the advising capacity to provide adequate guidance and support. Areas such as transfer and financial aid are notorious black boxes of complexity, where students lacking financial resources and ‘college knowledge’ are too often left on their own to make decisions that may prove costly and damaging down the line. The educational disruptions many students have faced during the pandemic will likely deepen this complexity by producing greater variation in individual students’ levels of preparation and academic histories, even as stressed institutions have fewer resources for advising and other critical student services. Taken together, these challenges will make it all the more difficult to close the equity gaps that the sector must collectively address.

While not a panacea, recent advances in artificial intelligence methodologies such as machine learning can help alleviate some of the complexity that students and higher education institutions face. However, researchers and policymakers should proceed with caution and healthy skepticism to ensure that these technologies are designed and implemented ethically and equitably. This is no easy task, and it will require sustained, rigorous research to complement the rapid technological advances in the field. While AI-assisted education technologies offer great promise, they also pose a significant risk of simply replicating the biases of the past. In the summary below, we offer a brief example, drawn from recent research findings, that illustrates both the challenges and the opportunities of equitable and ethical AI research.

Machine learning-based grade prediction has been among the first applications of AI to be adopted in higher education. It has most often been used in ‘early warning’ systems that flag students for intervention when they are predicted to be in danger of failing a course, and it is starting to see use as part of degree pathway advising efforts. But how fair are these models with respect to the underserved students these interventions are primarily designed to support? A quickly emerging research field within AI is endeavoring to address these types of questions, with education posing particularly nuanced challenges and tradeoffs with respect to fairness and equity.

Generally, machine learning algorithms are most accurate at predicting what they have seen most often in the past. With grade prediction, this means they will be most accurate for the groups of students who earn the most common grade. When the most common grade is high, this perpetuates inequity: the students earning lower grades are served worst by the very algorithms intended to help them. This was observed in a recently published study[1] evaluating predictions of millions of course grades at a large public university. Having the model give equal attention to all grades led to better results for underserved groups and more equal performance across groups, though at the expense of overall accuracy.
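
To make the idea of ‘equal attention’ concrete, the sketch below shows one common way to re-weight training so that rare grades count as much as common ones. It uses scikit-learn’s class_weight="balanced" option on synthetic stand-in data; the features, grade bins, and model choice are illustrative assumptions, not the actual pipeline used in the cited study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

# Synthetic stand-in data: rows are students, columns are course-history features,
# and the grade labels are imbalanced (most students receive the highest grade).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 20))
y = rng.choice(["A", "B", "C", "D", "F"], size=5000, p=[0.5, 0.25, 0.15, 0.06, 0.04])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Default training lets the most common (typically high) grades dominate the loss.
default_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# class_weight="balanced" re-weights each grade inversely to its frequency,
# giving rarer (often lower) grades equal influence during training.
balanced_model = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X_train, y_train)

print("default :", balanced_accuracy_score(y_test, default_model.predict(X_test)))
print("balanced:", balanced_accuracy_score(y_test, balanced_model.predict(X_test)))
```

On real grade data, the balanced model typically gives up some overall accuracy in exchange for better performance on the less common, lower grades, which is exactly the tradeoff described above.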

While addressing race and bias in a predictive model is important, doing so without care can exacerbate the problem. In the same study, adding race as a variable to the model without any other modification led to the most unequal, and thus least fair, performance across groups. The fairest result was achieved by an approach that teaches the model not to recognize race, using a technique called adversarial learning, which adds a penalty during training whenever the model can successfully predict race from a student’s input data (e.g., course history). The study also tried training separate models for each group to improve accuracy; however, using information from all students always improved prediction for every group compared with using only that group’s data.
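
For readers curious about the mechanics, the sketch below illustrates the general adversarial-learning idea in PyTorch: a gradient-reversal layer lets a ‘race adversary’ train against the main grade predictor, penalizing the shared representation whenever race can be recovered from it. The network sizes, label bins, and loss weighting here are hypothetical choices for illustration, not the implementation from the cited study.

```python
import torch
import torch.nn as nn

class GradientReversal(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Linear(20, 64), nn.ReLU())  # course-history features -> shared representation
grade_head = nn.Linear(64, 5)                          # predicts one of 5 grade bins
race_adversary = nn.Linear(64, 4)                      # tries to predict race group from the representation

def loss_fn(x, grade_labels, race_labels, lam=1.0):
    z = encoder(x)
    grade_loss = nn.functional.cross_entropy(grade_head(z), grade_labels)
    # The adversary sees the representation through a gradient-reversal layer, so
    # any success it has at predicting race pushes the encoder toward
    # representations from which race cannot be recovered.
    adv_loss = nn.functional.cross_entropy(race_adversary(GradientReversal.apply(z, lam)), race_labels)
    return grade_loss + adv_loss

# Example forward/backward pass on random stand-in data.
x = torch.randn(32, 20)
grades = torch.randint(0, 5, (32,))
race = torch.randint(0, 4, (32,))
loss = loss_fn(x, grades, race)
loss.backward()
```

The weight lam controls how strongly the penalty is applied; tuning it trades off grade-prediction accuracy against how little racial information the representation retains.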

These findings underscore the challenge of designing AI-infused technologies so that they do not behave in ways counterproductive to an organization’s student success objectives. Further work is needed to develop best practices for addressing equity and fairness in the myriad educational scenarios in which machine learning could otherwise widen achievement gaps.

The State University of New York and UC Berkeley are launching a partnership to take on these challenges and to advance ethical and equitable AI research broadly in higher education. The partnership’s first project will focus on transfer, where we will quantify disparities in educational pathways between institutions related to data infrastructure gaps, test a novel algorithmic approach to filling these gaps, and develop policy recommendations based on the results. While this project represents an incremental step, we look forward to advancing this work and welcome partnerships with individuals and organizations that share similar interests and values. We appreciate the continued engagement and support of our NASH colleagues as we begin this exciting work. Please feel free to reach out to either of us directly for more information.

Best,

Daniel J. Knox
Assistant Provost for Academic Planning & Student Success
The State University of New York

Zachary A. Pardos
Associate Professor, GSE
University of California, Berkeley


[1] Jiang, W., & Pardos, Z. A. (2021). Towards equity and algorithmic fairness in student grade prediction. In B. Kuipers, S. Lazar, D. Mulligan, & M. Fourcade (Eds.), Proceedings of the Fourth AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society (AIES). https://arxiv.org/pdf/2105.06604.pdf