"Dubious Standards of Proof": Disciplinary and Regulatory Battles over Discrimination Statistics in the United States, 1972-1983

Michael F. McGovern, Yale Law School

In the past few years, burgeoning scholarly interest in algorithmic fairness has given rise to a broader conversation about technology and discrimination. Critics have outlined a number of reasons why existing civil rights laws are not up to the task of remedying the disproportionate harms that big data and machine learning systems inflict on protected groups. Yet statistical tools and concepts are not new to antidiscrimination law; in fact, they played a decisive role in shaping the legal doctrine and regulations we have inherited today. This paper explores how social scientists, government officials, and federal judges wrestled with numerical definitions of discrimination amid efforts to reform the U.S. federal statistical infrastructure during the 1970s. Drawing on records from the Equal Employment Opportunity Commission and the American Statistical Association, as well as the personal papers of Supreme Court justices, it shows how different groups vied for authority over what type of guideline would be used (a numerical rule of thumb or a probabilistic algorithm), where exactly the threshold should be set, and which data should be taken into account. Across education and employment, administrators resisted what they believed to be “dubious standards” for reporting, but these critiques often exploited a lack of consensus in order to shirk responsibility for redress. There is excellent scholarship on the politics of classification and the Census; this work joins it by focusing on counting as a distinct problem domain, a history that contemporary efforts to regulate algorithms should heed.

See extended abstract

Presented in Session 30. History of Data and Statistics