Jason Jia-Xi Wu*


The rampant growth of artificial intelligence (AI) has reshaped the landscape of credit underwriting and distribution in consumer financial markets. Despite expanding consumers’ access to credit, the unbridled use of AI by creditors has widened credit inequality along racial, gender, and class dimensions. Existing regulatory paradigms of consumer financial protection fail to meaningfully protect consumers against potential AI discrimination and exploitation. At its core, the failure of the existing legal regime lies in its fetishization of free markets and consumer autonomy—the two ideological pillars of neoliberalism. Judges and lawmakers who subscribe to neoliberal ideals have consistently attributed credit market defects to individual choices, rather than systemic and inherited social inequalities. Today, this neoliberal ethos continues to inform mainstream legal responses to the threats posed by AI.

This article proposes an alternative. It argues that thinking of AI governance in purely individualist, dignitarian terms obscures the real source of algorithmic harm. Contrary to neoliberal assumptions, AI-inflicted harms in credit markets—e.g., algorithmic discrimination and exploitation—are not the result of irresponsible creditor conduct or opaque markets. Rather, they are caused by unjust relations of data production, circulation, and retainment that reflect and reproduce systemic social inequalities. Understanding algorithmic harm as both individually and socially constituted can help lawmakers move away from the outdated neoliberal paradigms that idolize individual responsibility. It also opens new avenues for legal reform. To reshape unjust data relations, this article proposes a propertarian approach to AI governance that involves: (1) reimagining the nature of data ownership, (2) creating a collective intellectual property right in data, and (3) building a collective data governance infrastructure anchored in the open digital commons.

 

Introduction

For decades, our legal system has embraced neoliberalism as the dominant regulatory ethos for consumer financial protection.1 Its twin ideals—free markets and consumer autonomy—serve as the guiding principles governing the supply and underwriting of credit.2 For markets to be free, constraints on informational flow must be removed, price distortions must be controlled,3 and governments should not regulate absent market failure.4 For consumers to be autonomous, markets must be transparent enough to enable unhindered consumer decision-making.5 Viewed holistically, these two pillars of neoliberalism undergird the prevailing ideology of consumer protection: the freer the markets, the more autonomous the consumers.

The ideal of free markets finds legal expression in consumer credit reporting and disclosure laws. Such laws aim to facilitate the efficient and transparent flow of market information. The Truth in Lending Act (TILA)6 and the Fair Credit Reporting Act (FCRA)7 require creditors to disclose lending terms, as well as material risks and consequences therefrom. In enacting these laws, Congress endorsed the view that disclosure reveals the true cost of lending, which can level the playing field for creditors and enable consumers to compare similar or substitutable products.9 Born out of the civil rights era, the Equal Credit Opportunity Act (ECOA)10 and Fair Housing Act (FHA)11 have applied colorblind principles12 of non-discrimination and race-and-gender-neutrality to the underwriting of consumer credit.13 These statutes reflect the congressional view that disparate treatment14 in credit undermines consumers’ exercise of individual free choice and agency.15

Together, these consumer financial protection laws, which embody the twin ideals of free markets and consumer autonomy, reinforce the neoliberal ideology of individual responsibility.16 Rather than treating credit inequality as a socially constructed, systemic problem, our consumer financial protection laws treat inequality as the outcome of individual choices.17 Absent from the regulatory toolkit is the language to describe systemic injustices, redress collective harm, or install broad social infrastructures. Over the past five decades, this ideal of individual responsibility has coalesced into a neoliberal consensus that crowded out alternative visions for our consumer financial protection regime.

However, this neoliberal consensus is now disrupted by the rise of artificial intelligence (AI) in consumer finance.18 Increasingly, credit unions, banks, and lenders use AI to underwrite consumer credit.19 Because AI needs neither transparent market information nor human intervention to make credit decisions, it renders the current disclosure-based consumer protection regime20 ineffective. Advanced machine learning21 techniques such as deep learning (DL) can now scrape and process unimaginable volumes of data in the blink of an eye.22 These algorithms can continually adapt and tune their parameters to reflect new informational intake with minimal or no human supervision.23 Due to the algorithms’ black-box properties, even original programmers cannot understand some of AI’s predictions.24 Moreover, AI generates predictions about consumer creditworthiness even without credit history or formalized financial data. Instead, AI analyzes “fringe data”25—e.g., online subscriptions, club memberships, browser history, location, and social media—information that may be irrelevant to determinations of creditworthiness.26 This process can be entirely unsupervised and incomprehensible, undermining the fairness of credit provision.27

Normative and Legal Implications

This article examines how AI disrupts the normative and legal underpinnings of neoliberalism embedded in our consumer financial protection regime.

From a normative perspective, AI problematizes neoliberal ideals of free markets and consumer autonomy. With regard to the free market ideal, AI challenges the notion that prices can ever be transparent or neutral. In digital environments where AI could use scraped data to manipulate consumer behavior and tailor-recommend products at inflated prices,28 prices do not reflect the objective market value that consumers (as market agents) ascribe to their preferences.29 With regard to the consumer autonomy ideal, AI defies the prevailing understanding that more information is always better for consumers. Through manipulating personal data and inundating consumers with information, AI can easily distract consumers from their true product preferences.30 Under the psychological mechanism of confirmation bias,31 overwhelmed consumers can easily agree to terms against their best interests.32 Thus, widespread, unrestrained adoption of AI solutions in the consumer financial market can undermine both free choice and market transparency.

From a legal perspective, AI exposes the blind spot of the individualist consumer protection regime: its commitment to formal equality conceals systemic inequalities. Existing disclosure and fair lending laws embrace the assumptions of market neutrality and formal equality of economic opportunity without recognizing the substantive, systemic inequalities in credit provision.33 Consequently, they adopt individual-based solutions to credit inequality, which are inherently ill-suited to systemic problems. Both the ECOA34 and TILA35 look exclusively to creditors’ individualized conduct when the laws should instead look to the parties’ market relations.

Essentially, neoliberalism’s emphasis on formal equality and individualism obscures the source of algorithmic harm: unjust market relations. AI aggregates data of specific consumers in unaccountable ways and derives knowledge about general consumer groups from this aggregated data (i.e., knowledge discovery); this affects both consumers within direct transactional relations with creditors and nonparties.36 Whether intentional or not, creditors’ widespread use of AI for credit underwriting may reinforce unjust market relations between creditors and all consumers. This occurs because creditors, as owners and users of AI systems, control the channels of consumer data production, circulation, and retainment.

Key Concepts and Definitions

Before delving into the details, it is necessary to first clarify some key concepts being invoked throughout this article:

(i) Artificial Intelligence: When this article uses the term AI, it focuses on the subset of machine learning37 known as deep learning (DL), which is currently being deployed by FinTech lenders to assess and underwrite consumer credit.38 DL uses layered decision-making structures called artificial neural networks, which simulate the neural networks of a biological brain.39 Like other machine learning techniques, DL algorithms operate by harvesting training data, extracting features from datasets, learning from these features, and “apply[ing] what they learned to larger datasets to determine or predict something about reality.”40 The key difference is that, while earlier iterations of machine learning required human instructions to extract features from data inputs, DL recognizes patterns automatically.41 What this means is that a DL algorithm can engage in its own feature extraction, continuously learn from past mistakes, and self-adjust future interactions with consumer data inputs each time it makes a prediction.42 After a few iterations, the DL model refines its decision logic by eliminating noise data that is contradictory or irrelevant.43 Although FinTech lenders and creditors also use other AI technologies for credit underwriting, their use of DL models currently raises regulatory concern due to the models’ opacity and self-learning capabilities.44 Regulators’ primary concern is that DL models often rely on internally generated concepts that produce unpredictable outcomes.45
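To make the layered structure described above concrete, consider the following sketch of a toy feedforward network trained on fabricated data for a hypothetical creditworthiness task. It is a minimal illustration (using scikit-learn’s MLPClassifier as a stand-in for far larger production systems), not a depiction of any lender’s actual model; the features, data, and network size are all invented.

```python
# Illustrative only: a toy feedforward neural network for a hypothetical
# creditworthiness task. All features and data are fabricated; no real
# lender's model is depicted.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic applicant features (imagine income, debt load, and two
# "fringe data" signals).
X = rng.normal(size=(1000, 4))
# Synthetic repayment outcome loosely tied to the first two features.
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Two hidden layers stand in for the stacked "artificial neural network"
# layers described above: each layer re-represents the output of the layer
# below it, with no hand-written feature-extraction rules.
model = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0)
model.fit(X, y)

# The trained network outputs a repayment probability for a new applicant.
applicant = rng.normal(size=(1, 4))
print(model.predict_proba(applicant))
```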

(ii) Algorithmic Harm: This article identifies two sources of algorithmic harm: (1) algorithmic decisional harm, which refers to the harm that consumers incur when algorithms exploit consumers (through price discrimination)46 by taking advantage of their market-induced insecurities or cognitive flaws, using biased information that the algorithm has garnered about individual consumers or consumer groups,47 and (2) algorithmic informational harm, which refers to the harm that consumers suffer due to how information about them (whether consumer-owned or within their reasonable expectations of privacy) is collected, processed, and engineered to construct archetypes of consumer preferences for market usage.48 Whereas the former category describes harms associated with problematic outputs, the latter describes harms associated with problematic inputs.

(iii) Knowledge Discovery: This refers to the process by which data (e.g., digital footprint, market information, online records) regarding any consumer group or individual is discovered—that is, through scraping, mining, and aggregating.49 Data discovered via this process is then tuned and optimized to generate behavioral insights (i.e., knowledge) about consumers who are subjects of algorithmic decision-making. Machine learning is a technique to conduct knowledge discovery. By way of illustration, machine learning generates predictions through the following repeating steps: (1) gathering and cleansing the data; (2) splitting the data into a training and a testing dataset; (3) training the predictive model with the training dataset based on the algorithm’s instructions; (4) validating the model with the testing dataset.50
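The four steps above track a conventional supervised-learning workflow. The sketch below walks through them on synthetic data; every dataset and variable name is hypothetical.

```python
# Sketch of the four knowledge-discovery steps described above, run on
# synthetic data. All names and values are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# (1) Data gathering and cleansing: fabricate records, then drop rows
# with missing values (simulated here as NaNs).
X = rng.normal(size=(2000, 5))
X[rng.random(2000) < 0.05, 0] = np.nan          # simulate dirty records
y = (X[:, 1] + rng.normal(scale=0.7, size=2000) > 0).astype(int)
complete = ~np.isnan(X).any(axis=1)
X, y = X[complete], y[complete]

# (2) Split the data into a training and a testing dataset.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=1)

# (3) Train the predictive model with the training dataset.
model = LogisticRegression().fit(X_train, y_train)

# (4) Validate the model with the testing dataset.
print("held-out accuracy:", model.score(X_test, y_test))
```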

(iv) Consent Manufacturing: This refers to processes of information control that manipulate consumer desire and influence consumers to make market decisions against their interests. In AI-mediated credit markets, consent-manufacturing takes two forms: (1) creation of personalized information silos that control expectations of consumers who engage in a credit transaction with an AI-informed creditor; and (2) production of generalized knowledge about group consumption behaviors designed to manipulate prospective consumers and nonparties to the credit transaction.51

(v) Credit Underwriting: This refers to the practice of underwriting consumer credit through risk-based assessment of consumer creditworthiness.52 Typically, creditors base their decisions to extend or deny credit to a consumer on the following considerations: (1) the probability of default or delinquency (i.e., consumer credit risk); (2) the opportunity cost of underwriting (i.e., expected return); (3) the possibility of loan recovery for the type of financial product offered, factoring in the creditor’s asset portfolio (i.e., risk adjustment).53 If the creditor accepts the consumer’s application for a loan, then the creditor calculates an estimated price range for the risk-return tradeoff that would render the credit extension profitable.
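In stylized form, the risk-return tradeoff described above can be reduced to a one-period break-even calculation. This is a simplification for illustration only; actual pricing models are proprietary and far more elaborate, and every figure below is invented.

```python
# Stylized one-period risk-based pricing. Real underwriting models are
# proprietary and far more elaborate; all numbers here are invented.
def breakeven_rate(p_default: float, recovery: float, funding_cost: float) -> float:
    """Lowest interest rate at which expected repayment covers the cost of funds.

    Expected payoff per $1 lent: (1 - p_default) * (1 + r) + p_default * recovery.
    Setting this equal to (1 + funding_cost) and solving for r gives the rate.
    """
    return ((1 + funding_cost) - p_default * recovery) / (1 - p_default) - 1

# A borrower judged to have a 5% default probability, with 40% recovery on
# default, against a 3% cost of funds:
print(f"{breakeven_rate(0.05, 0.40, 0.03):.2%}")  # ~6.32%

# The same loan priced against a 20% default probability:
print(f"{breakeven_rate(0.20, 0.40, 0.03):.2%}")  # ~18.75%
```

Any rate above the break-even level is profit: the higher the assessed default risk, the higher the price the consumer faces.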

Traditionally, creditors rely on the credit reports issued by credit bureaus (e.g., Equifax, Experian, and TransUnion) to conduct risk-based lending.54 Over the past three decades, credit scores and automated scoring systems have become the dominant method for underwriting consumer credit.55 Regulators have criticized credit reports and credit scores as systematically disadvantageous to consumers with thin credit histories or little prior engagement with the banking system.56 In the last five years, creditors have increasingly shifted to AI to assess and underwrite consumer credit.57 The rise of AI credit underwriting coincided with the emerging practice of using alternative “fringe data” to assess consumer creditworthiness, a practice that does not require the formalized credit information used by conventional credit reporting and scoring.58 Bankers and FinTech lenders tout AI as a panacea for enhancing credit access for “unbanked” and “underbanked” consumers.59 Its usage is most concentrated in the underwriting of unsecured personal loans and credit cards.60 From 2015 to 2019, FinTech lenders nearly “doubled their share” in the unsecured personal loan market and “now account for 49% of originated loans.”61 Auto-lending62 and small business lending63 are also areas where machine learning algorithms are used for credit underwriting.

Analytical Roadmap

The remainder of this article proceeds as follows. Part I investigates two questions that lie at the heart of this article: How are AI technologies being introduced in ways that intensify systemic credit inequalities? To the extent that AI is being used to exploit consumers through the extraction and commodification of consumer data, where is the locus of algorithmic harm in these spaces?64 To answer these questions, Part I articulates a theory of price engineering and consent manufacturing to explain why and how AI technologies have been used to perpetuate unjust market conditions for credit access.

Part II explains why the contemporary consumer financial protection regime, informed by the neoliberal ideals of free markets and consumer autonomy, fails to address the risks of algorithmic harm. The principal reason for failure is that the existing fair lending and disclosure laws overly fixate on protecting individual market freedom with minimal regard to systemic and relational inequalities. As this Part aims to demonstrate, the neoliberal idolization of consumer free choice in the credit industry traces its roots to federal credit legislation that began in the 1970s.

Part III criticizes two dominant legal proposals on the table: algorithmic input scrutiny and regulatory technology. Despite correctly identifying the source of algorithmic harm, such proposals do not interrogate the flawed assumptions of free markets and consumer autonomy. Their solutions tend not to venture beyond the classic neoliberal arguments for data transparency and consumer education.65 The incompleteness of these proposals often leads to wrongheaded solutions that ultimately reinforce unjust market relations.

Part IV proposes alternative pathways to build AI accountability. It lays out steps to reshape the presently unjust market relations of data production, circulation, and retainment through (1) reimagining the nature of data ownership, (2) creating a collective intellectual property right in data, and (3) building a collective data governance infrastructure anchored in the open digital commons.

I. Current Landscape of Algorithmic Exploitation

AI is transforming the field of consumer credit. Since the mid-2010s, AI has become exponentially more accessible, sophisticated, and commercializable.66 A 2018 Fannie Mae report found that 27% of mortgage originators used machine learning and artificial intelligence in their origination processes while 58% of mortgage originators expected to adopt the technology within two years.67 In a 2020 lender survey, approximately 88% of U.S. lenders reported that they planned to invest in AI applications for credit risk assessment.68 In the U.K., 72% of financial services firms use machine learning or some variation of AI in their businesses.69 With the release of advanced DL technologies in 2023—including Generative AI and large language models that utilize artificial neural networks—AI has become more deeply integrated into the consumer underwriting industry.70 Within this decade, it is exceedingly likely that AI credit underwriting will become the new market imperative.

The rapid adoption of AI in the credit market has spawned a range of responses. On one end of the spectrum, FinTech and banks have painted a rosy image. They argue that AI can help creditors revitalize so-called credit deserts by reaching the unbanked and underbanked.71 For them, AI’s ability to amass fringe data and gain insights about consumers’ market behavior presents a valuable business opportunity: creditors will be able to lend to consumers who were previously denied credit due to the lack of formalized credit information.72 In the meantime, markets will work on their own without government regulation. On the opposite end of the spectrum, regulators and consumer advocates have expressed concern that the unbridled use of AI can encroach on data privacy and erode due process.73 As creditors delegate credit decisions to AI, the credit-underwriting process can become less transparent, which will make consumer litigation under the fair lending laws more difficult.74

The reality, however, is that both responses evade the root problem. FinTech and banks are wrong to assume that free markets will eliminate credit inequalities. Regulators and consumer advocates are right to worry about AI, but they have misdiagnosed the problem as the erosion of consumer autonomy and free choice. As this Part seeks to illustrate, the true source of algorithmic harm in AI credit-underwriting is unjust relations of data production, circulation, and control that dictate the outcome of AI’s knowledge discovery processes. AI credit-underwriting is harmful not because it is more discriminatory or intrusive than credit decisions made by human loan officers, but because it can direct creditors’ market power towards more exploitative domains of credit consumption through engineering price-signals and manufacturing consumer consent.75

How Does AI-Based Credit Underwriting Harm Consumers?

Algorithmic Decisional Harm

How does a lender’s use of advanced credit-underwriting algorithms generate risks of consumer exploitation? To thoroughly understand the current state of algorithmic exploitation, consider three scenarios:

Scenario A1: Suppose a creditor is seeking to expand its business into a new community. The creditor purchases from data brokers a right to access a private database containing vast volumes of alternative data regarding what people in the target community consume, purchase, desire, and browse online. This private database sources its data from a wide range of intermediaries that collect personal data from mobile apps, websites, tracking devices, and social media—and it happens to include data about me collected from my daily iPhone usage. To make sense of the information gathered from this private database, the creditor uses an advanced DL algorithm to summarize its patterns and generate predictions. With this data, the algorithm reveals that my family currently suffers from a short-term liquidity crisis because I have lost my manufacturing job. It also learns, from reading my search history, that I need quick cash to pay medical expenses for my uninsured family member. Based on this information, the algorithm can micro-target me with predatory advertisements and recommend a loan that could allow me to defer interest payments for the first month (but I will have to pay a higher compounding interest rate after the first month, according to the terms of the agreement). I accept the terms because I have no alternatives.

Scenario A2: Suppose that, after one month, I am lucky enough to find a new job and my financial situation has improved. I am no longer in need of short-term loans, but I do not yet have enough cash to pay off the entire principal and interest accrued from my previous debt. Again, with the aid of a DL algorithm, the creditor can recommend a new package that allows me to further defer the interest, but under the condition that I borrow more. I end up accepting a combined loan package that is much more costly than what similarly situated borrowers pay.

Scenario A3: Now, suppose further that another individual from my community who has similar income levels, family obligations, savings, and consumption levels is looking for new sources of credit. Like me, she has a low credit score and has struggled to obtain loans from large banks. Using the information harvested from me, the AI-informed lender can engage in the same pattern of microtargeting against her and trap her into a cycle of indebtedness.

What distinguishes these three scenarios? Scenario A1 exemplifies what economists identify as first-degree price discrimination (FDPD). FDPD occurs when businesses charge the maximum possible price for each unit of goods or services consumed by the consumer.76 Scenario A2 exemplifies what economists call second-degree price discrimination (SDPD). SDPD occurs when businesses charge different prices for different quantities consumed.77 Finally, Scenario A3 exemplifies third-degree price discrimination (TDPD). TDPD occurs when businesses charge different prices to different consumer groups.78 These three forms of price discrimination differ from each other in terms of the relationship and direction of exploitation between sellers and buyers in a market transaction.
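In stylized textbook notation (a formalization supplied here for clarity rather than drawn from the cited sources), the three degrees can be summarized as follows, where v_i denotes consumer i’s willingness to pay, q the quantity purchased, and g the consumer group:

```latex
% Stylized formalization of the three degrees of price discrimination.
\begin{align*}
\text{FDPD:}\quad & p_i = v_i && \text{each consumer pays her full willingness to pay;}\\
\text{SDPD:}\quad & p = p(q) && \text{the per-unit price varies with the quantity purchased;}\\
\text{TDPD:}\quad & p = p_g,\ g \in \{1,\dots,G\} && \text{the price varies across consumer groups.}
\end{align*}
```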

Conventionally, FDPD, SDPD, and TDPD occur in separate domains. Standard economics textbooks generally characterize price discrimination as a symptom of market failure, caused by either a lack of competition or a lack of informational transparency.79 FDPD (also known as perfect price discrimination) occurs due to informational asymmetries between creditors and consumers on a direct and discrete basis, which commonly manifests in the form of “take-it-or-leave-it” situations.80 SDPD (also known as nonlinear price discrimination) occurs due to the absence of consumer bargaining power and the inability to exit an exploitative business relationship with the creditor.81 TDPD (also known as market-wide price discrimination) occurs due to monopolies over a coveted resource or informational failures across similarly situated consumers who would have shared the same market preferences absent the monopoly.82 By definition, the three domains of price discrimination must remain separate because each domain correlates with a failure in a different market relationship.

However, in the age of AI, the three domains of price discrimination are no longer separate. Rather, these domains build on each other and intensify their exploitative effects. The AI-informed creditor’s microtargeting in Scenario A1 laid the foundation for the further exploitation that occurred in Scenario A2. Using the same information extracted from the consumer, the creditor in Scenario A3 can now subject another consumer who is not in privity of contract with the initial consumer to exploitative lending terms. The creditor’s use of AI for credit-underwriting allows each form of price discrimination to overlap; advanced AI models can use data garnered from one consumer to make predictions about other members of the consumer group based on classifications from the knowledge discovery process. Moreover, with the assistance of AI, creditors can more accurately target vulnerable consumers through scraping, processing, and analyzing mass volumes of consumer data obtained from data aggregators. AI drastically lowers the cost for creditors to engage in these three levels of price discrimination.

Algorithmic Informational Harm

In addition to causing decisional harms through price discrimination, AI-based credit underwriting can cause informational harms depending on how the AI model intakes data. Typically, consumers suffer two types of informational harm—(1) individual informational harm, which refers to “harm[s] that a data subject may incur from how information about [individuals] is collected, processed, or used,”83 and (2) social informational harm, which refers to the “harms that third-party individuals may incur when information about a data subject is collected, processed, or used.”84 To understand the two forms of informational harm, consider two scenarios:

Scenario B1: Suppose a FinTech lender uses an advanced DL algorithm to underwrite consumer credit and evaluate creditworthiness. The target borrower whom the lender seeks to evaluate does not have a FICO credit score. She also lacks any other formal credit history that is indicative of creditworthiness. In fact, the borrower belongs to an underrepresented minority group whose members historically had limited prior engagement with the formal banking system (i.e., credit invisible consumers). Undeterred by the lack of available credit information, the lender purchases a right to access a nonpublic database that sources data from people’s mobile apps, online subscriptions, browser history, social media, and other “fringe data.” The database includes the borrower’s sensitive personal medical information and records of hospital visits. The lender then instructs its DL algorithm to scrape data from the nonpublic database and trains the algorithm to make predictions about the borrower’s likelihood of default. Since the frequency of the borrower’s medical visits and her condition are positively correlated with indebtedness, the algorithm gives the borrower a low hypothetical credit score and computes a rate of lending return based on that information. Without knowledge of this underlying data, the FinTech lender uses the algorithm’s results and offers the borrower a costly short-term loan with unfavorable rates on the assumption that she is at high risk of default. Here, the borrower suffers individual informational harm because her sensitive medical data was used for a different, unrelated purpose that resulted in her receiving a low hypothetical credit score.

Scenario B2: Suppose the same facts as above, except that the algorithm also scraped data from other people who are similarly situated to the initial borrower. After analyzing the profiles of 1,000 individuals, the algorithm finds that a particular minority group disproportionately suffers from the same medical conditions as the initial borrower. In fact, people from the same cultural heritage who share the same dietary habits are 50% more likely to develop the medical condition than the population average. Defining this pattern as relevant information, the algorithm factors that disparity into its learning process. When the next borrower comes to the same lender and applies for a loan, the algorithm automatically computes a hypothetical credit score that takes the medical condition into consideration. Even though the algorithm did not make a prediction based on race, ethnicity, or religious classifications, the result has a disproportionate adverse impact on borrowers from the same group. Here, the new borrower suffers social informational harm because data harvested from a different individual was repackaged into new datapoints that were used against her.
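The mechanism in Scenario B2 (disparate impact arising even though no protected attribute is ever an input) can be reproduced in a short simulation. Everything below is synthetic and hypothetical; the point is simply that a facially neutral feature correlated with group membership transmits the disparity on its own.

```python
# Synthetic demonstration of Scenario B2: the model never sees group
# membership, yet produces disparate approval rates because a facially
# neutral feature ("medical visits") correlates with the group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 10_000

# Group membership: never given to the model.
group = rng.random(n) < 0.2

# A facially neutral input that is, on average, 50% higher for the group,
# mirroring the correlated medical condition in the scenario.
visits = rng.poisson(lam=np.where(group, 6.0, 4.0))
income = rng.normal(50, 10, size=n)

# Historical outcomes in which visits correlate with default.
default = 0.3 * visits - 0.05 * income + rng.normal(scale=1.5, size=n) > 0

features = np.column_stack([visits, income])
model = LogisticRegression(max_iter=1000).fit(features, default)
approved = model.predict_proba(features)[:, 1] < 0.5  # approve if P(default) < 50%

print("approval rate, minority group:", approved[group].mean())
print("approval rate, everyone else: ", approved[~group].mean())
# The gap appears even though "group" was never among the model's inputs.
```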

While both harms can be caused by AI information-processing systems, the two differ in the directionality of the informational control that generates the harm. Individual harm is caused by situating consumers within highly monitored and engineered informational systems where owners/users of AI (creditors) exert vertical control over the circulation of data and the social relations of data production.85 Social harm is produced when owners/users of AI export individual harm to similarly situated consumers outside the vertical information flow, thereby “amplify[ing] social processes of oppression along horizontal data relations.”86

Existing data privacy laws address some aspects of individual informational harm. Generally, individual informational harm is accounted for in laws governing: (1) consent-less data collection,87 (2) denial of informational access,88 (3) consent-less disclosure of personal data (i.e., data breaches),89 and (4) use of inaccurate information in credit reporting.90 But, under existing law, individual informational harm is redressable only if such harm constitutes a violation of some aspect of individual autonomy or dignity91––e.g., right to access, right to identification, right to be informed, right to withdraw consent, right to accurate information, and right to be forgotten.92 Under existing statutory and doctrinal frameworks, individual informational harms outside the domain of intrusions give rise to no cause of action.

As for social informational harms, redress is entirely absent from existing legal regimes. No law in the U.S. has accepted a theory of data governance beyond the protection of individual autonomy or dignity. Even the European Union’s General Data Protection Regulation (GDPR)—supposedly the “strongest data privacy and security law in the world”93—fails to account for social informational harms resulting from unjust effects of data production, circulation, and retainment.94 In strengthening consumers’ control over the terms of data extraction and use, dignitarian data-governance regimes such as the GDPR seek to rebalance the power disparities between data-collectors (owners/users of AI) and data-subjects (consumers) within the vertical relations of informational control.95 But these regimes ultimately “fail to apprehend the structural conditions driving the behavior they aim to address.”96 As demonstrated in this section, even the most progressive dignitarian data governance systems to date are incomplete in their attempts to redress social informational harm.

How Is AI Changing the Credit Market for the Worse?

The Nature and Impact of Price/Consent Defects

This section examines the nature and impact of price engineering and consent manufacturing on consumers. It explains how consumers respond to price/consent defects from a socio-behavioral perspective and how this article’s characterization of consumer behavior departs from the neoliberal presumptions.

Within the classical neoliberal imaginary, consumer preferences are exogenous to market mechanisms.97 When prices are rigged—usually because of excessive social or governmental meddling (i.e., central planning)—consumers will refuse to transact on the market because the underlying goods and services do not match their range of price preferences.98 In the same vein, neoliberals imagine consent defects to be the result of consumers’ knowledge deficiency or inability to adequately communicate their (exogenous) preferences—i.e., inability to exercise their best interests—given the resources they own.99

From the neoliberal perspective, the problems of price-engineering and consent-manufacturing are results of imperfect markets and irrational market agents. Their solution, of course, is to restore perfect markets and rational agents.100 These problems fall squarely within the remedial zones of disclosure and fair lending. Once these institutions are in place, consumers will be able to vindicate their rights through private litigation.

But this characterization of consumer behavior is inaccurate. Consumer preferences are not exogenous to the market; they are shaped by market power and reflective of socialized choices.101 What consumers choose to purchase reflects how they would like to perceive themselves, how they would like to situate themselves in communities and social networks where they have standing, and what markets tell them about how consumption would help them achieve their goals.102 Broadly speaking, consumer preferences involve the values and tastes that shape people’s market activities—i.e., aspects of economic decision-making that the neoliberal assumptions of exogeneity and rational choice fail to explain.

What this means is that consumer preferences are not concrete, itemized, and preexisting desires that consumers carry to the market. Instead, consumer preferences are fluid, broad, and formed within the market’s allocative processes through consumers’ constant shopping activities or engagement with other market actors.103 Thus, neoliberals misunderstand the implications of price-engineering and consent-manufacturing.104 While neoliberals strive to minimize price-engineering and consent-manufacturing because they corrupt the neoliberal ideals of free markets and consumer autonomy (and therefore make deregulation more difficult to achieve), this article argues that price-engineering and consent-manufacturing justify a shift away from individualist solutions towards greater public regulation of the private markets.

Once we understand that consumer choices are socialized and embedded, it is not hard to see why the current system—built on the discourse of individual rights and the legal infrastructure of private litigation—fails to fulfill its promises of economic justice.105 No matter how exploited the consumers are or how widespread the exploitative practice, consumers whose preferences are formed by price/consent defects will not file a case to begin with. From a critical perspective, the legal and technical protocols originally designed to protect consumers are in fact hurdles obstructing consumers from achieving meaningful credit equality. The following paragraphs explore how the business applications of AI in credit underwriting are conducive to price-engineering and consent-manufacturing.

Price Engineering in AI-Mediated Credit Markets

There are several common misconceptions about what AI does to price-signals in credit markets. The first—and perhaps most popular—misconception relates to the nature of AI decision-making. According to the mainstream argument advanced by the first generation of algorithmic enthusiasts (and endorsed by FinTech and banks), AI improves the accuracy of credit risk predictions because it (1) is better at absorbing, processing, and analyzing large volumes of information than human decision-makers; and (2) acts upon such information without human biases. This translates into more accurate pricing of consumer credit risks and a more efficient allocation of financial resources. The advantage of AI, the argument goes, is that it substitutes for biased human judgment.106 It concludes that AI’s “suppression of some aspect of the self, the countering of subjectivity” leads to more desirable market outcomes.107

But the mainstream argument suffers from a critical flaw: contrary to what enthusiasts depict, AI makes decisions by replicating, rather than displacing, human bias. Recall that AI decisions are made through (1) scraping available individual/market-level information about their subjects, (2) repackaging scattered data into behavioral archetypes, (3) generating predictions about human behavior based on these constructed archetypes, and (4) adjusting predictions to reflect new informational intake.108 This process inevitably recycles past human prejudice and erroneous judgments into AI’s present and future predictions.109 For instance, data about consumers’ education level, incarceration history, and court records—i.e., outcomes of past societal disparities resulting from racial-class subjugation—are typically picked up by AI in the scraping process and repackaged into behavioral archetypes about the consumer’s behavior.110 Even pure economic data—e.g., consumer income, household indebtedness, and credit history—may reflect racial-class disparities, since minorities are more frequently targeted by predatory creditors.111 When these specific individual-level data are absent, AI fills in the blank using behavioral archetypes of other consumers from the same constructed group.112 Thus, credit pricing by AI is anything but value-neutral.

The second common misconception is that AI lowers the cost of lending and increases credit access. Advocates for de-regulating AI argue that the market adoption of AI has made the underwriting process more equitable and inclusive.113 They have attempted to marshal empirical support, for example, from a National Bureau of Economic Research report indicating that “FinTech algorithms discriminate 40% less than face-to-face lenders”114 when it comes to mortgage prices.115 Another study, conducted by the Consumer Financial Protection Bureau (CFPB), indicates that creditors using AI approve 23–29% more loan applicants than creditors who purely rely on human judgment for their credit decisions.116 The same study also shows that AI lending lowers the annual average interest rates by 15–17% for approved loans.117

However, if we pay attention to other metrics, it becomes unclear whether the current uses of AI in lending meaningfully improve consumers’ access to equal credit. Using administrative data on 10 million U.S. mortgages originated between 2009 and 2016, Fuster et al. found that, while AI has indeed increased aggregate credit access and average loan acceptance rates, it also widened cross-group disparity: “[W]hile a large fraction of borrowers who belong to the majority group … experience lower estimated default propensities under the machine learning technology … these benefits do not accrue to some minority race and ethnic groups … to the same degree.”118 Even within racial minority groups, disparities in lending emerge. Those who benefit from AI are disproportionately White-Hispanic and Asian. Among those who lose are non-White Hispanics.119

Thus, focusing on loan acceptance rates as the measurement for credit access obscures more than it illuminates. While AI does approve more loans than human loan officers, the data does not tell us about the quality and substance of the loans being approved. A more plausible explanation for the positive correlation between AI adoption and credit access is that AI helps creditors identify previously invisible profit-making opportunities. Since AI allows creditors to assess the credit risks of consumers without the use of formalized credit information, it also enables them to reach unbanked and underbanked communities.120 But, to compensate for the high risks of lending in these “credit deserts,” creditors need to adjust prices to match the risks if they hope to make a profit.121 To do this, creditors typically reduce the upfront prices of lending (to make loans more accessible to low-income borrowers) but increase prices on the backend—through deferred interest payments, buy-now-pay-later schemes,122 balloon payments,123 or negatively-amortizing interest rates.124 With the use of more sophisticated AI credit models, such as continuously-learning DL algorithms, creditors can more easily reap profits from low-income borrowers and extract rents by obscuring the actual costs of consumer financial products. Increasing credit access in this way will only widen the wealth gap and systemic credit inequalities. What the mainstream proposition omits, therefore, is the flipside of credit cheapness: low quality.
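A stylized arithmetic example makes the backend-loading concrete. The figures and the compounding convention below are invented for illustration; actual deferred-interest products vary widely.

```python
# Stylized comparison of a backend-loaded "teaser" loan against a flat-rate
# loan. All numbers and the compounding convention are invented.
def balance_after(principal: float, monthly_rates: list[float]) -> float:
    """Balance after applying one compounding rate per month, with no payments."""
    for rate in monthly_rates:
        principal *= 1 + rate
    return principal

# $1,000 borrowed for 12 months, repaid in full at maturity:
teaser = balance_after(1000, [0.00] + [0.04] * 11)  # "no interest" month 1, then 4%/mo
flat   = balance_after(1000, [0.02] * 12)           # a flat 2% every month

print(f"deferred-interest loan due at maturity: ${teaser:,.2f}")  # ~$1,539
print(f"flat-rate loan due at maturity:         ${flat:,.2f}")    # ~$1,268
```

The teaser structure’s headline (“no interest for the first month”) is cheaper upfront yet costs roughly 21% more at maturity, which is the sense in which low entry prices can mask high backend costs.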

The third common misconception is that more data leads to more accurate algorithmic predictions. This claim builds on the techno-chauvinist assumption that greater informational intake necessarily produces more rational decisions.125 By implication, if an AI ever makes an “irrational” decision, such as discriminating against minority consumers in the credit underwriting process, then the problem must be inadequate or insufficient data inputs.126

But the reality is that more data can reinforce algorithmic biases. Even though AI’s information-retaining capacity and computing power are vastly superior to humans’, AI makes decisions by replicating the human decision-making structure. Contrary to the public imagination, AI does not make use of every piece of data gathered.127 When AI receives new data in raw, scattered form, its first task is to sort that data into existing archetypes.128 Since AI is trained using data from the observable human environment, the archetypes constructed by AI inevitably reflect the same biases that exist in the human environment.129

Contrary to the techno-chauvinist assumption, AI decisions tend to emulate pre-existing staple decisions—i.e., norms that can be summarized into statistical patterns.130 These staple decisions then form the basis of AI’s self-learning process—e.g., how it tunes its parameters to reflect new information, what weight it gives to each factor, and which data it deems distracting or noisy.131 By design, AI marginalizes any “splinter data” that cannot be mapped onto a pre-existing norm.132 This means that AI, like humans, can exhibit confirmation bias when fed too much information.

Nevertheless, the fallacy of “more-data-means-better-outcomes” runs deep in the credit industry. The idolization of informational quantity has largely fueled the movement within the credit industry to expand the use of alternative “fringe” data. This wave began with FinTech’s push for “big data” analytics in the personal loan and small-business credit underwriting space. In 2012, a Los Angeles-headquartered start-up, ZestFinance (now “Zest AI”), became the first company to combine “machine learning style techniques and data analysis with traditional credit scoring.”133 ZestFinance’s marketing strategy emphasized AI as a solution to the persistent problem of credit invisibility in low-income communities.134 It framed its approach as using “all data as credit data.”135 By 2022, alternative data usage had become widespread.136

The reality behind the rosy image painted by ZestFinance is that proxy discrimination is ingrained in each step of AI’s analysis.137 ZestFinance’s AI model takes into consideration data that “appear to have little connection with creditworthiness.”138 For example, the AI model measures “how responsible a loan applicant is” by analyzing the speed at which she “scrolls through an online terms-and-conditions disclosure.”139 The number of social media connections a person has, the frequency with which she deactivates an account, and the number of connections she unfriends are also used as proxies to measure risk-taking tendencies.140 The model also considers spending habits in the context of the loan applicant’s geographic location.141 For example, “paying half of one’s income [on rent] in an expensive city like San Francisco might be a sign of conventional spending, while paying the same amount in cheaper Fresno could indicate profligacy.”142 These proxies were not inserted by human programmers—they were generated automatically via algorithmic knowledge discovery processes that merely seek to model and replicate human decision-making.143

In a nutshell, all three common misconceptions stem from a misunderstanding of how AI works in credit markets. These misconceptions are rooted in the belief that AI is fundamentally different from human intelligence and exogenous to the human environment. Yet, as the foregoing paragraphs demonstrate, these assertions could not be further from the truth. In making predictions about human behavior and acting upon them, AI embeds, repackages, and reifies the very inequalities found in the human world. But AI also goes one step further: AI models amplify these biases by building on one another’s outputs.144 Once an AI model computes a result and wraps it in the form of packaged data, such data then enters the stream of market data that is constantly being scraped and analyzed by other AI models.145 In this digital ecosystem where data is incessantly rinsed and remade, price-signals reflect the aggregate biases of the market rather than the inherent value of goods and services being transacted.

Consent Manufacturing as Information Control

Consent manufacturing is not new. It is part and parcel of the market’s disciplinary power to manipulate consumers into buying what they do not need. It is also integral to the state’s propaganda power to mobilize citizens into acting against their self-interests and serving the elite consensus.146 Its origins and manifestations are well documented in Edward Herman and Noam Chomsky’s seminal work, Manufacturing Consent. Since its coinage, the term consent-manufacturing has been widely applied to studies of social media, the internet of things, and other engineered information environments.147

Like mass communications technologies before it, AI has ushered in an era of unprecedented suppression of the self by creating a chronic “reliance on market forces, internalized assumptions, and self-censorship, and without overt coercion.”148 This interweaving web of suppressive forces is reinforced by both the culture of neoliberal individualism149 and the material conditions of market dependency.150 It exists in all informational systems operating under capitalist logic, whether undergirded by old or new technologies.151 Here, what distinguishes AI’s suppression from that of mass communications is the form of control and the impact it has on the lives of those subject to the suppression.

In the credit market, AI manufactures consumer consent through two distinctive yet mutually-reinforcing pathways: (1) creation of personalized information silos designed to control and reset expectations of consumers within the immediate zone of the credit transaction; and (2) production of generalized knowledge about group consumption behaviors designed to manipulate prospective consumers and those who are nonparties to the credit transaction.152 Whereas the first pathway concerns the control over vertical data flows between consumers and creditors, the second concerns the control of horizontal data flows between consumer peers by creditors.153

In the first pathway, AI creates a system of self-hallucination through harvesting consumer data to learn about the consumers’ behavioral proclivities while simultaneously reshaping consumer expectations by pressing their cognitive weak spots. Within this system, consumers are ceaselessly inundated with information nudging them to choose credit products that are more exploitative and profitable for the creditor. The classic example is data aggregation in payday lending. Payday loans notoriously attract low-income, low-savings, and socially desperate consumers because they do not require credit scores or other formal credit history from the loan applicant.154 Such loans tend to have high backend costs (albeit with low entry prices) that can trap borrowers into persistent indebtedness.155 With the use of AI, and with very little information about any individual consumer, payday lenders can more accurately seek out situationally precarious consumers and those with tendencies to reborrow at high cost.156 In the process of learning about the consumers’ needs, inclinations, and predispositions, the AI mixes and matches price terms in ways that consumers will most likely accept. AI can also design the optimal payday loan structure that attracts consumers who do not need or would not have otherwise applied for the loan.157 Here, the role of AI is to augment the power of creditors over consumers by giving creditors control over vertical flows of data between the creditor and the consumer.

In the second pathway, AI creates an ecosystem of peer-hallucination via aggregation of data from a particular consumer group and using it to shape the expectations of prospective consumers who are not a party to the credit transaction. This ecosystem undercuts consumer power on two parallel dimensions.

First, as between consumers, AI creates a horizontal system of norm-convergence whereby consumers in the same affiliated groups and their proximate social networks are exposed to the same expectations. For instance, when consumer A0 applies for a loan underwritten by AI, those within the same group—consumers A1 and A2—will be exposed to similar expectations as A0 when they apply for a loan.158 If A0’s consumer expectations are skewed by processes of self-hallucination, A1 and A2 will most likely experience the same effect. This is because the nature of AI—and especially for DL algorithms—is that it “can be used to know things about [A1] that [A1] does not know [about herself], by referring back to [A1] from [A0].”159 And, to the extent that certain aspects of group An intersect with group Bn, “data from An can be used to train models that ‘know’ things about Bn, a population that may not be in any vertical relation with the system’s owner.”160

Second, as between creditors, AI generates data flows between users of AI engaged in the same underwriting practice. It creates a two-tiered digital environment: on the one hand, creditors can share information they collect about the consumers in a networked environment constructed by AI. On the other hand, consumers who are subjects of data scraping are isolated and kept mostly in the dark about what information they generate. In the payday lending industry, for example, the “data of those who have applied for a loan can be shared among lenders for retargeting.”161 Payday lenders can use horizontal behavioral insights about the consumer to target entire communities and trap repeat borrowers into unending cycles of indebtedness. Here, the role of AI is to sever direct horizontal ties between consumers while granting creditors visibility and control over the horizontal flow of consumer data.

Through the interplay of self/peer-hallucinating forms of consent-manufacturing, AI creates a digital environment where consumers are turned into data-producing machines—churning out new data each time they participate in the digital economy. Within this constructed environment, consumers are incessantly generating new marketable data through their routine engagement with the credit system. Data extracted from consumers’ everyday life are split apart, atomized, and reassembled into market price-signals; the price-signals are then re-consumed by consumers and turned into new data—a cycle of digital cannibalization.162 In this system, consumers become part of the products that they ultimately consume.

II. Neoliberal Roots of Consumer Financial Protection

This Part unearths the history of how the neoliberal ideals of free markets and consumer autonomy became entangled with the current normative paradigm of consumer financial protection. In doing so, this Part shows that neoliberal ideals are not timeless tenets of economic justice. Rather, they are products of congressional politics that served one particular historical purpose—to legitimate the federal government’s divestiture from public welfare and incorporate minorities into the free-market capitalist status quo. As such, this Part delegitimizes the dominant normative justification for delegating public solutions to credit inequality to the private markets.

Since the late-1960s, Congress has enacted a series of consumer financial protection laws163—e.g., FHA, ECOA, TILA, FCRA—to bolster consumer autonomy and facilitate competitive, transparent, and equitable markets for credit provision.164 Enacted at the height of the civil rights movement, these laws used credit access as a means to solve race-based economic inequality and placate social unrest.165 Yet, as the federal government gradually aligned itself with neoliberalism beginning in the mid-to-late 1970s, the civil rights notion of equal credit access merged with the individualist, laissez-faire ideology that saw market freedom as a panacea to poverty.166 This merger became a bipartisan consensus that guided almost all significant federal regulatory responses to credit inequality, giving rise to the belief that credit inequality can largely be resolved by maintaining efficient markets and race-and-gender-neutrality.167

As the following sections aim to demonstrate, our existing consumer financial protection regime, informed by neoliberal individualism, is ill-equipped to address the novel threats of algorithmic harm because it overly fixates on the protection of private rights. Despite Congress’s intention to eradicate systemic credit inequality, these laws have had limited impact in protecting consumers. The failures of the contemporary consumer financial protection regime trace their origins to historical path-dependencies set in the 1970s.

How Neoliberalism Became Entrenched in Credit Regulation

The Pre-Neoliberal History of Congressional Credit Legislation

Before the late-1960s, credit remained in congressionally uncharted waters, governed instead by a fractured regime of state laws, industry norms, and banking customs.168 State law regulated only loan size and usury limits,169 leaving “the decision as to whom credit should be granted” to creditors.170 The dominant practice among creditors in the 1960s was to consider the “three C’s of credit”: the character, capacity, and capital of the credit applicant.171 A popular credit underwriting manual in 1961 instructed creditors to label divorcees, indigenous peoples, and those living in “untidy homes” or “rundown neighborhood[s]” as having high credit risks.172 The Federal Trade Commission (FTC)’s 1970 study of major lending companies found that collecting racial information was a standard practice.173 In essence, credit underwriting in this era was done informally as a “relationship business” anchored in social networks, which enabled animus and bias to escape government detection.174

When Congress initially contemplated federal credit reporting and fair lending legislation in 1968, it confronted a vibrant yet unequal landscape of credit provision. For the white American working class, credit had become cheap and abundant. On the demand side, the stagnation of wages and inflation in the 1970s drove up the cost of living, turning debt-based consumption into a market imperative;175 credit became necessary for anyone hoping to purchase essential goods and services.176 Consequently, banks had to increase their credit supply. By the mid-1970s, credit had “ceased to be a luxury item.”177 These institutional changes in credit provision made borrowing an essential component of the everyday consumer experience in white working-class America.

But this expansion of credit was also unequal: the 1970s marked the emergence of a credit apartheid that segregated the American consumer population. The rise of banking made borrowing easy for the suburban white middle class, but not for African Americans, who made up a large portion of the urban poor.178 For them, credit was scarce and unavailable.179 Congress found unequal access to credit to be among the leading causes of social unrest among the urban poor.180 In a hearing before the Senate Committee on Banking and Currency, the FTC testified that credit unavailability was the cause of the economic desperation of the urban poor.181 By the mid-1970s, credit inequality had become an urgent issue of social stability that Congress could not afford to ignore.

Responding to gaping credit inequality and unrest, Congress enacted the first comprehensive fair lending law: the ECOA.182 The ECOA saw the use of any racial or gender information in credit underwriting as an infringement on the individual’s exercise of free choice and economic opportunity.183 Race-and-gender neutrality and individualism were the bedrocks of fair lending protection. The House Committee on Banking, Currency, and Housing, quoting the U.S. Commission on Civil Rights, explained:

It would be difficult to exaggerate the role of credit in our society. Credit is involved in [an] endless variety of transactions reaching from the medical delivery of the newborn to the rituals associated with the burial of the dead. The availability of credit often determines an individual’s effective range of social choice and influences such basic life matters as selection of occupation and housing. Indeed, the availability of credit has a profound impact on an individual’s ability to exercise the substantive civil rights guaranteed by the Constitution.184

This notion—that unrestrained credit access undergirds consumer autonomy—embodied the consensus that Congress reached after a decade of grappling with entrenched credit inequality.185

Despite Congress’s good intentions, the passage of the ECOA produced unintended consequences. Specifically, Congress’s reimagining of credit as a vehicle for individual social choice legitimized the federal government’s later divestiture from social welfare, which began with the government’s delegation of poverty reduction to private credit-underwriting institutions in the early 1970s.186 Credit was reframed as the “private-sector alternative to the welfare state.”187 Moreover, recasting credit access as a precondition for the meaningful exercise of civil rights redirected the focus of credit legislation from redressing systemic racial and gender inequalities to incorporating minorities into the free-market status quo.188 As the next section illustrates in further detail, these congressional endeavors laid the groundwork for the modern neoliberal consumer protection regime.

Displacement of Public Regulation by Private Enforcement

The rise of individualism and neutrality profoundly shaped legislative responses to credit inequality from the mid-1970s onward, directing the focus of credit legislation toward expanding the scope of creditor liability and access to banking services. For instance, subsequent amendments to the ECOA almost exclusively revolved around adding new categories to the list of protected characteristics, bolstering consumers’ procedural rights, and adjusting creditors’ disclosure obligations. The 1976 amendment added race, age, color, religion, national origin, and the receipt of public assistance income to the original categories of sex and marital status as criteria prohibited from consideration in the credit underwriting process.189 The 1988 amendment imposed additional disclosure obligations on creditors to (1) give formal written notice to applicants for business credit of the reasons for credit denial and (2) retain records of business credit applications for at least a year.190 The 1991 amendment heightened creditors’ disclosure obligations regarding residential mortgage lending.191 The 2003 revision to Regulation B, which implements the ECOA, imposed an “adverse action” notice192 requirement obliging creditors to deliver written explanations to consumers whenever they make credit decisions adversely affecting consumers’ rights under the ECOA.193 Similarly, amendments to the FHA in 1974, 1988, and 1996 mostly centered on heightening creditors’ disclosure obligations and consumers’ procedural rights—changes that largely mirrored the amendments to the ECOA.194

One reason for the growing legislative emphasis on disclosure and formal equality is that Congress increasingly pushed for private litigation as the principal means of vindicating consumers’ rights under the fair lending laws.195 When the ECOA was originally enacted in 1974, Congress employed a dual enforcement model—allocating rulemaking power to the Federal Reserve Board (FRB) while delegating the power to bring enforcement actions to the FTC.196 But, beginning with the 1976 amendment, Congress gradually replaced the dual enforcement model with one centered on civil lawsuits.197 Subsequent amendments raised the punitive damages ceiling but further constrained the agencies’ substantive rulemaking power. While agencies were granted discretion to implement procedural safeguards protecting consumers’ right to know and creditors’ duty to inform, their authority to craft rules identifying and prohibiting new harmful lending practices shrank dramatically from 1976 to the 2000s.198 Together, these legislative changes were designed to elevate private enforcement and relegate public enforcement to a secondary role.

However, despite the dominance of the individual rights model, the empirical record on private enforcement shows that consumer welfare did not meaningfully improve in the decades following the ECOA’s enactment. Although Congress intended for private lawsuits to be the cornerstone of enforcement, the fair lending laws spawned surprisingly little litigation. For a statute promising to eradicate credit discrimination, the ECOA invited fewer than 50 cases in the decade after its enactment199—fewer than the number of cases brought under the TILA per month during a similar period200—and far fewer than the number of employment discrimination cases filed per week under Title VII.201 This individualist regime exacerbated credit inequality because it also stripped agencies of their substantive rulemaking power.

Ironically, an individual rights model centered on private enforcement ended up hurting individual consumers. The most critical failures of this regime are twofold.

First, the legislative emphasis on disclosure and formal equality marginalized questions of bargaining power disparity—the central cause of transactional inequality in credit markets. This problem permeates most federal consumer financial protection laws. Under the TILA, for instance, a creditor’s good faith compliance with proper underwriting procedures and standardized disclosure forms immunizes her from liability.202 Under the ECOA, a creditor is deemed compliant with her notice obligations as long as she clearly explains the reasons for denying the consumer’s credit application and demonstrates that race and gender played no part in her decision-making.203 Under the existing individual rights regime, a consumer’s consent to a loan—even constructive consent upon sufficient disclosure—makes her responsible for the underlying consequences (including wage garnishment and collateral repossession following an event of default).204 It matters not that she was desperate, materially deprived, lacked a viable alternative, or fell prey to exploitative terms.205

Second, a private-enforcement regime shifts the costs of compliance from creditors and regulators to consumers: whoever contests the fairness of a transaction bears the legal costs and the evidentiary and pleading burdens. Additionally, unsuccessful credit applicants are reluctant to assert their rights against creditors, large or small, out of fear of the institutions, of reprisal, and of the risks of alienating creditors.206 The irony of private enforcement is thus that the poorest and most precarious consumers—minorities, women, immigrants, and other status-subordinated people who are most in need of protection—are typically the ones least able to assert their interests under the current legal regime.207

Contemporary Neoliberal Legal Response to Credit Inequality

At its core, the contemporary neoliberal legal paradigm can be characterized as a series of commitments to the individual rights model, implemented by statutes protecting the autonomy of markets and delegating public functions to private enforcement.208 Today, these commitments have coalesced into a consistent regulatory methodology, consisting of two components: (1) elevating cost-benefit analysis above other modes of policy inquiry;209 and (2) conditioning substantive regulation upon a finding of “market failure.”210 No matter what type of credit is being regulated, how it injures consumers, or where the locus of harm lies, regulators would follow these two methods drawn straight out of the neoliberal handbook. The following paragraphs explain the logic of each method and their legal manifestations.

Elevating Cost-Benefit Analysis Above Other Inquiries

Cost-benefit analysis concerns how regulators should exercise their discretion in crafting rules to address social and economic harms in markets.211

Neoliberals prefer cost-benefit analysis to other modes of regulatory inquiry because they see it as value-neutral and derived from the unbiased analysis of market data—i.e., data produced by optimal and self-correcting market processes that are dis-embedded from extrinsic social or governmental influences.212 While the proliferation of cost-benefit analysis in policymaking and judicial review has no doubt revolutionized the administrative process by curbing arbitrary agency action, it has also substantially restrained the federal bureaucracy’s power to enforce established congressional public policies.213

What is critical about the neoliberal transformation is that it elevated cost-benefit analysis to the exclusion of other modes of policy inquiry—by promising to be dis-embedded, value-neutral, and untainted by political influence.214 Policies premised on the radical redistribution of wealth and reconfiguration of market power are dismissed as advancing a subversive ideological agenda.215 The elevation of cost-benefit analysis also made the presumption of free and neutral markets uncontestable in the lawmaking and policymaking forums.216

But, despite its façade of neutrality, cost-benefit analysis is value-laden and ideologically driven. For one, numbers and statistics are highly susceptible to manipulation.217 What goes into the baselines, denominators, and benchmarks of empirical comparison reflects conscious political choices about who can and cannot be counted as subjects of policy inquiry. Yet framing these conscious choices as neutral reflections of market conditions obscures the power relations that dictate what goes into the analysis.218

In the field of consumer credit, the hegemony of cost-benefit analysis is most saliently manifested in two legal standards codified in the core consumer financial protection statutes: (1) legal thresholds of recovery conditioned upon the balancing of the inherently conflicting interests of consumers and creditors in the credit-underwriting process; and (2) judicial tests requiring agencies to show that the benefits of regulatory intervention outweigh the costs of disrupting the private ordering in markets.

The first—the balancing of consumer and creditor interests—is embedded in the very definition of discrimination in the credit inequality statutes.219 Under the classic definition of discrimination as disparate treatment, consumers seeking recovery must show that creditors took adverse credit actions against them because of their protected characteristics (e.g., race, gender).220 Even under the more progressive definition of discrimination as disparate impact, plaintiffs cannot prevail if the creditors can demonstrate that the challenged practice is (1) “necessary to achieve one or more of the substantive, legitimate, nondiscriminatory goals” of the creditor; and (2) “those [legitimate] interests could not be served by another practice that has a less discriminatory effect.”221

The second—the balancing of regulatory benefits and market costs—finds legal expression in statutory provisions governing the scope of federal agencies’ substantive rulemaking power. The Dodd-Frank Act restrains the CFPB’s power to identify and prohibit “unfair” credit practices by conditioning regulatory action upon findings that (1) the practice causes substantial consumer injury; (2) such injury is not reasonably avoidable by consumers; and (3) the regulatory benefits are not outweighed by the costs to the market.222 Similarly, the FTC’s “unfairness” power to govern the provision of credit is constrained by a three-prong countervailing benefits test that requires the Commission to balance any regulatory gains from agency action against the potential business losses of creditors.223

Like any legal test anchored in cost-benefit analysis, these statutorily mandated countervailing benefits tests are not value-neutral. By tying the hands of federal agencies through the cost-benefit inquiry, Congress opened a narrow legal forum for organized business interests to impede or push back against progressive agency actions. In the fields of payday lending224 and mortgage lending,225 creditors have successfully defeated several proposed agency rules regulating “unfair” credit practices by exaggerating the market costs and diminishing the regulatory gains through manipulation of the parameters of comparison. In judicial review of agency action, the banking industry has persuaded federal courts to strike down newly promulgated rules on the ground that the agencies exceeded their statutory authority by failing the cost-benefit analysis test.226 Viewed through the lens of neoliberal politics, then, the elevation of cost-benefit analysis over other modes of policy inquiry created a route for organized business interests to propel deregulatory agendas and impede consumer protection programs. It also led to the “judicialization” of policymaking—i.e., the removal of important policy decisions on distributive trade-offs from domains “subject to open deliberation to arenas insulated from such deliberation through legal protocols and layers of protective rules about who may access the knowledge.”227

Conditioning Intervention Upon a Finding of Market Failure

Whereas cost-benefit analysis relates to the exercise of regulatory discretion, theories of market intervention concern the goal of consumer financial protection.

Over the past five decades, neoliberalism has transformed the goal of consumer protection from directly preventing consumer harm to removing constraints on consumers’ free choice to satisfy their preferences through markets.228 For neoliberals, the regulator’s job is simple: (1) to help consumers communicate their preferences in the market through the production of neutral price-signals, and (2) to ensure markets fulfill their intended function of satisfying consumer preferences. If companies distort the market’s price-signals, the argument goes, a chain of harmful externalities will ripple through the dynamic and complex ecosystem of market agents who respond to those signals (e.g., creating arbitrage, inefficiencies, or deadweight losses).229 Thus, regulators should intervene only where market failures prevent markets from fulfilling their natural mandate, and then only to the degree necessary to rectify those failures.230 Under the market failure test, agencies that pursue aims beyond these two goals are not only abusing their discretion but also doing their jobs incorrectly.

Although the market failure test purports to constrain arbitrary and paternalistic agency actions, it ends up fetishizing an idealized notion of consumer choice. This ideology is most visible in two sets of rules which dictate when a federal agency can intervene to remediate harmful practices in consumer financial markets: (1) interpretative rules confining the agencies’ rulemaking power to merely correcting market failures; and (2) judicial doctrines invalidating agency actions that “misidentified” market failures.

One of the clearest examples of such fetishization is the FTC’s 1980 Policy Statement on Unfairness (hereafter the “Policy Statement”).231 A response to congressional worries about FTC “overregulation,” the Policy Statement established a three-prong standard232 limiting the FTC’s exercise of rulemaking power to prohibit “unfair” market practices under section 5 of the Federal Trade Commission Act (FTCA).233 In explaining the rationale for issuing the Policy Statement, the FTC stated:

Normally, we expect the marketplace to be self-correcting, and we rely on consumer choice—the ability of individual consumers to make their own private purchasing decisions without regulatory intervention—to govern the market. We anticipate that consumers will survey the available alternatives, choose those that are most desirable, and avoid those that are inadequate or unsatisfactory. However, it has long been recognized that certain types of sales techniques may prevent consumers from effectively making their own decisions, and that corrective action may then become necessary. Most of the Commission’s unfairness matters are brought under these circumstances. They are brought, not to second-guess the wisdom of particular consumer decisions, but rather to halt some form of seller behavior that unreasonably creates or takes advantage of an obstacle to the free exercise of consumer decision-making.234

Adopted at the height of the neoliberal takeover of Congress and the courts, the Policy Statement reflected a deep suspicion towards regulatory paternalism and an idolization of consumer free choice.235 These sentiments were amply echoed by the prevailing legal scholarship of the time. For instance, Robert Reich, then the FTC’s Director of Policy Planning and later U.S. Secretary of Labor, wrote that a paternalistic approach to consumer protection is “fundamentally incompatible with the liberal assumption that each person is the best judge of his or her own needs.”236 “A consumer-protection rationale focusing on the likelihood that consumers within particular markets will misestimate physical or economic risks attendant upon their purchases,” Reich explained, “can provide a strong basis for government intervention, untainted by paternalism.”237 This growing suspicion towards regulatory paternalism, both inside and outside the administrative state, converged with the prevailing neoliberal paradigm of free-market fundamentalism advocated by the Chicago School of law and economics.238

The FTC’s modern theory of “market failure” emerged in the early 2000s. At the 2003 annual Marketing and Public Policy Conference, J. Howard Beales, then-Director of the FTC’s Bureau of Consumer Protection, delivered a public speech stating that “[t]he primary purpose of the Commission’s modern unfairness authority continues to be to protect consumer sovereignty by attacking practices that impede consumers’ ability to make informed choices.”239 Central to the FTC’s new unfairness standard is the notion that free markets operate in the consumer’s best interests, making regulatory intervention appropriate only when there is a clearly identifiable “substantial consumer injury caused by [a] market failure.”240 Beales’s understanding reflects the neoliberal consensus widely shared by academics and regulators alike by the 2000s: that the government should not disrupt the market’s private ordering absent the occurrence of a market failure. Throughout the FTC’s exercise of its “unfairness” rulemaking powers, business associations and financial institutions frequently invoked the notion of “market failure” to challenge the validity of FTC rules in court.241

Crucially, courts do not possess the full knowledge and expertise needed to determine questions of economic policy. But, by enabling courts to act as regulators and overturn agencies’ decision-making, the “market failure” test transferred vital questions of economic trade-offs in consumer protection from fields of open democratic deliberation to enclosed legal institutions—a domain gate-kept by a class of legal professionals and allied business elites.242 As such, questions of market failure evolved into resource contests over who could hire the most sophisticated expert witnesses. Oftentimes, litigation over the evidentiary sufficiency of market failure became a legal battle between agencies and organized business interests, in which the voices of consumers and their advocates were either watered down or absent.

In sum, neoliberalism has reshaped both the goal and the substance of consumer financial protection. Whatever consumer financial protection used to be, it is now principally concerned with the protection of free markets and consumer autonomy. In this neoliberal transformation, each branch of the federal government played a complementary role: Congress laid the legal foundations by creating an individual rights model of credit regulation; the agencies tied their own hands by adopting cost-benefit analysis and the market failure test; and the courts, through judicial review, disciplined agencies for venturing beyond the unspoken neoliberal norm. Collectively, this system produced a neoliberal consensus whereby all problems arising from credit markets—whether the results of individual conduct or of social processes—were approached as if they were outcomes of individual choice. This system represents the institutional equilibrium that our lawmakers, judges, and regulators settled upon to entrench and stabilize business interests amidst the changing credit distribution landscape from the 1970s to the 2000s.

III. Beyond Neoliberalism: Critique of Current Proposals

This section focuses on the ways in which the most prevalent proposals for legal reform of credit underwriting ignore the relational aspects of algorithmic harm. With some variations, most proposals advocate for (1) enabling regulatory inspection of the algorithmic inputs used in AI credit models by means of mandatory disclosure, or (2) delegating the regulatory burden to private markets by fostering technological entrepreneurship and investment in “RegTech” solutions.243

What these proposals have in common is that they treat algorithmic harm as the outcome of discrete individual acts, or practices of individual creditors, divorced from the context and social relations through which such harms are produced. While each proposal addresses a particular dimension of algorithmic injustice, none challenges the flawed assumptions of individual responsibility—a model of credit governance that has been deeply entrenched in the regulatory consciousness since the 1970s. Existing proposals are, by and large, progeny of the neoliberal consensus. Most continue to draw extensively from the neoliberal rulebook—that is, to restore perfect markets and rational market agents through disclosure and the removal of choice constraints. These proposals see public regulation only as a complement to, rather than a substitute for, the market’s private ordering. But, as the following paragraphs will show, such efforts tend to miss the target because they fail to recognize that a significant portion of algorithmic harm is generated by unjust relations between creditors and consumers in AI-mediated markets.

The Futility of Algorithmic Input Scrutiny

The dominant approach to AI governance in consumer credit is to enhance regulatory visibility into how algorithmic inputs—i.e., raw consumer data—are processed by AI models in the credit underwriting process. To implement this approach, proponents of input scrutiny argue that regulators should demand that creditors and data aggregators disclose AI training data, computational formulas, and software source code to federal agencies by regulatory fiat.244 Data transparency, they contend, would help regulators better identify discriminatory practices and patterns and hold creditors accountable under existing fair lending laws. In this regard, input scrutiny shares the goals of most existing disclosure mandates: (1) enhancing price transparency;245 (2) facilitating informed consumer choice by creating the infrastructure for fair market competition and cost comparison;246 and (3) nudging consumer choice towards welfare-optimizing financial products.247 From the proponents’ point of view, the AI-mediated credit market is sufficiently opaque and unfair that even the most devout neoliberals should find the present conditions to be a “market failure” justifying regulatory intervention.

The algorithmic input scrutiny proposal presents two obvious advantages. First, this approach can easily fit into the existing notice-and-consent frameworks of fair lending. For instance, under Regulation B (implementing the ECOA), creditors taking an adverse action against a loan applicant are required to deliver to the applicant a notification in writing containing “a statement of specific reasons” for the adverse action “within 30 days” after taking such action.248 If this notice requirement is not followed, the creditor is deemed to have violated ECOA (a strict liability regime). If implemented, the input scrutiny mandate may phase out the use of “black-box” AI models in lending decision-making.249 Creditors seeking to comply with ECOA’s adverse action notice requirements will be incentivized to adopt “white-box”250 AI models to underwrite consumer credit.251
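To make concrete why interpretable models ease compliance with the adverse action notice requirement, consider the following minimal sketch. It assumes a hypothetical logistic-regression scorecard trained on synthetic data; the feature names, approval threshold, and attribution method are illustrative assumptions, not a description of any creditor's actual underwriting system.

```python
# Minimal sketch: a "white-box" scorecard generating ECOA-style adverse
# action reasons. All features, data, and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["credit_utilization", "payment_history", "income", "loan_amount"]

# Synthetic training data: 1,000 applicants with binary repayment outcomes.
X = rng.normal(size=(1000, len(features)))
true_w = np.array([-1.5, 2.0, 1.0, -0.5])  # stipulated "ground truth" weights
y = (X @ true_w + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def adverse_action_reasons(applicant, top_k=2):
    """Rank features by how far they pushed this score below the average."""
    contributions = model.coef_[0] * (applicant - X.mean(axis=0))
    worst = np.argsort(contributions)[:top_k]  # most negative contributions
    return [features[i] for i in worst]

applicant = np.array([2.0, -1.5, -0.5, 1.0])  # a hypothetical denied applicant
if model.predict_proba(applicant.reshape(1, -1))[0, 1] < 0.5:
    print("Denied. Principal reasons:", adverse_action_reasons(applicant))
```

Because every feature's contribution to the score is directly legible from the model's coefficients, a "statement of specific reasons" falls out of the model itself; a black-box model would instead require a separate, and often contested, post-hoc explanation layer.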

Second, enhancing algorithmic input scrutiny aligns with the current regulatory agenda pushing for more individualist, dignitarian data privacy reforms. In March 2023, the CFPB promulgated a final rule252 compelling creditors to share with consumers any data they have collected about them.253 Any potential input scrutiny rulemaking can build on this existing legal infrastructure of financial data sharing.

Despite its alignment with existing regulatory agendas, the input scrutiny approach fails to meaningfully account for either informational or decisional harms stemming from unjust data relations. Its push for dignitarian reform distracts us from the real source of algorithmic harm, which lies in creditors’ informational control over horizontal and vertical data flows. If the material underpinnings of unjust data relations remain unchanged, it is questionable whether more data transparency could lead to meaningful consumer choice and autonomy.

The input scrutiny approach also fails to address the problem of AI proxy discrimination. Even without race or gender inputs, an AI model can still engage in price discrimination because it draws indirect, unsupervised inferences from engineered data and sources that reflect the preexisting socioeconomic inequalities embedded in its training data.254 This occurs because AI makes decisions by replicating and reinforcing human bias.255 The AppleCard, for instance, recently drew intense criticism when a male applicant complained that he received a line of credit twenty times higher than that offered to his spouse, even though the two filed joint tax returns, lived in the same community, and owned the same property.256 Goldman Sachs, the issuer of AppleCard, responded to the complaint by stating that it could not have discriminated against her because its algorithm “doesn’t even use gender as an input.”257 Goldman’s response obscures the reality that gender-blind algorithms can still be biased against women if they draw statistical inferences from inputs that happen to correlate with gender, such as purchase history and credit utilization.258 Although the New York State Department of Financial Services subsequently investigated Goldman’s credit card practices, it concluded that Goldman did not violate its fair lending obligations under the ECOA because it “did not consider prohibited characteristics.”259 The AppleCard episode challenges the notion that removing suspect algorithmic inputs indicating consumers’ protected characteristics can eliminate AI bias. More importantly, the failure of algorithmic input scrutiny to eliminate AI bias calls into question the effectiveness of the colorblind approach of the ECOA and FHA to equal credit access protection.260
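The mechanics of proxy discrimination can be made concrete with a short simulation. In the following sketch, a model is trained without any gender input, yet its scores diverge by gender because the two features it does see are constructed to correlate with gender. All variable names, distributions, and correlations are synthetic assumptions for illustration only.

```python
# Minimal sketch of proxy discrimination: a gender-blind model still
# produces gender-skewed scores when its inputs correlate with gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000
gender = rng.integers(0, 2, size=n)  # 0/1; never shown to the model

# Proxy features whose distributions differ by gender, mimicking patterns
# such as purchase history and credit utilization.
purchase_history = rng.normal(loc=1.0 * gender, scale=1.0, size=n)
utilization = rng.normal(loc=-0.8 * gender, scale=1.0, size=n)
X = np.column_stack([purchase_history, utilization])

# Historical outcomes encode the same bias that the proxies carry.
y = (purchase_history - utilization + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)  # gender-blind by design
scores = model.predict_proba(X)[:, 1]

print("mean score, group 0:", round(scores[gender == 0].mean(), 3))
print("mean score, group 1:", round(scores[gender == 1].mean(), 3))
# The group gap persists even though gender never appears as an input.
```

The point of the sketch is not that any particular creditor's model works this way, but that removing the protected attribute from the input space does nothing to remove the statistical information about that attribute carried by correlated features.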

The Illusory Promises of “RegTech”

The emergence of “RegTech”261—i.e., information technologies used by financial institutions to address the challenges posed by FinTech and ensure regulatory compliance—presents an alternative to the top-down regulatory initiatives discussed earlier. In general, RegTech encompasses a wide range of technological solutions, including those used to detect and prevent financial fraud, safeguard consumer data, optimize asset-liability management, monitor for money laundering, and automate tax and financial reporting.262

At its core, RegTech promises to safeguard equal credit access protection by tapping the strength of competitive financial markets to self-correct, adapt, and innovate.263 Proponents of RegTech argue that, by investing in information technologies that regulate AI, the market can solve its own problems through entrepreneurship and innovation—i.e., “pure” market processes untainted by regulatory paternalism. Proponents also envision RegTech as the perfect solution for balancing free markets against market-generated injustices, a pathway for financial institutions to redeem themselves. In an era of congressional gridlock and legislative inaction, RegTech presents an attractive “third way” that echoes existing calls for corporate social responsibility.264 Essentially, the RegTech proposal seeks to reinvent the neoliberal consensus through technology: financial institutions, by adopting RegTech to keep AI in check, can help the credit market cleanse its own imperfections through private ordering.

But the promise of RegTech is illusory because, without changing the material conditions of exploitation that currently undergird unjust data relations, it is doubtful whether RegTech can meaningfully empower consumers against creditors. In fact, the opposite is more likely to be true. We are currently witnessing a wave of RegTech and FinTech acquisitions by some of the largest financial intermediaries. In June 2020, payments giant Mastercard acquired Finicity, one of the leading data aggregators in the U.S.265 Mastercard’s competitor, Visa, sought to acquire Plaid, another leading data aggregator.266 Similarly, banks have tried to control and internalize the process of data aggregation by pushing data aggregators to sign bilateral agreements governing their collection and transmission of consumer data from the banks’ platforms.267 As of September 2020, Wells Fargo had signed seventeen such agreements with data aggregators, governing “ninety-nine percent of the information being collected from its platforms for use by other financial institutions.”268 What this means is that RegTech, like FinTech, will further empower creditors against consumers. With RegTech incorporated into creditors’ business models, creditors will effectively gain control of the entire data production process—including data aggregation, processing, distribution, and explanation.

RegTech therefore exhibits a symptom common to most neoliberal responses to social problems: the belief that the market is disconnected from social relations, and that technological problems in the market can be self-contained and resolved by technology alone. Proponents of RegTech have articulated a flawed vision of market internalism,269 namely that all problems stemming from markets can be solved by the markets themselves. On a technical level, the RegTech movement has embraced a similarly flawed vision of technology—that all problems stemming from technology are self-containable through the development of new technologies.270 But the movement fails to recognize that neither markets nor technologies can be dis-embedded from the social relations that constitute them. By ignoring the unjust social conditions that give rise to the problems technologies are employed to solve, the RegTech and XAI movements have reframed those problems as outcomes of deviant individual conduct. As a result, the only viable solution they see is to use technologies to discipline recalcitrant creditors and facilitate compliance, and then to delegate enforcement to private markets. In this regard, RegTech has distracted us from the real source of algorithmic harm—the unjust market relations of data production that enable AI technologies to be used for commodification and exploitation.

IV. Towards Propertarian Reform: Alternative Pathways

So far, my analysis has largely centered on the dimensions of algorithmic exploitation in AI-mediated credit markets and how current proposals informed by neoliberal ideals fail to meaningfully address the risks of algorithmic exploitation. A lingering question is how to move forward.

As the last five decades of intensifying poverty and systemic credit inequality have shown, neoliberalism has failed to deliver on its promise of meaningful equal credit access protection. Its failures are even more salient today, in the age of informational capitalism, as AI has exposed the limits of regulation premised on free markets and consumer autonomy. To remediate these flaws, this Part explores possibilities for legal reform through (1) reimagining the nature of data ownership, (2) creating a collective intellectual property right in data, and (3) building a collective data governance infrastructure anchored in the open digital commons.

Why Collective Propertarian Data Governance?

By “propertarian reform,” I do not mean to limit the discussion to private property rights. Instead, I refer to a panoply of property-related reforms that vests legal entitlement in the ownership of things rather than of self. This includes variations of common property, such as common pool governance, collective property, and joint ownership. As Salomé Viljoen has pointed out, thinking of data governance only in narrow dichotomous terms—“propertarian” versus “dignitarian”—constrains our imagination of what is possible.271 The move to understand data in relational terms rejects the notion that individualist solutions are the only possibility for meaningful reform.

This article imagines collective data ownership as an alternative pathway to data governance. While individual data ownership helps rearrange unjust social relations of data production, circulation, and retainment within vertical systems of informational control, collective data ownership addresses horizontal relations.272 Collective data ownership also rebalances the power disparities between the owners/users of AI (creditors) and the subjects of AI (consumers) on both vertical and horizontal dimensions. Since data is the most valuable and vital input for AI systems, changing the legal foundations of data ownership will impact the occurrence of algorithmic informational and decisional harms.

In the context of consumer credit, granting consumers some form of property entitlement to data can radically reshape existing relations of data aggregation and reorient the direction of power along the chains of data supply. For instance, if consumers are granted full property ownership over the data generated through their online activities—including the rights to possess, control, manage, use, enjoy, dispose of, and sell273—then data aggregators and brokers will need to purchase from consumers a right to access consumer data in order to conduct their business. Admittedly, full data ownership may chill the speed and efficiency of data circulation, since it breaks down the economies of scale already formed between data aggregators and creditors. But full data ownership can also redirect power from creditors to consumers by incentivizing the market to invest in consumer-empowering FinTech and pushing data aggregators to disentangle from creditors. Even from a dignitarian standpoint, granting consumers a right to exclude others from accessing their data—anchored in the notion of personal dominion and sovereignty over things—can prevent the erosion of privacy and autonomy.274 A propertarian data governance reform that transforms the material underpinnings of data production can protect consumer autonomy better than any neoliberal regulation.

Alternatively, formalizing a partial property ownership of data can also reshape data relations, albeit with less radical restructuring effects on the credit market. For example, conceptualizing data ownership as an asset or an entitlement to income can reduce consumers’ chronic dependence on unjust data relations for access to the means of basic economic subsistence. Under an income-entitlement regime, data aggregators may not need explicit consumer consent to harvest and sell data to creditors, but consumers will be entitled to a “data dividend” from the wealth generated by data usage.275 While this approach to propertarian data governance might not break up existing bonds between data aggregators and creditors, it can certainly provide a wealth cushion that alleviates the burdens on low-income consumers and reduces credit inequality.276
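For illustration, the arithmetic of a pro-rata data dividend might look like the following minimal sketch. The revenue figure, the dividend rate, and the record counts used as a proxy for each consumer's contribution are all stipulated assumptions rather than features of any existing proposal.

```python
# Minimal sketch of a pro-rata "data dividend." All figures are hypothetical.
data_revenue = 1_000_000.00  # creditor revenue attributed to data usage
dividend_rate = 0.10         # stipulated share of that revenue paid back out

# Records contributed per consumer (a crude proxy for contribution).
contributions = {"alice": 1200, "bob": 300, "carol": 500}

pool = data_revenue * dividend_rate
total = sum(contributions.values())
dividends = {name: pool * n / total for name, n in contributions.items()}

for name, amount in dividends.items():
    print(f"{name}: ${amount:,.2f}")
# alice: $60,000.00; bob: $15,000.00; carol: $25,000.00
```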

In contrast to an individualist or dignitarian approach, a propertarian approach to data governance reform can remediate unjust relations of data production and circulation—the root causes of algorithmic harm. Whether in full or partial form, formalizing a property right to data can provide consumers a means to regain control over the processes and fruits of AI’s atomization of consumer selfhood. However, to say that we should embrace a propertarian reform does not suggest that dignitarian interests in data are unimportant, or that individual rights do not matter. Individual autonomy, dignity, and integrity do matter—and, as the Introduction and Part I of this article have illustrated, they are embedded in the purpose of equal credit access protection. But a propertarian approach can protect these interests as well. A propertarian reform can also address systemic inequalities that have been ignored by the dignitarian approach for far too long.

Recommendations for Reshaping Unjust Data Relations

Of course, no legal reform is ever perfect—not even a radical restructuring of the market through consumer data ownership. While a propertarian framework for data governance can help us directly address the root causes of algorithmic harm in ways that no individualist or dignitarian regime can, it is important to recognize that there is no silver bullet for our present problems.277 Ultimately, whether we should opt for full or partial data ownership (and, if we opt for partial ownership, which sticks within the bundle of rights to prioritize) involves a trade-off between the thoroughness and the administrability of legal reform that should be weighed in light of current social priorities. That trade-off should be the subject of democratic, public, and open deliberation—a policy choice that lies beyond the scope of this article. Nevertheless, there are concrete steps we can take to remove the distractions obstructing our view of what is possible. The following paragraphs illuminate what a thorough propertarian reform to reshape unjust market relationships will likely require.

Reimagining the Nature of Data Ownership

Any propertarian reform must first address a threshold question: what does it mean to say someone owns data?278 Currently, several analogies are being deployed to make sense of data ownership: data as oil, as personhood, as salvage, and as labor.279 Each time a “data-as” analogy is proposed, the proponent is suggesting that data should be regulated the same way the other thing is currently governed. The logic of each “data-as” analogy is as follows: First, it makes an analytical claim about what makes data valuable. Second, by identifying what makes it valuable, the analogy makes a normative judgment about who should own the data. Third, to implement the normative ideal, the analogy makes a legal claim about what rights, duties, and powers should be established to buttress its particular vision of data ownership.280

(i) Data Is Not Oil: The most common legal analogy is that data is just like oil, or any depletable natural resource. This concept was popularized by British mathematician Clive Humby, who declared in 2006 that “data is the new oil.”281 What Humby meant is that data, like oil, is valueless and useless in its raw state; to generate value, data needs to be refined, processed, and turned into something else—the value of data lies in its potential.282 But “data-as-oil” fails as a legal analogy. Unlike oil, data can be infinitely supplied by its producers. It is continually updated by the consumer’s daily engagement with the credit system, whether direct (e.g., applying for loans) or indirect (e.g., supplying credit information). In that sense, data is not like oil: oil is relatively scarce, fungible, and rivalrous in consumption, whereas data is abundant, non-fungible, and non-rivalrous.283 This challenges a central claim that many businesses have advanced in their legal battles over ownership of consumer data: that unprocessed data is merely raw material floating freely in the natural domain, readily available for economic appropriation.

(ii) Data Is Not Personhood: A competing analogy, anchored in dignitarian concepts of personal sovereignty, sees data as imprints of human expression in cyberspace.284 Whereas “data-as-oil” views data as extracted from the natural domain, “data-as-personhood” views data as emanating from human subjectivity.285 Under this theory, data is an extension of the self, an aspect of individual integrity and autonomy that is immune from appropriation (or expropriation). This analogy encourages us to think of data as not being owned at all. It urges legislators and policymakers to completely de-commodify access to data and make it unavailable to all market actors. But this legal analogy is flawed for two reasons. First, the analogy conflates the purpose and the outcome of individual expression. While it is true that people express their personal desires, anxieties, thoughts, and lived experiences through communications in the digital medium, data is merely a byproduct of that expression. People do not engage with cyberspace for the purpose of producing data. Second, the analogy fails to recognize that people have more than a dignitarian interest in data. However uncomfortable it may be, data does have commercial value, and, if given the opportunity, many would trade their dignitarian interests for material benefit. Thus, the more sensible approach is to accommodate both dignitarian and propertarian interests by having consumers retain a portion of the wealth created through the commercialization of data.

(iii) Data Is Not Salvage: “Salvage” is defined as “a rescue of endangered property.”286 In maritime law, a “salvage award” compensates those who have rescued property lost at sea.287 In finance, “salvage value” describes the remaining value that someone should receive after disposing of an asset that has exhausted its useful life.288 Common to both is the idea that whoever rescues an imperiled property from waste should be entitled to the value of the labor they invested in saving a property that would have perished but for that labor. In data governance, the “data-as-salvage” analogy echoes the sentiment that data miners and processors should be compensated for turning data into marketable outputs.289 However, this analogy is also flawed because it fails to recognize that data is collectively generated. There is no doubt that data miners and processors have “mix[ed] their labor” in generating marketable data.290 But to say that data miners “saved” data from an “imperiled state” and turned it into something useful is to grossly overstate their contribution to data production. Let us not forget that each cog in the chain of data production—consumers, data aggregators, miners, distributors, and financial intermediaries—has materially contributed to the process. Remove any single actor from the chain, and the data would not be marketable.

(iv) Data Is Not Labor: Among the pantheon of analogies, the “data-as-labor” analogy is the most promising. At its core, this analogy aims to distribute the fruits of data production according to the proportion of labor invested by each actor on the data production chain.291 Under this framework, consumers, data miners, and aggregators would each be entitled to compensation for the “wage labor” they invested in producing the data. This analogy strikes a balance between protecting both dignitarian and propertarian interests in data. It recognizes that, while people do express personhood value in the production of data, they will readily trade it for material benefit when given the opportunity. The “data-as-labor” analogy has also garnered much academic support. Glen Weyl and Eric Posner have introduced a proposal called Radical Markets, which “seeks to introduce a labor market for data.”292 In doing so, they aim to uproot the unjust foundations of data production, upon which the uncompensated fruits of “data laborers” are “distributed to a small number of wealthy savants rather than to the masses.”293 But there are still reasons to be skeptical of this analogy. First, if wage labor is pegged to the value that each actor has invested in the production of data, then the distribution of wealth will be inherently unequal. Producers located at the lower end of the data value chain (i.e., consumers responsible for data provision) will be minimally compensated, while producers located at the higher end of the chain (i.e., data processors responsible for data repackaging and refinement) will retain most of the economic surplus. Second, “data-as-labor” does not account for market externalities. Crucially, markets and market prices are not neutral conduits for inherent value. While the market may be able to account for individualized value within the vertical relations of data production, it cannot account for the aggregate costs imposed on horizontal flows of data. The analogy’s key omission is its assumption that markets are dis-embedded.

Creating a Collective Intellectual Property Right in Data

(i) Data as Collectively Generated Patterns: If data is not oil, personhood, salvage, or labor, then what is it? Mathias Risse conceptualizes data as collectively generated patterns.294 The idea is that the value of data “does not consist in individual items but in the emerging patterns.”295 Data is valuable not only to those who provide data within the vertical relations of data production, but also to people situated in the horizontal relations of data flow, circulation, and distribution.296

The proposal that data consists of collectively generated patterns differs from other “data-as” proposals in that it is not an ontological claim about what data is or ought to be.297 It is a purely descriptive and pragmatic claim about how data currently fits into the existing “human practices of assigning commercial value to entities.”298 From a descriptive lens, data is a microcosm of vast social networks that are continually adapted, updated, and reflected by those who generate, use, and consume data for economic means.299 Thinking of data in relational rather than ontological terms helps us detect the blind spots of each aforementioned analogy.

From a legal standpoint, understanding data as collectively generated patterns opens new possibilities for restructuring currently unjust data relations. If we accept the fluidity and amorphousness of data, then we can design a legal system that directly protects data subjects—consumers and platform users—in their access to and engagement with other sources of data production. Thinking of data in fluid terms thus enables us to formulate a collective property right in data deriving from the management of social relations. For instance, we can imagine a membership-based joint tenancy or co-ownership of data that places the onus of data management on the community. Another possibility is to grant consumers a right to access, control, and withdraw personal data from the digital commons, without granting a right to exclude. These propertarian reforms do not require analogizing data to already-existing things. Instead, they allow us to accept data as it is: sui generis.
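As a rough illustration of this bundle of rights, the following sketch models a data commons whose members may access the pooled data and withdraw their own records, but hold no right to exclude one another. The class and method names are hypothetical, and the design is only one of many ways such a commons could be implemented.

```python
# Minimal sketch of co-ownership without a right to exclude: any member may
# access the pooled data; each member may withdraw only their own records.
from dataclasses import dataclass, field

@dataclass
class DataCommons:
    members: set[str] = field(default_factory=set)
    records: dict[str, list[str]] = field(default_factory=dict)  # owner -> data

    def contribute(self, member: str, datum: str) -> None:
        self.members.add(member)
        self.records.setdefault(member, []).append(datum)

    def access(self, requester: str) -> list[str]:
        # Access is non-exclusive: every member sees the pooled records,
        # and no individual owner can veto another member's access.
        if requester not in self.members:
            raise PermissionError("membership required")
        return [d for data in self.records.values() for d in data]

    def withdraw(self, member: str) -> None:
        # A member may pull their own records out of the commons at any time.
        self.records.pop(member, None)

commons = DataCommons()
commons.contribute("alice", "txn:groceries")
commons.contribute("bob", "txn:rent")
print(commons.access("alice"))  # both records: access is non-exclusive
commons.withdraw("bob")
print(commons.access("alice"))  # bob's records are gone after withdrawal
```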

Here, it is important to note that the concept of collective property rights in data does not repudiate the notion that individuals have important dignitarian interests in data. But it does repudiate the idea that individual dignitarian interests are the only interests that matter to data governance law. It also rejects the notion that any interest in data is reducible to individual dignitarian interests. The fetishization of individualism, autonomy, and dignity is part and parcel of neoliberalism’s reduction of complex social problems to outcomes of individual choice, as well as its legitimization of a systematic program of governmental divestment from public goods. By liberating ourselves from the intellectual constraints of neoliberalism, we can envision new propertarian reforms for data governance and directly address the root causes of algorithmic harm.

(ii) Where Data Meets Intellectual Property: Once we recognize that the value of data lies in its circulation and compilation as collectively generated patterns, the next step is to conceptualize alternative forms of legal ownership that capture that value for the benefit of consumers. This is where data governance intersects with intellectual property (IP). Although conventional legal scholarship often associates IP with individualist propertarian solutions,300 this subsection investigates ways in which new developments in intellectual property rights (IPR) protection outside the U.S. can offer powerful insights for collectivist propertarian reform.301 Currently, copyright law protects data only in the narrow context of individual original authorship. In the U.S., copyright protection applies to data produced in connection with a creative activity or embedded in a creative expression.302 Raw data is uncopyrightable because courts consider it to comprise mere facts that are “discovered,” rather than “created,” under the existing copyright regime.303 Processed data, such as compilations of data in algorithmic or automated databases, may be copyrightable as “literary works” under section 102 of the Copyright Act.304 Such data is copyrightable only if its arrangement or compilation is sufficiently creative to amount to original authorship.305

However, the problem with traditional IPR solutions is that they tend to reinforce, rather than redress, existing power inequalities in the value chain. Consumers have little control over the production and trade of consumer-generated data, despite being the ones subjected to the resulting information systems.

Fortunately, the U.S. can draw lessons from legal experiments with database protection in other jurisdictions. For instance, the EU has created a sui generis legal protection for databases that are not covered by copyright.306 Protection under the EU sui generis database right is not contingent on originality, creativity, novelty, or even commercial value.307 Instead, any “maker” who takes the initiative to obtain, verify, or present the contents of the database and assumes its underlying risks is afforded property protection.308 Anyone who makes a “substantial investment” in the above can also become a rightsholder of the database.309 This broad definition of “maker” enables any collective or organization to claim direct or derivative rights in the database.

While the EU’s sui generis database right is certainly not perfect, the U.S. can learn from the EU’s successes and avoid its mistakes. To avoid the risk that over-protection impedes the free flow of data,310 the U.S. should create a two-tiered database protection system that distinguishes between original and derivative data compilations.311 For example, original databases could continue to be protected under the copyright regime, while derivative databases could be protected under the sui generis right. To ensure that database protection does not devolve into a tragedy of the anti-commons, the sui generis database right should be accompanied by legal mechanisms ensuring the free flow of information—such as restricting the sui generis database owner’s right to exclude while retaining her rights of enjoyment. Additionally, the legislation could set up sub-hierarchies of database rights within the sui generis legal conception by distinguishing between the “makers” of derivative data compilations and rightsholders who merely “take substantial investment” in the preparation of derivative databases. These proposals are by no means exhaustive, but they can expand our imagination of possible legal reforms.

Building the Infrastructure for Open Digital Commons

This subsection considers what information infrastructures can be built to make the collective property right in data meaningfully enforceable.312 In line with existing legal scholarship on the digital public domain, this subsection treats the creation of a digital commons as the foundation for any meaningful exercise of a non-exclusive right to access, use, and withdraw data.313 To implement this concept, this subsection illustrates steps to ensure that the digital commons remains open and common—meaning that it will neither regress into “tragedies of the commons”314 nor devolve into “tragedies of the anti-commons.”315

(i) The Public Data Trust Option: To preserve the openness and commonality of the digital economy, we must resist and reverse the privatization of consumer data by creditors. One possibility is to develop an open database like the Human Genome Project.316 Another is to establish a national data trust for the public good, under the supervision of an independent public-data management authority.317 We can also draw inspiration from other countries: the UK and Canada have explored national data trusts as a means of governing citizen data and regulating business access to it.318 A public data trust would allow individuals, communities, and organizations to grant rights of control over, and access to, their data to entrusted entities that manage the data for their benefit.319 This would turn data intermediaries into data fiduciaries, subjecting them to heightened duties of data stewardship.

(ii) The Public Utilities Option: An alternative solution is to build on existing informational infrastructures of credit data collection and distribution. The three largest National Credit Reporting Agencies (NCRAs)—Equifax, TransUnion, and Experian—have already amassed vast volumes of consumer data for credit reporting.320 NCRAs have also developed extensive networks of data supply through business partnerships with FinTech companies and data aggregators.321 One way to create a collective propertarian data infrastructure is to regulate NCRAs as public utilities—the same way that natural gas, electric power, cable, telecommunications, and water companies are governed.322 In the common law tradition, courts developed the public utility doctrine to ensure that industries providing goods and services essential to the public offer them “under rates and practices that [are] just, reasonable, and non-discriminatory.”323 Industries that qualify as public utilities typically meet two conditions: they are considered “natural monopolies”324 and are “affected with a public interest.”325 Today, NCRAs and other credit data platforms already satisfy the two conditions that have historically triggered public utility recognition. As public utilities, they would have affirmative obligations to the public to provide open data access, non-discrimination, and universal service. This “ensure[s] collective, social control over vital private industries that provide[] foundational goods and services on which the rest of the society depends.”326

(iii) Collective Social Governance of Data: Whether we select the public trust or the public utilities option, governing data as open commons invites an additional challenge: how do we make data as openly accessible as possible while still limiting access to data with the potential to do harm? Admittedly, not all data is appropriate for open public access.327 Restriction is warranted for data that contains sensitive personal information or otherwise carries the potential for intentional or accidental misuse.328 Leakage of certain data can also pose security risks.

Establishing a legal infrastructure for the collective social governance of data can remediate unjust data relations without compromising people’s privacy and security interests in data. One way to achieve this is to vest the power of data management in the hands of consumer communities while granting data access to an independent, entrusted entity acting in the public interest.329 The EU has considered a similar proposal that would allow public authorities to access data where doing so is “in the ‘general interest’ and would considerably improve the functioning of the public sector.”330 This proposal follows the logic of the 2016 French Digital Republic Act.331 In the U.S., statistical agencies, census bureaus, and the Library of Congress have likewise established professional expertise in managing data for the public good while adhering to strict public-purpose limitations and high confidentiality standards.332 These existing forms of public data management can serve as models for collective social data governance.

Conclusion

Over the past half century, neoliberalism has entrenched a regulatory paradigm that saw social problems as outcomes of individual choice and treated free markets and consumer autonomy as the panacea for market injustices. These twin ideals of neoliberalism are ubiquitous in our laws governing the supply and distribution of credit. Instead of delivering meaningful credit access and equality, they have distracted us from the root problems: unjust market relations stemming from systemic social inequalities.

If the failures of the neoliberal ideals of free markets and consumer autonomy were once hidden, the ascendancy of AI has made them apparent. AI situates the vast majority of consumers within systems of informational control in which market price-signals are engineered and consent is manufactured. Within these digital environments, consumer data is ceaselessly harvested, extracted, refined, and repackaged into marketable products, enabling the exploitation of consumers through microtargeting and price discrimination. Yet existing proposals for AI governance, informed by neoliberalism, continue to cast these problems as outcomes of imperfect markets and individual choice. They obscure the true source of algorithmic harm—unjust market relations of data production, circulation, and control that entrench and reproduce systemic inequalities.

Moving beyond neoliberalism and recognizing algorithmic harm as both individually and socially constituted can help us imagine new ways to address the root causes of systemic credit inequality. A purely dignitarian reform of data governance that addresses only individual harm is bound to be incomplete. To fundamentally reshape the unjust social relations that underpin AI exploitation and to build a just credit market, we need a collective propertarian reform. To realize this possibility, we must reimagine the nature of data ownership as collectively generated and relational, conceptualize a collective IPR in data, and construct an alternative information infrastructure that governs data as open commons.


Footnotes

*J.D., 2023, Harvard Law School; A.M., 2020, Harvard Graduate School of Arts and Sciences. I am particularly indebted to Professor Yochai Benkler for his meticulous guidance and input on this article. I would also like to express my utmost gratitude to Valentina Liu, who not only provided invaluable insights but also selflessly offered editing, brainstorming, and emotional support throughout my research process. In addition, I would like to thank Lucy Huang, Wenda Xiang, Jamie Pang, and students at the Law and Political Economy Workshop at Harvard Law School for providing generous support and comments. Finally, I would like to extend my gratitude to Ryan Huck, Joseph Salmaggi, Eric Thompson Jr., and the editorial team of the N.Y.U. Journal of Intellectual Property & Entertainment Law for devoting significant time and effort to publishing this article. All mistakes and omissions are the fault of the author.

  1. See generally Timothy P. R. Weaver, Market Privilege: The Place of Neoliberalism in American Political Development, 35 Stud. Am. Pol. Dev. 104 (2021) (describing neoliberalism as the guiding principle that has been increasingly reflected in U.S. policy ideas and institutional innovations).
  2. Credit underwriting is the process by which the creditor decides whether an applicant is creditworthy and should receive a loan through risk-based assessment. For further explanations, see discussion infra Part B nn.47–48.
  3. See Taylor C. Boas & Jordan Gans-Morse, Neoliberalism: From New Liberal Philosophy to Anti-Liberal Slogan, 44 Stud. Comp. Int’l Dev. 137, 143 (2009) (describing three sets of economic policies that scholars characterize as neoliberal: those that “eliminat[e] price controls” and “deregulat[e] markets;” those that “reduce the role of the state in the economy;” and those that “contribute to fiscal austerity and macroeconomic stabilization.”).
  4. See Robert H. Lande, Market Power Without a Large Market Share: The Role of Imperfect Information and Other “Consumer Protection” Market Failures, (Am. Antitrust Inst., Working Paper No. 07-06, 2007), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1103613 [https://perma.cc/4SN6-4HV9].
  5. See, e.g., Quentin Andre et al., Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big Data, 5 Customer Needs and Sols. 28, 37 (2018); Donna J. Hill & Maryon F. King, Preserving Consumer Autonomy in an Interactive Informational Environment Toward Development of a Consumer Decision Aid Model, 16 Advances in Consumer Rsch. 144 (1989); Klaus Wertenbroch et al., Autonomy in Consumer Choice, 31 Mktg. Letters 429, 439 (2020).
  6. 15 U.S.C. §§ 1601–1667f (2022).
  7. 15 U.S.C. §§ 1681–1681x (2022).
  8. See Anne Fleming, The Long History of “Truth in Lending”, 30 J. Pol’y Hist. 236, 237 (2018). The ideal of consumer autonomy is manifested by fair lending laws which aim to protect consumer choice and dignity in credit transactions. See Abbye Atkinson, Borrowing Equality, 120 Colum. L. Rev. 1403, 1420 (2020).
  9. See 15 U.S.C. §§ 1691–1691f (2022).
  10. See 42 U.S.C. §§ 3601–19, 3631 (2022).
  11. See generally Benjamin Eidelson, Respect, Individualism, and Colorblindness, 129 Yale L.J. 1600, 1600 (2020) (characterizing the Supreme Court’s approach to race and equal protection as both “colorblind” and “individualist”).
  12. See 15 U.S.C. § 1691a (2022); 42 U.S.C. §§ 3601, 3604 (2022).
  13. Both the ECOA and FHA prohibit disparate treatment based on protected characteristics (e.g., race, sex, marital status, age, alienage). But, under current case law, only the FHA prohibits disparate impact. See Tex. Dep’t of Hous. & Cmty. Affairs v. Inclusive Cmtys. Project, Inc., 576 U.S. 519, 546 (2015).
  14. See Stephen M. Rich, Equal Opportunity, Diversity, and Other Fables in Antidiscrimination Law, 93 Tex. L. Rev. 437, 444, 454 (2015) (reviewing Joseph Fishkin, Bottlenecks: A New Theory of Equal Opportunity (2014)) (arguing that enforcement of the disparate treatment doctrine embraces traditional equal opportunity ideals).
  15. See Loïc Wacquant, Punishing the Poor: The Neoliberal Government of Social Insecurity 1 (2009) (footnote omitted) (internal quotations omitted) (“Neoliberalism, [or] an ideological project and governmental practice mandating the submission to the free market and the celebration of individual responsibility in all realms.”).
  16. See, e.g., Susanne Soederberg, Debtfare States and the Poverty Industry: Money, Discipline and the Surplus Population 84–85 (2014) (“Consumer protection essentially forms the bedrock of the neoliberal move away from the collective and rights-based social and economic protection of workers toward monetised and individualised relations, as well as market-driven forms of citizenship whereby the state simply guarantees the formal equality of exchange.”).
  17. See generally Salomé Viljoen, Ferment Is Abroad: Techlash, Legal Institutions, and the Limits of Lawfulness, L. & Pol. Econ. Project (Apr. 20, 2021), https://lpeproject.org/blog/ferment-is-abroad-techlash-legal-institutions-and-the-limits-of-lawfulness/ [https://perma.cc/57QW-BR7E] (“Over the past several years, enthusiasm for Silicon Valley’s California Ideology as a source of hope and vigor for the Western capitalist imaginary has begun to fade.”).
  18. See, e.g., Yizhu Wang, Banks, Credit Unions Testing AI Models for Underwriting in Credit Cycle, S&P Glob. Mkt. Intel. (Oct. 10, 2023), https://www.spglobal.com/marketintelligence/en/news-insights/latest-news-headlines/banks-credit-unions-testing-ai-models-for-underwriting-in-credit-cycle-77559590 [https://perma.cc/D7BJ-4U54] (describing increasing use of AI by banks and credit unions for credit underwriting).
  19. Soederberg, supra note 16, at 84 (describing the importance of disclosure for consumer protection in the U.S. credit industry).
  20. Machine learning is a subset of AI that can “learn from data and improve its accuracy over time without being programmed to do so.” Janine S. Hiller, Fairness in the Eyes of the Beholder: AI; Fairness; and Alternative Credit Scoring, 123 W. Va. L. Rev. 907, 910 (2021) (alteration in original) (quoting Machine Learning, IBM (July 15, 2020), https://www.ibm.com/topics/machine-learning [https://perma.cc/UQ8C-94VR]).
  21. See Roger Brown, All That AI is ML But Not All That is AI is ML, Medium (Dec. 24, 2020), https://medium.com/nerd-for-tech/-95d38af2f9ea [https://perma.cc/L3AA-HYAJ].
  22. Unsupervised learning discovers hidden patterns or data groups without the need for human intervention or supervision. What is Unsupervised Learning?, IBM, https://www.ibm.com/topics/unsupervised-learning [https://perma.cc/3JJC-83KD].
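To make the preceding note’s definition concrete, the following minimal sketch (in Python, assuming scikit-learn; the features and figures are invented for illustration and are not drawn from any source cited in this article) shows an unsupervised algorithm discovering consumer segments without any labeled outcomes:

    # Minimal illustrative sketch: k-means clustering "discovers" consumer
    # segments from unlabeled data, with no human-supplied categories.
    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical features: [monthly card spending ($100s), late payments per year]
    consumers = np.array([
        [12.0, 0], [11.5, 1], [13.1, 0],   # one latent behavioral pattern
        [30.2, 6], [29.8, 7], [31.0, 5],   # another latent behavioral pattern
    ])

    # The algorithm partitions the consumers into two groups on its own.
    model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(consumers)
    print(model.labels_)  # e.g., [0 0 0 1 1 1]: two discovered segments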
  23. See Florian Perteneder, Understanding Black-Box ML Models with Explainable AI, Dynatrace Eng’g (Apr. 29, 2022), https://engineering.dynatrace.com/blog/understanding-black-box-ml-models-with-explainable-ai/ [https://perma.cc/PL7E-SU6N] (“[C]omplex models, such as Deep Neural Networks with thousands or even millions of parameters (weights), are considered black boxes because the model’s behavior cannot be comprehended, even when one is able to see its structure and weights.”).
  24. “Fringe data,” also known as “alternative data,” refers to unconventional consumer information that may be correlated with a consumer’s financial capacity but whose relevance is largely questionable. “Conventional data” refers to payment history, bank account balances, cash-flow data, and other formal credit information that directly concerns an individual’s financial capacity. The increasing use of fringe data by lenders raises accountability concerns. See generally Examining the Use of Alternative Data in Underwriting and Credit Scoring to Expand Access to Credit: Hearing Before the H. Comm. on Fin. Servs., 116th Cong. 7 (2019) (statement of Aaron Rieke, Managing Director, Upturn) (“Expansive data sets about people’s social connections, the kinds of websites they visit, where they shop, and how they talk do not have the simple, intuitive connection to each individual’s ability to repay a loan. These can yield blunt stereotypes that might be predictive, but for the wrong reasons.”).
  25. The credit reporting system is plagued by computer-generated inaccuracies and irrelevant, questionable information. See Brief for Center for Digital Democracy as Amicus Curiae Supporting Respondents at 5–14, Spokeo, Inc. v. Robins, 578 U.S. 330 (2016) (No. 13-1339), 2015 WL 5302538. Data brokers have access not only to public information, but also to private datapoints about consumers. They purchase personal data from companies and platforms that consumers do business with, combine the data with other information about the consumer, and sell repackaged data to credit underwriters and lenders. See id. at 10. For more information on the data brokerage industry, see Fed. Trade Comm’n, Data Brokers: A Call for Transparency and Accountability (2014), https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf [https://perma.cc/NQA5-SNXX].
  26. See, e.g., Robert Bartlett et al., Consumer-Lending Discrimination in the FinTech Era, 143 J. Fin. Econ. 30, 31 (2021).
  27. See generally Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power 11–12 (2019) (arguing that contemporary advances in digital information technologies have ushered in an era of “surveillance capitalism” which operates by transforming human experiences into behavioral data and enabling companies to not only predict but also shape consumer behavior at scale). AI perfects surveillance capitalism by making it easier for companies to shape consumer behavior and expectations through “social engineering.” See Stu Sjouwerman, How AI Is Changing Social Engineering Forever, Forbes (May 26, 2023), https://www.forbes.com/sites/forbestechcouncil/2023/05/26/how-ai-is-changing-social-engineering-forever/?sh=cadfcb8321b0 [https://perma.cc/6974-64NC] (“Social engineering is the art of manipulating, influencing, or deceiving users to gain control over a computer system.”). For example, AI can enable advanced forms of social engineering attacks, such as using large language models to conduct phishing and using generative AI to make deepfakes more realistic. Id.
  28. See, e.g., Robert Bartlett et al., Consumer-Lending Discrimination in the FinTech Era (Nat’l Bureau of Econ. Rsch., Working Paper No. 25943, 2019), https://www.nber.org/system/files/working_papers/w25943/w25943.pdf [https://perma.cc/35TM-4QJ6] (empirically discussing rent-extraction in algorithmic lending); see also Evgeny Morozov, Digital Socialism? The Calculation Debate in the Age of Big Data, 116/117 New Left Rev. 33, 35 (2019) (normatively discussing the implications of AI-powered rent-extraction for the free market pricing system in digital consumer markets) (“The [argument] that Big Data clogs the operation of the price system [has] also been made: some observers go as far as to claim that the price signals of today’s data-saturated markets, where venture capitalists, sovereign-wealth funds and deep-pocketed tech platforms subsidize services to the point where no one really knows what they cost, resemble those of the Soviet system in the years before its final breakdown.”).
  29. See generally Barry Schwartz, The Paradox of Choice: Why More Is Less (2004); David M. Grether & Louis L. Wilde, Consumer Choice and Information: New Experimental Evidence, 1 Info. Econ. & Pol’y 115 (1983).
  30. See generally Lorenz Goette et al., Information Overload and Confirmation Bias (Cambridge Working Papers in Econ., Paper No. 2020/06, 2019).
  31. See, e.g., Hao Zhang et al., Consumer Reactions to AI Design: Exploring Consumer Willingness to Pay for AI-Designed Products, 39 Psych. & Mktg. 2171, 2183 (2022); Ilker Koksal, Artificial Intelligence May Know You Better Than You Know Yourself, Forbes (Feb. 27, 2018), https://www.forbes.com/sites/ilkerkoksal/2018/02/27/artificial-intelligence-may-know-you-better-than-you-know-yourself/?sh=5714a2b4058a [https://perma.cc/BA3A-BRNR].
  32. See generally Kate Sablosky Elengold, Consumer Remedies for Civil Rights, 99 B.U. L. Rev. 587 (2019).
  33. Liability for a disparate impact violation under the ECOA hinges on whether the creditor has reasonably (objective standard) sought out less discriminatory alternatives to pursue legitimate business interests notwithstanding any harms inflicted on consumers. 12 C.F.R. § 202 (2023); Fed. Deposit Ins. Corp., Consumer Compliance Examination Manual IV-1.1 (2023), https://www.fdic.gov/resources/supervision-and-examinations/consumer-compliance-examination-manual/documents/compliance-examination-manual.pdf [https://perma.cc/9UPS-EA4V].
  34. See Soederberg, supra note 16, at 84 (“Based on economic assumptions of rational individualism, TILA was not designed to protect borrowers in terms of the price of the loan (e.g., interest rates and fees), but instead to ensure that they were given a ‘choice’ (freedom) among lenders.”). TILA relies on disclosure as a primary method to protect consumers. Specifically, TILA requires creditors to disclose all the specifics of a given loan to protect consumers. See id. Moreover, good faith compliance (subjective standard) shields creditors from civil liability under TILA. CFPB, Laws and Regulations: Truth in Lending Act 5 (2015), https://files.consumerfinance.gov/f/201503_cfpb_truth-in-lending-act.pdf [https://perma.cc/8MYE-7NJP].
  35. See Salomé Viljoen, A Relational Theory of Data Governance, 131 Yale L.J. 573, 628 (2021) (explaining how creditors can use data to shape interactions with all those “shar[ing] population features”).
  36. Machine learning is a way of training an algorithm. Whereas conventional knowledge-based algorithms are built on decision trees and programming instructions controlling how the algorithm should process data, machine learning algorithms are given a large set of data with minimal instructions. Human intervention is limited to selecting data inputs for training and labeling the data outputs. Ways to do this include decision-tree training, clustering, reinforcement training, and Bayesian networks. See Ignacio N. Cofone, Algorithmic Bias Is an Information Problem, 70 Hastings L.J. 1389, 1395 (2019).
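As a rough illustration of the workflow the preceding note describes, the sketch below (in Python, assuming scikit-learn; the features, data, and model are hypothetical and do not reproduce any lender’s or cited source’s method) trains a decision-tree ensemble from labeled repayment outcomes rather than hand-coded rules:

    # Minimal illustrative sketch: human intervention is limited to choosing
    # input features and labeling past outcomes; the decision rules are learned.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical training data: [income ($1,000s), debt-to-income %, years of credit history]
    X = np.array([[85, 10, 12], [32, 45, 2], [60, 22, 8], [28, 55, 1], [95, 8, 15]])
    y = np.array([0, 1, 0, 1, 0])  # labeled outcomes: 1 = defaulted, 0 = repaid

    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # The trained model then scores a new applicant without any programmed rule.
    applicant = np.array([[45, 35, 3]])
    print(model.predict_proba(applicant)[0, 1])  # estimated probability of default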
  37. See generally Yinan Liu & Talia Gillis, Machine Learning in the Underwriting of Consumer Loans 8-9 (Harvard L. Sch., Case Study CSP057, 2020).
  38. Cofone, supra note 36, at 1395.
  39. Id.
  40. See id.
  41. See Brown, supra note 21.
  42. See Jason Brownlee, Why Optimization is Important in Machine Learning, Mach. Learning Mastery (Oct. 12, 2021), https://machinelearningmastery.com/why-optimization-is-important-in-machine-learning/#:~:text=Function%20optimization%20is%20the%20reason,in%20a%20predictive%20modeling%20project. [https://perma.cc/64N6-R3NN].
  43. The Consumer Financial Protection Bureau is currently contemplating regulatory action against users of DL algorithms. See CFPB Acts to Protect the Public from Black-Box Credit Models Using Complex Algorithms, CFPB (May 26, 2022), https://www.consumerfinance.gov/about-us/newsroom/cfpb-acts-to-protect-the-public-from-black-box-credit-models-using-complex-algorithms/ [https://perma.cc/RB9X-RWTQ].
  44. See, e.g., Waddah Saeed & Christian Omlin, Explainable AI (XAI): A Systematic Meta-Survey of Current Challenges and Future Opportunities, 263 Knowledge-Based Sys., Mar. 2023, at 3 (“Black-box AI systems are being utilized in many areas of our daily lives, which could result[] in unacceptable decisions, especially those that may lead to legal effects. Thus, it poses a new challenge for the legislation.”).
  45. See discussion infra Part I.
  46. See generally Oren Bar-Gill, Cass R. Sunstein & Inbal Talgam-Cohen, Algorithmic Harm in Consumer Markets 19–23 (Harvard L. Sch., Discussion Paper No. 1091, 2023).
  47. See Viljoen, supra note 35, at 586.
  48. Cf. Colin Shearer, The CRISP-DM Model: The New Blueprint for Data Mining, 5 J. Data Warehousing 13 (2000) (describing one method of knowledge discovery).
  49. See, e.g., Tony Yiu, Understanding Random Forest: How the Algorithm Works and Why it Is So Effective, Towards Data Sci. (Jun. 12, 2019), https://towardsdatascience.com/understanding-random-forest-58381e0602d2 [https://perma.cc/G6Y2-BUCS]; see also Paul Wanyanga, Credit Scoring using Random Forest with Cross Validation, Medium (Feb. 5, 2021), https://medium.com/analytics-vidhya/credit-scoring-using-random-forest-with-cross-validation-1a70c45c1f31/ [https://perma.cc/SZX3-33EJ].
  50. See discussion infra Part I.
  51. See FDIC, Risk Management Examination Manual for Credit Card Activities 40 (2007), https://www.fdic.gov/regulations/examinations/credit_card/pdf_version/ch7.pdf [https://perma.cc/KB8S-U7SE]. If the creditor accepts the consumer’s application for a loan, then the creditor calculates an estimated price range for the risk-return tradeoff that would render the credit extension profitable. See Norman E. D’Amours, Nat’l Credit Union Admin., Risk-Based Lending (1999), https://www.ncua.gov/regulation-supervision/letters-credit-unions-other-guidance/risk-based-lending [https://perma.cc/D2YW-N7CJ].
  52. See generally FDIC, supra note 51, at 40.
  53. See, e.g., Lindsay Konsko & Bev O’Shea, Credit Score vs. Credit Report: What’s the Difference? NerdWallet, https://www.nerdwallet.com/article/finance/credit-score-vs-credit-report-whats-difference [https://perma.cc/2E8W-B667] (last updated Nov. 7, 2023) (“When you apply for a credit card, apartment rental, mortgage or car loan, two things help would-be lenders assess the likelihood that you’ll pay as agreed: your credit scores and your credit reports.”).
  54. See What Are Credit Scoring and Automated Underwriting?, Fed. Rsrv. Bank of St. Louis (Jan. 1, 1998), https://www.stlouisfed.org/publications/bridges/winter-1998/what-are-credit-scoring-and-automated-underwriting [https://perma.cc/QFS4-UYVM] (explaining how automated scoring is “poised to sweep through” credit underwriting, particularly to small businesses); see also How the World of Credit Scoring Has Changed Over the Past Decade, VantageScore (Jun. 24, 2020), https://www.vantagescore.com/newsletter/how-the-world-of-credit-scoring-has-changed-over-the-past-decade/ [https://perma.cc/7XVP-FC54].
  55. See, e.g., Bd. of Governors of the Fed. Rsrv. Sys., Report to Congress on Credit Scoring and Its Effects on the Availability and Affordability of Credit S-1 (2007), https://www.federalreserve.gov/boarddocs/rptcongress/creditscore/creditscore.pdf [https://perma.cc/D3YD-STD8].
  56. See Julapa Jagtiani & Catherine Lemieux, The Roles of Alternative Data and Machine Learning in Fintech Lending: Evidence from the LendingClub Consumer Platform 1 (Fed. Rsrv. Bank of Phila., Working Paper No. 18-15/R, 2019) (“The use of alternative data sources, big data and machine learning (ML) technology, and other complex artificial intelligence (AI) algorithms could also reduce the cost of making credit decisions …”).
  57. See generally Aite Group, Alternative Data Across the Loan Life Cycle: How Fintech and Other Lenders Use It and Why, Experian 7 (2018), https://www.experian.com/assets/consumer-information/reports/Experian_Aite_AltDataReport_Final_120418.pdf [https://perma.cc/KQ4V-DZNF].
  58. See, e.g., The Impact of Artificial Intelligence on Financial Inclusion, YData (Nov. 23, 2022), https://ydata.ai/resources/the-impact-of-artificial-intelligence-on-financial-inclusion [https://perma.cc/6DFH-C8F8] (“The use of AI can significantly assist the unbanked population to receive quality and unbiased financial services.”); Financial Inclusion in Banking Through Artificial Intelligence, PwC (Jan. 7, 2020), https://www.pwc.com/us/en/industries/financial-services/library/financial-inclusion-through-artificial-intelligence.html [https://perma.cc/4WDP-5SAU] (“AI [can] help provide affordable credit without sacrificing profitability.”).
  59. See FinRegLab, The Use of Machine Learning for Credit Underwriting: Market & Data Science Context 25 (2021), https://finreglab.org/wp-content/uploads/2021/09/The-Use-of-ML-for-Credit-Underwriting-Market-and-Data-Science-Context_09-16-2021.pdf [https://perma.cc/A29H-WX5F] (“Credit cards and unsecured personal loans (including point-of-sale loans) are the consumer finance asset classes in which the use of machine learning models to make credit decisions is most advanced.”).
  60. Id.
  61. See generally Becky Yerak, AI Helps Auto-Loan Company Handle Industry’s Trickiest Turn, Wall St. J. (Jan. 3, 2019), https://www.wsj.com/articles/ai-helps-auto-loan-company-handle-industrys-trickiest-turn-11546516801 [https://perma.cc/PE9M-JLA3].
  62. See generally Trevor Dryer, How Machine Learning Is Quietly Transforming Small Business Lending, Forbes (Nov. 1, 2018), https://www.forbes.com/sites/forbesfinancecouncil/2018/11/01/how-machine-learning-is-quietly-transforming-small-business-lending/?sh=2b29155a6acc [https://perma.cc/6K46-PXTY].
  63. In this article, the term “locus of algorithmic harm” refers to the individuals affected by AI and the ways such harm materializes in the daily economic lives of consumers. To identify the locus of algorithmic harm, this article explores the process of algorithmic exploitation, the pathways of algorithmic harm, and the effect of such harm on consumers.
  64. See generally Bar-Gill et al., supra note 46, at 33–52 (outlining proposed reforms).
  65. See generally Makada Henry-Nickie, How Artificial Intelligence Affects Financial Consumers, Brookings Inst. (Jan. 31, 2019), https://www.brookings.edu/articles/how-artificial-intelligence-affects-financial-consumers/ [https://perma.cc/U6UW-ELEV].
  66. Mortgage Lender Sentiment Survey: How Will Artificial Intelligence Shape Mortgage Lending?, Fannie Mae 10 (2018), https://www.fanniemae.com/media/20256/display [https://perma.cc/H4UV-SHA3].
  67. See FinRegLab, The Use of Machine Learning for Credit Underwriting: Market & Data Science Context, supra note 59, at 22–23.
  68. See Liz Lumley, Large Language Models Advance on Financial Services, The Banker (Sep. 3, 2023, 11:03 AM), https://www.thebanker.com/Banking-strategies/Investment-banking/Large-language-models-advance-on-financial-services [https://perma.cc/X2VV-QBF5] (citing Machine Learning in UK Financial Services, Bank of Eng. (Oct. 11, 2022), https://www.bankofengland.co.uk/report/2022/machine-learning-in-uk-financial-services [https://perma.cc/Y2NH-8YDU]).
  69. See, e.g., Miriam Fernandez, AI in Banking: AI Will Be an Incremental Game Changer, S&P Glob. (Oct. 31, 2023), https://www.spglobal.com/en/research-insights/featured/special-editorial/ai-in-banking-ai-will-be-an-incremental-game-changer [https://perma.cc/V9H2-U9P3].
  70. E.g., Arvind Nimbalker, Enterprise Finance and AI: Bridging the Financing Gap and Reaching the Credit Invisibles, Nasdaq (Feb. 4, 2022), https://www.nasdaq.com/articles/enterprise-finance-and-ai%3A-bridging-the-financing-gap-and-reaching-the-credit-invisibles [https://perma.cc/P896-URZU].
  71. E.g., Socially Responsible Banking: A Digital Path to Financial Inclusion, PwC, https://www.pwc.com/us/en/industries/financial-services/library/financial-inclusion-through-artificial-intelligence.html [https://perma.cc/QKN9-UZAA].
  72. Pam Dixon & Robert Gellman, World Priv. F., The Scoring of America: How Secret Consumer Scores Threaten Your Privacy and Your Future 10 (2014), https://www.worldprivacyforum.org/wp-content/uploads/2014/04/WPF_Scoring_of_America_April2014_fs.pdf [https://perma.cc/F35J-WZ3A] (“[T]hose who create unregulated scores have no legal obligation to provide Fair Information Practices or due process to consumers.”); cf. CFPB Acts to Protect the Public from Black-Box Credit Models Using Complex Algorithms, supra note 43, at 6.
  73. See Patrice Alexander Ficklin, Tom Pahl & Paul Watkins, Innovation Spotlight: Providing Adverse Action Notices When Using AI/ML Models, CFPB Blog (Jul. 7, 2020), https://www.consumerfinance.gov/about-us/blog/innovation-spotlight-providing-adverse-action-notices-when-using-ai-ml-models/ [https://perma.cc/Q4S5-H2D6].
  74. This does not imply that engineered prices and manufactured consent are phenomena specific to AI-mediated markets. Rather, my argument here is much narrower: the degree of price-manufacturing and consent-manufacturing is greater in AI-mediated markets than in pre-AI markets. In the pre-AI market society, price-engineering and consent-manufacturing occur mostly through mass culture, marketing, and other methods of manipulating consumer demand. The mechanisms that companies and states use to artificially manipulate demand to match supply are well studied by social theorists. See generally Edward S. Herman & Noam Chomsky, Manufacturing Consent: The Political Economy of the Mass Media (1988).
  75. See Alexandra Twin, What Is Price Discrimination, and How Does It Work?, Investopedia, https://www.investopedia.com/terms/p/price_discrimination.asp#:~:text=In%20pure%20price%20discrimination%2C%20the,each%20group%20a%20different%20price [https://perma.cc/5237-XFEC] (last updated Jun. 13, 2022).
  76. See generally id.
  77. See generally id.
  78. E.g., Hal R. Varian, Price Discrimination, in 1 Handbook of Industrial Organization 597 (Richard Schmalensee & Robert Willig eds., 1989).
  79. E.g., id. at 603–04.
  80. E.g., id. at 611–13.
  81. E.g., id. at 617–19.
  82. Viljoen, supra note 35, at 586.
  83. Id.
  84. Id. at 607–08.
  85. Id. at 641. For further discussion of the pathways of vertical and horizontal information control, see infra Part I.B.3.
  86. Consent-less data collection is conceptualized as a harm to autonomy and dignity by denying the person whose information is collected the right to informational self-determination. See generally Alan F. Westin, Privacy and Freedom: Locating the Value in Privacy 15 (1967).
  87. When people are denied access to information about themselves, informational self-determination is also harmed. See Viljoen, supra note 35, at 596; cf. Shyamkrishna Balganesh, Privative Copyright, 73 Vand. L. Rev. 1, 8–20 (2020) (explaining how a fundamental tenet of copyright is creators’ right to determine whether and how to publish).
  88. Unauthorized disclosure may cause immediate harm (e.g., reputational harm) that is redressable under existing tort law. In other circumstances, unauthorized disclosure may result in identity theft or stalking. State statutes also directly address data breaches. See, e.g., N.Y. Gen. Bus. Law §§ 899-aa to -bb (McKinney 2022). For federal level data protection laws, see Health Insurance Portability and Accountability Act of 1996 (HIPAA), 42 U.S.C. § 1320d-2 (2021) (outlining standards for information transactions and data elements regarding health information).
  89. See Fair Credit Reporting Act (FCRA) of 1970, 15 U.S.C. §§ 1681–1681x (2022).
  90. The strongest data privacy law to date, the European Union’s General Data Protection Regulation, 2016 O.J. (L 119), derives its theory of privacy and data protection from the Kantian dignitarian conceptions of data as expression of the self, “subject to deontological requirements of human dignity.” Viljoen, supra note 35, at 623 n.132.
  91. The GDPR includes “the right to be forgotten”—i.e., the right to request erasure of personal data from the Internet—as one of the eight fundamental data privacy rights. 2016 O.J. (L 119) 12–13; see also OneTrust, Complete Guide to General Data Protection Regulation (GDPR) Compliance (Apr. 16, 2021), https://www.onetrust.com/blog/gdpr-compliance/ [https://perma.cc/L42Y-ZMWG] (explaining the key features of the GDPR). The U.S. has not implemented the right to be forgotten. Some legal experts opine that the right to be forgotten is unlikely to be implemented in the U.S. due to First Amendment free expression constraints. See, e.g., Danielle Bernstein, Why the “Right to be Forgotten” Won’t Make it to the United States, Mich. Tech. L. Rev. (2020), https://mttlr.org/2020/02/why-the-right-to-be-forgotten-wont-make-it-to-the-united-states/ [https://perma.cc/JUJ3-RZQU].
  92. The General Data Protection Regulation, Eur. Council, https://www.consilium.europa.eu/en/policies/data-protection/data-protection-regulation/ [https://perma.cc/KW9S-WU5B].
  93. See Viljoen, supra note 35, at 629 & n.150.
  94. See id. at 625–26, 626 n.140.
  95. Id. at 629.
  96. See Karel Šrédl, Alexandr Soukup & Lucie Severová, Models of Consumer’s Choice, 16 E+M Ekonomie a Management 4, 9 (2013).
  97. See generally David Harvey, A Brief History of Neoliberalism 2 (2005) (“Neoliberalism is in the first instance a theory of political economic practices that proposes that human well-being can best be advanced by liberating individual entrepreneurial freedoms and skills within an institutional framework characterized by strong private property rights, free markets, and free trade. The role of the state is to create and preserve an institutional framework appropriate to such practices.”).
  98. Within the neoliberal imaginary, market prices communicate objective information regarding the value of resources transacted because they are unsullied by the distortive deadweight losses generated by undue governmental or social influence. Market prices operate as signals for economic opportunity since they allow market participants to trade on their differences in preferences, forecasts, and knowledge about resource use. In this regard, a free market disconnected from governmental or social influence is necessarily a just market. See, e.g., Jason Brennan, Why Not Capitalism? 90–92 (2014).
  99. Academics constructed the ideal of consumer rational choice in the late 1970s as part of the intellectual movement to justify and spread neoliberal economics. See generally David M. Grether & Charles R. Plott, Economic Theory of Choice and the Preference Reversal Phenomenon, 69 Am. Econ. Rev. 623, 623 (1979).
  100. Standard law-and-economics models tend to assume that consumer preferences are given and exogenously determined (i.e., not shaped through state intervention or market mechanisms). See, e.g., Ariel Porat, Changing People’s Preferences by the State and the Law 13 (U. Chi. Pub. L. Working Paper, Paper No. 722, 2019).
  101. See Michael W.M. Roos, Willingness to Consume and Ability to Consume, 66 J. Econ. Behav. & Org. 387, 388 (2008) (“[C]onsumers’ buying behavior is not completely determined by objective conditions such as their income (ability to buy), but also depends on subjective factors such as attitudes and moods (willingness to buy).”).
  102. Cf. Porat, supra note 100, at 220 (“[P]references often involve views and moral stances that might be based on accurate or false evidence or beliefs. Thus, a person might prefer sweet to non-sweet food based on the mistaken perception that the former is healthier than the latter.”).
  103. Additionally, advances in behavioral economics and sociology have shown that consumers are in fact homo socialis, rather than homo economicus. See Yochai Benkler, Power and Productivity: Institutions, Ideology, and Technology in Political Economy, in A Political Economy of Justice 27, 35 (Danielle Allen, Yochai Benkler, Leah Downey, Rebecca Henderson & Josh Simons eds., 2022) (“Homo economicus is replaced by homo socialis, whose motivations are diverse and socialized and whose decisions are situational and reasonable, not formally rational.”).
  104. For further discussion on how the current consumer financial protection regime is driven by the discourse of individual rights and private litigation, see infra Part II.A.2.
  105. See, e.g., Fawn Fitter & Steven Hunt, How AI Can End Bias, SAP, https://www.sap.com/insights/viewpoints/how-ai-can-end-bias.html [https://perma.cc/2P6U-HLR7] (“Harmful human bias—both intentional and unconscious—can be avoided with the help of artificial intelligence, but only if we teach it to play fair and constantly question the results.”); see also Mirko Bagaric, Dan Hunter & Nigel Stobbs, Erasing the Bias Against Using Artificial Intelligence to Predict Future Criminality: Algorithms are Colorblind and Never Tire, 88 U. Cin. L. Rev. 1037, 1039–40 (2020) (arguing that AI remains beneficial for reducing human bias in criminal sentencing, and that the current backlash against the use of AI in criminal justice is motivated by people’s illogical and innate distrust of decisions made by computers).
  106. Lorraine Daston & Peter Galison, Objectivity 36 (2007).
  107. See, e.g., Yiu, supra note 49.
  108. See, e.g., Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code 3 (2019) (“[R]ather than challenging or overcoming the cycles of inequity, technical fixes too often reinforce and even deepen the status quo.”).
  109. See id.
  110. See Oren Bar-Gill & Elizabeth Warren, Making Credit Safer, 157 U. Pa. L. Rev. 1, 66–69 (2008); Cassandra Jones Havard, On the Take: The Black Box of Credit Scoring and Mortgage Discrimination, 20 B.U. Pub. Int. L. J. 241, 260–71 (2011).
  111. See Laura Abrardi, Carlo Cambini & Laura Rondi, Artificial Intelligence, Firms, and Consumer Behavior: A Survey, 36 J. Econ. Surv. 969, 978–79 (2022).
  112. See, e.g., How Businesses Are Using AI and Data to Enable Financial Inclusion, U.S. Chamber of Com. (Apr. 20, 2022), https://www.uschamber.com/on-demand/technology/how-businesses-are-using-ai-and-data-to-enable-financial-inclusion [https://perma.cc/6R3F-SBVR]; Derek Hosford, AI Can Provide a Solution to the Problem of Credit Invisibility, Am. Consumer Inst. Ctr. for Citizen Rsch. (Jun. 10, 2021), https://www.theamericanconsumer.org/2021/06/ai-can-provide-a-solution-to-the-problem-of-credit-invisibility/ [https://perma.cc/UHU3-JWVQ].
  113. Robert Bartlett, Adair Morse, Richard Stanton & Nancy Wallace, Consumer–Lending Discrimination in the FinTech Era 4 (Nat’l Bureau of Econ. Rsch., Working Paper No. 25943, 2019).
  114. For mortgage loans originated on fintech platforms using algorithmic solutions, Latinx and African American loan applicants on average pay 5.3 basis points more in interest for purchases and 2.0 basis points for refinancing. In comparison, Latinx and African Americans pay 7.9 and 3.6 basis points more in interest for home purchase and refinance mortgages respectively because of human bias. See id.
  115. Patrice Ficklin & Paul Watkins, An Update on Credit Access and the Bureau’s First No-Action Letter, CFPB (Aug. 6, 2019), https://www.consumerfinance.gov/about-us/blog/update-credit-access-and-no-action-letter/ [https://perma.cc/VF76-HPSQ].
  116. Id.
  117. Andreas Fuster, Paul Goldsmith-Pinkham, Tarun Ramadorai & Ansgar Walther, Predictably Unequal? The Effects of Machine Learning on Credit Markets, 77 J. Fin. 5, 8 (2022) (using data collected under the Home Mortgage Disclosure Act).
  118. Id. at 31–32.
  119. Since AI processes alternative data and does not require the use of formal credit information to determine a prospective borrower’s creditworthiness, FinTech companies have used AI to reach consumers who would have been rejected by formal banking institutions for lacking credit history. See The Path to a Fairer Credit Economy: Special Report: Three Ways AI/ML Can Increase Economic Inclusion in America, Zest AI 4–6 (Dec. 16, 2020), https://assets-global.website-files.com/6179287a90a6ea0e76461eba/61d56f97f550f26afbcd1647_Fairness%20White%20Paper.pdf [https://perma.cc/UPK4-VTMR].
  120. See generally Julia Kagan & Khadija Khartit, Risk-Based Pricing: What It Means, How It Works, Investopedia, https://www.investopedia.com/terms/r/riskbased-pricing.asp [https://perma.cc/KR26-MUV4] (last updated Dec. 1, 2020) (“Risk-based pricing methodologies allow lenders to use credit profile characteristics to charge borrowers interest rates that vary by credit quality. . . . This means that higher-risk borrowers who seem less likely to repay their loans in full and on time will be charged higher rates of interest while lower risk borrowers who seem to have a greater capacity to make payments will be charged lower rates of interest.”).
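The logic of risk-based pricing described in the preceding note can be sketched in a few lines (in Python; the rate schedule below is hypothetical and illustrative only, not taken from the cited source or any actual lender):

    # Minimal illustrative sketch: a higher estimated default probability
    # translates into a higher quoted interest rate.
    def quote_rate(p_default: float, base_rate: float = 0.06) -> float:
        """Return a hypothetical APR given an estimated default probability."""
        risk_premium = 0.25 * p_default   # assumed linear risk premium
        return base_rate + risk_premium

    print(f"{quote_rate(0.02):.2%}")  # low-risk borrower  -> 6.50%
    print(f"{quote_rate(0.30):.2%}")  # high-risk borrower -> 13.50%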
  121. “Buy-now-pay-later” (BNPL) refers to payment options that offer consumers the ability to receive their items or services before paying for them in full. In most cases, the total cost of the consumer purchase is divided into installments that are billed to the consumer’s credit account. What makes BNPL schemes predatory is timing: if borrowers miss payments or lack the money to pay their balance in full, they will be hit with punitive late fees and high interest rates. The BNPL feature tends to incentivize borrowers to overspend, so that they are more likely to miss payments or fail to pay their balances in full. See generally ‘Buy Now, Pay Later’ Services: Predatory or Progressive?, OfColor, https://www.ofcolor.com/blog/buy-now-pay-later-services-predatory-or-progressive [https://perma.cc/2JAU-XHPQ].
  122. “Balloon payment” refers to loans with low monthly payments and a large payment due at the end of the loan term. Many of these loans are predatory because the balloon payment is “hidden” in the contract and often catches borrowers by surprise. See generally Balloon Payments: Predatory Lending: The Danger of Balloon Payments, Faster Cap., https://fastercapital.com/content/Balloon-payments–Predatory-Lending–The-Danger-of-Balloon-Payments.html [https://perma.cc/N6A2-PWW5] (last updated Oct. 2, 2023).
  123. “Negative amortization” occurs when the principal amount of the loan increases because the loan repayments do not cover the total interest cost of the period, causing the total indebtedness to increase even though the borrower has made every scheduled payment. See generally Negative Amortization, Corp. Fin. Inst., https://corporatefinanceinstitute.com/resources/commercial-lending/negative-amortization/ [https://perma.cc/2MSU-8DZC].
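A short worked sketch (in Python, with invented figures) makes the mechanism in the preceding note concrete: whenever the periodic payment falls short of the interest accrued, the shortfall is capitalized into principal and the balance grows despite on-time payments:

    # Minimal illustrative sketch of negative amortization.
    balance = 10_000.00    # hypothetical loan principal
    monthly_rate = 0.01    # 12% nominal annual interest, accrued monthly
    payment = 80.00        # fixed payment below the ~$100 monthly interest

    for month in range(1, 4):
        interest = balance * monthly_rate
        balance += interest - payment   # unpaid interest is added to principal
        print(f"month {month}: balance = ${balance:,.2f}")

    # Prints $10,020.00, then $10,040.20, then $10,060.60: the borrower pays
    # every period, yet the total indebtedness keeps rising.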
  124. For further critiques of “Techno-Chauvinism,” or, as it is more commonly called, “Techno-Solutionism,” see generally Meredith Broussard, Artificial Unintelligence: How Computers Misunderstand the World (2018); Evgeny Morozov, To Save Everything: Click Here: The Folly of Technological Solutionism (2014).
  125. Increasingly, legal scholars and data scientists characterize algorithmic discrimination as a “data problem,” but disagreement exists on whether poor data or insufficient data creates discriminatory AI outputs. Compare Cofone, supra note 36, at 1402 (“An algorithm can only be as good as the data that is fed. If an algorithm is mining in a section of the dataset that, for any reason, is unrepresentative of the population, it will produce a non-representative output.”), with Catherine Tucker, Algorithmic Exclusion: The Fragility of Algorithms to Sparse and Missing Data, Brookings: Ctr. on Regul. & Mkts. 3 (Jan. 18, 2023), https://www.brookings.edu/wp-content/uploads/2023/02/Algorithmic-exclusion-FINAL.pdf [https://perma.cc/W78S-7MRJ] (“Algorithmic exclusion occurs when algorithms are unable to even make predictions because they lack the data to [do] so.”).
  126. H. James Wilson, Paul R. Daugherty & Chase Davenport, The Future of AI Will be About Less Data, Not More, Harv. Bus. Rev. (Jan. 14, 2019), https://hbr.org/2019/01/the-future-of-ai-will-be-about-less-data-not-more [https://perma.cc/87GD-U6NB].
  127. See Sidath Asiri, An Introduction to Classification in Machine Learning, Built-in (Nov. 15, 2022), https://builtin.com/machine-learning/classification-machine-learning [https://perma.cc/S9YD-CYVD].
  128. See Reva Schwartz, Apostol Vassilev, Kristen Greene, Lori Perine, Andrew Burt & Patrick Hall, Towards a Standard for Identifying and Managing Bias in Artificial Intelligence 6–9 (Nat’l Inst. Standards & Tech, Special Publication No. 1270, 2022), https://tsapps.nist.gov/publication/get_pdf.cfm?pub_id=934464 [https://perma.cc/F4CP-PV6Z].
  129. See Jamie Wareham, Why Artificial Intelligence is Set Up to Fail LGBTQ People, Forbes (Mar. 21, 2021), https://www.forbes.com/sites/jamiewareham/2021/03/21/why-artificial-intelligence-will-always-fail-lgbtq-people/?sh=4c6e3946301e [https://perma.cc/32ML-4VYV] (“AIs build decision-making models by looking at existing or ‘staple’ decisions. Norms that AI then try to emulate.”).
  130. See Shivani Gupta & Atul Gupta, Dealing with Noise Problem in Machine Learning Data-Sets: A Systematic Review, 161 Procedia Comp. Sci. 466, 471 (2019).
  131. See Wareham, supra note 129 (“The problem is that what we build is the norm, the typical. By design, AI excludes and pushes to the margins anything that doesn’t have a robust example.”).
  132. Leena Rao, ZestFinance Debuts New Data Underwriting Model to Ensure Lower Consumer Loan Default Rates, TechCrunch (Nov. 19, 2012), https://techcrunch.com/2012/11/19/zestfinance-debuts-new-data-underwriting-model-to-ensure-lower-consumer-loan-default-rates/ [https://perma.cc/4R8D-H22F]. See also ZestFinance Introduces Machine Learning Platform to Underwrite Millennials and Other Consumers with Limited Credit History, Bus. Wire (Feb. 14, 2017), https://www.businesswire.com/news/home/20170214005357/en/ZestFinance-Introduces-Machine-Learning-Platform-to-Underwrite-Millennials-and-Other-Consumers-with-Limited-Credit-History [https://perma.cc/J5M3-KBKG] (describing ZestFinance’s 2017 incorporation of alternative data into its machine learning model to offer credit underwriting services to consumers with limited credit history).
  133. See ZestFinance, LinkedIn, https://www.linkedin.com/company/zestfinance/ [https://perma.cc/R3ZS-KYLJ] (“The world’s most innovative lenders rely on ZestFinance to do more profitable lending through machine learning. Our Zest Automated Machine Learning (ZAML) software is the only solution for explainable AI in credit, and we automate risk management so our customers can focus on lending safely to more people.”).
  134. Quentin Hardy, Just the Facts. Yes, All of Them, N.Y. Times, Mar. 24, 2012, at BU1 (quoting ZestFinance CEO Douglas Merrill).
  135. See Laura Burrows, 2022 State of Alternative Credit Data Report, Experian (Jul. 12, 2022), https://www.experian.com/blogs/insights/2022-state-of-alternative-credit-data-report/ [https://perma.cc/9MDE-DWVG] (footnote omitted) (“[M]any businesses are proactively turning to alternative credit data––or ‘expanded FCRA-regulated data’––to expand their lending portfolio…”).
  136. See, e.g., Anya E.R. Prince & Daniel Schwarcz, Proxy Discrimination in the Age of Artificial Intelligence and Big Data, 105 Iowa L. Rev. 1257, 1273 (2020) (“[P]roxy discrimination by AIs is virtually inevitable whenever the law seeks to prohibit use of characteristics whose predictive power cannot be measured more directly by facially neutral data…”).
  137. Mikella Hurley & Julius Adebayo, Credit Scoring in the Era of Big Data, 18 Yale J.L. & Tech. 148, 164 (2016).
  138. Id. (citing Quentin Hardy, Big Data for the Poor, N.Y. Times (Jul. 5, 2012), https://archive.nytimes.com/bits.blogs.nytimes.com/2012/07/05/big-data-for-the-poor/ [https://perma.cc/KDW5-B79P]).
  139. See id. at 164–65.
  140. Id.
  141. Hardy, supra note 138.
  142. See, e.g., Michael Carl Tschantz, What Is Proxy Discrimination?, Ass’n of Computing Mach. Digit. Libr. (Jun. 2022), https://dl.acm.org/doi/fullHtml/10.1145/3531146.3533242 [https://perma.cc/MX5Q-YP9N].
  143. See Laura Douglas, AI Is Not Just Learning Our Biases; It Is Amplifying Them, Medium (Dec. 5, 2017), https://medium.com/@laurahelendouglas/ai-is-not-just-learning-our-biases-it-is-amplifying-them-4d0dee75931d [https://perma.cc/5XN3-WJY4].
  144. See Julie E. Cohen, The Biopolitical Public Domain: The Legal Construction of the Surveillance Economy, 31 Phil. & Tech. 213, 222 (2017) (describing personal data as both raw and readily available for commercialization through “new data mining” systems).
  145. See generally Herman & Chomsky, supra note 74, at 302 (“[T]he U.S. media do not function in the manner of the propaganda system of the totalitarian state. Rather, they permit––indeed, encourage––spirited debate, criticism, and dissent, as long as these remain faithfully within the system of presuppositions and principles that constitute an elite consensus, a system so powerful to be internalized largely without awareness.”).
  146. See, e.g., Jonathan A. Obar & Anne Oeldorf-Hirsch, The Clickwrap: A Political Economic Mechanism for Manufacturing Consent on Social Media, 4 Soc. Media + Soc’y, July 2018, at 3 (referencing the use of consent-manufacturing in clickwraps to keep “individuals in a ‘buying mood’” (quoting Herman & Chomsky, supra note 74, at 17)).
  147. Herman & Chomsky, supra note 74, at 306.
  148. Individualism causes self-alienation through the breakdown of communities. See generally Robert D. Putnam, Bowling Alone: The Collapse and Revival of American Community (2000) (arguing that the decline of social cohesion, networks, and communities endangers civic engagement and the functioning of representative democracy); George Monbiot, Neoliberalism Is Creating Loneliness. That’s What’s Wrenching Society Apart, The Guardian (Oct. 12, 2016), https://www.theguardian.com/commentisfree/2016/oct/12/neoliberalism-creating-loneliness-wrenching-society-apart [https://perma.cc/8SRJ-BLJK] (arguing that the social expectations of “self-interest and extreme individualism” in Western societies are causing unprecedented social isolation, depression, fear, the perception of threat, and mental illnesses).
  149. Market dependency reinforces self-suppression by compelling people to resort to exploitative markets to satisfy their basic needs of survival and subsistence. See generally Michael D. Sousa, Consumer Bankruptcy in the Neoliberal State, 39 Emory Bankr. Devs. J. 199, 204–05 (2023) (quoting Kevin T. Leicht & Scott T. Fitzgerald, Postindustrial Peasants: The Illusion of Middle-Class Prosperity 11 (2007)) (“As a result of what neoliberalism has wrought for most Americans—stagnant incomes, rising taxes, job instability, privatization, a weakened welfare state, globalization, the pocketing of productivity gains by the corporate elite, and a surplus of readily-available credit—Americans have been characterized as ‘post-industrial peasants’: people who are ‘so in debt that those to whom they owe money (and the employers and economic elites who provide the investment and consumption capital for the system) control them.’”).
  150. Cf. Michael Burawoy, Manufacturing Consent: Changes in the Labor Process Under Monopoly Capitalism (1979) (focusing on consent manufacturing in industrial labor relations and how emerging technological, political, and ideological systems changed factory life).
  151. See, e.g., Salomé Viljoen, Data Relations, Logic(s) (May 17, 2021), https://logicmag.io/distribution/data-relations/ [https://perma.cc/W2UT-UAA6].
  152. For further discussion about the concept of vertical versus horizontal data relations, see Viljoen, supra note 35, at 607–08, 610–13.
  153. CFPB, Payday Loans, Auto Title Loans, and High-Cost Installment Loans: Highlights from CFPB Research 2 (2016), https://files.consumerfinance.gov/f/documents/Payday_Loans_Highlights_From_CFPB_Research.pdf [https://perma.cc/Y4P2-JSGY].
  154. See id. at 1. On average, payday lenders charge $15-30 interest for every $100 borrowed. Consumer Fed’n of Am., How Payday Loans Work, https://paydayloaninfo.org/how-payday-loans-work/ [https://perma.cc/WSB2-6QRU] (“For two-week loans, these finance charges can result in interest rates from 390-780% APR. Shorter term loans have even higher APRs.”) Once a borrower misses one payment, it is very typical for such payments to compound and result in revolving debt. Id.
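The APR range quoted in the preceding note follows from straightforward annualization of the finance charge, as this worked sketch (in Python, using the note’s own figures) shows:

    # Worked sketch: annualizing a payday-loan finance charge.
    fee_per_100 = 15.0   # $15 charged per $100 borrowed
    term_days = 14       # typical two-week term

    periodic_rate = fee_per_100 / 100.0       # 15% per 14-day period
    apr = periodic_rate * (365 / term_days)   # simple annualization, no compounding
    print(f"{apr:.0%}")                       # ~391%, i.e., roughly 390% APR

    # At $30 per $100, the same arithmetic yields roughly 782% APR.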
  155. See generally James Ledbetter, Are Fintechs Going Predatory?, Techonomy (Apr. 23, 2021), https://techonomy.com/fintechs-going-predatory/ [https://perma.cc/6E94-7EQE] (describing how FinTech companies in the payday lending business use “rent-a-bank” partnerships to circumvent state usury laws and use AI to micro-target and identify prospective consumers who are most likely to borrow payday loans).
  156. See The Future of Short-Term Lending: How AI Is Shaping Payday Loans, GetMoney (Oct. 30, 2023), https://getmoney.com/blog/the-future-of-short-term-lending-how-ai-is-shaping-payday-loans/ [https://perma.cc/SF79-G4XN] (“AI algorithms can customize loan terms to individual borrowers based on their financial histories…”).
  157. Here, I refer to “groups” as behavioral archetypes that are summarized and categorized by AI in the knowledge discovery process. They may or may not correspond with group classifications that exist in the observable natural world, such as race, sex, gender, or religion.
  158. Viljoen, supra note 35, at 611.
  159. Id.
  160. Ciarán Daly, Addressing the Implications of AI for Individuals Seeking Payday Loans, AI Bus. (May 23, 2019), https://aibusiness.com/verticals/addressing-the-implications-of-ai-for-individuals-seeking-payday-loans [https://perma.cc/894J-V75H].
  161. See Julie E. Cohen, Between Truth and Power: The Legal Constructions of Informational Capitalism 71–72 (2019) (“The techniques operate on ‘raw’ personal data to produce ‘refined’ data doubles and use the data doubles to generate preemptive nudges that, when well executed, operate as self-fulfilling prophecies, eliciting patterns of behavior, content consumption, and content sharing already judged most likely to occur.”).
  162. See Soederberg, supra note 16, at 84 (“A main regulatory feature of consumer protection in the United States is the Consumer Credit Protection Act of 1968 (hereafter: the 1968 Act). This Act is an umbrella consumer protection law that includes the Equal Credit Opportunity Act, the Fair Credit Billing Act, the Fair Credit Reporting Act and the Truth in Lending Act (or, TILA) that was originally part of the Consumer Credit Protection Act. It should be underlined that since the passing of the 1968 Act, there has been no comprehensive or overarching consumer protection legislation in the U.S. Instead, the emphasis has been on a series of separate laws that target specific business practices, industries, and consumer products.”).
  163. See Jamie Duitz, Battling Discriminatory Lending: Taking a Multidimensional Approach Through Litigation, Mediation, and Legislation, 20 J. Affordable Hous. & Cmty. Dev. L. 101, 111 (2010) (arguing that the fair lending laws prohibit all lending practices that result in unequal access to credit, including facially neutral lending practices that result in disparate impact); Francesca Lina Procaccini, Stemming the Rising Risk of Credit Inequality: The Fair and Faithful Interpretation of the Equal Credit Opportunity Act’s Disparate Impact Prohibition, 9 Harv. L. & Pol’y Rev. S43, S58 (2015) (arguing that Congress’s intent in legislating the ECOA was to ensure non-discriminatory provision of credit); Winnie F. Taylor, The ECOA and Disparate Impact Theory: A Historical Perspective, 26 J. L. & Pol’y 575, 631 (2018) (arguing that Congress intended for the ECOA to remove both intentional and unintentional barriers to credit equality).
  164. By the 1960s, pervasive race-based economic inequality had become a central catalyst for civil unrest and uprisings. In 1967, President Lyndon B. Johnson established the National Advisory Commission on Civil Disorders (Kerner Commission) to inquire into the reasons for social unrest and help Congress craft legislative solutions. The Kerner Commission concluded that disparities in the pricing of goods, the dearth of mainstream consumer loans, and the pervasiveness of high-price loans resulted in “the conclusion among [African Americans] that they [were] exploited by white society.” Atkinson, supra note 8, at 1420–22 (quoting Nat’l Advisory Comm’n on Civ. Disorders, Report of The National Advisory Commission on Civil Disorders 139–40 (1967)). The civil unrest of the 1960s captured the attention of Congress, sparking a new sense of urgency to develop a comprehensive federal-level response to the problem of inequality-fueled civil instability. This historical moment laid the foundations of consumer financial protection laws. See id. at 1425.
  165. See generally Milton Friedman, Capitalism and Freedom passim (1962) (arguing that political and economic freedoms are linked, promoting laissez faire and individual choice over government intervention); David Dollar & Aart Kraay, Growth is Good for the Poor, 7 J. Econ. Growth 195, 218–19 (2002) (arguing that policies and institutions enhancing the strength of private property rights, establishing the rule of law, and promoting financialization are conducive to global poverty reduction); The World Bank, Globalization, Growth and Poverty 13, 19 (2002) (arguing that neoliberal growth paradigms focused on protecting robust private property rights and freedom of contract are conducive to global poverty reduction).
  166. See, e.g., Tayyab Mahmud, Debt and Discipline: Neoliberal Political Economy and the Working Classes, 101 Ky. L.J. 1, 46 (2013) (“With the neoliberal call for individuals to secure their freedom, autonomy and security through financial market and not the state, practices of investment, calculation and speculation became signs of initiative, self-management, and enterprise.”).
  167. See Anne Fleming, City of Debtors: A Century of Fringe Finance 214 (2018) (“Congress had largely ceded authority over the regulation of consumer credit to the states—until 1968, when it passed the Truth in Lending Act.”).
  168. See Barbara Curran, Trends in Consumer Credit Legislation 16 (1965). Usury laws, effective in nearly every state, specified the maximum interest rate which may be charged legally. States also had laws patterned after the Uniform Small Loan Act to govern loans not exceeding a statutorily prescribed amount. Id.
  169. James A. Burns, Jr., An Empirical Analysis of the Equal Credit Opportunity Act, 13 U. Mich. J. L. Reform 102, 108 (1979).
  170. Id.
  171. Morris R. Neifeld, Neifeld’s Manual on Consumer Credit 512 (1961).
  172. See Louis Hyman, Ending Discrimination, Legitimating Debt: The Political Economy of Race, Gender, and Credit Access in the 1960s and 1970s, 12 Enter. & Soc’y 200, 224 (2011).
  173. Mehrsa Baradaran, The Color of Money: Black Banks and the Racial Wealth Gap 196 (2017) (“The most successful bankers were those at the center of a community’s social structure––who had relationships with businesses and potential leaders.”).
  174. For general background about inflation in the 1970s, see Alan S. Blinder, The Anatomy of Double-Digit Inflation in the 1970s, in Inflation: Causes & Effects 261 (Robert E. Hall ed., 1982). For further information about the rise of debt-based consumption in the U.S. that began in the 1970s, see Justin Sean Myers, Neoliberalism, Debt and Class Power, in Class: The Anthology 337, 344 (Stanley Aronowitz & Michael J. Roberts eds., 2018) (“[T]he massive financialization of daily life since the 1970s—home, education, medical care, clothing, food, car—signaled the movement of credit from the background to the foreground, from a supplement of wage-income to the primary mechanism maintaining accumulation.”).
175. See S. Rep. No. 94-589, at 3 (1976), reprinted in 1976 U.S.C.C.A.N. 403, 405 (“Virtually all home purchases are made on credit. About two-thirds of consumer automobile purchases are on an installment basis. Large department stores report that 50% or more of their sales are on revolving or closed-end credit plans. Upward of 15% of all consumers’ disposable income is devoted to credit obligations other than home mortgages.”).
  176. Id.
177. Hyman, supra note 172, at 201–02.
178. See id. at 201 (“Ghetto retailers kept their accounts in leather-bound ledgers and collected payments door-to-door, rather than mainframes that billed automatically like suburban retailers. Credit cards were nonexistent.”).
  179. See id. at 204; Atkinson, supra note 9, at 1421 (“The Kerner Commission focused in significant part on economic barriers to equality, including access to credit, as causes of race-related domestic unrest.”).
180. See Hyman, supra note 172, at 206–07 (citing Consumer Credit and the Poor: Hearing Before the Subcomm. on Fin. Insts. of the S. Comm. on Banking & Currency, 90th Cong. 5–6 (1968) (statement of Paul Rand Dixon, Chairman, Federal Trade Commission) https://books.google.com/books?id=agyhbuf4u0IC&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=snippet&q=each%20member%20of%20our&f=false [https://perma.cc/XY5N-XJHG]).
181. Equal Credit Opportunity Act, Pub. L. No. 93-495, 88 Stat. 1521 (1974) (to be codified at 15 U.S.C. §§ 1691–1691e). When Congress initially passed ECOA in October 1974, it only forbade lending discrimination on the basis of sex and marital status. Racial discrimination was at the center of congressional debate, but Congress did not prohibit racial discrimination in lending until the 1976 amendment of the ECOA, for reasons beyond the scope of this paper. See Hyman, supra note 172, at 225–26.
182. See Lesley Fair, Fighting Discrimination in the Credit Marketplace, FTC Bus. Blog (Mar. 26, 2021), https://www.ftc.gov/business-guidance/blog/2021/03/fighting-discrimination-credit-marketplace [https://perma.cc/27Z4-MS7T] (“Equal access to credit based on non-discriminatory criteria is an essential component of economic opportunity and a fair marketplace.”); see also Taylor, supra note 163, at 628.
183. Taylor, supra note 163, at 631 (emphases added) (quoting H.R. Rep. No. 94-210, at 3 (1975)).
  184. E.g., Gerald Ford, Statement on Signing the Equal Credit Opportunity Act Amendments of 1976 (Mar. 23, 1976), https://www.presidency.ucsb.edu/documents/statement-signing-the-equal-credit-opportunity-act-amendments-1976 [https://perma.cc/3LZ3-JSJV] (“This administration is committed to the goal of equal opportunity in all aspects of our society. In financial transactions, no person should be denied an equal opportunity to obtain credit for reasons unrelated to his or her creditworthiness.”).
  185. Gunnar Trumbull, Credit Access and Social Welfare: The Rise of Consumer Lending in the United States and France, 40 Pol. & Soc’y 9, 20 (2012) (“[P]olicymakers and the general public gradually came to see private credit as a legitimate tool for social justice.”).
  186. Id. at 28.
187. The rhetoric of individualism and consumer autonomy represents a legislative shift away from earlier Keynesian welfare state policies, such as President Johnson’s “Great Society” programs. Whether intentional or not, the intersection between individualism and debt-based consumption was instrumental in the creation of the U.S. neoliberal “debtfare” state. See Soederberg, supra note 17, at 50 (citations omitted) (“First, neoliberal state forms emerged from the demise of previous state forms, such as Keynesian welfare states in the global North … to deal effectively with the underlying tension and crises in capital over-accumulation and the subsequent social fallouts, such as labor unrests [and] civil rights movement … Second, in response to these struggles and tensions, the rhetorical and regulatory features of the neoliberal state forms include: a withdrawal or abstention by the state in economic matters; the shifting into the private sector (or, the contracting out) of public services and the commodification of public goods …”).
188. Equal Credit Opportunity Act Amendments of 1976, Pub. L. No. 94-239, § 701, 90 Stat. 251, 251 (1976) (to be codified at 15 U.S.C. § 1691).
  189. Women’s Business Ownership Act of 1988, sec. 301, § 703(a), Pub. L. No. 100-533, 102 Stat. 2689, 2693 (codified at 15 U.S.C. § 1691b).
190. Federal Deposit Insurance Corporation Improvement Act of 1991, sec. 223, § 706(g), Pub. L. No. 102-242, 105 Stat. 2236, 2306 (to be codified at 15 U.S.C. § 1691e) (mandating that creditors provide, upon an applicant’s request, a copy of the appraisal report on residential real property offered as security for a loan).
  191. Regulation B defines “adverse action” as: “(1) a refusal to grant credit in substantially the amount or on substantially the terms requested in an application unless the creditor makes a counteroffer (to grant credit in a different amount or on other terms), and the applicant uses or expressly accepts the credit offered; (2) a termination of an account or an unfavorable change in the terms of an account that does not affect all or substantially all of a class of the creditor’s accounts; or (3) a refusal to increase the amount of credit available to an applicant who has made an application for an increase.” 12 C.F.R. § 1002.2(c)(1) (2023).
  192. James A. Huizinga & Krista B. LaBelle, Amendments to Regulation B and the Official Staff Commentary, 59 Bus. Law. 1137, 1138 (2004).
  193. E.g., Michael H. Schill & Samantha Friedman, The Fair Housing Amendments Act of 1988: The First Decade, 4 Cityscape: J. Pol’y Dev. & Res. 57, 59 (1999).
194. See, e.g., Walter Gorman, Enforcement of the Equal Credit Opportunity Act, 37 Bus. Law. 1335, 1336 (1982).
195. See, e.g., John H. Matheson, The Equal Credit Opportunity Act: A Functional Failure, 21 Harv. J. on Legis. 371, 375–77 (1984). Eleven other federal agencies shared limited authority with the Federal Trade Commission on matters relating to enforcement action. Id. at 375 n.19.
196. E.g., John R. Walter, The Fair Lending Laws and Their Enforcement, 81 Econ. Q. 61, 68 (1995). The 1976 amendment to the ECOA initially retained the dual enforcement model. It authorized the U.S. Attorney General to institute civil proceedings in two circumstances. First, federal agencies responsible for enforcement of the ECOA could refer matters to the Attorney General for litigation. Second, the Attorney General could independently commence civil proceedings to prohibit or remedy ECOA violations on behalf of a class or private individuals. Matheson, supra note 195, at 376.
197. For instance, since 1938 the FTC has had the power pursuant to § 5 of the Federal Trade Commission Act (FTCA) to regulate “unfair and deceptive acts and practices.” 15 U.S.C. § 45 (2022). In 1980, in response to considerable controversy during the Carter Administration regarding the use of its authority to regulate unfair practices, the Commission issued a policy statement to clarify its rulemaking power. See Michael L. Denger, The Unfairness Standard and FTC Rulemaking: The Controversy Over the Scope of the Commission’s Authority, 49 Antitrust L.J. 53, 54–56 (1980) (describing the congressional controversy over the FTC’s expansive “unfairness” power under the FTCA). The FTC’s 1980 Policy Statement set up a standard restraining its own power to create rules and prohibit practices that are “unfair” under the FTCA. See FTC, Policy Statement on Unfairness (Dec. 17, 1980), https://www.ftc.gov/legal-library/browse/ftc-policy-statement-unfairness [https://perma.cc/KL27-HNW9] (defining actionable “unfair” violations as conduct that “substantially” injures consumers, that is not outweighed by “any offsetting consumer or competitive benefits,” and that advances Congress’ public policy goals). Congress later amended the FTCA to incorporate the specific standard articulated by the FTC’s 1980 Policy Statement. Federal Trade Commission Act Amendments of 1994, sec. 9, § 5, Pub. L. No. 103-312, 108 Stat. 1691, 1695 (to be codified at 15 U.S.C. § 45(n)).
198. Matheson, supra note 195, at 377.
  199. Id. at 377 n.29 (identifying more than 14,000 lawsuits brought under TILA since its enactment in 1968).
  200. Id. at 377 n.30 (identifying over 8,000 employment discrimination cases filed in the federal courts in 1983).
  201. CFPB, Laws and Regulations: Truth in Lending Act 5 (2015), https://files.consumerfinance.gov/f/201503_cfpb_truth-in-lending-act.pdf [https://perma.cc/8MYE-7NJP].
202. See, e.g., Sarah Ammermann, Adverse Action Notice Requirements Under the ECOA and the FCRA, Consumer Compliance Outlook (2013), https://www.consumercomplianceoutlook.org/2013/second-quarter/adverse-action-notice-requirements-under-ecoa-fcra/#footnotes [https://perma.cc/8NJN-569U] (“Adverse action notice [requirements] are designed to help consumers and businesses by providing transparency to the credit underwriting process and protecting against potential credit discrimination by requiring creditors to explain the reasons adverse action was taken.”); see also Regulation B, 12 C.F.R. § 1002.16(c) (2023) (allowing creditors to correct inadvertent errors in the disclosure process).
  203. An important feature of the U.S. neoliberal consumer financial protection regime is its concealment of transactional inequality under the guise of consumer consent. See Soederberg, supra note 17, at 4 (“The social power of money, reinforced by the debtfare state’s rhetorical and regulatory framings, assists in distorting the exploitative, unequal and disciplinary nature of the loan. Here the loan is seen as a voluntary exchange of equivalents between two consenting parties, where class-based power and exploitation are less visible and less politicised than in a wage-labor/employer relation.”).
  204. See id.
205. Matheson, supra note 195, at 380.
206. The impact of private enforcement in widening income disparities and barring the poor from legal redress has been well studied by legal scholars. See generally Luke P. Norris, The Promise and Perils of Private Enforcement, 108 Va. L. Rev. 1483, 1489–90 (2022) (“[R]ecent adaptations of private enforcement tend to exhibit less democratic promise. First, they often either do not respond to or threaten to exacerbate existing power imbalances. … Second, the suits involve enforcers bringing less direct, affected expertise to less dynamic regulatory environments. … Finally, these suits have the potential to undermine democratic deliberation in a variety of ways—including by posing citizen against citizen and fraying the social fabric and by further subordinating people who have faced historical and enduring forms of oppression.”); Eloise Pasachoff, Special Education, Poverty, and the Limits of Private Enforcement, 86 Notre Dame L. Rev. 1413, 1416 (2011) (“If beneficiaries with fewer financial resources consistently bring fewer claims than their wealthier counterparts, relying heavily on private enforcement may mean that the former group will not receive their fair share of the distribution.”); Scott Ilgenfritz, The Failure of Private Actions as an ECOA Enforcement Tool: A Call for Active Governmental Enforcement and Statutory Reforms, 36 Fla. L. Rev. 447, 450 (1984) (“The relative ineffectiveness of private action as the chief method of enforcement undercuts the successful implementation of the [ECOA’s] policies.”).
207. See, e.g., David Singh Grewal & Jedediah Purdy, Introduction: Law and Neoliberalism, 77 L. & Contemp. Probs. 1, 17 (2014) (internal citations omitted) (arguing that the neoliberal conception of justice revolves around “the idea that the pursuit of individual preferences through spreading decisions is sufficient as an account of personal liberty and of the structural relation of that liberty to a scheme of good-enough government”); see also Jedediah Britton-Purdy et al., Building a Law-and-Political-Economy Framework: Beyond the Twentieth-Century Synthesis, 129 Yale L.J. 1784, 1814 (2020) (“Inclusion in the market’s private ordering thus became a central aim of many accounts of individual rights and their purposes, including the rights of individuals subordinated in racialized and gendered hierarchies. Arguments about market freedom thus paralleled liberal arguments about self-realization[. …]”).
  208. See generally Cass R. Sunstein, The Cost-Benefit State 4 (Coase-Sandor Inst. for L. & Econ., Working Paper No. 39, 1996) (defending cost-benefit analysis as “a way of diminishing interest-group pressures on regulation”).
209. See generally Richard Posner, Law and Economics Is Moral, 24 Val. U. L. Rev. 163, 166–67 (1990) (arguing for a regulatory commitment to free markets and limited government because “the minimum state defined by the economic analysis of market failure is the state that works best to achieve the common goals of most people in the world.”).
  210. See generally Robert Ahdieh, Reanalyzing Cost-Benefit Analysis: Toward a Framework of Function(s) and Form(s), 88 N.Y.U. L. Rev. 1983, 1995–99 (2013).
  211. See id. at 2010–22. As a mode of policy inquiry deriving regulatory insight from the intake of open market data, cost-benefit analysis promises to rationalize policymaking, reduce regulatory bias, and enhance administrative accountability. See id.
212. Doctrinally, the debate over cost-benefit analysis has revolved around whether judicial review of agency action can and should require cost-benefit analysis as part of the court’s review. Most debate on cost-benefit analysis in the judicial review setting centers on what the scope of agency power is under the enabling statutes and how courts should review agency action under the arbitrary and capricious standard of section 706(2)(A) of the Administrative Procedure Act. See generally Lawrence Lessig, The New Chicago School, 27 J. Legal Stud. 661, 666–67, 671–72 (1998) (describing the rise of a second “Chicago School” that emphasizes optimizing regulations through cost-benefit analysis); see also Jody Freeman & Adrian Vermeule, Massachusetts v. EPA: From Politics to Expertise, 2007 Sup. Ct. Rev. 51, 52–54, 97 (2007) (arguing that the Supreme Court’s expertise-forcing project, as represented by its decision in Massachusetts v. EPA, reveals a growing judicial embrace of cost-benefit analysis as a solution to the problem of politicization of expertise in the administrative agencies); cf. Kathryn A. Watts, Controlling Presidential Control, 114 Mich. L. Rev. 683, 690 (2016) (arguing that the prevailing sentiment of “expertise-forcing” through cost-benefit analyses—i.e., the depoliticization of agency decision-making and removal of presidential political influences—fails to keep the executive branch in check). For recent cases interpreting the arbitrary and capricious standard of judicial review as requiring a cost-benefit analysis, see, e.g., Business Roundtable v. SEC, 647 F.3d 1144, 1149–52 (D.C. Cir. 2011).
  213. See, e.g., Theodore M. Porter, Trust in Numbers: The Pursuit of Objectivity in Science and Public Life 188–89 (1995) (describing how cost-benefit analysis became the standard for policy evaluation across all topics and industries).
  214. Cf. id. at 153 (arguing that public decisions made through conducting cost-benefit analysis would “reduce opportunities for purely political choices”).
215. See Jedediah Britton-Purdy et al., Building a Law-and-Political-Economy Framework: Beyond the Twentieth-Century Synthesis, 129 Yale L.J. 1784, 1811–12 (2020) (footnotes omitted) (“‘Interest-group capture’ became an axiomatic problem of the regulatory state, leading influential academics to argue that the only appropriate response was a move to market-mediated technocracy, in the form of cost-benefit analysis. The administrative state was remade along the way, with cost-benefit analysis used to block any regulation that did not meet a market-denominated test of value from the Reagan Administration onward.”).
216. See Bent Flyvbjerg & Dirk W. Bester, The Cost-Benefit Fallacy: Why Cost-Benefit Analysis Is Broken and How to Fix It, 12 J. Cost-Benefit Analysis 395, 403–06 (2021).
217. See Todd Phillips & Sam Berger, Reckoning with Conservatives’ Bad Faith Cost-Benefit Analysis, Ctr. for Am. Progress (Aug. 14, 2020), https://www.americanprogress.org/article/reckoning-conservatives-bad-faith-cost-benefit-analysis/ [https://perma.cc/YN89-GLXC] (arguing that conservatives have selectively used cost-benefit analysis to hide the true costs of deregulation by ensuring that the social costs of deregulatory policies are excluded from the analysis).
218. See, e.g., Burns, supra note 169, at 107–10 (citing S. Rep. No. 93-278, at 19 (1973)) (explaining how, to balance competing interests, the ECOA drafters omitted a definition of discrimination fearing that it might “unnecessarily limit or expand liability”).
219. See Fed. Rsrv. Bd., Fair Lending Regulations and Statutes: Overview, Consumer Compliance Handbook (2017), https://www.federalreserve.gov/boarddocs/supmanual/cch/fair_lend_over.pdf [https://perma.cc/649C-NLFB].
  220. See Tex. Dep’t of Hous. & Cmty. Affairs v. Inclusive Cmtys. Project, Inc., 576 U.S. 519, 527 (2015) (articulating the elements of a prima facie disparate impact claim under the FHA).
  221. Dodd–Frank Wall Street Reform and Consumer Protection Act, 12 U.S.C. § 5531(c)(1) (2022).
  222. Federal Trade Commission Act, 15 U.S.C. § 45(n) (2022).
223. In 2017, the CFPB issued a payday lending rule imposing a set of underwriting requirements on short-term payday loans (“2017 Rule”). See Payday, Vehicle Title, and Certain High-Cost Installment Loans, 12 C.F.R. §§ 1041.02–.10, 1041.12, 1041.113 (2019). The 2017 Rule met persistent opposition from the banking industry both during its notice-and-comment stage and after promulgation. Creditors argued, among other criticisms, that the 2017 Rule had unsound empirical foundations and exaggerated the substantiality of consumer harm. 82 Fed. Reg. 54472, 54706 (published Nov. 17, 2017) (to be codified as amended at 12 C.F.R. pt. 1041). In 2019, after Trump appointee Mick Mulvaney became the CFPB Acting Director, the CFPB announced its intent to reconsider the 2017 Rule. See CFPB, Consumer Financial Protection Bureau Releases Notices of Proposed Rulemaking on Payday Lending, CFPB Newsroom (Feb. 6, 2019), https://www.consumerfinance.gov/about-us/newsroom/consumer-financial-protection-bureau-releases-notices-proposed-rulemaking-payday-lending/ [https://perma.cc/GZG3-CCLE]. That reconsideration resulted in the repeal of the 2017 Rule (“2020 Rule”). See Payday, Vehicle Title, and Certain High-Cost Installment Loans, 85 Fed. Reg. 44382 (July 22, 2020) (to be codified as amended at 12 C.F.R. pt. 1041). In its rationale for repealing the 2017 Rule, the 2020 Rule stated that “the 2017 Final Rule erroneously minimized the value of temporary reprieve,” and “underestimated the identified practice’s benefit to consumers.” Id. at 44412–13. With regard to re-borrowers, the 2020 Rule concluded that “there are substantial countervailing benefits from [payday lending] such as income-smoothing and avoiding a greater harm, which the 2017 Final Rule discounted.” Id. at 44412. The 2020 Rule also stated that the “2017 Final Rule would constrain rapid innovation in the market.” Id. at 44414. Based on these reconsiderations, the 2020 Rule concluded that the CFPB had erroneously conducted the countervailing benefits test in the 2017 Rule and that the Rule should not have been passed in the first place. See id. at 44408.
224. A mortgage lender’s compliance with the ability-to-repay (ATR) obligation may be “presume[d]” if the mortgage is a “qualified mortgage” (QM). Dodd–Frank Wall Street Reform and Consumer Protection Act, 15 U.S.C. § 1639c (2022). Specifically, a QM must be fully amortizing, carry a term no longer than 30 years, and cap upfront points and fees, and the lender must “verify the income and financial resources” of borrowers and consider “all applicable taxes, insurances, and assessments” in making the loan. 15 U.S.C. § 1639c(b)(2)(A)(iii)–(v) (2022). But the statute does not clarify the meaning of these words. To offer interpretive clarity and further flesh out the QM presumption, the CFPB issued a qualified mortgage rule in 2013 (“2013 QM Rule”). The 2013 QM Rule included within the QM definition a debt-to-income ratio and other measures of ATR. 12 C.F.R. § 1026.43 (2016). But the Rule met pushback from mortgage lenders on the grounds that the numerical threshold lacked empirical basis. See 78 Fed. Reg. 6408, 6529 (published Jan. 30, 2013) (to be codified as amended at 12 C.F.R. pt. 1026). In 2020, the CFPB undertook new rulemaking and added both a QM safe harbor and a QM rebuttable presumption based on floating Average Prime Offer Rates—that is, a specified threshold index published weekly reflecting the average APR offered to borrowers in the best credit risk category. See 85 Fed. Reg. 86309, 86317 (Mar. 1, 2021) (to be codified as amended at 12 C.F.R. pt. 1026).
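The tiered-presumption mechanics described above can be made concrete with a minimal sketch. The Python below is illustrative only: the 1.5- and 2.25-percentage-point APR-over-APOR spreads are assumed values drawn from commonly cited first-lien figures under the price-based approach, not a statement of the rule’s full conditions, and the function names are invented for this example.

```python
# Illustrative sketch (not the regulation's text): classifying a loan by its
# APR spread over the Average Prime Offer Rate (APOR). The thresholds below
# are assumptions for illustration, not a complete statement of the rule.

SAFE_HARBOR_SPREAD = 1.5   # assumed safe-harbor ceiling, in percentage points
GENERAL_QM_SPREAD = 2.25   # assumed General-QM ceiling for first-lien loans

def qm_status(apr: float, apor: float) -> str:
    """Return the presumption tier implied by the APR-over-APOR spread."""
    spread = apr - apor
    if spread < SAFE_HARBOR_SPREAD:
        return "QM: conclusive presumption of ATR compliance (safe harbor)"
    if spread < GENERAL_QM_SPREAD:
        return "QM: rebuttable presumption of ATR compliance"
    return "non-QM: lender must independently establish ability to repay"

# Example: a 7.4% APR loan priced against a 5.5% APOR has a 1.9-point
# spread, landing in the rebuttable-presumption band under these assumptions.
print(qm_status(apr=7.4, apor=5.5))
```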
225. Chamber of Commerce of U.S. v. CFPB, No. 6:22-cv-00381, 2023 WL 5835951, at *12 (E.D. Tex. 2023) (“[T]he court holds that the CFPB’s adoption of that position in the March 2022 manual update is beyond the agency’s constitutional authority based on an Appropriations Clause violation and beyond the agency’s statutory authority to regulate ‘unfair’ acts or practices under the Dodd-Frank Act.”).
226. Greta Krippner, Capitalizing on Crisis: The Political Origins of the Rise of Finance 145 (2012) (describing the “depoliticization of the economy” as a core feature of neoliberalism).
  227. See, e.g., Robert B. Reich, Toward a New Consumer Protection, 128 U. Pa. L. Rev. 1, 20 (1979) (arguing that regulators should view the preservation of consumer free choice as the objective of consumer protection).
228. See, e.g., Joseph Stiglitz, Government Failure vs. Market Failure: Principles of Regulation, in Governments and Markets: Toward a New Theory of Regulation 13, 22–25 (Edward J. Balleisen & David A. Moss eds., 2010).
  229. See, e.g., Daniel Castro & Alan McQuinn, How and When Regulators Should Intervene, Info. Tech. & Innovation Found., Feb. 2015, at 2.
230. See FTC, Policy Statement on Unfairness, supra note 197.
  231. The three prongs are: (1) whether the practice causes consumers to incur substantial injury; (2) whether consumers can reasonably avoid such injury; and (3) whether regulating the practice creates more benefits than costs to the market. Id. Before the FTC’s 1980 Policy Statement, the dominant factors for applying prohibition against “unfair” market practices were: (1) whether the practice injures consumers; (2) whether it violates established public policy; (3) whether it is unethical or unscrupulous. FTC v. Sperry & Hutchinson, 405 U.S. 233, 244–45 (1972).
  232. 15 U.S.C. § 45(n) (2022).
233. FTC, Policy Statement on Unfairness, supra note 197.
  234. E.g., Am. Fin. Servs. Ass’n v. FTC, 767 F.2d 957, 970 (D.C. Cir. 1985) (“In its Policy Statement, subscribed to by all five Commissioners, the FTC responded to the criticism levelled at the Commission’s implementation of its unfairness authority by delineating a concrete framework for the future application of that authority.”).
235. Reich, supra note 227, at 14.
  236. Id. at 20.
237. See, e.g., Luke Herrine, The Folklore of Unfairness, 96 N.Y.U. L. Rev. 431, 436 (2021) (“[I]nfluence organizations funded research, messaging, and lobbying outfits to promote the idea that markets are self-correcting so long as regulators do not get in the way of the ‘free choices’ of consumers. This infrastructure primarily articulated the value of market ordering—the idea of one true ‘science’ of the market—in the language of the Chicago School’s version of neoclassical welfare economics. This discourse was also promoted as the only rational and nonpaternalistic form of policy analysis.”).
238. J. Howard Beales, Director, Bureau of Consumer Prot., The FTC’s Use of Unfairness Authority: Its Rise, Fall, and Resurrection (May 30, 2003), https://www.ftc.gov/news-events/news/speeches/ftcs-use-unfairness-authority-its-rise-fall-resurrection [https://perma.cc/86HD-JJ6R].
  239. Id.
  240. See, e.g., Am. Fin. Servs. Ass’n v. FTC, 767 F.2d 957, 985–88 (D.C. Cir. 1985) (upholding the FTC’s Credit Practices Rule on the grounds that the Rule did not exceed the FTC’s “unfairness” powers). American Financial Services v. FTC influentially delineated the bounds of the FTC’s unfairness power. While the majority applied the FTC’s policy statement to uphold the challenged regulation, Am. Fin. Servs. Ass’n, 767 F.2d at 972, the dissent advocated for a version of the market-failure test. Id. at 993 (Tamm, J., dissenting) (“If the Commission has identified with sufficient clarity the impediment that blocks the market’s natural allocation, it may be appropriate for the Commission to intervene.”). Judge Tamm emphasized that “the principal limitation placed upon Commission authority is that it cannot, consistent with the Policy Statement, intervene merely because it believes the market is not producing the ‘best deal’ for consumers.” Id. at 992 (quoting majority opinion). Thus, in reviewing agency action, the court’s “first task” is to “ensure that the [agency’s] intervention is a genuine response to a market failure ‘which prevents free consumer choice from effectuating a self-correcting market.’” Id. at 993 (quoting majority opinion). To perform this task adequately, the reviewing court should “insist that the [agency] sufficiently understand and explain the dynamics of the marketplace.” Id.
241. See Krippner, supra note 226, at 145.
  242. The term “RegTech” (i.e., regulatory technology) refers to a class of software applications or algorithmic innovations for managing regulatory compliance. See generally Jake Frankenfield, RegTech: Definition, Who Uses It and Why, and Example Companies, Investopedia, https://www.investopedia.com/terms/r/regtech.asp [https://perma.cc/L576-LJCS] (last updated Aug. 27, 2020).
243. See, e.g., CFPB Acts to Protect the Public from Black-Box Credit Models Using Complex Algorithms, supra note 44.
244. See generally Jeremy Prenio & Jeffery Yong, Humans Keeping AI in Check—Emerging Regulatory Expectations in the Financial Sector 14–15 (Bank for Int’l Settlements, FSI Insights on Policy Implementation No. 35, 2021), https://www.bis.org/fsi/publ/insights35.htm [https://perma.cc/A8RE-QGUW]. But see Andrew Burt, The AI Transparency Paradox, Harv. Bus. Rev. (Dec. 13, 2019), https://hbr.org/2019/12/the-ai-transparency-paradox [https://perma.cc/63HZ-JB97] (“[I]t is becoming clear that disclosures about AI pose their own risks: Explanations can be hacked, releasing additional information may make AI more vulnerable to attacks, and disclosures can make companies more susceptible to lawsuits or regulatory action.”).
  245. See generally Angela A. Hung, Min Cong & Jeremy Burke, Effective Disclosures in Financial Decisionmaking 1 (Rand Corp., Research Paper No. RR-1270-DOL, 2015); Jeanne M. Hogarth & Ellen A. Merry, Designing Disclosures to Inform Consumer Financial Decisionmaking: Lessons Learned from Consumer Testing, Fed. Rsrv. Bull. (Oct. 21, 2011), https://www.federalreserve.gov/pubs/bulletin/2011/articles/designingdisclosures/default.htm [https://perma.cc/52TM-ZAA2].
  246. See generally Richard Thaler & Cass Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness (2008); Cynthia Weiyi Cai, Nudging the Financial Market? A Review of the Nudge Theory, 60 Acct. & Fin. 3341, 3357–60 (2020).
  247. See 12 C.F.R. § 1002.9 (2023).
  248. See Sunil Ramlochan, The Black Box Problem: Opaque Inner Workings of Large Language Models, Prompt Eng’g Inst. (Oct. 23, 2023), https://promptengineering.org/the-black-box-problem-opaque-inner-workings-of-large-language-models/ [https://perma.cc/5H37-A79P] (describing transparent “glass-box” model architectures and transparency in model training processes as potential solutions to the “black-box” problem of advanced AI technologies such as LLMs); see also Laura Blattner, P-R Stark & Jann Spiess, Machine Learning Explainability & Fairness: Insights from Consumer Lending 6 (2022), https://finreglab.org/wp-content/uploads/2022/04/FinRegLab_Stanford_ML-Explainability-and-Fairness_Insights-from-Consumer-Lending-April-2022.pdf [https://perma.cc/RD8B-KC8P] (describing the black-box nature of AI machine learning models as the reason for growing regulatory demand for AI model transparency and data input scrutiny).
249. In general, AI solutions are classified into “white-box” and “black-box” models. White-box solutions are “transparent as to how they reach a certain conclusion, with users able to view and understand which factors influenced an algorithm’s decisions and how the algorithm behaves.” Maitreya Natu, The Move to Unsupervised Learning: Where We Are Today, The New Stack (Mar. 3, 2023), https://thenewstack.io/the-move-to-unsupervised-learning-where-we-are-today/ [https://perma.cc/KRM7-ZBKS]. Decision trees and linear-regression-based models are examples of white-box solutions. Id. In contrast, black-box solutions are “far less transparent in letting users know about how a certain outcome is reached.” Id. Examples of black-box solutions include deep neural networks and boosting algorithms. Id.
  250. See Florian Perteneder, Understanding Black-Box ML Models with Explainable AI, Dynatrace Eng’g (Apr. 29, 2022), https://engineering.dynatrace.com/blog/understanding-black-box-ml-models-with-explainable-ai/ [https://perma.cc/6SEM-8KP5].
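The white-box/black-box contrast drawn in the two notes above can be shown in a few lines of code. The Python sketch below uses scikit-learn (an assumed toolchain, not one named by the cited sources) and synthetic data: the fitted decision tree’s rules can be printed verbatim as if/else statements, while the boosted ensemble offers no comparably direct account of any individual prediction.

```python
# Minimal sketch of the white-box vs. black-box distinction, on synthetic
# stand-ins for underwriting features. All names here are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=4, random_state=0)

# White-box: the fitted tree renders as human-readable decision rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f"f{i}" for i in range(4)]))

# Black-box: hundreds of shallow trees vote together, so no single rule
# set explains an individual denial -- the gap post-hoc explainability
# tools attempt to fill.
ensemble = GradientBoostingClassifier(n_estimators=300, random_state=0).fit(X, y)
print(ensemble.predict(X[:1]))
```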
  251. 88 Fed. Reg. 35150 (May 31, 2023) (to be codified at 12 C.F.R. pt. 1002); see also CFPB Finalizes Rule to Create a New Data Set on Small Business Lending in America, CFPB (Mar. 30, 2023), https://www.consumerfinance.gov/about-us/newsroom/cfpb-finalizes-rule-to-create-a-new-data-set-on-small-business-lending-in-america/ [https://perma.cc/SAP2-ABG2].
  252. 88 Fed. Reg. 35150, 35459–60 (May 31, 2023) (to be codified at 12 C.F.R. pt. 1002).
  253. See, e.g., Talia B. Gillis, The Input Fallacy, 106 Minn. L. Rev. 1175, 1228 (2022) (“[F]ormal exclusion of a protected characteristic may be meaningless with respect to the ability of an algorithm to actually use the characteristics. Even if an algorithm does not seek to recover the information—that is, even if it never tries to derive race or marital status—such characteristics are available to it because they are so embedded in the rest of the data.”).
  254. See, e.g., Prince & Schwarcz, supra note 137, at 1270–72.
  255. See James Vincent, Apple’s Credit Card is Being Investigated for Discriminating Against Women, The Verge (Nov. 11, 2019), https://www.theverge.com/2019/11/11/20958953/apple-credit-card-gender-discrimination-algorithms-black-box-investigation [https://perma.cc/R7KY-H49D].
256. Will Knight, The Apple Card Didn’t ‘See’ Gender—and That’s the Problem, Wired (Nov. 19, 2019), https://wired.com/story/the-apple-card-didnt-see-genderand-thats-the-problem/ [https://web.archive.org/web/20191119174621/https://wired.com/story/the-apple-card-didnt-see-genderand-thats-the-problem/].
  257. See generally Prince & Schwarcz, supra note 137, at 1275 (“[U]nintentional proxy discrimination by AIs cannot be avoided merely by depriving the AI of information on individuals’ membership in legally suspect classes or obvious proxies for such group membership. . . . AIs can and will use training data to derive less intuitive proxies for directly predictive characteristics when they are deprived of direct data on these characteristics due to legal prohibitions.”).
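A brief simulation can illustrate the proxy problem that Gillis and Prince & Schwarcz describe in the notes above. In the hedged Python sketch below, every feature name and correlation is invented: the protected attribute is never given to the model, yet a simple classifier recovers it from correlated “neutral” features, which is precisely why input exclusion alone cannot guarantee blindness.

```python
# Hedged sketch of proxy reconstruction; the correlation structure is
# fabricated for illustration and stands in for patterns like zip code
# or spending habits in real credit data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
protected = rng.integers(0, 2, n)             # hypothetical protected class
zip_signal = protected + rng.normal(0, 0.5, n)    # correlated "neutral" proxy
spend_signal = protected + rng.normal(0, 0.8, n)  # another correlated proxy
income = rng.normal(0, 1, n)                      # genuinely independent feature
X = np.column_stack([zip_signal, spend_signal, income])

# The protected attribute is excluded from X; we try to predict it anyway.
X_tr, X_te, p_tr, p_te = train_test_split(X, protected, random_state=0)
clf = LogisticRegression().fit(X_tr, p_tr)
print(f"protected class recovered with {clf.score(X_te, p_te):.0%} accuracy")
# Well above the 50% chance rate: the attribute is embedded in the proxies,
# so formally excluding it from the inputs does not remove it from the data.
```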
258. N.Y. State Dep’t of Fin. Servs., Report on Apple Card Investigation 2, 6 (2021) (“[T]he Department’s exhaustive review of documentation and data provided by the Bank and Apple, along with numerous interviews of consumers who complained of possible discrimination, did not produce evidence of deliberate or disparate impact discrimination but showed deficiencies in customer service and transparency . . . .”).
  259. In response to the failures of traditional fair-lending frameworks to address the risk of AI bias, some legal scholars have proposed “algorithmic affirmative action.” See generally Peter N. Salib, Big Data Affirmative Action, 117 Nw. U. L. Rev. 821, 821 (2022) (“[U]nlike old-fashioned affirmative action, Big Data Affirmative Action would be automated, algorithmic, and precise.”).
  260. See generally Ben Charoenwong, Zachary Kowaleski, Alan P. Kwan & Andrew Sutherland, RegTech: What It Is and Why It Matters, U. of Oxford Bus. L. Blog (Feb. 23, 2022), https://blogs.law.ox.ac.uk/business-law-blog/blog/2022/02/regtech-what-it-and-why-it-matters [https://perma.cc/3WUN-NC28].
261. See, e.g., Technology Based Innovations for Regulatory Compliance (“RegTech”) in the Securities Industry, FINRA (Sept. 10, 2018), https://www.finra.org/sites/default/files/2018_RegTech_Report.pdf [https://perma.cc/6SVN-AQG4].
  262. See, e.g., Francois-Kim Hugé, Carlo Duprel & Giulia Pescatore, The Promise of RegTech, Inside Mag. (June 27, 2017), https://www.gaco.gi/images/pdf/2017-june/lu-the-promise-regtech-27032017.pdf [https://perma.cc/KN8D-6FEV].
263. See, e.g., Vivienne Brand, Corporate Whistleblowing, Smart Regulation, and RegTech: The Coming of the Whistlebot?, 43 U. New S. Wales L.J. 801, 826 (2020) (“[T]his article suggests that given whistleblowing’s particular vulnerability as a corporate regulatory device to the vicissitudes of human existence, the arrival of technology enhanced whistleblowing may ultimately be more significant for whistleblowing than for some other fields of human endeavor.”).
264. See FinRegLab, Data Diversification in Credit Underwriting 7 (2020), https://finreglab.org/data-diversification-in-credit-underwriting [https://perma.cc/3J7A-K8RS]. The top U.S. financial data aggregators include MX Technologies, Finicity, Yodlee, Plaid, and Akoya. See A List of Financial Data Aggregators in the United States, MX Blog (Nov. 18, 2023), https://www.mx.com/blog/a-list-of-financial-data-aggregators-in-the-united-states/ [https://perma.cc/3HZ9-LSVW].
265. See FinRegLab, Data Diversification in Credit Underwriting, supra note 264, at 7.
  266. See id.
  267. Id. at 8.
  268. Ben Green & Salomé Viljoen, Algorithmic Realism: Expanding the Boundaries of Algorithmic Thought, 2020 Conf. on Fairness, Accountability, and Transparency 19, 23, https://dl.acm.org/doi/pdf/10.1145/3351095.3372840 [https://perma.cc/F2Q2-HX9Z] (“Another attribute of algorithmic formalism is internalism: only considerations that are legible within the language of algorithms—e.g., efficiency and accuracy—are recognized as important design and evaluation considerations.”).
  269. See id. at 23 (discussing algorithmic formalism, in which efforts to address ethics within the technology industry are overly reliant on technology itself); Jimmy Wu, Optimize What?, Commune (Mar. 15, 2019), https://communemag.com/optimize-what/ [https://perma.cc/HB3N-HXKZ] (emphasis added) (“Techno-solutionism is the very soul of the neoliberal policy designer, fetishistically dedicated to the craft of incentive alignment and (when necessary) benevolent regulation. Such a standpoint is the effective outcome of the contemporary computational culture and its formulation as curriculum.”).
  270. Viljoen, supra note 36, at 628.
  271. See generally Peter Leonard, Beyond Data Privacy: Data “Ownership” and Regulation of Data-Driven Business, 16 SciTech Law. 10, 13–14 (2020).
  272. See generally Jesse Wall, Taking the Bundle of Rights Seriously, 50 Victoria U. Wellington L. Rev. 733, 735 (2019) (quoting Tony Honore, Ownership, in Making Law Bind: Essays Legal and Philosophical 161 (1987)) (explaining that, under the “bundle of rights theory,” property rights represent “‘an open-ended set’ of ‘activities’ or ‘privileges’, that include the ability to possess, consume, derive income from, control, manage, transfer, exchange, sell, borrow against, or otherwise use, the object, asset, or resource”).
273. The Supreme Court has consistently treated the right to exclude as the hallmark of property ownership. See, e.g., Pruneyard Shopping Ctr. v. Robins, 447 U.S. 74, 82 (1980) (“[O]ne of the essential sticks in the bundle of property is the right to exclude.”); Kaiser Aetna v. United States, 444 U.S. 164, 180–81 (1979) (stating that the right to exclude is the most important stick in the bundle of property rights).
  274. Former Presidential Candidate Andrew Yang has launched the Data Dividend Project to push companies like Meta and Google to pay users a “data dividend” for the wealth that these companies have generated through the commercialization of user data. See The Data Dividend Project, https://www.datadividendproject.com/ [https://perma.cc/DUE2-UDPQ].
  275. House Representative Alexandria Ocasio-Cortez has also posited data ownership as a potential solution to wealth inequality. Alexandria Ocasio-Cortez (@AOC), X (Feb. 19, 2020, 11:43 PM), https://twitter.com/AOC/status/1230352135335940096 [https://perma.cc/4YVZ-P3EE] (“[T]he reason many tech platforms have created billionaires is [because] they track you without your knowledge, amass your personal data [and] sell it without your express consent. You don’t own your data, [and] you should.”).
276. There are still several reasons to be skeptical of propertarian data-governance reforms. The first is administrability: operationalizing a reform at this scale would require significant political mobilization and legislative support. The second is incentives: making data into personal property or some other income-generating asset may further encourage consumers to share data about themselves online and sell it to data aggregators. See Viljoen, supra note 36, at 621–23.
277. Although the concept of “data” is already defined under existing data-governance laws, those broad definitions do not preclude legal arguments analogizing data to other objects of ownership. For example, under the GDPR, personal data is defined as “any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural, or social identity of that natural person.” Regulation (EU) 2016/679, art. 4(1), 2016 O.J. (L 119) 1.
  278. See generally Mathias Risse, Data as Collectively Generated Patterns: Making Sense of Data Ownership 4 (Carr Ctr. for Hum. Rights Pol’y, Harv. Kennedy Sch., Discussion Paper, 2021), https://carrcenter.hks.harvard.edu/files/cchr/files/210426-data_ownership.pdf [https://perma.cc/ME2W-ZVEX].
  279. Id. at 1.
  280. See, e.g., Nisha Talagala, Data as the New Oil Is Not Enough: Four Principles for Avoiding Data Fires, Forbes (Mar. 2, 2022, 5:48 PM), https://www.forbes.com/sites/nishatalagala/2022/03/02/data-as-the-new-oil-is-not-enough-four-principles-for-avoiding-data-fires/?sh=45c7db1fc208 [https://perma.cc/GW9E-RM8H].
  281. Id. When we speak of data being “mined,” we are implicitly subscribing to the idea that data can be extracted from the natural state.
282. Risse, supra note 278, at 4; Lauren Henry Scholz, Big Data Is Not Big Oil: The Role of Analogy in the Law of New Technologies, 86 Tenn. L. Rev. 863, 878–84 (2019).
283. Risse, supra note 278, at 5.
  284. See id.
  285. Salvage, Legal Info. Inst., https://www.law.cornell.edu/wex/salvage [https://perma.cc/LS2A-646K] (last updated July 2021).
  286. See Joshua C. Teitelbaum, Inside the Blackwall Box: Explaining U.S. Marine Salvage Awards, 22 Sup. Ct. Econ. Rev. 55, 56 (2014).
  287. See Will Kenton, Salvage Value Meaning and Example, Investopedia, https://www.investopedia.com/terms/s/salvagevalue.asp [https://perma.cc/XY2S-HMAD] (last updated Apr. 17, 2023).
288. Risse, supra note 278, at 5.
289. Id. (citing John Locke, Second Treatise of Government § 27 (“Whatsoever then he removes out of the state that nature hath provided, and left it in, he hath mixed his labour with, and joined it to something that is his own, and thereby makes it his property.”)).
  290. See Eugene K. Kim, Data as Labor: Retrofitting Labor Law for the Platform Economy, 23 Minn. J.L. Sci. & Tech. 131, 137–40 (2021).
291. Eric A. Posner & E. Glen Weyl, Radical Markets: Uprooting Capitalism and Democracy for a Just Society 205–22 (2018); see also Imanol Arrieta Ibarra, Len Goff, Diego Jiménez Hernández, Jaron Lanier & E. Glen Weyl, Should We Treat Data as Labor? Let’s Open Up the Discussion, Brookings Inst. (Feb. 21, 2018), https://www.brookings.edu/articles/should-we-treat-data-as-labor-lets-open-up-the-discussion/#:~:text=We%20argue%20that%20thinking%20of,treating%20labor%20differently%20than%20capital [https://perma.cc/3FMK-GZAV].
292. Posner & Weyl, supra note 291, at 209.
293. Risse, supra note 278, at 6.
  294. Id.
  295. See id.
  296. See id. at 9.
  297. Id.
  298. See id.
299. See, e.g., Wendy J. Gordon, A Property Right in Self-Expression: Equality and Individualism in the Natural Law of Intellectual Property, 102 Yale L.J. 1533, 1535 (1993) (arguing that a properly conceived natural-rights theory of IP would provide significant protection for free speech interests, including the right of self-expression); Justin Hughes, The Philosophy of Intellectual Property, 77 Geo. L.J. 287, 329–30 (1988) (invoking John Locke’s labor theory of property entitlement to justify individual rights in IP).
300. For a survey of recent IP scholarship exploring collective IPR, see Enyinna S. Nwauche, The Emerging Right to Communal Intellectual Property, 19 Marq. Intell. Prop. L. Rev. 221 (2015).
  301. See Feist Publ’ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 345 (1991).
  302. See 17 U.S.C. § 102(b) (2022).
  303. See 17 U.S.C. § 102(a)(1) (2022).
304. See Feist Publ’ns, 499 U.S. at 348 (“Factual compilations … may possess the requisite originality. The compilation author typically chooses which facts to include, in what order to place them, and how to arrange the collected data so that they may be used effectively by readers. These choices as to selection and arrangement, so long as they are made independently by the compiler and entail a minimal degree of creativity, are sufficiently original that Congress may protect such compilations through the copyright laws.”).
  305. 1996 O.J. (L 77).
  306. Ranjit Kumar G., Database Protection—The European Way and Its Impact on India, 45 IDEA: J.L. & Tech. 97, 109 (2005) (“The sui-generis right applies irrespective of the database’s eligibility for copyright or other protection.”).
  307. See 1996 O.J. (L 77) ch. III.
  308. Id. at art. 7(1).
309. See generally J.H. Reichman & Paul F. Uhlir, Database Protection at the Crossroads: Recent Developments and Their Impact on Science and Technology, 14 Berkeley Tech. L.J. 793, 798, 812–19 (1999) (arguing against congressional enactment of a sui generis database right similar to that adopted by the EU, on the grounds that it may increase transaction costs for licensing, impede scientific research, and decrease access to public data).
  310. See generally Paul Keller, A Vanishing Right? The Sui Generis Database Right and the Proposed Data Act, Kluwer Copyright Blog (Mar. 4, 2022), https://copyrightblog.kluweriplaw.com/2022/03/04/a-vanishing-right-the-sui-generis-database-right-and-the-proposed-data-act/ [https://perma.cc/R6J7-6EUS] (noting that the European Commission’s Data Act “does not apply to databases containing data obtained from or generated by the use of a connected device”).
  311. “Infrastructure” is broadly defined as “structured arrangements that facilitate, undergird, shape, and normalize the conditions of possibility for human activity over spaces and across scales.” These arrangements represent “critical locations through which sociality, governance and politics, accumulation and dispossession, and institutions and aspirations are formed, reformed, and performed.” Julie E. Cohen, Infrastructuring the Digital Public Sphere, 25 Yale J.L. & Tech. Special Issue 1, 16 (2023).
  312. Legal scholarship on online speech and digital expression has long argued that the internet should support a digital public domain. For a sampling, see David R. Johnson & David Post, Law and Borders—The Rise of Law in Cyberspace, 48 Stan. L. Rev. 1367 (1996) (introducing the notion of the public cyberspace as an important area of regulation); Yochai Benkler, Free as the Air to Common Use: First Amendment Constraints on Enclosure of the Public Domain, 74 N.Y.U. L. Rev. 354 (1999) (arguing that the Digital Millennium Copyright Act, the proposed Article 2B of the Uniform Commercial Code governing computer contracts, and the Collections of Information Antipiracy Act collectively represent an enclosure movement to privatize the digital public domain); Lawrence Lessig, The Future of Ideas: The Fate of the Commons in a Connected World (2002) (explaining how the internet revolution produced a counter-revolution led by corporations, which established themselves as the owners of the internet and gatekeepers of the digital public domain).
  313. See generally Elinor Ostrom, Governing the Commons: The Evolution of Institutions for Collective Action (1990); Elinor Ostrom & Vincent Ostrom, A Theory for Institutional Analysis of Common Pool Problems, in Managing the Commons 157, passim (Garrett Hardin & John Baden eds., 1977).
  314. See generally Michael A. Heller, The Tragedy of the Anticommons: Property in the Transition from Marx to Markets, 111 Harv. L. Rev. 621 (1998).
  315. The Human Genome Project, Nat’l Hum. Genome Rsch. Inst., https://www.genome.gov/human-genome-project [https://perma.cc/Y4JW-438M].
  316. See Viljoen, supra note 36, at 645.
  317. See, e.g., Dame Wendy Hall & Jérôme Pesenti, Growing the Artificial Intelligence Industry in the UK, Gov.UK (Oct. 15, 2017), https://assets.publishing.service.gov.uk/media/5a824465e5274a2e87dc2079/Growing_the_artificial_intelligence_industry_in_the_UK.pdf [https://perma.cc/83XE-CVFF]; Ontario Launches Consultations to Strengthen Privacy Protections of Personal Data, Ontario (Aug. 13, 2020), https://news.ontario.ca/en/release/57985/ontario-launches-consultations-to-strengthen-privacy-protections-of-personal-data [https://perma.cc/F8ZL-QZUL]; Data Trusts: Lessons from Three Pilots, Open Data Inst. (Apr. 15, 2019), https://theodi.org/news-and-events/blog/odi-data-trusts-report/ [https://perma.cc/98LT-GTCJ].
  318. See Peter Wells, UK’s First Data Trusts to Tackle Illegal Wildlife Trade and Food Waste, Open Data Inst. (Jan. 31, 2019), https://theodi.org/news-and-events/news/uks-first-data-trusts-to-tackle-illegal-wildlife-trade-and-food-waste/ [https://perma.cc/BW5H-P2HM] (“Data trusts work by allowing people or organisations to give some control over data to a new institution, or ‘trust,’ so it can be used to create benefits for themselves or others, or both.”).
319. See CFPB, CFPB Report Details How the Nation’s Largest Credit Bureaus Manage Consumer Data, CFPB Newsroom (Dec. 13, 2012), https://www.consumerfinance.gov/about-us/newsroom/consumer-financial-protection-bureau-report-details-how-the-nations-largest-credit-bureaus-manage-consumer-data/#:~:text=Equifax%2C%20Experian%2C%20and%20TransUnion%20each,that%20supply%20data%20on%20consumers [https://perma.cc/294L-8C4L] (“Equifax, Experian, and TransUnion each have more than 200 million files on consumers. In a typical month, they receive updates from approximately 10,000 information ‘furnishers,’ which are entities that supply data on consumers. The furnishers do this on more than 1.3 billion ‘trade lines,’ which are individual information sources on a consumer report such as a consumer’s accounts for a car loan, mortgage loan, or credit card.”).
  320. See, e.g., Credit Score Consolidation with Equifax Data, Demyst, https://demyst.com/external-data/use-case/credit-score-consolidation/equifax [https://perma.cc/GSW6-MCMB].
  321. See, e.g., Jennifer Shkabatur, The Global Commons of Data, 22 Stan. Tech. L. Rev. 354, 399–402 (2019).
  322. Joseph D. Kearney & Thomas W. Merrill, The Great Transformation of Regulated Industries Law, 98 Colum. L. Rev. 1323, 1331 (1998).
323. See K. Sabeel Rahman, Regulating Informational Infrastructure: Internet Platforms as the New Public Utilities, 2 Geo. L. Tech. Rev. 234, 236 (2018) (“In economistic terms, public control over infrastructure is warranted in conditions of natural monopoly, where high sunk costs and increasing returns to scale suggest that private market competition is likely to under-supply the good in question.”).
324. Munn v. Illinois, 94 U.S. 113, 129 (1876) (quoting Allnutt v. Inglis [1810] 104 Eng. Rep. 206, 12 East 527, 541) (“[W]hen private property is affected with a public interest, it ceases to be juris privati only and, in case of its dedication to such a purpose as this, the owners cannot take arbitrary and excessive duties, but the duties must be reasonable.”). For further details on the modern economic theory of public utilities regulation, see generally Charles F. Phillips Jr., The Regulation of Public Utilities (3d ed. 1993).
325. Shkabatur, supra note 321, at 400.
  326. See, e.g., Digit. Pub. Goods Alliance et al., Exploring Data as and in Service of the Public Good 5 (2023), https://digitalpublicgoods.net/PublicGoodDataReport.pdf [https://perma.cc/M83L-N8XP] (describing data in which open access would create “security risks”).
  327. Id. at 8.
328. See Shkabatur, supra note 321, at 394.
  329. Eur. Parl. Doc. (COM 2017/09 final) 12.
  330. See Loi 2016-1321 du 7 octobre 2016 pour une République numérique [Law 2016-1321 of October 7, 2016 Digital Republic Act], Journal Officiel de la République Française [J.O.] [Official Gazette of France], Oct. 7, 2016, p. 96.
  331. See generally Eun Seo Jo & Timnit Gebru, Lessons from Archives: Strategies for Collecting Sociocultural Data in Machine Learning, Fairness, Accountability & Transparency, Jan. 2020, at 3.