Abstract
Principal-agent problems are central to debates over algorithmic transparency, yet they remain underexplored. Concerns about gaming and trade secrecy, frequently invoked to justify opacity, can also mask self-serving strategic behavior by decision-makers, who may prioritize private benefits over social welfare. This article develops a principal-agent test for determining when algorithmic decision-making should trigger disclosure obligations even where concerns about gaming or trade secrecy exist, and it proposes that disclosure requirements be anchored in decision quality.
The article introduces structured patterns of error types and the distribution of decision outcomes across groups as diagnostic tools for detecting principal-agent misalignment, extending their role beyond fairness metrics. These diagnostics reveal whether decision-makers align algorithmic design and implementation with the public interest or instead use opacity to obscure bias, inefficiency, or strategic shirking. Strong disparities across social groups, and asymmetries in the distribution of decision errors across outcome types, should trigger rebuttable presumptions in favor of further disclosure obligations.
This principal-agent framework reorients regulatory attention toward institutional design questions that transparency debates have so far neglected, and it informs the adoption of differentiated disclosure regimes: staged, partial, mediated, or audience-segmented. By reframing opacity debates through the lens of principal-agent theory, the article challenges long-standing assumptions about algorithmic secrecy: it shows that, once differentiated disclosure regimes are considered, the domain of justifiable algorithmic opacity is significantly narrower than commonly assumed. Even when gaming or trade-secrecy risks are legitimate, actionable disclosures (such as training data characteristics and key system design parameters) can ordinarily be mandated without compromising innovation or enabling gaming. This approach offers an implementable regulatory pathway for AI transparency in both the public and private sectors.
By developing burden-shifting presumptions about what types of opacity are impermissible, what minimum disclosures are required, and how to weigh trade-secrecy and gaming claims, the article provides policymakers and courts with a framework for structuring tailored disclosure orders, evaluating secrecy claims, and allocating the burden of proof in transparency disputes. The analysis also shows that meaningful disclosures are possible even for technically inscrutable algorithms.
