The Algorithmic Accountability Act, Bias, and Tech
September 20, 2022 | 4 minute read
As artificial intelligence continues to shape new technology on an international scale, businesses around the world are using machine learning algorithms to automate processes and decisions that have traditionally been made by human beings. From healthcare to housing to education, these algorithms now influence decisions that affect people's lives on a daily basis. Yet for all the benefits of artificial intelligence and machine learning, the technology is not free of the biases that can be introduced during the development process that produces such algorithms.
For example, numerous studies have found that facial recognition algorithms identify the facial features of people with darker skin less accurately than those of people with lighter skin. For this reason, Congress is currently considering a new law that would charge the Federal Trade Commission (FTC) with regulating “high-risk systems that involve personal information or make automated decisions, such as systems that use artificial intelligence or machine learning.” First introduced in 2019, the Algorithmic Accountability Act would require businesses in certain sectors, such as education, housing, and employment, to conduct privacy impact assessments to ensure that the automated systems they rely on to make decisions are free of bias.
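To make the kind of disparity those studies describe concrete, here is a minimal sketch of one common way a bias audit quantifies it: comparing false negative rates across demographic groups. The data, group labels, and function name below are purely illustrative assumptions, not taken from any actual study or from the bill.

```python
# A minimal, hypothetical sketch of quantifying the disparity described
# above: compare per-group error rates from labeled evaluation results.
# The data below is illustrative, not drawn from any real study.

from collections import defaultdict

def false_negative_rates(results):
    """Compute the false negative rate for each demographic group.

    `results` is a list of (group, matched) tuples, where `matched` is True
    when the system correctly recognized a face it should have recognized.
    """
    totals = defaultdict(int)
    misses = defaultdict(int)
    for group, matched in results:
        totals[group] += 1
        if not matched:
            misses[group] += 1
    return {group: misses[group] / totals[group] for group in totals}

# Illustrative evaluation results (hypothetical).
evaluation = [
    ("lighter-skin", True), ("lighter-skin", True), ("lighter-skin", True),
    ("lighter-skin", False),
    ("darker-skin", True), ("darker-skin", False), ("darker-skin", False),
    ("darker-skin", False),
]

rates = false_negative_rates(evaluation)
for group, rate in sorted(rates.items()):
    print(f"{group}: false negative rate = {rate:.0%}")
# A large gap between groups is the kind of disparity an impact
# assessment under the Act would be expected to surface.
```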
How is high-risk AI defined under the law?
Under the current draft version of the Algorithmic Accountability Act, “high-risk automated decision systems include those that (1) may contribute to inaccuracy, bias, or discrimination; or (2) facilitate decision-making about sensitive aspects of consumers’ lives by evaluating consumers’ behavior. Further, an automated-decision system, or information system involving personal data, is considered high-risk if it (1) raises security or privacy concerns, (2) involves the personal information of a significant number of people, or (3) systematically monitors a large, publicly accessible physical location.”
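Because the quoted definition is a disjunction of several criteria, it can be read as a checklist: a system is high-risk if any one of the conditions applies. The sketch below renders that reading in code; the flag names are our own paraphrase of the statutory language, since the bill defines high-risk systems in prose, not as a programmatic test.

```python
# A minimal, hypothetical checklist rendering of the quoted definition.
# The boolean flags paraphrase the statutory criteria; they are our own
# shorthand, not terms defined in the bill.

def is_high_risk(
    may_contribute_to_bias_or_discrimination: bool,
    evaluates_sensitive_aspects_of_consumers_lives: bool,
    raises_security_or_privacy_concerns: bool,
    involves_personal_info_of_many_people: bool,
    monitors_large_public_physical_location: bool,
) -> bool:
    """Return True when any of the quoted high-risk criteria applies."""
    return any([
        may_contribute_to_bias_or_discrimination,
        evaluates_sensitive_aspects_of_consumers_lives,
        raises_security_or_privacy_concerns,
        involves_personal_info_of_many_people,
        monitors_large_public_physical_location,
    ])

# Example: a tenant-screening model that evaluates consumer behavior and
# processes the personal information of a large number of applicants.
print(is_high_risk(False, True, True, True, False))  # True
```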
The Algorithmic Accountability Act also establishes requirements for the privacy impact assessments that must be conducted to determine whether a high-risk automated decision-making system may be prone to making biased decisions. Under the law, these assessments must include the following elements (a minimal sketch of how they might be recorded follows the list):
- A detailed description of the automated decision-making system being assessed.
- An assessment of the relative costs and benefits of using the system.
- A determination of the risks the system poses to privacy and the protection of personal information.
- The steps the business or organization has taken, or is taking, to mitigate any risks uncovered during the course of the assessment.
- The extent to which the system protects the personal information and privacy of American consumers.
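As a rough illustration, a business might capture those five elements in a structured record like the one below. The field names and example values are our own shorthand for the statutory elements, not terms defined in the Act.

```python
# A minimal sketch of recording the required assessment elements in a
# structured form. Field names are our own shorthand, not from the bill.

from dataclasses import dataclass

@dataclass
class PrivacyImpactAssessment:
    system_description: str      # detailed description of the system assessed
    cost_benefit_summary: str    # relative costs and benefits of using it
    privacy_risks: list[str]     # risks to privacy and personal information
    mitigations: list[str]       # steps taken or planned to reduce those risks
    consumer_protections: str    # how the system protects consumers' data

# Hypothetical example for an employment-sector system.
assessment = PrivacyImpactAssessment(
    system_description="Resume-screening model that ranks job applicants",
    cost_benefit_summary="Faster screening vs. risk of encoding past bias",
    privacy_risks=["Profiling applicants based on inferred demographics"],
    mitigations=["Annual bias audit", "Human review of all rejections"],
    consumer_protections="Applicant data retained for 90 days, then deleted",
)
print(assessment.system_description)
```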
A public repository
In addition to requiring businesses and organizations that use automated decision-making systems to conduct privacy impact assessments that check for potential discrimination or bias, the Algorithmic Accountability Act would establish a public repository of such systems within the FTC. This repository would promote transparency and accountability between businesses that use AI and machine learning algorithms and the customers who buy their goods and services, since the means by which a particular AI system arrives at a conclusion has historically been treated as a trade secret and, as such, kept confidential.
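The bill does not specify what a repository record would contain, but the hypothetical sketch below suggests how such an entry could expose enough for transparency, such as who operates what system, in which sector, and when it was last assessed, without disclosing the model internals a vendor treats as a trade secret. The schema and values are our own invention.

```python
# A hypothetical public repository record. The schema is our own
# invention; the bill does not define record fields.

import json

registry_entry = {
    "system_name": "TenantScreen v2",              # hypothetical system
    "operator": "Example Property Group",          # hypothetical business
    "decision_domain": "housing",                  # a sector the Act covers
    "automated_decisions": ["rental application approval"],
    "last_impact_assessment": "2022-06-01",
    "bias_findings_summary": "Disparity in approval rates under review",
}

print(json.dumps(registry_entry, indent=2))
```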
While a single piece of legislation will not eliminate AI systems that make biased decisions, the enactment of the Algorithmic Accountability Act would nevertheless be a step in the right direction. AI and machine learning will only become more deeply integrated into business models around the globe, and will shape new business models that have yet to emerge. It is therefore imperative that these systems be regulated in some form, as there is already ample evidence that a biased algorithm can arrive at decisions that prove disastrous for customers and businesses alike.