The IRS, New Facial Recognition, Bias and Privacy Concerns

On Monday, February 7, 2022, the Internal Revenue Service (IRS) announced plans to discontinue its contract with ID.me, "an American online identity network that allows people to prove their legal identity online." In mid-January of 2022, the IRS had announced that American taxpayers looking to engage in certain tax-related activities online, such as receiving the Child Tax Credit or individual tax transcripts, would need to upload a selfie to ID.me's facial recognition software service to confirm their identity.

While the implementation and use of facial recognition software for identity verification are by no means new developments, the announcement that taxpayers would need to verify their identity via such means has led to both public outcry and privacy concerns.

Why was the IRS’s decision to use facial recognition controversial?

One of the primary concerns raised when the IRS announced its plans to require American taxpayers to verify their identity via ID.me's facial recognition software was how these biometric identifiers would be protected and secured.

When American citizens file their taxes, they must submit various forms of personal identification and information. Because the IRS is a U.S. government agency, specific laws and regulations govern how it collects and uses that information. ID.me, however, is a third-party company that various U.S. federal agencies use to verify the identities of individuals utilizing government services, and many citizens questioned the data security and personal privacy protocols the company would adhere to.

Another major concern regarding the IRS and ID.me's facial recognition software was the bias and discrimination that has historically been associated with such technologies. Because many people hold some level of bias, whether conscious or unconscious, the professionals who develop the algorithms and machine learning techniques used to build facial recognition software and similar technology offerings can embed that bias into the programs themselves. Furthermore, unlike retraining an employee who has been found to exhibit bias in the course of their work, correcting a biased software program can cost millions of dollars and take months to implement.

To this point, as Albert Fox Cahn, executive director at the Surveillance Technology Oversight Project, stated, “Anytime we see biometric data being collected, it’s problematic. The fear is that we will see bias in this algorithm, as we have seen a lot of algorithms in the past that have human review but are still sort of broken or biased.”

What's more, when the IRS initially announced its plans to contract out identity verification to ID.me, the online identity network stated that it would use one-to-one face match technology rather than one-to-many facial recognition, as the latter has been linked to racial bias in the past. However, ID.me later revealed that, despite its initial claims, the company did use one-to-many facial recognition as a final verification step.
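The distinction between the two approaches can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (it is not ID.me's actual system): the embedding vectors, gallery names, and similarity threshold are all invented stand-ins, and real systems compute face embeddings with a learned model rather than hand-written arrays. The key difference is that one-to-one verification compares a selfie against the single record for the identity a user claims, while one-to-many identification searches the selfie against an entire database of enrolled faces.

```python
import numpy as np

# Hypothetical similarity cutoff for a "match"; real systems tune this
# threshold empirically, and bias can appear in how it performs across groups.
THRESHOLD = 0.9

def cosine_similarity(a, b):
    """Similarity between two face-embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(probe, claimed_template):
    """1:1 verification: compare the selfie only against the template
    stored for the identity the user claims to be."""
    return cosine_similarity(probe, claimed_template) >= THRESHOLD

def identify_one_to_many(probe, gallery):
    """1:N identification: search the selfie against every enrolled face
    and return the best match above the threshold, if any."""
    best_id, best_score = None, -1.0
    for identity, template in gallery.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= THRESHOLD else (None, best_score)

# Toy "embeddings" standing in for vectors a face-recognition model would produce.
gallery = {
    "alice": np.array([0.9, 0.1, 0.2]),
    "bob":   np.array([0.1, 0.95, 0.3]),
}
probe = np.array([0.88, 0.12, 0.21])  # a selfie resembling "alice"

print(verify_one_to_one(probe, gallery["alice"]))   # checks one record only
print(identify_one_to_many(probe, gallery))         # scans the whole gallery
```

The privacy implications differ accordingly: a 1:1 check only needs the one template the user claims, while a 1:N search requires access to, and comparison against, the full biometric database, which is part of why the latter drew sharper criticism.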

Why did the IRS decide to discontinue the use of facial recognition for certain tax-related activities?

Following weeks of public concern and criticism over the IRS's decision to require certain American taxpayers to verify their identity via facial recognition software, the agency ultimately decided to back away from its contract with ID.me. A significant reason for this was ID.me's failure to maintain transparency with the American public about the types of facial recognition technology that would be used to verify individuals. Critics argued that, despite several university studies as well as a 2019 U.S. Department of Commerce report documenting racial bias in facial recognition systems, ID.me failed to "adequately address its known harms or deeply engage with specific findings that indicate substantial racial bias." The company nonetheless remained defiant.

Another major reason for the IRS's decision was pressure from U.S. lawmakers. On February 7, 2022, "Four congressional Democrats wrote to IRS Commissioner Chuck Rettig on Monday urging the agency to pause its use of facial-recognition technology for taxpayers logging into their IRS.gov accounts, citing concerns about privacy, data security, and access for people without internet access." In their letter to Rettig, Reps. Ted Lieu, Anna Eshoo, Pramila Jayapal, and Yvette Clarke stated that using a third party to verify taxpayers' identity endangers them by compiling sensitive information into a biometric database that would be "a prime target for cyberattacks."

While biometric identification and facial recognition software can have many beneficial uses for consumers within American society, deploying such technologies in the context of tax-related services opens the door to invasions of privacy and, in some cases, bias and racial discrimination. Because every working American citizen must file taxes, the sheer amount of information ID.me would have access to represents a privacy concern in and of itself. As machine learning algorithms and artificial intelligence are still developing technologies, software developers will need to continue improving them before they can be safely and effectively used in significant government functions involving citizens, since even minor flaws in such software can lead to disastrous consequences.
