Big Tech knows that age verification is necessary
State legislatures are leading a revolution to transform the experience of the internet, and with it, American childhood. In the past year, starting with Utah’s Social Media Regulation Act, S.B. 152, Arkansas, Louisiana, Texas and Virginia have all passed laws requiring parental consent and age verification for minors to open social media accounts or access pornography. By giving parents greater control over their kids’ exposure to social media and pornography, with their documented ill effects, these laws will improve kids’ lives.
But for these laws to prevail, age verification must be shown to be both effective and capable of preserving user privacy. The platforms argue, relying on Supreme Court precedents from the 1990s, that these laws are unconstitutional. Yet those precedents, which informed the Supreme Court's 2004 ruling in Ashcroft v. ACLU striking down a federal law requiring age verification for pornography sites, rest on factual predicates that are now demonstrably false, such as the claims (per Reno v. ACLU) that "the Internet is not as 'invasive' as radio or television" and that "[u]sers seldom encounter [pornographic] content by accident." On that last point, a recent report by Common Sense Media found that more than half of today's teens (58 percent) have encountered pornography accidentally.
Sensing their weak legal position—particularly with the age-verification laws for social media, which regulate children’s ability to contract online, not speech—the Big Tech lobbyists are now shifting from a legal defense to a policy argument: that age verification will destroy user privacy. They make these arguments against both the state laws and a federal bill modeled after them, the Protecting Kids on Social Media Act, which would require age verification for social media nationally. Unfortunately, these arguments are gaining traction, scaring many on both the right and left away from supporting this strong federal bipartisan bill or the state efforts that are poised to bring substantial relief to teens and their families.
Of course, Big Tech’s concern for “protecting user privacy” is just a mirage. No one knows more about us—where we spend our time, what we like, who our friends are, even our financial information—than the Big Tech companies.
And Big Tech's argument is disingenuous for a further reason: the very same companies are actively experimenting with their own age-verification technologies. Consider Meta and Facebook Dating. In 2022, the corporate colossus proudly announced that, in an effort to make its products safer, it was introducing biometric age verification, setting the stage, the company hoped, for further adoption across its platforms. That is ironic, given the extreme privacy risks and potential for abuse that biometric verification uniquely poses.
Hypocrisy aside, online platforms will likely come to require authentication anyway, in order to confirm that their users are human and that they are not being hoodwinked by AI or bots. Proposals already exist to create a global registry of our retinal scans to provide "proof of humanity." As long as AI is with us, some form of identity verification will likely be necessary.
But there are viable alternatives to surrendering to the panopticon, alternatives that address valid concerns about user privacy by relying on a third party. These methods ensure that verification information is never surrendered to the platform itself. From a constitutional perspective, they would also diminish the potential burden on speech.
In the first model, a third party verifies age using traditional methods, collecting information from the user such as financial credentials or a government-issued ID. In recent years, a large number of age-verification companies have emerged to offer such services to companies whose internet products attract minors. A company operating an age-restricted platform could contract with a third party to collect the necessary information; in a two-step process, the third party then either verifies the user's age directly to the platform or gives the user a key, showing the age requirement is satisfied, to enter the site. This preserves the user's privacy: the platform receives no identifying information about the user whatsoever.
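The two-step flow above can be sketched in code. This is a minimal, hypothetical illustration (all function and variable names are invented for this sketch, and the verification call is simulated in one process rather than over a real API): the third party inspects the user's documents and issues an opaque signed token, and the platform later asks the verifier to confirm the token without ever learning who the user is.

```python
import hmac
import hashlib
import secrets

# Key held only by the hypothetical age-verification service.
VERIFIER_SECRET = secrets.token_bytes(32)

def issue_token(documents_prove_age: bool):
    """Step 1: the third party checks the user's ID privately.
    If the age check passes, it returns an opaque signed token
    containing no identifying information, only a random nonce."""
    if not documents_prove_age:
        return None
    nonce = secrets.token_hex(16)
    sig = hmac.new(VERIFIER_SECRET, nonce.encode(), hashlib.sha256).hexdigest()
    return f"{nonce}.{sig}"

def platform_accepts(token: str) -> bool:
    """Step 2: the platform confirms the token with the verifier.
    (Simulated here in-process; in practice this would be an API call
    to the third party, so the platform never holds the secret key.)"""
    try:
        nonce, sig = token.split(".")
    except ValueError:
        return False
    expected = hmac.new(VERIFIER_SECRET, nonce.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = issue_token(documents_prove_age=True)
assert token is not None and platform_accepts(token)
```

Note the design point: the token carries only a random nonce and a signature, so even if the platform logs it, nothing about the user's identity can be recovered from it.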
An even more secure way to verify age would be to have a third-party employ a cryptographic technique known as zero knowledge proof (ZKP), which offers the possibility of anonymous authentication, thus avoiding the threat to privacy altogether.
ZKP techniques build on a concept called "asymmetric encryption," which is widely used on the internet today. Taking advantage of mathematical operations that are easy to compute in one direction but practically impossible to reverse, asymmetric cryptography uses separate "public" and "private" keys rather than the same key for both encryption and decryption. The technique is widely used today to authenticate identity and issue digital certificates. The operation is facilitated by a trusted third party, such as GlobalSign, DigiCert, or GoDaddy, which first authenticates a website or individual with traditional methods, i.e., government or financial documents. It then publishes the website's "public key": a number that can decrypt a message encrypted by the website's private key (a secret number mathematically related to, but underivable from, the public key). Consider a financial transaction in which a consumer uses a credit card to make a purchase from a retailer via its website. The retailer's website can send the credit card company a message, encrypted with its private key, stating its identity as authenticated by, say, GoDaddy. The credit card company decrypts the message with the public key and can be confident that the website truly belongs to the retailer.
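The public/private key relationship described above can be shown with a toy RSA-style signature. This sketch uses deliberately tiny primes so the arithmetic is visible; real systems use 2048-bit keys and vetted libraries, never hand-rolled math like this.

```python
# Toy RSA-style signature: the private key "encrypts" (signs),
# the public key "decrypts" (verifies). Illustration only.
p, q = 61, 53                 # two secret primes
n = p * q                     # public modulus (3233)
phi = (p - 1) * (q - 1)       # Euler's totient of n
e = 17                        # public exponent; (n, e) is the public key
d = pow(e, -1, phi)           # private exponent: e * d ≡ 1 (mod phi)

def sign(message: int) -> int:
    """Exponentiate with the private key: only the key holder can do this."""
    return pow(message, d, n)

def verify(message: int, signature: int) -> bool:
    """Anyone holding the public key (n, e) can check the signature,
    but cannot forge one, because d is underivable from (n, e) in practice."""
    return pow(signature, e, n) == message

msg = 42
sig = sign(msg)
assert verify(msg, sig)
```

This is exactly the structure in the credit-card example: the retailer signs with its private key, and the card company verifies with the published public key.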
Zero knowledge proofs take asymmetric cryptography one step further, establishing identity without a public key. In both systems, a trusted third party verifies a fact about the user, e.g., age, citizenship, or insurance status. But rather than publish a public key, the trusted party gives the user a secret code. A social media platform can then pose the user a special math problem that can only be solved with the secret code. The problem is constructed so that the user never has to reveal the code to the platform in order to solve it. Therefore, if he can solve it, the platform learns nothing about him, except that his identity has been authenticated by the trusted third party.
What is special about ZKP, therefore, is that the user does not reveal anything about himself to the platform except his authenticated status. The trusted third party could even be made to certify via auditing that it destroys all information about the user. Thus, age certification could be achieved completely anonymously. And this whole system could be easily and cheaply automated. Already, ZKP is widely used in cryptocurrency, and both Apple and Microsoft have ZKP products.
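One classic instance of this "solve the problem without revealing the code" idea is the Schnorr identification protocol, sketched below with toy parameters. The prover demonstrates knowledge of a secret x behind the public value y = g^x mod p without ever transmitting x; the verifier sees only random-looking numbers. (The parameters here are far too small for real use and are chosen purely for illustration.)

```python
import secrets

# Toy group parameters: p = 2q + 1 with p and q prime,
# and g a generator of the order-q subgroup of integers mod p.
p, q, g = 2039, 1019, 4

x = secrets.randbelow(q - 1) + 1   # the user's secret code
y = pow(g, x, p)                   # public value registered with the verifier

def schnorr_round() -> bool:
    """One round: the prover commits, the verifier challenges,
    the prover responds. The response blinds x with fresh randomness r,
    so the verifier learns nothing about x itself."""
    r = secrets.randbelow(q - 1) + 1
    commitment = pow(g, r, p)               # prover -> verifier
    challenge = secrets.randbelow(q)        # verifier -> prover
    response = (r + challenge * x) % q      # prover -> verifier
    # Check: g^response == commitment * y^challenge (mod p).
    # This holds exactly when the prover knows x.
    return pow(g, response, p) == (commitment * pow(y, challenge, p)) % p

assert all(schnorr_round() for _ in range(20))
```

A cheating prover who does not know x can pass a round only by guessing the challenge in advance, so repeating the round drives the chance of successful fraud toward zero, which is why such protocols can stand in for an identity check.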
Anonymous authentication methods completely transform the First Amendment analysis of age-verification requirements. In striking down the requirements of the Child Online Protection Act of 1998, the Court found that age verification was too burdensome on adult speech in terms of expense, trouble, and privacy. In light of third-party verification methods, those factual claims no longer have force.
This also transforms the prospective need for human authentication via retinal scans or some other dystopian measure in response to the emerging reality of AI. Establishing anonymous age-verification methods to protect children from the harms of social media and online pornography websites will lay a strong foundation for the increasing needs for identity authentication as our world lurches into a future in which the human and non-human are increasingly confused.
So don’t believe the lies of Big Tech. Age verification need not compromise user privacy, and, yes, it is necessary to protect our children from their predatory products.
Adam Candeub is professor of law at MSU College of Law and Senior Fellow at the Center for Renewing America. Clare Morell is senior policy analyst at the Ethics and Public Policy Center, where she directs their Technology and Human Flourishing Project. Michael Toscano is executive director of the Institute for Family Studies.