In the past decade, technology companies have spent tens of millions of dollars funding academic research on the most pressing regulatory and ethical issues. Google has funded at least 329 research papers on technology regulation, antitrust law, and ethical frameworks. Amazon and Facebook have co-founded "AI ethics" institutes. Harvard, MIT, Johns Hopkins, and others have partnered with Facebook under secret research contracts. Researchers like Mohamed Abdalla, a PhD student at the University of Toronto, have been told "not to push" criticisms of Big Tech companies. And former tech executives now head AI ethics initiatives and institutions tasked with criticizing the very companies to which they admit they still have ties.

This infusion of cash, abundance of conflicts of interest, and repeated attempts to silence critics raise concerns that Big Tech is pressuring researchers in the AI ethics field to produce sympathetic conclusions and laissez-faire solutions rather than investigative criticism. Such influence demands scrutiny of the negative ramifications of private companies funding potentially persuasive research.

Partial Academia

When it comes to oversight and regulation, Big Tech isn't just playing judge, jury, and executioner; it is working behind the scenes to control the rhetoric, tone, and conclusions of academic research. Stanford, as an institution, plays a crucial role in aiding and abetting this scheme by allowing Big Tech to unduly influence not just the outcomes of academic research but the very questions chosen for study.

In the last decade, these companies have not only maximized their influence in the political sphere, where their spending now rivals that of traditional lobbying heavyweights such as finance, pharmaceuticals, and oil; they have also spread their influence into academia. Their campaign to fund key academic institutions has all but ensured that if the CEOs of Google, Facebook, and Amazon were ever to appear before a judge, chairman, or arbiter, the evidence against them would already have been shaped to their satisfaction.

Not surprisingly, the institutions that receive funding from Big Tech have been less than transparent about that industry funding. The reason for this opaqueness is not hard to find: Big Tech companies have spent tens of millions of dollars in the last decade to shape academic discourse on AI ethics, tech regulation, and algorithmic transparency. Google, Facebook, and Amazon's donation schemes have tied them to universities, perhaps enticing academics to favor corporate interests over transparency, fairness, and impartiality. The result is that criticism and meaningful regulatory recommendations are effectively nipped in the bud. These monetary exchanges between the academic and corporate spheres threaten impartiality, a crucial ingredient of much-needed oversight.

Who’s supposed to regulate Facebook if the research intended to criticize Facebook is funded by Facebook, and the legislators intended to regulate Facebook are financed by Facebook? That’s just way too much Facebook. We often call on tech companies to self-regulate, but they already do. And when Mark Zuckerberg calls for more regulation, it is a hollow ask for more of the same, just codified into law.

Corporate funding of academic research goes beyond presenting a conflict of interest. It undermines an institution’s credibility and poisons overall trust in academia.

At Home

This has to change, and it has to change here. 

A recent study found that more than half of tenure-track AI faculty at four prominent universities, including Stanford, who disclose their funding sources have received some form of backing from Big Tech. Institutes at these universities that deal specifically with technology ethics and regulation, such as Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), have direct affiliations with or receive funding from the very tech companies they aim to critique. Somehow, the obvious flaws of this arrangement have largely gone unchallenged and under-analyzed.

This conflict is compounded by Stanford HAI’s and the Stanford Cyber Policy Center’s (CPC) lack of transparency about their funding. Despite claims of transparency, HAI’s exact funding sources are unavailable on its website (though Google, IBM, and Microsoft all have affiliations with the institute), and the amounts donated by each company are private. CPC doesn’t list its funding sources either. And groups that do list their funding, such as Stanford’s Center for AI Safety, are supported by tech companies like Intel, Nvidia, Siemens, and others.

Stanford’s prominence in the technology sphere is self-evident. As Stanford takes on the additional role of arbitrating many of the controversial technologies born and developed on its campus, its impartiality must be unquestionable. We need academic research to expose the failings of companies like Facebook, Google, and Amazon, and we need it to do so unequivocally and without reservation.

I don’t question the intentions of those who have left Microsoft, Google, and Facebook to brainstorm regulation and create moderation frameworks for Big Tech at Stanford. But I am concerned that there comes a point at which they no longer have a choice.

The undermining of academic research in this sector is not an abstract idea. In 2017, a Google-funded research team at the Oxford Internet Institute published a paper arguing that, under current legal standards, EU citizens do not have a right to an explanation of decisions made about them by machines. Researchers in the field rebutted the paper’s claims and warned that Google’s grant, specifically intended to cover work on “underlying ethical and legal principles,” may have compromised the impartiality of its conclusions.

It’s not difficult to imagine the same process playing out when Stanford HAI publishes research on, for example, big data policing, guiding policymakers in AI decision-making, or recommending how the next administration should regulate tech companies.

Is this the right way to oversee and critique large technology companies? By letting them run their own trial, from setting the standards of the law to deciding the extent of their sentencing? I certainly can’t imagine calling research on “Elements of a New Ethical Framework,” funded by Google, impartial, unbiased, or even ethical to begin with.

Industry funding is not a new phenomenon. But combined with the power Big Tech already holds in government, having spent over half a billion dollars on lobbying in the last decade, this academic-corporate coalescence is dangerous. Couple it with a lack of transparency from academic institutions, and the results are deeply concerning. It seems as though our current system offers little independent critique or oversight of Big Tech. If we believe Big Tech should be subject to its fair share of rigorous criticism, we must start by ensuring that academia doesn’t get trapped in Big Tech’s deep pockets.

Matthew Frank, a junior, is a column editor and writer for Stanford Politics.