Artificial Intelligence is poised to revolutionize the global economy through new efficiencies. But there are many fears about A.I., including that someday robots will become too smart and take over the human world. There is also fear that A.I. language models are biased against certain demographics, suggesting they are simply an extension of their flawed human creators.

A new study by scholars Fabio Motoki, Valdemar Pinho Neto, and Victor Rodrigues finds the liberal bias we see in today’s human-run press is also present in the large language model (LLM) behind ChatGPT.

The authors write, “We document robust evidence that ChatGPT presents a significant and sizeable political bias towards the left side of the political spectrum. In particular, the algorithm is biased towards the Democrats in the US, Lula in Brazil, and the Labour Party in the UK.”

The study found that when posed questions about political affairs, ChatGPT's default answers leaned consistently closer to a Democratic worldview.

Arvind Narayanan and Sayash Kapoor poke some holes in the study's methodology, though their own informal test also finds a leftward lean in ChatGPT, albeit a less pronounced one.

However pervasive the bias, the notion that "truth" is defined by a left-leaning chatbot is deeply disturbing. A biased version of ChatGPT could be used in school classrooms nationwide to indoctrinate young children.

But these findings also point to a market opening for a more politically neutral entrant in the booming Wild West that is A.I. Free and fair competition in the marketplace of ideas is a better answer to this leftward bias than heavy-handed government regulation, which tends to make things worse.