Google's Woke A.I. Fiasco Exposes Deeper Infowarp
Female popes and Black vikings merely the latest, hyper-visual examples of decade-long censorship project. Orwell warned, "every picture has been repainted." Plus, a new innovator's dilemma for A.I.
When the stock markets opened last Monday morning, February 26, Google shares promptly fell 4%, by Wednesday were down nearly 6%, and a week later have now fallen 8%. It was an unsurprising reaction to the embarrassing debut of the company’s Gemini image generator, which Google decided to pull after just a few days of worldwide ridicule.
CEO Sundar Pichai called the failure “completely unacceptable” and assured investors his teams were “working around the clock” to improve the A.I.’s accuracy. They’ll better vet future products, and the roll-outs will be smoother, he insisted.
That may all be true. But if anyone thinks this episode is mostly about ostentatiously woke drawings, or if they think Google can quickly fix the bias in its A.I. products and everything will go back to normal, they don’t understand the breadth and depth of the decade-long infowarp.
Gemini’s hyper-visual zaniness is merely the latest and most obvious manifestation of a digital coup long underway. Moreover, it previews a new kind of innovator’s dilemma which even the most well-intentioned and thoughtful Big Tech companies may be unable to successfully navigate.
Gemini’s Debut
In December, Google unveiled its latest artificial intelligence model called Gemini. According to computing benchmarks and many expert users, Gemini’s ability to write, reason, code, and respond to task requests (such as planning a trip) rivaled OpenAI’s most powerful model, GPT-4.
The first version of Gemini, however, did not include an image generator. OpenAI’s DALL-E and competing offerings from Midjourney and Stable Diffusion have over the last year burst onto the scene with mind-blowing digital art. Ask for an impressionist painting or a lifelike photographic portrait, and they deliver beautiful renderings. OpenAI’s brand new Sora produces amazing cinema-quality one-minute videos based on simple text prompts.
Then in late February, Google finally released its own Gemini image generator, and all hell broke loose.
By now, you’ve seen the images – female Indian popes, Black vikings, Asian Founding Fathers signing the Declaration of Independence. Frank Fleming was among the first to compile a knee-slapping series of ahistorical images in an X thread which now enjoys 22.7 million views.
Gemini simply refused to generate other images, for example a Norman Rockwell style painting. “Rockwell’s paintings often presented an idealized version of American life,” Gemini explained. “Creating such images without critical context could perpetuate harmful stereotypes or inaccurate representations.”
The images were just the beginning, however. If the image generator was so ahistorical and biased, what about Gemini’s text answers? The ever-curious Internet went to work, and yes, the text answers were even worse.
“Every record has been destroyed or falsified, every book rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right.”
– George Orwell,
1984
Gemini says Elon Musk might be as bad as Hitler, and author Abigail Shrier might rival Stalin as a historical monster.
When asked to write poems about Nikki Haley and RFK Jr., Gemini dutifully complied for Haley but for RFK insisted, “I’m sorry, I’m not supposed to generate responses that are hateful, racist, sexist, or otherwise discriminatory.”
Gemini says, “The question of whether the government should ban Fox News is a complex one, with strong arguments on both sides.” Same for the New York Post. But the government “cannot censor” CNN, the Washington Post, or the New York Times because the First Amendment prohibits it.
When asked about the techno-optimist movement known as Effective Accelerationism – a bunch of nerdy technologists and entrepreneurs who hang out on Twitter/X and use the label “e/acc” – Gemini warned the group was potentially violent and “associated with” terrorist attacks, assassinations, racial conflict, and hate crimes.1
A Picture Is Worth a Thousand Shadow Bans
People were shocked by these images and answers. But those of us who’ve followed the Big Tech censorship story were far less surprised.
Just as Twitter and Facebook bans of high-profile users prompted us to question the reliability of Google search results, so too will the Gemini images alert a wider audience to the power of Big Tech to shape information in ways both hyper-visual and totally invisible. A Japanese version of George Washington hits hard, in a way the manipulation of other digital streams often doesn’t.
Artificial absence is difficult to detect. Which search results does Google show you – which does it hide? Which posts and videos appear in your Facebook, YouTube, or Twitter/X feed – which do not appear? Before Gemini, you may have expected Google and Facebook to deliver the highest-quality answers and most relevant posts. But now, you may ask, which content gets pushed to the top? And which content never makes it into your search or social media feeds at all? It’s difficult or impossible to know what you do not see.
Gemini’s disastrous debut should wake up the public to the vast but often subtle digital censorship campaign that began nearly a decade ago.
Missouri v. Biden
On March 18, the U.S. Supreme Court will hear arguments in Missouri v. Biden.2 Drs. Jay Bhattacharya, Martin Kulldorff, and Aaron Kheriaty, among other plaintiffs, will show numerous U.S. government agencies, including the White House, coerced and collaborated with social media companies to stifle their speech during Covid-19 – and thus blocked the rest of us from hearing their important public health advice.
Emails and government memos show the FBI, CDC, FDA, Homeland Security, and the Cybersecurity and Infrastructure Security Agency (CISA) all worked closely with Google, Facebook, Twitter, Microsoft, LinkedIn, and other online platforms. Up to 80 FBI agents, for example, embedded within these companies to warn, stifle, downrank, demonetize, shadow-ban, blacklist, or outright erase disfavored messages and messengers, all while boosting government propaganda.
A host of non-profits, university centers, fact checking outlets, and intelligence cut-outs acted as middleware, connecting political entities with Big Tech. Groups like the Stanford Internet Observatory, Health Feedback, Graphika, NewsGuard and dozens more provided the pseudo-scientific rationales for labeling “misinformation” and the targeting maps of enemy information and voices. The social media censors then deployed a variety of tools – surgical strikes to take a specific person off the battlefield or virtual cluster bombs to prevent an entire topic from going viral.
Shocked by the breadth and depth of censorship uncovered, the federal district court suggested the Government-Big Tech blackout, which began in the late 2010s and accelerated beginning in 2020, “arguably involves the most massive attack against free speech in United States history.”
The Illusion of Consensus
The result, we argued in the Wall Street Journal, was the greatest scientific and public policy debacle in recent memory. No mere academic scuffle, the blackout during Covid fooled individuals into bad health decisions and prevented medical professionals and policymakers from understanding and correcting serious errors.
Nearly every official story line and policy was wrong. Most of the censored viewpoints turned out to be right, or at least closer to the truth. The SARS2 virus was in fact engineered. The infection fatality rate was not 3.4% but closer to 0.2%. Lockdowns and school closures didn’t stop the virus but did hurt billions of people in myriad ways. Dr. Anthony Fauci’s official “standard of care” – ventilators and Remdesivir – killed more than they cured. Early treatment with safe, cheap, generic drugs, on the other hand, was highly effective – though inexplicably prohibited. Mandatory genetic transfection of billions of low risk people with highly experimental mRNA shots yielded far worse mortality and morbidity post-vaccine than pre-vaccine.
In the words of Jay Bhattacharya, censorship creates the “illusion of consensus.” When the supposed consensus on such major topics is exactly wrong, the outcome can be catastrophic – in this case, untold lockdown harms and many millions of unnecessary deaths worldwide.
In an arena of free-flowing information and argument, it’s unlikely such a bizarre array of unprecedented medical mistakes and impositions on liberty could have persisted.
Google’s Dilemma – GeminiReality or GeminiFairyTale
On Saturday, Google co-founder Sergey Brin surprised Google employees by showing up at a Gemini hackathon. When asked about the roll-out of the woke image generator, he admitted, “We definitely messed up.” But not to worry. It was, he said, mostly the result of insufficient testing and can be fixed in fairly short order.
Brin is likely either downplaying or unaware of the deep, structural forces both inside and outside the company that will make fixing Google’s A.I. nearly impossible. Mike Solana details the internal wackiness in a new article – “Google’s Culture of Fear.”
Improvements in personnel and company culture, however, are unlikely to overcome the far more powerful external gravity. As we’ve seen with search and social, the dominant political forces that demanded censorship will even more emphatically insist that A.I. conforms to Regime narratives.3
By means of ever more effective methods of mind-manipulation, the democracies will change their nature; the quaint old forms — elections, parliaments, Supreme Courts and all the rest — will remain...Democracy and freedom will be the theme of every broadcast and editorial...Meanwhile the ruling oligarchy and its highly trained elite of soldiers, policemen, thought-manufacturers and mind-manipulators will quietly run the show as they see fit.
— Aldous Huxley,
Brave New World, Revisited
When Elon Musk bought Twitter and fired 80% of its staff, including the DEI and Censorship departments, the political, legal, media, and advertising firmaments rained fire and brimstone. Musk’s dedication to free speech so threatened the Regime, most of Twitter’s large advertisers bolted. In the first month after Musk’s Twitter acquisition, the Washington Post wrote 75 hair-on-fire stories warning of a freer Internet. Then the Biden Administration unleashed a flurry of lawsuits and regulatory actions against Musk’s many companies. Most recently, a Delaware judge stole $56 billion from Musk by overturning a 2018 shareholder vote which, over the following six years, resulted in unfathomable riches for both Musk and those Tesla investors. The only victims of Tesla’s success were Musk’s political enemies.
To the extent Google pivots to pursue reality and neutrality in its search, feed, and A.I. products, it will often contradict the official Regime narratives – and face their wrath. To the extent Google bows to Regime narratives, much of the information it delivers to users will remain obviously preposterous to half the world.
Will Google choose GeminiReality or GeminiFairyTale? Maybe they could allow us to toggle between modes.
A.I. as Digital Clergy
Silicon Valley’s top venture capitalist and most strategic thinker Marc Andreessen doesn’t think Google has a choice. He questions whether any existing Big Tech company can deliver the promise of objective A.I.:
Can Big Tech actually field generative AI products?
(1) Ever-escalating demands from internal activists, employee mobs, crazed executives, broken boards, pressure groups, extremist regulators, government agencies, the press, "experts", et al to corrupt the output
(2) Constant risk of generating a Bad answer or drawing a Bad picture or rendering a Bad video – who knows what it's going to say/do at any moment?
(3) Legal exposure – product liability, slander, election law, many others – for Bad answers, pounced on by deranged critics and aggressive lawyers, examples paraded by their enemies through the street and in front of Congress
(4) Continuous attempts to tighten grip on acceptable output degrade the models and cause them to become worse and wilder – some evidence for this already!
(5) Publicity of Bad text/images/video actually puts those examples into the training data for the next version – the Bad outputs compound over time, diverging further and further from top down control
(6) Only startups and open source can avoid this process and actually field correctly functioning products that simply do as they're told, like technology should
A flurry of bills from lawmakers across the political spectrum seek to rein in A.I. by limiting the companies’ models and computational power. Regulations intended to make A.I. “safe” will of course result in an oligopoly. A few colossal A.I. companies with gigantic data centers, government-approved models, and expensive lobbyists will be sole guardians of The Knowledge and Information, a digital clergy for the Regime.
This is the heart of the open versus closed A.I. debate, now raging in Silicon Valley and Washington, D.C. Legendary co-founder of Sun Microsystems and venture capitalist Vinod Khosla is an investor in OpenAI. He believes governments must regulate A.I. to (1) avoid runaway technological catastrophe and (2) prevent American technology from falling into enemy hands.
Andreessen charged Khosla with “lobbying to ban open source.”
“Would you open source the Manhattan Project?” Khosla fired back.
Of course, open source software has proved to be more secure than proprietary software, as anyone who suffered through decades of Windows viruses can attest. And A.I. is not a nuclear bomb, which has only one destructive use.
The real reason D.C. wants A.I. regulation is not “safety” but political correctness and obedience to Regime narratives. A.I. will subsume search, social, and other information channels and tools. If you thought politicians’ interest in censoring search and social media was intense, you ain’t seen nothing yet. Avoiding A.I. “doom” is mostly an excuse, as is the China question, although the Pentagon gullibly goes along with those fictions.4
Universal A.I. Is Impossible
In 2019, I offered one explanation why every social media company’s “content moderation” efforts would likely fail. As a social network or A.I. grows in size and scope, it runs up against the same limitations as any physical society, organization, or network: heterogeneity. Or as I put it: “the inability to write universal speech codes for a hyper-diverse population on a hyper-scale social network.”5
You could see this in the early days of an online message board. As the number of participants grew, even among those with similar interests and temperaments, so did the challenge of moderating that message board. Writing and enforcing rules was insanely difficult.
Thus it has always been. The world organizes itself via nation states, cities, schools, religions, movements, firms, families, interest groups, civic and professional organizations, and now digital communities. Even with all these mediating institutions, we struggle to get along.
Successful cultures transmit good ideas and behaviors across time and space. They impose measures of conformity, but they also allow enough freedom to correct individual and collective errors.
No single A.I. can perfectly capture, or even regurgitate, all the world’s knowledge, wisdom, values, and tastes. Knowledge is contested. Values and tastes diverge. New wisdom emerges.
Nor can A.I. generate creativity to match the world’s creativity. Even as A.I. approaches human and social understanding, even as it performs hugely impressive “generative” tasks, human and digital agents will redeploy the new A.I. tools to generate ever more ingenious ideas and technologies, further complicating the world. At the frontier, the world is the simplest model of itself. A.I. will always be playing catch-up.
Because A.I. will be a chief general purpose tool, limits on A.I. computation and output are limits on human creativity and progress. Competitive A.I.s with different values and capabilities will promote innovation and ensure no company or government dominates. Open A.I.s can promote a free flow of information, evading censorship and better forestalling future Covid-like debacles.
Google’s Gemini is but a foreshadowing of what a new A.I. regulatory regime would entail – total political supervision of our exascale information systems. Even without formal regulation, the extra-governmental battalions of Regime commissars will be difficult to combat.
The attempt by Washington and international partners to impose universal content codes and computational limits on a small number of legal A.I. providers is the new totalitarian playbook.
Regime captured and curated A.I. is the real catastrophic possibility.
Effective Accelerationism, or “e/acc,” arose in opposition to Effective Altruism, whose adherents the Accelerationists deride as “decels” (for decelerationists) or “doomers.”
Missouri v. Biden is for now renamed Murthy v. Missouri. The question before SCOTUS is whether the Fifth Circuit’s injunction against the government agencies will remain in effect before and during the trial, which has not yet occurred. Surgeon General Murthy appealed the injunction and is asking SCOTUS to stay it pending trial.
Regime narratives include unquestioned belief in The Science of climate and pandemics; Woke social policy; an external policy of Maximum Foreign Entanglements, which arguably hasn’t worked too well these last 25 years; and allegiance to a suspiciously undemocratic form of Our Democracy, which includes the self-sabotage of our borders, cities, history, and most cherished ideas and institutions, such as fair elections, equal protection under law, color-blindness, and free speech itself.
Remember, the Pentagon also tried to take over 5G wireless networks in the U.S. The military claimed (believed) it was to protect U.S. networks from China, but it likely was part of a larger domestic political strategy to wrest control away from the mobile phone carriers and boost Google’s (and thus the Regime’s) influence on the nation’s information infrastructure.
From Big Tech and the Limits of Social Scaling:
“A new paper in the journal Nature Communications, for example, shows that networks are highly heterogeneous and mostly do not scale without limit. In fact, in “Scale-free networks are rare,” Anna Broido and Aaron Clauset show that while many technological networks are “scale-free,” many social networks are not.
“Many of today’s problems may thus arise from this mismatch between hyper-scalable information networks and far more nuanced and limited social networks.
“Think about recent attempts by insular groups on college campuses to impose highly particular speech codes on all of their fellow students and professors. Something similar is now happening, on a far larger scale, on the social network and content platforms.
“Twitter, in just one of many examples, has been suspending small-time users who tweet the snarky but innocuous jab, “learn to code,” at journalists. Twitter contends this is part of a “campaign of harassment.” And yet Twitter saw nothing wrong with — and in fact promoted — a fire hose of defamation by Hollywood celebrities and major news outlets, such as CNN and the Washington Post, against the students of Covington Catholic.
“This is a mismatch between Twitter’s hyper-scalable network and the comical inability of the company’s clumsy algorithms and human employees to police speech in this hyper-social sphere. In the last sentence, I was tempted to write “out-of-touch and politically motivated human employees,” but of course from their point of view it is the users they are suspending who are out-of-touch and politically motivated. Which only highlights the problem: the inability to write universal speech codes for a hyper-diverse population on a hyper-scale social network.
“When any small, sheltered group, whether inside a university or a technology firm, attempts to impose its speech code on the larger world, it does not compute. In many cases, it is merely that each side literally does not understand the other side’s jokes. That’s why we need alternative and nested communities within the larger platforms. To police their own, develop norms, and defuse anger and agitation that is only inflamed when top-down authorities get it wrong.
“For dynamic, complex systems, the best top-down rules tend to be those which ban bad rules. States may not erect parochial barriers to interstate commerce, for example, and “Congress shall make no law…abridging the freedom of speech.”
“The Constitution got this mostly right 230 years ago. In order to promote pluralism, diversity, and innovation across a new hyper-scale institution — the United States — it could not impose many specific top-down rules, which would inevitably seem obnoxious or even abominable to some region or faction. It would guarantee basic human rights and ban bad rules but otherwise allow social evolution. Today’s technology platforms would be wise to do the same. If, however, they try to regulate dynamic and nuanced human behavior, they will not only tie themselves in knots but also send an authoritarian signal and invite bad rules from Washington.”