How the company is repeating Elizabeth Holmes’ mistakes.


The new artificial intelligence features Google announced just weeks ago are finally breaking through to the mainstream—albeit not in the manner Google might prefer.

As you may have gleaned from recent coverage and chatter (or even experienced yourself), the autogenerated A.I. Overviews now sitting atop so many Google search results are giving answers that … well, to call them incorrect is true but doesn’t quite nail it. Try surreal and ridiculous and potentially dangerous instead. Since their rollout, A.I. Overviews have told users to smoke cigarettes while pregnant, add glue to their home-baked pizza, sprinkle used antifreeze on their lawns, and boil mint in order to cure their appendicitis.

To deal with the erroneous answers to both straightforward and jokey queries, Google appears to be addressing each incident one by one and tweaking the relevant Overviews accordingly. Still, the broken top-of-Google answers may be spilling over into the search engine’s other features, like its automatic calculator: One U.S.–based user, posting a screenshot to X, found that Google’s tech couldn’t even recognize that the unit cm stands for centimeter, instead reading the measure as a whole meter. Search engine optimization expert Lily Ray claimed to have independently verified this finding.

The mass rollout of A.I. Overviews has prompted users and analysts to share other, even buggier Google discoveries: The underlying Gemini bot appears to generate “answers” first and hunt for citations afterward, a process that surfaces a lot of old, spammy, and broken links as supporting information for those responses. Nevertheless, Google, which still sweeps up piles of digital-ad dollars despite recently losing some of that market share, wants to insert more ads into Overviews, some of which could be “A.I.–powered” themselves.

Meanwhile, the very appearance of the A.I. Overviews is already redirecting traffic from more reliable sources that would normally pop up on Google. Contrary to CEO Sundar Pichai’s statements, SEO experts have found that links featured in Overviews are not earning many click-through boosts from their placement. (This, along with the misinformation, is part of the reason that plenty of major news organizations, including Slate, have opted out of inclusion in A.I. Overviews. A Google spokesperson told me that “such analyses are not a reliable or comprehensive way to assess traffic from Google Search.”)

Ray’s studies find that Google search traffic to publishers has dropped overall this month, with much more visibility going to posts from Reddit, the site that, by the way, was the source of the infamous glue-on-pizza recommendation and that has signed multimillion-dollar agreements to license more of that content to Google. (The Google spokesperson responded, “This is in no way a comprehensive or representative study of traffic to news publications from Google Search.”)

Google was likely aware of all these problems before pushing A.I. Overviews into prime time. Pichai has called chatbots’ “hallucinations” (that is, their tendency to make stuff up) an “inherent feature” and has even admitted that such tools, engines, and data sets “aren’t necessarily the best approach to always get at factuality.” That, Pichai told the Verge, is something he thinks Google Search’s data and capabilities will fix. The claim seems dubious in light of Google’s algorithms obscuring the search visibility of various trustworthy news sources and also possibly “torching small sites on purpose,” as SEO expert Mike King noted in his study of recently leaked Google Search documents. (The Google spokesperson claimed that this was “categorically false” and that “we would caution against making inaccurate assumptions about Search based on out-of-context, outdated, or incomplete information.”)

More to the point: Google’s errant A.I. has been in public view for a while now. Back in 2018, Google demonstrated a voice-assistant technology that could purportedly call and answer people in real time, but Axios found that the demo may have actually used prerecorded conversations, not live ones. (Google declined to comment at the time.) Google’s pre-Gemini chatbot, Bard, was showcased in February 2023 and gave an incorrect answer that temporarily sank the company’s stock price. Later that year, the company’s impressive video introduction of Gemini’s multimodal A.I. was revealed to have been edited after the fact to make its reasoning capability seem faster than it actually was. (Cue another stock-price dip.) And the company’s annual developers conference, held just weeks ago, also featured Gemini not only generating but highlighting an erroneous suggestion for fixing your film camera.

In fairness to Google, which has long been working on A.I. development, the rapid deployment of, and hype-building around, all these tools are likely its way of keeping up in the era of ChatGPT, a chatbot that, by the way, still generates a significant number of wrong answers across various subjects. It’s not as though other companies chasing the investor-mollifying A.I. trend aren’t making their own risible mistakes or faking their most impressive demos.

Last month, Amazon’s supposedly A.I.–powered, human-free “Just Walk Out” grocery-store concept turned out to rely on … many humans behind the scenes to monitor and program the shopping experience. Similar results were found in the supposedly “A.I.–powered,” human-free drive-thrus used by chains like Checkers and Carl’s Jr. There are also the “driverless” Cruise cars, which required remote human intervention every few miles traveled. ChatGPT parent company OpenAI is not immune, having employed a lot of humans to clean up and polish the animated visual landscapes supposedly generated wholesale from prompts by its not-yet-public Sora video generator.

All of this, mind you, constitutes just another layer of hidden labor on top of the human operations outsourced to countries like Kenya, Nigeria, Pakistan, and India, where workers are underpaid or allegedly forced into conditions of “modern-day slavery” to provide constant feedback to A.I. bots and to label horrific imagery and videos for content moderation. Don’t forget, either, the humans who staff the data centers, chip factories, and power plants needed in vast quantities just to keep all this stuff running.

So, let’s recap: After years of teasing, disproved claims, staged demos, refusals to provide further transparency, and “human-free” branding that in reality depends on a lot of humans in a lot of different (and harmful) ways, these A.I. creations are still bad. They keep making stuff up wholesale, plagiarizing their training sources, and offering information, advice, “news,” and “facts” that are wrong, nonsensical, and potentially dangerous for your health, the body politic, people trying to do simple math, and others scratching their heads over where their car’s “blinker fluid” is.

Does that remind you of anything else in tech history? Perhaps Elizabeth Holmes, who herself faked plenty of demos and put forth fantastic claims about her company, Theranos, to sell a “tech innovation” that was simply impossible?

Holmes is now behind bars, but the scandal lingers in the public imagination, for good reason. In retrospect, the glaring signs should have been so obvious, right? Her biotech startup had no health experts on its board. It promoted zany scientific claims that no authorities backed and refused to justify those claims. It struck partnerships with massive (and actually trusted) institutions like Walgreens without verifying the safety of its output. It inculcated a deep, intimidating culture of secrecy among its employees and made them sign aggressive agreements to that effect. It won unthinking endorsements from famous and powerful folks, like Vice President Joe Biden, through the sheer force of awe alone. And it constantly hid whatever was actually fueling its systems and creations, until dogged reporters looked for themselves.

It’s been nearly 10 years since Holmes was finally exposed. Yet, clearly, the crowds of tech observers and analysts that took her at her word are also willing to put all their trust in the people behind these error-producing, buggy, manned-behind-the-curtain A.I. bots that, their creators promise, will change everything and everyone. Unlike Theranos, of course, companies like OpenAI have actually made products for public consumption that are functional and can pull off some impressive feats. But the rush to force this stuff everywhere, to have it take on tasks for which it’s likely not close to being prepared, and to keep it accessible despite a not-so-obscure track record of missteps and mistakes—that’s where we seem to be borrowing from the Theranos playbook all over again. We’ve learned nothing. And the masterminds behind the chatbots that really teach you nothing may in fact prefer that.




