What an AI-powered World Cup obscures
Things with which this World Cup is laden so far: Geopolitical intrigue and controversy. Messy soccer-world drama. Improbable first-half England goals.
And, of course: A slate of hyped-up artificial intelligence applications.
Wait, what?
FIFA is touting an AI-powered decision-making system that will use sensors in the actual soccer ball to help determine calls. A vast network of facial recognition-enabled cameras will track the crowd, with technology in the same family as that deployed by the controversial firm Clearview AI. AI-powered sensors in the stadiums will even help control the climate.
Which all sounds very cool. But it also raises the question — is all that really “AI”? And if it is, how is it possible that the same technology is powering such a disparate slate of applications, not to mention generating surreal art, or prefab legal documents?
In one sense, the AI hype around this World Cup is just a marketing push by the host country and organization. Qatar prides itself on having used its (relatively) newfound natural-gas fortune to power itself into the ranks of other wealthy Gulf states like Saudi Arabia and the UAE, and FIFA has aggressively played up its high-tech additions to the game.
This buzzy invocation of AI is the flip side of the anxiety that has been rising around the technology among industry watchdogs. Both ways of thinking about AI tend to conflate different issues into one big topic. And both point to a larger question: How is the public supposed to think about AI?
One reason that matters, a lot, right now: Politics has finally discovered AI. The Biden administration is attempting to nudge the field toward its preferred values and practices with the AI Bill of Rights. Europe is doing the same, but with statutory teeth. Governments are moving to regulate AI at a pace that’s slower than the technology itself is developing, but faster than the layperson’s understanding of it. That poses a political problem, as the marketing “wow factor” around AI increasingly obscures how the technology actually works and affects our lives, leaving the public relatively clueless in the face of the regulatory decisions being made.
“If the yellow first-down line in football appeared today rather than in 1998, they’d say it was generated by AI,” said Ben Recht, a professor in the Department of Electrical Engineering and Computer Sciences at the University of California, Berkeley who has written extensively on AI and machine learning. “AI has become nothing more than a marketing term to mean ‘things we do automatically with computers.’”
The history of what artificial intelligence actually is might be beyond the scope of this afternoon newsletter. The mathematics and computing historian Stephanie Dick described the term’s long semantic drift in a 2019 essay for the Harvard Data Science Review that focused on the field’s roots in computer-powered attempts to model human intelligence. As the field drifted away from that effort and toward powerful machine-learning systems like those that power DALL-E or GPT-3, the initial branding stuck, obscuring those systems’ actual functions behind a fog of hype and sci-fi speculation about sentient machines or human-like “artificial general intelligence.”
We’ve now come to use AI as a basket term for, as computer scientist Louis Rosenberg put it when I talked to him, “processing massive datasets, finding patterns in those datasets, and then using those patterns to make predictions or draw insights.”
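To make that description concrete, here is a minimal, hypothetical sketch of that loop in Python: fit a model to a dataset, then use the learned patterns to make predictions on data the model hasn’t seen. The “sensor readings” and labels below are invented for illustration, and the scikit-learn workflow shown is a generic one, not a reconstruction of FIFA’s ball-tracking or climate-control systems.

```python
# A minimal, hypothetical illustration of Rosenberg's description:
# take a dataset, find patterns in it, and use those patterns to predict.
# The "sensor readings" are invented for this example -- this is a generic
# machine-learning workflow, not FIFA's actual system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Pretend each row is a burst of sensor readings (e.g. speed, spin, impact force)
# and the label says whether the ball was kicked (1) or merely bumped (0).
n = 1000
features = rng.normal(size=(n, 3))
labels = (features @ np.array([1.5, -0.7, 2.0])
          + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0)

# "Finding patterns": fit a model to the training data.
model = LogisticRegression().fit(X_train, y_train)

# "Making predictions": apply those learned patterns to unseen data.
print("Held-out accuracy:", model.score(X_test, y_test))
print("Prediction for a new reading:", model.predict(X_test[:1]))
```

Swap the toy classifier for a far bigger model and a far bigger dataset and you have, in rough outline, the same recipe behind image generators, facial recognition and the rest.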
When you put it that way, AI’s application to a soccer ball or an AC system is (slightly) demystified. But that only scratches the surface of how those machine-learning systems are insinuating themselves into our lives. The policy discourse around AI right now focuses on much more high-stakes issues like systemic bias creeping into decision-making systems, or unchecked facial-recognition surveillance like that being deployed in Qatar right now, or data harvesting without consent.
Those are the kinds of issues that show up in the Biden administration’s new AI policy, but there’s still a massive gulf in understanding between policymakers and the public on the issue. A Stanford report written last year noted that “accurate scientific communication has not engaged a sufficiently broad range of publics in gaining a realistic understanding of AI’s limitations, strengths, social risks, and benefits,” and that “Given the historical boom/bust pattern in public support for AI, it is important that the AI community not overhype specific approaches or products and create unrealistic expectations” — a dynamic likely not helped by the World Cup hype machine.
And while guidelines like the Biden administration’s might be useful, they’re still… just guidelines. There are still few, if any, laws in place to prevent the kind of AI-induced harms that might be perpetrated under the radar amid a general haze of curiosity and misunderstanding — which makes public understanding of the tech far more important than one might at first think.
Maximilian Gahntz, senior policy researcher at the Mozilla Foundation, told me the public needs to grasp two things: “First, AI isn’t some form of magic and, second, that we aren’t on a predetermined path with regard to where the technology is headed and what we do with it. As consumers, people get to vote with their feet if they have the necessary information to make informed choices about products and services that use AI. And as voters, people can push for tech companies and those deploying AI to be held accountable.”