When I logged onto a Zoom meeting recently, I was offered the chance to let the company use some kind of whiz-bangy AI magic that would summarize the meeting for me. Cool? Maybe. Artificial intelligence? Not by my definition. New? Not really. New name? Absolutely.
I’m sure you’ve had this experience a lot lately. “AI-powered” marketing-speak is everywhere, sweeping the planet faster than dot-com mania ever did. In fact, it’s come so fast and furious that the White House issued an executive order about AI on Monday. AI hasn’t taken over the planet, but AI-speak sure has. It’s smart to worry about computers taking over the world and doing away with humanity, but I think marketing hype might be AI’s most dangerous weapon.
Look, chatbots are kind of cool and impressive in their own way. New? Well, as consumers, we’ve all been hijacked by some “smart” computer giving us automated responses when we just want a human being at a company to help us with a problem. The answers are *almost* helpful, but not really. And chatbots … are … not really new.
I like to remind people who work in this field — before “the Cloud” there were servers. Before Web 3.0 there was the Internet of Things, and before that, cameras that connected to your WiFi. Before analytics, and AI, there was Big Data. Many of these things work better than they did ten or twenty years ago, but it was the magic label — not a new Silicon Valley tech, but a new Madison Avenue slogan — that captured public imagination. Just because someone calls something AI does not make it so. It might just be search, or an updated version of Microsoft Bob that isn’t terrible.
I don’t at all mean to minimize concern about tech being used for evil purposes. Quite the opposite, really. If you read the smartest people I can find right now, this is the concern you’ll hear. It’s fine to fret about ChatGPT Jr., or ChatGPT’s evil half-brother, making a nuclear bomb, or making it substantially easier to make a nuclear bomb. We’ve been worried about something like that since the 1950s and 60s. And we should still be concerned about it. But that’s not happening today.
Meanwhile, tech (aka “AI”) is being used to hurt people right now. There’s real concern that all the focus on a sci-fi future is taking attention away from what needs to be done to rein in large technology companies right now.
Big databases have been used to harm people for a long time. Algorithms help decide prison sentences — often based on flawed models and data. (Yes, that is real!) Credit scores rule our lives as consumers, yet the credit reports on which they are built are riddled with errors. And as people seem to forget, credit scores did virtually nothing to stop the housing bubble. I just read that credit scores are at an all-time high, even though consumer debt is at very high levels and — in a classic toxic combination — interest rates are also very high. So just how predictive are credit scores?
I know this: folks looking to regulate AI/Big Data/algorithmic bias haven’t done nearly enough research into the decades-long battle among credit reporting agencies, consumer advocacy groups, and government regulators. Hint: It’s not over.
There is a lot to like in the recent AI executive order. I’ve long been an advocate that tech companies should build “abuse/harm testing” into new products, the way cybersecurity teams conduct penetration testing to predict how hackers might attack. Experienced, creative technologists should sit beside engineers as they dream up new products and ponder: “If I were a bad person, how might I twist this technology for dark uses?” So when a large tech firm comes up with a great new tool for tracking lost gadgets, someone in the room will stop them and ask, “How do we prevent enraged ex-partners from using this tech to stalk victims?” Those conversations should happen in real time, during product development, not after something is released to the world and is already being abused.
Today’s executive order calls for red-teaming and sharing of results with regulators. In theory, such reports would head off a nuclear-bomb-minded bot at the pass. Good. I just hope we don’t race past algorithmically enhanced racial discrimination in housing decisions — which happens today, and has been happening for decades.
The best piece I read on the executive order appeared in MIT Technology Review — a conversation with Joy Buolamwini, who has a new book out titled *Unmasking AI: My Mission to Protect What Is Human in a World of Machines*. She’s been ringing the alarm bell on current-day risks for nearly a decade.