Privacy problems? Think of them as side effects

Bob Sullivan

Not long ago, I was approached by someone to help write a book about the race to cure cancer. It was an intriguing idea, and it sent me down a rabbit hole of research so I’d be able to understand what I’d be getting into. What I found was one Greek myth-like tale after another, of a wonderful breakthrough followed by a tragic outcome. An incredibly promising development followed by a crushing consequence. Of treatments that killed cancer but also killed patients. Of cures worse than the disease.

Sometimes, these are stories about egos blinded by a God complex, refusing to see they are hurting instead of helping. Usually, they are stories about people who spend decades in service to humanity and the slow, very unsteady, very unsure march of progress.

And these are stories about damned side effects.

I usually tell people that I’m a tech reporter, but that I focus on the unintended consequences of technology — tech’s dark side.  Privacy, hacking, viruses, manipulation of consumers via big data. These things are kind of like the nuclear waste of “progress.” But lately I’ve been thinking about changing that description.

Now, I think the problem is a lot more like the medical concept of side effects.

Companies like Facebook, Uber, and Google are full of brilliant engineers who spend all their time and energy trying to solve some of the world’s great problems, and they often do.  Uber and its imitators are wonderful at solving vexing transportation problems.  Facebook *has* connected billions of people, and let millions of families share baby photos easily.  These are good tools. Amazing tools.

But tech firms aren’t built to think about side effects.  Long before the Russian trolls in 2016, plenty of people warned Facebook about the crap its service was spewing, about how its tool had been hijacked and weaponized. But Facebook didn’t listen. The firm was too focused on the “cure” it was inventing — maybe too arrogant, maybe too naive — to see the damage it was doing.

There are similar tales all across tech-land.

Banking apps let us pay our friends instantly; they also let criminals steal from us instantly. Talk to banks about this, and you can almost hear the mad-scientist approach. (I hear “Well, consumers really should protect themselves” as “We can’t let a few victims get in the way of progress!”)

Cell phone companies have created amazing products. And now, we know, they also make it easy for law enforcement to track us.

There’s a cynical way to view this, of course. Facebook is only concerned with making money; Google doesn’t really care about making the world a better place, just making its balance sheet a better place. If you believe that, I’m not trying to talk you out of it. Corporations are people after all, our Supreme Court says, and greedy people at that. It’s illegal for them to act otherwise; it would be negligent not to maximize shareholder value.

I’ve spent 20 years talking to people in the tech industry, however, and there are plenty of folks in it who don’t think that way. I think most folks in tech who fail us are better described as naive Utopians than as greedy bastards.

In the coming months, I’ll be working on a new set of initiatives around this notion. The effort really started this year with the re-release of Gotcha Capitalism. My podcast “Breach” is also part of this. So are some new audio projects I’m working on. I’m being vague because I have to be, for now. You might see a bit of a slowdown in posts as I ready these projects, but rest assured, I’m on the beat.

In the introduction to the new Gotcha Capitalism, I sum up what I feel is the civil rights issue of our time: Big Data being used against consumers. It fits the Failed Utopia model to a T. Folks wanted to remove the human element — often susceptible to racial and other forms of bias — from important decisions in realms like credit and criminal punishment. So credit scores are now used to grant mortgages, and formulas are used in sentencing decisions. Unfortunately, as my Dad taught me in the 1970s, “Garbage In, Garbage Out” is still the first rule of computing. Algorithms can suffer from bias, too. What makes this scary, however, is that many folks haven’t woken up to this fact yet. Just as people once believed that photographs couldn’t lie, today many blindly believe that data can’t lie.
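To make “Garbage In, Garbage Out” concrete, here’s a toy sketch in Python. The data, the groups, and the “model” are entirely invented for illustration; no real lender or scoring system works this simply. The point is only that a formula fit to biased historical decisions learns to echo the bias, not the underlying truth.

```python
from collections import defaultdict

# Hypothetical records: (group, actually_repaid, historically_approved).
# In this invented history, group B repays just as reliably as group A,
# but past loan officers approved group B far less often.
history = [
    ("A", True, True), ("A", True, True), ("A", False, True), ("A", True, True),
    ("B", True, False), ("B", True, False), ("B", False, False), ("B", True, True),
]

# "Training": the model's score for a group is simply the historical
# approval rate. Garbage in...
approved = defaultdict(list)
for group, _repaid, was_approved in history:
    approved[group].append(was_approved)
model = {g: sum(v) / len(v) for g, v in approved.items()}

# Ground truth the model never sees: actual repayment rates.
repaid = defaultdict(list)
for group, did_repay, _approved in history:
    repaid[group].append(did_repay)
truth = {g: sum(v) / len(v) for g, v in repaid.items()}

for g in sorted(model):
    print(f"group {g}: model score {model[g]:.2f}, actual repayment {truth[g]:.2f}")
# group A: model score 1.00, actual repayment 0.75
# group B: model score 0.25, actual repayment 0.75  <- garbage out
```

Run it and both groups repay at exactly the same rate, yet the “model” hands group B a quarter of the approvals, because that is what the garbage input taught it.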

It can, and does. More important, in the wrong hands, data can be abused.  So now we have the even-worse story of a powerful tool built by a Utopian falling into the wrong hands and being abused by an evil genius.

This is the story of tech today.

I’m hardly the only one who recognizes this. Organizations like the Center for Humane Technology are springing up all over. This is promising. But the forces aligned against such thoughtful use of tech are powerful, and billions of dollars are at stake. Sometimes, it can feel like the onslaught of tech’s takeover is a force of nature, like gravity. Just ask anyone who’s ever tried to convince a startup to think about security or privacy while it’s racing to release new features.

Not unlike someone racing to invent a cure, side effects be damned.

I hope you’ll join me in this effort. Little things mean a lot — such as this woman’s suggestions for getting people to put down their smartphones when she wants to talk. Mere awareness of the issue helps. Think about how much news you get from Facebook or Twitter today compared to five years ago. Would your high school civics teacher be proud?

When tech is released into the world, side effects like privacy and security issues shouldn’t be an afterthought. They should be considered and examined with all the rigor that the medical profession has long practiced. That’s how we’ll make sense out of our future.
