
Do your computers have ID? The state of machine identity management

Ponemon Institute and Keyfactor kicked off the first-ever State of Machine Identity Management Report with one purpose: Drive industry awareness around the importance of managing and protecting machine identities, such as keys, certificates, and other secrets, in digital business.

For the 2021 State of Machine Identity Management Report, Ponemon Institute surveyed 1,162 respondents across North America and EMEA who work in IT, information security, infrastructure, development, and other related areas.

We hope that IT and security leaders can use this research to drive forward the need for an enterprise-wide machine identity management strategy. No matter where you are in the business – IT, security, or development – and no matter the size of your company, this report
offers important insights into why machine identities matter.

In recent years, we’ve witnessed the rapid growth of internet-connected devices and machines in the enterprise. From IoT and mobile devices to software-defined applications, cloud instances, containers, and even the code running within them, machines already far
outnumber humans.

Much like the human identities we rely on to access the apps and devices we use every day (e.g., passwords, multi-factor authentication), machines require a set of credentials to authenticate and securely connect with other devices and apps on the network. Despite their critical importance, these “machine identities” are often left unmanaged and unprotected.

In the 2020 Hype Cycle for Identity and Access Management Technologies, Gartner introduced a new category: Machine Identity Management. The addition reflects the increasing importance of managing cryptographic keys, X.509 certificates, SSH keys, and other non-human identities.

Machine identities have undoubtedly become a critical piece in enterprise IAM strategy, and awareness has reached even the highest levels of the organization. Sixty-one percent of respondents say they are either familiar or very familiar with the term machine identity management.

“Machine identities, such as keys, certificates and secrets, are essential to securing connections between thousands of servers, cloud workloads, IoT and mobile devices,” said Chris Hickman, chief security officer at Keyfactor. “Yet the survey highlights a concerning and significant gap between plan and action when it comes to machine identity management strategy. Acknowledgment is a step in the right direction, but a lack of time, skilled resources and attention paid to managing machine identities make organizations vulnerable to highly disruptive security risks and service outages.”

In this section, we highlight key findings based on Keyfactor’s analysis of the research data compiled by Ponemon Institute. For more in-depth analysis, see the complete findings.

Strategies for crypto and machine identity management are a work in progress.

Despite growing awareness of machine identity management, the majority of survey respondents said their organization either does not have a strategy for managing cryptography and machine identities (18 percent of respondents), or they have a limited strategy that is applied only to certain applications or use cases (42 percent of respondents).

The top challenges that stand in the way of setting an enterprise-wide strategy are too much change and uncertainty (40 percent of respondents) and lack of skilled personnel (40 percent
of respondents).

Shorter certificate lifespans, key misconfiguration, and limited visibility are top concerns.

Challenges in managing machine identities include the increased workload and risk of outages caused by shorter SSL/TLS certificate lifespans (59 percent of respondents), misconfiguration of keys and certificates (55 percent of respondents), and not knowing exactly how many keys and certificates the organization has (53 percent of respondents).

A significant driver of these challenges is the recent reduction in the lifespan of all publicly trusted SSL/TLS certificates by roughly half, from 27 months to 13 months, on September 1, 2020. It is worth noting that the real impact of this change will likely not be felt until the months and years ahead.
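The report doesn’t prescribe tooling, but the operational burden it describes is easy to picture. Below is a minimal, hypothetical sketch of the kind of certificate-expiry monitoring that shorter lifespans push teams to automate; the hostnames and the 30-day renewal window are illustrative assumptions, not anything drawn from the survey.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    """Return the number of days before a server's TLS certificate expires."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).days

# Hypothetical inventory; a real program would pull this from a CMDB or scanner.
for host in ["example.com", "example.org"]:
    remaining = days_until_expiry(host)
    status = "renew soon" if remaining < 30 else "OK"
    print(f"{host}: {status} ({remaining} days left)")
```

With 13-month lifespans, a check like this has to run continuously across the whole certificate inventory, which is why respondents tie shorter lifespans to increased workload and outage risk.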

Crypto-agility emerged as a top strategic priority.

Moving into the top position on the list, more than half of respondents (51 percent) identified crypto-agility as a strategic priority for digital security, followed by reducing complexity of IT infrastructure and investing in hiring and retaining qualified personnel (both 50
percent of respondents).

Cloud and Zero-Trust strategies are driving the deployment of PKI and machine identities.

While many trends are driving the deployment of PKI, keys, and certificates, the two most important trends are cloud-based services (52 percent of respondents), and Zero-Trust security strategy (50 percent of respondents). Other notable trends include the remote workforce and IoT devices (both 43 percent of respondents).

SSL/TLS certificates take priority, but every machine identity is critical.

Overall, respondents agree that managing and protecting every machine identity is critical. That said, SSL/TLS certificates were widely considered the most important machine identities to manage and protect, according to 82 percent of respondents.

To see the report’s full findings, visit Keyfactor’s website.

 

What’s the original sin of the Internet? A new podcast

Bob Sullivan

Is there an Original Sin of the Internet? Join me on a journey to find out.

Today I’m sharing a passion project of mine that’s been years in the making. I’m lucky. I’m getting old. Much better than the alternative! My career has spanned such a fascinating time in the history of technology. I learned to *literally* cut and paste at my first newspaper. Now, most of the world is run by computer code that’s been virtually cut and pasted. Often, carelessly cut and pasted. Look around, and it’s fair to ask: Has all this technology really made our lives better? My answer is yes, but by a margin so slim that objectors might call for a recount.

Whatever your answer, there is no denying that tech has landed us in a lot of trouble, and the techlash is real. And for those of us who thought the Internet might end up as humanity’s greatest invention, this time is depressing. One of my guests — a real Internet founder — thinks perhaps he should have done something else with his life.

Debugger, launching today, is a podcast, but I think of it more as an audio documentary. There are no sound bites. I let my guests talk and try to stay out of the way. So you can make up your own mind. Thanks to the great people at Duke’s Sanford School of Public Policy and the Kenan Institute for Ethics, I have access to amazing people who were there at the dawn of the Internet Age. I hope you’ll listen, but if you’d rather read, I’ll spend this week sending out edited transcripts from each guest.

First up: Richard Purcell, one of the first privacy executives. From him, you’ll learn as much about working on the railroad as you will about the abuse of power through privacy invasions. But before that, I try to explain what I mean by “original sin” in the introduction, and why that matters.

Future Debugger episodes will deal with similar foundational questions about technology and its role in democratic society. Why do 1,000 companies know every time I visit one web page? How do data brokers interfere with free and fair elections? What should we do with too-big-to-fail tech giants? How can we capture medical miracles trapped in data without violating patients’ privacy? And how can we build tech that isn’t easily weaponized by abusive people or enemy combatants? That’s coming soon, on Debugger. On to the transcript for today. Click here to visit the podcast home page. Or, click below to listen.


[00:01:27] Bob Sullivan: Welcome to Debugger, a podcast about technology brought to you by Duke University’s Sanford School of Public Policy and the Duke Kenan Institute for Ethics. I’m your host, Bob Sullivan. And I care a lot about today’s topic. So please indulge me for a moment or two while I try to frame this issue.

I came across a story many years ago that still haunts me as a technologist and an early believer in the internet. It haunts me because it reads like a sad pre-obituary about a once-famous pop singer who’s now a broke has-been with a drug problem … and as a writer because its prose is nearly poetry. At least to my ears, it’s the kind of thing I wish I’d written. Credit Steve Maich at Maclean’s, the Canadian magazine, for the words. Dramatic reading by old friend Alia Tavakolian:

[00:02:24] Alia Tavakolian: The people who conceived and pioneered the web described a kind of enlightened utopia built on mutual understanding. A world in which knowledge is limited only by one’s curiosity. Instead, we have constructed a virtual Wild West where the masses indulge their darkest vices, pirates of all kinds troll for victims, and the rest of us have come to accept that cyberspace isn’t the kind of place you’d want to raise your kids. The great multinational exchange of ideas and goodwill has devolved into a food fight. And the virtual marketplace is a great place to get robbed. The answers to the great questions of our world may be out there somewhere, but finding them will require you to first wade through an ocean of misinformation, trivia and sludge. We’ve been sold a bill of goods. We’re paying for it through automatic monthly withdrawals from our PayPal accounts.

Let’s put this in terms crude enough for all cyber dwellers to grasp: The internet sucks.

[00:03:23] Bob Sullivan: The internet sucks? I’ve thought about this story for years, come back to it once in a while, but it’s been a while. In fact, it’s been 15 years since those words were first written, and a lot has happened since then.

  • My name is Ed Snowden. I’m 29 years old. I work for Booz Allen Hamilton as an infrastructure analyst for NSA in Hawaii.

  • What exactly are they saying these Russians did? … Well, there are a lot of things that we’re alleging the Internet Research Agency did. Um, the main thing is that they posed as American citizens to amplify and spread content that causes division in our society.

  • Tonight, Facebook stock tanking, dropping nearly 7% after allegations that data firm Cambridge Analytica secretly harvested the personal information of 50 million unsuspecting Facebook users.

  • Cyber experts warn the Equifax hack has the potential to haunt Americans for decades. And every adult should assume their information was stolen.

  • Social media is just one of many factors that played a role in the deadly attack on the U.S. Capitol, but it’s a huge one. That attack was openly planned online for weeks.

Bob Sullivan: If the internet sucked in 2006, what should we say about it now? I remember being an intern with Microsoft in 1995, a small part of the launch team for Windows 95. I helped launch internet news. I remember feeling at the time … it was very heady. Like John Perry Barlow, the co-founder of the Electronic Frontier Foundation and a Grateful Dead lyricist, we both felt the internet could one day rival fire in its importance to humanity. Well, actually what he said was it was the most transforming technological event since the capture of fire.

So I think we should all admit we haven’t captured the internet. It’s a lot more like an uncontrolled fire right now. Or maybe like a wild animal we haven’t domesticated. Not yet. Anyway, how did this happen? How did we lose control of it? Where did we go wrong? Was there some original sin of the internet? A moment when we turned right when we should have turned left? Looking backward isn’t always worthwhile, but sometimes it is. When you’re doing a long mathematics calculation and you make a mistake, it’s not possible to erase the answer and correct it. You have to trace your steps back to the original error and calculate forward anew.

I think it’s time we did that with the web.

Maybe this seems like an academic question, but it’s not. The coronavirus pandemic has taught humanity a very painful lesson by now. We’ve all come to realize that like it or not, we’re in this together. We can’t rid half the planet of COVID-19 and hope for the best. That won’t work. We have to all pull in the same direction. All do the things we need to do. Wear masks, avoid indoor spaces, vaccinate when we can …to get and keep the virus on the run. And that won’t happen if we don’t all agree on the same set of facts. But right now the most powerful disinformation machine ever, the biggest lie spreading tool ever, seems to have truth on the run.

So it’s not just academic, it’s personal. It’s life or death.

How do we capture digital fire? How do we domesticate the wild animal that is the internet? The best way to get out of a hole is to stop digging. So I want to begin there.

For the next 45 minutes or so, I’m going to pursue this question of an original sin with the help of a series of experts who were there. As you’ll find out, while some of them might not like the way I frame the question, no one disagrees with the basic premise: We’ve built fatal flaws into our digital lives and we’d better fix them fast.


My first stop is with Richard Purcell. We were at Microsoft together. He was chief privacy officer at Microsoft, one of the first people to ever hold that title, back when I was a cub reporter at msnbc.com. We hadn’t talked in years. I caught up with him on Data Privacy Day, a holiday that’s been celebrated for more than a decade in the U.S., though perhaps you don’t celebrate it.

—-

Bob Sullivan: Okay. So I forgot, by the way, to wish you a happy Data Privacy Day.

[00:07:52] Richard Purcell: This is Data Privacy Day, it is the 28th. And you know, today, in an odd way, Bob, in the United States, people like me and you and others across the United States are celebrating the Europeans’ decision to ensconce privacy as a fundamental human right. Um, there are people who would say, gosh, you know, we shouldn’t be celebrating foreign countries’, foreign regions’, uh, social awareness. We should be doing it ourselves.

[00:08:24] Bob Sullivan: Richard took what you’d think of today as a very unusual route to an executive job at a big software company. But then, when Richard was a teenager, there really weren’t big software companies.

Richard, when I was preparing to talk to you today, I read a little bit about you. And learned some things I didn’t know. Um, including that you worked in railroad maintenance when you were a kid.

[00:08:49] Richard Purcell: I did. I did. I like to ask people about what they did in their 18th year. So imagine you graduated high school, you’re perhaps off to university or some other life study to launch yourself into adulthood; what’d you do? And, and I’ve asked that question of a lot of people and I’ve had fascinating answers. Privileged people haven’t done much in my opinion, and in my research, which is anecdotal.

But what I did is I went out on the Union Pacific track lines and I repaired railroad tracks for two summers in a row, actually, to pay for university tuition. So I sweated in the hot sun swinging a hammer and pushing railroad steel around and pulling out and putting back in creosote timbers for ties and all of that kind of stuff.

It’s what’s called a Gandy dancer. That’s when you have one foot on the ground and one foot on your shovel and you’re pushing rock underneath a railroad tie in order to secure it and keep it from moving. That’s the Gandy dance. When you get 20 people out there Gandy dancing, it looks pretty funny.

[00:10:01] Bob Sullivan:  Richard’s work on the railroad provides an interesting metaphorical starting point for our discussion.

[00:10:08] Richard Purcell: I’ve repaired a few derailments down on the Columbia River, where locomotives are on their side in a slough, puffing and still running and pushing bubbles into the dirty water. It’s pretty, it’s pretty bizarre when you’re working on a river.

[00:10:23] Bob Sullivan: I feel like you just described the state of the internet.

[00:10:27] Richard Purcell: I know. Don’t you think? Yeah. Laying on its side, puffing. Yeah, no, I’m with you, you know, maybe, maybe that’s true, Bob, maybe it’s not. Because I predicted, when Facebook faced its Cambridge Analytica scandal, which was a tremendous scandal and, and was, uh, not only an impeachable offense, but one which they should have been convicted for, that their value would eventually drop.

That it would take a while, but their value would eventually drop, frankly. It just hasn’t. The users of these internet services seem to be highly resistant to the social ramifications of the kind of negative effects of those companies. And, you know, is somebody worth $62 billion to exploit the world’s social fabric? I don’t think so. That’s not a bargain I would want to make. But it’s one we have made.

[00:11:30] Bob Sullivan: Richard’s unusual path to the tech world colors his perceptions about the internet today, and about the role of power in social circles and in leadership.

[00:11:39] Richard Purcell: I grew up … strictly the 50s, middle-class, easy, no-problem life, but you know, but absolutely no prosperity whatsoever. But what I saw in everyday life is that there are these power relationships that are unfair. Those with power, even in a small town like I grew up in, are loath to give up that power. And for some reason are inured to the fact of their privilege; they feel like their privilege is an entitlement.

I worked in the forest. I’ve done a lot of things. I ran a grocery store. I started a newspaper. I did all these things in communities, and the vibrancy and the health of a community is what I find lacking. And leadership begins to be tainted by the objective of actually maintaining a power relationship instead of sharing it, or instead of using it more to create more community vibrancy and health. I find those practical experiences made a big difference in my life.

[00:13:05] Bob Sullivan: It seems like you connect privacy to power, maybe more than someone else might.

[00:13:11] Richard Purcell: Oh, it is about power. Yeah. Yeah. It’s unquestionably about power. If I can know enough about you, I can manipulate you without a question. And, and that is a power relationship, and the more successful I am and the more clever I am about that, and the more disguised I am about my motivation, uh, the more advantageous it is to me. But yes, the lack of privacy is the lack of power. Without question, because frankly it is the lack of dignity. It’s the lack of control over my own life. And in fact, the European Union … we celebrate Data Privacy Day today … the European Union’s basis of data protection is the freedom to develop a personality. That’s the language that they used when they promoted data protection and privacy some 40 years ago. And so the whole idea that you are free to develop your own personality indicates how much of a power relationship this is.

[00:14:21] Bob Sullivan: So if data equals power and privacy is about power, and 40 years ago people were thinking about this, where did we go wrong? Where did the engineers drive the train off the tracks, if you will? Richard, what is the original sin of the internet?

[00:14:38] Richard Purcell: The original sin of the internet to me is a failure on our part to key in on the basic question of: just because I can do something doesn’t mean that I should do it. In other words, if I can engineer something … internet history demonstrates that because I can engineer it, then I should use it in any way that that engineering allows. And that just isn’t how life should work. We’ve had many, many follies in our time over that. I don’t want to get overly dramatic about that, and I don’t want to use too harsh an example of that, but the question really is: the internet was developed as an electronic means of communication without regard to the content of that communication, largely because the engineers enabled scientists and researchers to communicate with one another. And they had benign intents for the most part. And it was never thought that anybody using it would have any other kind of intent.

[00:15:50] Bob Sullivan: Our first history lesson of this podcast. We’ll talk a lot about that naive take going forward. And we’ll also talk about the word privacy, which I’m here to tell you is always a pretty big risk for a storyteller.

I think the conversation we’re having, you know, if we had it three or four years ago, it would have felt really academic and been pretty boring to most people.

[00:16:12] Richard Purcell: It has been, you’re right. It has been very boring. I’ve bored people for a long time with this kind of: Gosh, what if, jeez, you know, shouldn’t it be this way or that way? And then the stark reality comes with Cambridge Analytica and, Oh my gosh, look at this. We can manipulate people.

[00:16:31] Bob Sullivan: But I think what is new to people is, okay, it’s one thing to manipulate them into buying a certain brand of toothpaste. It’s another thing to manipulate them into not believing in democracy anymore.

[00:16:42] Richard Purcell: Isn’t that the truth? I mean, now that nefarious, you know, characters really have some sophisticated controls, not just blunt instrument controls, but sophisticated controls, and have clear objectives.

It’s hard to understand, isn’t it, Bob? What is the clear objective of somebody who wants to create an unconstitutional limit on free and fair elections? What would their clear objective be? And there’s no way that’s a beneficent objective. That’s very much a malicious objective, um, because it means the accumulation and centralization of some kind of power and authority and control over large populations.

That’s what’s frightening me the most: there are … there are actors in the background who have a clear objective to create a centralized, powerful control mechanism. Um, and democracy is standing in its way.

[00:17:52] Bob Sullivan: Democracy is standing in the way. Thank goodness for that. Except when this new digital battleground for control was built, we didn’t have great models to rely on. So we borrowed heavily from the one we had and that, well, that might actually be the wrong left turn we made.

[00:18:10] Richard Purcell: In the United States, our commercial world runs largely by a model from telecommunications history, way back in radio and television that said, Hey, you know, it’s free to use. We just have advertising to support it.

So you don’t have to subscribe to it. And that was back when it was an airwaves broadcast methodology. That model, unfortunately, is persisting, even though it’s not an airwaves model anymore by which we transmit this information and communicate, but still that free access to online content persists with the underlying advertising model. And they have very strong reasons to believe in that. Advertising as a model has its own dark side. And we see that, and we see that from all kinds of points of view, of course. Um, but Google and Facebook are … they’re not technology companies as much as they are advertising companies: Google and Facebook, and really all internet companies.

[00:19:14] Bob Sullivan: They’re all advertising companies now, but this is a very different kind of advertising. The best TV could do was create programming that probably attracted 18-to-34-year-olds. Things have changed, and changed fast.

[00:19:32] Richard Purcell: Narrowcasting means that I can put out a blog, I can put out a podcast, I can put out a website that has a very narrow audience, but the fact is even a narrow audience in global terms can have a large population and therefore create more advertising contacts and, as a result, better monetization. Those issues are just a profound part of how the internet works.

[00:20:00] Bob Sullivan: It sounds obvious to say that privacy stands in the way of this business model. Is that true?

[00:20:06] Richard Purcell: Absolutely. No question about it. Privacy is not friendly to the advertising model of monetization and content narrowcasting because, frankly, the basis of advertising, for the internet particularly but really always, has been the demographics of the audience.

The role of transparency and security assurance in driving technology decision-making

The purpose of this research is to understand what affects an organization’s security technology investment decision-making. Sponsored by Intel, Ponemon Institute surveyed 1,875 individuals in the US, UK, EMEA and Latin America who are involved in securing or overseeing the security of their organization’s information systems or IT infrastructure. In addition, they are familiar with their organization’s purchase of IT security technologies and services. The full report is available from Intel at this website.

A key finding from this research is the importance of technology providers being transparent and proactive in helping organizations manage their cybersecurity risks. Seventy-three percent of respondents say their organizations are more likely to purchase technologies and services from companies that are finding, mitigating and communicating security vulnerabilities proactively. Sixty-six percent of respondents say it is very important for their technology provider to have the capability to adapt to the changing threat landscape. Yet 54 percent of respondents say their technology providers don’t offer this capability.

“Security doesn’t just happen. If you are not finding vulnerabilities, then you are not looking hard enough,” said Suzy Greenberg, vice president, Intel Product Assurance and Security. “Intel takes a transparent approach to security assurance to empower customers and deliver product innovations that build defenses at the foundation, protect workloads and improve software resilience. This intersection between innovation and security is what builds trust with our customers and partners.”

Key findings from the study include:

  • Seventy-three percent of respondents say their organization is more likely to purchase technologies and services from technology providers that are proactive about finding, mitigating and communicating security vulnerabilities. Forty-eight percent say their technology providers don’t offer this capability.
  • Seventy-six percent of respondents say it is highly important that their technology provider offer hardware-assisted capabilities to mitigate software exploits.
  • Sixty-four percent of respondents say it is highly important for their technology provider to be transparent about available security updates and mitigations. Forty-seven percent say their technology provider doesn’t provide this transparency.
  • Seventy-four percent of respondents say it is highly important for their technology provider to apply ethical hacking practices to proactively identify and address vulnerabilities in its own products.
  • Seventy-one percent of respondents say it is highly important for technology providers to offer ongoing security assurance and evidence that the components are operating in a known and trusted state.

Part 2. The characteristics of the ideal technology provider

The characteristics are broken down into three categories: security assurance, innovation and adoption. Following are the most important characteristics of a technology provider, along with how many providers actually have each capability. As shown, there is a significant gap between the importance of these features and the ability of many providers to deliver them.

Security Assurance
The ability to identify vulnerabilities in its own products and mitigate them. Sixty-six percent say this is highly important. Only 46 percent of respondents say their current technology provider has this capability.

The ability to be transparent about security updates and mitigations that are available. Sixty-four percent of respondents say this is highly important. Less than half (48 percent) of respondents say their technology providers have this capability.

Ability to offer ongoing security assurance and evidence that the components are operating in a known and trusted state. Seventy-one percent say this is highly important.

Ability for the technology provider to have the capability to apply ethical hacking practices to proactively identify and address vulnerabilities in its own products. Seventy-four percent of respondents believe this is highly important.

Innovation
Protecting distributed workloads, data in use and hardware-assisted capabilities to defend against software exploits are highly important. The protection of customer data from insider threats is considered highly important by 79 percent of respondents. Organizations prioritize protecting data in use over data in transit and data at rest. Similarly, 76 percent of respondents say hardware-assisted capabilities to defend against software exploits and 72 percent of respondents say protecting distributed workloads are highly important.

Adoption
Interoperability issues and installation costs are the primary influencers when making investments in technologies. The top five factors that influence the deployment of security technologies are interoperability issues (63 percent of respondents), installation costs (58 percent of respondents), system complexity issues (57 percent of respondents), vendor support issues (55 percent of respondents) and scalability issues (53 percent of respondents).

As part of their decision-making process, organizations are measuring the economic benefits of security technologies deployed by their organizations. Forty-seven percent of respondents use metrics to understand the value of their technologies. The measures most often used are ROI (58 percent of respondents), the decrease in false positive rates (48 percent of respondents) and the total cost of ownership (46 percent of respondents).

Organizations are at risk because of the inability to quickly address vulnerabilities. As discussed, a top goal of the IT function is to improve the ability to quickly address vulnerabilities. Thirty-six percent of respondents say they only scan every month or more than once a month.

While 30 percent of respondents say their organizations can patch critical or high priority vulnerabilities in a week or less, on average, it takes almost six weeks to patch a vulnerability once it is detected. The delays in patching are mainly caused by human error (63 percent of respondents), the inability to take critical applications and systems off-line in order to patch quickly (58 percent of respondents) and not having a common view of applications and assets across security and IT teams (52 percent of respondents).

Organizations’ IT budgets are not sufficient to support a strong security posture. Eighty-six percent of respondents say their IT budget is only adequate (45 percent of respondents) or less than adequate (41 percent of respondents). Fifty-three percent of respondents say the IT security budget is part of the overall IT budget.

Responsibility for security is still uncertain across organizations. Twenty-one percent of respondents agree the security leader (CISO) should be responsible for IT security objectives, while 19 percent of respondents believe the CIO/CTO and 17 percent of respondents think the business unit leader should be responsible. The conclusion is that there is uncertainty in responsibility.

To read the rest of this study, visit Intel’s website at https://newsroom.intel.com/wp-content/uploads/sites/11/2021/03/2021-intel-poneman-study.pdf (PDF)

The ‘de-platforming’ of Donald Trump and the future of the Internet

Bob Sullivan

“De-platforming.”

That’s the word that stormed tech-land earlier this year, and it’s about time. After the storming of the U.S. Capitol by a criminal mob in January, a host of companies cut off then-President Trump’s digital air supply. His Tweets and Facebook posts fell first; next came second-order effects, like payment processors cutting off the money supply. Finally, upstart right-wing social media platform Parler was removed from app stores and then denied Internet service by Amazon.

Let the grand argument of our time begin.

The story of Donald Trump’s de-platforming involves a dramatic collision of Net Neutrality, and Section 230 “immunity,” and free speech, and surveillance capitalism, even privacy. I think it’s the issue of our time. It deserves much more than a story; it deserves an entire school of thought. But I’ll try to nudge the ball forward here.

This is not a political column, though I realize that everything is political right now. In this piece, I will not examine the merits of banning Trump from Twitter; you can read about that elsewhere.

The deplatforming of Trump is an important moment in the history of the Internet, and it should be examined as such. Yes, it’s very fair to ask, if this can happen to Trump, if it can happen to Parler, can’t it happen to anyone?

But let’s not examine it the way teen-agers do in their first year of college. Let’s not scream “free speech” or “slippery slope” at each other and then pop open a can of beer, assuming that that’s some kind of mic drop. Kids do that. Adults live on planet Earth, where decisions are complex, and evolve, and have real-life consequences.

I’ll start here. You can sell guns and beer in most places in America. You can’t sell a gun to someone who walks into your store screaming, “I’m going to kill someone,” and you can’t sell beer to someone who’s obviously drunk and getting into the driver’s seat. You can’t keep selling a car — or even a toaster! — that you know has a defect which causes fires. Technology companies are under no obligation to allow users to abuse others with the tools they build. Cutting them off is not censorship. In fact, forcing these firms to allow such abuse because someone belongs to a political party IS censorship, the very thing that happens in places like China. Tech firms are, and should be, free to make their own decisions about how their tools are used. (With…some exceptions! This is the real world.)

I’ll admit freely: This analogy is flawed. When it comes to technology law — and just technology choices — everyone reaches for analogies, because we want so much for there to be a precedent for our decisions. That takes the pressure off us. We want to say, “This isn’t my logic. It’s Thomas Jefferson’s logic! He’s the reason Facebook must permit lies about the election to be published in my news feed!” Sorry, life isn’t like that. We’re adults. We have to make these choices. They will be hard. They’re going to involve a version 1, and a version 2 and a version 3, and so on. The technology landscape is full of unintended, unexpected consequences, and we must rise to that challenge. We can’t cede our agency in an effort to find silver bullet solutions from the past. We have to make them up as we go along.

That’s why the best thing I read in the past few months about the Trump deplatforming was this piece by Techdirt’s Mike Masnick. He raises an issue that many tech folks want to avoid: Twitter and Facebook have tried really hard to explain their choices by shoehorning them into standing policy violations. That has left everyone unhappy. (Why didn’t they do this months ago? Wait, what policy was broken?) Masnick gets to the heart of the matter quickly.

So, Oremus is mostly correct that they’re making the rules up as they go along, but the problem with this framing is that it assumes that there are some magical rules you can put in place and then objectively apply them always. That’s never ever been the case. The problem with so much of the content moderation debate is that all sides assume these things. They assume that it’s easy to set up rules and easy to enforce them. Neither is true. Radiolab did a great episode a few years ago, detailing the process by which Facebook made and changed its rules. And it highlights some really important things including that almost every case is different, that it’s tough to apply rules to every case, and that context is always changing. And that also means the rules must always keep changing.

A few years back, we took a room full of content moderation experts and asked them to make content moderation decisions on eight cases — none of which I’d argue are anywhere near as difficult as deciding what to do with the President of the United States. And we couldn’t get these experts to agree on anything. On every case, we had at least one person choose each of the four options we gave them, and to defend that position. The platforms have rules because it gives them a framework to think about things, and those rules are useful in identifying both principles for moderation and some bright lines.

But every case is different.

For a long time, I have argued that tech firms’ main failing is they don’t spend anywhere near enough money on safety and security. They have, nearly literally, gotten away with murder for years while the tools they have made cause incalculable harm in the world. Depression, child abuse, illicit drug sales, societal breakdowns, income inequality…tech has driven all these things. The standard “it’s not the tech, it’s the people” argument is another “pop open a beer” one-liner used by techie libertarians who want to count their money without feeling guilty, but we know it’s a dangerous rationalization. Would so many people believe the Earth is flat without YouTube’s recommendation engine? No. Would America be so violently divided without social media? No. You built it, you have to fix it.

If a local mall had a problem with teen-age gangs hanging around at night causing mayhem, the mall would hire more security guards. That’s the cost of doing business. For years, big tech platforms have tried to get away with “community moderation” — i.e., they’ve been cheap. They haven’t spent anywhere near enough money to stop the crime committed on their platforms. Why? Because I think it’s quite possible the entire idea of Facebook wouldn’t exist if it had to be safe for users. Safety doesn’t scale. Safety is expensive. It’s not sexy to investors.

How did we get here? In part, thanks to that Section 230 you are hearing so much about. You’ll hear it explained this way: Section 230 gives tech firms immunity from bad things that happen on their platforms. Suddenly, folks who’ve not cared much about it for decades are yelling for its repeal. As many have expressed, it would be better to read up on Section 230 before yelling about it (Here’s my background piece on it). But in short, repealing it wholesale would indeed threaten the entire functioning of the digital economy. Section 230 was designed to do exactly what it is I am hinting at here — to give tech firms the ability to remove illegal or offensive content without assuming business-crushing liability for everything they do. Again, it’s a law written from an age before Amazon existed, so it sure could use modernization by adults. But pointing to it as some untouchable foundational document, or throwing out the baby with the bathwater, are both the behavior of children, not adults. We’re going to have to make some things up as we go along.

Here’s the thing about “free speech” on platforms like Twitter and Facebook. As a private company, Twitter can do whatever it wants to do with the tool it makes. Forcing it to carry this or that post is a terrible idea. President Trump can stand in front of a TV camera, or buy his own TV station (as seems likely) and say whatever he wants. Suspending his account is not censorship. As I explain in a recent Section 230 post, however, even that line of logic is incomplete. Social media firms use public resources, and some are so dominant that they are akin to a public square. We just might need a new rule for this problem! I suspect the right rule isn’t telling Twitter what to post, but perhaps making sure there is more diversity in technology tools available to speakers.

But here’s the thing I find myself saying most often right now: The First Amendment guarantees your right to say (most) things; it doesn’t guarantee your right to algorithmic juice. I believe this is the main point of confusion we face right now, and one that really sends a lot of First Amendment thinking for a loop. Information finds you now. That information has been hand-picked to titillate you like crack cocaine, or like sex. The more extreme things people say, the more likely they are to land on your Facebook wall, or in your Twitter feed, or wherever you live. It’s one thing to give Harold the freedom to yell to his buddies at the bar about children locked in a basement by Democrats at a pizza place in Washington D.C. It’s quite another to give him priority access to millions of people using a tool designed to make them feel like they are viewing porn. That’s what some people are calling free speech right now. James Madison didn’t include a guaranteed right to “virality” in the Bill of Rights. No one guaranteed that Thomas Paine’s pamphlets were to be shoved under everyone’s doors, let alone right in front of their eyeballs at the very moment they were most likely to take up arms. We’re going to need new ways to think about this. In the algorithmic world, the beer-popping line, “The solution to hate speech is more speech,” just doesn’t cut it.

I’m less interested in the Trump ban than I am in the ban of services like Parler. Trump will have no trouble speaking; but what of everyday conservatives who are now wondering if tech is out to get them? If you are liberal: Imagine if some future government decides Twitter is a den of illegal activity and shuts it down. That’s an uncomfortable intellectual exercise, and one we shouldn’t dismiss out of hand.

Parler is a scary place. Before anyone has anything to say about its right to exist, I think you really should spend some time there, or at least read my story about the lawsuit filed by Christopher Krebs. Ask yourself this question: What percentage of a platform’s content needs to be death threats, or about organizing violence, before you’ll stop defending its right to exist? Let’s say we heard about a tool named “Anteroom” which ISIS cells used to radicalize young Muslims, spew horrible things about Americans, teach bomb-making, and organize efforts to…storm the Capitol building in D.C. Would you really be so upset if Apple decided not to host Anteroom on its app store?

So, what do we do with Parler? Despite all this, I’m still uncomfortable with removing its access to network resources the way Amazon has. I think that feels more like smashing a printing press than forcing books into people’s living rooms. Amazon Web Services is much more like a public utility than Twitter is. The standard for removal there should be much higher, I think. And if it makes you uncomfortable that a private company made that decision, rather than some public body that is responsible to voting citizens, it should. At the same time, if you can think of no occasion for banning a service like Parler, then you just aren’t thinking.

These are complex issues and we will need our best, most creative legal minds to ponder them in the years to come. Here’s one: Matt Stoller, in his BIG newsletter about tech monopolies, offers a thoughtful framework for dealing with this problem. I’d like to hear yours. But don’t hit me with beer-popping lines or stuff you remember from Philosophy 101. This is no time for academic smugness. We have real problems down here on planet Earth.

Should conservative social network Parler Be Removed from AWS, Google, and Apple app stores? This is an interesting question, because Parler is where some of the organizing for the Capitol hill riot took place. Amazon just removed Parler from its cloud infrastructure, and Google and Apple removed Parler from their app stores. Removing the product may seem necessary to save lives, but having these tech oligarchs do it seems like a dangerous overreach of private authority. So what to do?

My view is that what Parler is doing should be illegal, because it should be responsible on product liability terms for the known outcomes of its product, aka violence. This is exactly what I wrote when I discussed the problem of platforms like Grindr and Facebook fostering harm to users. But what Parler is doing is *not* illegal, because Section 230 means it has no obligation for what its product does. So we’re dealing with a legal product and there’s no legitimate grounds to remove it from key public infrastructure. Similarly, what these platforms did in removing Parler should be illegal, because they should have a public obligation to carry all customers engaging in legal activity on equal terms. But it’s not illegal, because there is no such obligation. These are private entities operating public rights of way, but they are not regulated as such.

In other words, we have what should be an illegal product barred in a way that should also be illegal, a sort of ‘two wrongs make a right’ situation. I say ‘sort of’ because letting this situation fester without righting our public policy will lead to authoritarianism and censorship. What we should have is the legal obligation for these platforms to carry all legal content, and a legal framework that makes business models fostering violence illegal. That way, we can make public choices about public problems, and political violence organized on public rights-of-way certainly is a public problem.

 

Data Center Downtime at the Core and the Edge: A Survey of Frequency, Duration and Attitudes

Edge computing is expanding rapidly and re-shaping the data center ecosystem as organizations across industries move computing and storage closer to users to improve response times and reduce bandwidth requirements.

While forms of distributed computing have been common in some sectors for years, this current evolution is distinct in that it is enabling a broad range of new and emerging applications and has higher criticality requirements than traditional distributed computing sites.

At the same time, core data center managers are dealing with increased complexity and balancing multiple and sometimes conflicting priorities that can compromise availability.

As a result, today’s data center networks are more vulnerable to downtime than ever before. In an effort to quantify that vulnerability, the Ponemon Institute conducted a study of downtime frequency, duration and attitudes at the core and the edge, sponsored by Vertiv.

The study is based on responses from 425 participants representing 132 data centers and 1,667 edge locations. All core and edge data centers included in the study are located in the United States/Canada and Latin America (LATAM).

The study found data center networks vulnerable to downtime events across the network. Core data centers experienced an average of 2.4 total facility shutdowns per year with an average duration of more than two hours (138 minutes). This is in addition to almost 10 downtime events annually isolated to select racks or servers. At the edge, the frequency of total facility shutdowns was even higher, although the duration of those outages was less than half that of those in core data centers.
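As a rough back-of-the-envelope check on what those averages imply (this arithmetic is ours, using only the figures quoted above, not a number broken out in the report):

```python
# Rough annual downtime implied by the averages above (illustrative arithmetic only).
full_shutdowns_per_year = 2.4   # average total facility shutdowns, core data centers
avg_duration_minutes = 138      # average duration of a full shutdown

annual_minutes = full_shutdowns_per_year * avg_duration_minutes
print(f"~{annual_minutes:.0f} minutes (~{annual_minutes / 60:.1f} hours) of full-facility "
      "downtime per year, before counting the ~10 rack- or server-level events")
```

That works out to roughly five and a half hours of full-facility downtime per core site per year, before the far more frequent partial outages are counted.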

The study also looks at the attitudes that shape decisions regarding core and edge data centers to help identify factors that could be contributing to downtime events. More than half (54%) of all core data centers are not using best practices in system design and redundancy, and 69% say their risk of an unplanned outage is increased as a result of cost constraints.

Leading causes of unplanned downtime events at the core and the edge included cyberattacks, IT equipment failures, human error, UPS battery failure, and UPS equipment failure.

Finally, the study asked participants to identify the actions their organizations could take to prevent future downtime events. They identified activities ranging from investment in new equipment to infrastructure redundancy to improved training and documentation.

Key Findings

Facility Size
Edge data centers aren’t necessarily defined by size but by function. For the purpose of this research, edge data centers are defined as facilities that bring computation and data storage closer to the location where it is needed to improve response times and save bandwidth. Nevertheless, edge data centers were, on average, about one-third the size of the core data centers.

The extrapolated size for core data centers that participated in this study is 15,153 square feet/1,408 square meters. For edge computing facilities, the average size is 5,010 square feet/465 square meters.

Frequency of Core and Edge Downtime

 Figure 3 shows the shutdown experience of participating data centers over the past 24 months. As can be seen, total data center shutdown has the lowest frequency (4.81). However, these events are also the most disruptive, and the 4.81 unplanned total facility shutdowns over a 24-month period would be considered unacceptable for many organizations.

Partial outages of certain racks in the data center have the highest frequency at 9.93, followed by individual server outages at 9.43.

It can be difficult to directly compare the total number of downtime events in edge and core facilities due to the higher complexity generally found in core data centers and the increased presence of personnel in these facilities. However, it is possible to compare total facility shutdowns for core and edge data centers. Edge data centers experienced a slightly higher frequency of total facility shutdowns at an average of 5.39 over 24 months. As edge sites continue to proliferate, reducing the frequency of outages at the edge will become a high priority for many organizations.

TO READ THE REST OF THIS REPORT, VISIT VERTIV’S WEBSITE

Facebook needs a corrections policy, viral circuit breakers, and much more

Bob Sullivan

“After hearing that Facebook is saying that posting the Lord’s Prayer goes against their policies, I’m asking all Christians please post it. Our Father, who art in Heaven…”

Someone posted this obvious falsehood on Facebook recently, and it ended up on my wall. Soon after, a commenter in the responses broke the news to the writer that this was fake. The original writer then did what many social media users do: responded with a simple, “Well, someone else said it and I was just repeating it.” No apology. No obvious shame. And most important, no correction.

I don’t know how many people saw the original post, but I am certain far fewer people saw the response and link to Snopes showing it was a common hoax.

If there’s one thing that people rightly hate about journalists, it’s our tendency to put mistakes on the front page and corrections on the back page. That’s unfair, of course. If you make a mistake, you should try to make sure just as many people see the correction as the mistake. Journalists might be bad at this, but Facebook is awful at it. This is one reason social media is such a dastardly tool for sharing misinformation.

Fixing this is one of the novel recommendations in a great report published late last year by the Forum on Information and Democracy, an international group formed to make recommendations about the future of social media. The report suggests that, when an item like this Our Father assertion spreads on social media and is determined to be misinformation by an independent group of fact checkers, a correction should be shown to each and every user exposed to it. So, not just a lame “I dunno, a friend said it” that’s buried in later comments. Rather, it would require an attempt to undo the actual spread of incorrect information.

Anyone concerned about the future of democracy and discourse should take a look at this report. I’ve summarized a few of the highlights below.

Facebook recently said it would start downplaying political posts in an effort to deal with the disinformation problem. It must do much more than that. The Forum on Information and Democracy offers a good start.

 

Circuit breakers

The report calls for creation of friction to prevent misinformation from spreading like wildfire. My favorite part of this section calls for creation of “circuit breakers” that would slow down viral content after it reaches a certain threshold, giving fact-checkers a chance to examine it. The concept is borrowed from Wall Street. When trading in a certain stock falls dramatically out of pattern — say there’s a sudden spike in sales — circuit breakers kick in to give traders a moment to breathe and digest whatever news might exist. Sometimes, that short break re-introduces rationality to the market. Note: This wouldn’t infringe on anyone’s ability to say what they want; it would merely slow down the automated amplification of that speech.
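The report describes the mechanism only in policy terms. For a sense of how simple the core logic could be, here is a minimal sketch, assuming a rolling per-item share count; the record_share function, the threshold, and the one-hour window are invented for illustration, and a real platform would weigh many more signals.

```python
import time
from collections import defaultdict, deque
from typing import Optional

# Illustrative values only; the report does not specify thresholds.
SHARE_THRESHOLD = 1000   # shares allowed within the window before the breaker trips
WINDOW_SECONDS = 3600    # rolling one-hour window

share_log = defaultdict(deque)   # item_id -> timestamps of recent shares
paused_for_review = set()        # items held for independent fact-checking

def record_share(item_id: str, now: Optional[float] = None) -> bool:
    """Record one share; return True if the item may still be algorithmically amplified."""
    now = time.time() if now is None else now
    log = share_log[item_id]
    log.append(now)
    # Drop shares that have aged out of the rolling window.
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()
    # Trip the breaker: pause amplification until fact-checkers clear the item.
    if len(log) > SHARE_THRESHOLD:
        paused_for_review.add(item_id)
    return item_id not in paused_for_review
```

The key design point, as the report frames it, is that nothing is deleted; the item simply stops being machine-amplified until a human has looked at it.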

A digital ‘building code’ and ‘agency by design’

You can’t buy a toaster that hasn’t been tested to make sure it’s safe, but you can use software that might, metaphorically, burn your house down. This metaphor was used a lot after the mortgage meltdown in 2008, and it applies here, too. “In the same way that fire safety tests are conducted prior to a building being opened to the public, such a ‘digital building code’ would also result in a shift towards prevention of harm through testing prior to release to the public,” the report says.

The report adds a few great nuggets. While being critical of click-to-consent agreements, it says: “In the case of civil engineering, there are no private ‘terms and conditions’ that can override the public’s presumption of safety.” The report also calls for a concept called “agency by design” that would require software engineers to design opt-ins and other moments of permission so users are most likely to understand what they are agreeing to. This is “proactively choice-enhancing,” the report argues.

Abusability Testing

I’ve long been enamored of this concept. Most software is now tested by security professionals hired to “hack” it, a process called penetration testing. This concept should be expanded to include any kind of misuse of software by bad guys. If you create a new messenger service, could it be abused by criminals committing sweetheart scams? Could it be turned into a weapon by a nation-state promoting disinformation campaigns? Could it aid and abet those who commit domestic violence? Abusability testing should be standard, and critically, it should be conducted early on in software development.

Algorithmic undue influence

In other areas of law, contracts can be voided if one party has so much influence over the other that true consent could not be given. The report suggests that algorithms create this kind of imbalance online, so the concept should be extended to social media.

“(Algorithmic undue influence) could result in individual choices that would not occur but for the duplicitous intervention of an algorithm to amplify or withhold select information on the basis of engagement metrics created by a platform’s design or engineering choice.”

“Already, the deleterious impact of algorithmic amplification of COVID-19 misinformation has been seen, and there are documented cases where individuals took serious risks to themselves and others as a result of deceptive conspiracy theories presented to them on social platforms. The law should view these individuals as victims who relied on a hazardously engineered platform that exposed them to manipulative information that led to serious harm.”

De-segregation

Social media has created content bubbles and echo chambers. In the real world, it can be illegal to engineer residential developments that encourage segregation. The report suggests similar limitations online.

“Platforms … tacitly assume a role akin to town planners. … Some of the most insidious harms documented from social platforms have resulted from the algorithmic herding of users into homogenous clusters. … What we’re seeing is a cognitive segregation, where people exist in their own informational ghettos,” the report says.

In order to promote a shared digital commons, the report makes these suggestions.

Consider applying equivalent anti-segregation legal principles to the digital commons, including a ban on ‘digital redlining’, where platforms allow groups or advertisers to prevent particular racial or religious groups from accessing content.

Create legal tests focused on the ultimate effects of platform design on racial inequities and substantive fairness, regardless of the original intent of design.

Create specific standards and testing requirements for algorithmic bias.

Disclose conflict of interests

The report includes a lot more on algorithms and transparency, but one item I found noteworthy: If a platform promotes or demotes content in a way that is self-serving, it has to disclose that. If Facebook’s algorithm makes a decision about a new social media platform, or about a government’s efforts to regulate social media, Facebook would have to disclose that.

These are nuggets I pulled out, but to grasp the larger concepts, I strongly recommend you read the entire report.

The State of Breach and Attack Simulation and the Need for Continuous Security Validation: A Study of US and UK Organizations

The purpose of this research, sponsored by Cymulate, is to better understand how the rapidly evolving threat landscape and the frequency of changes in the IT architecture and in security are creating new challenges. The research focuses on the testing and validation of security controls in this extremely dynamic environment. We also seek to understand the issues organizations have in their ability to detect and remediate threats through assessments and testing of security controls.

Although change has always been a constant in both IT and cybersecurity, COVID-19 has accelerated business digitization and security adaptations. Seventy-nine percent of respondents say that they have had to modify security policies to accommodate working from home.

Sixty-two percent of respondents say their organizations had to acquire new security technologies to protect WFH, and yet 62 percent of respondents say their organizations did not validate these newly deployed security controls.
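What “validating a newly deployed security control” can mean in practice is worth making concrete. Below is a minimal sketch, in Python, of the breach-and-attack-simulation idea the study is concerned with: run a benign simulation of the technique a control is supposed to block and record whether it was actually blocked. The control names and stand-in results are invented for this illustration; they are not drawn from the Cymulate study or any vendor’s tooling.

    # Toy validation harness: each control is paired with a benign simulated
    # attack. Real BAS platforms run checks like these continuously and safely.

    def simulate_phishing_delivery():
        """Pretend to deliver a harmless test payload; True means it got through."""
        return False  # stand-in result: the email gateway blocked it

    def simulate_rdp_brute_force():
        return True   # stand-in result: the simulated technique succeeded

    CONTROL_CHECKS = {
        "email gateway blocks phishing test payload": simulate_phishing_delivery,
        "remote-access policy blocks RDP brute force": simulate_rdp_brute_force,
    }

    def validate_controls(checks):
        """Return the controls whose simulated attack was NOT blocked."""
        return [name for name, attack in checks.items() if attack()]

    if __name__ == "__main__":
        for name in validate_controls(CONTROL_CHECKS):
            print("CONTROL FAILED VALIDATION:", name)

Run after every control change (or on a schedule), a check like this closes the gap between “we deployed a control” and “we confirmed it works.”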

Ponemon Institute surveyed 1,016 IT and IT security practitioners in the United States and United Kingdom who are familiar with their organizations’ testing and evaluation of security controls. An average of 13 individuals staff the security team in organizations represented in this research.

Following are key takeaways from the research.

  • Sixty-one percent of respondents say the benefit of continuous security validation or frequent security testing is the ability to identify security gaps due to changes in the IT architecture, followed by 59 percent who say it is the ability to identify security gaps caused by human error and misconfigurations.
  • Sixty percent of respondents say their organizations are making frequent changes to security controls, either daily (27 percent of respondents) or weekly (33 percent of respondents). Sixty-seven percent of respondents say it is very important to test that the changes applied to security controls have not created security gaps such as software bugs or vulnerabilities, misconfigurations, and human error.
  • Seventy percent of respondents say it is important to validate the effectiveness of security controls against new threats and hacker techniques and tactics.

 The following findings are based on a deeper analysis of the research.

Vigilance in testing the effectiveness of security controls increases confidence that security controls are working as they are supposed to.

  • Respondents who say their organization is vigilant in testing the effectiveness of its security controls (38 percent of respondents) have a much higher level of confidence that those controls are working as they are supposed to. Of the 22 percent of respondents who rate their level of confidence as high, almost half (47 percent) say they are vigilant in their effectiveness testing.

High confidence in security controls improves the security posture in an evolving threat landscape.

  • Organizations that have a high level of confidence that their security controls are working as they are supposed to are applying changes to security controls (e.g., configuration settings, software or signature updates, policy rules, etc.) daily or weekly.
  • These organizations have a much lower percentage of security controls that fail pen testing and/or attack simulation within each cycle. Specifically, 25 percent of respondents with high confidence say less than 10 percent of security controls fail pen testing and/or attack simulation.

“It is clear from the report that security experts see the need for continuous security validation. Given that the primary methodology for security testing is limited in scope, manual and a lengthy process, it does not meet the pace of new threats and business-driven IT change. It comes as no surprise that threat actors are free to exploit remote access, remote desktop, and virtual desktop vulnerabilities, as companies expanded the use of these technologies without security validation, to support employees working from home,” said Eyal Wachsman, co-founder and CEO at Cymulate.

The report is organized according to the following topics.

  • The impact of current approaches to the testing of security controls on an organization’s security posture
  • Security control validation and Breach and Attack Simulation (BAS)
  • Steps taken to address possible security risks due to COVID-19
  • Perceptions about the effectiveness of Managed Security Service Providers (MSSPs)
  • Differences between organizations in the US and UK

Read the entire report on Cymulate’s website

 

The ‘de-platforming’ of Donald Trump and the future of the Internet

Bob Sullivan

“De-platforming.”

That’s the word of the week in tech-land, and it’s about time. After the storming of the U.S. Capitol by a criminal mob on Wednesday, a host of companies have cut off President Trump’s digital air supply. His Tweets and Facebook posts fell first; next came second-order effects, like payment processors cutting off the money supply. Finally, upstart right-wing social media platform Parler was removed from app stores and then denied Internet service by Amazon.

Let the grand argument of our time begin.

The story of Donald Trump’s de-platforming involves a dramatic collision of Net Neutrality, and Section 230 “immunity,” and free speech, and surveillance capitalism, even privacy. I think it’s the issue of our time. It deserves much more than a story; it deserves an entire school of thought. But I’ll try to nudge the ball forward here.

This is not a political column, though I realize that everything is political right now. In this piece, I will not examine the merits of banning Trump from Twitter; you can read about that elsewhere.

The deplatforming of Trump is an important moment in the history of the Internet, and it should be examined as such. Yes, it’s very fair to ask, if this can happen to Trump, if it can happen to Parler, can’t it happen to anyone?

But let’s examine it the way teen-agers do in their first year of college. Let’s not scream “free speech” or “slippery slope” at each other and then pop open a can of beer, assuming that that’s some kind of mic drop. Kids do that. Adults live on planet Earth, where decisions are complex, and evolve, and have real-life consequences.

I’ll start here. You can sell guns and beer in most places in America. You can’t sell a gun to someone who walks into your store screaming, “I’m going to kill someone,” and you can’t sell beer to someone who’s obviously drunk and getting into the driver’s seat. You can’t keep selling a car — or even a toaster! — that you know has a defect which causes fires. Technology companies are under no obligation to allow users to abuse others with the tools they build. Cutting them off is not censorship. In fact, forcing these firms to allow such abuse because someone belongs to a political party IS censorship, the very thing that happens in places like China. Tech firms are, and should be, free to make their own decisions about how their tools are used. (With…some exceptions! This is the real world.)

I’ll admit freely: This analogy is flawed. When it comes to technology law — and just technology choices — everyone reaches for analogies, because we want so much for there to be a precedent for our decisions. That takes the pressure off us. We want to say, “This isn’t my logic. It’s Thomas Jefferson’s logic! He’s the reason Facebook must permit lies about the election to be published in my news feed!” Sorry, life isn’t like that. We’re adults. We have to make these choices. They will be hard. They’re going to involve a version 1, and a version 2 and a version 3, and so on. The technology landscape is full of unintended, unexpected consequences, and we must rise to that challenge. We can’t cede our agency in an effort to find silver bullet solutions from the past. We have to make them up as we go along.

That’s why the best thing I read the past few days about the Trump deplatforming was this piece by Techdirt’s Mike Masnick. He raises an issue that many tech folks want to avoid: Twitter and Facebook have tried really hard to explain their choices by shoehorning them into standing policy violations. That has left everyone unhappy. (Why didn’t they do this months ago? Wait, what policy was broken?) Masnick gets to the heart of the matter quickly.

So, Oremus is mostly correct that they’re making the rules up as they go along, but the problem with this framing is that it assumes that there are some magical rules you can put in place and then objectively apply them always. That’s never ever been the case. The problem with so much of the content moderation debate is that all sides assume these things. They assume that it’s easy to set up rules and easy to enforce them. Neither is true. Radiolab did a great episode a few years ago, detailing the process by which Facebook made and changed its rules. And it highlights some really important things including that almost every case is different, that it’s tough to apply rules to every case, and that context is always changing. And that also means the rules must always keep changing.

A few years back, we took a room full of content moderation experts and asked them to make content moderation decisions on eight cases — none of which I’d argue are anywhere near as difficult as deciding what to do with the President of the United States. And we couldn’t get these experts to agree on anything. On every case, we had at least one person choose each of the four options we gave them, and to defend that position. The platforms have rules because it gives them a framework to think about things, and those rules are useful in identifying both principles for moderation and some bright lines.

But every case is different.

For a long time, I have argued that tech firms’ main failing is they don’t spend anywhere near enough money on safety and security. They have, nearly literally, gotten away with murder for years while the tools they have made cause incalculable harm in the world. Depression, child abuse, illicit drug sales, societal breakdowns, income inequality…tech has driven all these things. The standard “it’s not the tech, it’s the people” argument is another “pop open a beer” one-liner used by techie libertarians who want to count their money without feeling guilty, but we know it’s a dangerous rationalization. Would so many people believe the Earth is flat without YouTube’s recommendation engine? No. Would America be so violently divided without social media? No. You built it, you have to fix it.

If a local mall had a problem with teen-age gangs hanging around at night causing mayhem, the mall would hire more security guards. That’s the cost of doing business. For years, big tech platforms have tried to get away with “community moderation” — i.e., they’ve been cheap. They haven’t spent anywhere near enough money to stop the crime committed on their platforms. Why? Because I think it’s quite possible the entire idea of Facebook wouldn’t exist if it had to be safe for users. Safety doesn’t scale. Safety is expensive. It’s not sexy to investors.

How did we get here? In part, thanks to that Section 230 you are hearing so much about. You’ll hear it explained this way: Section 230 gives tech firms immunity from bad things that happen on their platforms. Suddenly, folks who’ve not cared much about it for decades are yelling for its repeal. As many have expressed, it would be better to read up on Section 230 before yelling about it (Here’s my background piece on it). But in short, repealing it wholesale would indeed threaten the entire functioning of the digital economy. Section 230 was designed to do exactly what it is I am hinting at here — to give tech firms the ability to remove illegal or offensive content without assuming business-crushing liability for everything they do. Again, it’s a law written from an age before Amazon existed, so it sure could use modernization by adults. But pointing to it as some untouchable foundational document, or throwing out the baby with the bathwater, are both the behavior of children, not adults. We’re going to have to make some things up as we go along.

Here’s the thing about “free speech” on platforms like Twitter and Facebook. As a private company, Twitter can do whatever it wants to do with the tool it makes. Forcing it to carry this or that post is a terrible idea. President Trump can stand in front of a TV camera, or buy his own TV station (as seems likely) and say whatever he wants. Suspending his account is not censorship. As I explain in my Section 230 post, however, even that line of logic is incomplete. Social media firms use public resources, and some are so dominant that they are akin to a public square. We just might need a new rule for this problem! I suspect the right rule isn’t telling Twitter what to post, but perhaps making sure there is more diversity in technology tools available to speakers.

But here’s the thing I find myself saying most often right now: The First Amendment guarantees your right to say (most) things; it doesn’t guarantee your right to algorithmic juice. I believe this is the main point of confusion we face right now, and one that really sends a lot of First Amendment thinking for a loop. Information finds you now. That information has been hand-picked to titillate you like crack cocaine, or like sex. The more extreme things people say, the more likely they are to land on your Facebook wall, or in your Twitter feed, or wherever you live. It’s one thing to give Harold the freedom to yell to his buddies at the bar about children locked in a basement by Democrats at a pizza place in Washington D.C. It’s quite another to give him priority access to millions of people using a tool designed to make them feel like they are viewing porn. That’s what some people are calling free speech right now. James Madison didn’t include a guaranteed right to “virality” in the Bill of Rights. No one guaranteed that Thomas Paine’s pamphlets were to be shoved under everyone’s doors, let alone right in front of their eyeballs at the very moment they were most likely to take up arms. We’re going to need new ways to think about this. In the algorithmic world, the beer-popping line, “The solution to hate speech is more speech,” just doesn’t cut it.

I’m less interested in the Trump ban than I am in the ban of services like Parler. Trump will have no trouble speaking; but what of everyday conservatives who are now wondering if tech is out to get them? If you are liberal: Imagine if some future government decides Twitter is a den of illegal activity and shuts it down. That’s an uncomfortable intellectual exercise, and one we shouldn’t dismiss out of hand.

Parler is a scary place. Before anyone has anything to say about its right to exist, I think you really should spend some time there, or at least read my story about the lawsuit filed by Christopher Krebs. Ask yourself this question: What percentage of a platform’s content needs to be death threats, or about organizing violence, before you’ll stop defending its right to exist? Let’s say we heard about a tool named “Anteroom” which ISIS cells used to radicalize young Muslims, spew horrible things about Americans, teach bomb-making, and organize efforts to…storm the Capitol building in D.C. Would you really be so upset if Apple decided not to host Anteroom in its App Store?

So, what do we do with Parler? Despite all this, I’m still uncomfortable with removing its access to network resources the way Amazon has. I think that feels more like smashing a printing press than forcing books into people’s living rooms. Amazon Web Services is much more like a public utility than Twitter is. The standard for removal there should be much higher, I think. And if it makes you uncomfortable that a private company made that decision, rather than some public body that is responsible to voting citizens, it should. At the same time, if you can think of no occasion for banning a service like Parler, then you just aren’t thinking.

These are complex issues and we will need our best, most creative legal minds to ponder them in the years to come. Here’s one: Matt Stoller, in his Big newsletter about tech monopolies, offers a thoughtful framework for dealing with this problem. I’d like to hear yours. But don’t hit me with beer-popping lines or stuff you remember from Philosophy 101. This is no time for academic smugness. We have real problems down here on planet Earth.

Should conservative social network Parler be removed from AWS, Google, and Apple app stores? This is an interesting question, because Parler is where some of the organizing for the Capitol Hill riot took place. Amazon just removed Parler from its cloud infrastructure, and Google and Apple removed Parler from their app stores. Removing the product may seem necessary to save lives, but having these tech oligarchs do it seems like a dangerous overreach of private authority. So what to do?

My view is that what Parler is doing should be illegal, because it should be responsible on product liability terms for the known outcomes of its product, aka violence. This is exactly what I wrote when I discussed the problem of platforms like Grindr and Facebook fostering harm to users. But what Parler is doing is *not* illegal, because Section 230 means it has no obligation for what its product does. So we’re dealing with a legal product and there’s no legitimate grounds to remove it from key public infrastructure. Similarly, what these platforms did in removing Parler should be illegal, because they should have a public obligation to carry all customers engaging in legal activity on equal terms. But it’s not illegal, because there is no such obligation. These are private entities operating public rights of way, but they are not regulated as such.

In other words, we have what should be an illegal product barred in a way that should also be illegal, a sort of ‘two wrongs make a right’ situation. I say ‘sort of’ because letting this situation fester without righting our public policy will lead to authoritarianism and censorship. What we should have is the legal obligation for these platforms to carry all legal content, and a legal framework that makes business models fostering violence illegal. That way, we can make public choices about public problems, and political violence organized on public rights-of-way certainly is a public problem.

 

Rethinking Firewalls: Security and Agility for the Modern Enterprise

The purpose of the research sponsored by Guardicore is to learn how enterprises perceive their legacy firewalls within their security ecosystems on premises and in the cloud. Ponemon Institute surveyed 603 IT and IT security practitioners in the United States who are decision makers or influencers of technology decisions to protect enterprises’ critical applications and data. Respondents also are involved at different levels in purchasing or using the firewall technologies.

Legacy firewalls are ineffective in securing applications and data in the data center. Respondents were asked how effective their legacy firewalls are on a scale from 1 = not effective to 10 = very effective. Figure 1 shows the responses of 7 or higher. According to the figure, only 33 percent of respondents rate their legacy firewalls as very or highly effective in securing applications and data in the data center. Legacy firewalls are also mostly ineffective at preventing a ransomware attack: only 36 percent of respondents say their organizations are highly effective in preventing such an attack.

The findings of the report show that the number one concern of firewall buyers is whether they can actually get next-gen firewalls to work in their environments. As organizations move into the cloud, legacy firewalls do not have the scalability, flexibility or reliability to secure these environments, driving up costs while failing to reduce the attack surface. As a result, organizations are reaching the conclusion that legacy firewalls are simply not worth the time and effort and are actually negatively impacting digital transformation initiatives. This is driving a move toward modern security solutions like micro-segmentation, which can more effectively enforce security at the edge.
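For readers unfamiliar with the term, micro-segmentation ties policy to workload identity (labels) rather than to IP addresses. The sketch below, in Python, is only an illustration of that idea with invented labels; it does not reflect Guardicore’s actual policy model or syntax. An east-west connection is allowed only if an explicit label-to-label rule exists.

    # Label-based allow rules: (source label, destination label, port)
    ALLOW_RULES = {
        ("web-frontend", "orders-api", 443),
        ("orders-api", "orders-db", 5432),
    }

    def is_allowed(src_labels, dst_labels, port):
        """Permit an east-west connection only if some label pair is whitelisted."""
        return any(
            (src, dst, port) in ALLOW_RULES
            for src in src_labels
            for dst in dst_labels
        )

    # A web server may reach the orders API over TLS...
    print(is_allowed({"web-frontend"}, {"orders-api"}, 443))   # True
    # ...but it cannot reach the database directly, whatever its IP address.
    print(is_allowed({"web-frontend"}, {"orders-db"}, 5432))   # False

Because the rules follow workload labels, they do not need to be rewritten every time an application moves, scales, or changes address, which is exactly where the survey says legacy firewall rule changes bog down.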

Following are research findings that reveal why legacy firewalls are ineffective.

Legacy firewalls are ineffective in preventing cyberattacks against applications. Only 37 percent of respondents say their organizations’ legacy firewalls’ ability to prevent cyberattacks against critical business and cloud-based applications is high or very high.

Organizations are vulnerable to a data breach. Only 39 percent of respondents say their organization is confident that it can contain a breach of its data center perimeter.

Legacy firewalls do not protect against ransomware attacks. Only 36 percent of respondents say their legacy firewalls are highly effective at preventing a ransomware attack, and only 33 percent say their legacy firewalls are very or highly effective in securing applications and data in the data center.

Legacy firewalls are failing to enable Zero Trust across the enterprise. Only 37 percent of respondents rate their organizations’ legacy firewalls as very or highly effective at enabling Zero Trust across the enterprise.

Legacy firewalls are ineffective in securing applications and data in the cloud. Sixty-four percent of respondents say cloud security is essential (34 percent) or very important (30 percent). However, only 39 percent of respondents say legacy firewalls are very or highly effective in securing applications and data in the cloud.

Legacy firewalls kill flexibility and speed. Organizations are at risk because of the lack of flexibility and speed in making changes to legacy firewalls. On average, it takes three weeks to more than a month to update legacy firewall rules to accommodate an update or a new application. Only 37 percent of respondents say their organizations are very flexible in making changes to their networks or applications, and only 24 percent of respondents say their organizations have a high ability to quickly secure new applications or change security configurations for existing applications.

Legacy firewalls limit access control and are costly to implement. Sixty-two percent of respondents say access control policies are not granular enough and almost half (48 percent of respondents) say legacy firewalls take too long to implement.

The majority of organizations in this study are ready to get rid of their legacy firewalls because of their ineffectiveness. Fifty-three percent of respondents say their organizations are ready to purchase an alternative or complementary solution. The two top reasons are the desire to have a single security solution for on-premises and cloud data center security (44 percent of respondents) and to improve their ability to reduce lateral movement and secure access to critical data (31 percent of respondents).

Firewall labor and other costs are too high. Fifty-one percent of respondents say their organizations are considering a reduction in their firewall footprint. The top reasons are that labor and other costs are too high (60 percent of respondents) and that current firewalls do not provide adequate security for internal data center east-west traffic (52 percent of respondents).

“The findings of the report reflect what many CISOs and security professionals already know – digital transformation has rendered the legacy firewall obsolete,” said Pavel Gurvich, co-founder and CEO, Guardicore. “As organizations adopt cloud, IoT, and DevOps to become more agile, antiquated network security solutions are not only ineffective at stopping attacks on these properties, but actually hinder the desired flexibility and speed they are hoping to attain.”

To read a full copy of the report, please visit Guardicore’s website.

 

Facebook needs a corrections policy, viral circuit breakers, and much more

I strongly recommend you read the entire report. (PDF)

Bob Sullivan

“After hearing that Facebook is saying that posting the Lord’s Prayer goes against their policies, I’m asking all Christians please post it. Our Father, who art in Heaven…”

Someone posted this obvious falsehood on Facebook recently, and it ended up on my wall. Soon after, a commenter in the responses broke the news to the writer that this was fake. The original writer then did what many social media users do: responded with a simple, “Well, someone else said it and I was just repeating it.” No apology. No obvious shame. And most important, no correction.

I don’t know how many people saw the original post, but I am certain far fewer people saw the response and link to Snopes showing it was a common hoax.

If there’s one thing that people rightly hate about journalists, it’s our tendency to put mistakes on the front page and corrections on the back page. That’s unfair, of course. If you make a mistake, you should try to make sure just as many people see the correction as the mistake. Journalists might be bad at this, but Facebook is awful at it. This is one reason social media is such a dastardly tool for sharing misinformation.

Fixing this is one of the novel recommendations in a great report published last week by the Forum on Information and Democracy, an international group formed to make recommendations about the future of social media. The report suggests that, when an item like this Our Father assertion spreads on social media and is determined to be misinformation by an independent group of fact checkers, a correction should be shown to each and every user exposed to it. So, not just a lame “I dunno, a friend said it” that’s buried in later comments. Rather, it would require an attempt to undo the actual spread of incorrect information.
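In engineering terms, that recommendation means remembering who was exposed to a post and pushing the correction back to exactly those people. The snippet below is a toy sketch of that bookkeeping in Python; the class name, post ID, and user ID are invented for illustration and do not come from the report.

    from collections import defaultdict

    class CorrectionLedger:
        """Toy model of 'show the correction to everyone who saw the original.'"""

        def __init__(self):
            self.exposures = defaultdict(set)   # post_id -> user_ids who saw it
            self.corrections = {}               # post_id -> correction text

        def record_exposure(self, post_id, user_id):
            self.exposures[post_id].add(user_id)

        def flag_as_misinformation(self, post_id, correction_text):
            """Called after independent fact-checkers rule on the post."""
            self.corrections[post_id] = correction_text

        def corrections_for(self, user_id):
            """Corrections owed to this user on their next visit."""
            return [
                text for post_id, text in self.corrections.items()
                if user_id in self.exposures[post_id]
            ]

    ledger = CorrectionLedger()
    ledger.record_exposure("lords-prayer-hoax", "user-42")
    ledger.flag_as_misinformation("lords-prayer-hoax",
                                  "Fact-checkers rated this claim false.")
    print(ledger.corrections_for("user-42"))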

Anyone concerned about the future of democracy and discourse should take a look at this report. I’ve summarized a few of the highlights below.

Circuit breakers

The report calls for the creation of friction to prevent misinformation from spreading like wildfire. My favorite part of this section calls for the creation of “circuit breakers” that would slow down viral content after it reaches a certain threshold, giving fact-checkers a chance to examine it. The concept is borrowed from Wall Street. When trading in a certain stock falls dramatically out of pattern — say there’s a sudden spike in sales — circuit breakers kick in to give traders a moment to breathe and digest whatever news might exist. Sometimes, that short break re-introduces rationality to the market. Note: This wouldn’t infringe on anyone’s ability to say what they want; it would merely slow down the automated amplification of this speech.
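To make the analogy concrete, here is a minimal sketch, in Python, of a share-velocity circuit breaker. The threshold and names are invented for illustration; the report does not prescribe an implementation. Once a post’s shares in the past hour cross the threshold, the breaker trips and the ranking system stops boosting the post until a reviewer clears it.

    import time
    from collections import deque

    class ViralCircuitBreaker:
        """Pause algorithmic amplification of a post while it is reviewed."""

        def __init__(self, threshold=10_000):   # illustrative threshold
            self.threshold = threshold
            self.share_times = deque()          # timestamps of recent shares
            self.paused = False

        def record_share(self, now=None):
            now = now if now is not None else time.time()
            self.share_times.append(now)
            # Keep only the last hour of shares.
            while self.share_times and now - self.share_times[0] > 3600:
                self.share_times.popleft()
            if len(self.share_times) >= self.threshold:
                self.paused = True              # trip the breaker; humans review

        def may_amplify(self):
            """The ranking system checks this before boosting the post."""
            return not self.paused

        def clear_after_review(self):
            """Reviewers found nothing wrong; resume normal treatment."""
            self.paused = False

    breaker = ViralCircuitBreaker(threshold=3)  # tiny threshold just for the demo
    for _ in range(3):
        breaker.record_share()
    print(breaker.may_amplify())                # False: the post is held for review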

A digital ‘building code’ and ‘agency by design’

You can’t buy a toaster that hasn’t been tested to make sure it’s safe, but you can use software that might, metaphorically, burn your house down. This metaphor was used a lot after the mortgage meltdown in 2008, and it applies here, too. “In the same way that fire safety tests are conducted prior to a building being opened to the public, such a ‘digital building code’ would also result in a shift towards prevention of harm through testing prior to release to the public,” the report says.

The report adds a few great nuggets. While being critical of click-to-consent agreements, it says: “In the case of civil engineering, there are no private ‘terms and conditions’ that can override the public’s presumption of safety.” The report also calls for a concept called “agency by design” that would require software engineers to design opt-ins and other moments of permission so users are most likely to understand what they are agreeing to. This is “proactively choice-enhancing,” the report argues.

Abusability Testing

I’ve long been enamored of this concept. Most software is now tested by security professionals hired to “hack” it, a process called penetration testing. This concept should be expanded to include any kind of misuse of software by bad guys. If you create a new messenger service, could it be abused by criminals committing sweetheart scams? Could it be turned into a weapon by a nation-state promoting disinformation campaigns? Could it aid and abet those who commit domestic violence? Abusability testing should be standard, and critically, it should be conducted early on in software development.
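As a rough illustration of how abusability testing could sit alongside an ordinary test suite, the sketch below (in Python; the scenarios, mitigations, and pass/fail convention are all invented for this example) enumerates abuse scenarios the way a penetration tester enumerates attack paths, and fails the build when any scenario has no documented mitigation.

    # Hypothetical abuse scenarios for a new messaging feature.
    ABUSE_SCENARIOS = [
        "romance scammer contacts many strangers per day",
        "state actor bulk-creates accounts to push disinformation",
        "abusive ex uses read receipts and location sharing to stalk a victim",
    ]

    # Mitigations the (imaginary) product team has actually shipped.
    MITIGATIONS = {
        "romance scammer contacts many strangers per day":
            "rate-limit first messages to non-contacts",
        "state actor bulk-creates accounts to push disinformation":
            "phone verification plus signup anomaly detection",
        # note: no mitigation recorded for the stalking scenario
    }

    def abusability_gaps(scenarios, mitigations):
        """Return the scenarios that have no documented mitigation."""
        return [s for s in scenarios if s not in mitigations]

    if __name__ == "__main__":
        gaps = abusability_gaps(ABUSE_SCENARIOS, MITIGATIONS)
        for scenario in gaps:
            print("UNMITIGATED:", scenario)
        raise SystemExit(1 if gaps else 0)

Run early, as the text argues, a check like this is cheap; bolted on after launch, the unmitigated scenarios have already become someone’s real harm.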

Algorithmic undue influence

In other areas of law, contracts can be voided if one party has so much influence over the other that true consent could not be given. The report suggests that algorithms create this kind of imbalance online, so the concept should be extended to social media.

“(Algorithmic undue influence) could result in individual choices that would not occur but for the duplicitous intervention of an algorithm to amplify or withhold select information on the basis of engagement metrics created by a platform’s design or engineering choice.”

“Already, the deleterious impact of algorithmic amplification of COVID-19 misinformation has been seen, and there are documented cases where individuals took serious risks to themselves and others as a result of deceptive conspiracy theories presented to them on social platforms. The law should view these individuals as victims who relied on a hazardously engineered platform that exposed them to manipulative information that led to serious harm.”

De-segregation

Social media has created content bubbles and echo chambers. In the real world, it can be illegal to engineer residential developments that encourage segregation. The report suggests similar limitations online.

“Platforms … tacitly assume a role akin to town planners. … Some of the most insidious harms documented from social platforms have resulted from the algorithmic herding of users into homogenous clusters. … What we’re seeing is a cognitive segregation, where people exist in their own informational ghettos,” the report says.

In order to promote a shared digital commons, the report makes these suggestions:

  • Consider applying equivalent anti-segregation legal principles to the digital commons, including a ban on ‘digital redlining’, where platforms allow groups or advertisers to prevent particular racial or religious groups from accessing content.
  • Create legal tests focused on the ultimate effects of platform design on racial inequities and substantive fairness, regardless of the original intent of design.
  • Create specific standards and testing requirements for algorithmic bias (a rough sketch of what such a test could look like follows this list).
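One way to read “testing requirements for algorithmic bias” in engineering terms is a routine disparity check on a system’s outcomes. The sketch below, in Python, uses made-up counts and one crude metric (the ratio of selection rates across groups); it illustrates the kind of test that could be standardized and is not a method prescribed by the report.

    # Outcomes of some automated decision, grouped by a protected attribute.
    # The counts are invented for illustration.
    OUTCOMES = {
        "group_a": {"selected": 480, "total": 1000},
        "group_b": {"selected": 300, "total": 1000},
    }

    def selection_rate(group):
        return group["selected"] / group["total"]

    def disparity_ratio(groups):
        """Ratio of the lowest selection rate to the highest (1.0 = parity)."""
        rates = [selection_rate(g) for g in groups.values()]
        return min(rates) / max(rates)

    ratio = disparity_ratio(OUTCOMES)
    print(f"selection-rate ratio: {ratio:.2f}")
    # A common (and contested) rule of thumb flags ratios below 0.8 for review.
    if ratio < 0.8:
        print("flag for bias review")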

Disclose conflicts of interest

The report includes a lot more on algorithms and transparency, but one item I found noteworthy: If a platform promotes or demotes content in a way that is self-serving, it would have to disclose that. If Facebook’s algorithm makes a decision about a new social media platform, or about a government’s efforts to regulate social media, Facebook would have to disclose that.

These are nuggets I pulled out, but to grasp the larger concepts, I strongly recommend you read the entire report. (PDF)