
The role of transparency and security assurance in driving technology decision-making

The purpose of this research is to understand what affects an organization’s security technology investment decision-making. Sponsored by Intel, Ponemon Institute surveyed 1,875 individuals in the US, UK, EMEA and Latin America who are involved in securing or overseeing the security of their organization’s information systems or IT infrastructure. In addition, they are familiar with their organization’s purchase of IT security technologies and services. The full report is available from Intel; a link appears at the end of this article.

A key finding from this research is the importance of technology providers being transparent and proactive in helping organizations manage their cybersecurity risks. Seventy-three percent of respondents say their organizations are more likely to purchase technologies and services from companies that are finding, mitigating and communicating security vulnerabilities proactively. Sixty-six percent of respondents say it is very important for their technology provider to have the capability to adapt to the changing threat landscape. Yet 54 percent of respondents say their technology providers don’t offer this capability.

“Security doesn’t just happen. If you are not finding vulnerabilities, then you are not looking hard enough,” said Suzy Greenberg, vice president, Intel Product Assurance and Security. “Intel takes a transparent approach to security assurance to empower customers and deliver product innovations that build defenses at the foundation, protect workloads and improve software resilience. This intersection between innovation and security is what builds trust with our customers and partners.”

Key findings from the study include:

  • Seventy-three percent of respondents say their organization is more likely to purchase technologies and services from technology providers that are proactive about finding, mitigating and communicating security vulnerabilities. Forty-eight percent say their technology providers don’t offer this capability.
  • Seventy-six percent of respondents say it is highly important that their technology provider offer hardware-assisted capabilities to mitigate software exploits.
  • Sixty-four percent of respondents say it is highly important for their technology provider to be transparent about available security updates and mitigations. Forty-seven percent say their technology provider doesn’t provide this transparency.
  • Seventy-four percent of respondents say it is highly important for their technology provider to apply ethical hacking practices to proactively identify and address vulnerabilities in its own products.
  • Seventy-one percent of respondents say it is highly important for technology providers to offer ongoing security assurance and evidence that the components are operating in a known and trusted state.

Part 2. The characteristics of the ideal technology provider

The characteristics are broken down into three categories: security assurance, innovation and adoption. The following are the most important characteristics of a technology provider, alongside how many respondents say their current provider actually delivers each one. As shown, there is a significant gap between the importance of these features and the ability of many providers to deliver them.

Security Assurance
The ability to identify vulnerabilities in its own products and mitigate them. Sixty-six percent say this is highly important. Only 46 percent of respondents say their current technology provider has this capability.

The ability to be transparent about security updates and mitigations that are available. Sixty-four percent of respondents say this is highly important. Less than half (48 percent) of respondents say their technology providers have this capability.

The ability to offer ongoing security assurance and evidence that the components are operating in a known and trusted state. Seventy-one percent say this is highly important.

The ability to apply ethical hacking practices to proactively identify and address vulnerabilities in its own products. Seventy-four percent of respondents believe this is highly important.

Innovation
Protecting distributed workloads, protecting data in use and hardware-assisted capabilities to defend against software exploits are all considered highly important. The protection of customer data from insider threats is considered highly important by 79 percent of respondents. Organizations prioritize protecting data in use over data in transit and data at rest. Similarly, 76 percent of respondents say hardware-assisted capabilities to defend against software exploits are highly important, and 72 percent say the same of protecting distributed workloads.

Adoption
Interoperability issues and installation costs are the primary factors influencing investments in security technologies. The top five factors that influence the deployment of security technologies are interoperability issues (63 percent of respondents), installation costs (58 percent of respondents), system complexity issues (57 percent of respondents), vendor support issues (55 percent of respondents) and scalability issues (53 percent of respondents).

As part of their decision-making process, organizations measure the economic benefits of the security technologies they deploy. Forty-seven percent of respondents use metrics to understand the value of their technologies. The measures most often used are ROI (58 percent of respondents), the decrease in false positive rates (48 percent of respondents) and the total cost of ownership (46 percent of respondents).
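
For readers who want to see how these three measures fit together, here is a minimal illustrative sketch in Python. The figures, function names and cost categories are hypothetical and do not come from the study; they simply show one way ROI, the decrease in false positive rates and total cost of ownership could be calculated for a security technology deployment.

def roi(benefit, cost):
    # Return on investment expressed as a fraction of the amount spent
    return (benefit - cost) / cost

def false_positive_reduction(fp_before, fp_after):
    # Relative decrease in false positives after the technology is deployed
    return (fp_before - fp_after) / fp_before

def total_cost_of_ownership(license_fee, deployment_cost, annual_operations, years):
    # Purchase price plus deployment cost plus ongoing operating costs over the period
    return license_fee + deployment_cost + annual_operations * years

# Entirely made-up example figures
print(f"ROI: {roi(750_000, 500_000):.0%}")                                       # 50%
print(f"False positive reduction: {false_positive_reduction(1_200, 700):.0%}")   # 42%
print(f"3-year TCO: ${total_cost_of_ownership(200_000, 50_000, 80_000, 3):,}")   # $490,000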

Organizations are at risk because of the inability to quickly address vulnerabilities. As discussed, a top goal of the IT function is to improve the ability to quickly address vulnerabilities. Thirty-six percent of respondents say they only scan every month or more than once a month.

While 30 percent of respondents say their organizations can patch critical or high priority vulnerabilities in a week or less, on average, it takes almost six weeks to patch a vulnerability once it is detected. The delays in patching are mainly caused by human error (63 percent of respondents), the inability to take critical applications and systems off-line in order to patch quickly (58 percent of respondents) and not having a common view of applications and assets across security and IT teams (52 percent of respondents).

Organizations’ IT budgets are not sufficient to support a strong security posture. Eighty-six percent of respondents say their IT budget is only adequate (45 percent of respondents) or less than adequate (41 percent of respondents). Fifty-three percent of respondents say the IT security budget is part of the overall IT budget.

Responsibility for security is still uncertain across organizations. Twenty-one percent of respondents agree the security leader (CISO) should be responsible for IT security objectives, while 19 percent of respondents believe the CIO/CTO and 17 percent of respondents think the business unit leader should be responsible. The lack of consensus shows that ownership of security remains unsettled.

To read the rest of this study, visit Intel’s website at https://newsroom.intel.com/wp-content/uploads/sites/11/2021/03/2021-intel-poneman-study.pdf (PDF)

The ‘de-platforming’ of Donald Trump and the future of the Internet

Bob Sullivan

“De-platforming.”

That’s the word that stormed tech-land earlier this year, and it’s about time. After the storming of the U.S. Capitol by a criminal mob in January, a host of companies cut off then-President Trump’s digital air supply. His Tweets and Facebook posts fell first; next came second-order effects, like payment processors cutting off the money supply. Finally, upstart right-wing social media platform Parler was removed from app stores and then denied Internet service by Amazon.

Let the grand argument of our time begin.

The story of Donald Trump’s de-platforming involves a dramatic collision of Net Neutrality, and Section 230 “immunity,” and free speech, and surveillance capitalism, even privacy. I think it’s the issue of our time. It deserves much more than a story; it deserves an entire school of thought. But I’ll try to nudge the ball forward here.

This is not a political column, though I realize that everything is political right now. In this piece, I will not examine the merits of banning Trump from Twitter; you can read about that elsewhere.

The deplatforming of Trump is an important moment in the history of the Internet, and it should be examined as such. Yes, it’s very fair to ask: if this can happen to Trump, and if it can happen to Parler, can’t it happen to anyone?

But let’s examine it the way teen-agers do in their first year of college. Let’s not scream “free speech” or “slippery slope” at each other and then pop open a can of beer, assuming that that’s some kind of mic drop. Kids do that. Adults live on planet Earth, where decisions are complex, and evolve, and have real-life consequences.

I’ll start here. You can sell guns and beer in most places in America. You can’t sell a gun to someone who walks into your store screaming, “I’m going to kill someone,” and you can’t sell beer to someone who’s obviously drunk and getting into the driver’s seat. You can’t keep selling a car — or even a toaster! — that you know has a defect which causes fires. Technology companies are under no obligation to allow users to abuse others with the tools they build. Cutting them off is not censorship. In fact, forcing these firms to allow such abuse because someone belongs to a political party IS censorship, the very thing that happens in places like China. Tech firms are, and should be, free to make their own decisions about how their tools are used. (With…some exceptions! This is the real world.)

I’ll admit freely: This analogy is flawed. When it comes to technology law — and just technology choices — everyone reaches for analogies, because we want so much for there to be a precedent for our decisions. That takes the pressure off us. We want to say, “This isn’t my logic. It’s Thomas Jefferson’s logic! He’s the reason Facebook must permit lies about the election to be published in my news feed!” Sorry, life isn’t like that. We’re adults. We have to make these choices. They will be hard. They’re going to involve a version 1, and a version 2 and a version 3, and so on. The technology landscape is full of unintended, unexpected consequences, and we must rise to that challenge. We can’t cede our agency in an effort to find silver bullet solutions from the past. We have to make them up as we go along.

That’s why the best thing I read in the past few months about the Trump deplatforming was this piece by Techdirt’s Mike Masnick. He raises an issue that many tech folks want to avoid: Twitter and Facebook have tried really hard to explain their choices by shoehorning them into standing policy violations. That has left everyone unhappy. (Why didn’t they do this months ago? Wait, what policy was broken?) Masnick gets to the heart of the matter quickly.

So, Oremus is mostly correct that they’re making the rules up as they go along, but the problem with this framing is that it assumes that there are some magical rules you can put in place and then objectively apply them always. That’s never ever been the case. The problem with so much of the content moderation debate is that all sides assume these things. They assume that it’s easy to set up rules and easy to enforce them. Neither is true. Radiolab did a great episode a few years ago, detailing the process by which Facebook made and changed its rules. And it highlights some really important things including that almost every case is different, that it’s tough to apply rules to every case, and that context is always changing. And that also means the rules must always keep changing.

A few years back, we took a room full of content moderation experts and asked them to make content moderation decisions on eight cases — none of which I’d argue are anywhere near as difficult as deciding what to do with the President of the United States. And we couldn’t get these experts to agree on anything. On every case, we had at least one person choose each of the four options we gave them, and to defend that position. The platforms have rules because it gives them a framework to think about things, and those rules are useful in identifying both principles for moderation and some bright lines.

But every case is different.

For a long time, I have argued that tech firms’ main failing is they don’t spend anywhere near enough money on safety and security. They have, nearly literally, gotten away with murder for years while the tools they have made cause incalculable harm in the world. Depression, child abuse, illicit drug sales, societal breakdowns, income inequality…tech has driven all these things. The standard “it’s not the tech, it’s the people” argument is another “pop open a beer” one-liner used by techie libertarians who want to count their money without feeling guilty, but we know it’s a dangerous rationalization. Would so many people believe the Earth is flat without YouTube’s recommendation engine? No. Would America be so violently divided without social media? No. You built it, you have to fix it.

If a local mall had a problem with teen-age gangs hanging around at night causing mayhem, the mall would hire more security guards. That’s the cost of doing business. For years, big tech platforms have tried to get away with “community moderation” — i.e., they’ve been cheap. They haven’t spent anywhere near enough money to stop the crime committed on their platforms. Why? Because I think it’s quite possible the entire idea of Facebook wouldn’t exist if it had to be safe for users. Safety doesn’t scale. Safety is expensive. It’s not sexy to investors.

How did we get here? In part, thanks to that Section 230 you are hearing so much about. You’ll hear it explained this way: Section 230 gives tech firms immunity from bad things that happen on their platforms. Suddenly, folks who’ve not cared much about it for decades are yelling for its repeal. As many have expressed, it would be better to read up on Section 230 before yelling about it (here’s my background piece on it). But in short, repealing it wholesale would indeed threaten the entire functioning of the digital economy. Section 230 was designed to do exactly what I am hinting at here: to give tech firms the ability to remove illegal or offensive content without assuming business-crushing liability for everything they do. Again, it’s a law written in an age before today’s platform giants existed, so it sure could use modernization by adults. But pointing to it as some untouchable foundational document, or throwing out the baby with the bathwater, are both the behavior of children, not adults. We’re going to have to make some things up as we go along.

Here’s the thing about “free speech” on platforms like Twitter and Facebook. As a private company, Twitter can do whatever it wants to do with the tool it makes. Forcing it to carry this or that post is a terrible idea. President Trump can stand in front of a TV camera, or buy his own TV station (as seems likely), and say whatever he wants. Suspending his account is not censorship. As I explain in a recent Section 230 post, however, even that line of logic is incomplete. Social media firms use public resources, and some are so dominant that they are akin to a public square. We just might need a new rule for this problem! I suspect the right rule isn’t telling Twitter what to post, but perhaps making sure there is more diversity in technology tools available to speakers.

But here’s the thing I find myself saying most often right now: The First Amendment guarantees your right to say (most) things; it doesn’t guarantee your right to algorithmic juice. I believe this is the main point of confusion we face right now, and one that really sends a lot of First Amendment thinking for a loop. Information finds you now. That information has been hand-picked to titillate you like crack cocaine, or like sex. The more extreme things people say, the more likely they are to land on your Facebook wall, or in your Twitter feed, or wherever you live. It’s one thing to give Harold the freedom to yell to his buddies at the bar about children locked in a basement by Democrats at a pizza place in Washington D.C. It’s quite another to give him priority access to millions of people using a tool designed to make them feel like they are viewing porn. That’s what some people are calling free speech right now. James Madison didn’t include a guaranteed right to “virality” in the Bill of Rights. No one guaranteed that Thomas Paine’s pamphlets were to be shoved under everyone’s doors, let alone right in front of their eyeballs at the very moment they were most likely to take up arms. We’re going to need new ways to think about this. In the algorithmic world, the beer-popping line, “The solution to hate speech is more speech,” just doesn’t cut it.

I’m less interested in the Trump ban than I am in the ban of services like Parler. Trump will have no trouble speaking; but what of everyday conservatives who are now wondering if tech is out to get them? If you are liberal: Imagine if some future government decides Twitter is a den of illegal activity and shuts it down. That’s an uncomfortable intellectual exercise, and one we shouldn’t dismiss out of hand.

Parler is a scary place. Before anyone has anything to say about its right to exist, I think you really should spend some time there, or at least read my story about the lawsuit filed by Christopher Krebs. Ask yourself this question: What percentage of a platform’s content needs to be death threats, or about organizing violence, before you’ll stop defending its right to exist? Let’s say we heard about a tool named “Anteroom” which ISIS cells used to radicalize young Muslims, spew horrible things about Americans, teach bomb-making, and organize efforts to…storm the Capitol building in D.C. Would you really be so upset if Apple decided not to host Anteroom in its App Store?

So, what do we do with Parler? Despite all this, I’m still uncomfortable with removing its access to network resources the way Amazon has. I think that feels more like smashing a printing press than forcing books into people’s living rooms. Amazon Web Services is much more like a public utility than Twitter is. The standard for removal there should be much higher, I think. And if it makes you uncomfortable that a private company made that decision, rather than some public body that is responsible to voting citizens, it should. At the same time, if you can think of no occasion for banning a service like Parler, then you just aren’t thinking.

These are complex issues and we will need our best, most creative legal minds to ponder them in the years to come. Here’s one: Matt Stoller, in his BIG newsletter about tech monopolies, offers a thoughtful framework for dealing with this problem. I’d like to hear yours. But don’t hit me with beer-popping lines or stuff you remember from Philosophy 101. This is no time for academic smugness. We have real problems down here on planet Earth.

Should conservative social network Parler be removed from AWS and the Google and Apple app stores? This is an interesting question, because Parler is where some of the organizing for the Capitol Hill riot took place. Amazon just removed Parler from its cloud infrastructure, and Google and Apple removed Parler from their app stores. Removing the product may seem necessary to save lives, but having these tech oligarchs do it seems like a dangerous overreach of private authority. So what to do?

My view is that what Parler is doing should be illegal, because it should be responsible on product liability terms for the known outcomes of its product, aka violence. This is exactly what I wrote when I discussed the problem of platforms like Grindr and Facebook fostering harm to users. But what Parler is doing is *not* illegal, because Section 230 means it has no obligation for what its product does. So we’re dealing with a legal product and there’s no legitimate grounds to remove it from key public infrastructure. Similarly, what these platforms did in removing Parler should be illegal, because they should have a public obligation to carry all customers engaging in legal activity on equal terms. But it’s not illegal, because there is no such obligation. These are private entities operating public rights of way, but they are not regulated as such.

In other words, we have what should be an illegal product barred in a way that should also be illegal, a sort of ‘two wrongs make a right’ situation. I say ‘sort of’ because letting this situation fester without righting our public policy will lead to authoritarianism and censorship. What we should have is a legal obligation for these platforms to carry all legal content, and a legal framework that makes business models fostering violence illegal. That way, we can make public choices about public problems, and political violence organized on public rights-of-way certainly is a public problem.