
Survey: Managing third-party permissions is ‘overwhelming’

The purpose of this research is to understand organizations’ approach to managing third-party remote access risk and to provide guidance, based on the findings, on how to prepare for the future. A significant problem that needs to be addressed is third parties accessing an organization’s networks and, as a result, exposing it to security and non-compliance risks.

Sponsored by SecureLink, Ponemon Institute surveyed 627 individuals who have some level of involvement in their organization’s approach to managing remote third-party data risks. They were also instructed to focus their responses on only those outsourcing relationships that require the sharing of sensitive or confidential information and involve processes or activities that require providing access to such information.

Organizations are experiencing data breaches caused by giving third parties too much privileged access. Only 36 percent of respondents are confident that third parties would notify their organization of a data breach involving its sensitive and confidential information. More than half (51 percent) of respondents say their organizations have experienced a data breach, caused directly or indirectly by a third party, that resulted in the misuse of sensitive or confidential information. Given that lack of confidence in breach notification, the true number of third-party breaches could be even higher.

In the past 12 months, 44 percent of respondents say their organizations experienced a data breach caused, directly or indirectly, by a third party. Of these, 74 percent say it was the result of giving too much privileged access to third parties.

The following findings reveal the risks created by third-party remote access.

Organizations are at risk of non-compliance with regulations because third parties are not aware of their industry’s data breach reporting regulations. Less than half of respondents (48 percent) say their third parties are aware of their industry’s data breach reporting regulations, and only 44 percent rate their third parties as very effective in achieving compliance with the security and privacy regulations that affect their organization.

Managing remote access to the network is overwhelming. Seventy-three percent of respondents say managing third-party permissions and remote access to their networks is overwhelming and a drain on their internal resources. As a consequence, 63 percent of respondents say remote access is becoming their organization’s weakest attack surface.

It is understandable that 69 percent of respondents say cybersecurity incidents and data breaches involving third parties are increasing: only 40 percent of respondents say their organizations are able to provide third parties with just enough access to perform their designated responsibilities and nothing more. Further, only 37 percent of respondents say their organizations have visibility into the level of access and permissions of both internal and external users.

Many organizations do not know all the third parties with access to their networks. Only 46 percent of respondents say their organizations have a comprehensive inventory of all third parties with access to their network. The average number of third parties in this inventory is 2,368.

Fifty-four percent of respondents say they don’t have such an inventory (50 percent) or are unsure (4 percent). The most common reasons are the lack of centralized control over third parties (59 percent of respondents) and the complexity of third-party relationships (47 percent of respondents).

Organizations are not taking the necessary steps to reduce third-party remote access risk. Instead of taking steps to stop third-party data breaches and cybersecurity attacks, organizations are mostly focused on collecting relevant and up-to-date contact information for each vendor.

Far fewer are collecting and documenting information about third-party network access (39 percent of respondents), confirmation that security practices are in place (36 percent of respondents), identification of the third parties that hold the most sensitive data (35 percent of respondents), confirmation that basic security protocols are in place (32 percent of respondents) and past and/or current known vulnerabilities in hardware or software.

Most organizations are not evaluating the security and privacy practices of all third parties before they are engaged. Less than half of respondents (49 percent) say their organizations are assessing the security and privacy practices of all third parties before allowing them to have access to sensitive and confidential information.

Of these respondents, 59 percent say their organizations rely on signed contracts that legally obligate the third party to adhere to security and privacy practices. Fifty-one percent say they obtain evidence of security certifications such as ISO 27001/27002 or SOC. Only 39 percent say their organizations conduct an assessment of the third party’s security and privacy practices.

Reliance on reputation is why the majority of organizations are not evaluating the privacy and security practices of third parties. Fifty-one percent of respondents say their organizations are not evaluating third parties’ privacy and security practices; the main reasons are reliance on the third party’s reputation (63 percent of respondents) and on data protection regulations (60 percent of respondents). However, as discussed previously, only 48 percent of respondents say their third parties are aware of their industry’s data breach reporting regulations, and less than half of respondents (48 percent) have confidence in the third party’s ability to secure information.

Organizations are in the dark about third-party risk because most third parties are not required to complete security questionnaires. On average, only 35 percent of third parties are required to fill out security questionnaires, and only 26 percent are required to undergo remote or on-site assessments.

If organizations monitor third-party security and privacy practices, they mostly rely upon legal or procurement review. Only 46 percent of respondents say their organizations monitor, on an ongoing basis, the security and privacy practices of the third parties they share sensitive or confidential information with. Fifty percent of respondents say the legal or procurement functions conduct the monitoring. Only 41 percent of respondents say they use automated monitoring tools.

Again, reliance on contracts explains why 54 percent of respondents say their organizations are not monitoring third parties’ security and privacy practices. Sixty-one percent of respondents say their organizations do not feel the need to monitor because of contracts, and another 61 percent say they rely upon the business reputation of the third party.

Third-party risk is not defined or ranked in most organizations. Only 39 percent of respondents say their third-party management program defines and ranks level of risk. The top three indicators of risk are poorly written security and privacy policies and procedures, lack of screening or background checks for third-party key personnel, and a history of frequent data breach incidents.

Organizations are ineffective at preventing third parties from sharing credentials in the form of usernames and passwords. Respondents were asked to rate their organizations’ effectiveness in knowing all third-party concurrent users, controlling third-party access to their networks, and preventing third parties from sharing credentials, on a scale from 1 = not effective to 10 = highly effective. Only 41 percent of respondents say their organizations are very effective in controlling third-party access to their networks and preventing credential sharing.

Click here to access the full report at SecureLink’s website.


‘Please turn off your surveillance gadgets before dinner’

Bob Sullivan

What do you do if you think your friend is bugging you? I don’t mean bothering you. I mean…bugging you…using a device to listen to you, maybe even recording your conversations, when you visit their home. Well, that’s the world most of us live in now. Personal assistants, many modern TVs, smart doorbells…they all incorporate listening devices. What if you don’t want to be surveilled like that? Should you ask your friends to turn off their Alexa when you walk in the door? Should they offer?

Tech etiquette sounds like something we should be asking Ann Landers about, but in reality, tech etiquette has a very, very serious side. When somebody visits a friend’s home right now, it’s quite likely the home has some kind of electronic device that could be listening: smart doorbells, smart televisions, personal assistants. What are the rules around these things, social and legal, and what can be done about them?

For my latest episode of Debugger, I talked with Jolynn Dellinger, a privacy law professor at Duke University, where she is also a senior fellow at the Kenan Institute for Ethics. (Kenan and Duke’s Sanford School of Public Policy support the podcast.)

You can listen by clicking here, or clicking the play button below if it appears in your browser. A short transcript follows.

[00:02:05] Jolynn: I’ve just been thinking about it more recently with the proliferation of personal assistants like Alexa and Google Assistant. But I had a very interesting conversation on Twitter the other day with a bunch of random people that I don’t know. Uh, and it seemed to spark a lot of interest, along with a lot of strong opinions, about … what is the etiquette? And my question was: when you go to somebody’s house, do you need to ask them whether they have always-on listening devices? Or should a homeowner, when you have people over as guests, let those people know, “I have listening devices on”? Is that okay? And this seems like a weird question because I don’t think anyone is engaging in this kind of etiquette right now.

And as a couple of people pointed out on the Twitter conversation, you know, this is just as important to ask with phones. Because folks who have their personal assistants activated on their phones, which I don’t, but some people do, whether it’s Siri or, or whatever on an Android, it presents the same issue of being always on.

And I think that raises some questions we should be talking about in terms of what’s appropriate in terms of potentially recording other people without their permission or consent

[00:03:27] Bob: Something I can’t even imagine right now. Welcome. Let me take your coat. Would you like a glass of wine, and oh, are you okay if I record this entire evening?

[00:03:36] Jolynn: Yes, exactly. Exactly. They’re not recording the entire evening and I don’t want to over-represent, but they are always listening and able to record. Of course.

But I think back in the day, well, at least when I was in college, it was almost people treated it almost like an imposition if you said you were a vegetarian. Right? Like they invite you to dinner and you’re like, okay, well I’m a vegetarian. Oh really? And so I think nowadays, when people invite someone over for dinner, they almost always say, are there any dietary restrictions? And I don’t know whether that’s because becoming vegetarian or vegan is more mainstream or there is a proliferation of allergies. You want to make sure somebody doesn’t have a nut allergy or, or something like that. So it’s gained acceptance over time and I’m kind of wondering if we should be heading in that direction with, surveillance technology.

[00:04:36] Bob: I wonder, mechanically, how it would even be accomplished. So let’s say someone says, I’m just really morally opposed to having listening devices around me activated. How would someone turn off all their smart devices in one swoop like that? Is that even possible?

[00:04:53] Jolynn: I was asking that on Twitter. I mean, there are certainly people far more technologically proficient than myself and many pointed that out. Somebody said, well, I have 20 of those devices around my house. It would take me half an hour to go around and dismantle everything. And another person made a really good point as well. This person had some physical challenges, um, and was using listening devices in a way that enabled him or her to, to live a safe life. So that turning off those devices would actually pose a serious problem for that person. And I think, you know, we always need to be aware of that and, and I would never ask somebody to turn something off that they were using in that way.

I think the more common situation though, is folks using these for convenience or fun. And so they’re cooking and they can say, Alexa, play my blah-blah-blah playlist. And in that circumstance, I think that, um, my preference or anyone’s preference to not have the device on, uh, should supersede. However, do you just unplug it or do you put it in the microwave or the refrigerator? I mean, these were all things that were discussed on this Twitter thread, which was hilarious. Some people said it wouldn’t be that hard to disconnect.

[00:06:08] Bob: I was surprised to see several people …seemed to say, if you have a problem with my Siri or my Alexa, then I’m, I’m uninviting you. You’re not allowed in my house anymore. That was a shocking reaction to me.

[00:06:21] Jolynn: Right. I thought so too. And that’s what I was seeing. You know, if you invited your friend for dinner, I mean, we need to keep this focused on the fact that these are usually our friends that we’re inviting over and have you invited somebody for dinner? And they said, I’m a vegetarian. You likely would not say, well, tough. I’m having steak, right? I mean, that’s probably not what you would say. I think there are many, many technologies that we use. And part of the reason I pose this question is every time that we make personal decisions to use emerging technologies, surveillance technologies, and others, it may be something that affects only ourselves and our families, or maybe something that affects others.

And this even goes for a direct-to-consumer DNA test, right? That’s not protected. And your DNA gives away information about other people besides just you. And is that something where you should be asking your family before you send spit in a cup to 23andMe? Um, other things around the house besides Alexa include Nest cameras.

You know, some folks have those around their homes so they can see into their children’s rooms when they’re out for the evening. Well, if I send my child over to play with that child, is my child then being observed by those cameras? Internet of Things Barbie dolls that talk to children, or other toys. Ring cameras that may be doing audio and video recording of conversations that take place outside people’s apartments or homes.

I think there’s a lot of these questions that we should be talking about

[00:07:54] Bob: So I had a Ring experience. I sold a house not that long ago, and I put up a Ring camera because I was going to be traveling during some of the process. And I put it up with the default settings, and right away a realtor and prospective buyer showed up at the house and I could hear them talking about my house. And I was horrified by this.

Of course, there was a piece of me that was tempted to listen in closely and say, oh, perhaps I should change the paint on that wall, everyone hates the color in the living room or whatnot. Right? But that would give me an unfair advantage potentially. And more than an unfair advantage. When I dug through the paperwork, uh, then the specifications on the Ring camera, it suggested to me that it might be an illegal wiretap to be recording people’s audio conversations without their knowledge. Aren’t a lot of these things potentially?

[00:08:44] Jolynn: I’m not really sure if there’s a lot of case law about that yet. I did see a case in New Hampshire where a judge said it was not a violation. And I think they might’ve had a two party consent rule there, where the judge said it was not a problem. I think it should be a problem.

Uh, I think that the basis of the decision there was, well, the conversation was taking place in a public space where you should expect … other people may be able to hear you. But I think when you’re standing on somebody’s porch, you don’t necessarily expect that. I think if you’re in an apartment building… say, and you live across the hall from someone who may have installed a Ring doorbell, you may know, you may not know if you come in with a friend and you’re unlocking your door and no one else is in the hallway.

[00:09:29] I think you naturally would expect privacy around that conversation. So you can disable the audio on Ring doorbells. And I think Amazon suggests that people may want to consider doing that if they feel the need, but I don’t think Amazon is providing legal advice.

[00:09:45] Bob: Yeah. And I disabled my audio immediately out of fear of just that. And it also, it seemed just wildly unfair, but that suggests that there’s perhaps more serious implications to some of this. And you raised one of the most serious of all, what might be the national security implications of all of these unintended overheard conversations.

[00:10:07] Jolynn: Well, that question didn’t get as much uptake on Twitter, but I’m very interested about it. I think there was a big deal about whether or not there could be a Peloton in the White House because of the various things … it can hear with, microphones or cameras. And I just wondered about all the people working from home who are dealing with classified or secure materials, confidential materials.

If they’re sitting in their office on a Zoom call, what devices do they have around them that may be picking things up? And should that not be permissible?

[00:10:44] Bob: It’s not hard to imagine a foreign adversary figuring out how to wake up all of these listening devices without people’s knowledge and using it as a sort of extensive spy network.

[00:10:55] Jolynn: It seems possible. I just think as we get more and more of these ubiquitous technologies, we should all at least be willing to discuss and have a conversation about what’s the right thing to do here for our neighbors, friends, and family.

[00:11:13] Bob: Have you ever walked into a house and said to someone, is anything listening to me?

[00:11:17] Jolynn: Yes, I have. I have.

[00:11:19] Bob: I was hoping you would say that. And what was the reaction?

[00:11:23] Jolynn: These are friends of mine, so they were nice about it. And one time I said the refrigerator might be a good place for that model.

[00:11:34] Bob: You can always put your gadgets in the fridge if you’re worried about them. Professor Jolynn Dellinger from Duke University, thank you very much for your time.


The Fourth Annual Global Study on Exchanging Cyber Threat Intelligence: There Has to Be a Better Way

Since Ponemon Institute conducted the first study on threat intelligence sharing in 2014, organizations that use and exchange threat intelligence have been improving their security posture and their ability to prevent and mitigate the consequences of a cyberattack. As revealed in The Fourth Annual Study on Exchanging Cyber Threat Intelligence: There Has to Be a Better Way, some 74 percent of respondents that experienced a cyberattack believe that the availability of timely and accurate threat intelligence could have prevented or mitigated its consequences.

According to the 1,432 IT and IT security practitioners surveyed in North America, EMEA and Asia Pacific, the consumption and exchange of threat intelligence continues to increase, as shown in Figure 1. Despite this increase, more work needs to be done to improve the timeliness and actionability of the threat intelligence.

Following are 11 trends that describe the current state of threat intelligence sharing.

1. Satisfaction with the ability to obtain threat intelligence decreases slightly. This year, 40 percent of respondents say they are very satisfied or satisfied with the way their organizations obtain threat intelligence. This is a slight decrease from 41 percent of respondents in 2017. To increase satisfaction, threat intelligence needs to be more timely, less complex and more actionable.

2. Organizations do not have confidence in free sources of threat intelligence. They pay for threat intelligence because it has proven effective in stopping security incidents and because they lack confidence in free sources of intelligence.

3. On a positive note, the accuracy of threat intelligence is increasing. However, the majority of organizations believe the timeliness and the actionability of threat intelligence is low.

4. The two main metrics used to gauge the value of threat intelligence are the ability to prioritize it and its timely delivery. Other metrics are the ability to implement the threat intelligence and the number of false positives.

5. When it comes to measuring the ROI of their threat intelligence, 39 percent of respondents say their organizations calculate the ROI. The top ROI metrics organizations look at include the following factors: reduction in the dwell time of a breach, reduction in the number of successful breaches and faster, more effective incident response.

6. Timeliness of threat intelligence is critical but not achieved. Only 11 percent of respondents say threat intelligence is provided in real time, and only 13 percent say it is provided hourly.

7. Threat indicators provide valuable intelligence. Eighty-five percent of respondents say they use threat indicators. The most valuable types of indicators are malicious IP addresses and indicators of malicious URLs.

8. Most organizations either currently consolidate, or plan to consolidate, threat intelligence data from multiple solutions. However, 53 percent of respondents say their organizations rely mainly on manual processes to accomplish the consolidation.

9. With regard to how threat intelligence is used throughout the network, the majority of organizations are using it in intrusion detection systems (IDS). Unified Threat Management (UTM) is usually a single security appliance that provides multiple security functions at a single point on the network. The use of UTMs has increased significantly since 2017.

10. Internal silos prevent more effective collaboration and the exchange of threat intelligence with other organizations. Only 40 percent of respondents say the collaboration between their organization and other companies in the exchange of threat intelligence is either very effective or effective.

11. The use of automated processes to investigate threats is gaining traction. Fifty-four percent of respondents, an increase from 47 percent of respondents, are using automated processes to investigate threats. There also has been a significant increase in the use of machine learning and AI since 2017.
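Trend 7 above notes that malicious IP addresses and URLs are the most valuable threat indicators. As a minimal sketch of what consuming such indicators looks like in practice, the following Python snippet matches observed network events against a hypothetical indicator feed (the IPs, URL markers, and event fields are all illustrative, not from any real feed):

```python
# Hypothetical indicator feed: known-malicious IPs and URL substrings.
# Real feeds (STIX/TAXII, commercial APIs) carry much richer context,
# but matching ultimately reduces to lookups like these.
malicious_ips = {"203.0.113.7", "198.51.100.23"}
malicious_url_markers = ["evil.example", "login-verify"]

def match_indicators(events):
    """Return the events that hit a known indicator (IP or URL substring)."""
    hits = []
    for event in events:
        if event.get("ip") in malicious_ips:
            hits.append(event)
        elif any(marker in event.get("url", "") for marker in malicious_url_markers):
            hits.append(event)
    return hits

observed = [
    {"ip": "192.0.2.10",  "url": "https://example.com/"},          # benign
    {"ip": "203.0.113.7", "url": "https://example.com/"},          # bad IP
    {"ip": "192.0.2.11",  "url": "https://evil.example/payload"},  # bad URL
]
print(len(match_indicators(observed)))  # 2 of the 3 events match
```

The consolidation problem from trend 8 is visible even in this toy version: every additional feed means another set like `malicious_ips` to merge and deduplicate, which is why organizations relying on manual processes struggle to keep up.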

To read the full report visit the Infoblox website.

Plane crashes are investigated. Computer crashes should be, too

Bob Sullivan

When a plane crashes, a government agency rushes to the scene looking for answers…and lessons that might prevent the next plane crash. When computers crash — and the economy crashes, as we’ve seen this week — there is no such fact-finding mission. There should be. And now, perhaps, there will be.

The National Transportation Safety Board, while imperfect, has a remarkable track record of getting to the bottom of transportation disasters. Air travel is remarkably safe, in no small part because of all the public hearings and final reports issued by the NTSB through the years. Yes, wounds are exposed and companies take it on the chin after a crash. That’s the price of learning. Lives are at stake.

Cybersecurity could benefit dramatically from this kind of soul-searching after major attacks.

This week’s Colonial Pipeline ransomware incident and resulting run on gas stations is just the latest incident that screams for some kind of independent agency devoted to this kind of soul searching. And I do mean “just the latest.” A quick trip down memory lane had me re-reading essay after essay calling for a “Computer Network Safety Board” or a “National Cybersecurity Safety Board.” This 2016 report that was part of a NIST Commission cites a 1990(!) publication named Computers at Risk: Safe Computing in the Information Age which called for creation of an incident database, saying “one possible model for data collection is the incident reporting system administered by the National Transportation Safety Board.”

So this is an idea whose time has come, and perhaps now it finally will. In the wake of the pipeline ransomware incident, President Biden issued an executive order this week addressing cybersecurity. These things can seem like pageantry, but they don’t have to be. The list of actions in the order is non-controversial and has been in the works for a while: raising government security standards, stronger supply chain/vendor oversight, and improved information sharing. But to me, this is the most critical part of the order:

Establish a Cybersecurity Safety Review Board. The Executive Order establishes a Cybersecurity Safety Review Board, co-chaired by government and private sector leads, that may convene following a significant cyber incident to analyze what happened and make concrete recommendations for improving cybersecurity. Too often organizations repeat the mistakes of the past and do not learn lessons from significant cyber incidents. When something goes wrong, the Administration and private sector need to ask the hard questions and make the necessary improvements. This board is modeled after the National Transportation Safety Board, which is used after airplane crashes and other incidents.


This…CSRB?…faces a lot of obstacles. Paul Rosenzweig, one of the essayists who has called for such a thing in the past, laid these obstacles out well in his 2018 paper for R Street. There’s (usually) no wreckage after a computer crash, so investigations will be much harder. There are tens of thousands of important computer hacks every year. Can’t study them all. How will the CSRB pick which ones to examine? Victim companies are notoriously hesitant to share details after an attack, fearing those details will end up in a lawsuit. Sometimes…often…the investigation will be inconclusive. And finally: the “flaw” found by such an investigation will often be a person, not software or hardware.


I’ve been to 100 conferences where security professionals spend a week talking about fancy new software and then at a closing address, someone ends by saying, “It all comes down to the human element.” I suspect a CSRB will find *many* incidents come down to a mistake made by a person. That’s a good start. Of course, no one person can really screw up something like this. That person is part of a team. S/he is nearly always overworked, part of a flawed system, walking a tightrope without a net, and acting on the wrong incentives. These are the kinds of real problems that can finally be exposed by CSRB reports.

Having covered this industry for 25 years, I am suspicious of the idea that many investigations will be inconclusive. Yes, there are occasional zero-day hacks and nation-state-sponsored attacks that might elude investigators. But many, many hacks fall into the Equifax camp: they involve a cascade of errors that should have been caught, like a horror movie where the protagonists miss a dozen or more chances to avert the disaster.

Every one of those movies should be made, and studied, by the CSRB.

Perhaps one conclusion might be limitations on workload, the kind that now protect truck drivers, train engineers and pilots. Perhaps other innovative recommendations will arise from shining such a public light on hacking incidents. Perhaps there will be so many that we’ll move past shaming cybersecurity workers to solving the real problem. If we don’t, we’re going to see a lot more gas lines that result from malicious computer code.

Do your computers have ID? The state of machine identity management

Ponemon Institute and Keyfactor kicked off the first-ever State of Machine Identity Management Report with one purpose: Drive industry awareness around the importance of managing and protecting machine identities, such as keys, certificates, and other secrets, in digital business.

For the 2021 State of Machine Identity Management Report, Ponemon Institute surveyed 1,162 respondents across North America and EMEA who work in IT, information security, infrastructure, development, and other related areas.

We hope that IT and security leaders can use this research to drive forward the need for an enterprise-wide machine identity management strategy. No matter where you are in the business – IT, security, or development – and no matter the size of your company, this report offers important insights into why machine identities matter.

In recent years, we’ve witnessed the rapid growth of internet-connected devices and machines in the enterprise. From IoT and mobile devices to software-defined applications, cloud instances, containers, and even the code running within them, machines already far outnumber humans.

Much like the human identities we rely on to access apps and devices we use every day (e.g., passwords, multi-factor, etc.), machines require a set of credentials to authenticate and securely connect with other devices and apps on the network. Despite their critical importance, these “machine identities” are often left unmanaged and unprotected.

In the 2020 Hype Cycle for Identity and Access Management Technologies, Gartner introduced a new category: Machine Identity Management. The addition reflects the increasing importance of managing cryptographic keys, X.509 certificates, SSH keys, and other non-human identities.

Machine identities have undoubtedly become a critical piece in enterprise IAM strategy, and awareness has reached even the highest levels of the organization. Sixty-one percent of respondents say they are either familiar or very familiar with the term machine identity management.

“Machine identities, such as keys, certificates and secrets, are essential to securing connections between thousands of servers, cloud workloads, IoT and mobile devices,” said Chris Hickman, chief security officer at Keyfactor. “Yet the survey highlights a concerning and significant gap between plan and action when it comes to machine identity management strategy. Acknowledgment is a step in the right direction, but a lack of time, skilled resources and attention paid to managing machine identities make organizations vulnerable to highly disruptive security risks and service outages.”

In this section, we highlight key findings based on Keyfactor’s analysis of the research data compiled by Ponemon Institute. For more in-depth analysis, see the complete findings.

Strategies for crypto and machine identity management are a work in progress.

Despite growing awareness of machine identity management, the majority of survey respondents said their organization either does not have a strategy for managing cryptography and machine identities (18 percent of respondents), or they have a limited strategy that is applied only to certain applications or use cases (42 percent of respondents).

The top challenges that stand in the way of setting an enterprise-wide strategy are too much change and uncertainty (40 percent of respondents) and lack of skilled personnel (40 percent of respondents).

Shorter certificate lifespans, key misconfiguration, and limited visibility are top concerns.

Challenges in managing machine identities include the increased workload and risk of outages caused by shorter SSL/TLS certificate lifespans (59 percent of respondents), misconfiguration of keys and certificates (55 percent of respondents), and not knowing exactly how many keys and certificates the organization has (53 percent of respondents).

A significant driver of these challenges is the recent reduction in the lifespan of all publicly trusted SSL/TLS certificates by roughly half, from 27 months to 13 months, on September 1, 2020. It is worth noting that the real impact of this change will likely not be felt until months and years from now.
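Shorter lifespans turn expiry tracking into a routine operational chore. As an illustration only (not part of the survey), the sketch below uses Python's standard library to compute how many days remain before a certificate expires; the function names and the hostname handling are my own assumptions.

```python
# Minimal sketch: track TLS certificate expiry with the Python stdlib only.
import socket
import ssl
from datetime import datetime, timezone


def days_remaining(not_after, now=None):
    """Days until a certificate's 'notAfter' timestamp.

    `not_after` uses the OpenSSL text form returned by getpeercert(),
    e.g. 'Sep  1 12:00:00 2021 GMT'.
    """
    expiry = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(not_after), tz=timezone.utc
    )
    now = now or datetime.now(timezone.utc)
    return (expiry - now).days


def fetch_not_after(hostname, port=443, timeout=10):
    """Connect to a live server and return its leaf certificate's notAfter."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()["notAfter"]
```

The `days_remaining` helper is pure, so a renewal dashboard or cron job can exercise it without network access; `fetch_not_after` shows one way to pull the field from a live endpoint.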

Crypto-agility emerged as a top strategic priority.

Moving into the top position on the list, more than half of respondents (51 percent) identified crypto-agility as a strategic priority for digital security, followed by reducing complexity of IT infrastructure and investing in hiring and retaining qualified personnel (both 50 percent of respondents).

Cloud and Zero-Trust strategies are driving the deployment of PKI and machine identities.

While many trends are driving the deployment of PKI, keys, and certificates, the two most important trends are cloud-based services (52 percent of respondents), and Zero-Trust security strategy (50 percent of respondents). Other notable trends include the remote workforce and IoT devices (both 43 percent of respondents).

SSL/TLS certificates take priority, but every machine identity is critical.

Overall, respondents agree that managing and protecting every machine identity is critical. That said, SSL/TLS certificates were widely considered the most important machine identities to manage and protect, according to 82 percent of respondents.

To see the report’s full findings, visit Keyfactor’s website.


What’s the original sin of the Internet? A new podcast

Bob Sullivan

Is there an Original Sin of the Internet? Join me on a journey to find out.

Today I’m sharing a passion project of mine that’s been years in the making. I’m lucky. I’m getting old. Much better than the alternative! My career has spanned such a fascinating time in the history of technology. I learned to *literally* cut and paste at my first newspaper. Now, most of the world is run by computer code that’s been virtually cut and pasted. Often, carelessly cut and pasted. Look around, and it’s fair to ask: Has all this technology really made our lives better? My answer is yes, but by a margin so slim that objectors might call for a recount.

Whatever your answer, there is no denying that tech has landed us in a lot of trouble, and the techlash is real. And for those of us who thought the Internet might end up as humanity’s greatest invention, this time is depressing. One of my guests — a real Internet founder — thinks perhaps he should have done something else with his life.

Debugger, launching today, is a podcast, but I think of it more as an audio documentary. There are no sound bites. I let my guests talk and try to stay out of the way. So you can make up your own mind. Thanks to the great people at Duke’s Sanford School of Public Policy and the Kenan Institute for Ethics, I have access to amazing people who were there at the dawn of the Internet Age. I hope you’ll listen, but if you’d rather read, I’ll spend this week sending out edited transcripts from each guest.

First up: Richard Purcell, one of the first privacy executives. From him, you’ll learn as much about working on the railroad as you will about the abuse of power through privacy invasions. But before that, I try to explain what I mean by “original sin” in the introduction, and why that matters.

Future Debugger episodes will deal with similar foundational questions about technology and its role in democratic society. Why do 1,000 companies know every time I visit one web page? How do data brokers interfere with free and fair elections? What should we do with too-big-to-fail tech giants? How can we capture medical miracles trapped in data without violating patients’ privacy? And how can we build tech that isn’t easily weaponized by abusive people or enemy combatants? That’s coming soon, on Debugger. On to the transcript for today. Click here to visit the podcast home page. Or, click below to listen.

[00:01:27] Bob Sullivan: Welcome to Debugger, a podcast about technology brought to you by Duke University’s Sanford School of Public Policy and the Duke Kenan Institute for Ethics. I’m your host, Bob Sullivan. And I care a lot about today’s topic. So please indulge me for a moment or two while I try to frame this issue.

I came across a story many years ago, and it still haunts me. It haunts me as a technologist and an early believer in the internet because it reads like a sad pre-obituary about a once-famous pop singer who’s now a broke has-been with a drug problem … and as a writer because its prose is nearly poetry. At least to my ears, it’s the kind of thing I wish I’d written. Credit Steve Maich at Maclean’s, the Canadian magazine, for the words. Dramatic reading by old friend Alia Tavakolian:

[00:02:24] Alia Tavakolian: The people who conceived and pioneered the web described a kind of enlightened utopia built on mutual understanding. A world in which knowledge is limited only by one’s curiosity. Instead, we have constructed a virtual Wild West where the masses indulge their darkest vices, pirates of all kinds troll for victims, and the rest of us have come to accept that cyberspace isn’t the kind of place you’d want to raise your kids. The great multinational exchange of ideas and goodwill has devolved into a food fight. And the virtual marketplace is a great place to get robbed. The answers to the great questions of our world may be out there somewhere, but finding them will require you to first wade through an ocean of misinformation, trivia and sludge. We’ve been sold a bill of goods. We’re paying for it through automatic monthly withdrawals from our PayPal accounts.

Let’s put this in terms crude enough for all cyber dwellers to grasp: The internet sucks.

[00:03:23] Bob Sullivan: The internet sucks? I’ve thought about this story for years, come back to it once in a while, but it’s been a while. In fact, it’s been 15 years since those words were first written, and a lot has happened since then.

·         My name is Ed Snowden. I’m 29 years old. I work for Booz Allen Hamilton as an infrastructure analyst for NSA in Hawaii.

·         What exactly are they saying these Russians did? … Well, there are a lot of things we’re alleging the Internet Research Agency did. Um, the main thing is that they posed as American citizens to amplify and spread content that causes division in our society.

·         Tonight, Facebook stock tanking, dropping nearly 7% after allegations that Cambridge Analytica secretly harvested the personal information of 50 million unsuspecting Facebook users.

·         Cyber experts warn the Equifax hack has the potential to haunt Americans for decades. And every adult should assume their information was stolen.

·         Social media is just one of many factors that played a role in the deadly attack on the U.S. Capitol, but it’s a huge one. That attack was openly planned online for weeks.

Bob Sullivan: If the internet sucked in 2006, what should we say about it now? I remember being an intern with Microsoft in 1995, a small part of the launch team for Windows 95. I helped launch internet news. I remember feeling at the time … it was very heady. Like John Perry Barlow, the co-founder of the Electronic Frontier Foundation and a Grateful Dead lyricist, we both felt the internet could one day rival fire in its importance to humanity. Well, actually, what he said was that it was the most transforming technological event since the capture of fire.

So I think we should all admit we haven’t captured the internet. It’s a lot more like an uncontrolled fire right now. Or maybe like a wild animal we haven’t domesticated. Not yet, anyway. How did this happen? How did we lose control of it? Where did we go wrong? Was there some original sin of the internet? A moment when we turned right when we should have turned left? Looking backward isn’t always worthwhile, but sometimes it is. When you’re doing a long mathematics calculation and you make a mistake, it’s not possible to simply erase the answer and correct it. You have to trace your steps back to the original error and calculate forward anew.

I think it’s time we did that with the web.

Maybe this seems like an academic question, but it’s not. The coronavirus pandemic has taught humanity a very painful lesson by now. We’ve all come to realize that like it or not, we’re in this together. We can’t rid half the planet of COVID-19 and hope for the best. That won’t work. We have to all pull in the same direction. All do the things we need to do. Wear masks, avoid indoor spaces, vaccinate when we can …to get and keep the virus on the run. And that won’t happen if we don’t all agree on the same set of facts. But right now the most powerful disinformation machine ever, the biggest lie spreading tool ever, seems to have truth on the run.

So it’s not just academic, it’s personal. It’s life or death.

How do we capture digital fire? How do we domesticate the wild animal that is the internet? The best way to get out of a hole is to stop digging, so I want to begin there.

For the next 45 minutes or so, I’m going to pursue this question of an original sin with the help of a series of experts who were there. As you’ll find out, while some of them might not like the way I frame the question, no one disagrees with the basic premise: We’ve built fatal flaws into our digital lives and we’d better fix them fast.

My first stop is with Richard Purcell. We were at Microsoft together. He was chief privacy officer at Microsoft, one of the first people ever to hold that title, back when I was a cub reporter. We hadn’t talked in years. I caught up with him on Data Privacy Day, a holiday that’s been celebrated for more than a decade in the U.S., though perhaps you don’t celebrate it.


Bob Sullivan: Okay. So I forgot, by the way, to wish you a happy Data Privacy Day.

[00:07:52] Richard Purcell: This is Data Privacy Day, it is the 28th. And you know, in an odd way, Bob, people like me and you and others across the United States are celebrating the Europeans’ decision to ensconce privacy as a fundamental human right. Um, there are people who would say, gosh, you know, we shouldn’t be celebrating foreign countries’, foreign regions’, uh, social awareness. We should be doing it ourselves.

[00:08:24] Bob Sullivan: Richard took what you’d think of today as a very unusual route to an executive job at a big software company.  But then when Richard was a teenager, there really weren’t big software companies.

Richard, when I was preparing to talk to you today, I read a little bit about you and learned some things I didn’t know. Um, including that you worked in railroad maintenance when you were a kid.

[00:08:49] Richard Purcell: I did. I did. I like to ask people about what they did in their 18th year. So imagine you graduated high school, you’re perhaps off to university or some other life, some study to launch yourself into adulthood; what’d you do? And I’ve asked that question of a lot of people, and I’ve had fascinating answers. Privileged people haven’t done much, in my opinion, and in my research, which is anecdotal.

But what I did is I went out on the Union Pacific track lines and I repaired railroad tracks for two summers in a row, actually, to pay for university tuition. So I sweated in the hot sun, swinging a hammer and pushing railroad steel around and pulling out and putting back in creosote timbers for ties and all of that kind of stuff.

It’s what’s called a Gandy dancer. That’s when you have one foot on the ground and one foot on your shovel and you’re pushing rock underneath a railroad tie in order to secure it and keep it from moving. That’s the Gandy dance. When you get 20 people out there Gandy dancing, it looks pretty funny.

[00:10:01] Bob Sullivan:  Richard’s work on the railroad provides an interesting metaphorical starting point for our discussion.

[00:10:08] Richard Purcell: I’ve repaired a few derailments down on the Columbia River, where locomotives are on their side in a slough, puffing and still running and pushing bubbles into the dirty water. It’s pretty, it’s pretty bizarre when you’re working on a river.

[00:10:23] Bob Sullivan: I feel like you just described the state of the internet.

[00:10:27] Richard Purcell: I know. Don’t you think? Yeah. Lying on its side, puffing. Yeah, no, I’m with you. You know, maybe that’s true, Bob, maybe it’s not. Because I predicted, when Facebook faced its Cambridge Analytica scandal (which was a tremendous scandal and, uh, not only an impeachable offense but one for which they should have been convicted), that their value would eventually drop.

That it would take a while, but their value would eventually drop. Frankly, it just hasn’t. The users of these internet services seem to be highly resistant to the social ramifications, the kind of negative effects, of those companies. And, you know, is somebody worth $62 billion to exploit the world’s social fabric? I don’t think so. That’s not a bargain I would want to make. But it’s one we have made.

[00:11:30] Bob Sullivan: Richard’s unusual path to the tech world colors his perceptions about the internet today, and about the role of power in social circles and in leadership.

[00:11:39] Richard Purcell: I grew up … strictly the ’50s, middle-class, easy, no-problem life, but, you know, absolutely no prosperity whatsoever. But what I saw in everyday life is that there are these power relationships that are unfair. Those with power, even in a small town like the one I grew up in, are loath to give up that power. And for some reason are inured to the fact of their privilege; they feel like their privilege is an entitlement.

I worked in the forest. I’ve done a lot of things. I ran a grocery store. I started a newspaper. I did all these things in communities, and the vibrancy and the health of a community is what I find lacking. Leadership begins to be tainted by the objective of maintaining a power relationship instead of sharing it, or instead of using it to create more community vibrancy and health. I find those practical experiences made a big difference in my life.

[00:13:05] Bob Sullivan: It seems like you connect privacy to power, maybe more than someone else might.

[00:13:11] Richard Purcell: Oh, it is about power. Yeah. Yeah. It’s unquestionably about power. If I can know enough about you, I can manipulate you, without a question. And that is a power relationship, and the more successful I am and the more clever I am about that, and the more disguised I am about my motivation, uh, the more advantageous it is to me. But yes, the lack of privacy is the lack of power. Without question, because frankly it is the lack of dignity. It’s the lack of control over my own life. And in fact, the European Union … we celebrate Data Privacy Day today … the European Union’s basis of data protection is the freedom to develop a personality. That’s the language that they used when they promoted data protection and privacy some 40 years ago. And so the whole idea that you are free to develop your own personality indicates how much of a power relationship this is.

[00:14:21] Bob Sullivan: So if data equals power, and privacy is about power, and 40 years ago people were thinking about this, where did we go wrong? Where did the engineers drive the train off the tracks, if you will? Richard, what is the original sin of the internet?

[00:14:38] Richard Purcell: The original sin of the internet, to me, is a failure on our part to key in on the basic question of: just because I can do something doesn’t mean that I should do it. In other words, if I can engineer something … internet history demonstrates that because I can engineer it, then I should use it in any way that that engineering allows. And that just isn’t how life should work. We’ve had many, many follies in our time over that. I don’t want to get overly dramatic about that, and I don’t want to use too harsh examples of it, but the question really is this: the internet was developed as an electronic means of communication without regard to the content of that communication, largely because the engineers enabled scientists and researchers to communicate with one another. And they had benign intents for the most part. And it was never thought that anybody using it would have any other kind of intent.

[00:15:50] Bob Sullivan: Our first history lesson of this podcast. We’ll talk a lot about that naive take going forward. And we’ll also talk about the word privacy, which, I’m here to tell you, is always a pretty big risk for a storyteller.

I think the conversation we’re having, you know, if we had it three or four years ago, it would have felt really academic and been pretty boring to most people.

[00:16:12] Richard Purcell: It has been, you’re right. It has been very boring. I’ve bored people for a long time with this kind of, gosh, what if, jeez, you know, shouldn’t it be this way or that way? And then the stark reality comes with Cambridge Analytica and, oh my gosh, look at this. We can manipulate people.

[00:16:31] Bob Sullivan: But I think what is new to people is, okay, it’s one thing to manipulate them into buying a certain brand of toothpaste. It’s another thing to manipulate them into not believing in democracy anymore.

[00:16:42] Richard Purcell: Isn’t that the truth? I mean, now these nefarious, you know, characters really have some sophisticated controls, not just blunt-instrument controls, and have clear objectives.

It’s hard to understand, isn’t it, Bob? What is the clear objective of somebody who wants to create an unconstitutional limit on free and fair elections? What would their clear objective be? There’s no way that’s a beneficent objective. That’s very much a malicious objective, um, because it’s about the accumulation and centralization of some kind of power and authority and control over large populations.

That’s what’s frightening me the most: there are actors in the background who have a clear objective to create a centralized, powerful control mechanism. Um, and democracy is standing in its way.

[00:17:52] Bob Sullivan: Democracy is standing in the way. Thank goodness for that. Except when this new digital battleground for control was built, we didn’t have great models to rely on. So we borrowed heavily from the one we had and that, well, that might actually be the wrong left turn we made.

[00:18:10] Richard Purcell: In the United States, our commercial world runs largely by a model from telecommunications history, way back in radio and television, that said: Hey, you know, it’s free to use. We just have advertising to support it.

So you don’t have to subscribe to it. And that was back when it was an airwaves broadcast methodology. That model, unfortunately, has persisted, even though the means by which we transmit this information and communicate isn’t an airwaves model anymore; free access to online content persists with the underlying advertising model. And advertising as a model has its own dark side. We see that from all kinds of points of view, of course. Um, but Google and Facebook are not technology companies as much as they are advertising companies. Google and Facebook, and really all internet companies.

[00:19:14] Bob Sullivan: They’re all advertising companies now, but this is a very different kind of advertising. The best TV could do was create programming that probably attracted 18-to-34-year-olds. Things have changed, and changed fast.

[00:19:32] Richard Purcell: Narrowcasting means that I can put out a blog, I can put out a podcast, I can put out a website that has a very narrow audience. But the fact is, even a narrow audience in global terms can have a large population and therefore create more advertising contacts and, as a result, better monetization. Those issues are just a profound part of how the internet works.

[00:20:00] Bob Sullivan: It sounds obvious to say that privacy stands in the way of this business model. Is that true?

[00:20:06] Richard Purcell: Absolutely. No question about it. Privacy is not friendly to the advertising model of monetization and content narrowcasting because, frankly, the basis of advertising, for the internet particularly but really always, has been the demographics of the audience.

The role of transparency and security assurance in driving technology decision-making

The purpose of this research is to understand what affects an organization’s security technology investment decision-making. Sponsored by Intel, Ponemon Institute surveyed 1,875 individuals in the US, UK, EMEA and Latin America who are involved in securing or overseeing the security of their organization’s information systems or IT infrastructure. In addition, they are familiar with their organization’s purchase of IT security technologies and services. The full report is available from Intel’s website.

A key finding from this research is the importance of technology providers being transparent and proactive in helping organizations manage their cybersecurity risks. Seventy-three percent of respondents say their organizations are more likely to purchase technologies and services from companies that are finding, mitigating and communicating security vulnerabilities proactively. Sixty-six percent of respondents say it is very important for their technology provider to have the capability to adapt to the changing threat landscape. Yet 54 percent of respondents say their technology providers don’t offer this capability.

“Security doesn’t just happen. If you are not finding vulnerabilities, then you are not looking hard enough,” said Suzy Greenberg, vice president, Intel Product Assurance and Security. “Intel takes a transparent approach to security assurance to empower customers and deliver product innovations that build defenses at the foundation, protect workloads and improve software resilience. This intersection between innovation and security is what builds trust with our customers and partners.”

Key findings from the study include:

  • Seventy-three percent of respondents say their organization is more likely to purchase technologies and services from technology providers that are proactive about finding, mitigating and communicating security vulnerabilities. Forty-eight percent say their technology providers don’t offer this capability.
  • Seventy-six percent of respondents say it is highly important that their technology provider offer hardware-assisted capabilities to mitigate software exploits.
  • Sixty-four percent of respondents say it is highly important for their technology provider to be transparent about available security updates and mitigations. Forty-seven percent say their technology provider doesn’t provide this transparency.
  • Seventy-four percent of respondents say it is highly important for their technology provider to apply ethical hacking practices to proactively identify and address vulnerabilities in its own products.
  • Seventy-one percent of respondents say it is highly important for technology providers to offer ongoing security assurance and evidence that the components are operating in a known and trusted state.

Part 2. The characteristics of the ideal technology provider

The characteristics are broken down into three categories: security assurance, innovation and adoption. Following are the most important characteristics of a technology provider, paired with whether providers actually deliver each capability. As shown, there is a significant gap between the importance of these features and the ability of many providers to deliver them.

Security Assurance
The ability to identify vulnerabilities in its own products and mitigate them. Sixty-six percent say this is highly important. Only 46 percent of respondents say their current technology provider has this capability.

The ability to be transparent about security updates and mitigations that are available. Sixty-four percent of respondents say this is highly important. Less than half (48 percent) of respondents say their technology providers have this capability.

Ability to offer ongoing security assurance and evidence that the components are operating in a known and trusted state. Seventy-one percent say this is highly important.

Ability to apply ethical hacking practices to proactively identify and address vulnerabilities in its own products. Seventy-four percent of respondents believe this is highly important.

Protecting distributed workloads, data in use and hardware-assisted capabilities to defend against software exploits are highly important. The protection of customer data from insider threats is considered highly important by 79 percent of respondents. Organizations prioritize protecting data in use over data in transit and data at rest. Similarly, 76 percent of respondents say hardware-assisted capabilities to defend against software exploits and 72 percent of respondents say protecting distributed workloads are highly important.

Interoperability issues and installation costs are the primary influencers when making investments in technologies. The top five factors that influence the deployment of security technologies are interoperability issues (63 percent of respondents), installation costs (58 percent of respondents), system complexity issues (57 percent of respondents), vendor support issues (55 percent of respondents) and scalability issues (53 percent of respondents).

As part of their decision-making process, organizations are measuring the economic benefits of security technologies deployed by their organizations. Forty-seven percent of respondents use metrics to understand the value of their technologies. The measures most often used are ROI (58 percent of respondents), the decrease in false positive rates (48 percent of respondents) and the total cost of ownership (46 percent of respondents).

Organizations are at risk because of the inability to quickly address vulnerabilities. As discussed, a top goal of the IT function is to improve the ability to quickly address vulnerabilities. Thirty-six percent of respondents say they only scan every month or more than once a month.

While 30 percent of respondents say their organizations can patch critical or high priority vulnerabilities in a week or less, on average, it takes almost six weeks to patch a vulnerability once it is detected. The delays in patching are mainly caused by human error (63 percent of respondents), the inability to take critical applications and systems off-line in order to patch quickly (58 percent of respondents) and not having a common view of applications and assets across security and IT teams (52 percent of respondents).

Organizations’ IT budgets are not sufficient to support a strong security posture. Eighty-six percent of respondents say their IT budget is only adequate (45 percent of respondents) or less than adequate (41 percent of respondents). Fifty-three percent of respondents say the IT security budget is part of the overall IT budget.

Responsibility for security is still uncertain across organizations. Twenty-one percent of respondents agree the security leader (CISO) should be responsible for IT security objectives, while 19 percent of respondents believe the CIO/CTO and 17 percent of respondents think the business unit leader should be responsible. The conclusion is that there is uncertainty in responsibility.

To read the rest of this study, visit Intel’s website (PDF).

The ‘de-platforming’ of Donald Trump and the future of the Internet

Bob Sullivan


“De-platforming.” That’s the word that stormed tech-land earlier this year, and it’s about time. After the storming of the U.S. Capitol by a criminal mob in January, a host of companies cut off then-President Trump’s digital air supply. His Tweets and Facebook posts fell first; next came second-order effects, like payment processors cutting off the money supply. Finally, upstart right-wing social media platform Parler was removed from app stores and then denied Internet service by Amazon.

Let the grand argument of our time begin.

The story of Donald Trump’s de-platforming involves a dramatic collision of Net Neutrality, and Section 230 “immunity,” and free speech, and surveillance capitalism, even privacy. I think it’s the issue of our time. It deserves much more than a story; it deserves an entire school of thought. But I’ll try to nudge the ball forward here.

This is not a political column, though I realize that everything is political right now. In this piece, I will not examine the merits of banning Trump from Twitter; you can read about that elsewhere.

The deplatforming of Trump is an important moment in the history of the Internet, and it should be examined as such. Yes, it’s very fair to ask: if this can happen to Trump, if it can happen to Parler, can’t it happen to anyone?

But let’s examine it the way teen-agers do in their first year of college. Let’s not scream “free speech” or “slippery slope” at each other and then pop open a can of beer, assuming that that’s some kind of mic drop. Kids do that. Adults live on planet Earth, where decisions are complex, and evolve, and have real-life consequences.

I’ll start here. You can sell guns and beer in most places in America. You can’t sell a gun to someone who walks into your store screaming, “I’m going to kill someone,” and you can’t sell beer to someone who’s obviously drunk and getting into the driver’s seat. You can’t keep selling a car — or even a toaster! — that you know has a defect which causes fires. Technology companies are under no obligation to allow users to abuse others with the tools they build. Cutting them off is not censorship. In fact, forcing these firms to allow such abuse because someone belongs to a political party IS censorship, the very thing that happens in places like China. Tech firms are, and should be, free to make their own decisions about how their tools are used. (With…some exceptions! This is the real world.)

I’ll admit freely: This analogy is flawed. When it comes to technology law — and just technology choices — everyone reaches for analogies, because we want so much for there to be a precedent for our decisions. That takes the pressure off us. We want to say, “This isn’t my logic. It’s Thomas Jefferson’s logic! He’s the reason Facebook must permit lies about the election to be published in my news feed!” Sorry, life isn’t like that. We’re adults. We have to make these choices. They will be hard. They’re going to involve a version 1, and a version 2 and a version 3, and so on. The technology landscape is full of unintended, unexpected consequences, and we must rise to that challenge. We can’t cede our agency in an effort to find silver bullet solutions from the past. We have to make them up as we go along.

That’s why the best thing I read in the past few months about the Trump deplatforming was this piece by Techdirt’s Mike Masnick. He raises an issue that many tech folks want to avoid: Twitter and Facebook have tried really hard to explain their choices by shoehorning them into standing policy violations. That has left everyone unhappy. (Why didn’t they do this months ago? Wait, what policy was broken?) Masnick gets to the heart of the matter quickly.

So, Oremus is mostly correct that they’re making the rules up as they go along, but the problem with this framing is that it assumes that there are some magical rules you can put in place and then objectively apply them always. That’s never ever been the case. The problem with so much of the content moderation debate is that all sides assume these things. They assume that it’s easy to set up rules and easy to enforce them. Neither is true. Radiolab did a great episode a few years ago, detailing the process by which Facebook made and changed its rules. And it highlights some really important things including that almost every case is different, that it’s tough to apply rules to every case, and that context is always changing. And that also means the rules must always keep changing.

A few years back, we took a room full of content moderation experts and asked them to make content moderation decisions on eight cases — none of which I’d argue are anywhere near as difficult as deciding what to do with the President of the United States. And we couldn’t get these experts to agree on anything. On every case, we had at least one person choose each of the four options we gave them, and to defend that position. The platforms have rules because it gives them a framework to think about things, and those rules are useful in identifying both principles for moderation and some bright lines.

But every case is different.

For a long time, I have argued that tech firms’ main failing is they don’t spend anywhere near enough money on safety and security. They have, nearly literally, gotten away with murder for years while the tools they have made cause incalculable harm in the world. Depression, child abuse, illicit drug sales, societal breakdowns, income inequality…tech has driven all these things. The standard “it’s not the tech, it’s the people” argument is another “pop open a beer” one-liner used by techie libertarians who want to count their money without feeling guilty, but we know it’s a dangerous rationalization. Would so many people believe the Earth is flat without YouTube’s recommendation engine? No. Would America be so violently divided without social media? No. You built it, you have to fix it.

If a local mall had a problem with teenage gangs hanging around at night causing mayhem, the mall would hire more security guards. That’s the cost of doing business. For years, big tech platforms have tried to get away with “community moderation” — i.e., they’ve been cheap. They haven’t spent anywhere near enough money to stop the crime committed on their platforms. Why? Because I think it’s quite possible the entire idea of Facebook wouldn’t exist if it had to be safe for users. Safety doesn’t scale. Safety is expensive. It’s not sexy to investors.

How did we get here? In part, thanks to that Section 230 you are hearing so much about. You’ll hear it explained this way: Section 230 gives tech firms immunity from bad things that happen on their platforms. Suddenly, folks who’ve not cared much about it for decades are yelling for its repeal. As many have expressed, it would be better to read up on Section 230 before yelling about it (Here’s my background piece on it). But in short, repealing it wholesale would indeed threaten the entire functioning of the digital economy. Section 230 was designed to do exactly what I am hinting at here — to give tech firms the ability to remove illegal or offensive content without assuming business-crushing liability for everything they do. Again, it’s a law written in an age before Amazon existed, so it sure could use modernization by adults. But pointing to it as some untouchable foundational document, or throwing out the baby with the bathwater, are both the behavior of children, not adults. We’re going to have to make some things up as we go along.

Here’s the thing about “free speech” on platforms like Twitter and Facebook. As a private company, Twitter can do whatever it wants to do with the tool it makes. Forcing it to carry this or that post is a terrible idea. President Trump can stand in front of a TV camera, or buy his own TV station (as seems likely) and say whatever he wants. Suspending his account is not censorship. As I explain in a recent Section 230 post, however, even that line of logic is incomplete. Social media firms use public resources, and some are so dominant that they are akin to a public square. We just might need a new rule for this problem! I suspect the right rule isn’t telling Twitter what to post, but perhaps making sure there is more diversity in technology tools available to speakers.

But here’s the thing I find myself saying most often right now: The First Amendment guarantees your right to say (most) things; it doesn’t guarantee your right to algorithmic juice. I believe this is the main point of confusion we face right now, and one that really throws a lot of First Amendment thinking for a loop. Information finds you now. That information has been hand-picked to titillate you like crack cocaine, or like sex. The more extreme things people say, the more likely they are to land on your Facebook wall, or in your Twitter feed, or wherever you live. It’s one thing to give Harold the freedom to yell to his buddies at the bar about children locked in a basement by Democrats at a pizza place in Washington D.C. It’s quite another to give him priority access to millions of people using a tool designed to make them feel like they are viewing porn. That’s what some people are calling free speech right now. James Madison didn’t include a guaranteed right to “virality” in the Bill of Rights. No one guaranteed that Thomas Paine’s pamphlets were to be shoved under everyone’s doors, let alone right in front of their eyeballs at the very moment they were most likely to take up arms. We’re going to need new ways to think about this. In the algorithmic world, the beer-popping line, “The solution to hate speech is more speech,” just doesn’t cut it.

I’m less interested in the Trump ban than I am in the ban of services like Parler. Trump will have no trouble speaking; but what of everyday conservatives who are now wondering if tech is out to get them? If you are liberal: Imagine if some future government decides Twitter is a den of illegal activity and shuts it down. That’s an uncomfortable intellectual exercise, and one we shouldn’t dismiss out of hand.

Parler is a scary place. Before anyone has anything to say about its right to exist, I think you really should spend some time there, or at least read my story about the lawsuit filed by Christopher Krebs. Ask yourself this question: What percentage of a platform’s content needs to be death threats, or about organizing violence, before you’ll stop defending its right to exist? Let’s say we heard about a tool named “Anteroom” which ISIS cells used to radicalize young Muslims, spew horrible things about Americans, teach bomb-making, and organize efforts to…storm the Capitol building in D.C. Would you really be so upset if Apple decided not to host Anteroom in its App Store?

So, what do we do with Parler? Despite all this, I’m still uncomfortable with removing its access to network resources the way Amazon has. I think that feels more like smashing a printing press than forcing books into people’s living rooms. Amazon Web Services is much more like a public utility than Twitter is. The standard for removal there should be much higher, I think. And if it makes you uncomfortable that a private company made that decision, rather than some public body that is responsible to voting citizens, it should. At the same time, if you can think of no occasion for banning a service like Parler, then you just aren’t thinking.

These are complex issues and we will need our best, most creative legal minds to ponder them in the years to come. Here’s one: Matt Stoller, in his Big newsletter about tech monopolies, offers a thoughtful framework for dealing with this problem. I’d like to hear yours. But don’t hit me with beer-popping lines or stuff you remember from Philosophy 101. This is no time for academic smugness. We have real problems down here on planet Earth.

Should conservative social network Parler be removed from AWS, Google, and Apple app stores? This is an interesting question, because Parler is where some of the organizing for the Capitol Hill riot took place. Amazon just removed Parler from its cloud infrastructure, and Google and Apple removed Parler from their app stores. Removing the product may seem necessary to save lives, but having these tech oligarchs do it seems like a dangerous overreach of private authority. So what to do?

My view is that what Parler is doing should be illegal, because it should be responsible on product liability terms for the known outcomes of its product, aka violence. This is exactly what I wrote when I discussed the problem of platforms like Grindr and Facebook fostering harm to users. But what Parler is doing is *not* illegal, because Section 230 means it has no obligation for what its product does. So we’re dealing with a legal product and there’s no legitimate grounds to remove it from key public infrastructure. Similarly, what these platforms did in removing Parler should be illegal, because they should have a public obligation to carry all customers engaging in legal activity on equal terms. But it’s not illegal, because there is no such obligation. These are private entities operating public rights of way, but they are not regulated as such.

In other words, we have what should be an illegal product barred in a way that should also be illegal, a sort of ‘two wrongs make a right’ situation. I say ‘sort of’ because letting this situation fester without righting our public policy will lead to authoritarianism and censorship. What we should have is the legal obligation for these platforms to carry all legal content, and a legal framework that makes business models fostering violence illegal. That way, we can make public choices about public problems, and political violence organized on public rights-of-way certainly is a public problem.


Data Center Downtime at the Core and the Edge: A Survey of Frequency, Duration and Attitudes

Edge computing is expanding rapidly and re-shaping the data center ecosystem as organizations across industries move computing and storage closer to users to improve response times and reduce bandwidth requirements.

While forms of distributed computing have been common in some sectors for years, this current evolution is distinct in that it is enabling a broad range of new and emerging applications and has higher criticality requirements than traditional distributed computing sites.

At the same time, core data center managers are dealing with increased complexity and balancing multiple and sometimes conflicting priorities that can compromise availability.

As a result, today’s data center networks are more vulnerable to downtime than ever before. In an effort to quantify that vulnerability, the Ponemon Institute conducted a study of downtime frequency, duration and attitudes at the core and the edge, sponsored by Vertiv.

The study is based on responses from 425 participants representing 132 data centers and 1,667 edge locations. All core and edge data centers included in the study are located in the United States/Canada and Latin America (LATAM).

The study found data center networks vulnerable to downtime events across the network. Core data centers experienced an average of 2.4 total facility shutdowns per year with an average duration of more than two hours (138 minutes). This is in addition to almost 10 downtime events annually isolated to select racks or servers. At the edge, the frequency of total facility shutdowns was even higher, although the duration of those outages was less than half that of those in core data centers.

The study also looks at the attitudes that shape decisions regarding core and edge data centers to help identify factors that could be contributing to downtime events. More than half (54%) of all core data centers are not using best practices in system design and redundancy, and 69% say their risk of an unplanned outage is increased as a result of cost constraints.

Leading causes of unplanned downtime events at the core and the edge included cyberattacks, IT equipment failures, human error, UPS battery failure, and UPS equipment failure.

Finally, the study asked participants to identify the actions their organizations could take to prevent future downtime events. They identified activities ranging from investment in new equipment to infrastructure redundancy to improved training and documentation.

Key Findings

Facility Size
Edge data centers aren’t necessarily defined by size but by function. For the purpose of this research, edge data centers are defined as facilities that bring computation and data storage closer to the location where it is needed to improve response times and save bandwidth. Nevertheless, edge data centers were on average about one-third the size of the core data centers.

The extrapolated average size for core data centers that participated in this study is 15,153 square feet (1,408 square meters). For edge computing facilities, the average size is 5,010 square feet (465 square meters).

Frequency of Core and Edge Downtime

Figure 3 shows the shutdown experience of participating data centers over the past 24 months. As can be seen, total data center shutdown has the lowest frequency (4.81). However, these events are also the most disruptive, and the 4.81 unplanned total facility shutdowns over a 24-month period would be considered unacceptable for many organizations.

Partial outages of certain racks in the data center have the highest frequency at 9.93, followed by individual server outages at 9.43.

It can be difficult to directly compare the total number of downtime events in edge and core facilities due to the higher complexity generally found in core data centers and the increased presence of personnel in these facilities. However, it is possible to compare total facility shutdowns for core and edge data centers. Edge data centers experienced a slightly higher frequency of total facility shutdowns at an average of 5.39 over 24 months. As edge sites continue to proliferate, reducing the frequency of outages at the edge will become a high priority for many organizations.


Facebook needs a corrections policy, viral circuit breakers, and much more

Bob Sullivan

“After hearing that Facebook is saying that posting the Lord’s Prayer goes against their policies, I’m asking all Christians please post it. Our Father, who art in Heaven…”

Someone posted this obvious falsehood on Facebook recently, and it ended up on my wall. Soon after, a commenter in the responses broke the news to the writer that this was fake. The original writer then did what many social media users do: responded with a simple, “Well, someone else said it and I was just repeating it.” No apology. No obvious shame. And most important, no correction.

I don’t know how many people saw the original post, but I am certain far fewer people saw the response and link to Snopes showing it was a common hoax.

If there’s one thing that people rightly hate about journalists, it’s our tendency to put mistakes on the front page and corrections on the back page. That’s unfair, of course. If you make a mistake, you should try to make sure just as many people see the correction as the mistake. Journalists might be bad at this, but Facebook is awful at it. This is one reason social media is such a dastardly tool for sharing misinformation.

Fixing this is one of the novel recommendations in a great report published late last year by the Forum on Information and Democracy, an international group formed to make recommendations about the future of social media. The report suggests that, when an item like this Our Father assertion spreads on social media and is determined to be misinformation by an independent group of fact checkers, a correction should be shown to each and every user exposed to it. So, not just a lame “I dunno, a friend said it” that’s buried in later comments. Rather, it would require an attempt to undo the actual spread of incorrect information.

Anyone concerned about the future of democracy and discourse should take a look at this report. I’ve summarized a few of the highlights below.

Facebook recently said it would start downplaying political posts in an effort to deal with the disinformation problem. It must do much more than that. The Forum on Information and Democracy offers a good start.


Circuit breakers

The report calls for the creation of friction to prevent misinformation from spreading like wildfire. My favorite part of this section calls for “circuit breakers” that would slow down viral content after it reaches a certain threshold, giving fact-checkers a chance to examine it. The concept is borrowed from Wall Street. When trading in a certain stock falls dramatically out of pattern — say there’s a sudden spike in sell orders — circuit breakers kick in to give traders a moment to breathe and digest whatever news might exist. Sometimes, that short break re-introduces rationality to the market. Note: This wouldn’t infringe on anyone’s ability to say what they want; it would merely slow down the automated amplification of this speech.
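The mechanism can be sketched in a few lines. This is a hypothetical illustration of the circuit-breaker idea, not any platform’s actual implementation: the class name, the one-hour sliding window, and the share-rate threshold are all assumptions.

```python
import time
from collections import deque

class ViralCircuitBreaker:
    """Hypothetical sketch: pause algorithmic amplification of a post
    once its share rate crosses a threshold, until fact-checkers
    review it. Users can still share the post; only the automated
    boosting is halted."""

    def __init__(self, max_shares_per_hour=1000):
        self.max_shares_per_hour = max_shares_per_hour
        self.share_times = deque()  # timestamps of recent shares
        self.paused = False         # amplification halted pending review

    def record_share(self, now=None):
        """Log a share; return True if amplification is still allowed."""
        now = time.time() if now is None else now
        self.share_times.append(now)
        # Keep only shares from the last hour (sliding window)
        while self.share_times and now - self.share_times[0] > 3600:
            self.share_times.popleft()
        if len(self.share_times) > self.max_shares_per_hour:
            self.paused = True  # trip the breaker: queue for fact-check
        return not self.paused

    def clear_review(self):
        """Fact-checkers approved the content; resume amplification."""
        self.paused = False
```

As with the stock-market version, tripping the breaker does not delete anything; it simply buys reviewers time before the recommendation engine keeps pouring fuel on the fire.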

A digital ‘building code’ and ‘agency by design’

You can’t buy a toaster that hasn’t been tested to make sure it’s safe, but you can use software that might, metaphorically, burn your house down. This metaphor was used a lot after the mortgage meltdown in 2008, and it applies here, too. “In the same way that fire safety tests are conducted prior to a building being opened to the public, such a ‘digital building code’ would also result in a shift towards prevention of harm through testing prior to release to the public,” the report says.

The report adds a few great nuggets. While being critical of click-to-consent agreements, it says: “In the case of civil engineering, there are no private ‘terms and conditions’ that can override the public’s presumption of safety.” The report also calls for a concept called “agency by design” that would require software engineers to design opt-ins and other moments of permission so users are most likely to understand what they are agreeing to. This is “proactively choice-enhancing,” the report argues.

Abusability Testing

I’ve long been enamored of this concept. Most software is now tested by security professionals hired to “hack” it, a process called penetration testing. This concept should be expanded to include any kind of misuse of software by bad guys. If you create a new messenger service, could it be abused by criminals committing sweetheart scams? Could it be turned into a weapon by a nation-state promoting disinformation campaigns? Could it aid and abet those who commit domestic violence? Abusability testing should be standard, and critically, it should be conducted early on in software development.

Algorithmic undue influence

In other areas of law, contracts can be voided if one party has so much influence over the other that true consent could not be given. The report suggests that algorithms create this kind of imbalance online, so the concept should be extended to social media.

“(Algorithmic undue influence) could result in individual choices that would not occur but for the duplicitous intervention of an algorithm to amplify or withhold select information on the basis of engagement metrics created by a platform’s design or engineering choice.”

“Already, the deleterious impact of algorithmic amplification of COVID-19 misinformation has been seen, and there are documented cases where individuals took serious risks to themselves and others as a result of deceptive conspiracy theories presented to them on social platforms. The law should view these individuals as victims who relied on a hazardously engineered platform that exposed them to manipulative information that led to serious harm.”


Social media has created content bubbles and echo chambers. In the real world, it can be illegal to engineer residential developments that encourage segregation. The report suggests similar limitations online.

“Platforms … tacitly assume a role akin to town planners. … Some of the most insidious harms documented from social platforms have resulted from the algorithmic herding of users into homogeneous clusters. … What we’re seeing is a cognitive segregation, where people exist in their own informational ghettos,” the report says.

In order to promote a shared digital commons, the report makes these suggestions.

Consider applying equivalent anti-segregation legal principles to the digital commons, including a ban on ‘digital redlining’, where platforms allow groups or advertisers to prevent particular racial or religious groups from accessing content.

Create legal tests focused on the ultimate effects of platform design on racial inequities and substantive fairness, regardless of the original intent of design.

Create specific standards and testing requirements for algorithmic bias.

Disclose conflicts of interest

The report includes a lot more on algorithms and transparency, but one item I found noteworthy: If a platform promotes or demotes content in a way that is self-serving, it has to disclose that. If Facebook’s algorithm makes a decision about a new social media platform, or about a government’s efforts to regulate social media, Facebook would have to disclose that.

These are nuggets I pulled out, but to grasp the larger concepts, I strongly recommend you read the entire report.