
Data Center Downtime at the Core and the Edge: A Survey of Frequency, Duration and Attitudes

Edge computing is expanding rapidly and re-shaping the data center ecosystem as organizations across industries move computing and storage closer to users to improve response times and reduce bandwidth requirements.

While forms of distributed computing have been common in some sectors for years, this current evolution is distinct in that it is enabling a broad range of new and emerging applications and has higher criticality requirements than traditional distributed computing sites.

At the same time, core data center managers are dealing with increased complexity and balancing multiple and sometimes conflicting priorities that can compromise availability.

As a result, today’s data center networks are more vulnerable to downtime than ever before. In an effort to quantify that vulnerability, the Ponemon Institute conducted a study of downtime frequency, duration and attitudes at the core and the edge, sponsored by Vertiv.

The study is based on responses from 425 participants representing 132 data centers and 1,667 edge locations. All core and edge data centers included in the study are located in the United States/Canada and Latin America (LATAM).

The study found data center networks vulnerable to downtime events across the network. Core data centers experienced an average of 2.4 total facility shutdowns per year with an average duration of more than two hours (138 minutes). This is in addition to almost 10 downtime events annually isolated to select racks or servers. At the edge, the frequency of total facility shutdowns was even higher, although the duration of those outages was less than half that of those in core data centers.

The study also looks at the attitudes that shape decisions regarding core and edge data centers to help identify factors that could be contributing to downtime events. More than half (54%) of core data centers are not following best practices in system design and redundancy, and 69% of respondents say cost constraints increase their risk of an unplanned outage.

Leading causes of unplanned downtime events at the core and the edge included cyberattacks, IT equipment failures, human error, UPS battery failure, and UPS equipment failure.

Finally, the study asked participants to identify the actions their organizations could take to prevent future downtime events. They identified activities ranging from investment in new equipment to infrastructure redundancy to improved training and documentation.

Key Findings

Facility Size
Edge data centers aren’t necessarily defined by size but by function. For the purpose of this research, edge data centers are defined as facilities that bring computation and data storage closer to the location where they are needed to improve response times and save bandwidth. Nevertheless, edge data centers were on average about one-third the size of the core data centers.

The extrapolated size for core data centers that participated in this study is 15,153 square feet/1,408 square meters. For edge computing facilities, the average size is 5,010 square feet/465 square meters.

Frequency of Core and Edge Downtime

 Figure 3 shows the shutdown experience of participating data centers over the past 24 months. As can be seen, total data center shutdown has the lowest frequency (4.81). However, these events are also the most disruptive, and the 4.81 unplanned total facility shutdowns over a 24-month period would be considered unacceptable for many organizations.

Partial outages of certain racks in the data center have the highest frequency at 9.93, followed by individual server outages at 9.43.

It can be difficult to directly compare the total number of downtime events in edge and core facilities due to the higher complexity generally found in core data centers and the increased presence of personnel in these facilities. However, it is possible to compare total facility shutdowns for core and edge data centers. Edge data centers experienced a slightly higher frequency of total facility shutdowns at an average of 5.39 over 24 months. As edge sites continue to proliferate, reducing the frequency of outages at the edge will become a high priority for many organizations.
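
To put these frequencies in perspective, the back-of-the-envelope sketch below annualizes the reported figures. The edge outage duration is an assumption used only for illustration; the study says merely that it is less than half the 138-minute core average.

```python
# Back-of-the-envelope annualization of the study's outage figures.
core_shutdowns_per_24mo = 4.81   # total facility shutdowns per 24 months (core)
edge_shutdowns_per_24mo = 5.39   # total facility shutdowns per 24 months (edge)
core_duration_min = 138          # average core shutdown duration, minutes
edge_duration_min = 65           # ASSUMED: the study says only "less than half" of 138

core_minutes_per_year = core_shutdowns_per_24mo / 2 * core_duration_min
edge_minutes_per_year = edge_shutdowns_per_24mo / 2 * edge_duration_min

print(f"Core: ~{core_minutes_per_year:.0f} minutes of full-facility downtime per year")
print(f"Edge: ~{edge_minutes_per_year:.0f} minutes of full-facility downtime per year")
```

Under these assumptions, a core site loses roughly 330 minutes of full-facility availability per year; a single edge site loses less, but the sheer number of edge locations multiplies the aggregate exposure.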

TO READ THE REST OF THIS REPORT, VISIT VERTIV’S WEBSITE

Facebook needs a corrections policy, viral circuit breakers, and much more

Bob Sullivan

“After hearing that Facebook is saying that posting the Lord’s Prayer goes against their policies, I’m asking all Christians please post it. Our Father, who art in Heaven…”

Someone posted this obvious falsehood on Facebook recently, and it ended up on my wall. Soon after, a commenter in the responses broke the news to the writer that this was fake. The original writer then did what many social media users do: responded with a simple, “Well, someone else said it and I was just repeating it.” No apology. No obvious shame. And most important, no correction.

I don’t know how many people saw the original post, but I am certain far fewer people saw the response and link to Snopes showing it was a common hoax.

If there’s one thing that people rightly hate about journalists, it’s our tendency to put mistakes on the front page and corrections on the back page. That’s unfair, of course. If you make a mistake, you should try to make sure just as many people see the correction as the mistake. Journalists might be bad at this, but Facebook is awful at it. This is one reason social media is such a dastardly tool for sharing misinformation.

Fixing this is one of the novel recommendations in a great report published late last year by the Forum on Information and Democracy, an international group formed to make recommendations about the future of social media. The report suggests that, when an item like this Our Father assertion spreads on social media and is determined to be misinformation by an independent group of fact checkers, a correction should be shown to each and every user exposed to it. So, not just a lame “I dunno, a friend said it” that’s buried in later comments. Rather, it would require an attempt to undo the actual spread of incorrect information.
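
As a thought experiment, here is a minimal sketch of how such exposure-based corrections could work; the data structures and the notify callback are hypothetical, not anything the report or Facebook specifies.

```python
from collections import defaultdict

# Hypothetical sketch: remember which users were exposed to a post so that,
# if independent fact-checkers later flag it, a correction reaches every one
# of them instead of being buried in the comments.

exposures = defaultdict(set)   # post_id -> user_ids who saw the post

def record_exposure(post_id, user_id):
    exposures[post_id].add(user_id)

def issue_correction(post_id, correction_text, notify):
    """Push the correction to every user who saw the flagged post."""
    recipients = exposures.get(post_id, set())
    for user_id in recipients:
        notify(user_id, correction_text)
    return len(recipients)

# Usage: once fact-checkers flag post "p1", every exposed user is notified.
record_exposure("p1", "alice")
record_exposure("p1", "bob")
issue_correction("p1", "This claim is a known hoax (see Snopes).",
                 notify=lambda user, msg: print(f"to {user}: {msg}"))
```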

Anyone concerned about the future of democracy and discourse should take a look at this report. I’ve summarized a few of the highlights below.

Facebook recently said it would start downplaying political posts in an effort to deal with the disinformation problem. It must do much more than that.   The Forum on Information and Democracy offers a good start.

 

Circuit breakers

The report calls for the creation of friction to prevent misinformation from spreading like wildfire. My favorite part of this section calls for the creation of “circuit breakers” that would slow down viral content after it reaches a certain threshold, giving fact-checkers a chance to examine it. The concept is borrowed from Wall Street. When trading in a certain stock falls dramatically out of pattern — say there’s a sudden spike in selling — circuit breakers kick in to give traders a moment to breathe and digest whatever news might exist. Sometimes, that short break re-introduces rationality to the market. Note: This wouldn’t infringe on anyone’s ability to say what they want; it would merely slow down the automated amplification of that speech.
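
Here is a minimal sketch of what such a viral circuit breaker might look like; the threshold, time window, and review queue are assumptions for illustration, not parameters the report prescribes.

```python
import time
from collections import deque

# Hypothetical "viral circuit breaker": if a post is shared more than
# THRESHOLD times within a sliding WINDOW, algorithmic amplification is
# paused and the post is queued for human fact-checking. Users can still
# share it; only the automated boost is held back.

THRESHOLD = 1000           # shares within the window that trip the breaker (assumed)
WINDOW = 3600              # sliding window in seconds (assumed)

share_times = {}           # post_id -> deque of recent share timestamps
paused_for_review = set()  # posts whose amplification is temporarily paused

def record_share(post_id, now=None):
    """Record a share; return True if the post may still be amplified."""
    now = time.time() if now is None else now
    q = share_times.setdefault(post_id, deque())
    q.append(now)
    while q and now - q[0] > WINDOW:    # drop shares outside the window
        q.popleft()
    if len(q) > THRESHOLD:
        paused_for_review.add(post_id)  # hand the post to fact-checkers
    return post_id not in paused_for_review
```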

A digital ‘building code’ and ‘agency by design’

You can’t buy a toaster that hasn’t been tested to make sure it’s safe, but you can use software that might, metaphorically, burn your house down. This metaphor was used a lot after the mortgage meltdown in 2008, and it applies here, too. “In the same way that fire safety tests are conducted prior to a building being opened to the public, such a ‘digital building code’ would also result in a shift towards prevention of harm through testing prior to release to the public,” the report says.

The report adds a few great nuggets. While being critical of click-to-consent agreements, it says: “In the case of civil engineering, there are no private ‘terms and conditions’ that can override the public’s presumption of safety.” The report also calls for a concept called “agency by design” that would require software engineers to design opt-ins and other moments of permission so users are most likely to understand what they are agreeing to. This is “proactively choice-enhancing,” the report argues.

Abusability Testing

I’ve long been enamored of this concept. Most software is now tested by security professionals hired to “hack” it, a process called penetration testing. This concept should be expanded to include any kind of misuse of software by bad guys. If you create a new messenger service, could it be abused by criminals committing sweetheart scams? Could it be turned into a weapon by a nation-state promoting disinformation campaigns? Could it aid and abet those who commit domestic violence? Abusability testing should be standard, and critically, it should be conducted early on in software development.
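
To show how abusability testing could sit next to ordinary unit tests, here is a hedged sketch; the messenger client and the rate-limit behavior it asserts are entirely hypothetical.

```python
import pytest

# Hypothetical abusability test for an imaginary messenger service: instead
# of asking "does the feature work?", it asks "can the feature be weaponized?"
# -- here, blasting unsolicited messages from a single account.

class MessengerClient:
    """Stand-in for the service under test (entirely hypothetical API)."""
    MAX_UNSOLICITED_PER_HOUR = 25

    def __init__(self):
        self.sent = 0

    def send_bulk(self, recipients, text):
        # The behavior the test demands: refuse spam-scale unsolicited sends.
        if self.sent + len(recipients) > self.MAX_UNSOLICITED_PER_HOUR:
            raise PermissionError("rate limit: possible scam/disinformation pattern")
        self.sent += len(recipients)

def test_cannot_blast_unsolicited_messages():
    client = MessengerClient()
    strangers = [f"user{i}" for i in range(500)]
    with pytest.raises(PermissionError):
        client.send_bulk(strangers, "You've won! Click here...")
```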

Algorithmic undue influence

In other areas of law, contracts can be voided if one party has so much influence over the other that true consent could not be given. The report suggests that algorithms create this kind of imbalance online, so the concept should be extended to social media.

“(Algorithmic undue influence) could result in individual choices that would not occur but for the duplicitous intervention of an algorithm to amplify or withhold select information on the basis of engagement metrics created by a platform’s design or engineering choice.”

“Already, the deleterious impact of algorithmic amplification of COVID-19 misinformation has been seen, and there are documented cases where individuals took serious risks to themselves and others as a result of deceptive conspiracy theories presented to them on social platforms. The law should view these individuals as victims who relied on a hazardously engineered platform that exposed them to manipulative information that led to serious harm.”

De-segregation

Social media has created content bubbles and echo chambers. In the real world, it can be illegal to engineer residential developments that encourage segregation. The report suggests similar limitations online.

“Platforms … tacitly assume a role akin to town planners. … Some of the most insidious harms documented from social platforms have resulted from the algorithmic herding of users into homogenous clusters. What we’re seeing is a cognitive segregation, where people exist in their own informational ghettos,” the report says.

In order to promote a shared digital commons, the report makes these suggestions.

  • Consider applying equivalent anti-segregation legal principles to the digital commons, including a ban on ‘digital redlining’, where platforms allow groups or advertisers to prevent particular racial or religious groups from accessing content.
  • Create legal tests focused on the ultimate effects of platform design on racial inequities and substantive fairness, regardless of the original intent of design.
  • Create specific standards and testing requirements for algorithmic bias.

Disclose conflicts of interest

The report includes a lot more on algorithms and transparency, but one item I found noteworthy: If a platform promotes or demotes content in a way that is self-serving, it has to disclose that. If Facebook’s algorithm makes a decision about a new social media platform, or about a government’s efforts to regulate social media, Facebook would have to disclose that.

These are nuggets I pulled out, but to grasp the larger concepts, I strongly recommend you read the entire report.

The State of Breach and Attack Simulation and the Need for Continuous Security Validation: A Study of US and UK Organizations

The purpose of this research, sponsored by Cymulate, is to better understand how the rapidly evolving threat landscape and the frequency of changes in the IT architecture and in security are creating new challenges. The research focuses on the testing and validation of security controls in this extremely dynamic environment. We also seek to understand the issues organizations have in their ability to detect and remediate threats through assessments and testing of security controls.

Although change has always been a constant in both IT and cybersecurity, COVID-19 has accelerated business digitization and security adaptations. Seventy-nine percent of respondents say that they have had to modify security policies to accommodate working from home.

Sixty-two percent of respondents say their organizations had to acquire new security technologies to protect employees working from home (WFH), and yet 62 percent of respondents say their organizations did not validate these newly deployed security controls.

Ponemon Institute surveyed 1,016 IT and IT security practitioners in the United States and United Kingdom who are familiar with their organizations’ testing and evaluation of security controls. An average of 13 individuals staff the security team in organizations represented in this research.

Following are key takeaways from the research.

  • Sixty-one percent of respondents say the benefit of continuous security validation or frequent security testing is the ability to identify security gaps due to changes in the IT architecture, followed by 59 percent of respondents who say it is the ability to identify security gaps caused by human error and misconfigurations.
  •  Sixty percent of respondents say their organizations are making frequent changes to security controls; daily (27 percent of respondents) and weekly (33 percent of respondents). Sixty-seven percent of respondents say that it is very important to test that the changes applied to the security controls have not created security gaps such as software bugs or vulnerabilities, misconfigurations and human error.
  • Seventy percent of respondents say it is important to validate the effectiveness of security controls against new threats and hacker techniques and tactics.

 The following findings are based on a deeper analysis of the research.

 Vigilance in testing the effectiveness of security controls increases confidence that security controls are working as they are supposed to.

  • Respondents who say their organization is vigilant in testing the effectiveness of its security controls (38 percent of respondents) have a much higher level of confidence that the organization’s security controls are working as they are supposed to. Of the 22 percent of respondents who rate their level of confidence as high, almost half (47 percent) say they are vigilant in their effectiveness testing.

High confidence in security controls increases the security posture in an evolving threat landscape.

  • Organizations that have a high level of confidence that their security controls are working as they are supposed to are applying changes to security controls (e.g., configuration settings, software or signature updates, policy rules, etc.) daily or weekly.
  • These organizations have a much lower percentage of security controls that fail pen testing and/or attack simulation within each cycle. Specifically, 25 percent of respondents with high confidence say less than 10 percent of security controls fail pen testing and/or attack simulation.

“It is clear from the report that security experts see the need for continuous security validation. Given that the primary methodology for security testing is limited in scope, manual and a lengthy process, it does not meet the pace of new threats and business-driven IT change. It comes as no surprise that threat actors are free to exploit remote access, remote desktop, and virtual desktop vulnerabilities, as companies expanded the use of these technologies without security validation, to support employees working from home,” said Eyal Wachsman, Co-Founder and CEO at Cymulate.

The report is organized according to the following topics.

  • The impact of current approaches to the testing of security controls on an organization’s security posture
  • Security control validation and Breach and Attack Simulation (BAS)
  • Steps taken to address possible security risks due to COVID-19
  • Perceptions about the effectiveness of Managed Security Service Providers (MSSPs)
  • Differences between organizations in the US and UK

Read the entire report on Cymulate’s website

 

The ‘de-platforming’ of Donald Trump and the future of the Internet

Bob Sullivan

“De-platforming.”

That’s the word of the week in tech-land, and it’s about time. After the storming of the U.S. Capitol by a criminal mob on Wednesday, a host of companies have cut off President Trump’s digital air supply. His Tweets and Facebook posts fell first; next came second-order effects, like payment processors cutting off the money supply. Finally, upstart right-wing social media platform Parler was removed from app stores and then denied Internet service by Amazon.

Let the grand argument of our time begin.

The story of Donald Trump’s de-platforming involves a dramatic collision of Net Neutrality, and Section 230 “immunity,” and free speech, and surveillance capitalism, even privacy. I think it’s the issue of our time. It deserves much more than a story; it deserves an entire school of thought. But I’ll try to nudge the ball forward here.

This is not a political column, though I realize that everything is political right now. In this piece, I will not examine the merits of banning Trump from Twitter; you can read about that elsewhere.

The deplatforming of Trump is an important moment in the history of the Internet, and it should be examined as such. Yes, it’s very fair to ask, if this can happen to Trump, if it can happen to Parler, can’t it happen to anyone?

But let’s examine it the way teen-agers do in their first year of college. Let’s not scream “free speech” or “slippery slope” at each other and then pop open a can of beer, assuming that that’s some kind of mic drop. Kids do that. Adults live on planet Earth, where decisions are complex, and evolve, and have real-life consequences.

I’ll start here. You can sell guns and beer in most places in America. You can’t sell a gun to someone who walks into your store screaming, “I’m going to kill someone,” and you can’t sell beer to someone who’s obviously drunk and getting into the driver’s seat. You can’t keep selling a car — or even a toaster! — that you know has a defect which causes fires. Technology companies are under no obligation to allow users to abuse others with the tools they build. Cutting them off is not censorship. In fact, forcing these firms to allow such abuse because someone belongs to a political party IS censorship, the very thing that happens in places like China. Tech firms are, and should be, free to make their own decisions about how their tools are used. (With…some exceptions! This is the real world.)

I’ll admit freely: This analogy is flawed. When it comes to technology law — and just technology choices — everyone reaches for analogies, because we want so much for there to be a precedent for our decisions. That takes the pressure off us. We want to say, “This isn’t my logic. It’s Thomas Jefferson’s logic! He’s the reason Facebook must permit lies about the election to be published in my news feed!” Sorry, life isn’t like that. We’re adults. We have to make these choices. They will be hard. They’re going to involve a version 1, and a version 2 and a version 3, and so on. The technology landscape is full of unintended, unexpected consequences, and we must rise to that challenge. We can’t cede our agency in an effort to find silver bullet solutions from the past. We have to make them up as we go along.

That’s why the best thing I read the past few days about the Trump deplatforming was this piece by Techdirt’s Mike Masnick. He raises an issue that many tech folks want to avoid: Twitter and Facebook have tried really hard to explain their choices by shoehorning them into standing policy violations. That has left everyone unhappy. (Why didn’t they do this months ago? Wait, what policy was broken?) Masnick gets to the heart of the matter quickly.

So, Oremus is mostly correct that they’re making the rules up as they go along, but the problem with this framing is that it assumes that there are some magical rules you can put in place and then objectively apply them always. That’s never ever been the case. The problem with so much of the content moderation debate is that all sides assume these things. They assume that it’s easy to set up rules and easy to enforce them. Neither is true. Radiolab did a great episode a few years ago, detailing the process by which Facebook made and changed its rules. And it highlights some really important things including that almost every case is different, that it’s tough to apply rules to every case, and that context is always changing. And that also means the rules must always keep changing.

A few years back, we took a room full of content moderation experts and asked them to make content moderation decisions on eight cases — none of which I’d argue are anywhere near as difficult as deciding what to do with the President of the United States. And we couldn’t get these experts to agree on anything. On every case, we had at least one person choose each of the four options we gave them, and to defend that position. The platforms have rules because it gives them a framework to think about things, and those rules are useful in identifying both principles for moderation and some bright lines.

But every case is different.

For a long time, I have argued that tech firms’ main failing is they don’t spend anywhere near enough money on safety and security. They have, nearly literally, gotten away with murder for years while the tools they have made cause incalculable harm in the world. Depression, child abuse, illicit drug sales, societal breakdowns, income inequality…tech has driven all these things. The standard “it’s not the tech, it’s the people” argument is another “pop open a beer” one-liner used by techie libertarians who want to count their money without feeling guilty, but we know it’s a dangerous rationalization. Would so many people believe the Earth is flat without YouTube’s recommendation engine? No. Would America be so violently divided without social media? No. You built it, you have to fix it.

If a local mall had a problem with teen-age gangs hanging around at night causing mayhem, the mall would hire more security guards. That’s the cost of doing business. For years, big tech platforms have tried to get away with “community moderation” — i.e., they’ve been cheap. They haven’t spent anywhere near enough money to stop the crime committed on their platforms. Why? Because I think it’s quite possible the entire idea of Facebook wouldn’t exist if it had to be safe for users. Safety doesn’t scale. Safety is expensive. It’s not sexy to investors.

How did we get here? In part, thanks to that Section 230 you are hearing so much about. You’ll hear it explained this way: Section 230 gives tech firms immunity from bad things that happen on their platforms. Suddenly, folks who’ve not cared much about it for decades are yelling for its repeal. As many have expressed, it would be better to read up on Section 230 before yelling about it (Here’s my background piece on it). But in short, repealing it wholesale would indeed threaten the entire functioning of the digital economy. Section 230 was designed to do exactly what I am hinting at here — to give tech firms the ability to remove illegal or offensive content without assuming business-crushing liability for everything they do. Again, it’s a law written from an age before Amazon existed, so it sure could use modernization by adults. But pointing to it as some untouchable foundational document, or throwing out the baby with the bathwater, are both the behavior of children, not adults. We’re going to have to make some things up as we go along.

Here’s the thing about “free speech” on platforms like Twitter and Facebook. As a private company, Twitter can do whatever it wants to do with the tool it makes. Forcing it to carry this or that post is a terrible idea. President Trump can stand in front of a TV camera, or buy his own TV station (as seems likely) and say whatever he wants. Suspending his account is not censorship. As I explain in my Section 230 post, however, even that line of logic is incomplete. Social media firms use public resources, and some are so dominant that they are akin to a public square. We just might need a new rule for this problem! I suspect the right rule isn’t telling Twitter what to post, but perhaps making sure there is more diversity in technology tools available to speakers.

But here’s the thing I find myself saying most often right now: The First Amendment guarantees your right to say (most) things; it doesn’t guarantee your right to algorithmic juice. I believe this is the main point of confusion we face right now, and one that really sends a lot of First Amendment thinking for a loop. Information finds you now. That information has been hand-picked to titillate you like crack cocaine, or like sex. The more extreme things people say, the more likely they are to land on your Facebook wall, or in your Twitter feed, or wherever you live. It’s one thing to give Harold the freedom to yell to his buddies at the bar about children locked in a basement by Democrats at a pizza place in Washington D.C. It’s quite another to give him priority access to millions of people using a tool designed to make them feel like they are viewing porn. That’s what some people are calling free speech right now. James Madison didn’t include a guaranteed right to “virality” in the Bill of Rights. No one guaranteed that Thomas Paine’s pamphlets were to be shoved under everyone’s doors, let alone right in front of their eyeballs at the very moment they were most likely to take up arms. We’re going to need new ways to think about this. In the algorithmic world, the beer-popping line, “The solution to hate speech is more speech,” just doesn’t cut it.

I’m less interested in the Trump ban than I am in the ban of services like Parler. Trump will have no trouble speaking; but what of everyday conservatives who are now wondering if tech is out to get them? If you are liberal: Imagine if some future government decides Twitter is a den of illegal activity and shuts it down. That’s an uncomfortable intellectual exercise, and one we shouldn’t dismiss out of hand.

Parler is a scary place. Before anyone has anything to say about its right to exist, I think you really should spend some time there, or at least read my story about the lawsuit filed by Christopher Krebs. Ask yourself this question: What percentage of a platform’s content needs to be death threats, or about organizing violence, before you’ll stop defending its right to exist? Let’s say we heard about a tool named “Anteroom” which ISIS cells used to radicalize young Muslims, spew horrible things about Americans, teach bomb-making, and organize efforts to…storm the Capitol building in D.C. Would you really be so upset if Apple decided not to host Anteroom on its App Store?

So, what do we do with Parler? Despite all this, I’m still uncomfortable with removing its access to network resources the way Amazon has. I think that feels more like smashing a printing press than forcing books into people’s living rooms. Amazon Web Services is much more like a public utility than Twitter is. The standard for removal there should be much higher, I think. And if it makes you uncomfortable that a private company made that decision, rather than some public body that is responsible to voting citizens, it should. At the same time, if you can think of no occasion for banning a service like Parler, then you just aren’t thinking.

These are complex issues and we will need our best, most creative legal minds to ponder them in the years to come. Here’s one: Matt Stoller, in his BIG newsletter about tech monopolies, offers a thoughtful framework for dealing with this problem. I’d like to hear yours. But don’t hit me with beer-popping lines or stuff you remember from Philosophy 101. This is no time for academic smugness. We have real problems down here on planet Earth.

Should conservative social network Parler be removed from AWS and the Google and Apple app stores? This is an interesting question, because Parler is where some of the organizing for the Capitol Hill riot took place. Amazon just removed Parler from its cloud infrastructure, and Google and Apple removed Parler from their app stores. Removing the product may seem necessary to save lives, but having these tech oligarchs do it seems like a dangerous overreach of private authority. So what to do?

My view is that what Parler is doing should be illegal, because it should be responsible on product liability terms for the known outcomes of its product, aka violence. This is exactly what I wrote when I discussed the problem of platforms like Grindr and Facebook fostering harm to users. But what Parler is doing is *not* illegal, because Section 230 means it has no obligation for what its product does. So we’re dealing with a legal product and there are no legitimate grounds to remove it from key public infrastructure. Similarly, what these platforms did in removing Parler should be illegal, because they should have a public obligation to carry all customers engaging in legal activity on equal terms. But it’s not illegal, because there is no such obligation. These are private entities operating public rights of way, but they are not regulated as such.

In other words, we have what should be an illegal product barred in a way that should also be illegal, a sort of ‘two wrongs make a right’ situation. I say ‘sort of’ because letting this situation fester without righting our public policy will lead to authoritarianism and censorship. What we should have is the legal obligation for these platforms to carry all legal content, and a legal framework that makes business models fostering violence illegal. That way, we can make public choices about public problems, and political violence organized on public rights-of-way certainly is a public problem.

 

Rethinking Firewalls: Security and Agility for the Modern Enterprise

The purpose of the research sponsored by Guardicore is to learn how enterprises perceive their legacy firewalls within their security ecosystems on premises and in the cloud. Ponemon Institute surveyed 603 IT and IT security practitioners in the United States who are decision makers or influencers of technology decisions to protect enterprises’ critical applications and data. Respondents also are involved at different levels in purchasing or using the firewall technologies.

Legacy firewalls are ineffective in securing applications and data in the data center. Respondents were asked how effective their legacy firewalls are on a scale from 1 = not effective to 10 = very effective. Figure 1 shows the 7+ responses. According to the Figure, only 33 percent of respondents say their organizations are very or highly effective in securing applications and data in the data center. Legacy firewalls are also mostly ineffective at preventing a ransomware attack. Only 36 percent of respondents say their organizations are highly effective in preventing such an attack.

The findings of the report show the number one concern of firewall buyers is whether they can actually get next-gen firewalls to work in their environments. As organizations move into the cloud, legacy firewalls do not have the scalability, flexibility or reliability to secure these environments, driving up costs while failing to reduce the attack surface. As a result, organizations are reaching the conclusion that firewalls are simply not worth the time and effort and are actually negatively impacting digital transformation initiatives. This is driving a move toward modern security solutions such as micro-segmentation, which can more effectively enforce security at the edge.
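
As an illustration of the micro-segmentation model mentioned above, here is a minimal sketch of default-deny, allow-list enforcement between application tiers; the tier names, ports, and rules are hypothetical.

```python
# Hypothetical micro-segmentation policy: east-west traffic is denied by
# default, and only explicitly allow-listed (source tier, destination tier,
# port) combinations are permitted -- the opposite of a flat network behind
# a single perimeter firewall.

ALLOW_RULES = {
    ("web", "app", 8443),   # web tier may call the app tier over TLS
    ("app", "db", 5432),    # app tier may reach the database
}

def is_allowed(src_tier, dst_tier, port):
    """Default deny: permit only traffic matching an explicit rule."""
    return (src_tier, dst_tier, port) in ALLOW_RULES

assert is_allowed("web", "app", 8443)
assert not is_allowed("web", "db", 5432)   # lateral-movement path is blocked
```

The point is the inversion of the legacy model: instead of a perimeter device permitting broad internal traffic, nothing moves laterally unless a rule explicitly allows it.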

Following are research findings that reveal why legacy firewalls are ineffective.

Legacy firewalls are ineffective in preventing cyberattacks against applications. Only 37 percent of respondents say their organizations’ legacy firewalls’ ability to prevent cyberattacks against critical business and cloud-based applications is high or very high.

Organizations are vulnerable to a data breach. Only 39 percent of respondents say they are confident their organization can contain a breach of its data center perimeter.

Legacy firewalls do not protect against ransomware attacks. Only 36 percent of respondents say their legacy firewalls are highly effective at preventing a ransomware attack. Only 33 percent of respondents say their organizations are very or highly effective in securing applications and data in the data center.

Legacy firewalls are failing to enable Zero Trust across the enterprise. Only 37 percent of respondents rate their organizations’ legacy firewalls as very or highly effective at enabling Zero Trust across the enterprise.

Legacy firewalls are ineffective in securing applications and data in the cloud. Sixty-four percent of respondents say cloud security is essential (34 percent) or very important (30 percent). However, only 39 percent of respondents say their legacy firewalls are very or highly effective in securing applications and data in the cloud.

Legacy firewalls kill flexibility and speed. Organizations are at risk because of the lack of flexibility and speed in making changes to legacy firewalls. On average, it takes three weeks to more than a month to update legacy firewall rules to accommodate an update or a new application. Only 37 percent of respondents say their organizations are very flexible in making changes to their networks or applications, and only 24 percent of respondents say their organizations have a high ability to quickly secure new applications or change security configurations for existing applications.

Legacy firewalls limit access control and are costly to implement. Sixty-two percent of respondents say access control policies are not granular enough and almost half (48 percent of respondents) say legacy firewalls take too long to implement.

The majority of organizations in this study are ready to get rid of their legacy firewalls because of their ineffectiveness. Fifty-three percent of respondents say their organizations are ready to purchase an alternative or complementary solution. The two top reasons are the desire to have a single security solution for on-premises and cloud data center security (44 percent of respondents) and to improve their ability to reduce lateral movement and secure access to critical data (31 percent of respondents).

Firewall labor and other costs are too high. Fifty-one percent of respondents say their organizations are considering a reduction in their firewall footprint. The leading reason, cited by 60 percent of respondents, is that labor and other costs are too high. In addition, 52 percent of respondents say it is because current firewalls do not provide adequate security for internal data center east-west traffic.

“The findings of the report reflect what many CISOs and security professionals already know – digital transformation has rendered the legacy firewall obsolete,” said Pavel Gurvich, co-founder and CEO, Guardicore. “As organizations adopt cloud, IoT, and DevOps to become more agile, antiquated network security solutions are not only ineffective at stopping attacks on these properties, but actually hinder the desired flexibility and speed they are hoping to attain.”

To read a full copy of the report, please visit Guardicore’s website.

 

The Need to Close the Cultural Divide between Application Security and Developers

A security risk that many organizations are not dealing with is the cultural divide between application security and developers. In this research sponsored by ZeroNorth, we refer to the cultural divide as when AppSec and developers lack a common vision for delivering software capabilities required by the business—securely. As a result, AppSec and developers are less likely to work effectively as a team and achieve the goals of building and delivering code in a timely manner with security integrated throughout the application development process.

Ponemon Institute surveyed 581 security practitioners who are involved in and knowledgeable about their organization’s software application security activities and 549 who are involved in and knowledgeable about their organization’s software application development process.

Following are findings that reveal why the cultural divide exists and its effect on the security of applications.

  • Who is responsible for the security of applications? Developer and AppSec respondents don’t agree on which function is ultimately responsible for the security of applications. Only 39 percent of developer respondents say the security team is ultimately responsible for application security. In contrast, 67 percent of AppSec respondents say their teams are responsible. This lack of alignment demonstrates the potential for security to simply fall through the cracks if ownership is not clearly understood.
  • AppSec and developer respondents admit working together is difficult. AppSec respondents say it’s because the developers publish code with known vulnerabilities. They also believe developers will accept flaws if they believe the application will be a big seller. Developers say security does not understand the pressure they have to meet their deadlines. Developers also believe working with the AppSec team stifles their ability to innovate. It’s clear that today, priorities, goals and objectives across these two teams are not aligned and this disconnect drives a wedge between the functions.
  • Now more than ever AppSec and developers need to work as a team. Digital transformation is putting pressure on organizations to develop applications at increasing speeds, potentially putting their security at risk. Sixty-five percent of developer respondents say they feel the pressure to develop applications faster than before digital transformation. Fifty percent of AppSec respondents agree.
  • AppSec respondents see serious problems with application security practices in their organization. Seventy-one percent of AppSec respondents say the state of security is undermined by developers who don’t care about the need to secure applications early in the SDLC. Sixty-nine percent of AppSec respondents say developers do not have visibility into the overall state of application security. As evidence of the tension between security and developers, 53 percent of AppSec respondents say developers view security as a hindrance to releasing new applications. Here again, competing priorities—speed for developers, security for AppSec—are often at odds.
  • Security respondents and developers disagree on whether the application security risk is increasing. Only 35 percent of developer respondents say application security risk in the organization is significantly increasing or increasing. In contrast, 60 percent of AppSec respondents say application security risk is increasing. This raises a question: which teams have clear visibility into the security posture of an application throughout its lifecycle?

Conclusion
As shown in this research, technology alone cannot bridge the cultural divide. Rather, senior leadership needs to understand the serious risks to business-critical applications, given that both AppSec and developer respondents in this research admit that working together is very difficult. A first step to closing the cultural divide is for senior leadership to create a culture that encourages teamwork, collaboration and accountability.

Please download the full report at the ZeroNorth website.

What’s it really like to negotiate with ransomware gangs?

Bob Sullivan

It might be the worst-kept secret in all of cybersecurity: the FBI says don’t pay ransomware gangs. But corporations do it all the time, sending millions every year in Bitcoin to recover data that’s been taken “hostage.” Sometimes, federal agents even help victims find experienced virtual ransom negotiators.

That’s what Art Ehuan does.   During a career that has spanned the FBI, the U.S. Air Force, Cisco, USAA, and now the Crypsis Group, he’s found himself on the other side of numerous tricky negotiations.

And he’s only getting busier. According to Sophos, roughly half of U.S. corporations report being attacked by ransomware last year. The gangs are becoming more organized, and the attacks are getting more vicious. The days when victims could simply pay ransom for an encryption key, unscramble their data, and move on are ending. Now that some companies have managed to avoid paying ransom by restoring from backup, the gangs have upped their game. Their new trick is to extract precious company data before encrypting it, so the attacks pack a one-two punch — they threaten embarrassing data breaches on top of crippling data destruction.

Ransomware gangs also attack companies when they are at their most vulnerable  — during Covid-19, they have stepped up their attacks on health care firms, for example, adding a real life-or-death component to an already stressful situation.  By the time Ehuan gets involved, victims just want to put their computers and their lives back together as quickly as possible.  That often means engaging the gang that’s involved, reaching a compromise, making a payment, and trusting the promise of a criminal.

It can sound strange, but during a recent lecture at Duke University, Ehuan said there were “good” cybercriminals — gangs that have a reputation for keeping those promises. After all, it’s their business. If they were to take the Bitcoin and run, security firms would stop making payments.  On the other hand, you can’t trust every criminal — only the “good” ones.

This is the murky world where Ehuan works. During his lecture, Ehuan talked in broad strokes about the major issues facing companies trying to stay safe in an increasingly dangerous digital world.

After the lecture, I asked him to share more about what it’s like to deal with a ransomware gang (as part of my new In Conversation at Duke University series — read the series here). Who makes the first move? Are you sending emails? Talking on the phone? How do you know which criminals to “trust?” How do you gain their trust? Do they ever accuse you of being law enforcement?  Here’s his response:

Art Ehuan

“When the malware is deployed there is also information provided on how to contact (the crime gang) to pay the fee that they are looking for and receive the key to unencrypt the data.

“Our firm, and others like it, will then have a discussion with the client and counsel to decide if they will pay and how much they are willing to pay. Once authorized by counsel/client, contact is made with the TA (gang) on the dark web to advise them that systems are impacted and we would like to discuss getting our data back, or data not being released to public sites, etc. We provide them with a known encrypted file to make sure they are able to unencrypt and provide us back the known file to ensure they actually have the decryptor. We have a discussion with the TA over the dark web to lower the price due to funds the client has available, etc.

“There is good success in negotiating a fee lower than what was initially asked by these groups. Once the fee is agreed and payment made, more often than not by bitcoin, the TA sends the decryptor, which is then tested in an isolated environment to make sure that it does what it is supposed to do and does not potentially introduce other malware into the environment. Once evaluated, it is provided to the client for decryption of their data. If the negotiation is for them not to release the data, they will provide proof of the files being deleted on their end (we have to take their word for it that they haven’t kept other copies). Sometimes this takes several days due to the time difference between the U.S. and Eastern Europe when communicating.

“Even with the decryptor, unencrypting the data is a painful and costly experience for a company.  My continuous message to clients is to secure and segment their infrastructure so these attacks are not as successful. That is cheaper than the response efforts that occur with a breach.

“Hopefully, this provides, at a high level, the process that is taking place.”

(Read the full conversation here.) 

 

A tale of two security operations centers – 78% say the job is ‘very painful’

The 2020 Devo SOC Performance Report tells a tale of two SOCs. Based on the results of an independent survey of IT and IT security practitioners, the second annual report looks at the latest trends in security operations centers (SOCs), both positive and negative. The report presents an unvarnished view of the current state of SOC performance and effectiveness based on responses from people with first-hand knowledge of SOC operations, identifies areas of change from the prior year’s survey, and highlights the challenges that continue to hinder many SOCs from achieving their performance goals.

Devo commissioned the Ponemon Institute to conduct a comprehensive, independent survey in March and April 2020 of professionals working in IT and security. The survey posed a broad range of questions designed to elicit insights into several key aspects of SOC operations, including:

  • The perceived value of SOCs to organizations
  • Areas of effectiveness and ineffectiveness
  • The ongoing challenge of SOC analyst burnout, its causes, and effects

The picture painted by the data from nearly 600 respondents shows that while some aspects of SOC performance show modest year-over-year improvement, major problems persist that continue to adversely affect organizational cybersecurity efforts and the well-being of SOC analysts.

A Tale of Two SOCs
Overall, the survey results tell a tale of two SOCs. One is a group of high-performing SOCs that are, for the most part, doing reasonably well in delivering business value. This group generally enjoys sufficient talent, tools, and technology to have a fighting chance of overcoming the relentless challenges that commonly afflict many SOCs.

Sharply contrasting with the high performers are the low-performing SOCs. This group struggles greatly because they are unable to overcome the myriad problems hindering their ability to deliver better performance. These SOCs generally lack the people, technology, and budget resources to overcome these challenges, resulting in them sinking even lower in effectiveness, putting their organizations at ever-greater risk of cyberattacks.

This report examines the specific areas where high- and low-performing SOCs most diverge, while also shining a light on the challenges with which both groups struggle. By identifying the differences and similarities between the two classes of SOCs, it illuminates the variable return on investment these SOCs are delivering to their organizations.

The Good(-ish) News
Before delving into the most significant—and in many cases, disturbing—findings from the survey, let’s start by looking at how organizations rate the value their SOC provides. This year, 72% of respondents said the SOC is a key component of their cybersecurity strategy. That’s up from 67% in 2019. This increase reflects more respondents feeling their SOC plays an important role in helping the organization understand the external threat landscape.

Other findings with a somewhat positive take on SOC performance include:

There is an eight-percentage-point increase among respondents who say their SOC is highly effective in gathering evidence, investigating, and identifying the source of threats. So far, so good. However, when you realize that last year only 42% of respondents felt that way, this year’s “jump” to 50% means that half of those surveyed still don’t believe their SOC is performing particularly well.

Respondents see improvements in their SOC’s ability to mitigate risks. This is another example of good news/bad news. Last year only 40% of respondents felt their SOC was doing a good job reducing risks. In 2020, a still-modest 51% say their SOC is getting the job done in this area. That’s a nice increase, but it still means that almost half of all respondents find their SOC lacking in this important capability.

Contributing to this rise, more SOCs (50%, up from 42% in 2019) are providing incident-response capabilities including attack mitigation and forensic services. The brightest spot in this aspect of SOC performance is that in 2020, 63% of respondents say SOCs are helpful in understanding the external threat environment by collecting and analyzing information on attackers and their tactics, techniques, and procedures (TTPs), up from 56% last year.

There was a slight bump in the alignment between the SOC and the objectives and needs of the business. This year 55% of respondents say their SOCs are fully aligned (21%) or partially aligned (34%), a slight increase from 51% in 2019. One possible reason for the improved alignment is that more lines of business are leading the SOC team (27% this year vs. 18% in 2019). But that practice also could be contributing to the rise in turf battles and silo issues. More on that later.

Organizations are investing in SOC technologies. Seventy percent of respondents say it is very likely (34%) or likely (36%) that their organization will open up their wallets to introduce new tools designed to improve SOC operations.

The SOC forecast is cloudy. A majority of organizations, 60%, now operate their SOC mostly (34%) or partly (26%) in the cloud. In 2019, only 53% of organizations identified as mostly cloud (29%) or operating a hybrid environment (24%). SOCs with limited cloud presence are declining, with only 40% of organizations identifying as mostly on-premises, down from 47% in 2019. This trend toward cloud-based SOC operations mirrors the broader shift of IT and other business operations toward the scale and cost benefits of cloud deployments.

The Really-Not-So-Good News

The first Devo SOC Performance Report in 2019 showed that the issue of analyst turnover due to stress-related burnout was significant. Unfortunately, it’s become an even bigger problem in 2020:

  • 78% say working in the SOC is very painful, up from 70% last year
  • An increased workload is the number-one reason for burnout according to 75% of respondents, up from 73%
  • Information overload is an even bigger problem this year (67%) than in 2019 (62%)
  • And 53% say “complexity and chaos” in the SOC is a major pain point, up from 49%

For all of these reasons, and many more as you’ll see, organizations must find ways to reduce the stress of working in the SOC—now.

Respondents are concerned that frustrated, stressed, and burnt-out analysts will vote with their feet and quit their jobs. An appalling 60% say the stress of working in the SOC has caused them to consider changing careers or leaving their jobs. Even worse, 69% of respondents say it is very likely or likely that experienced security analysts would quit the SOC, more discouraging than the 66% who felt that way last year.

Turf tussles and silo skirmishes are killing SOC effectiveness. This is another problem that’s getting worse. This year, 64% of respondents say these internal battles over who is in charge of what are a huge obstacle to their SOC’s success, a disheartening increase from 57% in 2019. Twenty-seven percent of respondents say lines of business are in charge of the SOC, up from 18% in 2019, while 17% say no single function has clear authority and accountability for the SOC. And it’s not a stretch to connect the dots: an organization infected with in-fighting among its technology teams is likely to be more vulnerable to the potentially devastating effects of a successful cyberattack.

Budgets are not adequate to support a more effective SOC. SOC budgets increased slightly year over year, but not enough to close the gaps in effectiveness and performance. The average annual cybersecurity budget for the survey respondents’ organizations rose to $31 million this year, a slight bump from $26 million. The average funding allocation for the SOC is 32% of the total cybersecurity budget or $9.9 million, a slight increase from 30% or $7.8 million in 2019. These figures are heading in the right direction, but they’re still insufficient to fund the important work of an effective SOC team.
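For readers who want to see how those budget numbers fit together, here is a minimal sketch in Python that reconciles the figures cited above. The dollar amounts and percentages are taken from the survey as reported; the rounding and the year-over-year growth calculation are mine.

    # Rough reconciliation of the SOC budget figures reported in the survey.
    # Dollar amounts and percentages come from the report; small differences
    # are due to rounding.
    cyber_budget_2019 = 26_000_000   # average annual cybersecurity budget, 2019
    cyber_budget_2020 = 31_000_000   # average annual cybersecurity budget, 2020
    soc_share_2019 = 0.30            # share of that budget allocated to the SOC, 2019
    soc_share_2020 = 0.32            # share of that budget allocated to the SOC, 2020

    soc_budget_2019 = cyber_budget_2019 * soc_share_2019   # ~$7.8 million
    soc_budget_2020 = cyber_budget_2020 * soc_share_2020   # ~$9.9 million
    growth = (soc_budget_2020 - soc_budget_2019) / soc_budget_2019

    print(f"SOC budget 2019: ${soc_budget_2019 / 1e6:.1f}M")
    print(f"SOC budget 2020: ${soc_budget_2020 / 1e6:.1f}M")
    print(f"Year-over-year increase: {growth:.0%}")

Worked through this way, the SOC allocation grew roughly 27% in absolute dollars, which squares with the report’s conclusion: a step in the right direction, but still modest relative to the work expected of the SOC.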

You can’t stop what you can’t see. SOC teams are handcuffed by limited visibility into the attack surface, which 65% of respondents cite as one of the primary causes of SOC analyst pain.

The mean time to resolution remains unacceptably high. MTTR is one of the benchmark metrics for SOC performance, and the survey responses show it is another significant problem area. According to 39% of respondents, MTTR can take months or even… years! Fewer than a quarter of respondents (24%) say their SOC can resolve security incidents within hours or days. Compare these unsettling metrics with the industry estimate that skilled hackers need less than 19 minutes to move laterally after compromising the first machine in an organization. This points to a significant gap for the vast majority of SOCs: only 8% report an estimated MTTR of “within hours,” even worse than the 9% of organizations that said so in 2019.

Is it time for the rise of the machines? It’s obvious from these survey results that the trend of SOC analyst stress, burnout, and turnover is getting worse. The question is: what can organizations do to turn the tide? Well, if you listen to 71% of those surveyed, a big step in the right direction would be to introduce automation to the analyst workflow, and 63% say implementing advanced analytics/machine learning would help. Respondents feel organizations should invest in technologies that reduce analyst workloads; they rank automation and machine learning as even more important than a normalized work schedule in reducing SOC pain. The idea is to automate many of the repetitive, pressure-packed tasks typically performed by Tier-1 analysts, who often have had enough of SOC work before they ever make it to Tier-2.

READ THE FULL REPORT AT DEVO’S WEBSITE.

 

I was asked to help both the defense and the prosecution at Father of ID theft’s sentencing

This is quite a business card. One of my prized possessions from James’ better days. ‘Because there should be only one you.’

Bob Sullivan

I ended up in a press release issued by the Department of Justice last week — I believe that was a first for me. Fortunately, I was not the subject of the release. My book, Your Evil Twin, was used by prosecutors to help put a notorious identity thief behind bars for 17 years. That criminal was James Rinaldo Jackson, whom I had named “The Father of Identity Theft” in my book almost 20 years ago.

Thus ended — for now — a crazy episode in my life that involved an old prison pen pal and a federal case in which I was asked to help both the prosecution and the defendant.

Most recently, James had lit fires in his house and kept a woman and her three children hostage while trying to destroy evidence after police surrounded his place … and soon after, he tried to use a dead man’s identity to buy a Corvette.

To James, that was a pretty typical Tuesday.

James’ story is so convoluted, episodic, tragic, and amorphous that I can only hope to offer you a glimpse in this email. I’m hard at work looking for broader ways to tell this crazy story. While he’s now going to be in a federal prison for 207 months, likely the rest of his life (he’s 58), I can’t help thinking his story isn’t really over.

I hadn’t thought about James for nearly a decade when I received an email from the DOJ about his case last December. James had been in and out of jail and managed to squirt back out into public life again and again. This time, DOJ wanted to throw the book at him — MY book — and a federal agent wanted to know if I had any additional evidence I could share.

I had spent a couple of years writing letters back and forth to James when he was in jail for previous crimes. In thousands of single-spaced, typed pages, he had disclosed amazing “how I did it” details about his early days committing insurance fraud, and then trail-blazing in ID theft.

Like all journalists, I was in a strange spot. Generally, reporters don’t share information with prosecutors unless compelled to do so by a court. On the other hand, it really is best for James and the rest of the world that he be protected from society and vice versa.

While I pondered the situation and made plans to cover his sentencing hearing in Tennessee, I was contacted by James’ court-appointed defense attorney. James had told his legal team that I could be a character witness for him at sentencing. His letters to me were always framed as an effort to warn the world about a coming wave of ID theft — and he was right about that. He thought perhaps I could help the judge go easy on him.

I called journalism ethics experts to discuss my next steps and stalled. Then, Covid hit. James’ sentencing was repeatedly delayed. I suspected he might somehow get out of a long jail sentence. But last week, he was put away for a long time without my involvement.

“Aggravated identity theft and schemes to defraud or compromise the personal and financial security of vulnerable and deceased victims will not be tolerated,” said U.S. Attorney D. Michael Dunavant in the press release. “This self-proclaimed ‘Father of Identity Theft’ will now have to change his name to ‘Father Time’, because he will be doing plenty of it in federal prison.”

James has done some very bad things, and hurt a lot of people. Still, I felt a strange sadness. I thought about all the opportunities he had to set his life straight; all the second chances wasted. He just couldn’t NOT be a criminal. I’ve met other criminals like this in my life. One rather pointedly told me, “I just get too comfortable.” For some people, it seems, comfort is intolerable.

If you could indulge me for a bit, let me go back in time, to when I was first contacted by prosecutors about James:


When a federal prosecutor sends you an email with a subject line that’s the title of a book you wrote almost 20 years ago, you call immediately.

“This is probably the least surprising call you’ve ever received, but James is in trouble again,” the prosecutor said to me.

James Jackson had recently tried to burn his house down with his female friend and her kids held hostage inside, sort of. Soon after, he was arrested at a car dealership while trying to buy a car with a stolen identity. A Corvette. James doesn’t do anything small.

The prosecutor had found my book when agents executed a search of James’ home and his life. Of course he did.

Two decades ago, James Rinaldo Jackson, the man often credited with ‘inventing’ identity theft, was my prison pen pal. I was a cub technology reporter at MSNBC, and I had latched onto a new, scary kind of financial fraud. It was so new, the term identity theft hadn’t really been coined yet. James was my teacher. We spent years corresponding; I often received two, three, even four missives a week. He’s hopelessly OCD, and the letters were often dozens of pages, single-spaced, impeccably typed. Slowly but surely, hidden inside pages of rambling prose, James unraveled for me all the tricks he used to steal millions of dollars, to amass a fleet of luxury cars, and to impersonate a Who’s Who of famous CEOs and Hollywood personalities of the 1990s, often armed only with a payphone and some coins.

At one point, James stole Warner Bros. CEO Terry Semel’s identity, then sent the evidence to Semel’s home via FedEx in the form of a movie pitch: “Really, sir, it would be an important film. People are at great risk,” he wrote. For good measure, he included evidence of stolen IDs from famous actors he hoped would star in the movie.

James Jackson’s misadventures became the core of my book about identity theft, Your Evil Twin, published in 2003. In it, I dubbed James “The Father of Identity Theft.” The name stuck.

Years later, James served his debt to society, got out, and we finally met. He was beyond charming, and I liked him. It was easy to see why people would give him hundreds of thousands of dollars. He became mini-famous for a while, starring in infomercials about tech crimes. The last time I saw him, we spoke on a panel together at a bank fraud conference in New York. I remember riding up a glorious escalator with him in the heart of Times Square as he beamed that someone else was paying for his $400-a-night hotel room. James could easily have become another Frank Abagnale, the real-life criminal-turned-hero from the film Catch Me If You Can, who now nets five-figure payouts for speeches.

Instead, he couldn’t even be James Jackson.

James’ insatiable desire to be someone more important than himself, not to mention his desire for Corvettes, couldn’t be tamed. James took that escalator and just kept going down, so low that he eventually found himself once again surrounded by police. A new fleet of luxury cars had attracted law enforcement attention. It’s crazy, but when I heard about the fire, I was sad for James. I know what happened. He panicked and started lighting computers and paperwork on fire, hoping to destroy the evidence.

To James, crime was always just a game. He never “hurt” anyone; he just talked his way into the nicer things in life. In fact, he usually targeted people who’d recently died (stealing their money is rather trivial), so where’s the victim? The way he flaunted the proceeds, it was also obvious to me that James was always desperate to be caught. Who tries to buy a Corvette while on the run for trying to burn down his own house with his family inside?

The prosecutor called me for help putting James behind bars for good. He wanted more evidence that would convince a judge and jury that James is beyond reform, that 20 years ago James had told me things that still possess him today. And I have (had? It was a long time ago) mountains of letters that might sound like confessions to a jury today.

This is, to put it mildly, an unusual request. Journalists don’t share information with prosecutors. But then, James is a most unusual case. It would be good for James, and for the rest of the world, for him to be kept away from telephones and computers forever. But that’s not my job. So, now what?

An excerpt of my book detailing James Jackson’s original crimes, originally published by NBC News, can be found here:

http://www.nbcnews.com/id/5763781/ns/technology_and_science-security/t/id-thief-stars-tells-all/#.XXfrUyhKjIU

Eventually the NYTimes covered some of his story:

https://www.nytimes.com/2002/05/29/nyregion/identity-theft-and-these-were-big-identities.html

Here is James on an infomercial, about as close as he ever came to straightening out his life:

Here’s a local story about the more recent fire at James’ house and his arrest:

https://wreg.com/2015/02/28/man-accused-of-stealing-from-the-dead-abusing-children/

And here’s a local story about his sentencing, with plenty of details from Your Evil Twin:

https://www.charlotteobserver.com/news/nation-world/national/article246155270.html