Separating the truths from the myths in cybersecurity

Larry Ponemon


Ponemon Institute, with sponsorship from BMC, conducted the study on Separating the Truths from the Myths in Cybersecurity to better understand the security myths that can be barriers to a more effective IT security function and to determine the truths that should be considered important for the overall security posture. In the context of this survey, cybersecurity truths are based on the actual experience of participants in this research. In contrast, cybersecurity myths are based on their perceptions, beliefs and gut feel.

More than 1,300 IT and IT security professionals in North America (NA), United Kingdom (UK) and EMEA who have various roles in IT operations and security were surveyed. All respondents are knowledgeable about their organizations’ IT security strategies.

Separating the truths from the myths in cybersecurity

Following are statements about cybersecurity technologies, personnel and governance practices. Participants in this research were asked whether these statements are truthful or based solely on conjecture or gut feel (i.e. myth). Specifically, respondents rated each statement on a five-point scale from -2 = absolute myth, -1 = mostly myth, 0 = can’t be determined, +1 = mostly truth and +2 = absolute truth. The number shown next to each statement represents the average index value compiled from all responses in this study. As can be seen, not all myths and truths are equal; scores range from -1.04 to +0.78.

Drawing upon nonparametric statistical methods, we separated the statements with a statistically significant positive value (i.e. truth) from those with a statistically significant negative value (i.e. myth).
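The report doesn’t name the exact nonparametric test used, but one simple choice for this kind of classification is a sign test on each statement’s ratings: count how many respondents scored it above versus below zero and ask whether that split could plausibly arise by chance. The ratings below are invented for illustration; only the -2 to +2 scale comes from the study.

```python
from math import comb

def sign_test_p(ratings):
    """Two-sided sign test: are ratings significantly above or below 0?
    Zeros ("can't be determined") are dropped, per the usual convention."""
    pos = sum(1 for r in ratings if r > 0)
    neg = sum(1 for r in ratings if r < 0)
    n = pos + neg
    k = max(pos, neg)
    # P(X >= k) under Binomial(n, 0.5), doubled for a two-sided test
    p = 2 * sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(p, 1.0)

def classify(ratings, alpha=0.05):
    mean = sum(ratings) / len(ratings)
    if sign_test_p(ratings) < alpha:
        return "truth" if mean > 0 else "myth"
    return "undetermined"

# Hypothetical ratings for one statement on the -2..+2 scale
ratings = [2, 1, 1, 0, 1, -1, 1, 2, 1, 1, 0, 1, 1, -1, 1, 1, 2, 1, 1, 1]
print(classify(ratings))  # prints "truth"
```

A Wilcoxon signed-rank test would also use the magnitudes of the ratings; the sign test is simply the easiest variant to show with no dependencies.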

Truth – The test statistic confirms the following statements are mostly believed to be fact

 

  1. There is a skills gap in the IT security field. +0.78
  2. Security patches can cause a greater risk of instability than the risk of a data breach. +0.52
  3. The cloud is cost effective because it is easier and faster to deploy new software and applications than on-premises. +0.52
  4. Greater visibility into all applications, data and devices and how they are connected lowers an organization’s security risk. +0.45
  5. Malicious or criminal attacks are the root cause of most data breaches. +0.42
  6. A strong security posture enables companies to innovate and take risks that can lead to greater profitability. +0.33
  7. IT security and IT operations work closely to make sure resolution and remediation of security problems are completed successfully. +0.22
  8. Many organizations are suffering from investments in disjointed, non-integrated security products that increase cost and complexity. +0.09

 

Myth – The test statistic confirms the following statements are mostly a myth

 

  1. Too much security diminishes productivity. -1.04
  2. A strong security posture does not affect consumer trust. (In other words, a strong security posture is considered beneficial to improving consumers’ trust in the organization.) -0.87
  3. Automation is going to reduce the need for IT security expertise. -0.55
  4. Artificial intelligence and machine learning will reduce the need for IT security expertise. -0.50
  5. It is difficult or impossible to allocate the time and resources to patching vulnerabilities because it leads to costly business disruptions and downtime. -0.41
  6. Insider threats are costlier to detect and contain than external attacks. -0.27
  7. Nation state attacks are mainly a threat for government organizations. -0.24
  8. Security intelligence tools provide too much information to be effective in investigating threats. -0.21

Discussion — the state of cybersecurity 

Senior management believes in the importance of the IT security function. Sixty-one percent of respondents say their senior management does not view IT security as a strictly tactical activity, a perception that would diminish its importance in their eyes. Respondents concur that IT security in their organization is considered a strategic imperative.

Companies face a shortage of skilled and competent in-house staff. According to another Ponemon Institute study,[1] 70 percent of chief information security officers and other IT security professionals surveyed say a lack of competent in-house staff is what they worry about most when trying to defend their companies against cyberattacks. Further, 65 percent of these respondents say the top reason they are likely to have a data breach is because they have inadequate in-house expertise.

Are tensions between the IT and IT security function diminishing the security of organizations? Fifty-six percent of respondents agree that there is tension between IT security and IT operations because of a lack of alignment of their different priorities. Specifically, IT operations is more concerned with the organization’s business objectives and IT security is focused on securing the enterprise from cybersecurity threats.

However, many respondents believe that despite this tension, IT security and IT operations work closely to make sure resolution and remediation of security problems are completed successfully. Collaboration between these two groups can be improved through the use of tools that bring these two functions closer together and foster teamwork which will benefit the organization as a whole.

Investments in security technologies should be aligned with the overall IT strategy and not lead to complexity. While the priorities of IT security and IT operations are often not in alignment, investments in technologies are consistent with their organizations’ overall IT strategy, according to 60 percent of respondents. However, respondents believe many organizations are suffering from investments in disjointed, non-integrated security products that increase cost and complexity.

Technology investments are often motivated by well-publicized data breaches.  Fifty percent of respondents say data breaches that are widely reported in media can influence the decisions to purchase security technologies. While companies may purchase cyber insurance to manage the financial consequences of a data breach, only 34 percent of respondents say such a policy would reduce their investments in security technologies.


Mark Zuckerberg is the world’s front-page editor now. That’s the real problem

Bob Sullivan

Mark Zuckerberg never set out to be the world’s editor in chief, but here we are.  And sorry Mark, you are a terrible front page editor.

Hearings in Congress today dug into the weeds of why Americans feel like social media is letting them down — it was a ready-made tool for Russian election interference; it’s now silencing some voices based on vague criteria, and so on. But these aren’t THE problem. They are just symptoms.

Two-thirds of Americans get their news from social media today. Most from their Facebook wall. That’s a very, very small window through which to see the world. Worse yet, most of them don’t know how social media really works. Pew just released a study showing a majority have no idea how stories are selected for Facebook’s news feed, and don’t believe they have any influence over what appears there.

That’s THE problem.

Fairly recently, a consumer reading a newspaper who didn’t like what was on the front page could do something simple, but now seems revolutionary — she could turn the page.  Over and over.  And within 10 minutes or so, she’d be exposed to hundreds of stories, neatly organized in sections.  If she were really smart, she might do this with three or four papers. More to the point, she had a pretty good understanding of why those headlines and those stories appeared in those sections.

Today, we scroll. A supercomputer designed to hack our attention span optimizes that “front page” for “engagement,” with the goal of hypnotizing you into sticking around. There are no sections, no priorities. Only click-bait. And whatever Facebook has decided is important to the hypnotics that month (Live video! Puppies!). If a good story doesn’t click with the first few folks who see it, it’s dismissed into the long tail of Internet oblivion, destined to be a tree that’s fallen silently in an empty forest. This story, I’d think, will be a good candidate for that scrap heap.

I don’t begrudge that (ok, of course I do. Facebook’s algorithm changes have killed my website in recent months). But I found this piece of Pew’s most recent survey the most troubling: Facebook offers token tools for adjusting what’s on users’ front pages, but even these are rarely used. Fully two-thirds of users have never even tried to influence the content on their news feed. Of course, the older users get, the less likely they are to have taken an active step to change their feed, such as unfollowing groups or asking that certain friends be prioritized. (Please choose “see more” of me.)

In other words, news consumption in America is dangerously passive.  And Mark Z is the most powerful front page editor in history.

This is not what Facebook set out to do; I genuinely think many at the company are horrified by this state of affairs. I am one who believes it is an existential threat to the company — it’s very far from Mark’s core expertise. And users will eventually revolt. In a separate Pew survey, researchers found that 42% of users had taken some kind of Facebook break recently. And 26% said they had deleted the app from their phone. Those numbers seem awfully high to me, but you get the point. People sort of hate Facebook now for what it’s done to their lives. That’s not a great business model.

And it’s getting worse. As Facebook works frantically to save itself, and to defuse the bomb it’s been turned into, the news feed has shrunk. Puppy photos are back on top; interesting news stories (like this one!) are out. Users see an even smaller selection of “follows” when they look. You might have 500 friends, but only 25 of them appear in your feed, urban legends and empirical evidence tell us.

Why are we really here? Since the beginning of time, Facebook has refused to offer an unfiltered option that would simply list every post from every friend.  When a software maker invented a third-party app to make such a raw feed, Facebook forced it to shut down. Users would be overwhelmed by so many posts, the firm believes.  News feed must be edited.  And so, here we are.

Yes, in some ways, we did this to ourselves. Nothing stops Americans from visiting SeattleTimes.com on their own, instead of relying on the news feed (or Google News) for their headlines. Heaven forbid, we could actually subscribe to a newspaper, too. But, as I began this piece, here we are. The world’s most efficient tool for connecting human beings, one of the Internet’s original killer apps, has killed our curiosity. We’re devolving into digital-made tribes, only listening to the 25 or so people who make the front page of our lives.

As the saying goes, you made this mess, Mark. You have to clean it up.

The value of Artificial Intelligence in Cybersecurity

Larry Ponemon

Ponemon Institute is pleased to present The Value of Artificial Intelligence in Cybersecurity sponsored by IBM Security. The purpose of this research is to understand trends in the use of artificial intelligence and how to overcome barriers to full adoption.

Ponemon Institute surveyed 603 IT and IT security practitioners in US organizations that have either deployed or plan to deploy AI as part of their cybersecurity program or infrastructure. According to the findings, these participants strongly believe in the importance and value of AI but admit that being able to get the maximum value from technologies is a challenge.

The adoption of AI can have a very positive impact on an organization’s security posture and bottom line. The biggest benefit is the increase in speed of analyzing threats (69 percent of respondents) followed by an acceleration in the containment of infected endpoints/devices and hosts (64 percent of respondents). Because AI reduces the time to respond to cyber exploits, organizations can potentially save an average of more than $2.5 million in operating costs.

In addition to greater efficiencies in analyzing and containing threats, 60 percent of respondents say AI identifies application security vulnerabilities. In fact, 59 percent of respondents say that AI increases the effectiveness of their organizations’ application security activities.

To improve the effectiveness of AI technologies, organizations should focus on the following three activities.

 Attract and retain IT security practitioners with expertise in AI technologies. AI may improve productivity but it will increase the need for talented IT security personnel. Fifty-two percent of respondents say AI will increase the need for in-house expertise and dedicated headcount.

Simplify and streamline security architecture. While some complexity in an IT security architecture is expected in order to deal with the many threats facing organizations, too much complexity can impact the effectiveness of AI. Fifty-six percent of respondents say their organizations need to simplify and streamline security architecture to obtain maximum value from AI-based security technologies. Sixty-one percent say it is difficult to integrate AI-based security technologies with legacy systems.

Supplement IT security personnel with outside expertise. Fifty percent of respondents say it requires too much staff to implement and maintain AI-based technologies and 57 percent of respondents say outside expertise is necessary to maximize the value of AI-based security technologies.

The more the adoption of AI technologies matures, the more committed organizations become to investing in them.

In this research, 139 respondents of the total sample of 603 respondents self-reported that their organizations have either fully deployed AI (55) or partially deployed AI (84). We refer to these respondents as AI users. We conducted a deeper analysis of how these respondents perceive the benefits and value of AI. Following are some of the most interesting differences between AI users and the overall sample of respondents who are in the planning stages of their deployment of AI. 

  • AI users are more likely to appreciate the benefits of AI technology. Seventy-one percent of AI users vs. 60 percent of the overall sample say an important benefit is the ability of AI to deliver deeper security than if organizations relied exclusively on their IT security staff.
  • AI users are more likely to believe these technologies simplify the process of detecting and responding to application security threats. As a result, AI users are more committed to AI technologies.
  • While AI users are more likely to believe AI will increase the need for in-house expertise and dedicated headcount (60 percent of AI users vs. 52 percent in the overall sample), these respondents are more aware than the overall sample that AI benefits their organization because it increases the productivity of security personnel.
  • AI has reduced application security risk in organizations that have achieved greater deployment of these technologies. When asked about the effectiveness of AI in reducing application security risk, 69 percent of respondents say these technologies have significantly increased or increased the effectiveness of their application security activities vs. 59 percent of respondents in the overall sample who say their effectiveness increased in reducing application security risk.
  • AI technologies tend to decrease the complexity of organizations’ security architecture. Fifty-six percent of respondents in organizations that have more fully deployed AI report that, instead of adding complexity, AI actually decreases it. Only 24 percent of AI users say it increases complexity.
  • As the use of AI increases, IT security staff become more knowledgeable about where advanced technologies would be most beneficial. Fifty-six percent of AI users rate their organizations’ ability to accurately identify areas in their security infrastructure where AI and machine learning would create the most value as very high.
  • AI improves the ability to detect previously “undetectable” zero-day exploits. On average, AI users are able to detect 63 percent of previously “undetectable” zero-day exploits. In contrast, respondents in the overall sample say AI can increase detection by an average of 41 percent.

Download the entire report from IBM here. 

The newest, most devastating cyber-weapon: ‘patriotic trolls’

Bob Sullivan

Governments around the world are waging war on a new battleground: Social Media.  Their fighting force is an army of trolls. And if you are reading this story, you’ve probably been drafted.

Troll armies have helped overthrow governments and control populations. The playbook has been repeated in places like Turkey, India, and the Philippines. Once installed, trolls become engines of state propaganda, shouting down and crowding out voices of dissension.

While America is embroiled in an endless back-and-forth about Russian election meddling, this larger development has largely been missed: The 2016 election was just a data point in a much larger, more alarming trend. Trolling has become perhaps the most powerful weapon in 21st Century warfare.

If free speech has a weakness, this is it.  And it’s being used against democratic societies across the globe.

Sometimes called “patriotic trolling,” it’s a stunning reversal from the way dictatorial regimes used to handle the information superhighway — by shutting off the on ramps.  Increasingly, those in power are instead flooding the highway with misinformation, overwhelming it with noisy and malicious traffic.  It’s easier, and far cheaper, to control populations with a hashtag than the barrel of a gun.

The Great Firewall is being replaced by the Great Troll.

“States have realized that the internet offers new and innovative opportunities for propaganda dissemination that, if successful, obviate the need for censorship. This approach is one of ‘speech itself as a censorial weapon,’ ” write authors Carly Nyst and Nicholas Monaco in a chilling new report called “State-Sponsored Trolling: How Governments Are Deploying Disinformation as Part of Broader Digital Harassment Campaigns.”  The report was published by the Institute for the Future, which says it is a non-partisan research group based in Palo Alto, California.   “States are seizing upon declining public trust in traditional media outlets and the proliferation of new media sources and platforms to control information in new ways. States are using the same tools they once perceived as a threat to deploy information technology as a means for power consolidation and social control.”

What does state-sponsored trolling look like? Government officials and political leaders encourage personal attacks on opponents and civil rights groups. They sow seeds of disbelief around the work of traditional watchdogs, like judges and journalists. They encourage public vitriol and cynicism among citizens to shield themselves and their policies from traditional scrutiny and debate.

In some cases, professional trolls are hired to sow seeds of doubt and frustration. Other regimes sign up volunteers into an organized “cyber militia” to harass journalists and civil rights groups.  But in many cases, citizens are nudged to do the dirty work of trolls with little or no prompting from those in power.

You probably see evidence of this kind of behavior every day on your social media feeds; people lining up to lob personal attacks on those who disagree. That’s low-level trolling, however. The stakes get higher, fast.

Bloomberg recently investigated the phenomenon worldwide and came up with a long list of examples:

“In Venezuela, prospective trolls sign up for Twitter and Instagram accounts at government-sanctioned kiosks in town squares and are rewarded for their participation with access to scarce food coupons, according to Venezuelan researcher Marianne Diaz of the group @DerechosDigitales. A self-described former troll in India says he was given a half-dozen Facebook accounts and eight cell phones after he joined a 300-person team that worked to intimidate opponents of Prime Minister Narendra Modi. And in Ecuador, contracting documents detail government payments to a public relations company that set up and ran a troll farm used to harass political opponents.”

If you are shocked by the spread of conspiracy theories like Pizzagate online — and the emergence of a cottage industry that profits from the spread of such crazy ideas — don’t be. It’s not an accident, the report says.

“The new digital political landscape is one in which the state itself sows seeds of distrust in the media, fertilizes conspiracy theories and untruths, and harvests the resulting disinformation to serve its own ends,” the state-sponsored trolling report says.  “States have shifted from seeking to curtail online activity to attempting to profit from it, motivated by a realization that the data individuals create and disseminate online itself constitutes information translatable into power.”

The authors spent 18 months examining widespread trolling efforts in seven countries around the world: Azerbaijan, Bahrain, Ecuador, the Philippines, Turkey, Venezuela … and yes, the United States.

“Such attacks appear organic by design, both to exacerbate their intimidation effects on the target and to distance the attack from state responsibility,” the report says. “However, in the cases we studied, attributing trolling attacks to states is not only possible, it is also critical to understanding and reducing the harmful effects of this trend on democratic institutions.”

The report cites multiple examples of government propaganda by trolling:

  • In China, members of the “50 Cent Army” are paid nominal sums to engage in nationalistic propaganda
  • In Turkey, journalist Ceyda Karan was subjected to a three-day-long trolling campaign in which two high-profile media actors played a key role: pro-Erdoğan journalist Fatih Tezcan, who has more than 560,000 followers, and Bayram Zilan, a self-declared “AKP journalist” with 49,000 followers. Tezcan and Zilan were central players in a campaign that involved 13,723 tweets against Karan sent by 5,800 Twitter users
  • The Twitter account of Indian prime minister Narendra Modi follows at least twenty-six known troll accounts, and the prime minister has hosted a reception attended by many of the same trolls
  • Filipino president Rodrigo Duterte has given bloggers active in online harassment campaigns accreditation to cover presidential foreign and local trips. Duterte groomed a cyber militia of around five hundred volunteers during his election campaign, eventually promoting key volunteers to government jobs after his election (For more on Duterte’s use of trolls, read this Bloomberg story.)
  • The Turkish government maintains a volunteer group of six thousand “social media representatives” spread across Turkey who receive training in Ankara in order to promote party perspectives and monitor online discussion
  • In Venezuela, former vice president Diosdado Cabello, who currently hosts the TV show Con el Mazo Dando (Hitting with the Sledgehammer) on the Venezuelan state-owned TV channel VTV8, used his TV show and a Telegram channel associated with it to encourage Twitter attacks on opposition politician Luis Florido using the hashtag #FloridoEresUnPajuo (“Florido, you’re a lying idiot”). Attacks on Florido lasted for days; they were vitriolic and crude and frequently accused him of being a traitor to Venezuela.
  • In Russia, state-sponsored trolling has been professionalized, with “troll farms” operating in a corporatized manner to support government social media campaigns. The most well-known troll farm is the Internet Research Agency (IRA), but there are reportedly scores of such organizations all around the country

Trolling efforts work in part because the trolls have access to data that helps them game social media algorithms; their posts fool Facebook and Twitter into giving them more prominence. That worked during the U.S. presidential campaign, when the Russian troll group Heart of Texas gained 200,000 likes soon after launch – more than the official state GOP page.

“In one form of algorithm gaming, trolls hijack hashtags in order to drown out legitimate expression,” the report says.

Don’t be part of a troll army

If all this sounds to you like a fairly traditional propaganda campaign, I agree. It’s just far more targeted, thanks to the information age. And Americans seem particularly vulnerable to propaganda at the moment, for a variety of reasons. But you don’t have to be.

If you don’t want to be part of the troll/propaganda army, what should you do? Do all the things your high school English teacher said to do. Don’t be a troll. Don’t say things just to get an emotional reaction, because you like setting people’s hair on fire. Always provide evidence, stick to facts, and don’t be drawn into ad hominem attacks. Rise above them. When you see a vitriolic post by someone whose Twitter handle includes random strings of numbers, or who otherwise has a thin social media profile, assume you are dealing with a troll – even if the person seems to be on your side. Remember, America’s enemies simply want to sow discord; they don’t really care whose “side” they’re on. At a bare minimum, don’t repeat things you haven’t verified yourself just because you agree with the sentiment expressed. Read numerous independent sources before passing on information.
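One of the tells mentioned above — a handle ending in a long run of numbers, typical of auto-generated accounts — is easy to check mechanically. This is a naive, hypothetical heuristic, not anything from the report, and plenty of legitimate accounts will match it:

```python
import re

def looks_like_troll_handle(handle):
    """Flag handles that end in a long run of digits (5 or more), a common
    pattern for auto-generated accounts. Treat a match as a prompt for
    skepticism, not proof of anything."""
    return bool(re.search(r"\d{5,}$", handle))

print(looks_like_troll_handle("patriot84721093"))  # prints True
print(looks_like_troll_handle("bobsullivan"))      # prints False
```

A short digit suffix like a birth year (e.g. "jane1984") is deliberately not flagged, since real people append those all the time.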

Meanwhile, if you see or hear someone dismissing independent media with over-the-top criticisms, question their motives. Disagreeing with facts is healthy. Questioning someone’s integrity and patriotism, or persuading others to ignore an entire group or industry, should be viewed with deep skepticism.

Here’s how you recognize trolling, according to the Institute for the Future report:

  • Accusations of collusion with foreign intelligence agencies.
  • Accusations of treason.
  • Use of violent hate speech as a means of overwhelming and intimidating targets. Every female target of government-backed harassment receives rape threats
  • Creation of elaborate cartoons and memes.
  • Trolls often accuse targets of the very behaviors the state is engaging in. In numerous countries, for example, trolls make claims that targets are affiliated with Nazism or fascist elements. Politicians and their proxies use claims of “fake news” as a form of dog whistling to state-sponsored trolls.

In which state are consumers most prepared for a cyber attack?

Larry Ponemon

Ponemon Institute is pleased to present the results of a U.S.-based survey of consumers located in all 50 states and Washington D.C. Survey findings were used to create the Cyber Hygiene Index (CHI), which attempts to measure consumers’ ability to protect themselves from various criminal attacks, especially in the online environment.

The CHI consists of a series of positive and negative survey questions weighted by the relative importance of each question for achieving a high level of readiness.

In the context of this research we define cyber hygiene as an individual’s ability to maintain a high level of readiness in order to prevent, detect and respond to cyber-related attacks such as malware, phishing, ransomware and identity/credential theft. The index provides a score ranging from +37 points (highest possible CHI) to -39 points (lowest possible CHI).
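The actual CHI questions and weights are proprietary to the study, but the scoring shape described above — positively and negatively weighted questions whose weights bound the index (here +37 to -39) — can be sketched as follows. Every question name and weight below is invented for illustration:

```python
# Hypothetical weighted index in the spirit of the CHI: each yes/no survey
# question carries a weight; positive-hygiene questions add to the score,
# risky-behavior questions subtract from it.
QUESTIONS = {
    # name: (weight, polarity) -- all names and weights are invented
    "uses_password_manager": (3, +1),
    "updates_software":      (2, +1),
    "backs_up_data":         (2, +1),
    "reuses_passwords":      (3, -1),
    "clicks_unknown_links":  (2, -1),
}

def chi_score(answers):
    """answers maps question name -> True/False ('yes')."""
    score = 0
    for q, (weight, polarity) in QUESTIONS.items():
        if answers.get(q, False):
            score += polarity * weight
    return score

careful = {"uses_password_manager": True, "updates_software": True, "backs_up_data": True}
risky = {"reuses_passwords": True, "clicks_unknown_links": True}
print(chi_score(careful), chi_score(risky))  # prints "7 -5"
```

With this toy question set the index is bounded by +7 (all good habits, no bad ones) and -5 (the reverse), analogous to the study’s +37 to -39 range.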

A total of 4,290 respondents were surveyed, which represented a 3.2 percent response rate from a proprietary sampling frame of consumers located throughout the United States. A total of 553 surveys were removed from the final sample because of reliability failure. The state-by-state sample sizes varied from a low of 40 completed surveys in Wyoming to a high of 179 completed surveys in New York.

Figure 1 provides the CHI scores for the top 5 and bottom 5 U.S. states. The bracketed number next to each state is its relative ranking, from the most positive score for New Hampshire (4.29) to the most negative score for Florida (-6.29).

Figure 1

In this section, we provide an analysis of the CHI and survey findings. The figures summarize the results of our survey. Each chart provides the overall survey response compiled from our total sample of 4,290 U.S. consumers with comparison to the 100 individuals with the most risky responses. We call this group the Bottom 100.

The complete audited research results are presented in the Appendix of this report. We have organized the report according to the following topics:

  • The impact of identity theft on cyber hygiene
  • The impact of malware and phishing attacks on cyber hygiene
  • The impact of a lost device on cyber hygiene
  • The impact of password practices on cyber hygiene
  • The impact of online behavior on cyber hygiene

Figure 2 shows the percentage of respondents who said they experienced an identity fraud or another identity theft crime over the past 12 months. Our hypothesis is that consumers who experience an identity related crime were less likely to have strong cyber hygiene at the time of the incident.

Figure 2

Figure 3 shows the immediate consequences of the identity theft. As can be seen, both the Overall and Bottom 100 show a similar pattern. The most significant consequence is the decline in credit because of a low FICO score, followed by the misuse or theft of the respondents’ credit or debit cards.

Figure 4 presents respondents’ level of cautiousness resulting from the identity theft incident. As shown, 42 percent of respondents said the incident had a significant impact on their level of caution when connected to the Internet or when sharing their personal information. In sharp contrast, 60 percent of the Bottom 100 said the incident had no impact on their online behaviors.

Figure 4

There are dozens more findings and charts in the report, which you can download for free at this link on Webroot.com.

Who likes long airport lines? For Clear, and airports, frustration is a sales pitch

Bob Sullivan

“Skip the lines! No wait times!” yelled the “Clear” salespeople swarming beleaguered fliers at Sea-Tac airport on Thursday. The standard passenger security screening line wound far down the usual hallway. Travelers who approached slumped their shoulders when it came into view. But all around this frustrated and captive audience were sales staff offering an immediate, easy answer: Sign up for Clear. There’s a free trial. You’ll be escorted to the front of the line! You can’t lose!

Actually, passengers are losing. That long-line alternative costs $179 a year.

What’s Clear? It’s kind of like TSA Pre or Global Entry. Passengers sign up with these services before flying and trade some personal information for a chance at shorter security lines when they get to the airport.

Clear addresses a different part of the security screening process, however. It lets fliers use their fingerprints (or their eyes) instead of their IDs when entering security checkpoints. Clear users still go through standard passenger screening — shoes off, etc. For consumers, the main benefit is the chance to skip ahead to the physical screening portion of security checkpoints.

But that chance to cut in line, especially if you are running late, is a pretty compelling offer.  Especially when Clear-only lines look so friendly, calm, and inviting, compared to the chaos happening at the other end of the hallway.

It’s understandable for passengers to look at the situation and wonder if the airport is somehow conspiring to nudge fliers to sign up for Clear — especially when you consider that the Port of Seattle, which operates Sea-Tac, gets 10% of Clear’s gross sales at the airport, according to Seattle radio station KUOW.

Are these two entities profiting off of flier misery? Or even orchestrating it? It’s natural to wonder about that, said aviation expert Will McGee, an airline passenger advocate and author of the book Attention All Passengers.

“It’s like first you create a problem, and then you hit people with a (paid) solution to the problem,”  he said.

To be clear, the Transportation Security Administration sets staffing levels at the nation’s airports using a complex formula based on busy times, not Clear or the Port of Seattle. And TSA often doesn’t do a good job of that. Two years ago, when security lines during summer travel reached crisis proportions, TSA had the fewest full-time staffers since its creation.

The agency hired hundreds more agents this year to avoid a repeat, but that’s a drop in the bucket compared to the surge in traffic many airports are experiencing. There were 43 percent more Sea-Tac passengers in 2017 than five years ago. That means frustrating delays are still common. Airlines like United and Alaska are sending out warnings to passengers, suggesting they arrive at the airport a full two hours before some domestic flights.

Other TSA efforts to stem the problem have seen mixed results. Its TSA PreCheck program, which costs $85 for five years, turned out to be too popular with fliers, who now sometimes face long waits at airport security anyway.

Clear says it’s just trying to help. TSA’s failure creates a market opportunity. Clear’s value proposition is simple: Give the firm some biometric information, and you won’t have to pull out your license or passport at the airport. In an instant, you can pass the first part of every airport’s security two-step, the identity verification, and skip straight to the passenger screening.

At the moment, based on the wait-free exclusive Clear lines I saw this week at Sea-Tac, the value is quite real. A spokesman for the company told me most Clear users pass screening in five minutes.

“It lets you take that extra meeting, or spend more time with family,” the spokesman, who asked not to be named, said.

The firm claims Clear works because it opens up a bottleneck in screening — eliminating the TSA agent who looks at your license, then at your face, and then scribbles on your boarding pass.

In my experience, the bottleneck isn’t in ID verification, however. It’s in screening. You’ll frequently see TSA agents deliberately slow down when the line behind them gets too long. And as for the added security of biometric identification, that’s questionable. The Clear spokesman told me it was “100% accurate,” a risky claim to make about any technology. Clearly, in one way, it eliminates human error; repeated red-team tests have shown the failure rate of TSA agents is high. On the other hand, biometric information can be faked, and Clear also eliminates the human element from screening. A well-trained TSA agent can theoretically spot potentially dangerous would-be passengers during those brief human encounters.

I asked Clear if it had any data or studies to back up claims that it genuinely speeds up the screening process — vs. simply creating a kind of airport HOT lane — and the firm hasn’t gotten back to me yet.

I also asked the Port of Seattle to respond to the impression that it is profiting off of passengers’ misery — or somehow might have a hand in making that misery. In a statement, the agency said it has worked with TSA to increase the number of agents, and pointed out that only about 3% of passengers currently use Clear. The agency’s full statement is pasted below.

I’m still awaiting a response from TSA.

I talked to a couple of sales folks at Sea-Tac and expressed my dismay; one conceded that the situation didn’t look good, and that he didn’t think the free-trial arrangement was ideal. He did say that Clear’s partnership with airports is ultimately a good thing for fliers, because it will help fix a clearly overburdened system.

As often happens, outsourcing government tasks to a private company is a tempting solution. In reality, it’s both a band-aid and an abdication of responsibility.

“The problem is that in many cases airport authorities share much of the blame for security congestion and passenger delays through screening,” McGee says. “They should be working on developing sensible solutions to alleviate such problems for all passengers, not developing for-profit solutions for the few who can and will pay to avoid such messes.”

If you find yourself standing in a long line this summer, being upsold on Clear by hawkers promising a chance to cut in line, it would be worth asking yourself: Do I trust this company long-term with my biometric information? You should also wonder if Clear’s future might look anything like TSA PreCheck’s — it works for early adopters, until it becomes so popular that long lines follow. And naturally, you might also wonder: What if those folks were actually helping with passenger screening instead of giving sales pitches?

Perry Cooper of the Port of Seattle issued this statement to me:

“The TSA and Homeland security determines the staffing assignments for all airports throughout the country. We have worked with our Congressional delegation for the last several years to encourage additional staffing as we’ve been the fastest growing airport in the country over the last five years. The TSA has faced staffing challenges with the boom in the region. They have recently brought in more staff from around the country to help immediately and they have more staff recently hired going through training who expect to be on the job in the next few weeks. In addition, the TSA has worked to get more K9 teams here to Sea-Tac as well. The combination of additional TSA staff and K9 teams helps improve throughput at the checkpoints.

“The Port has increased our efforts in our area of responsibility outside the checkpoints. We hired eight additional Pathfinders for the summer, and recently approved four more, who help to ‘queue balance,’ which means moving people from one line to another.

“For more information, here’s a blog post we’ve put up recently to help walk people through some of the details of checkpoints and what arriving early means in your planning.

“Clear is a trusted traveler product approved by the TSA just like PreCheck. It is used in over 30 airports across the country. The number we see going through Clear lanes is about 3% of our monthly total of passengers and does not have an effect on the speed of the general lines. The fee collected is a concession fee, just as any airport would collect from a dining or retail tenant. All of those monies are required to go back into the Airport Improvement Fund, which funds amenities at the airport. PreCheck and Clear are provided as choices for travelers to use.”

 

While negligence causes the most breaches, insiders do the most damage

Larry Ponemon

Ponemon Institute and ObserveIT have released The 2018 Cost of Insider Threats: Global Study, on what companies have spent to deal with a data breach caused by a careless or negligent employee or contractor, criminal or malicious insider or a credential thief. While the negligent insider is the root cause of most breaches, the bad actor who steals employees’ credentials is responsible for the most costly incidents.

The first study on the cost of insider threats was conducted in 2016 and focused exclusively on companies in the United States. In this year’s benchmark study, 717 IT and IT security practitioners in 159 organizations in North America (United States and Canada), Europe, Middle East and Africa, and Asia-Pacific were interviewed.

According to the research, if the incident involved a negligent employee or contractor, companies spent an average of $283,281. The average cost more than doubles if the incident involved an imposter or thief who steals credentials ($648,845). Hackers cost the organizations represented in this research an average of $607,745 per incident.

Here are the main findings of the research:

Imposter risk is the most costly.

The cost ranges significantly based on the type of incident. If it involves a negligent employee or contractor, each incident can average $283,281. The average cost more than doubles if the incident involves an imposter or thief who steals credentials ($648,845). Hackers cost the organizations represented in this research an average of $607,745 per incident. The activities that drive costs are: monitoring & surveillance, investigation, escalation, incident response, containment, ex-post analysis and remediation.

The negligent insider is the root cause of most incidents.

Most incidents in this research were caused by insider negligence. Specifically, the careless employee or contractor was the root cause of 2,081 of the 3,269 incidents reported. The most expensive incidents, those involving imposters who steal credentials, were also the least reported: 440 incidents in total.

Organizational size and industry affect the cost per incident.

The cost of incidents varies according to organizational size. Large organizations with a headcount of more than 75,000 spent an average of $2.081 million over the past year to resolve insider-related incidents. To deal with the consequences of an insider incident, smaller organizations with a headcount below 500 spent an average of $1.80 million. Companies in financial services, energy & utilities and industrial & manufacturing incurred average costs of $12.05 million, $10.23 million and $8.86 million, respectively.

All types of insider risk are increasing.

Since 2016 the average number of incidents involving employee or contractor negligence has increased from 10.5 to 13.4. The average number of credential theft incidents has tripled over the past two years, from 1.0 to 2.9.

Employee or contractor negligence costs companies the most.

In terms of total annual costs, it is clear that employee or contractor negligence represents the most expensive insider profile. While credential theft is the most expensive on a unit cost basis, it represents the least expensive profile on an annualized basis.
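The annualized comparison above can be checked with simple arithmetic: multiply each profile’s average number of incidents per year by its average cost per incident. A minimal sketch in Python, using only the frequency and unit-cost figures reported in the study summary:

```python
# Annualized insider-threat cost per profile:
# average incidents per year x average cost per incident.
# Frequencies (13.4 and 2.9) and unit costs ($283,281 and $648,845)
# are the figures reported in the study summarized above.

profiles = {
    "negligent employee or contractor": (13.4, 283_281),
    "credential thief": (2.9, 648_845),
}

for name, (incidents_per_year, cost_per_incident) in profiles.items():
    annualized = incidents_per_year * cost_per_incident
    print(f"{name}: ${annualized:,.0f} per year")

# Approximate results: negligence ~ $3.80 million per year,
# credential theft ~ $1.88 million per year -- consistent with the
# finding that negligence is the most expensive profile annually.
```

This makes the study’s point concrete: even though each credential-theft incident costs more than twice as much, negligence happens often enough that it dominates the annual bill.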

It takes an average of more than two months to contain an insider incident.

It took an average of 73 days to contain an insider incident. Only 16 percent of incidents were contained in less than 30 days.

We conclude that companies need to intensify their efforts to minimize insider risk: incidents are becoming both more frequent and more costly, and, as the containment figures show, they are not resolved quickly.

Click here to read the rest of this study.

 

Privacy problems? Think of them as side effects

Bob Sullivan

Not long ago, I was approached by someone to help write a book about the race to cure cancer. It was an intriguing idea, and it sent me down a rabbit hole of research so I’d be able to understand what I’d be getting into. What I found was one Greek myth-like tale after another, of a wonderful breakthrough followed by a tragic outcome. An incredibly promising development followed by crushing consequence. Of treatments that killed cancer but also killed patients. Of cures that are worse than the disease.

Sometimes, these are stories about egos blinded by a God complex, refusing to see they are hurting instead of helping. Usually, they are stories about people who spend decades in service to humanity and the slow, very unsteady, very unsure march of progress.

And these are stories about damned side effects.

I usually tell people that I’m a tech reporter, but that I focus on the unintended consequences of technology — tech’s dark side.  Privacy, hacking, viruses, manipulation of consumers via big data. These things are kind of like the nuclear waste of “progress.” But lately I’ve been thinking about changing that description.

Now, I think the problem is a lot more like the medical concept of side effects.

Companies like Facebook, Uber, and Google are full of brilliant engineers who spend all their time and energy trying to solve some of the world’s great problems, and they often do.  Uber and its imitators are wonderful at solving vexing transportation problems.  Facebook *has* connected billions of people, and let millions of families share baby photos easily.  These are good tools. Amazing tools.

But tech firms aren’t built to think about side effects.  Long before the Russian trolls in 2016, plenty of people warned Facebook about the crap its service was spewing, about how its tool had been hijacked and weaponized. But Facebook didn’t listen. The firm was too focused on the “cure” it was inventing — maybe too arrogant, maybe too naive — to see the damage it was doing.

There are similar tales all across tech-land.

Banking apps let us pay our friends instantly; they also let criminals steal from us instantly. Talk to banks about this, and you can almost hear the mad-scientist approach (I hear, “Well, consumers really should protect themselves,” as “We can’t let a few victims get in the way of progress!”)

Cell phone companies have created amazing products. And now, we know, they also make it easy for law enforcement to track us.

There’s a cynical way to view this, of course. Facebook is only concerned with making money, Google doesn’t really care about making the world a better place, just making its balance sheet a better place. If you believe that, I’m not trying to talk you out of it.  Corporations are people after all, our Supreme Court says, and greedy people at that. It’s illegal for them to act otherwise; it would be negligent not to maximize shareholder value.

I’ve spent 20 years talking to people in the tech industry, however, and there are plenty of folks in it who don’t think that way. I think most folks in tech who fail us are better described as naive Utopians rather than greedy bastards.

In the coming months, I’ll be working on a new set of initiatives around this notion. The effort really started this year with the re-release of Gotcha Capitalism. My podcast “Breach” is also part of this. So are some new audio projects I’m working on. I’m being vague because I have to, for now. You might see a bit of a slowdown in posts as I ready these projects, but rest assured, I’m on the beat.

In the new introduction to the new Gotcha Capitalism, I sum up what I feel is the civil rights issue of our time: Big Data being used against consumers. It fits the Failed Utopia model to a T. Folks wanted to remove the human element — often susceptible to racial and other forms of bias — from important decisions in realms like credit and criminal punishment. So credit scores are now used to grant mortgages, and formulas are used in sentencing decisions. Unfortunately, as my Dad taught me in the 1970s, “Garbage In, Garbage Out” is still the primal rule in computing. Algorithms can suffer from bias, too. What makes this scary, however, is many folks haven’t woken up to this fact yet. Just as, once upon a time, people believed that photographs can’t lie, today, many blindly think that data can’t lie.

It can, and does. More important, in the wrong hands, data can be abused.  So now we have the even-worse story of a powerful tool built by a Utopian falling into the wrong hands and being abused by an evil genius.

This is the story of tech today.

I’m hardly the only one who recognizes this. Organizations like the Center for Humane Technology are springing up all over. This is promising. But the forces aligned against such thoughtful use of tech are powerful, and billions of dollars are at stake. Sometimes, it can feel like the onslaught of tech’s takeover is a force of nature, like gravity. Just ask anyone who’s ever tried to convince a startup to think about security or privacy while it’s racing to release new features.

Not unlike someone racing to invent a cure, side effects be damned.

I hope you’ll join me in this effort. Little things mean a lot — such as this woman’s suggestions for getting people to put down their smartphones when she wants to talk.  Mere awareness of the issue helps a lot. Think about how much news you get from Facebook or Twitter today compared to five years ago. Would your high school civics teacher be proud?

When tech is released into the world, side effects like privacy and security issues shouldn’t be an afterthought. They should be considered and examined with all the rigor that the medical profession has long practiced. That’s how we’ll make sense out of our future.

‘Knowledge asset’ risk comes into focus; nation-states a bigger concern

Larry Ponemon

The Second Annual Study on the Cybersecurity Risk to Knowledge Assets, produced in collaboration between Kilpatrick Townsend and Ponemon Institute, examines whether, and in what ways, organizations are beginning to focus on safeguarding the confidential information critical to the development, performance and marketing of their core businesses in a period of targeted attacks on these assets.

Ponemon Institute surveyed 634 IT security practitioners who are familiar and involved with their organization’s approach to managing knowledge assets. All organizations represented in this study have a program or set of activities for managing knowledge assets. The first study, Cybersecurity Risk to Knowledge Assets, was released in July 2016.

Awareness of the risk to knowledge assets increases. More respondents acknowledge that their companies very likely failed to detect a breach involving knowledge assets (an increase from 74 percent of respondents in 2016 to 82 percent of respondents in this year’s research). Moreover, in this year’s research, 65 percent of respondents are aware that one or more pieces of the company’s knowledge assets are now in the hands of a competitor, an increase from 60 percent of respondents in the 2016 study.

The cost to recover from an attack against knowledge assets increases. The average total cost incurred by organizations represented in this research due to the loss, misuse or theft of knowledge assets over the past 12 months increased 26 percent from $5.4 million to $6.8 million.
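That percentage increase is easy to verify from the two dollar figures in the paragraph above (a one-line sketch; the $5.4 million and $6.8 million values are the study’s):

```python
# Percentage increase in average total cost of knowledge-asset loss,
# from $5.4 million (2016) to $6.8 million (this year's study).
before, after = 5.4, 6.8
pct_increase = (after - before) / before * 100
print(f"{pct_increase:.0f}%")  # prints 26%
```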

Eighty-four percent of respondents state that the maximum loss their organizations could experience as a result of a material breach of knowledge assets is greater than $100 million as compared to 67 percent of respondents in 2016.

Actions taken that support the growing awareness of the risk to knowledge assets

Following are findings that illustrate how the growing awareness of the risk to knowledge assets is improving cybersecurity practices in many of the companies represented in this study.

  • Companies are making the protection of knowledge assets an integral part of their IT security strategy (68 percent of respondents vs. 62 percent of respondents in 2016).
  • Boards of directors are requiring assurances that knowledge assets are managed and safeguarded appropriately (58 percent of respondents vs. 50 percent of respondents in 2016).
  • Companies are addressing the risk of employee carelessness in the handling of knowledge assets. Specifically, training and awareness programs are focused on decreasing employee errors in the handling of sensitive and confidential information (73 percent of respondents) and confirming employees’ understanding and ability to apply what they learn to their work (68 percent of respondents).
  • Companies are adopting specific technologies designed to protect knowledge assets. The ones for which use is increasing most rapidly include big data analytics, identity management and authentication and SIEM.
  • There is a greater focus on assessing which knowledge assets are more difficult to secure and will require stricter safeguards for their protection. These are presentations, product/market information and private communications.
  • There is greater recognition that third party access to a company’s knowledge assets is a significant risk. As a result, more companies are requiring proof that the third party meets generally accepted security requirements (an increase from 31 percent of respondents in 2016 to 41 percent in this year’s study) and proof that the third party adheres to compliance mandates (an increase from 25 percent of respondents in 2016 to 34 percent in this year’s study).
  • Companies are aware that nation-state attackers are targeting their company’s knowledge assets (an increase from 50 percent to 61 percent in this year’s study) and 79 percent of respondents believe their companies’ trade secrets or knowledge assets are very valuable or valuable to a nation-state attacker.

To download the full study at Kilpatrick Townsend, click here 

Why my futile search for tuxedo pants shows the Russians are winning

Bob Sullivan

I’ve been raging about Facebook-style privacy invasions for a long time, so I’m glad that folks *seem* to be listening now, though the distance between noise and action is quite far.

I’m not a Luddite, however. My complaints are a lot more practical. I’ll often make this point: On one side of the ledger, we are surrendering privacy at unprecedented levels, granting blank checks to future corporations and governments with consequences we can’t possibly imagine. And we’re getting very little for it. Meanwhile, Russia, China, and other enemies now have an incredibly powerful weapon to use against us and our freedom. That’s a bad deal. Let me explain.

What are we supposed to be getting in exchange for all this tracking of our every move? Better ads! I will concede that better ads would certainly be lovely. But, as anyone who’s ever worked in advertising knows, there’s still an awful lot of snake oil being sold in the name of better ads. In fact, today’s “targeted” ads continue to produce some of the worst ads imaginable. Even when some of the biggest and most honorable names in retail and media are involved. Let me show you.

I have a black tie event to attend soon, which means dragging my, ahem, inexpensive tuxedo out of the back of my closet. Not surprisingly, the pants no longer fit. So I did what any sensible consumer who attends a black tie event every five years would do — I poked around Nordstrom Rack’s website hoping to find a pair that could pass for a single evening. I’ll be sitting at a table most of the night, so who’ll notice if they aren’t a perfect match? (Sorry, Kim Peterson. You tried your best.)

I gave up in about three minutes, when what little fashion pride I have kicked in and I realized my plan wouldn’t work. So I schlepped to a Nordstrom Rack store the next day and tried on a bunch of black pants to make sure I wouldn’t embarrass myself. Let me note that I shop at the store often enough that I am a member, because hey, I like deep discounts.

These two great brands are getting hoodwinked.  The consequences are larger than you think.

Fast forward to this morning when I open my daily New York Times email, which came with an enticing headline about allergies. And what do I see at the top of the email? An ad for tuxedo pants. I’ve made this point before, and I’ll continue to make it, perhaps for decades. Do you see what happened there? Billions of dollars and huge media companies conspired to deliver me an ad that was not just bad, it was uniquely bad. It was catastrophically bad. It was targeted bad. It was an ad for something that I had just purchased…in fact, something I had just purchased from the very store that paid for the ad. There could be no worse time to show me this ad. Any random ad would be better than an ad for the very thing I need to buy the least, right? And again, delivering this uniquely terrible, targeted ad required creation of a system that cost billions, robs millions of their privacy, and outfits America’s enemies with a devastating weapon.

But wait: There’s even more wrong with my tuxedo-ad experience. Being the game consumer that I am, I clicked on the ad to see what would happen. Maybe there’s a cheaper price for the pants I’d just purchased, and I could return them and save a few bucks. Alas, when I did, I saw the curious chart above. While the price for the pants is indeed competitive, fully 16 of the 17 sizes shown are unavailable. Only a single size — 42×32 — is actually for sale. Meaning, in reality, I got an ad for something that wasn’t for sale. And that flat-out irritated me. It wasted my time.

Here’s what I know: Someone is stealing Nordstrom’s advertising money.  (I don’t know why my newsletter doesn’t have a sponsor yet.  I could do better than this.)

I know I’m telling you something you know. We’ve all glanced at a product online, only to be stalked by that product for days, at every website we visit. I’m sure it works to some degree.  For every person shown an ad for a product they’ve purchased, there’s another who needs to see it 5 or 10 times before they pull the trigger. So sure, those ads might be better than random ads in some cases. The ad industry calls this re-targeting, and claims these ads have superior click-through rates.   Solid data from the ad industry is hard to come by, however.

And don’t forget, I’m a Nordstrom Rack member. The firm knows my email address, and what I’ve purchased. Now, I have clicked opt-out on enough data sharing arrangements that there might be some reason the datastream broke down and I got an ad for a product that I couldn’t buy, at the very moment when I least needed it, shortly after I had just purchased that item from the store which paid to get in front of me. More likely, however, this ad delivery system is just flawed.

So, to repeat my main point: All this technology works great if you want to attack a society with propaganda. It works terribly to help commerce and consumers.

This is my privacy problem. It’s just a bad deal.

Look, I’d love to have seen ads for tuxedo pants that actually fit me last week.  Instead, the only thing I can count on is I now will wonder how all these data points might be used by hackers against me, or by a nation-state to manipulate me and my friends, in the future.

This is not a story about tuxedo pants.  Or about annoying ads. This is a story about the false promise that is the utopia of targeted advertising, and the unexpected consequences that this foolish quest creates.  Years ago, when I first ranted against retargeting, I talked — as I always do — about future unintended consequences.  In my wildest dreams, I didn’t imagine that this kind of data hoarding could help a nation-state attack our democracy.  This is *exactly* the point of today’s story. Who knows how my search for pants today might be used against me tomorrow?  Will it signal to my health insurance company that my rates need to go up?  Will a potential future employer use that information to turn me down for a job?  Will a propaganda pusher in St. Petersburg put me in a “bucket” and prod me with cleverly-crafted political ads?

I don’t know. But I do know these ads didn’t help. And they might hurt. That’s a bad deal for everyone.