Author Archives: BobSulli

Equifax: ‘This is … the big one we’ve all been waiting for’ — Breach podcast season 2

Bob Sullivan

“This is, potentially, the motherlode. The big one we’ve all been waiting for.” — Ron Lieber, The New York Times “Your Money” columnist.

So begins our second season of Breach, which just dropped this month. We begin with an episode titled “Why, Equifax?” — which means all the things you think it means.

How could a company with so much precious information lose it all in what ultimately turned out to be a cascade of errors? A patch that was never applied, software that didn’t work because a certificate wasn’t updated for 19 months, an IT team that relied on the “honor system” to implement security measures. Then there’s the biggest irony of all: Hackers broke in through the very system — the dispute resolution portal — that was designed to help American consumers fix errors in their credit reports.

But let’s back up, to the biggest question most people had when Equifax was hacked: Who the heck is Equifax and why does it have all my most intimate personal information?

If those sound like old questions, they aren’t. We have answers — along with ideas about bigger questions, like “What now?” and “Has our privacy been murdered once and for all?”

Alia Tavakolian and I have spent six months researching what many believe is the most important hack ever, along with a team of researchers and producers at Spoke Media, led by Janielle Kastner.  I’m very proud of the result and I think you’ll like it.

Breach is a sponsored podcast paid for by Carbonite; but you’ll be glad to know Carbonite didn’t meddle in what we say or report on in the podcast.

This season, we are releasing six episodes, one per week, each about 30 minutes long. We’ll explain the history of the credit bureau industry, the run-up to the breach, the bungling of the breach response, and the individuals who are fighting back in the most creative ways possible (wait until you hear what happens in small claims court). We are also running a great experiment with consumer lawyer Joel Winston in which we try to get every credit report on a single consumer. (Think you have three? You might have dozens, or even hundreds.)

Then we’ll explain why, I believe, privacy isn’t dead. But it is on life support, and we have no time to waste.

You can listen to episode one by clicking play below, if that embedded link works for you. If not, use one of these links:

Stitcher page: https://www.carbonite.com/podcasts/breach/s02e01-Equifax-data-breach

iTunes page: https://itunes.apple.com/us/podcast/breach/id1359920809?mt=2

Securing the modern vehicle: a study of automotive industry cybersecurity practices

Larry Ponemon

Today’s vehicle is a connected, mobile computer, which has introduced an issue the automotive industry has little experience dealing with: cybersecurity risk. Automotive manufacturers have become as much software companies as transportation companies, and they face all the challenges inherent to software security.

Synopsys and SAE International partnered to commission this independent survey of the current cybersecurity practices in the automotive industry to fill a gap that has existed far too long—the lack of data needed to understand the automotive industry’s cybersecurity posture and its capability to address software security risks inherent in connected, software-enabled vehicles. Ponemon Institute was selected to conduct the study. Researchers surveyed 593 professionals responsible for contributing to or assessing the security of automotive components.

Software Security Is Not Keeping Pace with Technology in the Auto Industry

When automotive safety is a function of software, the issue of software security becomes paramount—particularly when it comes to new areas such as connected vehicles and autonomous vehicles. Yet, as this report demonstrates, both automobile OEMs and their suppliers are struggling to secure the technologies used in their products. Eighty-four percent of the respondents to our survey have concerns that cybersecurity practices are not keeping pace with the ever-evolving security landscape.

Automotive companies are still building up needed cybersecurity skills and resources. The security professionals surveyed for our report indicated that the typical automotive organization has only nine full-time employees in its product cybersecurity management program. Thirty percent of respondents said their organizations do not have an established product cybersecurity program or team. Sixty-three percent of respondents stated that they test less than half of their hardware, software, and other technologies for vulnerabilities. Pressure to meet product deadlines, accidental coding errors, lack of education on secure coding practices, and vulnerability testing occurring too late in production are some of the most common factors that lead to software vulnerabilities. Our report illustrates the need for more focus on cybersecurity; secure coding training; automated tools to find defects and security vulnerabilities in source code; and software composition analysis tools to identify third-party components that may have been introduced by suppliers.

Software in the Automotive Supply Chain Presents a Major Risk

While most automotive manufacturers still produce some original equipment, their true strength is in research and development, designing and marketing vehicles, managing the parts supply chain, and assembling the final product. OEMs rely on hundreds of independent vendors to supply hardware and software components to deliver the latest in vehicle technology and design. Seventy-three percent of respondents surveyed in our report say they are very concerned about the cybersecurity posture of automotive technologies supplied by third parties. However, only 44 percent of respondents say their organizations impose cybersecurity requirements for products provided by upstream suppliers.

Connected Vehicles Offer Unique Security Issues

Automakers and their suppliers also need to consider what the connected vehicle means for consumer privacy and security. As more connected vehicles hit the roads, software vulnerabilities are becoming accessible to malicious hackers using cellular networks, Wi-Fi, and physical connections to exploit them. Failure to address these risks could be a costly mistake, given the impact they may have on consumer confidence, personal privacy, and brand reputation. Respondents to our survey viewed the technologies with the greatest risk to be RF technologies (such as Wi-Fi and Bluetooth), telematics, and self-driving (autonomous) vehicles. This suggests non-critical systems and connectivity are low-hanging fruit for attacks and should be a main focus of cybersecurity efforts.

As will be clear in the following paragraphs, survey respondents across myriad sectors of the industry show significant awareness of the cybersecurity problem and a strong desire to make improvements. Of concern is the 69 percent of respondents who do not feel empowered to raise their concerns up the corporate ladder, but efforts such as this report may help bring the needed visibility of the problem to the executive and boardroom level. Just as lean manufacturing and ISO 9000 practices brought greater quality to the automotive industry, a rigorous approach to cybersecurity is vital to achieving the full range of benefits of new automotive technologies while preserving quality, safety, and rapid time to market.

Sixty-two percent of those surveyed say a malicious or proof-of-concept attack against automotive technologies is likely or very likely in the next 12 months, but 69 percent reveal that they do not feel empowered to raise their concerns up their chain of command. More than half (52 percent) of respondents are aware of potential harm to drivers of vehicles because of insecure automotive technologies, whether developed by third parties or by their organizations. However, only 31 percent say they feel empowered to raise security concerns within their organizations.

Thirty percent of respondents overall say their organizations do not have an established product cybersecurity program or team. Only 10 percent say their organizations have a centralized product cybersecurity team that guides and supports multiple product development teams.

When these data are broken down by OEM versus supplier, 41 percent of respondents at suppliers say they do not have an established product cybersecurity program or team of any kind. In contrast, only 18 percent of OEM respondents say they lack a product security program or team.

A significant percentage of suppliers are overlooking a well-established best practice: to employ a team of experts to conduct security testing throughout the product development process, from the design phase through decommissioning.

The majority of the industry respondents believe they do not have appropriate levels of resources to combat the cybersecurity threats in the automotive space. On average, companies have only nine full-time employees in their product cybersecurity management programs. Sixty-two percent of respondents say their organizations do not have the necessary cybersecurity skills. More than half (51 percent) say they do not have enough budget and human capital to address cybersecurity risks.

Vehicles are now essentially a mobile IT enterprise that includes control systems, rich data, infotainment, and wireless mesh communications through multiple protocols. That connectivity can extend to the driver’s personal electronic devices, to other vehicles and infrastructure, and through the Internet to OEM and aftermarket applications, making them targets for cyberattacks. Unauthorized remote access to the vehicle network and the potential for attackers to pivot to safety-critical systems puts at risk not just drivers’ personal information but their physical safety as well.

Automotive engineers, product developers, and IT professionals highlighted several major security concern areas as well as security controls they use to mitigate risks.

Technologies viewed as causing the greatest risk are RF technologies, telematics, and self-driving vehicles. Of the technological advances making their way into vehicles, these three are seen to pose the greatest cybersecurity risks. Organizations should be allocating a larger portion of their resources to reducing the risk in these technologies.

Respondents say that pressure to meet product deadlines (71 percent), lack of understanding/training on secure coding practices (60 percent), and accidental coding errors (55 percent) are the most common factors that lead to vulnerabilities in their technologies. Engaging in secure coding training for key staff will target two of the main causes of software vulnerabilities in vehicles.

Download the rest of this report from the Synopsys website (PDF).

Target used location-based data to change prices on consumer app

Click to read KARE-TV’s investigation.

Bob Sullivan

Ever see a price for an item online, then look again and see a different price, and think you were going crazy? Probably not. You were probably encountering some form of dynamic pricing, which retailers have quietly dabbled in for many years. Quietly, because every time consumers find out about it, there’s an uproar and they have to back off – as Target did this week, when a Minnesota TV station exposed the store for charging very different prices on its app and in its physical stores. A shopper who claimed to have paid $99 for a razor in-store, then spotted the same thing online for $69, had tipped off the station.

The station reproduced this pattern, with some striking results:

“For instance, Target’s app price for a particular Samsung 55-inch Smart TV was $499.99, but when we pulled into the parking lot of the Minnetonka store that price suddenly increased to $599.99 on the app,” the station said. (Give ’em a click, read the whole report).

KARE shopped for more items, and found an even more intriguing pattern: Basically, the closer shoppers were to the store, the more the item cost.   If you are near the store, you don’t need a price enticement, the logic goes.   It also means Target is following you around, virtually, and knows where you are.  And it’s looking over your shoulder to decide what price you deserve on an item.  Spooky.
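KARE’s findings are consistent with a simple geofenced-pricing rule: show shoppers far from a store the lower online price, and shoppers near (or in) the store the full price. Target has not published how its app actually works, so the sketch below is purely illustrative; the geofence radius, coordinates, and pricing rule are all assumptions, not Target’s code.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3956 * 2 * asin(sqrt(a))

def app_price(in_store_price, online_price, shopper, store, radius_miles=0.5):
    """Hypothetical rule: shoppers inside the geofence see the full
    in-store price; shoppers outside it see the online discount."""
    distance = haversine_miles(*shopper, *store)
    return in_store_price if distance <= radius_miles else online_price

# A shopper across town sees $499.99; the same shopper in the
# parking lot sees $599.99, the pattern the station observed.
# (Coordinates are approximate and for illustration only.)
store = (44.9706, -93.4372)        # Minnetonka store
downtown = (44.9778, -93.2650)     # downtown Minneapolis
parking_lot = (44.9707, -93.4371)
print(app_price(599.99, 499.99, downtown, store))     # 499.99
print(app_price(599.99, 499.99, parking_lot, store))  # 599.99
```

The point of the sketch is how little it takes: once an app has location permission, a distance check and two price fields are enough to charge you more the moment you pull into the lot.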

Target has changed its policies, according to KARE, in response to the story.

The firm sent me a full statement, included at the bottom of this story. It reads, in part, “We’ve made a number of changes within our app to make it easier to understand pricing and our price match policy.”  In essence, the firm has added language to its app that makes clear a price is valid in a store, or online — see the screenshot below, provided by Target.

I saw something vaguely similar recently when I priced rental cars for a trip to Seattle. When I was logged in using my “discount code” and membership, I got higher prices than when I shopped as an anonymous user.

There’s nothing illegal about dynamic pricing, probably,  even though it might seem unsavory, or downright deceptive. It’s definitely a Gotcha.  Why?  Because the rules of this game are not transparent to you.  And it takes advantage of people who might be too busy or distracted to play the “open another browser on another computer just to check” game when they are buying things.

But I’m here to tell you: This is the only way to buy things in the 21st Century. Shopping around used to mean driving around and getting different prices from different stores. Today, it means clicking around to make sure you aren’t being followed when you buy things.  Every. Single. Time.  Never make a hotel reservation without shopping both at an aggregator like Expedia and direct from the hotel. If you have time, call the hotel, too, and ask about the online price. When you are in a store, always pull out your smartphone and do a quick price comparison — not just at THAT retailer, but at Amazon, and at other shops.  And now you know, it’s best to price the item *before* you get to the store, just in case you are being followed.

Christopher Elliott, travel deal expert at Elliott.org — a site you should be reading — makes the point that software can help keep you from being followed by companies and dynamic pricing.

“You definitely have to log in and out and search for prices,” Elliott says. “Also, consider using your browser’s incognito mode. Companies are trying to track you and may change prices based on who you are, or who they think you are.”

You don’t always have to buy where the price is lowest; in fact, I’m against chasing every last dollar as a shopper.  It’s ok to pay a little more if you want to support local businesses, and often, people waste money and gas trying to save every last penny. That’s not the point here. You just want to make sure you aren’t getting ripped off. It’s a pain, I know.  Sorry. That’s Gotchaland.  And until some regulator forbids the practice, you have to live with it.

—STATEMENT FROM TARGET —

Image provided by Target. Note the phrases near the price indicating where it’s valid — in a store, or online.

“We appreciate the feedback we recently received on our approach to pricing within the Target app.

“The app is designed to help guests plan, shop and save whether they are shopping in store or on the go. We are constantly making updates and enhancements to offer the best experience for guests shopping at Target.

“We’ve made a number of changes within our app to make it easier to understand pricing and our price match policy. Each product will now include a tag that indicates if the price is valid in store or at Target.com. In addition, every page that features a product and price will also directly link to our price match policy.

“We’re committed to providing value to our guests and that includes being priced competitively online and in our stores, and as a result, pricing and promotions may vary. Target’s price match policy allows guests to match the price of any item they see at Target or from a competitor, assuring they can always get the lowest price on any item.”

Guests will receive the latest version of the app in the next few days.

Secure file sharing & content collaboration for users, IT & security

Larry Ponemon

The ability to securely and easily share files and content in the workplace is essential to employees’ productivity, compliance with the EU’s General Data Protection Regulation (GDPR) and digital transformation. However, a lack of visibility into how users are accessing sensitive data and the file applications they are using is putting organizations at risk for a data breach. In fact, 63 percent of participants in this research believe it is likely that their companies had a data breach in the past two years because of insecure file sharing and content collaboration.

According to the findings, an average of 44 percent of employees in organizations use file sharing and collaboration solutions to store, edit or share content in the normal course of business. As a result of this extensive use, most respondents (72 percent) say that it is very important to ensure that the sensitive information in these solutions is secure.

Despite their awareness of the risks, only 39 percent of respondents rate their ability to keep sensitive contents secure in the file sharing and collaboration environment as very high. Only 34 percent of respondents rate the tools used to support the safe use of sensitive information assets in the file sharing and collaboration environment as very effective.

Sponsored by Axway Syncplicity, the purpose of this research is to understand file sharing and content collaboration practices in organizations and what practices should be taken to secure the data without impeding the flow of information. Ponemon Institute surveyed 1,371 IT and IT security practitioners in North America, United Kingdom, Germany and France. All respondents are familiar with content collaboration solutions and tools. Further, their job function involves the management, production and protection of content stored in files.

This section presents an analysis of the key findings. More details can be found on Axway’s website. Following are key themes in this research.

Data breaches in the file sharing and content collaboration environment are likely. Sixty-three percent of respondents say it was likely that their organizations experienced the loss or theft of sensitive information in the file sharing and collaboration environment in the past two years.

The best ways to avoid a data breach are to have skilled personnel with data security responsibilities (73 percent of respondents), more effective data loss protection technologies in place (65 percent of respondents), more budget (56 percent of respondents) and fewer silos and/or turf issues among IT, IT security and lines of business (49 percent of respondents).

Data breaches are likely because of risky user behavior. About 70 percent of respondents say they have received files and content not intended for them. Other risky events include: accidentally sharing files or contents with individuals not authorized to receive them, not deleting confidential contents or files as required by policies and accidentally sharing files or content with unauthorized individuals outside the organization, according to 67 percent, 62 percent and 59 percent of respondents, respectively.

A lack of visibility into users’ access puts sensitive information at risk. Only 31 percent of respondents are confident in having visibility into users’ access and file sharing applications. Some 65 percent of respondents say not knowing where sensitive data is constitutes a significant security risk. Only 27 percent of respondents say their organization has clear visibility into what file sharing applications are being used by employees at work. A consequence of not having visibility is the inability for IT leadership to know if lines of business are using file sharing applications without informing them (i.e. shadow IT).

Customer PII and confidential contents and files are the types of sensitive information at risk. The most sensitive types of data shared with colleagues and third parties are customer PII and confidential documents and files. Hence, these need the most protection in the file sharing and collaboration environment.

The plethora of unstructured data makes managing the threats to sensitive information difficult. As defined in the research, unstructured data is information that either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy, but may contain data such as dates, numbers, and facts as well. An average of 53 percent of organizations’ sensitive data is unstructured and organizations have an average of almost 3 petabytes of unstructured data.

Most unstructured data is stored in email and file sharing solutions. Respondents estimate an average of 20.5 percent is stored in shared network drives and 20 percent in other file sync and share solutions. Almost half (49 percent of respondents) are concerned about storing unstructured data in the cloud, and only about 20 percent of unstructured data is stored in cloud-based services such as Dropbox or Box (20 percent) and Office 365 (17 percent).

On average, almost half of an organization’s sensitive data is stored on-premises. According to Figure 7, an average of almost half (49 percent) of organizations’ sensitive information is stored on-premises and approximately 30 percent is located in the public cloud. An average of 22 percent of sensitive information is stored in the hybrid cloud. Hybrid cloud is a cloud computing environment that uses a mix of on-premises, private cloud and third-party, public cloud services with orchestration among the platforms.

Companies are challenged to keep sensitive content secure in the file sharing and collaboration environment. As mentioned earlier in the report, respondents are aware of the threats to their sensitive information, but admit their governance practices and technologies should be more effective. According to respondents, on average, about one-third of the data in the file sharing and collaboration environment is considered sensitive.

To classify the level of security that is needed, respondents say it is mostly determined by data usage, location of users and sensitivity of data type (62 percent, 61 percent and 60 percent, respectively). Twenty-four percent of respondents say their companies do not determine content and file-level confidentiality.

To read the rest of this report, click here to visit Axway’s site.

No, I don’t have Bruce tickets — When Google ‘interprets’ emails, it’s spooky and too clever by half

What is this reservation for???

Bob Sullivan

Google and Facebook often do spooky things without our informed consent. Recently, Google seemed to ruin a holiday surprise for me… but ended up breaking my heart instead. Here’s a story about clever tech going too far, doing things I never asked it to do and, ultimately, making a fool of itself.

During a recent visit to Times Square in Manhattan, I spotted an intriguing and surprising pin when I pulled up Google Maps on my phone. “Reservation. Dec. XX / 8 p.m.” it said (I’m omitting the date). It looked like a typical hotel notification, the kind that started showing up automagically on G-Maps about two years ago. They always surprise (spook?) me, pulled as they are from Gmail, but in truth, they are often useful.

Not this time.

A little context. Back in September of 2016, Google told users that it would start integrating calendar events with maps.  When entering a meeting, if you fill out the “where” field, the address appears on your version of G-Maps. This is a pretty logical use of the tool. If you have a meeting, you are likely to pull up Maps and see where you are supposed to be. Given that you’ve explicitly entered the address into Google’s calendar, it seems not much of a leap to use that information on Google Maps.

But the 2016 announcement revealed something else.  To further embed your life in the Google ecosystem, the firm would also scan your emails (remember, Google and other developers can still ‘read’ your Gmail) for events like hotel reservations and enter those as points on maps, too. Naturally, I never read the announcement.  Like most of you, I just started seeing these pins for airports and hotels on maps, and somewhere inside, figured that was Google inferring things from my Gmail. This feels different to me. In this scenario, I didn’t explicitly give Google the right to know where I was going; instead, the firm looked over my shoulder at what I was doing, and put it on a map.  Again, it’s useful. But I never asked for this feature. I could imagine situations where this would be a bad thing. What if I had booked a surprise for someone, and s/he spotted it when I innocently pulled up a map one day? What if my boss saw it?  Also, who else can see it? What other kinds of marketing might I get because Google knows where I’m going?

I hadn’t considered the Bruce scenario, however.

Back to the suspicious “Reservation. Dec. XX / 8 p.m.” I had no plans for that day, but there it was. So I clicked on the pin. The address showed 219 W 48th St. Didn’t mean anything to me. A restaurant? A hotel? I clicked on the picture, and saw this:

BRUCE!

Ohhhhhhhhh. It’s not a movie. It’s not a dinner. It’s BRUCE! At the Walter Kerr Theatre. I’m from New Jersey, so I love Bruce. And I’ve been dying to see this Broadway show.

One problem: Tickets are really hard to get. And I know I don’t have them. Then it dawned on me: last Christmas season, I discussed going with my brother.   It was more of a joke, given the insane price tag. But maybe…maybe…he managed to score tickets.  Wow!

But then, how did it get into my calendar? Some happy error? Some new shared family calendar feature? As I contemplated my possible good fortune, I was also deeply troubled. Sure, ruining a surprise is bad. But this seemed beyond creepy. Did Google somehow know about my conversations with my brother? Or about his credit card purchases? As I went full-on conspiracy theory, I decided to make sure there was nothing in my email that created this situation. I searched for “Walter Kerr Theatre.”

And there it was.  No, I don’t have tickets to see Bruce that night.  A friend does.

Many months ago, an old friend who had won the online lottery scored Bruce tickets from Ticketmaster for December. And in her excitement she forwarded me the confirmation email she’d received from Ticketmaster.

That forwarded email was apparently enough to convince Google that *I* was going to the theatre that night. So it took details from the note and auto-populated it into my Google map.
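Google hasn’t published how its parser decides whose reservation an email describes, but the failure mode is easy to reproduce with a naive extractor that scans an entire message body for confirmation details and attributes them to the mailbox owner, forwarded or not. The sketch below is purely illustrative; the field names, regexes, and message format are my assumptions, not Google’s actual pipeline.

```python
import re

def extract_reservation(raw_email: str):
    """Naive parser: if a message body contains anything that looks like
    a confirmation, attribute the event to the mailbox owner.  It never
    checks whether the details sit below a 'Forwarded message' marker,
    so a friend's forwarded ticket receipt becomes *your* map pin."""
    venue = re.search(r"Venue:\s*(.+)", raw_email)
    when = re.search(r"Date:\s*(.+)", raw_email)
    if venue and when:
        return {"venue": venue.group(1).strip(), "when": when.group(1).strip()}
    return None

forwarded = """\
Can't believe I got these!!

---------- Forwarded message ----------
From: Ticketmaster
Your order is confirmed.
Venue: Walter Kerr Theatre
Date: Dec XX, 8:00 PM
"""

# The parser happily "finds" a reservation in mail the owner merely received.
print(extract_reservation(forwarded))
```

A more careful parser would at least stop at the forwarded-message boundary, or check that the confirmation was addressed to the account holder; apparently whatever Google ran on my inbox did neither.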

Haha, joke’s on me. No big deal, I’ll see Bruce another time.

But, this is both spooky and weird.  Not only is Google looking over my shoulder and putting things on a map (again, I never asked). Now it’s putting wrong things on that map. With just a little creativity, it’s easy to see how this could go wrong. A wife spotting a “suspicious” resort hotel reservation (is he cheating?).  A boss “finding out” that you are visiting a competitor (“Is she moonlighting?”).  Worse still, let’s say there’s a crime in Times Square on that December night.  When police subpoena Google for everyone who was near the scene of the crime, I’d be in the list.

I have no idea how often G-Maps makes mistakes like this. Maybe it’s exceedingly rare. But now, I’m not so sure. I’m on the lookout for more. If you know about one, please tell me. Meanwhile, if you don’t want Google to do this, I’m not sure what to tell you. Back in 2016, project manager Zach Maier gave handy instructions for toggling this feature off — in the Maps app, under settings, then “personal content.” The option “upcoming events” was apparently listed there at the time. It’s no longer there, at least on my version of Android. (While you are there, you can toggle off a feature I find annoying, the placement of contacts on Google Maps.) You could sign out of Maps, but that will probably screw with the normal operations of the software.

It’s hard to get right, the balance between creating new features and respecting privacy.

Managing the risk of post-breach or “resident” attacks

Larry Ponemon

Sponsored by Illusive Networks, Ponemon Institute surveyed 627 IT and IT security practitioners in the United States to understand how well organizations are addressing the cyber risks associated with attackers who may already be residing within the perimeter, including insiders that might act maliciously.

Click here to read the full study on the Illusive Networks website.

All participants in this research are involved in the evaluation, selection and/or implementation of IT security solutions and governance practices within their organizations.

This study starts with the premise that mitigating business impact once attackers are within the environment requires the ability to:

  1. Understand which cyberthreats pose the greatest risk and align the cybersecurity program accordingly;
  2. Proactively shape security controls and improve cyber hygiene based on an understanding of how attackers operate;
  3. Quickly detect attackers who are operating internally;
  4. Efficiently prioritize and act on incidents based on real-time awareness of how the organization could be impacted.

The data indicates that organizations have low confidence in their ability to prevent serious damage from post-breach attacks. When presented with a set of statements, only 36 percent of respondents express agreement or strong agreement that their security team is effective in detecting and investigating cybersecurity incidents before serious damage occurs.

It is welcome news, then, that security budgets are shifting in favor of allocating greater resources to threat detection and response.

For organizations to get where they need to be is an uphill challenge. While more than half (56 percent) of respondents to this survey believe they have reduced attacker dwell time over the past year, 44 percent say they have not (32 percent) or don’t know (12 percent). And not all attacks and incidents are equal. The survey also shows that only 28 percent of respondents agree or strongly agree that their security technologies are optimized to reduce top business risks. A recurring theme in this study is that the inability to see and act on what matters most to the organization hampers the effectiveness of multiple functions.

Part 2. Key Findings

In this section of the report we analyze the key findings of the research. The complete audited findings are presented in the Appendix of the report. We have organized the report according to the following topics:

  1. The risk alignment problem between IT security and the business
  2. Current capabilities to preempt, detect, and respond to post-breach attackers
  3. Takeaways: Toward better risk mitigation for post-breach or resident attacks

A. The risk alignment problem between IT security and the business

Comparing a few key data points makes it clear that the day-to-day functioning of IT security is not well aligned with business needs.

Although 56 percent of respondents say business leaders consider cybersecurity a top business risk, only 29 percent of respondents say business leaders communicate their business risk management priorities to IT security leaders, and only 29 percent of respondents say their security leaders effectively align security with top business risks.

More than 70 percent of respondents say senior leaders do not clearly communicate business risk: 71 percent say they are not informed about what senior managers consider their organizations’ business risk management priorities—important guidance if IT security is to prioritize what’s most important to the business.

Respondents also are not confident that their leadership understands how persistent and advanced threats can affect the enterprise (68 percent) or that IT security controls are not 100 percent effective (65 percent).

It makes sense, then, that 60 percent also indicate that leaders don’t understand that the risk of a successful cyberattack should be an ongoing concern.

Business leaders appear to be conflicted about the importance of a strong cybersecurity posture—or perhaps leaders don’t understand the importance of a business-aligned, proactive approach or their role in it. When respondents were asked to describe their executives’ views of the importance of the cybersecurity program, the top two responses seem contradictory.

On the one hand, respondents indicate that executives think a cyberattack could pose a strategic or existential threat to their organization (40 percent of respondents), yet given how important cyber risk seems to be, a reactive approach seems fairly prevalent; almost half (49 percent of respondents) say their organizations’ executives think cybersecurity should be addressed on an as-needed basis when problems arise.

The business/security collaboration gap is reflected in many ways. Whether fault for the disconnect lies with IT security leaders, senior executives, or both, only 35 percent of respondents say their IT security leaders are proactively included in planning and decision-making for new technology and business initiatives, and only 29 percent say IT security leaders effectively align security investments, processes, and controls with top business risks. Other steps not taken: having well-defined criteria for determining when to involve business leaders in responding to a cybersecurity incident or issue (only 30 percent of respondents agree) and educating business leaders on cyber risks that may affect their organization (only 38 percent agree).

Only about half (51 percent of respondents) say their organizations’ executives and senior management respect IT security leaders. As a possible consequence, only 37 percent of respondents say the security team has the support it needs from business teams to design and execute business-oriented threat detection and incident response capabilities.

Respondents say that protecting high-volume private data is not the top concern. Respondents were asked to identify the cyberattacks that pose the greatest risk to their business. Given the lack of communication about business risk, these views may not reflect the views of business leaders, but it is notable that although large breaches of PII, EHI, payment and employee data tend to hog the headlines, these are not respondents’ top concerns. The data indicate that the threat of theft of intellectual property or other strategic information, their own or their clients’, and various forms of disruption rank significantly higher on the risk scale.

Also, 60 percent of respondents say the worst consequence of a cyberattack would be the tampering with or compromise to the integrity of their products or services followed by the disruption of their core business network (58 percent of respondents). Threats to executive safety and privacy are also high on the list.

Business leaders lack understanding of the threats. Leaders cannot communicate effectively with IT security leaders or set cyber risk management priorities without a foundational understanding of the threat actors an organization needs to contend with, yet 68 percent of respondents say their executives and senior management do not have a good understanding of how threat actors work and the harm they can cause. Among technical functions, where granular threat understanding is necessary for strong detection and response, organizations fare better, but could be stronger.

Basic asset and access governance are only half-way there. A risk-focused approach also requires a strong picture of where the important IT assets are and who has access to them. Some 54 percent of respondents agree or strongly agree that their security team has up-to-date knowledge of which data, systems and infrastructure components support critical business processes, yet when asked a series of more detailed questions pertaining to asset awareness and change management, respondents rate themselves considerably lower.  The ability to keep pace with rapidly changing users, user functions, and IT infrastructure continues to be a challenge.

To keep reading this report, click here. 

Someone (China?) is building an enormous dossier database from all these massive hacks

Bob Sullivan

Perhaps you missed the tantalizing detail I reported earlier that Congressional investigators believe the initial Equifax hackers entered that company’s systems from computers using IP addresses in China. Or The New York Times reporting that U.S. authorities now blame China for the hack on Starwood / Marriott. You probably forgot that the devastating hack of the Office of Personnel Management systems has also been blamed on China. And you probably forgot that the hack of Anthem, the health care firm, was also blamed on China.

Combine all that information, and one thing seems disturbingly likely: There’s a big dossier database in the sky, controlled by some foreign entity, and your most personal information is in it.

Maybe you are worried about your credit report. But this surveillance database contains far, far more precious and revealing information. Where you traveled. How long you stayed. Your driver’s license. Your passport.  If you are a government worker, who your closest friends are, and even your fingerprint.

All in the hands of a foreign, potentially hostile, nation-state.

Attribution is a very tricky game — freelance actors? the Chinese government? Another nation state hiring mercenaries in China? — and anyone who asserts with surety they know who did it might be overstating their case. When we spent months looking into the Yahoo hack, it became clear that both nation-states and freelancers can be involved in the same hack, making breach analysis even harder. With Equifax, there’s a theory that rogue hackers gained entry at first, then handed off the access to a more sophisticated entity. This kind of hack-sharing means that whoever stole all that data from Yahoo — remember, for years, Russian agents could read millions of victims’ emails — is available to whoever is building this big dossier database in the sky. Passport numbers and 15-year-old emails linked? That’s quite an incredible amount of information.

It’s fashionable to blame things on China right now, but the particular nation-state that’s the culprit at Starwood doesn’t matter as much as the potential existence of this database.

I haven’t seen it, but plenty of folks I speak to very much believe it exists. The best evidence for it: Where are all the stories of Equifax-related identity thefts, or widespread Starwood points hacks, or….? Whoever is stealing this information isn’t doing it for money, and isn’t doing it for lulz. No one hangs out in a network for four years for lulz.  Or, for that matter, for money.

Instead, think about how useful a list of hotel stays would be as an intelligence-gathering tool. As my colleague at NBC News Ben Popken points out, Starwood is a favorite chain for U.S. Government employees. Executives, too. So perhaps most of the data is useless to the hackers; they just want the good stuff. That was initially the goal in the Yahoo hack: read the email of very specific people. A needle-in-a-haystack search, with the hay uninteresting.  Later on, however, the Yahoo hackers shared the stolen data with others who indeed picked through the hay — you and me, in this metaphor — and found all sorts of other uses for it.

Perhaps the criminals are even more interested in tracking corporate executives.  Understanding their movements can provide a lot of intelligence — “Why is he visiting South Korea? Is he interested in a new supplier?”  Think deeper, and you can imagine the data being used for leverage or extortion. What if a foreign power had information on a clandestine relationship a U.S. executive was having? That would be very useful in negotiations.

In some ways, all these hacks are starting to sound redundant, as if someone keeps stealing the same kinds of data over and over. But as Avivah Litan of Gartner recently told me, there is the matter of upkeep. Whoever has this database has to keep it current and accurate. Each new heist helps the “owner” clean the data. (Read more from her here and here.)

Bill Malik at Trend Micro offers another clever use for this executive-tracking database: something I call executive identity theft. Business email compromise is among the fastest-growing cybercrimes. A criminal poses as a CEO and demands her secretary wire money overseas immediately as part of secret merger talks. It works because underlings are less likely to question bosses. If a criminal had a tool that predicted executive movements, imagine how much easier, and more targeted, these attacks could be.

At this point, you are probably wondering what all this has to do with you. If merely monitoring high-value targets is the goal of these hackers, that should be a relief to most of us, right? Perhaps. But whoever is stealing these massive datasets is in it for the long game. Again, the Starwood hack lasted four years. Can you really be sure that you’ll be uninteresting to a foreign power in a decade or two? Are you sure there isn’t an email you wrote in 2003 that could embarrass you in 2023?

This is the point at which an editor would yell at me to give readers some hope, to dole out advice on what to do about all this.  So sure, change your passwords and limit the personal information you give large companies. Always act like anything you type into a keyboard might eventually end up on a billboard in Times Square. But realistically, you are collateral damage in a cyberwar being fought by nation-states on one side and fairly helpless U.S. corporations on the other.  The big dossier database in the sky is only going to get bigger, and more accurate, with each big hack.  That’s our 21st Century reality now.

 

Email impersonation attacks: a clear & present danger

Larry Ponemon

Most companies admit that it is likely they experienced a data breach or cyberattack because of such email-based threats as phishing, spoofing or impersonation and they are concerned about the ongoing risk of such threats. However, as shown in this research there is a disconnect between the perceived danger of email-based threats and the resources companies are allocating to reduce these risks.

Sponsored by Valimail, Email Impersonation Attacks: A Clear & Present Danger, was conducted by Ponemon Institute to understand the challenges organizations face to protect end-users from email threats, such as impersonation attacks. Ponemon Institute surveyed 650 IT and IT security professionals who have a role in securing email applications and/or protecting end-users from email threats.

The risks that are causing IT security practitioners to lose sleep are phishing emails directed at employees, executives, customers and partners; and email as a vector for cyberattacks. When asked what measures or technologies will be deployed in the next 12 months to prevent impersonation attacks, more companies say they will be using secure email gateway technology, DMARC, DKIM and anti-phishing training for employees. In fact, more companies will be using automated solutions to improve email trust.

We were surprised to see that a vast majority of companies believe they have had a breach involving email, yet are not embracing automated anti-impersonation solutions to protect themselves proactively. Adopting fully automated solutions for DMARC enforcement that provide email authentication will help companies get ahead of the attackers and build trust with their clients and end users.
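To make the mechanism concrete: a domain opts into DMARC by publishing a TXT record at `_dmarc.<domain>`, and receiving mail servers parse its tag/value pairs to decide what to do with unauthenticated mail. Below is a minimal sketch of such a parser; the sample record, reporting address, and domain are hypothetical, and real receivers do considerably more validation than this.

```python
def parse_dmarc(txt: str) -> dict:
    """Split a DMARC TXT record into its tag/value pairs."""
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if "=" in part:
            key, value = part.split("=", 1)
            tags[key.strip()] = value.strip()
    return tags

# Hypothetical record, as it might be published at _dmarc.example.com
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; pct=100"
policy = parse_dmarc(record)
print(policy["p"])  # → reject: the domain asks receivers to reject failing mail
```

The `p` tag is the crux of "enforcement": a domain at `p=none` merely monitors, while `p=quarantine` or `p=reject` actually stops impersonated mail, which is the gap between deploying DMARC and enforcing it that the study highlights.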

The following findings illustrate the disconnect between concerns about email threats and fraud and the lack of action taken by companies represented in this study. 

  • Eighty percent of respondents are very concerned about the state of their companies’ ability to reduce email-based threats, but only 29 percent of respondents are taking significant steps to prevent phishing attacks and email impersonation. 
  • Only 27 percent of respondents say they are very confident that their organization knows all of the vendors and services that are sending email using the organizations’ domain name in the “From” field of the message. 
  • Companies have complex email environments. On average, companies in this research have more than 1,000 employees, six servers and 15 cloud-based services that send email on their behalf. However, only 41 percent of respondents say their organizations have created a security infrastructure or plan for email security. 
  • Despite the ineffectiveness of anti-spam and anti-phishing filters, they have been the primary solution for preventing email-based cyberattacks and impersonation. Sixty-nine percent of respondents say their organizations use anti-spam or anti-phishing filters and 63 percent say they use these technologies to prevent impersonation attacks.
  • Companies are not spending enough to prevent email-based cyberattacks and fraud. While there is a sense of urgency among respondents to address the numerous threats against their email systems, only 39 percent of respondents say their organizations are spending enough to protect email from cyberattacks and fraud.

Because the risks discussed above are not being addressed, most companies believe they had a material data breach or cyberattack during the past 12 months that involved email. Seventy-nine percent of respondents say their organizations certainly or likely experienced a serious data breach or cyberattack during the past 12 months such as phishing or business email compromise. More than 53 percent of respondents say it is very difficult to stop such attacks.

“With the dramatic rise in impersonation attacks as a primary vector for cyberattacks, companies are re-assessing the balance of their security efforts,” said Alexander García-Tobar, CEO and co-founder of Valimail. “While traditional approaches are good for filtering malicious content and blocking spam, impersonation attacks can only be stopped with email anti-impersonation solutions. Individuals at all levels of a company, including customers and clients, are vulnerable to phishing, fraud, and impersonation attacks.”

To read the full study, click here and visit Valimail’s site. 

The life-cycle of a vote, and all the ways it can be hacked

Bob Sullivan

We know every vote counts, but will your vote actually be counted? Or will it be hacked? I’ve spent the last several months reporting on election hacking for my podcast Breach, and I’ve learned a lot: Mostly that vote “hacking” is a much broader problem than people realize.  While lots of attention has been paid to the hacking of electronic voting machines themselves, elections can be hacked months before, or months after, voting day.  Here’s a look at the entire life cycle of your vote, and all the places it can be hacked along the way.

Listen to the podcast on Stitcher

https://www.stitcher.com/podcast/pods/breach

or iTunes

https://itunes.apple.com/us/podcast/breach/id1359920809?mt=2

 

Step 1: Deciding to vote

The voting process begins when people decide to vote (or don’t), and register. The enemies of democracy spend a lot of time trying to convince citizens that their vote doesn’t count, that people shouldn’t even bother going to the polls. Encouraging apathy is actually step one. How does that happen? Through disinformation campaigns — state-sponsored trolling — nudged along unwittingly by people who fall for the trick.

“Academics will make the distinction that disinformation is false information that’s knowingly spread,” says Nick Monaco, a D.C.-based researcher and expert in worldwide trolling campaigns. “So there’s an intent to deceive people knowingly. Then they’ll say that misinformation is information that is spread unknowingly that’s false. So maybe you retweet a story that you thought was true, that would be a case of misinformation. But if you create a false story to smear someone that would be disinformation.”

In the podcast, we talk about a fictitious election between myself and Alia Tavakolian, my Breach co-host. Someone spreads a rumor online that I am a puppy killer — very untrue — and I lose crucial campaign time fighting off this attack. Why does it spread so quickly?  Bots, using artificial intelligence, talk it up.

“Most news organizations now have incentive (and) choose of their own accord to report on what’s trending online. What if what’s trending online is produced 90% by bots and 10% (by) humans?” Monaco said.

In other words, bots are hacking people’s attitudes. State-sponsored trolling is the hacking of our minds.

“I think that in the first place, if people’s attention is hacked already by a platform, and they’re spending time on this platform, and then they’re receiving messages that might sway their actions … So we already have you in one place, we know where you are, we know what you think about, and we know where you live. Let’s just send you some information that we think would be amenable to what you — what you think, and maybe influence you to act in some way,” Monaco said.

 

 

Step 2: Voter registration

Let’s say you press on past digital propaganda and decide you are going to vote. You register. That data has to live somewhere. And it has to remain accurate.  If a group wanted to engage in voter suppression, they could hack state registration databases and remove names — or just change addresses in a way that would create election-day chaos.

“(Voter) records are maintained in computer databases, many of which are connected directly or indirectly to the internet, and subject to the same kind of data breaches that affect other kinds of internet systems,” said Matt Blaze, a computer science professor at the University of Pennsylvania, where he’s been working on voting technology for the past fifteen years. “We often don’t find out that we’re not listed on the voter registration database when we should be until we show up at the polls to vote.”

This isn’t a theoretical risk. The U.S. government says that Russians tried to access voter registration databases in at least 21 states, and in two states they were able to succeed to some degree.

Even more ominous: If someone wanted to tip an election, they’d do this only in zip codes that traditionally leaned one way or the other.

“Because with the marketing data these days we can microtarget down to the neighborhood how we know a certain neighborhood’s going to vote,” said Maggie MacAlpine, co-founder of security firm Nordic Innovation Labs. “We’ve had some elections that were decided by less than 1,000 people, and the burden tends to be on the voter to say that you are registered or not. So if just ten people in the right place at the right time come in and say, ‘Well, I should be registered, why aren’t I registered?’ If you can keep that spike under the radar, you can actually change things that way.”

Many jurisdictions use e-poll books at voting locations now, to get the best registration information in the hands of poll workers. They also add another layer of technology to the process that can be hacked.

 

Step 3: Voting “Day”

U.S. voting machines have been under scrutiny dating back at least to the hanging chads of Bush v. Gore in the 2000 presidential election.  In 2002, Congress passed the Help America Vote Act, which gave states money and incentives to abandon old-fashioned voting machines and led to the purchase of electronic machines — generally touch-screens (DREs) or optical scan / scantron machines (like multiple-choice tests). They’ve caused a lot of trouble. There have been years of demonstrations showing the machines are vulnerable to various attacks.  Vendors often say these are only theoretical, that the machines themselves are not networked so they aren’t really vulnerable.  Many voting experts disagree.

“What people sometimes don’t understand about voting machines is that they’re really not as isolated from each other and from internet-attached systems as they may seem,” said J. Alex Halderman, director of the University of Michigan’s Center for Computer Security and Society, and another long-time voting expert.

For starters, the machines must be loaded with candidates — somehow.

 

“Before every election, virtually every electronic voting machine in the country has to be programmed, and it has to be programmed with the ballot design. That is the candidates, the races, and the rules for counting,” he said.  This is usually done with an election management system. “(Hackers) can potentially spread malicious software to every voting machine in the jurisdiction just by having that software essentially hitch a ride with the ballot programming that election officials copy to the machines in the field.”

Harri Hursti was the researcher who first hacked voting machines nearly 15 years ago.  His technique actually has a name: “The Hursti Hack.”

“What I found was that the bootloader is looking from the memory card a certain file name. If it finds that name, it will reprogram itself with the contents of that file with no checks, balances whatsoever,” he said. Some of the same machines he hacked 15 years ago are still being used in elections today. “Sometimes I get tired of talking about it…but it took people 15 years to listen.”
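The flaw Hursti describes is the absence of any integrity check before self-reprogramming: the bootloader trusts any file on the card that has the expected name. A standard mitigation is to refuse an image whose cryptographic hash is not on a pre-approved list. The sketch below is illustrative only (not actual voting-machine code), and the firmware contents and approved list are invented.

```python
import hashlib

# Hypothetical allow-list: hashes of firmware images the vendor has approved.
APPROVED_SHA256 = {
    hashlib.sha256(b"official firmware v1.2").hexdigest(),
}

def load_firmware(image: bytes) -> bool:
    """Accept a firmware update only if its hash is pre-approved.

    The Hursti-style attack works precisely because the real bootloader
    skipped a check like this and flashed whatever file it found.
    """
    digest = hashlib.sha256(image).hexdigest()
    if digest not in APPROVED_SHA256:
        return False  # unknown image: refuse to reprogram
    # ... flash the approved image here ...
    return True

print(load_firmware(b"official firmware v1.2"))  # → True
print(load_firmware(b"malicious payload"))       # → False
```

In practice a digital signature from the vendor is preferable to a hash list, since it allows new legitimate releases without updating every machine, but even the simple check above would have blocked the attack as described.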

Step 4: Vote counting

Once you leave the polling place, an intricate dance of technology takes place.  Perhaps the machine you used creates a local tally and prints out an end-of-day receipt, which is later added to tallies from other machines in that precinct, in that county, and in that state. The counts themselves must be accurate, but perhaps more important, the transmission of the counts must be secure.  Many experts see this as a vulnerable step.

“If we’re able to modify the transmission of vote tallies back and forth across these systems, we could potentially influence the vote,” said Mark Kuhr, a security expert with Synack Inc.

The votes might be sent over the Internet. They might be sent via “sneaker net,” with a courier driving memory cards to a central location.  In some states, vote tallies are transmitted wirelessly, and that introduces more potential problems. States that do this claim the data is encrypted, but experts worry about vulnerabilities, such as so-called man-in-the-middle attacks.  Devices like Stingray machines – often used by police to intercept smartphone transmissions – can pose as cellular network towers and download all information sent toward those towers.
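One standard defense against in-transit tampering is to authenticate each tally with a message authentication code, so a man-in-the-middle who alters the payload produces a verification failure at the receiving end. This is a hedged sketch of the idea, not a description of any state’s actual system; the shared key and the tally format are invented.

```python
import hashlib
import hmac

# Hypothetical secret provisioned to each precinct before election day.
SHARED_KEY = b"per-precinct secret provisioned before election day"

def sign_tally(tally: bytes) -> bytes:
    """Compute an HMAC-SHA256 tag over the tally payload."""
    return hmac.new(SHARED_KEY, tally, hashlib.sha256).digest()

def verify_tally(tally: bytes, tag: bytes) -> bool:
    """Constant-time check that the tally matches its tag."""
    return hmac.compare_digest(sign_tally(tally), tag)

tally = b"precinct=12;candidateA=412;candidateB=397"
tag = sign_tally(tally)

print(verify_tally(tally, tag))  # → True: payload arrived unmodified
# A man-in-the-middle who rewrites the counts cannot forge a valid tag:
print(verify_tally(b"precinct=12;candidateA=999;candidateB=1", tag))  # → False
```

Note that a MAC only detects tampering; it does nothing about interception or about an attacker who compromises the key, which is why experts also push for paper trails and post-election audits rather than trusting the transmission layer alone.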

Step 5: Announcing the results

It’s easy to overlook, but perhaps the prime election hacking opportunity might also be the easiest – skip the James-Bond-esque vote-flipping efforts, and just hack a secretary of state’s website to cause confusion.

“We know that the Russians have hacked websites that announce election results in the past,” said Jake Braun, executive director of the University of Chicago Cyber Policy Initiative and organizer of the Voting Village project at hacker conference Def Con. “They did it in the Ukraine a few years back. I mean, can you imagine if it’s election night 2020, and they have to take the Florida and Ohio websites down because they’ve been hacked by Russia, and like Wolf Blitzer is losing his (mind) on CNN and Russian RT has announced that their preferred candidate won, who knows who that is, and then of course the fringe media starts running with it as if it’s real here in the United States. …How long would it take to unwind that? I mean it would make Bush v Gore in 2000 look like well-ordered democracy.”

 

This makes me think of somebody who spent six hours making a wedding cake and drives it to the wedding and gets to the wedding and the second before they’re going to put it on the table, they trip and fall and the wedding cake splatters on the floor. That’s our election process.

Step 6: Accepting the results

Even after the vote is over, it’s not over.  A critical element of democracy is that the losing side accepts the results. Think back to step 1: If an enemy of democracy could foment enough disenchantment that a sizable set of the population refuses to accept the legitimacy of the election, that could be enough to “hack” the election process, too.

“Messaging around the integrity of voter information or the legitimacy of the election is something I’m really worried about,” Monaco said. “So aside from hard hacking of infrastructure, (what scares me most is) a disinformation campaign that would say, ‘The vote’s not legitimate, these people couldn’t vote, their voting records were altered,” even if that stuff’s not true. I mean the scary part is like with a kernel of truth that would really, really empower that disinformation campaign. So that’s like a nightmare scenario for me.”

The dollar bill is the fundamental unit of capitalism in America, and its integrity is paramount. If one day people decided, “What is the dollar really worth? I’m not sure. I don’t trust this thing,” our country would collapse. Voting is exactly the same way. The vote is the central unit of democracy, and right now the vote is under serious threat. People right now are asking themselves, “Should I really cast a vote or not? Does it really matter? Does it really count? When we added them all up, is it really correct?” It’s that fundamental an assault on our way of life.

The End: Next steps

Kim Zetter, who’s been reporting on election hacking for a decade, lays out the dark reality. Russian election interference is only the latest in a long line of problems with the way we vote in America.

“I would say that the Russians are a red herring because that’s not why we should be looking at this. This problem has existed since 2002, and people have ignored it,” she said. What is the real danger? “Everything is the danger. Danger is a software bug that could cause the machine to not record your vote, to lose votes, to record it inaccurately. The danger is an insider in the election office, anyone who is opposed to U.S. foreign policy, anyone who has a gripe with the U.S. And again, it doesn’t have to be someone who’s really sophisticated.”

If all this seems hopeless, it’s not.  For starters, every single expert we talked to about election hacking said that, while the problem is challenging, democracy is far from doomed.

“I have confidence in our democratic institutions, and we’ve survived a lot,” said Adam Levin, whose company Cyberscout performs security audits for state election officials. “And my belief is that we’re going to survive this as well, but the truth is, look, it is a Herculean task. It is a daunting task. No one denies that. But this country has always stepped up, always. At some point, we dug down deep, and we stepped up.”

What can you do? Step up and vote. And be informed. The biggest vulnerability in democracy is apathy. The fewer people who vote, the easier it is to manipulate the result. The fewer people who work hard to be informed, the easier they are to manipulate.  The angrier you are, the easier it is to set you against your fellow citizens.  So vote on (or before!) election day. Read, read, read before and after the election to stay informed. And don’t fall for the enemies’ “divide and conquer” strategy or “let’s you and him fight” tactics. Disagree, but keep America a civil society. There’s a lot you can do to prevent the hacking of democracy. Listening to the full podcast would be a good start.

 

Where’s the data? Firms increasingly fret about governance; join us for a free webinar

Larry Ponemon

There will be a free live webinar discussing these results on Oct. 18 at 11 a.m. Click here to register for this webinar.

Organizations are becoming increasingly vulnerable to risks created by the lack of oversight, visibility and controls over employees and other insiders who have access to confidential and high-value information assets. The 2018 Study on the State of Data Access Governance, sponsored by STEALTHbits Technologies, reveals the importance of a Data Access Governance program that can effectively reduce the risk created by employees’ and privileged users’ accidental and conscious exposure of confidential data.

In the context of this research, Data Access Governance is about making access to data exclusive and limiting the number of people who have access to data and their permissions to that data to the lowest levels possible. Ponemon Institute surveyed 991 IT and IT security practitioners in the United States (586) and United Kingdom (405).

To ensure these respondents have an in-depth knowledge of how their organizations manage users’ access to data, we asked them to indicate their level of access to their organizations’ IT networks, enterprise systems, applications and confidential information. If they had only limited end user access rights to IT resources, they were not included in the final sample of respondents.

While the study reveals companies are taking some steps to manage the risk, these respondents, who are knowledgeable about access rights in their organizations, perceive that the risk will either increase (48 percent) or stay the same (41 percent) over the next year.

Key Findings

 Following is an analysis of the key findings. The complete audited findings are presented in the Appendix of this report. We have organized the findings according to the following topics:

  • The risk of end user access to unstructured data
  • Data Access Governance tools used to limit access to sensitive data
  • Current practices in assigning privileged user access
  • Effectiveness of Data Access Governance programs
  • Recommendations for improving Data Access Governance programs

The risk of end user access to unstructured data

 Organizations lose track of where employees and other insiders are storing unstructured data. In the context of this research, end users are employees, temporary employees, contractors, consultants and others who have limited or “ordinary” access rights to their organizations’ IT resources.

Unstructured data is defined as information that either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured data tends to be user generated or manipulated data that lives in documents, such as spreadsheets or even scanned and signed contracts. Typically, this data may be in a structured format in an application and exported to a document for use by a person or team of people.

Respondents were asked to rate their confidence that their organization knows where users are storing unstructured data from 1 = no confidence to 10 = high confidence. Only 19 percent of respondents rate their confidence as high (7+ responses). This lack of confidence indicates that much of a company’s sensitive unstructured data is not secured.

Organizations lack visibility into how users are accessing unstructured data. As discussed above, respondents have little confidence they know where unstructured data resides. They also don’t know for certain the end users accessing the sensitive unstructured data.

Half of respondents (50 percent) say their organizations rely upon platform capabilities, such as access controls built into Dropbox, to determine who has access to sensitive unstructured data. Only 37 percent of respondents say they use role-based access enforced through AD groups, even though many rate AD as very important. Only 31 percent monitor compliance with policies, and only 28 percent use information from specialized file activity monitoring.
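The role-based approach the survey asks about boils down to a simple rule: a share grants access to groups, not to individuals, and a user gets in only by belonging to an authorized group. The sketch below illustrates that check in miniature; the group and share names are invented, and a real Active Directory deployment would resolve nested group memberships and ACLs rather than consult a flat table like this.

```python
# Hypothetical mapping of file shares to the AD-style groups allowed in.
GROUP_ACCESS = {
    "Finance-Share": {"finance-users", "auditors"},
    "HR-Share": {"hr-users"},
}

def can_access(user_groups: set, share: str) -> bool:
    """Grant access only if the user belongs to a group authorized for the share."""
    return bool(user_groups & GROUP_ACCESS.get(share, set()))

print(can_access({"finance-users"}, "Finance-Share"))  # → True
print(can_access({"finance-users"}, "HR-Share"))       # → False: wrong group
```

The appeal of this model for governance is auditability: reviewing who can reach a share means reviewing group membership, which is far more tractable than reviewing per-file permissions scattered across platforms.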

Documents and spreadsheets are the unstructured data most secured today. Some 71 percent of respondents say documents and spreadsheets are most often secured and 64 percent of respondents say emails and text messages are secured.

Confidence in safeguarding unstructured data is low. As a result of the volume of unstructured data that needs to be protected and the difficulty in determining who has access to sensitive unstructured data, only 25 percent of respondents rate their confidence in discovering unstructured data containing sensitive information as very high (7+ on a scale of 1 = no confidence to 10 = high confidence). Only 12 percent of respondents are highly confident in their organizations’ ability to discover where unstructured data is stored in the cloud.

Inappropriate behaviors by end users put organizations at risk. Fifty-nine percent of respondents say users access sensitive or confidential data because of curiosity and 52 percent of respondents say users share their access rights with others.

False positives and too much data are the biggest challenges in determining if an event or incident is an insider threat. Organizations find it difficult to determine whether inappropriate access to sensitive data was caused by a negligent or malicious insider. The biggest challenges: security tools yield too many false positives (63 percent of respondents) and security tools yield more data than can be reviewed in a timely fashion (60 percent).

To continue reading, download the full report at Stealthbits website.
