Preparing for a Safe Post Quantum Computing Future: A Global Study

Sponsored by DigiCert, the purpose of this research is to understand how organizations are addressing the post quantum computing threat and preparing for a safe post quantum computing future. Ponemon Institute surveyed 1,426 IT and IT security practitioners in the United States (605), EMEA (428) and Asia-Pacific (393) who are knowledgeable about their organizations’ approach to post quantum cryptography.

Quantum computing harnesses the laws of quantum mechanics to solve problems too complex for classical computers. With quantum computing, however, cracking encryption becomes much easier, which poses an enormous threat to data security.

That is why 61 percent of respondents say they are very worried about not being prepared to address these security implications. Another threat of significance is that advanced attackers could conduct “harvest now, decrypt later” attacks, in which they collect and store encrypted data with the goal of decrypting the data in the future (74 percent of respondents). Despite these concerns, only 23 percent of respondents say they have a strategy for addressing the security implications of quantum computing.

The following findings illustrate the challenges organizations face as they prepare to have a safe post quantum computing future. 

Security teams must juggle the pressure to stay ahead of cyberattacks targeting their organizations while preparing for a post quantum computing future. Only 50 percent of respondents say their organizations are very effective in mitigating risks, vulnerabilities and attacks across the enterprise. One reason for this lack of effectiveness is that almost all respondents say cyberattacks are becoming more sophisticated, targeted and severe. According to the research, ransomware and credential theft are the top two cyberattacks experienced by organizations in this study.

The clock is ticking on PQC readiness. Forty-one percent of respondents say their organizations have less than five years to be ready. The biggest challenges are not having enough time, money and expertise to be successful. Currently, only 30 percent of respondents say their organizations are allocating budget for PQC readiness. One possible reason for the lack of support is that almost half of respondents (49 percent) say their organization’s leadership is only somewhat aware (26 percent) or not aware (23 percent) of the security implications of quantum computing. Forty-nine percent of respondents are also uncertain about the implications of PQC.

Resources are available to help organizations prepare for a safe post quantum computing future. In the last few years, industry groups such as ANSI X9’s Quantum Risk Study Group and NIST’s post-quantum cryptography project have been initiated to help organizations prepare for PQC. Sixty percent of respondents say they are familiar with these groups. Of these respondents, 30 percent say they are most familiar with ANSI X9’s Quantum Risk Study Group and 28 percent are most familiar with NIST’s industry group.

Many organizations are in the dark about the characteristics and locations of their cryptographic keys. Only slightly more than half of respondents (52 percent) say their organizations are currently taking an inventory of the types of cryptography keys used and their characteristics. Only 39 percent of respondents say they are prioritizing cryptographic assets and only 36 percent of respondents are determining if data and cryptographic assets are located on-premises or in the cloud.
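As a rough illustration of what such an inventory involves (a minimal sketch, not DigiCert’s or Ponemon’s methodology; the sample assets and classification table are assumptions), each record can capture an asset’s algorithm, key size and location, and flag the quantum-vulnerable ones for migration priority:

```python
# Minimal sketch of a cryptographic-asset inventory (illustrative only).
# Public-key algorithms based on factoring or discrete logarithms (RSA,
# ECDSA, ECDH, DSA, DH) are broken outright by Shor's algorithm; symmetric
# ciphers and hashes are only weakened (Grover) and survive with larger
# parameters.

QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DSA", "DH"}

def classify(algorithm, key_bits, location):
    """Return an inventory record with a hypothetical migration priority."""
    if algorithm.upper() in QUANTUM_VULNERABLE:
        priority = "high: migrate to a PQC algorithm (e.g. ML-KEM/ML-DSA)"
    elif algorithm.upper() in {"AES", "SHA-256"} and key_bits < 256:
        priority = "medium: move to 256-bit parameters"
    else:
        priority = "low"
    return {"algorithm": algorithm, "key_bits": key_bits,
            "location": location, "priority": priority}

# Hypothetical assets, on-premises and in the cloud:
inventory = [
    classify("RSA", 2048, "on-prem TLS server"),
    classify("AES", 128, "cloud storage bucket"),
    classify("AES", 256, "cloud database"),
]
for record in inventory:
    print(record["location"], "->", record["priority"])
```

A real inventory would be built by scanning certificate stores, key vaults and code rather than from a hand-written list, but the prioritization step looks much the same.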

Very few organizations have an overall centralized crypto-management strategy applied consistently across the enterprise. Sixty-one percent of respondents say their organizations either have only a limited crypto-management strategy applied to certain applications or use cases (36 percent) or lack a centralized crypto-management strategy altogether (25 percent).

Without an enterprise-wide cryptographic management strategy, organizations are vulnerable to security threats, including those leveraging quantum computing methods. Only 29 percent of respondents say their organizations are very effective in the timely updating of their cryptographic algorithms, parameters, processes and technologies, and only 26 percent are confident that their organization will have the cryptographic techniques needed to protect critical information from quantum threats.

While an accurate inventory of cryptographic keys is an important part of a cryptography management strategy, organizations are overwhelmed by the growing number of keys in use. Sixty-one percent of respondents say their organizations are deploying more cryptographic keys and digital certificates. As a result, 65 percent of respondents say this is increasing the operational burden on their teams, and 58 percent of respondents say their organizations do not know exactly how many keys and certificates they have.

The misconfiguration of keys and certificates and the ability to adapt to cryptography changes prevents a cryptographic management program from being effective. Sixty-two percent of respondents say they are concerned about the ability to adapt to changes in cryptography such as algorithm deprecation and quantum computing. Another 62 percent are concerned about the misconfiguration of keys and certificates. Fifty-six percent are concerned about the increased workload and risk of outages caused by shorter SSL/TLS certificate lifespans.
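The workload effect of shorter lifespans is easy to see with back-of-the-envelope arithmetic (the fleet size and lifespan figures below are illustrative assumptions, not numbers from the report): each time the maximum lifespan shrinks, every certificate in the fleet must be reissued proportionally more often.

```python
import math

def renewals_per_year(cert_count, lifespan_days):
    """Each certificate must be reissued roughly every `lifespan_days` days."""
    return cert_count * math.ceil(365 / lifespan_days)

fleet = 10_000  # hypothetical number of TLS certificates in an enterprise
for lifespan in (398, 90):  # current public-TLS maximum vs. a 90-day cadence
    print(f"{lifespan}-day certs -> {renewals_per_year(fleet, lifespan):,} renewals/year")
```

Moving a 10,000-certificate fleet from annual renewal to a 90-day cadence multiplies the renewal volume several times over, which is why automation, rather than headcount, is usually the answer to shrinking lifespans.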

To secure information assets and the IT infrastructure, organizations need to improve their ability to effectively deploy cryptographic solutions and methods. Most respondents say their organizations do not have a high ability to drive enterprise-wide best practices and policies, detect and respond to certificate/key misuse, remediate algorithm deprecation or breaches, and prevent unplanned certificate issuance.

Crypto Centers of Excellence (CCOEs) can support organizations’ efforts to achieve a safe post quantum computing future. A CCOE can help improve operational cryptographic processes and strengthen an organization’s trust environment. CCOEs do require advanced technologies and expertise in cryptography to maintain secure operations and comply with applicable regulations. Most organizations in this research plan to have a CCOE. However, only 28 percent of respondents say their organizations currently have a mature CCOE that provides crypto leadership, research, implementation strategy, ownership and best practices. Another 28 percent of respondents say they have a CCOE, but it is still immature.

Hiring and retaining qualified personnel is the most important strategic priority for digital security (55 percent of respondents). This is followed by achieving crypto-agility (51 percent of respondents), which is the ability to efficiently update cryptographic algorithms, parameters, processes and technologies to better respond to new protocols, standards and security threats, including those leveraging quantum computing methods.

To read the key findings in this report, visit DigiCert’s website.

Robot fear is approaching 1960s-levels; that might be a distraction

Bob Sullivan

When I logged onto a Zoom meeting recently, I was offered the chance to let the company use some kind of whiz-bangy AI magic that would summarize the meeting for me. Cool? Maybe. Artificial intelligence? Not by my definition. New? Not really. New name? Absolutely.

I’m sure you’ve had this experience a lot lately.  “AI-powered” marketing-speak is everywhere, sweeping the planet faster than dot com mania ever did.  In fact, it’s come so fast and furious that the White House issued an executive order about AI on Monday. AI hasn’t taken over the planet, but AI-speak sure has. It’s smart to worry about computers taking over the world and doing away with humanity, but I think marketing hype might be AI’s most dangerous weapon.

Look, chatbots are kind of cool and impressive in their own way. New?  Well, as consumers, we’ve all been hijacked by some “smart” computer giving us automated responses when we just want a human being at a company to help us with a problem. The answers are *almost* helpful, but not really.  And chatbots … are…not really new.

I like to remind people who work in this field — before “the Cloud” there were servers.  Before Web 3.0 there was the Internet of Things, and before that, cameras that connected to your WiFi.  Before analytics, and AI, there was Big Data.  Many of these things work better than they did ten or twenty years ago, but it was the magic label — not a new Silicon Valley tech, but a new Madison Avenue slogan — that captured public imagination.  Just because someone calls something AI does not make it so.  It might just be search, or an updated version of Microsoft Bob that isn’t terrible.

I don’t at all mean to minimize concern about tech being used for evil purposes.  Quite the opposite, really. If you read the smartest people I can find right now, this is the concern you’ll hear.  It’s fine to fret about ChatGPT Jr., or ChatGPT’s evil half-brother, making a nuclear bomb, or making it substantially easier to make a nuclear bomb. We’ve been worried about something like that since the 1950s and 60s.  And we should still be concerned about it. But that’s not happening today.

Meanwhile, tech (aka “AI”) is being used to hurt people right now. There’s real concern that all the focus on a sci-fi future is taking attention away from what needs to be done to rein in large technology companies right now.

Big databases have been used to harm people for a long time. Algorithms help decide prison sentences, often based on flawed models and data. (Yes, that is real!) Credit scores rule our lives as consumers. The credit reports on which they are built are riddled with errors. And as people seem to forget, credit scores did virtually nothing to stop the housing bubble. I just read that credit scores are at an all-time high, even though consumer debt is at very high levels and, in a classic toxic combination, interest rates are also very high. So just how predictive are credit scores?

I know this: folks looking to regulate AI/Big Data/algorithmic bias haven’t done nearly enough research into the decades-long battle among credit reporting agencies, consumer advocacy groups and government regulators. Hint: It’s not over.

There is a lot to like in the recent AI executive order.  I’ve long been an advocate that tech companies should include “abuse/harm testing” into new products, the way cybersecurity teams conduct penetration testing to predict how hackers might attack. Experienced, creative technologists should sit beside engineers as they dream up new products and ponder: “If I were a bad person, how might I twist this technology for dark uses?”  So when a large tech firm comes up with a great new tool for tracking lost gadgets, someone in the room will stop them and ask, “How do we prevent enraged ex-partners from using this tech to stalk victims?” Those conversations should be had in real-time, during product development, not after something is released to the world and is already being abused.

Today’s executive order calls for red-teaming and sharing of results with regulators.   In theory, such reports would head off a nuclear-bomb-minded bot at the pass.  Good.  I just hope we don’t race past algorithmic-enhanced racial discrimination in housing decisions — which happens today, and has been happening for decades.

The best piece I read on the executive order appeared in MIT Technology Review — a conversation with Joy Buolamwini, who has a new book out titled Unmasking AI: My Mission to Protect What Is Human in a World of Machines. She’s been ringing the alarm bell on current-day risks for nearly a decade.

For a more direct explanation of the order, read this piece in Wired.

This is a link to the White House fact sheet about the executive order.