Author Archives: BobSulli

Ill-conceived TikTok ban is a missed opportunity; a real privacy law would be far superior

Bob Sullivan

In an age when the U.S. House of Representatives can’t really do anything, it has voted to ban TikTok under its current ownership structure. Color me unimpressed, and more than a bit concerned. I’d say the fact that Congress’ lower house was able to pass the legislation should give everyone pause.

Of course TikTok poses a national risk. So do other platforms, which is why it’s time for Congress to pass comprehensive privacy and national security legislation that forces all platforms to handle personal data with care. But that’s…hard.  Grandstanding that you’ve been tough on TikTok is easy, so that’s what we’re getting.

TikTok’s owner, ByteDance, has been given the choice to divest itself of the popular social media service. I hope you’ll think it’s strange that Congress can force such a sale … of a single company. But setting that aside for the moment, it’s hard to imagine that doing so will solve the problem everyone seems to agree exists — that the Chinese government can access TikTok’s intimate data about its users. Would a sale really stop that? And what of China’s ability to buy such data from any one of hundreds of data brokers in the U.S. willing to sell such information to the highest foreign bidder? (Duke University has tested this theory.)

And as for Chinese ownership, it must be asked, why stop with TikTok? Anyone who’s been online in the past six months has seen the near-ubiquitous ads for a shopping service named Temu and its “shop like a billionaire” tagline. That’s because this Chinese-owned firm has spent billions of dollars advertising with companies like Meta/Facebook.  But Temu has been sued for, essentially, loading its software up with spyware.  You should read this analysis for yourself, but here’s a highlight: “The app has hidden functions that allow for extensive data exfiltration unbeknown to users, potentially giving bad actors full access to almost all data on customers’ mobile devices.”

A well-written law would stop a company from doing what Temu is (accused of) doing before it starts, or make it very easy to shut it down. Instead, Congress is doing something so arbitrary that it has made TikTok into a sympathetic character, which would have seemed impossible a few months ago.

Hopefully, cooler heads will prevail in the Senate, and a better law can emerge from this rare moment of focus on data privacy. If not, I fear this legislation could delay passage of a real, comprehensive federal privacy law.   Congress could mistakenly believe it has solved a problem and turn its meager attention in other directions; and the law could backfire so badly that lobbyists could point to it for years as proof no law should ever be passed that limits big tech’s powers.

I found Alex Stamos’ appearances on NBC networks to be informative; you can watch them here. 

 

Managing Access & Risk in the Operational Technology (OT) Environment

There is growing awareness and concern that the increasing sophistication and severity of potential cyberattacks against the OT environment are putting the critical infrastructure everyone depends upon at serious risk. In some cases, these incidents could lead to the loss of human life. In 2021, a ransomware attack against Colonial Pipeline directly impacted Americans on the East Coast, who were confronted with disrupted fuel supplies.

(To access a webinar Larry participated in about this research, click this link).

In response to such threats, the Cybersecurity & Infrastructure Security Agency (CISA) has recently announced a pilot program “designed to deliver cutting-edge cybersecurity services on a voluntary basis to critical infrastructure entities most in need of support”.

As defined in the research, OT is the hardware and software that monitors and controls devices, processes and infrastructure and is used in industrial settings. OT devices control the physical world while IT systems manage data and applications. Sponsored by Cyolo, the purpose of this research is to learn important information about organizations’ security and control procedures designed to mitigate serious risks in the OT environment. All respondents are knowledgeable about their organizations’ approach to managing OT system access and risk. The average annual IT security budget is $55 million and an average of $11.5 million of the budget is allocated to OT security activities.

Uncertainty about the number and types of assets in the OT environment puts organizations at significant risk. A key takeaway from the research is that organizations lack visibility into the industrial assets in their OT environment, making it difficult to ensure they are secure from potential cyberattacks. Only 27 percent of respondents say their organizations maintain an inventory of the industrial assets in their OT environment. Worse, 38 percent of respondents say their organizations have an inventory, but it may not be accurate or current.
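To make the inventory problem concrete, here is a minimal sketch of what an OT asset inventory with a staleness check might look like. The record fields and the 30-day staleness threshold are illustrative assumptions, not something drawn from the research itself:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class OTAsset:
    asset_id: str
    asset_type: str      # e.g. "PLC", "HMI", "sensor"
    firmware: str
    last_seen: datetime  # last time the asset responded to discovery


def stale_assets(inventory, now, max_age=timedelta(days=30)):
    """Return assets not seen recently -- the entries most likely to make
    an inventory 'not accurate or current', as the survey puts it."""
    return [a for a in inventory if now - a.last_seen > max_age]
```

Even a simple last-seen timestamp like this lets a team distinguish a maintained inventory from a decaying one, which is the gap the 27 percent vs. 38 percent figures describe.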

Following are findings that illustrate the importance of aligning IT and OT priorities and improving communication between the two functions.

The lack of alignment between IT and OT can result in conflicting priorities about the importance of securing the OT environment despite the risk. As shown in the findings, secure access is a high or very high priority for only slightly more than half of respondents (51 percent), and only slightly more than half (55 percent of respondents) say their organizations are very effective (33 percent) or highly effective (22 percent) in reducing risks and security threats.

Without regular communication between IT and OT teams, the goal of collaboration and alignment is difficult to achieve.  Collaboration between the two teams is critical to ensuring consistent policies and processes are in place to secure access between IT and OT systems. However, only 39 percent of respondents say collaboration between the two teams is significant. Thirty-eight percent of respondents say the only time the teams communicate is on an ad-hoc basis (19 percent) or when a security incident occurs (19 percent). Only 16 percent of respondents say the teams communicate daily (6 percent) or weekly (10 percent).

OT and IT teams share responsibilities without regular communication. As discussed above, more regular communication is needed. OT cybersecurity responsibilities are mostly shared by the IT and OT teams (39 percent of respondents). Thirty-two percent say IT is solely responsible for managing the OT environment and 30 percent say OT is solely responsible for managing the OT environment.

Communication with senior leadership and boards of directors is also rare and may contribute to respondents’ concerns about the allocation of needed resources. Thirty percent of respondents say senior leadership and/or board members are updated on the OT security posture, policies and practices in place on an ad-hoc basis. Only 23 percent say they communicate frequently (10 percent say monthly and 13 percent say quarterly). Without being briefed on a regular basis, it may be difficult to convince senior leaders of the importance of increasing budget and in-house expertise.

Organizations are making progress in achieving secure connectivity between IT and OT systems. Most organizations (81 percent) say they have a goal of achieving convergence to be able to transmit data in one or both directions. Thirty-three percent of respondents say their organizations have established policies, tools, governance and reporting in place to control and monitor connectivity between IT and OT systems. Another 24 percent of respondents say that they have some policies in place to govern access between IT and OT systems.

Securing the OT infrastructure is the responsibility of senior executives in OT and IT. The two roles most involved in securing the OT infrastructure are the OT Vice President/Director (32 percent of respondents) and the CIO or CTO (29 percent of respondents). To strengthen the security posture in the OT environment, close collaboration between these two roles is needed and includes the deployment and integration of traditional IT security solutions as well as industrial control system (ICS) protocols and assessments.

The following findings illustrate the progress being made in IT/OT convergence.

To advance IT/OT convergence, more organizations should adopt a blend of IT and OT security solutions. When asked how their organizations plan to introduce new tools to better secure the OT infrastructure, only 32 percent of respondents say their organizations are using a blend of IT and OT security solutions and 19 percent of respondents say they plan to expand existing IT security solutions to secure the OT infrastructure.

Convergence is considered important, but organizations are concerned about its impact on the OT environment. While IT/OT convergence can improve connectivity and the OT environment, more than half of organizations represented are very or highly concerned about the impact of convergence on the availability of IT systems/services (52 percent of respondents) and the safety and uptime of the OT environment (56 percent of respondents).

Convergence is considered to reduce security risks and improve the ability of IT and OT teams to collaborate. The benefits when IT/OT connectivity is increased include a reduction in security risks (59 percent of respondents), improvement in the ability of IT and OT teams to collaborate (57 percent of respondents) and the ability to respond quickly to unplanned asset downtime (38 percent of respondents).

To achieve convergence, organizations need to have the budget, the ability to ensure security and collaboration between the IT and OT teams. The top three challenges to connecting IT/OT environments are the lack of budget (42 percent of respondents), security risks (35 percent of respondents) and siloed teams (32 percent of respondents). Those organizations that have no plans for connectivity blame the lack of budget, siloed teams and pushback from the OT team.

The following findings reveal the steps needed to improve secure access to the OT environment by internal teams and third parties.

The OT environment is heavily regulated and should drive investments in security solutions and in-house expertise to reduce risks and security threats. Eighty-one percent of respondents say their organizations must comply with regulations today (59 percent) or in the future (25 percent). Noncompliance can potentially result in costly fines.

The ease of accessing the OT environment by both internal teams and third parties using current tools does not receive high marks. Access is important to be able to extend IT and security tools into OT environments, to observe processes and/or check sensors and to increase productivity. However, only half of respondents (49 percent) say the access experience is positive. Similarly, only 43 percent of respondents say the vendor/third-party experience of accessing OT systems with current tools is very good or excellent.

The importance of third parties in maintaining and supporting OT/ICS environments should make securing their access a priority. Third parties include all types of external suppliers, partners, service providers and contractors who perform important work for the organization but are not direct employees. Because of the complexity and specialized systems in OT/ICS environments, it is important to have third parties who can provide product/system support and maintenance.

According to the findings, an average of 77 third parties/vendors are authorized to connect to the OT systems represented in this research. Of the 73 percent of respondents who say their organizations permit access to the OT environment, 30 percent say they limit vendor/third-party access to on-site only and 43 percent say third parties can access systems both on-site and remotely. Only 27 percent of respondents say their organizations do not allow third-party access.

Organizations need to address the risk of third parties’ unauthorized access. Forty-four percent of respondents say the top challenge is preventing unauthorized access, and 40 percent of respondents say it is keeping third-party access secure. Another top challenge is the lack of alignment between IT and OT security priorities regarding third-party risks.

Allowing third party access is needed to maintain operations and prevent downtime, but there should be greater awareness and attention to potential risks.  Only 44 percent of respondents say their organizations are very or highly concerned about vendors/third parties accessing its OT environment.

To read the rest of this report, visit Cyolo.com at this link. You can also access a webinar that Larry participated in discussing the report.

The State of Cybersecurity Insurance Adoption

The cost of a single data breach, ransomware attack or other security incident can adversely impact the most solid financial balance sheet. The growing threat from sophisticated cybercriminals targeting organizations of all sizes elevates cybersecurity insurance from an IT security concern to a critical business priority, demanding the attention of senior leadership and boards of directors. But what are the limitations of these cybersecurity policies and what are the benefits and hurdles to purchasing a policy that protects organizations? In the event of a cyberattack, how satisfied are organizations with their insurers’ response? Sponsored by Recast Software, the purpose of this research is to address these questions and help organizations prepare for the purchase of insurance.

It’s about the money. Respondents do not expect any decrease in cyber risks targeting their organizations. Instead, according to 75 percent of respondents, their organizations’ exposure will increase (47 percent) or at best stay the same (28 percent). As cyberattacks increase in severity and sophistication, the potential for a significant financial consequence is becoming more likely. According to 61 percent of respondents, the total financial impact of all security exploits and data breaches experienced by their organizations since purchasing insurance averaged $21 million.

The top two reasons for purchasing insurance are the increasing number of cybersecurity incidents (41 percent of respondents) and concerns about the financial impact (40 percent of respondents). According to the research, 65 percent of respondents say their organizations are purchasing limits between $6 million and more than $100 million. However, 50 percent of respondents say it is difficult to comply with insurers’ requirements. More than half (51 percent) of respondents say insurers require regular scanning for vulnerabilities that need to be patched.

Ponemon Institute surveyed 631 IT and IT security practitioners in the United States who are familiar with cyber risks facing their companies and have knowledge about their organizations’ use of cybersecurity insurance. Seventy-six percent of respondents say their organizations have completed the purchase and 24 percent of respondents say their organizations are in the process.

 

In this section, we provide an analysis of the research. The complete findings are presented in the Appendix of this report. The report is organized according to the following topics.

 

  • What keeps organizations’ IT security posture from being strong?
  • How helpful is cybersecurity insurance in protecting organizations from adverse financial consequences?
  • Dealing with the hurdles organizations face when purchasing cybersecurity insurance

 What keeps organizations’ IT security posture from being strong?

 Technology and governance challenges are affecting the ability to improve organizations’ security posture. Less than half (49 percent) of respondents rate their IT security posture in terms of its effectiveness at mitigating risks, vulnerabilities and attacks across the enterprise as very effective. The primary reasons are the ineffectiveness of security technologies and the complexity of the IT security environment.

Other challenges that need to be addressed are having a complete inventory of third parties with access to their sensitive and confidential data, keeping senior management up to date about threats facing their organizations and convincing management that cyberattacks are a significant risk.

Understanding the level of cyber risk is important because organizations realize cyber threats are not decreasing. Sixty-three percent of respondents say they assess the level of cyber risk to their organizations. According to 75 percent of respondents, cyber risks will increase (47 percent) or stay the same (28 percent).

The internal assessments are informal (23 percent) or formal (21 percent). However, 37 percent of respondents say their organizations do not do any type of assessment (21 percent) or rely on intuition or gut feel (16 percent). Only 19 percent hire an independent third party to conduct the assessment.

How helpful is cybersecurity insurance in protecting organizations from adverse financial consequences?

 Cybersecurity insurance can improve organizations’ security posture. As reported, 76 percent of respondents have completed the purchase of cyber insurance. On average, these organizations have held their policies for two years, which gives them an understanding of the benefits and effectiveness of cyber insurance.

Almost half (49 percent) of respondents say that, following the purchase of cybersecurity insurance, their cybersecurity posture improved greatly or significantly. However, 48 percent of these respondents changed insurance companies. The primary reasons for the change were the cancellation of the policy or the high expense.

Since purchasing cybersecurity insurance, the threats to organizations did not decrease. While only 27 percent of respondents say cyberattacks have increased and only 17 percent of respondents say their IT security costs have increased, 45 percent and 44 percent of respondents say cyberattacks and IT security costs, respectively, have stayed the same.

Forty-three percent of respondents say cyber insurance coverage is sufficient with respect to coverage terms and conditions, exclusions, retentions, limits and insurance carrier financial security. Sixty-seven percent of respondents are extremely satisfied (23 percent), very satisfied (21 percent) or satisfied (23 percent) with coverage.

The financial consequences of all security exploits and data breaches experienced since the purchase of insurance average $21 million, which includes out-of-pocket expenditures such as ransom payments and consultant and legal fees, as well as indirect business costs such as productivity losses, diminished revenues, legal actions, customer turnover and reputational damage. Sixty-one percent of respondents experienced a significantly disruptive security exploit or data breach since the purchase of cybersecurity insurance.

Fifty-three percent of respondents say their organizations filed a claim following the incident, and an average of 46 percent of the losses were covered, or approximately $9.7 million. When asked how satisfied their organizations were with the insurance company’s response to the claim, less than half (46 percent of respondents) were very or highly satisfied with the response.

And 65 percent of respondents say their organizations have experienced cyberattacks such as ransomware or denial of service, and 61 percent of respondents say cyberattacks have resulted in the misuse or theft of business confidential information, such as intellectual property.

Dealing with the hurdles organizations face when purchasing cybersecurity insurance

Insurance companies’ assessment of organizations’ security posture is mainly focused on the existence of an adequate budget. Only half (50 percent) of respondents say the insurance company assesses their security posture. If they do, it is to determine if there is an adequate budget (65 percent of respondents). Other factors include evidence of security and training programs conducted (52 percent of respondents), effectiveness of the incident response team (45 percent of respondents) and the ability to detect and prevent cyberattacks (45 percent of respondents).

To read the rest of this report, visit the ReCastSoftware.com website.

Taylor Swift, the FCC deepfake ban, and why you are the last (only?) line of defense

Twitter (X) briefly blocked Taylor Swift searches in reaction to deepfake posts. A good, if brutal, response.

Bob Sullivan

Hold on tight, fellow humans, there’s artificial turbulence ahead.  Like it or not, the time has come to stop believing what you see, what you hear, and perhaps even what you think you know. Reality is indeed under attack and it’s up to us to preserve it.  The only way to beat back this futuristic nightmare is with old-fashioned skepticism.

Lately, it feels like all anyone wants to talk about is AI and how it’s going to make life much easier for criminals, and much harder for you.  I’ve annoyed several interviewers recently by saying I don’t believe the hype. There is not an avalanche of voice-cloning criminals out there manipulating victims by creating fake wailing kids claiming to need bail money.  The so-called grandparent scam has operated successfully for many years without AI.  But I think that misses the point. First of all, as many journalists have demonstrated (even me!) it’s trivial to create deepfakes now. An expert cloned my voice for $1. But more important, a recent offensive, vile Taylor Swift deepfake was viewed 47 million times before it was removed from most of social media.  This kind of violation is here, today, and it’s going to be very hard to stop.

There are celebrated efforts, of course. The FCC just made voice cloning in scams explicitly illegal, which is certainly welcome, but if FCC efforts to stop robocalling are a guide, AI scams won’t be stopped by this. There are also some high-tech efforts to separate what’s real from what’s fake, and that’s also welcome. Watermarking — even in audio files — can be used by software to declare items as AI-generated, so our gadgets can tell us when a Joe Biden video has been manipulated. Naturally, I wish tech companies had built such safety tools into their AI-generating software in the first place, but this kind of retrofitting is what we’ve come to expect from Big Tech.

I don’t have high hopes that an “AI-generated” label on a negative presidential candidate video is going to do much to stop the coming attack on reality, however. I’m afraid to say this, but it’s true: the problem, dear Brutus, lies not in our stars but in ourselves.

I am the last person to lump responsibility for the failures of billion-dollar tech companies onto busy human beings.  And that’s not what I’m doing here. I still want tech workers to speak up when managers ask them to make tools that can be used to hurt people. I still want regulators to staff up and lock down companies that behave recklessly.  But when it comes to defending reality, the truth is, we are on our own right now.  Human beings are going to have to develop radical inquisitiveness when it comes to things we see, hear, and feel while interacting with technology.

This is going to be hard. Many of us want to see a video of our least-favorite politician looking stupid.  A large number want to see “exclusive” video of famous people in….candid…moments.  We would love for them to contact us directly and offer to be our friend, or even our lover.

We have to help each other learn to resist these base urges, to choose reality over this dark fantasy world that’s being foisted on us.

As is often the case with tech crises, this problem isn’t really new. Marketers have always manipulated consumers. Propagandists have always lied to populations. Many dark periods of history can be blamed on large groups failing to exercise proper skepticism, their prejudices and predispositions used against them. What’s different about our time is the scale. As we learned back in 2016, a room full of typists halfway around the world can persuade thousands of Americans to attend real-world rallies. The tools for liars and criminals are very powerful; we have to respond with equal force.

I recently interviewed Professor Jonathan Anderson, an expert in artificial intelligence and computer security at Memorial University in Canada, about this problem, and he’s persuaded me that humans must react by adjusting to this new “reality” of un-reality. We must stop believing what we see and hear. And there is precedent for this. At the dawn of photography, many people believed that photos couldn’t lie. Most folks now know that it’s trivial to manipulate images, and react accordingly, perhaps even on a subconscious level. If you see something that doesn’t look right — a man’s head on an animal’s body — your first instinct is to react as if Photoshop is the culprit. Hopefully, we’ll all engage in a learning curve now where this is how we react to any media that’s unexpected, be it a fake desperate child, a celebrity asking to meet with us, or a politician doing something foolish.

My fear is that people will still believe what they want to believe, however.  A “red” person will believe only “blue” fakes, and vice versa.  And that, in my view, is the greatest threat to reality right now.

The growing risk of payment transaction fraud

Business payment transactions are massive. In 2020, business transactions in the US alone were $23 trillion. By 2028, the value of business transactions worldwide is expected to increase to $200 trillion[1]. Such growth should make increasing the security of business payment transactions a priority.

The objectives of this research, conducted by Ponemon Institute and sponsored by Creednz, are to understand the vulnerabilities in organizations’ current payment transaction processes and if controls in place are effective in mitigating the risk. We surveyed 659 professionals in finance, accounting, treasury and risk and compliance. All respondents are familiar with their organizations’ approach to managing payment transaction fraud.

A key takeaway from this research is organizations’ lack of confidence in their existing controls and in their ability to keep enough staff to address the risk of payment transaction fraud. Eighty-eight percent of respondents say their organizations had at least one payment transaction fraud incident in the past two years. The average cost of these incidents was $149,225. Seventy-six percent of respondents say it took more than a month to more than a year to discover and remediate these incidents.

As payment transactions grow, fraud is expected to increase. Fifty-four percent of respondents say fraud will increase significantly (23 percent) or increase (31 percent). The primary reasons are the increasing sophistication of fraudsters (59 percent), lack of resources and technology systems to proactively identify accounts payable/receivable fraud (56 percent), and vulnerabilities in business processes (53 percent).

“A manual process leads to fraud, and those that get blamed are the ones who literally clicked the ‘send’ button,” said Creednz Co-founder and CEO Johnny Deutsch. “It makes no sense in today’s world that technology isn’t being utilized and applied to prevent this in the first place. Creednz was built from the ground up to give finance teams the tools to fight back and protect against payment fraud and safeguarding corporate finances.”

The following findings indicate why payment transaction fraud is a growing risk.

Finance and Treasury may be considered most responsible for preventing fraud, but IT security/SecOps is in the accountability hot seat. Finance and Treasury are most responsible according to 17 percent and 13 percent of respondents, respectively. Most accountable for reducing payment transaction fraud is IT security/SecOps (34 percent of respondents).

Confidence in banks and current controls to prevent payment transaction fraud is low. Only 32 percent of respondents are confident or very confident that their banks would verify and stop a suspicious transaction and only 30 percent of respondents have confidence in their current controls.

IT security/SecOps and IT operations are most likely to get fraud-related alerts from their banks. IT security/SecOps (67 percent of respondents) and IT operations (55 percent of respondents) are most often notified about payment transaction fraud. Fifty-two percent of respondents say Finance and 37 percent of respondents say Treasury are likely to get fraud alerts. As discussed, IT security/SecOps is the function considered most accountable for preventing fraud.

Most organizations have experienced at least one payment transaction fraud incident in the past two years. Eighty-eight percent of respondents say their organizations had at least one payment transaction fraud incident in the past two years. More than half (51 percent) of respondents say their organizations had at least four incidents.

The average cost of these incidents (not including internal investigations, legal fees and fines and loss of shareholder confidence) was $149,225. Seventy-six percent of respondents say it took more than a month to more than one year to discover and mitigate these incidents.

Following the fraud, the primary step taken was to invest in technologies. Sixty-five percent of respondents say their organizations purchased technologies to reduce the time it takes to detect fraud, and 63 percent say they purchased technologies to prevent business transaction fraud. Only 44 percent of respondents say the fraud was immediately reported to senior management.

Concerns about the organization’s ability to be good custodians can damage reputations. The number one negative consequence following the fraud incident was damage to the organization’s reputation with business partners and consumers (60 percent of respondents). This is followed by non-compliance with regulations (51 percent of respondents) and loss of shareholders’ confidence (46 percent of respondents).

Keeping staff is the number one challenge to mitigating payment transaction fraud. Staff shortages (56 percent of respondents), the inability to systematically control risks created by third parties or vendors (54 percent of respondents) and the worry that staff would leave in the event of payment transaction fraud (51 percent of respondents) are the top challenges.

Organizations (69 percent of respondents) are most likely to use bank account access privileges to minimize payment transaction fraud. User entitlements determine the level of banking application access. A user must be entitled to account access to perform tasks. Unless a user is given full entitlements, each user’s access, and the level of that access, is defined when entitlements are granted.
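The entitlement model described above can be sketched in a few lines. The user names, account identifier and action names here are hypothetical, chosen only to illustrate the idea that each user’s permitted actions on each account are fixed when entitlements are granted:

```python
# Illustrative entitlement table: (user, account) -> set of permitted actions.
# In a real banking application this would live in the bank's entitlement
# system, not in application code.
ENTITLEMENTS = {
    ("alice", "operating-001"): {"view", "initiate"},
    ("bob",   "operating-001"): {"view", "approve"},
}


def is_entitled(user: str, account: str, action: str) -> bool:
    """A user may perform an action only if it was granted explicitly."""
    return action in ENTITLEMENTS.get((user, account), set())
```

Splitting "initiate" and "approve" between two users, as in this sketch, is one common way such entitlements enforce dual control over outgoing payments.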

Sixty-four percent of respondents say their organizations conduct a daily review and approval of all outgoing payment transactions, 61 percent of respondents say their organizations have added ACH blocks or ACH filters and 60 percent of respondents use multi-factor authentication.
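The ACH blocks and filters mentioned above generally work by allowing debits only from pre-approved originators. The following is a simplified sketch of that screening logic; the company IDs, dollar threshold and three-way allow/review/block outcome are illustrative assumptions, not details from the research:

```python
# Sketch of an ACH debit filter: only debits from pre-approved originator
# company IDs pass, and unusually large debits are flagged for review.
APPROVED_ORIGINATORS = {"1234567890", "0987654321"}  # hypothetical company IDs


def screen_ach_debit(originator_id: str, amount_cents: int,
                     max_amount_cents: int = 50_000_00) -> str:
    """Return 'allow', 'review' or 'block' for an incoming ACH debit."""
    if originator_id not in APPROVED_ORIGINATORS:
        return "block"
    if amount_cents > max_amount_cents:
        return "review"
    return "allow"
```

An ACH block is the degenerate case of this filter with an empty approved list: every debit is rejected.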

The following findings reflect perceptions about the security of banking relationships from the 46 percent of respondents who are in corporate finance and treasury.

Almost half (48 percent) of corporate finance and treasury respondents say their organizations have bank accounts outside the US and Canada. Seventy-five percent of these respondents have a minimum of 21 to more than 50 bank accounts, and 48 percent say these accounts are located outside of North America. Seventy-seven percent of respondents say bank accounts are located in EMEA, 69 percent of respondents say LATAM and 64 percent of respondents say Asia-Pac.

Review of user entitlements is not frequent. Sixty-one percent of respondents say their organizations review user entitlements. However, almost half (47 percent) of respondents say these reviews occur annually (23 percent) or as needed (24 percent). In addition, organizations are not auditing bank accounts as often as they should. Fifty-five percent of respondents say their organizations are auditing bank accounts to verify such factors as permissions, stale users and signatory rights policies. However, 68 percent of these respondents say they only conduct the audits quarterly (28 percent), annually (19 percent) or as needed (21 percent).

The inability to know all users with entitlement privileges is the most significant challenge to managing user entitlement privileges, according to 40 percent of respondents. Forty-one percent of respondents say there is no formal process to monitor changes to their user bank account entitlement process and 36 percent of respondents say monitoring is only done as needed. Only 23 percent of respondents say IT security is relied upon to monitor changes. Other challenges not as significant are the lack of information related to signatory rights (28 percent of respondents) or the inability to conduct regular reviews (27 percent of respondents).

The findings from all respondents are shown below.

Most organizations are concerned about potential fraud when making payments to third parties and vendors outside the US and Canada. Fifty-nine percent of respondents say payments are made to overseas organizations and 76 percent of respondents say they are concerned or very concerned about the potential of fraud when making payments to these regions. Of the 59 percent of respondents making payments overseas, 70 percent of respondents say payments are made to third parties in EMEA, 65 percent of respondents say LATAM and 55 percent of respondents say Asia-Pac.

The results of risk assessments determine how payment policies are applied. Sixty-one percent of respondents say payment policies are applied consistently without consideration of risk, location and the length of the relationship. However, 39 percent of respondents say payment policies are not applied consistently. Of these respondents, 58 percent say payment policies differ based on risk assessments, followed by 45 percent who say policies are based on the location of the vendor/third party/contractor.

When organizations validate bank account details, most use a third-party service. Only 36 percent of respondents say their organizations validate bank account details all the time. Of those that do validate, 64 percent use a third-party service. This may be due to the challenge of not having enough staff dedicated to mitigating payment transaction fraud. Forty-nine percent of respondents say a phone call is made to validate vendor bank account details.

Sixty-two percent of respondents say email is used to exchange account details with vendors/third parties/contractors, while 59 percent of respondents use a more secure exchange through a vendor portal.

Concern about the corruption of the payment file is the reason many respondents are not confident in their vendor management vetting process. Fifty-two percent of respondents cite possible corruption of the payment file, and 48 percent cite human error, as reasons the third-party vetting process is not reducing the risk of payment transaction fraud.

To read the full report, visit the Creendnz.com website.

What self-checkout failures can teach us about artificial intelligence

Bob Sullivan

We all want tech to solve the world’s most vexing problems, and we’ve all spent time with tech that hasn’t lived up to its promises. The printer that lets us down as we’re rushing to a meeting. The concert ticket app that suddenly fails as we’re trying to get into a show. The helpful website customer chat tool that only knows how to say, “I’m sorry you’re having trouble.”  Tech might feel like the solution to every challenge; clearly, it’s not.  And it often creates as many problems as it solves.

And that takes me to self-checkout lines at retail stores.  We’ve all seen them work; we’ve all seen them fail.  I want people to remember this as we race headlong into the AI world.

There has been a flurry of articles recently about the failed experiment of self-checkout. How big stores are backsliding on kiosk installations. How consumers have come to realize they are doing extra work, often enduring extra frustration, and not getting anything for their trouble. How the lack of human beings at store exits contributes to shoplifting.

People in the industry say these complaints are overblown. But there is no denying it: self-checkout is often a headache. And it gets no beta-test bye from me. We’ve been forcing people to find the code for green peppers (organic or not?) since the 1990s.  That’s a generation of shoppers’ time to work out the kinks.

I know many of you love running into a drugstore and running out with one tube of toothpaste without having to wait for a store clerk to take your money and make your change.  As usual, my criticism is not sweeping or all-inclusive.  Quite the contrary. I want people to remember that new technologies should be implemented without forcing people to abandon old ways.  The solutions tech offers are never all-inclusive.  There are compromises to be made. There are exceptions to be handled. The obvious solution for stores is to have a better mix of human clerks and self-checkout machines.

Unfortunately, that mix is hampered by the Trojan horse fighting all good technology implementations — above all, cost cutting. See, self-checkout is really about minimizing minimum wage employees, not getting the best out of a new human invention. In a capitalist society, that pressure will always exist. But even here, this approach is short-sighted and doesn’t pencil out.  There is evidence that labor cost savings from self-checkout are erased by increased shoplifting.  And we’ve all seen stores that end up hiring nearly as many self-checkout hand-holders as they had cashiers.

So when we introduce new tech, blended, human-friendly solutions should be the first impulse, not a last resort. After 9/11, there was so much talk (and snake oil) around facial recognition.   Most of those implementations failed spectacularly, and we found that talented, attentive security guards are better at keeping us safe.  Slowly, airport security tech has gotten better, and it can now stand alongside human talent to protect our flights.  But any rush to find a gadget cure-all is doomed to fail.

Here’s another example I’ve thought about often. Not long ago, many believed virtual reality would soon be ubiquitous.  Today, you barely hear the term. Creating a fully immersive experience that fools the mind turns out to be incredibly hard. But augmented reality — where repair instructions project over a technician’s genuine eyesight as they try to complete a tricky repair — has great potential.  In a similar way, self-driving cars are a pipe dream.  But driver assistance technology is here today, works incredibly well, and holds the promise of soon dramatically reducing car accidents.

So as artificial intelligence continues to creep into our offices and our consumer products, let’s make sure people are leading the way, not forced to follow it. Otherwise, we’ll end up like so many consumers, standing in line, waiting for someone to come verify our driver’s license so we can buy cooking wine.

 

Cyber Insecurity in Healthcare: The Cost and Impact on Patient Safety and Care 2023

A strong cybersecurity posture in healthcare organizations is important to not only safeguard sensitive patient information but also to deliver the best possible medical care. This study was conducted to determine if the healthcare industry is making progress in achieving these two objectives.

With sponsorship from Proofpoint, Ponemon Institute surveyed 653 IT and IT security practitioners in healthcare organizations who are responsible for participating in such cybersecurity strategies as setting IT cybersecurity priorities, managing budgets and selecting vendors and contractors.

The report, “Cyber Insecurity in Healthcare: The Cost and Impact on Patient Safety and Care 2023,” found that 88% of the surveyed organizations experienced at least one cyber attack in the past 12 months, with an average of 40 attacks among those affected. The average total cost of a cyber attack experienced by healthcare organizations was $4.99 million, a 13% increase from the previous year.

Among the organizations that suffered the four most common types of attacks—cloud compromise, ransomware, supply chain, and business email compromise (BEC)—an average of 66% reported disruption to patient care. Specifically, 57% reported poor patient outcomes due to delays in procedures and tests, 50% saw an increase in medical procedure complications, and 23% experienced increased patient mortality rates. These numbers reflect last year’s findings, indicating that healthcare organizations have made little progress in mitigating the risks of cyber attacks on patient safety and wellbeing.

According to the research, 88 percent of organizations surveyed experienced at least one cyberattack in the past 12 months. For organizations in that group, the average number of cyberattacks was 40. We asked respondents to estimate the single most expensive cyberattack experienced in the past 12 months from a range of less than $10,000 to more than $25 million. Based on the responses, the average total cost for the most expensive cyberattack was $4,991,500, a 13 percent increase over last year. This included all direct cash outlays, direct labor expenditures, indirect labor costs, overhead costs and lost business opportunities.

At an average cost of $1.3 million, disruption to normal healthcare operations because of system availability problems was the most expensive consequence of the cyberattack, an increase from an average of $1 million in 2022. Users’ idle time and lost productivity because of downtime or system performance delays cost an average of $1.1 million, the same as in 2022. The cost of the time required to ensure the impact on patient care was corrected increased from an average of $664,350 in 2022 to $1 million in 2023.

“While the healthcare sector remains highly vulnerable to cybersecurity attacks, I’m encouraged that industry executives understand how a cyber event can adversely impact patient care. I’m also more optimistic that significant progress can be made to protect patients from the physical harm that such attacks may cause,” said Ryan Witt, chair, Healthcare Customer Advisory Board at Proofpoint. “Our survey shows that healthcare organizations are already aware of the cyber risks they face. Now they must work together with their industry peers and embrace governmental support to build a stronger cybersecurity posture—and consequently, deliver the best patient care possible.”

The report analyzes four types of cyberattacks and their impact on healthcare organizations, patient safety and patient care delivery:

Cloud compromise. The most frequent attacks in healthcare are against the cloud, making it the top cybersecurity threat, according to respondents. Seventy-four percent of respondents say their organizations are vulnerable to a cloud compromise. Sixty-three percent say their organizations have experienced at least one cloud compromise. In the past two years, organizations in this group experienced an average of 21 cloud compromises. Sixty-three percent say they are concerned about the threat of a cloud compromise, an increase from 57 percent in 2022.

Business email compromise (BEC)/spoofing phishing. Concerns about BEC attacks have increased significantly. Sixty-two percent of respondents say their organizations are vulnerable to a BEC/spoofing phishing incident, an increase from 46 percent in 2022. In the past two years, the frequency of such attacks increased as well from an average of four attacks to five attacks.

Ransomware. Ransomware has declined as a top cybersecurity threat. Sixty-four percent of respondents believe their organizations are vulnerable to a ransomware attack. However, concern about ransomware has decreased from 60 percent in 2022 to 48 percent in 2023. In the past two years, organizations that had ransomware attacks (54 percent of respondents) experienced an average of four such attacks, an increase from three attacks. While fewer organizations paid the ransom (40 percent in 2023 vs. 51 percent in 2022), the average ransom paid increased nearly 30 percent, from $771,905 to $995,450.

Supply chain attacks. Organizations are vulnerable to a supply chain attack, according to 63 percent of respondents. However, only 43 percent say this cyber threat is of concern to their organizations. On average, organizations experienced four supply chain attacks in the past two years.

As in the previous report, an important part of the research is the connection between cyberattacks and patient safety. Following are trends in how cyberattacks have affected patient safety and patient care delivery.

  • It is more likely that a supply chain attack will affect patient care. Sixty-four percent of respondents say their organizations had an attack against their supply chains. Seventy-seven percent of those respondents say it disrupted patient care, an increase from 70 percent in 2022. Patients were primarily impacted by delays in procedures and tests that resulted in poor outcomes such as an increase in the severity of an illness (50 percent) and a longer length of stay (48 percent). Twenty-one percent say there was an increase in mortality rate.
  • A BEC/spoofing attack can disrupt patient care. Fifty-four percent of respondents say their organizations experienced a BEC/spoofing incident. Of these respondents, 69 percent say a BEC/spoofing attack against their organizations disrupted patient care, a slight increase from 67 percent in 2022. And of these 69 percent, 71 percent say the consequences caused delays in procedures and tests that have resulted in poor outcomes while 56 percent say it increased complications from medical procedures.
  • Ransomware attacks can cause delays in patient care. Fifty-four percent of respondents say their organizations experienced a ransomware attack. Sixty-eight percent of respondents say ransomware attacks have a negative impact on patient care. Fifty-nine percent of these respondents say patient care was affected by delays in procedures and tests that resulted in poor outcomes and 48 percent say it resulted in longer lengths of stay, which affects organizations’ ability to care for patients.
  • Cloud compromises are least likely to disrupt patient care. Sixty-three percent of respondents say their organizations experienced a cloud compromise, but less than half (49 percent) say cloud compromises disrupted patient care. Of these respondents, 53 percent say these attacks increased complications from medical procedures and 29 percent say they increased mortality rate. 
  • Data loss or exfiltration disrupts patient care and can increase mortality rates. All organizations in this research had at least one data loss or exfiltration incident involving sensitive and confidential healthcare data in the past two years. On average, organizations experienced 19 such incidents in the past two years and 43 percent of respondents say they impacted patient care. Of these respondents, 46 percent say it increased the mortality rate and 38 percent say it increased complications from medical procedures. 

Other key trends in cyber insecurity

Concerns about threats related to employee behaviors increased significantly. Substantially more organizations are now worried about the security risks created by employee-owned devices (BYOD), an increase from 34 percent of respondents in 2022 to 61 percent in 2023. Concerns about BEC/spoof phishing increased from 46 percent to 62 percent in 2023.

Disruption to normal healthcare operations because of system availability problems increased to $1.3 million from $1 million in 2022. Users’ idle time and lost productivity because of downtime or system performance delays averaged $1.1 million. The cost of the time taken to ensure impact on patient care was corrected increased to $1 million in 2023 from $664,350 in 2022.

Accidental data loss is the second highest cause of data loss and exfiltration. Accidental data loss can occur in many ways, such as employees misplacing or losing devices that contain sensitive information, or mistakes made when employees are emailing documents with sensitive information. Almost half of respondents (47 percent) say their organizations are very concerned that employees do not understand the sensitivity and confidentiality of documents they share by email.

More progress is needed to reduce the risk of data loss or exfiltration. All healthcare organizations in this research have experienced at least one data loss or exfiltration incident involving sensitive and confidential healthcare data. The average number of such incidents is 19.

Cloud-based user accounts/collaboration tools that enable productivity are most often attacked. Fifty-three percent of respondents say project management tools and Zoom/Skype/video conferencing tools at some point were attacked.

Organizations continue to deploy a combination of approaches to user access and identity management in the cloud (56 percent of respondents). These include separate identity management interfaces for the cloud and on-premises environments, unified identity management interfaces for both the cloud and on-premises environments, and deployment of single sign-on.

The lack of preparedness to stop BEC/spoof phishing and supply chain attacks puts healthcare organizations and patients at risk.  While BEC/spoof phishing is considered a top cybersecurity threat, only 45 percent of respondents say their organizations include steps to prevent and respond to such an attack as part of their cybersecurity strategy. Similarly, only 45 percent say their organizations have documented the steps to prevent and respond to attacks in the supply chain. Malicious insiders are seen as the number one cause of data loss and exfiltration — however, only 32 percent say they are prepared to prevent and respond to the threat.

The primary deterrents to achieving an effective cybersecurity posture are a lack of in-house expertise, staffing and budget. Fifty-eight percent of respondents, an increase from 53 percent in 2022, say their organizations lack in-house expertise and 50 percent say insufficient staffing is a challenge. Those citing insufficient budget increased from 41 percent to 47 percent in 2023.

Security awareness training programs continue to be the primary step taken to reduce the insider risk. Negligent employees pose a significant risk to healthcare organizations. More organizations (65 percent in 2023 vs. 59 percent of respondents in 2022) are taking steps to address the risk of employees’ lack of awareness about cybersecurity threats. Of these respondents, 57 percent say they conduct regular training and awareness programs. Fifty-four percent say their organizations monitor the actions of employees.

The use of identity and access management solutions to reduce phishing and BEC attacks has increased from 56 percent of respondents in 2022 to 65 percent in 2023. The use of Domain-based Message Authentication, Reporting and Conformance (DMARC) increased from 38 percent in 2022 to 43 percent in 2023.
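For context, DMARC works by publishing a sending domain's email-authentication policy in DNS: receiving mail servers look up a TXT record at `_dmarc.<domain>` and apply the stated policy to mail that fails SPF/DKIM alignment. An illustrative record (the domain and report address here are placeholders) looks like:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

The `p=` tag tells receivers what to do with unauthenticated mail (none, quarantine or reject), and `rua=` is the address where receivers send aggregate reports, which is how an organization learns who is spoofing its domain.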

To read the full report, download it from Proofpoint’s website.

Someone made a deepfake of my voice for a scam! (With permission…)

Bob Sullivan

“I need help. Oh my God! I hit a woman with my car,” the fake Bob says. “It was an accident, but I’m in jail and they won’t let me leave unless I come up with $20,000 for bail …Can you help me? Please tell me you can send the money.”

It’s fake, but it sounds stunningly real. For essentially $1, using an online service available to anyone, an expert was able to fake my voice and use it to create telephone-ready audio files that would deceive my mom.  We’ve all heard so much about artificial intelligence – AI – recently. For good reason, there have long been fears that AI-generated deepfake videos of government figures could cause chaos and confusion in an election.  But there might be more reason to fear the use of this technology by criminals who want to create confusion in order to steal money from victims.

Already, there are various reports from around North America claiming that criminals are using AI-enhanced audio-generation tools to clone voices and steal from victims. So far, all we have are isolated anecdotes, but after spending a lot of time looking into this recently, and allowing an expert to make deepfakes out of me, I am convinced that there’s plenty of cause for concern.

Reading these words is one thing; hearing voice clones in action is another. So I hope you’ll listen to a recent episode of The Perfect Scam that I hosted on this for AARP.  Professor Jonathan Anderson, an expert in artificial intelligence and computer security at Memorial University in Canada, does a great job of demonstrating the problem — using my voice — and explaining why there is cause for … concern, but perhaps not alarm.  His main suggestion: All consumers need to raise their digital literacy and become skeptical of everything they read, everything they see, and everything they hear. It’s not so far-fetched; many people now realize that images can be ‘photoshopped’ to show fake evidence.  We all need to extend that skepticism to everything we consume. Easier said than done, however.

Still, I ponder, what kind of future are we building? Also in the episode, AI consultant Chloe Autio offers up some suggestions about how industry, governments, and other policymakers can make better choices now to avoid the darkest version of the future that I’m worried about.

I must admit I am still skeptical that criminals are using deepfakes to any great extent. Still, if you listen to this episode, you’ll hear Phoenix mom Jennifer DeStefano describe a fake child abduction she endured, in which she was quite sure she heard her own child’s voice crying for help.  And you’ll hear about an FBI agent’s warning, and a Federal Trade Commission warning.  As Professor Anderson put it, “based on the technology that is easily, easily available for anyone with a credit card, I could very much believe that that’s something that’s actually going on.”

 

Preparing for a Safe Post Quantum Computing Future: A Global Study

Sponsored by DigiCert, the purpose of this research is to understand how organizations are addressing the post quantum computing threat and preparing for a safe post quantum computing future. Ponemon Institute surveyed 1,426 IT and IT security practitioners in the United States (605), EMEA (428) and Asia-Pacific (393) who are knowledgeable about their organizations’ approach to post quantum cryptography.

Quantum computing harnesses the laws of quantum mechanics to solve problems too complex for classical computers. A sufficiently powerful quantum computer, however, could crack the public-key encryption in wide use today, which poses an enormous threat to data security.

That is why 61 percent of respondents say they are very worried about not being prepared to address these security implications. Another threat of significance is that advanced attackers could conduct “harvest now, decrypt later” attacks, in which they collect and store encrypted data with the goal of decrypting the data in the future (74 percent of respondents). Despite these concerns, only 23 percent of respondents say they have a strategy for addressing the security implications of quantum computing.

The following findings illustrate the challenges organizations face as they prepare to have a safe post quantum computing future. 

Security teams must juggle the pressure to keep ahead of cyberattacks targeting their organizations while preparing for a post quantum computing future. Only 50 percent of respondents say their organizations are very effective in mitigating risks, vulnerabilities and attacks across the enterprise. One reason for the lack of effectiveness is that almost all respondents say cyberattacks are becoming more sophisticated, targeted and severe. According to the research, ransomware and credential theft are the top two cyberattacks experienced by organizations in this study.

The clock is ticking on PQC readiness. Forty-one percent of respondents say their organizations have less than five years to be ready. The biggest challenges are not having enough time, money and expertise to be successful. Currently, only 30 percent of respondents say their organizations are allocating budget for PQC readiness. One possible reason for the lack of support is that almost half of respondents (49 percent) say their organization’s leadership is only somewhat aware (26 percent) or not aware (23 percent) of the security implications of quantum computing. Forty-nine percent of respondents are also uncertain about the implications of PQC.

Resources are available to help organizations prepare for a safe post quantum computing future. In the last few years, industry groups such as ANSI X9’s Quantum Risk Study Group and NIST’s post-quantum cryptography project have been initiated to help organizations prepare for PQC. Sixty percent of respondents say they are familiar with these groups. Of these respondents, 30 percent say they are most familiar with ANSI X9’s Quantum Risk Study Group and 28 percent are most familiar with NIST’s project.

Many organizations are in the dark about the characteristics and locations of their cryptographic keys. Only slightly more than half of respondents (52 percent) say their organizations are currently taking an inventory of the types of cryptography keys used and their characteristics. Only 39 percent of respondents say they are prioritizing cryptographic assets and only 36 percent of respondents are determining if data and cryptographic assets are located on-premises or in the cloud.
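To illustrate what such an inventory enables, here is a minimal sketch of classifying and prioritizing cryptographic assets. The asset names, fields, and prioritization rule are invented for the example; the algorithm classification reflects the standard view that Shor's algorithm breaks RSA/ECC-style public-key cryptography while symmetric ciphers such as AES are weakened but not broken.

```python
# Hypothetical sketch of a crypto-asset inventory: flag assets whose
# algorithms a large quantum computer could break, then rank them by how
# long the data they protect must remain confidential ("harvest now,
# decrypt later" makes long-lived data the most urgent to migrate).

QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DSA", "DH"}

def prioritize(inventory):
    """Return quantum-vulnerable assets, longest data lifetime first."""
    at_risk = [a for a in inventory if a["algorithm"] in QUANTUM_VULNERABLE]
    return sorted(at_risk, key=lambda a: a["data_lifetime_years"], reverse=True)

inventory = [
    {"name": "tls-web-cert",      "algorithm": "RSA",     "location": "cloud",   "data_lifetime_years": 2},
    {"name": "code-signing-key",  "algorithm": "ECDSA",   "location": "on-prem", "data_lifetime_years": 15},
    {"name": "backup-encryption", "algorithm": "AES-256", "location": "on-prem", "data_lifetime_years": 25},
]

for asset in prioritize(inventory):
    print(asset["name"], asset["algorithm"])
```

Here the long-lived code-signing key tops the migration list, while the AES-256 backup key is not flagged at all, which is exactly the kind of prioritization the survey says only 39 percent of organizations are doing.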

Very few organizations have an overall centralized crypto-management strategy applied consistently across the enterprise. Sixty-one percent of respondents say their organizations only have a limited crypto-management strategy that is applied to certain applications or use cases (36 percent) or they do not have a centralized crypto-management strategy (25 percent).

Without an enterprise-wide cryptographic management strategy, organizations are vulnerable to security threats, including those leveraging quantum computing methods. Only 29 percent of respondents say their organizations are very effective in the timely updating of their cryptographic algorithms, parameters, processes and technologies, and only 26 percent are confident that their organization will have the necessary cryptographic techniques capable of protecting critical information from quantum threats.

While an accurate inventory of cryptographic keys is an important part of a cryptography management strategy, organizations are overwhelmed keeping up with their increasing use. Sixty-one percent of respondents say their organizations are deploying more cryptographic keys and digital certificates. As a result, 65 percent of respondents say this is increasing the operational burden on their teams and 58 percent of respondents say their organizations do not know exactly how many keys and certificates they have.

The misconfiguration of keys and certificates and the inability to adapt to cryptography changes prevent a cryptographic management program from being effective. Sixty-two percent of respondents say they are concerned about the ability to adapt to changes in cryptography such as algorithm deprecation and quantum computing. Another 62 percent are concerned about the misconfiguration of keys and certificates. Fifty-six percent are concerned about the increased workload and risk of outages caused by shorter SSL/TLS certificate lifespans.

To secure information assets and the IT infrastructure, organizations need to improve their ability to effectively deploy cryptographic solutions and methods. Most respondents say their organizations do not have a high ability to drive enterprise-wide best practices and policies, detect and respond to certificate/key misuse, remediate an algorithm compromise or breach, or prevent unplanned certificate outages.

Crypto Centers of Excellence (CCOEs) can support organizations’ efforts to achieve a safe post quantum computing future. A CCOE can help improve operational cryptographic processes and increase an organization’s trust environment. They do require advanced technologies and expertise in cryptography to maintain secure operations and comply with applicable regulations. Most organizations in this research do plan on having a CCOE. However, currently only 28 percent of respondents say their organizations have a mature CCOE that provides crypto leadership, research, implementation strategy, ownership and best practices. Another 28 percent of respondents say they have a CCOE, but it is still immature.

Hiring and retaining qualified personnel is the most important strategic priority for digital security (55 percent of respondents). This is followed by achieving crypto-agility (51 percent of respondents), which is the ability to efficiently update cryptographic algorithms, parameters, processes and technologies to better respond to new protocols, standards and security threats, including those leveraging quantum computing methods.

To read the key findings in this report, visit DigiCert’s website.

Robot fear is approaching 1960s-levels; that might be a distraction

Bob Sullivan

When I logged onto a Zoom meeting recently, I was offered the chance to let the company use some kind of whiz-bangy AI magic that would summarize the meeting for me.  Cool? Maybe. Artificial intelligence? Not by my definition. New? Not really.  New name? Absolutely.

I’m sure you’ve had this experience a lot lately.  “AI-powered” marketing-speak is everywhere, sweeping the planet faster than dot com mania ever did.  In fact, it’s come so fast and furious that the White House issued an executive order about AI on Monday. AI hasn’t taken over the planet, but AI-speak sure has. It’s smart to worry about computers taking over the world and doing away with humanity, but I think marketing hype might be AI’s most dangerous weapon.

Look, chatbots are kind of cool and impressive in their own way. New?  Well, as consumers, we’ve all been hijacked by some “smart” computer giving us automated responses when we just want a human being at a company to help us with a problem. The answers are *almost* helpful, but not really.  And chatbots … are…not really new.

I like to remind people who work in this field — before “the Cloud” there were servers.  Before Web 3.0 there was the Internet of Things, and before that, cameras that connected to your WiFi.  Before analytics, and AI, there was Big Data.  Many of these things work better than they did ten or twenty years ago, but it was the magic label — not a new Silicon Valley tech, but a new Madison Avenue slogan — that captured public imagination.  Just because someone calls something AI does not make it so.  It might just be search, or an updated version of Microsoft Bob that isn’t terrible.

I don’t at all mean to minimize concern about tech being used for evil purposes.  Quite the opposite, really. If you read the smartest people I can find right now, this is the concern you’ll hear.  It’s fine to fret about ChatGPT Jr., or ChatGPT’s evil half-brother, making a nuclear bomb, or making it substantially easier to make a nuclear bomb. We’ve been worried about something like that since the 1950s and 60s.  And we should still be concerned about it. But that’s not happening today.

Meanwhile, tech (aka “AI”) is being used to hurt people right now. There’s real concern that all the focus on a sci-fi future is taking attention away from what needs to be done to rein in large technology companies today.

Big databases have been used to harm people for a long time.  Algorithms decide prison sentences — often based on flawed algorithms and data.  (Yes, that is real!) Credit scores rule our lives as consumers. The credit reports on which they are built are riddled with errors. And as people seem to forget, credit scores did virtually nothing to stop the housing bubble.  I just read that credit scores are at an all-time high, despite the fact that consumer debt is at very high levels — and, in a classic toxic combination — interest rates are also very high. So just how predictive are credit scores?

I know this — Folks looking to regulate AI/Big Data/algorithmic bias haven’t done nearly enough research into the decades-long battle among credit reporting agencies, consumer advocacy groups, and government regulators.  Hint: It’s not over.

There is a lot to like in the recent AI executive order.  I’ve long been an advocate that tech companies should include “abuse/harm testing” into new products, the way cybersecurity teams conduct penetration testing to predict how hackers might attack. Experienced, creative technologists should sit beside engineers as they dream up new products and ponder: “If I were a bad person, how might I twist this technology for dark uses?”  So when a large tech firm comes up with a great new tool for tracking lost gadgets, someone in the room will stop them and ask, “How do we prevent enraged ex-partners from using this tech to stalk victims?” Those conversations should be had in real-time, during product development, not after something is released to the world and is already being abused.

Today’s executive order calls for red-teaming and sharing of results with regulators.   In theory, such reports would head off a nuclear-bomb-minded bot at the pass.  Good.  I just hope we don’t race past algorithmic-enhanced racial discrimination in housing decisions — which happens today, and has been happening for decades.

The best piece I read on the executive order appeared in MIT Technology Review — a conversation with Joy Buolamwini, who has a new book out titled Unmasking AI: My Mission to Protect What Is Human in a World of Machines.  She’s been ringing the alarm bell on current-day risks for nearly a decade.

For something that’s a more direct explanation of the order, read this piece on Wired.  

This is a link to the White House fact sheet about the executive order.