Perimeters Aren’t Dead – They’re Valuable

Since I first began building internet firewalls in the late 1980s, I have periodically encountered claims that “the perimeter is dead” or “firewalls don’t work.” These claims are rather obviously wrong: a firewall or perimeter is simply a way of separating things so you can organize them better. An internet firewall is an organizing principle between “stuff that’s not your problem” (the internet) and “stuff that’s your problem” (your network).

At a finer level of detail, you might apply other organizing principles such as “my data center” and “the unmanaged cloud of desktops” or “our PCI cloud.” If you think of firewalls or perimeters as a way of organizing the various entities you deal with, you’ll be better able to understand your strategic objectives for where data moves, how it moves, and where it sits. Without that type of organization, the idea of a network that is “yours” is purely imaginary.

If you think about firewalls and perimeters as an organizing principle, you’ll be able to see how single servers can be a “cloud of one” whether they’re on premises or off, and you can think about the trust relationships between remote servers and internal services. It’s a valuable mental tool, in other words.

We (or rather, management) can also make mistakes by forgetting that design carries a persistent management cost. Organizing your computers and thinking about where data moves and how it is stored is expensive. It takes understanding and thought to design this stuff, and if it’s not done right, you wind up with a mess. A typical mess might be: “everything can talk to everything,” which is certainly easy to set up, requires no ongoing management, and is – for all intents and purposes – impossible to secure. It seems to me that a lot of executives expect tremendous cost savings from moving to the cloud, but they don’t realize that you still need good systems people (to manage the cloud systems using the cloud providers’ interfaces) and governance/analysis (to think about where your data is moving and why). In other words, the thinking is the hard part.

Beyond security, it’s important to think about performance and reliability. If you figure out where your most important servers and data are, you can optimize your network architecture to guarantee best performance where it needs to be. Otherwise, in an “everything can talk to everything” network, your only option for performance tuning is to make everything faster. That’s an important distinction to keep in mind as we collectively move to software-defined networks. The organizing principle that leads to securing your data is also the organizing principle that allows you to optimize your data paths.

A senior IT person at a large enterprise told me, “We have web services all over the place. We use a vulnerability scanner to identify systems that are offering up data on port 80, then we track them down and analyze them.” Think about that for a second! If the organization has a purely reactive governance model like this, how will that enterprise move to a high-performance software-defined network? To map out your performance requirements, you need to know where the data is going to flow. You cannot do that if you’re permanently reverse-engineering your design using what I call “forensic network architecture.”
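The “forensic network architecture” approach described above – sweeping the network for anything answering on port 80 and then chasing it down – can be sketched in a few lines. This is a minimal illustration of that reactive discovery step, not a substitute for a real vulnerability scanner; the `port_open` and `sweep` helper names are invented for this sketch.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def port_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Refused, timed out, or unreachable: treat all as "not listening."
        return False

def sweep(hosts, port=80):
    """Probe many hosts in parallel; return those with the port open."""
    with ThreadPoolExecutor(max_workers=64) as pool:
        results = pool.map(lambda h: (h, port_open(h, port)), hosts)
    return [h for h, is_open in results if is_open]
```

Running `sweep` across an address range tells you *what* is serving on port 80 today, but nothing about *why* – which is exactly the gap between reactive discovery and having an actual design.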

When we talk about disaster recovery or data backups, the same reasoning applies: you can’t back up your data if you don’t know where it is (organizing principle: data perimeter), and you can’t identify which systems need to be recoverable/reliable if you don’t know which they are (organizing principle: data center perimeter). None of this is a new problem, but, unfortunately, a lot of organizations are going to keep kicking the can down the road, so they can preserve their hard-won ignorance about what’s going on inside their perimeter.

Editor’s note: For more of Marcus Ranum’s insights on this topic, download The Vaguely Defined Perimeter.

Marcus J. Ranum, Security Consultant

[ISACA Now Blog]

Combating the Rising Threat of Malicious AI Uses: A Strategic Imperative

A group of academics and researchers from leading universities and think tanks – including Oxford, Yale, Cambridge and OpenAI – recently published a chilling report titled, The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. The report raised alarm bells about the rising possibility that rogue states, criminals, terrorists and other malefactors could soon exploit AI capabilities to cause widespread harm. These risks are weighty and disturbing, albeit not surprising. Several politicians and humanitarians have repeatedly advocated for the need to regulate AI, with some calling it humanity’s most plausible existential threat.

For instance, back in 2016, Barack Obama, then President of the United States, publicly admitted his fears that an AI algorithm could be unleashed against US nuclear weapons. “There could be an algorithm that said, ‘Go penetrate the nuclear codes and figure out how to launch some missiles,’” Obama cautioned. A year later, in August 2017, Tesla and SpaceX CEO Elon Musk teamed up with 116 executives and scholars to sign an open letter to the UN, urging the world governing body to urgently enact statutes banning the global use of lethal autonomous weapons, or so-called “killer robots.”

I wrote a 2017 ISACA Journal article to underscore that, while AI’s ability to boost fraud detection and cyber defense is unquestionable, this vital role could soon prove to be a zero-sum game. The same technology could be exploited by malefactors to develop superior and elusive AI programs that will unleash advanced persistent threats against critical systems, manipulate stock markets, perpetrate high-value fraud or steal intellectual property.

What makes this new report particularly significant is its emphasis on the immediacy of the threat. It predicts that widespread use of AI for malicious purposes – such as repurposed autonomous weapons, automated hacking, target impersonation, highly tuned phishing attacks, etc. – could all eventuate as early as the next decade.

So, why has this malicious AI threat escalated from Hollywood fantasy to potential reality far more rapidly than many pundits anticipated? There are three primary drivers:

  • First, cyber-threat actors are increasingly agile and inventive, spurred by growing financial resources and freedom from the regulation that often stifles innovation for legitimate enterprises.
  • Second, and perhaps most important, the rapid intersection of cybercrime and politics – combined with deep suspicions that adversarial nations are using advanced programs to manipulate elections, spy on military programs or debilitate critical infrastructure – has further dented prospects of meaningful international cooperation.
  • Third, advanced AI-based programs developed by nation-states may inadvertently fall into the wrong hands. An unsettling example is the 2016 incident in which a shadowy group of hackers, going by the moniker “The Shadow Brokers,” reportedly infiltrated the US National Security Agency (NSA) and stole advanced cyber weapons that were allegedly used to unleash the WannaCry ransomware in May 2017. As these weapons become more powerful and autonomous, the associated risks will invariably grow. The prospect of an autonomous drone equipped with Hellfire missiles falling into the wrong hands, for instance, would be disconcerting to us all.

It’s clear that addressing this grave threat will be complex and costly, but the task is pressing. As report co-author Dr. Seán Ó hÉigeartaigh stressed, “We live in a world that could become fraught with day-to-day hazards from the misuse of AI and we need to take ownership of the problems – because the risks are real.” Several strategic measures are required, but the following two are urgent:

  • There is a need for deeper, transparent and well-intentioned collaboration between academics, professional associations, the private sector, regulators and world governing bodies. This threat transcends the boundaries of any single enterprise or nation. Strategic collaboration will be more impactful than unilateral responses.
  • As the report highlighted, we can learn from disciplines such as cybersecurity that have a credible history of developing best practices to handle dual-use risks. Again, while this is an important step, much more is required. As Musk and his co-signatories wrote to the UN, addressing this risk requires binding international laws. After all, regulations and standards are only as good as their enforcement.

This is an old story; history is repeating itself. As Craig Timberg wrote in The Threatened Net: How the Web Became a Perilous Place, “When they [internet designers] thought about security, they foresaw the need to protect the network against potential intruders and military threats, but they didn’t anticipate that the internet’s own users would someday use the internet to attack one another.”

The internet’s rapid transformation from a safe collaboration tool to a dangerous place provides an important lesson. If we discount this adjacent threat, AI’s capabilities – which hold so much promise – will similarly be exploited by those with bad intentions. Absent a coherent international response, the same technology that is being used to derive deep customer insights, tackle complex and chronic ailments, alleviate poverty and advance human development could be misappropriated and lead to grave consequences.

Author’s note: Phil Zongo is an experienced head of cybersecurity, strategic advisor, author, and public speaker. He is the 2016-17 winner of ISACA’s Michael Cangemi Best Book/Article Award, a global award that recognizes individuals for major contributions to publications in the field of IS audit, control and/or security.

In 2016, Zongo won ISACA Sydney’s first Best Governance of the Year award, a recognition for the thought leadership he contributes to the cybersecurity profession. Over the last 14 years, Zongo has advised several business leaders on how to cost-effectively manage business risk in complex transformation programs. Zongo regularly speaks at conferences on disruptive trends, such as cyber resilience, blockchain, artificial intelligence and cloud computing.

Phil Zongo, Head of Cybersecurity, Author and Public Speaker

[ISACA Now Blog]

Five Questions on Board-Level Cybersecurity Considerations with Dottie Schindlinger

Editor’s note: Dottie Schindlinger, VP/Governance Technology Evangelist with Diligent and a panelist on the importance of tech-savvy leadership at ISACA’s CSX North America conference last October, recently told Forbes that cybercriminals target organizations perceived to be low-hanging fruit. Schindlinger visited with ISACA Now to discuss how organizations can avoid falling into that category and other key board-level cybersecurity considerations. The following is an edited transcript:

ISACA Now: How do board directors and executive leaders go about ensuring hackers don’t consider their organizations to be low-hanging fruit?
Board members and executive leaders of organizations are ultimately responsible for ensuring the long-term health of their organizations – and this responsibility extends to mitigating cyber risk. That doesn’t mean they have to be deeply involved in the day-to-day operations of cybersecurity programs, but they can’t be complacent.

The simplest thing directors can do to mitigate cyber risk is to ask questions and hold themselves to a higher standard. First, boards should ensure their organizations are providing the right set of tools to ensure the board’s communications are kept secure – for example, moving away from email in favor of a more holistic “Enterprise Governance Management” solution.

Additionally, boards should receive a quarterly high-level summary from the organization’s IT/data security team explaining the main components of the organization’s cybersecurity program. This should include a review of the current threats and thwarted hacking attempts, and a review of the training and education taking place across the organization. The CIO or CISO should be present at every board meeting to deliver the report, answer questions, highlight concerns and discuss ongoing investments in cybersecurity.

Furthermore, board members and senior executives should be required – along with anyone granted access to the organization’s sensitive data – to receive cybersecurity training and support. Far too often, senior leaders are prime targets for hackers because they have access to highly sensitive data with little IT oversight.

ISACA Now: Are boards becoming more sophisticated about providing cybersecurity leadership?
Yes and no. When asked, most directors voice strong concerns about data security – they are clearly worried about the stories they hear in the news. But that concern doesn’t necessarily lead to action. For example, far too few directors are required to receive cybersecurity training on a regular basis. Our last survey – conducted in 2017 – showed that fewer than one-third of directors receive cybersecurity training and, even then, it is most often conducted very infrequently.

We also learned how heavily directors rely on email for communication. More than two-thirds use email as their primary form of communication about board business. This is worrisome in light of the explosion of ransomware and malware attacks targeted at high-ranking individuals throughout 2017. If directors are using unsecured, unencrypted email to share sensitive data, the directors themselves become sources of cyber risk, rather than stewards of cybersecurity.

I believe the needle is finally beginning to move in a positive direction. Fear is a strong motivator, but so is the potential for revenue growth that comes when an organization’s leaders are more tech-minded.

ISACA Now: Given the growing understanding of the importance of cybersecurity, why are many organizations still reluctant to invest in training, both for board members and for their staffs?
Partly I think this has to do with a lack of understanding of the immediacy and severity of the threat. Considering that it typically takes a few months for a breach to even be detected, it’s highly likely more organizations have been breached than we know. I think many organizations want to believe they aren’t as vulnerable as they really are. In my conversations with directors, I’ve heard the phrase, “Our IT team is top-notch and we have cyber risk insurance.” Those two statements might be absolutely true – but neither one can prevent an organization from being hacked 100% of the time.

I think it’s fair to say that some complacency is born from a lack of familiarity. The vast majority of directors and senior leaders are not digital natives. The average age of directors is still north of 50, meaning senior leaders are much more likely to have grown up using typewriters than mobile devices. This means that technology can feel like a foreign topic (and a sore subject) for many directors, causing them a good deal of discomfort. I think that when CISOs approach technology discussions from the perspective of enterprise risk and business growth – and don’t stray too deep into the technology “weeds” in their reports – they will find directors and senior leaders are much more open to engaging deeply in the issues.

ISACA Now: What role should CISOs play in working with the board to elevate an organization’s security protocols?
Ideally, the CISO or other data security leader collaborates with the board and other senior leaders in the following ways:

  • Ensure the board and senior leaders have secure communication tools available and know how to use them appropriately;
  • Provide an update at each board meeting on the current state of cybersecurity programs, changes in the threat landscape and ongoing cybersecurity investments, and highlight any technology developments worth the board’s attention;
  • Answer the board’s cybersecurity questions and provide ongoing support;
  • Work with the general counsel or audit committee to develop secure communication policies for the board, and brief the board on these policies – and any recommended changes – at least annually;
  • Arrange for cybersecurity training for directors and senior executives – ideally conducted at least annually (more frequent training is better);
  • Coordinate an annual tabletop exercise for the board, simulating a cyber event and testing the board on their response prowess;
  • Conduct a periodic review of the board and senior leadership’s communication methods and norms – to ensure adherence to policies and reduce reliance on any unsecured communication channels.

ISACA Now: For C-suite leaders who might be frustrated by their board’s lack of urgency when it comes to providing strong cybersecurity and risk management oversight, what are some ways they can deliver a wakeup call?
If a director has had personal exposure to a cyber event, he/she suddenly has a much greater level of awareness about the risk and a greater desire to learn how to ensure security. I don’t believe this “personal experience” has to be an actual cyber-attack – rather, a good simulation exercise, deep discussion at the board table, or a guest speaker who can share some “horror stories” should be enough to spur greater action. I’d recommend any activity that gets leaders asking questions like: Do we know which branch of law enforcement to call, and who is our main point of contact? What sort of cyber risk coverage do we have, and what services will our insurance carrier provide to help us during the breach notification period? What is our level of personal legal liability in this case? Do I need to wipe any of my own personal devices or drives and change any of my passwords?

At the same time, many CISOs do themselves a disservice by not focusing on the right issues in their reports to senior leaders. The CISO’s report should remain high-level and jargon-free (or with easy-to-comprehend explanations). Keep the focus on the enterprise risk and business growth side of cyber risk, not on the nitty-gritty of the CISO’s day job.

Bottom line, if a board doesn’t seem to have much motivation to discuss these serious issues, then the C-suite team should find a way to provide that motivation. The stakes are far too high to just hope for the best.

[ISACA Now Blog]
