Cloud Security, Yes – But Is AI Ready for Its Cybersecurity Spotlight?

In today’s world, speed, agility and scalability are essential for organizations and businesses that want to succeed and stay relevant. On-premises IT can’t match the speed, agility and scalability of cloud environments, so the continued embrace of cloud is inevitable.

Unfortunately, the same characteristics – speed, agility and scalability – also apply to the bad guys. We now see, for example:

  • Production of malware via sites that offer ransomware as a service
  • Proliferation of non-distributing multi-scanners
  • An explosion of available exploit kits based on cloud computing capabilities

These developments signify a serious need to change the approach to securing organizations.

Effective security can no longer rely on a point-product approach, in which acquisition, implementation and training might take weeks or even months. In the cloud era, that tactic is no longer viable: the manual use of point products makes organizations slow and reactive. In other words, we simply cannot defend our organizations against highly sophisticated, automated and agile threats with old-fashioned, non-automated and non-integrated security.

Cybersecurity technology companies understand this and have for some years been investing in cloud computing, including ways to secure cloud environments and deliver security via cloud-based services. An example of a cloud-delivered security service is a threat intelligence capability in the cloud, which uses the speed and scalability of the cloud model for its software analysis process and can deliver the protection needed within a very short time frame.

The core of what will keep cloud computing capabilities useful is big data analytics. Without big data analytics, it’s impossible to apply machine learning, which is essential for automation and the required speed of operations. Unfortunately, the terms ‘big data analytics’, ‘machine learning’ and ‘artificial intelligence’ are often confused and used interchangeably. Several cybersecurity companies claim to use artificial intelligence in their services when they probably mean big data analytics and machine learning. In simple terms, here are the definitions I use to clarify these terms:

  • Big data analytics refers to analyzing large volumes of data with the aim to uncover patterns and connections that might otherwise be invisible, and that might provide valuable insights.[1]
  • Machine learning is a software-development technique used to teach a computer to do a task without explicitly telling the computer how to do it.[2]
  • Artificial intelligence is software that becomes aware of its own existence and can make thoughtful decisions.[3]

How are big data analytics, machine learning, artificial intelligence or the combination of these capabilities best used to protect organizations from cyberattacks?

Unfortunately, there’s no silver bullet yet in this context, although machines can handle large amounts of data better and faster than humans can (see the threat intelligence example above). The challenge is that AI, especially, is being over-marketed for cybersecurity, and the technology has its limitations: AI was not designed to work in adversarial environments. It works quite well in games like chess or Go, where the rules are well-defined and deterministic.[4] But in cybersecurity, those rules don’t apply, and the ‘bad guys’ are constantly evolving and adapting their techniques. For now, AI is less suitable because it cannot adapt to such a fast-moving and unpredictable environment. This will no doubt improve in the future.

Analyzing data kept in one place also means that place is a single point of failure. An attacker only needs to make subtle, almost unnoticeable changes to the data in that one location to undermine the way an AI algorithm works.[5] Therefore, it’s essential to understand how big data analytics, machine learning and AI work; recognize their limitations; and act accordingly, not on hype.
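The kind of subtle data manipulation described above can be sketched in a toy example. The following is purely illustrative (not any vendor’s model): a simple nearest-centroid classifier whose decision on a borderline sample flips after an attacker relabels just one training point.

```python
# Toy data-poisoning sketch: a nearest-centroid "model" trained on labeled
# 1-D data, and how flipping a single training label -- a subtle, almost
# unnoticeable change to the data -- alters its decision.

def centroid(points):
    return sum(points) / len(points)

def classify(x, class_a, class_b):
    """Assign x to whichever class centroid is nearer (ties go to A)."""
    return "A" if abs(x - centroid(class_a)) <= abs(x - centroid(class_b)) else "B"

clean_a, clean_b = [1.0, 2.0, 3.0], [7.0, 8.0, 9.0]
print(classify(4.5, clean_a, clean_b))   # "A": nearer to A's centroid (2.0)

# Attacker flips one label: the training point 3.0 is relabeled from A to B.
poisoned_a, poisoned_b = [1.0, 2.0], [3.0, 7.0, 8.0, 9.0]
print(classify(4.5, poisoned_a, poisoned_b))  # "B": the decision flips
```

Real poisoning attacks on production ML systems are far more sophisticated, but the principle is the same: corrupt the single trusted data source and the model built on it quietly changes behavior.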

In today’s world, the use of big data analytics, machine learning and AI provides several advantages in the cybersecurity domain – especially in the threat intelligence, behavioral analytics and cyber forensics areas – but there’s still a long way to go before we can completely rely on these capabilities in cybersecurity. When we get them right, we will truly maximize our investments in cloud.

  1. “Big Data Analytics,” Techopedia, accessed October 27, 2018. https://www.techopedia.com/definition/28659/big-data-analytics.
  2. Rick Howard, “The Business of AI and Machine Learning,” SecurityRoundtable.org, October 11, 2017, https://www.securityroundtable.org/the-business-of-ai-and-machine-learning/.
  3. Rick Howard, “The Business of AI and Machine Learning,” SecurityRoundtable.org, October 11, 2017, https://www.securityroundtable.org/the-business-of-ai-and-machine-learning/.
  4. Jane Bird, “AI is not a ‘silver bullet’ against cyber attacks,” Financial Times, last modified September 25, 2018, https://www.ft.com/content/14cd2608-869d-11e8-9199-c2a4754b5a0e.
  5. Ibid.

Source: https://researchcenter.paloaltonetworks.com/2018/10/cloud-security-yes-ai-ready-cybersecurity-spotlight/

[Palo Alto Networks Research Center]

Transparent Use of Personal Data Critical to Election Integrity in UK

Editor’s note: The ISACA Now blog is featuring a series of posts on the topic of election data integrity. ISACA Now previously published a US perspective on the topic. Today, we publish a post from Mike Hughes, providing a UK perspective.

In some ways, the UK has less to worry about when it comes to protecting the integrity of election data and outcomes than some of its international counterparts. The UK election process is well-established and proven over many years (indeed, centuries), and therefore UK elections are generally conducted in a very basic manner. Before an election, voters receive a poll card indicating the location where they should go to vote. On polling day, voters enter the location, provide their name and address, and are presented with a voting slip. They take this slip, enter the voting booth, pick up a pencil and put a cross in the box next to their candidate of choice. Voters then deposit this paper slip in an opaque box to be counted once polls are closed in the evening.

Pretty simple (and old-fashioned). Yet, despite the UK’s relatively straightforward election procedures, the Political Studies Association reported in 2016 that the UK rated poorly in election integrity relative to several other established democracies in Western Europe and beyond. More recently, there are strong suspicions that social media has been used to spread false information to manipulate political opinion and, therefore, election results. The Cambridge Analytica data misuse scandal, which has roiled both sides of the Atlantic, is one of the biggest examples, and it is fair to say that election integrity has only become more of a top-of-mind concern in the UK since that 2016 report, especially during campaigning periods.

Rightly, steps are being taken to give the public greater peace of mind that campaigns and elections are being conducted fairly. In 2017, the Information Commissioner launched a formal inquiry into political parties’ use of data analytics to target voters amid concerns that Britons’ privacy was being jeopardized by new campaign tactics. The inquiry has since broadened and become the largest investigation of its type by any Data Protection Authority, involving social media platforms, data brokers, analytics firms, academic institutions, political parties and campaign groups. A key strand of the investigation centers on the links between Cambridge Analytica, its parent company SCL Elections Limited, and Aggregate IQ, and involves allegations that data obtained from Facebook may have been misused by both sides in the UK referendum on membership of the EU, as well as to target voters during the 2016 United States presidential election.

The investigation remains ongoing, but the Information Commissioner needed to meet her commitment to provide Parliament’s Digital, Culture, Media and Sport Select Committee with an update on the investigation to inform its “Fake News” inquiry before the summer recess. A separate report, “Democracy Disrupted? Personal Information and Political Influence”, has been published, covering the policy recommendations from the investigation. These include an emphasis on the need for political campaigns to use personal data lawfully and transparently.

The social media powers should also draw upon their considerable resources to become part of the solution. Facebook, Google and Twitter have indicated they will ensure that campaigns that pay to place political adverts with them will have to include labels showing who has paid for them. They also say that they plan to publish their own online databases of the political adverts that they have been paid to run, including information such as the targeting, actual reach and amount spent on those adverts. These social media giants are aiming to publish their databases in time for the November 2018 mid-term elections in the US, and Facebook has said it aims to publish similar data ahead of the local elections in England and Northern Ireland in May 2019.

All of these considerations are unfolding in an era when the General Data Protection Regulation has trained a bright spotlight on how enterprises are leveraging personal data. As a society, we have come to understand that while the big data era presents many unprecedented opportunities for individuals and organizations, the related privacy, security and ethical implications must be kept at the forefront of our policies and procedures.

As I stated at the start of this article, the UK’s election system is a well-proven, paper-based process that has changed very little over many, many years. One thing is certain: sometime in the not-too-distant future, our paper-based system will disappear and be replaced by a digital system. There will then be a need for a highly trusted digital solution that provides a high level of confidence that the system cannot be tampered with or manipulated. These systems aren’t there yet, but technologies such as blockchain may be the start of the answer. Technology-driven capabilities will continue to evolve, but our commitment to integrity at the polls must remain steadfast.

Mike Hughes, past ISACA board director and partner with Haines Watts

Source: https://www.isaca.org/Knowledge-Center/Blog/Lists/Posts/Post.aspx?ID=1092

[ISACA Now Blog]

Cloud Compliance: The Cheeseburger Principle

We spend our days talking with people about the need to apply security and compliance best practices in their cloud environment, and then helping them maintain automated visibility and remediation of vulnerabilities. We try to imprint on them the notion that security never stops; to truly have the best odds of keeping an environment secure, the effort must be continuous. To illustrate this point, our Chief Cloud Officer, Tim Prendergast, channeled his inner cheeseburger. Read on and you’ll see what I mean.

A Cheesy, Burger-y Metaphor: If you want a clean bill of health at your yearly medical checkup, you can’t eat cheeseburgers for 364 days out of the year and then the day before the checkup, eat a salad and expect to be told you’re in excellent shape. As much as I wish it did, the world doesn’t work like that, and it’s the same for cloud security and compliance.

It doesn’t make sense to ignore security controls, configurations, settings, and other critical aspects of your cloud until the day before auditors come in to review. You could certainly do it, but you’d have an environment populated with bad actors and riddled with holes and ransomware. The truth is that anything other than continuous, automated compliance can result in three potential issues.

  1. The cloud (like your body) is a dynamic entity that is constantly changing. A snapshot of what it looked like yesterday isn’t necessarily what it looks like today, and because of that you need a way to monitor its evolution, its changes, and its state – always.
  2. Your compliance issues and responsibilities will continue to pile up as you ignore them – just as your blood pressure will edge ever upwards if you don’t get off the couch.
  3. You can’t escape what you’re supposed to do. Addressing your cloud (or your health, for that matter) only when it’s convenient gives bad actors an advantage and brings negative consequences.

Look at it this way: without continuous automation, organizations really can’t prove any form of compliance in the cloud because they don’t have timely visibility into infrastructure configuration and workload risk. Timeliness is critical because of the constant change and dynamic nature of your cloud environment.
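The continuous-evaluation idea can be sketched in a few lines. The following is a hypothetical illustration, not any real compliance framework’s schema: rule names and configuration fields are invented, and a real tool would pull live resource state from the cloud provider’s APIs on every change rather than from a static list.

```python
# Hypothetical sketch of continuous compliance checking: evaluate every
# resource configuration against a set of rules and report violations,
# so drift is caught as it happens instead of the day before an audit.

RULES = {
    "no-public-storage": lambda r: not r.get("public_access", False),
    "encryption-at-rest": lambda r: r.get("encrypted", False),
}

def evaluate(resources):
    """Return (resource_id, rule_name) pairs that are out of compliance."""
    return [
        (r["id"], name)
        for r in resources
        for name, check in RULES.items()
        if not check(r)
    ]

snapshot = [
    {"id": "bucket-1", "public_access": False, "encrypted": True},
    {"id": "bucket-2", "public_access": True,  "encrypted": False},
]
print(evaluate(snapshot))  # bucket-2 fails both rules
```

Run on every configuration change (or on a tight schedule), a check like this turns compliance from a yearly snapshot into a continuous signal.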

Not to worry, Tim is still going to have the occasional cheeseburger, and you should too. And even better, we can help you get started on your journey to compliance in the cloud.

View our webcast – Cloud Compliance is a Team Sport – here, where cloud security and compliance experts share practical advice for getting your cloud compliance program into the best shape possible, including how to automate time-intensive tasks to save your teams valuable time and let them focus on what matters to the business.

You can also get started measuring your cloud compliance now. Evident offers a simple, one-click compliance report that will show you how your cloud infrastructure measures up. Sign up for a trial here.

Source: https://researchcenter.paloaltonetworks.com/2018/10/cloud-compliance-cheeseburger-principle/

[Palo Alto Networks Research Center]

Workforce Study Methodology and Defining the Gap

2,930,000

That is the size of the global cybersecurity workforce gap. The breakdown is around 498,000 in North America, 136,000 in Latin America, 142,000 in Europe, the Middle East and Africa, with the largest deficit coming in Asia Pacific at 2.14 million. But what does this big, scary number even mean? Where did it come from?

First, this new Cybersecurity Workforce Study from (ISC)² has evolved from past studies to become a more accurate representation of the broader workforce. We surveyed nearly 1,500 professionals around the world who spend at least 25% of their time on cybersecurity activities, which includes IT/ICT professionals who previously may not have been considered part of the cyber workforce.

To ensure our numbers were accurate and representative, we worked with our research partner (Spiceworks) to develop a rigorous sample design for each region. The sample within each country was controlled to ensure a mix of company sizes and industries. Some statistically significant differences between regions noted in the report may be due to regional differences in scale usage.

With a more precise look at who is actually doing the work, we also changed how the gap itself was calculated. Some legacy gap calculations simply subtracted supply from demand, which didn’t consider relevant factors like organizational growth.

For the demand, we start with a calculation of the current percentage of organizations with job openings – this represents the expected share of organizations that will have hiring demand. Among organizations surveyed, most (83%) indicated that they had open cybersecurity positions. Next, average hires are estimated. To make the number more precise, we used information across company sizes and combined these estimates to extrapolate future staffing needs for the total market (all business entities), using data from various government sources.1

Our calculation of the supply includes new entrants to the field – academic and nonacademic alike – which was linked to secondary market data. We also took into account the number and rate of professionals who historically have shifted into roles with more cybersecurity responsibilities, combining primary survey data with secondary market data.2

What Does the Gap Mean?

Our research does not propose there are 2.93 million cybersecurity job postings open right now. The (ISC)² Cybersecurity Workforce Study gap is an assessment of the demand for skilled cyber professionals based on the input from the cybersecurity workforce on the front lines every day. But what does this actually mean to the workforce?

Globally 37% of professionals stated the lack of skilled/experienced cybersecurity personnel was their top job concern. Additionally, 63% of respondents said their organizations have a shortage of staff dedicated to cybersecurity, and 60% said their organizations were at a moderate or extreme risk of cyberattacks due to that shortage.

Our industry is painfully aware of the challenges that organizations face when staffing qualified cyber teams. The purpose of finding and sharing the gap is not to shout that the sky is falling, but rather to build awareness of the need for talent and training, advocate for solutions that will benefit the workforce, and ultimately inspire a safe and secure cyber world.

For more on the impact of the gap, and other findings from the study, download the full report.

1: U.S. Census Bureau – Geography Area Series: County Business Patterns by Employment Size Class (2015 Business Patterns), Statistics Canada, Instituto Nacional De Estadistica Y Geografia (Mexico), Office for National Statistics (U.K.), EU Commission and Statistisches Bundesamt (Destatis) (France and Germany), Australian Bureau of Statistics, Statistics Bureau, Ministry of Internal Affairs and Communications (Japan)

  2: Wharton School of the University of Pennsylvania, CompTIA Cyberstates 2018, Cybersecurity Workforce Alliance (Australia)

Source: https://blog.isc2.org/isc2_blog/2018/10/workforce-study-methodology-and-defining-the-gap.html

[(ISC)² Blog]

Key Considerations for Assessing GDPR Compliance

The European Union General Data Protection Regulation (GDPR), which took full effect in May this year, solidifies the protection of data subjects’ “personal data,” harmonizes the data privacy laws across Europe and protects and empowers EU citizens’ data privacy, in addition to changing the way data is managed and handled by organizations.

The GDPR affects people across the globe. Its scope is wide-ranging, and it can apply to many global institutions with operations in Europe. GDPR has also given data regulators more power, backed by severe potential financial penalties for non-compliance (a maximum of 4 percent of annual global turnover or €20 million, whichever is higher).

A few of the key things to know about GDPR are:

  • The regulation governs how institutions collect, record, use, disclose, store, alter, disseminate, and process the personal data of individuals in the EU.
  • If a breach involves personal data, the Data Protection Authorities must be notified within 72 hours.
  • It governs the rights of data subjects, including rights to access, rectification, erasure, restricting processing, data portability, and rights in relation to automated decision-making and profiles.
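The 72-hour notification window in the second bullet is a hard deadline measured from the moment the organization becomes aware of the breach (GDPR Article 33). A minimal sketch, with an invented incident timestamp for illustration:

```python
# Minimal sketch: compute the GDPR Article 33 notification deadline
# (72 hours from becoming aware of a personal-data breach).

from datetime import datetime, timedelta

NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(aware_at: datetime) -> datetime:
    """Deadline for notifying the Data Protection Authority."""
    return aware_at + NOTIFICATION_WINDOW

# Hypothetical example: breach discovered Monday 09:00 means the
# regulator must be notified by Thursday 09:00.
aware = datetime(2018, 10, 1, 9, 0)
print(notification_deadline(aware))  # 2018-10-04 09:00:00
```

In practice the clock starts at awareness, not at the breach itself, which is why incident-detection speed matters so much for GDPR readiness.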

How do I assess my GDPR compliance?
All of these are essential reasons for institutions to ensure that the proper governance and tactical steps are taken to comply with the GDPR. The GDPR Audit Program Bundle developed by ISACA helps by providing institutions with a guide for assessing, validating and reinforcing the GDPR requirements by which they must abide. The audit program was developed to provide enterprises with a baseline focusing on several key areas and their respective sub-processes, covering all key components of GDPR, including:

  • Data governance
  • Acquiring, identifying and classifying personal data
  • Managing personal data risk
  • Managing personal data security
  • Managing the personal data supply chain
  • Managing incidents and breaches, and creating and maintaining awareness
  • Properly structuring a data privacy function within your institution

Also included are key testing steps involving control category types and frequency, to help facilitate effective discussion and analysis as fits your institution. The important thing to remember is that there is no single right way to become GDPR-compliant. However, a robust and thorough review of your institution’s GDPR environment as it pertains to data processing is required to ensure a proper baseline is used to assess compliance and successfully execute a GDPR compliance program.

Editor’s note: ISACA has addressed both general and particular audit perspectives for GDPR through its new GDPR Audit Program Bundle. Download the audit program bundle here. Access a complimentary white paper, “How To Audit GDPR,” here.

Mohammed J. Khan, CISA, CRISC, CIPM, Global Audit Head – IT, Privacy, Medical Device Cybersecurity

Source: https://www.isaca.org/Knowledge-Center/Blog/Lists/Posts/Post.aspx?ID=1091

[ISACA Now Blog]
