Cloud Security, Yes – But Is AI Ready for Its Cybersecurity Spotlight?

Speed, agility and scalability are essential for organizations and businesses that want to succeed and stay relevant. On-premises IT cannot match the speed, agility and scalability that cloud environments provide, so the continued embrace of the cloud is inevitable.

Unfortunately, the same characteristics – speed, agility and scalability – also apply to the bad guys. We now see, for example:

  • Production of malware via sites that offer ransomware as a service
  • Proliferation of non-distributing multi-scanners
  • An explosion of available exploit kits based on cloud computing capabilities

These developments signify a serious need to change the approach to securing organizations.

Effective security can no longer rely on a point-product approach, in which acquisition, implementation and training can take weeks or even months. In the cloud era, that is not a viable tactic: the manual operation of point products leaves organizations slow and reactive. In other words, we simply cannot defend our organizations against highly sophisticated, automated and agile threats using old-fashioned, non-automated and non-integrated security.

Cybersecurity technology companies understand this and have for some years been investing in cloud computing, including ways to secure cloud environments and deliver security via cloud-based services. An example of a cloud-delivered security service is a threat intelligence capability in the cloud, which uses the speed and scalability of the cloud model for its software analysis process and can deliver the protection needed within a very short time frame.
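
To make that model concrete, here is a minimal sketch of how a client might query such a cloud-delivered threat intelligence service. The endpoint URL, parameters and verdict values are illustrative assumptions invented for this sketch, not any specific vendor’s API:

```python
import requests  # third-party: pip install requests

# Hypothetical cloud threat-intelligence endpoint. The URL, parameters and
# response fields are assumptions made up for this sketch, not a real API.
TI_ENDPOINT = "https://threat-intel.example.com/v1/verdicts"

def lookup_file_hash(sha256: str, api_key: str) -> str:
    """Ask the cloud service for a verdict on a file hash.

    The heavy lifting (sandboxing, big data correlation, machine learning
    scoring) happens server-side, where cloud scale keeps turnaround short.
    """
    resp = requests.get(
        TI_ENDPOINT,
        params={"sha256": sha256},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("verdict", "unknown")  # e.g. benign / malicious

if __name__ == "__main__":
    sample = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"
    print(lookup_file_hash(sample, api_key="DEMO-KEY"))
```

The point is architectural: the client stays thin, while the compute-heavy analysis runs in the cloud at cloud speed and scale.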

At the core of what will keep cloud computing capabilities useful is big data analytics. Without big data analytics, it’s impossible to apply machine learning, which is essential for automation and the speed of operations required. Unfortunately, the terms ‘big data analytics’, ‘machine learning’ and ‘artificial intelligence’ are often confused and used interchangeably. Several cybersecurity companies claim to use artificial intelligence in their services when they probably mean big data analytics and machine learning. To keep things simple, here are the definitions I use to distinguish these terms:

  • Big data analytics refers to analyzing large volumes of data with the aim of uncovering patterns and connections that might otherwise be invisible, and that might provide valuable insights.[1]
  • Machine learning is a software-development technique used to teach a computer to do a task without explicitly telling the computer how to do it.[2] (A toy illustration follows this list.)
  • Artificial intelligence is software that becomes aware of its own existence and can make thoughtful decisions.[3]
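
To illustrate the machine learning definition, here is a toy sketch: the classifier below is shown labeled examples of connection records and infers a decision rule on its own, with no detection logic written by hand. The features and data are invented for the example:

```python
# Toy illustration of the machine learning definition above: the model is
# shown labeled examples and infers the rule itself; no detection logic is
# written by hand. Features and data are invented for this sketch.
from sklearn.ensemble import RandomForestClassifier

# Connection records: [bytes_sent, failed_logins, distinct_ports_contacted]
X_train = [
    [500, 0, 1], [650, 1, 2], [700, 0, 1],   # benign sessions
    [90000, 25, 40], [120000, 30, 55],       # malicious sessions
]
y_train = [0, 0, 0, 1, 1]                    # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)   # the learning step: no rules supplied by us

# The trained model now classifies traffic it has never seen.
print(model.predict([[800, 0, 2], [95000, 28, 47]]))   # -> [0 1]
```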

How are big data analytics, machine learning, artificial intelligence or the combination of these capabilities best used to protect organizations from cyberattacks?

Unfortunately, there’s no silver bullet yet in this context, although machines can handle large amounts of data better and faster than humans can (see the threat intelligence example above). The challenge is that AI in particular is being over-marketed for cybersecurity, and the technology has its limitations: AI was never designed to work in adversarial environments. It works quite well in games like chess or Go, where the rules are well-defined and deterministic.[4] But those rules don’t apply in cybersecurity, where the ‘bad guys’ constantly evolve and adapt their techniques. For now, AI is less suitable because it cannot adapt to this fast-moving, unpredictable environment, though that will no doubt improve in the future.
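
A toy sketch of why the adversarial setting is different: once an attacker can probe or approximate a model, a targeted shift of an input along the model’s own weight vector flips its verdict. The data and model here are synthetic stand-ins, not a real detection system:

```python
# Synthetic illustration: a model that separates two clusters cleanly is
# defeated by an attacker who shifts a sample along the model's own weight
# vector, just far enough to cross the decision boundary.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, size=(200, 2))
malicious = rng.normal(4.0, 1.0, size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

sample = np.array([[4.0, 4.0]])   # squarely inside the malicious cluster
print(clf.predict(sample))        # -> [1]

# The attacker adapts: move against the weight vector until the verdict flips.
w = clf.coef_[0]
evasive = sample - 4.0 * w / np.linalg.norm(w)
print(clf.predict(evasive))       # -> [0], now rated benign
```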

Analyzing data kept in one place also makes that place a single point of failure. An attacker needs only to make subtle, almost unnoticeable changes to the data in that one location to undermine the way an AI algorithm works.[5] It’s therefore essential to understand how big data analytics, machine learning and AI work; recognize their limitations; and act accordingly, not on hype.
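
Here is a minimal, synthetic illustration of that single-point-of-failure risk: an attacker who can write to the one central training store plants a handful of mislabeled look-alike samples, and the retrained model quietly learns to wave the attacker’s own traffic through. Data, features and model are invented for this sketch:

```python
# Synthetic illustration of poisoning a centralized training store: a few
# mislabeled look-alike samples teach the retrained model to pass the
# attacker's own traffic. Data, features and model are invented here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (500, 2)),    # benign cluster
               rng.normal(3, 1, (500, 2))])   # malicious cluster
y = np.array([0] * 500 + [1] * 500)

attack_sample = np.array([[6.0, 6.0]])        # the attacker's own tooling

clean = RandomForestClassifier(random_state=0).fit(X, y)
print(clean.predict(attack_sample))           # -> [1], caught

# The "subtle change" to the single data store: 30 samples near the
# attacker's signature, all mislabeled as benign.
X_poison = np.vstack([X, rng.normal(6, 0.2, (30, 2))])
y_poison = np.concatenate([y, np.zeros(30, dtype=int)])

backdoored = RandomForestClassifier(random_state=0).fit(X_poison, y_poison)
print(backdoored.predict(attack_sample))      # -> [0], waved through
```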

In today’s world, the use of big data analytics, machine learning and AI provides several advantages in the cybersecurity domain – especially in the threat intelligence, behavioral analytics and cyber forensics areas – but there’s still a long way to go before we can completely rely on these capabilities in cybersecurity. When we get them right, we will truly maximize our investments in cloud.

  1. “Big Data Analytics,” Techopedia, accessed October 27, 2018, https://www.techopedia.com/definition/28659/big-data-analytics.
  2. Rick Howard, “The Business of AI and Machine Learning,” SecurityRoundtable.org, October 11, 2017, https://www.securityroundtable.org/the-business-of-ai-and-machine-learning/.
  3. Ibid.
  4. Jane Bird, “AI is not a ‘silver bullet’ against cyber attacks,” Financial Times, last modified September 25, 2018, https://www.ft.com/content/14cd2608-869d-11e8-9199-c2a4754b5a0e.
  5. Ibid.

Source: https://researchcenter.paloaltonetworks.com/2018/10/cloud-security-yes-ai-ready-cybersecurity-spotlight/

[Palo Alto Networks Research Center]

Transparent Use of Personal Data Critical to Election Integrity in UK

Editor’s note: The ISACA Now blog is featuring a series of posts on the topic of election data integrity. ISACA Now previously published a US perspective on the topic. Today, we publish a post from Mike Hughes, providing a UK perspective.

In some ways, the UK has less to worry about when it comes to protecting the integrity of election data and outcomes than some of its international counterparts. The UK election process is well established and has been proven over many years (centuries, even), and UK elections are still conducted in a very basic manner. Before an election, voters receive a poll card indicating the location where they should go to vote. On polling day, voters enter the location, provide their name and address, and are presented with a voting slip. They take this slip, enter the voting booth, pick up a pencil and put a cross in the box next to their candidate of choice. Voters then deposit this paper slip in an opaque box to be counted once polls close in the evening.

Pretty simple (and old-fashioned). Yet despite the UK’s relatively straightforward election procedures, the Political Studies Association reported in 2016 that the UK rated poorly on election integrity relative to several other established democracies in Western Europe and beyond. More recently, there have been strong suspicions that social media has been used to spread false information to manipulate political opinion, and therefore election results, especially during the campaigning phase. The Cambridge Analytica data misuse scandal, which has roiled both sides of the Atlantic, is one of the biggest examples, and election integrity has only become more of a top-of-mind concern in the UK since that 2016 report.

Rightly, steps are being taken to give the public greater peace of mind that campaigns and elections are being conducted fairly. In 2017, the Information Commissioner launched a formal inquiry into political parties’ use of data analytics to target voters, amid concerns that Britons’ privacy was being jeopardized by new campaign tactics. The inquiry has since broadened to become the largest investigation of its type by any data protection authority, involving social media platforms, data brokers, analytics firms, academic institutions, political parties and campaign groups. A key strand of the investigation centers on the links between Cambridge Analytica, its parent company SCL Elections Limited, and Aggregate IQ, and involves allegations that data obtained from Facebook may have been misused by both sides in the UK referendum on membership of the EU, as well as to target voters during the 2016 United States presidential election.

The investigation remains ongoing, but the Information Commissioner had committed to updating Parliament’s Digital, Culture, Media and Sport Select Committee before the summer recess to inform its work on the “Fake News” inquiry. A separate report, “Democracy Disrupted? Personal Information and Political Influence”, has been published, covering the policy recommendations arising from the investigation. These include an emphasis on the need for political campaigns to use personal data lawfully and transparently.

The social media powers should also draw upon their considerable resources to become part of the solution. Facebook, Google and Twitter have indicated that campaigns paying to place political adverts with them will have to include labels showing who paid for them. The companies also plan to publish their own online databases of the political adverts they have been paid to run, including information such as targeting, actual reach and the amount spent. They aim to publish these databases in time for the November 2018 midterm elections in the US, and Facebook has said it aims to publish similar data ahead of the local elections in England and Northern Ireland in May 2019.

All of these considerations are unfolding in an era when the General Data Protection Regulation has trained a bright spotlight on how enterprises are leveraging personal data. As a society, we have come to understand that while the big data era presents many unprecedented opportunities for individuals and organizations, the related privacy, security and ethical implications must be kept at the forefront of our policies and procedures.

As I stated at the start of this article, the UK’s election system is a well-proven, paper-based process that has changed very little over many, many years. One thing is certain: sometime in the not-too-distant future, our paper-based system will disappear and be replaced by a digital system. There will then be a need for a highly trusted digital solution that provides a high level of confidence that the system cannot be tampered with or manipulated. These systems aren’t there yet, but technologies such as blockchain may be the start of the answer. Technology-driven capabilities will continue to evolve, but our commitment to integrity at the polls must remain steadfast.
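
For readers curious what blockchain would contribute, here is a minimal sketch of the underlying idea: hash-chaining records so that any later tampering is detectable. It illustrates the data structure only and is in no way a design for a voting system:

```python
# Minimal hash chain: each entry commits to everything recorded before it,
# so editing any past entry is detectable. An illustration of the idea
# behind blockchain only; real voting systems need far more than this.
import hashlib
import json

def build_chain(records):
    """Link each record to the hash of the chain so far."""
    blocks, prev_hash = [], "0" * 64
    for record in records:
        payload = json.dumps({"prev": prev_hash, "data": record})
        prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        blocks.append({"data": record, "hash": prev_hash})
    return blocks

def verify_chain(blocks):
    """Recompute every link; any edited entry breaks the chain."""
    prev_hash = "0" * 64
    for block in blocks:
        payload = json.dumps({"prev": prev_hash, "data": block["data"]})
        if hashlib.sha256(payload.encode()).hexdigest() != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True

ledger = build_chain(["ballot: candidate A", "ballot: candidate B"])
print(verify_chain(ledger))                # True: untouched record checks out
ledger[0]["data"] = "ballot: candidate B"  # tamper with a recorded ballot
print(verify_chain(ledger))                # False: the change is detected
```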

Mike Hughes, past ISACA board director and partner with Haines Watts

Source: https://www.isaca.org/Knowledge-Center/Blog/Lists/Posts/Post.aspx?ID=1092

[ISACA Now Blog]
