Google’s GDPR Fine Reinforces Need for Intentional Data Governance

For those of us who work in information security, data privacy and governance, we seem to traverse daily from one headline to another. A new corporate victim announces they were breached to the tune of 100 million records. A regulatory body announces a financial and oversight settlement with a company for failure to adequately protect data. On and on we go.

Because of this constant onslaught, nobody was terribly surprised to hear about the €50 million fine leveled against Google by French data privacy regulators for violations of GDPR. We all knew a big enforcement action was coming, and that the early, large fines would be against a social media or tech giant. Check and check. But what does this mean to organizations on a broader scale?

As I draft this post on Data Privacy Day, trying to find the larger meaning in this first-of-many large fines, I am faced with many possibilities. Could the message be about regulatory muscle-flexing, or is it about corporate arrogance and gamesmanship? Is this a legitimate assertion of individual rights against a corporate giant, or is it an attack against a successful tech company and its profit model? In GDPR, are we looking at the shape of tomorrow’s global data environment, or are we seeing a regulatory trend that risks stifling innovation and “free” service delivery? Of course, the answer is all of the above.

The regulatory authorities across the EU who are charged with enforcing GDPR must, at some point, exercise their authority. No regulation can be effective until it is applied, tested and, ultimately, proven or defeated in practice. At the same time, some organizations may look at the details of the regulation and make a risk-based assessment that they have done enough to comply with their interpretation of the regulation, reasoning “We have taken some [less-than-perfect] actions, let’s see what happens.” The rights to one’s personal data are becoming more widely accepted as a given, but many consumers still are willing to casually or selectively trade some of those rights for convenience or services. With data privacy and security laws and regulators proliferating and evolving, data-centric business activities and profit models must be more carefully engineered and scrutinized. All of the above.

This recent and highly publicized enforcement activity is likely to spur additional compliance efforts from many organizations. Few can absorb a fine with that many zeros in it. On a strategic level, however, it may well contribute to the gradual paradigm shift away from the whack-a-mole approach to security and privacy regulations, and toward a philosophy of intentional data governance and strategy.

There are many financial and organizational benefits to proper data governance, including lower infrastructure costs, better litigation readiness, smaller cyberattack footprint, and better visibility for regulatory compliance. But sometimes it takes a negative result occurring to somebody else to make us ask the right questions and do the right things. Time will tell if a hefty fine is enough to move the behavioral needle for Google, or for the rest of us.

Editor’s note: For more on this topic, read “Maintaining Data Protection and Privacy Beyond GDPR Implementation.”

Andrew Neal, C|CISO, CISM, CRISC, CCFP, CIFI, LPI, President, Information Security & Compliance Services, TransPerfect Legal Solutions, and ISACA conference speaker

Source: https://www.isaca.org/Knowledge-Center/Blog/Lists/Posts/Post.aspx?ID=1139

[ISACA Now Blog]

The Need for Endpoint Protection in Critical Infrastructure

As cyberattacks against ICS and SCADA systems become commonplace, the need for robust endpoint protection grows. The rapid growth of the internet, with its ever-increasing appetite for data, has made it almost mandatory that information be available at all times. This hunger for data pushes corporations to provide connections to devices within their process control networks without fully understanding the potential consequences of such actions.

Reasons for the increase in attacks

Thanks to trends like the internet of things (IoT) and Industry 4.0, attacks against critical infrastructure are becoming more prolific and more targeted. This is evident in both the unsuccessful 2018 attack against a petrochemical company in Saudi Arabia and the infamously successful 2016 breach of the Ukraine power grid. Cyberattacks against critical infrastructure are becoming prevalent partly because of the growing number of network-connected, business-accessible devices, along with the demand for the data they generate. Combine this with the pressure on companies to do more with less staffing and more outsourcing as they attempt to lower yearly operational expense, and the potential for gaps in security grows – in some instances exponentially – resulting in a number of worst-case scenarios for operators. With employees and third parties requiring remote access for support, businesses face broader exposure of the environment, and missing or misconfigured security policies provide hackers with ideal attack vectors.

It has also come to light that critical infrastructure assets are becoming easier to find and identify, without any direct interaction from potential attackers. Using open source intelligence-gathering techniques, internet databases like Shodan, and geo-stalking, attackers are able to find these assets without exposing themselves or their intent – a clear example of too much information being readily available and unsecure.
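To show how little effort this discovery takes, here is a minimal sketch using the official shodan Python library; the API key is a placeholder, and the Modbus query is just one common example of an ICS-exposing search, not a claim about any specific incident. Run such searches only against assets you are authorized to assess.

```python
# Illustrative sketch only: enumerating internet-exposed ICS endpoints with
# the 'shodan' library, without ever touching the target systems directly.
import shodan

api = shodan.Shodan("YOUR_API_KEY")      # hypothetical placeholder key
results = api.search("port:502")         # 502/tcp is the standard Modbus port
print(f"{results['total']} exposed Modbus endpoints indexed")
for match in results["matches"][:5]:     # sample the first few hits
    print(match["ip_str"], match.get("org", "unknown org"))
```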

Regardless of the reason for the lapse in security, every breach of a control network shows us just how disruptive and dangerous these endpoints can be to our daily lives when under the control of those with malicious intent.

Why attack ICS and SCADA endpoints

Motives for attacking these systems can be grand in scope, ranging from corporate espionage intended to destroy a competitor’s brand to political aims, such as influencing the inner workings of a rival nation’s government. We also see attacks with more simplistic purposes, such as financial gain or a script kiddie proving they can take control and earning bragging rights. Regardless of the attacker’s motivation, protecting these critical infrastructure assets is of the utmost importance for the companies that run them and the community at large.

Current research into the matter shows that the number of vulnerabilities related to ICS and SCADA systems is doubling yearly. As of this year, the estimated number of identified critical infrastructure-related vulnerabilities is roughly 400, a number that will continue to grow given how these systems operate and the security challenges they create. Legacy operating systems and the high uptime mandates of these systems make them some of the most difficult to secure.

There is hope

Despite all the advancements attackers are making to breach and control critical infrastructure, it is possible to defend and protect these highly targeted assets.

True advanced endpoint protection must be capable of preventing known and unknown threats by leveraging features such as the following (sketched in code after this list):

  • Machine learning, which is capable of providing an instant verdict on an unknown executable before it runs on any of the systems in a process network.
  • Virtual sandboxing technology that can determine if an executable file is malicious before it executes on the machine.
  • Identifying software packages from vendors that are trusted in the environment and blocking those that are not.
  • Support for the various operating systems that control systems run, including some that are end-of-life.
  • Cloud-readiness.
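As a rough sketch of how the first three features might combine at the moment an unknown file tries to run, consider the following hypothetical Python pipeline. The signer allowlist, hash blocklist and 0.8 score threshold are invented for illustration; this is not any vendor’s actual product logic.

```python
# Hypothetical pre-execution verdict pipeline: trusted-vendor allowlisting,
# known-bad hash blocking, then an ML verdict for anything still unknown.
# All names, lists and the 0.8 threshold are illustrative assumptions.
import hashlib
from typing import Callable, Optional

TRUSTED_SIGNERS = {"Acme Controls GmbH", "Example SCADA Corp"}  # invented
KNOWN_BAD_SHA256 = {"0" * 64}  # placeholder digest of a known threat

def pre_execution_verdict(file_bytes: bytes,
                          signer: Optional[str],
                          ml_score: Callable[[bytes], float]) -> str:
    """Return 'allow' or 'block' before the executable is permitted to run."""
    if signer in TRUSTED_SIGNERS:            # trusted vendor package
        return "allow"
    digest = hashlib.sha256(file_bytes).hexdigest()
    if digest in KNOWN_BAD_SHA256:           # known threat
        return "block"
    # Unknown file: instant ML verdict instead of waiting for a signature update.
    return "block" if ml_score(file_bytes) >= 0.8 else "allow"

# Example: an unsigned, unknown binary scored by a stand-in model
print(pre_execution_verdict(b"\x4d\x5a...", None, lambda b: 0.93))  # -> block
```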

ICS/SCADA systems require advanced endpoint protection capable of disrupting known and unknown cyberattacks while not impacting production. The approach must be lightweight, scalable, innovative and capable of integrating both existing and new technologies while complementing other best practice procedures and offerings. Most importantly, it must be powerful and ICS/SCADA-friendly.

To learn how Palo Alto Networks can help operators of ICS and SCADA networks protect their critical infrastructure, download this whitepaper on advanced endpoint protection for ICS/SCADA systems.

[Palo Alto Networks Research Center]

Cloud Security, Yes – But Is AI Ready for Its Cybersecurity Spotlight?

In today’s world, speed, agility and scalability are essential for organizations and businesses if they want to become successful and stay relevant. On-premises IT can’t provide them with the speed, agility and scalability cloud environments can, so the continued embrace of cloud is inevitable.

Unfortunately, the same characteristics – speed, agility and scalability – also apply to the bad guys. We now see, for example:

  • Production of malware via sites that offer ransomware as a service
  • Proliferation of non-distributing multi-scanners
  • An explosion of available exploit kits based on cloud computing capabilities

These developments signify a serious need to change the approach to securing organizations.

Effective security can no longer rely on a point-product approach, for which acquisition, implementation and training might take weeks or even months. In the cloud era, that tactic is not viable because the manual use of these point products makes organizations slow and reactive. In other words, we simply cannot defend our organizations against highly sophisticated, automated and agile threats by using old-fashioned, non-automated and non-integrated security.

Cybersecurity technology companies understand this and have for some years been investing in cloud computing, including ways to secure cloud environments and deliver security via cloud-based services. An example of a cloud-delivered security service is a threat intelligence capability in the cloud, which uses the speed and scalability of the cloud model for its software analysis process and can deliver the protection needed within a very short time frame.

The core of what will make cloud computing capabilities continually useful is big data analytics. Without big data analytics, it’s impossible to apply machine learning, which is essential for automation and the required speed of operations. Unfortunately, the terms ‘big data analytics’, ‘machine learning’ and ‘artificial intelligence’ are often confused and used interchangeably. Several cybersecurity companies claim to use artificial intelligence for their services, but they probably mean big data analytics and machine learning. To explain this in simple terms, here are the definitions I use to clarify these terms (a toy illustration follows the list):

  • Big data analytics refers to analyzing large volumes of data with the aim of uncovering patterns and connections that might otherwise be invisible, and that might provide valuable insights.[1]
  • Machine learning is a software-development technique used to teach a computer to do a task without explicitly telling the computer how to do it.[2]
  • Artificial intelligence is software that becomes aware of its own existence and can make thoughtful decisions.[3]
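To make the machine learning definition concrete, here is a toy example (assuming scikit-learn is available; the features, labels and sample are invented). The classifier is never given a rule for what makes a file malicious; it infers one from labeled examples.

```python
# Toy example of machine learning as defined above: the classifier is never
# told the rule; it learns one from labeled examples. Data are invented.
from sklearn.tree import DecisionTreeClassifier

# Each sample: [file size in KB, number of suspicious API imports]
X = [[120, 0], [90, 1], [4500, 14], [3900, 11]]
y = [0, 0, 1, 1]                    # 0 = benign, 1 = malicious (hypothetical)

model = DecisionTreeClassifier().fit(X, y)
print(model.predict([[4100, 12]]))  # -> [1]: a learned verdict, not a hard-coded one
```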

How are big data analytics, machine learning, artificial intelligence or the combination of these capabilities best used to protect organizations from cyberattacks?

Unfortunately, there’s no silver bullet yet in this context, although machines can handle large amounts of data better and more quickly than humans (see the threat intelligence example above). The challenge is that AI, especially, is being over-marketed for cybersecurity, and the technology has its limitations: AI was never designed to work in adversarial environments. It works quite well in games like chess or Go, where the rules are well-defined and deterministic.[4] But in cybersecurity, those rules don’t apply, and the ‘bad guys’ are constantly evolving and adapting their techniques. At this moment, AI is less suitable because it cannot adapt to such a fast-moving, unpredictable environment. This will no doubt improve in the future.

Analyzing data kept in one place also means that place is a single point of failure. An attacker only needs to make subtle, almost unnoticeable changes to the data in that one location to undermine the way an AI algorithm works.[5] Therefore, it’s essential to understand how big data analytics, machine learning and AI work; recognize the limitations; and act accordingly, not on hype.
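Returning to that poisoning risk, here is a contrived sketch (again scikit-learn, with invented one-dimensional data): flipping a single training label shifts the learned boundary and silently changes the verdict on an unchanged input.

```python
# Contrived illustration of data poisoning: one flipped training label moves
# the decision boundary, changing the verdict on the same input.
from sklearn.tree import DecisionTreeClassifier

X = [[0.1], [0.2], [0.3], [0.7], [0.8], [0.9]]  # invented feature values
clean_y    = [0, 0, 0, 1, 1, 1]
poisoned_y = [0, 0, 0, 0, 1, 1]                 # label at 0.7 subtly flipped

clean    = DecisionTreeClassifier().fit(X, clean_y)
poisoned = DecisionTreeClassifier().fit(X, poisoned_y)

sample = [[0.72]]
print(clean.predict(sample), poisoned.predict(sample))  # [1] vs [0]
```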

In today’s world, the use of big data analytics, machine learning and AI provides several advantages in the cybersecurity domain – especially in the threat intelligence, behavioral analytics and cyber forensics areas – but there’s still a long way to go before we can completely rely on these capabilities in cybersecurity. When we get them right, we will truly maximize our investments in cloud.

  1. “Big Data Analytics,” Techopedia, accessed October 27, 2018. https://www.techopedia.com/definition/28659/big-data-analytics.
  2. Rick Howard, “The Business of AI and Machine Learning,” SecurityRoundtable.org, October 11, 2017, https://www.securityroundtable.org/the-business-of-ai-and-machine-learning/.
  3. Ibid.
  4. Jane Bird, “AI is not a ‘silver bullet’ against cyber attacks,” Financial Times, last modified September 25, 2018, https://www.ft.com/content/14cd2608-869d-11e8-9199-c2a4754b5a0e.
  5. Ibid.

Source: https://researchcenter.paloaltonetworks.com/2018/10/cloud-security-yes-ai-ready-cybersecurity-spotlight/

[Palo Alto Networks Research Center]

Transparent Use of Personal Data Critical to Election Integrity in UK

Editor’s note: The ISACA Now blog is featuring a series of posts on the topic of election data integrity. ISACA Now previously published a US perspective on the topic. Today, we publish a post from Mike Hughes, providing a UK perspective.

In some ways, the UK has less to worry about when it comes to protecting the integrity of election data and outcomes than some of its international counterparts. The UK election process is well-established and proven over many years (centuries, in fact), and therefore UK elections are generally conducted in a very basic manner. Before an election, voters receive a poll card indicating the location where they should go to vote. On polling day, voters enter the location, provide their name and address, and are presented with a voting slip. They take this slip, enter the voting booth, pick up a pencil and put a cross in the box next to their candidate of choice. Voters then deposit this paper slip in an opaque box to be counted once polls close in the evening.

Pretty simple (and old-fashioned). Yet, despite the UK’s relatively straightforward election procedures, the Political Studies Association reported in 2016 that the UK rated poorly on election integrity relative to several other established democracies in Western Europe and beyond. More recently, there are strong suspicions that social media has been used to spread false information to manipulate political opinion and, therefore, election results. Consider that one of the biggest examples is the Cambridge Analytica data misuse scandal that has roiled both sides of the Atlantic, and it is fair to say that election integrity has only become more of a top-of-mind concern in the UK since that 2016 report, especially during the campaigning phase.

Rightfully so, steps are being taken to provide the public greater peace of mind that campaigns and elections are being conducted fairly. In 2017, the Information Commissioner launched a formal inquiry into political parties’ use of data analytics to target voters amid concerns that Britons’ privacy was being jeopardized by new campaign tactics. The inquiry has since broadened and become the largest investigation of its type by any Data Protection Authority, involving social media online platforms, data brokers, analytics firms, academic institutions, political parties and campaign groups. A key strand of the investigation centers on the link between Cambridge Analytica, its parent company, SCL Elections Limited, and Aggregate IQ, and involves allegations that data, obtained from Facebook, may have been misused by both sides in the UK referendum on membership of the EU, as well as to target voters during the 2016 United States presidential election process.

The investigation remains ongoing, but the Information Commissioner needed to meet her commitment to provide Parliament’s Digital, Culture, Media and Sport Select Committee with an update on the investigation to inform its work on the “Fake News” inquiry before the summer recess. A separate report, “Democracy Disrupted? Personal Information and Political Influence,” has been published, covering the policy recommendations from the investigation. These include an emphasis on the need for political campaigns to use personal data lawfully and transparently.

Social media powers also should draw upon their considerable resources to become part of the solution. Facebook, Google and Twitter have indicated they will ensure that campaigns that pay to place political adverts with them will have to include labels showing who has paid for them. They also say that they plan to publish their own online databases of the political adverts that they have been paid to run. These will include information such as the targeting, actual reach and amount spent on those adverts. These social media giants are aiming to publish their databases in time for the November 2018 mid-term elections in the US, and Facebook has said it aims to publish similar data ahead of the local elections in England and Northern Ireland in May 2019.

All of these considerations are unfolding in an era when the General Data Protection Regulation has trained a bright spotlight on how enterprises are leveraging personal data. As a society, we have come to understand that while the big data era presents many unprecedented opportunities for individuals and organizations, the related privacy, security and ethical implications must be kept at the forefront of our policies and procedures.

As I stated at the start of this article, the UK’s election system is a well-proven, paper-based process that has changed very little over many, many years. One thing is certain: sometime in the not-too-distant future, our paper-based system will disappear and be replaced by a digital system. There will then be a need for a highly trusted digital solution that provides a high level of confidence that the system cannot be tampered with or manipulated. These systems aren’t there yet, but technologies such as blockchain may be the start of the answer. Technology-driven capabilities will continue to evolve, but our commitment to integrity at the polls must remain steadfast.

Mike Hughes, past ISACA board director and partner with Haines Watts

Source: https://www.isaca.org/Knowledge-Center/Blog/Lists/Posts/Post.aspx?ID=1092

[ISACA Now Blog]

Cloud Compliance: The Cheeseburger Principle

We spend our days talking with people about the need to apply security and compliance best practices in their cloud environment, and then helping them maintain automated visibility and remediation of vulnerabilities. We try to imprint on them the notion that security never stops; to truly have the best odds of keeping an environment secure, the effort must be continuous. To illustrate this point, our Chief Cloud Officer, Tim Prendergast, channeled his inner cheeseburger. Read on and you’ll see what I mean.

A Cheesy, Burger-y Metaphor: If you want a clean bill of health at your yearly medical checkup, you can’t eat cheeseburgers for 364 days out of the year and then the day before the checkup, eat a salad and expect to be told you’re in excellent shape. As much as I wish it did, the world doesn’t work like that, and it’s the same for cloud security and compliance.

It doesn’t make sense to ignore security controls, configurations, settings and other critical aspects of your cloud until the day before auditors come in to review. You could certainly do it, but you’d end up with an environment populated with bad actors and riddled with holes and ransomware. The truth is that anything other than continuous, automated compliance can result in three potential issues:

  1. The cloud (like your body) is a dynamic entity that is constantly changing. A snapshot of what it looked like yesterday isn’t necessarily what it looks like today, and because of that you need a way to monitor its evolution, its changes, and its state – always.
  2. Your compliance issues and responsibilities will continue to pile up as you ignore them – just as your blood pressure will edge ever upwards if you don’t get off the couch.
  3. You can’t escape what you’re supposed to do. Addressing your cloud (or your health, for that matter) only when it’s convenient hands an advantage to bad actors and brings negative consequences.

Look at it this way: without continuous automation, organizations really can’t prove any form of compliance in the cloud because they don’t have timely visibility into infrastructure configuration and workload risk. Timeliness is critical because of the constant change and dynamic nature of your cloud environment.
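To make “continuous and automated” concrete, here is a minimal sketch of the idea. It uses boto3’s real S3 calls, but the single public-ACL check and the 15-minute interval are simplifications chosen for illustration, not Evident’s actual implementation; real programs monitor many more controls.

```python
# Minimal continuous-compliance sketch: repeatedly scan S3 for publicly
# readable buckets and flag drift, rather than auditing once a year.
import time
import boto3

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def public_buckets(s3) -> list[str]:
    """Return names of buckets whose ACL grants access to all users."""
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        if any(g["Grantee"].get("URI") == ALL_USERS_URI for g in acl["Grants"]):
            flagged.append(bucket["Name"])
    return flagged

s3 = boto3.client("s3")
while True:                          # continuous, not point-in-time
    for name in public_buckets(s3):
        print(f"compliance drift: bucket {name} grants public access")
    time.sleep(900)                  # re-check every 15 minutes
```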

Not to worry, Tim is still going to have the occasional cheeseburger, and you should too. And even better, we can help you get started on your journey to compliance in the cloud.

View our webcast – Cloud Compliance is a Team Sport – here, where cloud security and compliance experts share practical advice for getting your cloud compliance program in the best shape possible, including how to automate time-intensive tasks to save your teams valuable time and allow them to focus on what matters to the business.

You can also get started measuring your cloud compliance now. Evident offers a simple, one-click compliance report that will show you how your cloud infrastructure measures up. Sign up for a trial here.

Source: https://researchcenter.paloaltonetworks.com/2018/10/cloud-compliance-cheeseburger-principle/

[Palo Alto Networks Research Center]
