Google’s GDPR Fine Reinforces Need for Intentional Data Governance

For those of us who work in information security, data privacy and governance, we seem to traverse daily from one headline to another. A new corporate victim announces they were breached to the tune of 100 million records. A regulatory body announces a financial and oversight settlement with a company for failure to adequately protect data. On and on we go.

Because of this constant onslaught, nobody was terribly surprised to hear about the €50 million fine leveled against Google by French data privacy regulators for violations of GDPR. We all knew a big enforcement action was coming, and that the early, large fines would be against a social media or tech giant. Check and check. But what does this mean for organizations on a broader scale?

As I draft this post on Data Privacy Day, trying to find the larger meaning in this first-of-many large fines, I am faced with many possibilities. Could the message be about regulatory muscle-flexing, or is it about corporate arrogance and gamesmanship? Is this a legitimate assertion of individual rights against a corporate giant, or is it an attack against a successful tech company and its profit model? In GDPR, are we looking at the shape of tomorrow’s global data environment, or are we seeing a regulatory trend that risks stifling innovation and “free” service delivery? Of course, the answer is all of the above.

The regulatory authorities across the EU who are charged with enforcing GDPR must, at some point, exercise their authority. No regulation can be effective until it is applied, tested and, ultimately, proven or defeated in practice. At the same time, some organizations may look at the details of the regulation and make a risk-based assessment that they have done enough to comply with their interpretation of the regulation, reasoning “We have taken some [less-than-perfect] actions, let’s see what happens.” The rights to one’s personal data are becoming more widely accepted as a given, but many consumers still are willing to casually or selectively trade some of those rights for convenience or services. With data privacy and security laws and regulators proliferating and evolving, data-centric business activities and profit models must be more carefully engineered and scrutinized. All of the above.

This recent and highly publicized enforcement activity is likely to spur additional compliance efforts from many organizations. Few can absorb a fine with that many zeros in it. On a strategic level, however, it may well contribute to the gradual paradigm shift away from the whack-a-mole approach to security and privacy regulations, and toward a philosophy of intentional data governance and strategy.

There are many financial and organizational benefits to proper data governance, including lower infrastructure costs, better litigation readiness, smaller cyberattack footprint, and better visibility for regulatory compliance. But sometimes it takes a negative result occurring to somebody else to make us ask the right questions and do the right things. Time will tell if a hefty fine is enough to move the behavioral needle for Google, or for the rest of us.

Editor’s note: For more on this topic, read “Maintaining Data Protection and Privacy Beyond GDPR Implementation.”

Andrew Neal, C|CISO, CISM, CRISC, CCFP, CIFI, LPI, President, Information Security & Compliance Services, TransPerfect Legal Solutions, and ISACA conference speaker

Source: https://www.isaca.org/Knowledge-Center/Blog/Lists/Posts/Post.aspx?ID=1139

[ISACA Now Blog]

Transparent Use of Personal Data Critical to Election Integrity in UK

Editor’s note: The ISACA Now blog is featuring a series of posts on the topic of election data integrity. ISACA Now previously published a US perspective on the topic. Today, we publish a post from Mike Hughes, providing a UK perspective.

In some ways, the UK has less to worry about when it comes to protecting the integrity of election data and outcomes than some of its international counterparts. The UK election process is well-established and proven over many years (indeed, centuries), and UK elections are therefore conducted in a very basic manner. Before an election, voters receive a poll card indicating the location where they should go to vote. On polling day, voters enter the location, provide their name and address, and are presented with a voting slip. They take this slip, enter the voting booth, pick up a pencil and put a cross in the box next to their candidate of choice. Voters then deposit this paper slip in an opaque box to be counted once polls close in the evening.

Pretty simple (and old-fashioned). Yet, despite the UK’s relatively straightforward election procedures, the Political Studies Association reported in 2016 that the UK rated poorly in election integrity relative to several other established democracies in Western Europe and beyond. More recently, there have been strong suspicions that social media has been used to spread false information to manipulate political opinion and, therefore, election results. The Cambridge Analytica data misuse scandal that has roiled both sides of the Atlantic is among the biggest examples, and it is fair to say that election integrity, especially during the campaigning phase, has only become more of a top-of-mind concern in the UK since that 2016 report.

Rightfully so, steps are being taken to give the public greater peace of mind that campaigns and elections are being conducted fairly. In 2017, the Information Commissioner launched a formal inquiry into political parties’ use of data analytics to target voters amid concerns that Britons’ privacy was being jeopardized by new campaign tactics. The inquiry has since broadened to become the largest investigation of its type by any Data Protection Authority, involving social media platforms, data brokers, analytics firms, academic institutions, political parties and campaign groups. A key strand of the investigation centers on the links between Cambridge Analytica, its parent company, SCL Elections Limited, and Aggregate IQ, and involves allegations that data obtained from Facebook may have been misused by both sides in the UK referendum on membership of the EU, as well as to target voters during the 2016 United States presidential election.

The investigation remains ongoing, but the Information Commissioner needed to meet her commitment to provide Parliament’s Digital, Culture, Media and Sport Select Committee with an update on the investigation, to inform its “Fake News” inquiry before the summer recess. A separate report, “Democracy Disrupted? Personal Information and Political Influence,” has been published, covering the policy recommendations arising from the investigation. These include an emphasis on the need for political campaigns to use personal data lawfully and transparently.

Social media companies should also draw upon their considerable resources to become part of the solution. Facebook, Google and Twitter have indicated they will ensure that campaigns that pay to place political adverts with them will have to include labels showing who has paid for them. They also say that they plan to publish their own online databases of the political adverts that they have been paid to run. These will include information such as the targeting, actual reach and amount spent on those adverts. These social media giants are aiming to publish their databases in time for the November 2018 mid-term elections in the US, and Facebook has said it aims to publish similar data ahead of the local elections in England and Northern Ireland in May 2019.

All of these considerations are unfolding in an era when the General Data Protection Regulation has trained a bright spotlight on how enterprises are leveraging personal data. As a society, we have come to understand that while the big data era presents many unprecedented opportunities for individuals and organizations, the related privacy, security and ethical implications must be kept at the forefront of our policies and procedures.

As I stated at the start of this article, the UK’s election system is a well-proven, paper-based process that has changed very little over many, many years. One thing is certain: sometime in the not-too-distant future, our paper-based system will disappear and be replaced by a digital system. There will then be a need for a highly trusted digital solution that provides a high level of confidence that the system cannot be tampered with or manipulated. These systems aren’t there yet, but technologies such as blockchain may be the start of the answer. Technology-driven capabilities will continue to evolve, but our commitment to integrity at the polls must remain steadfast.

Mike Hughes, past ISACA board director and partner with Haines Watts

Source: https://www.isaca.org/Knowledge-Center/Blog/Lists/Posts/Post.aspx?ID=1092

[ISACA Now Blog]

Workforce Study Methodology and Defining the Gap

2,930,000

That is the size of the global cybersecurity workforce gap. The breakdown is around 498,000 in North America, 136,000 in Latin America, 142,000 in Europe, the Middle East and Africa, with the largest deficit coming in Asia Pacific at 2.14 million. But what does this big, scary number even mean? Where did it come from?

First, this new Cybersecurity Workforce Study from (ISC)² has evolved from past studies to become a more accurate representation of the broader workforce. We surveyed nearly 1,500 professionals around the world who spend at least 25% of their time on cybersecurity activities, which includes IT/ICT professionals who previously may not have been considered part of the cyber workforce.

To ensure our numbers were accurate and representative, we worked with our research partner (Spiceworks) to develop a rigorous sample design for each region. The sample within each country was controlled to ensure a mix of company sizes and industries. Some statistically significant differences between regions noted in the report may be due to regional differences in scale usage.

With a more precise look at who is actually doing the work, we also changed how the gap itself is calculated. Some legacy gap calculations simply subtracted supply from demand, which failed to consider relevant factors like organizational growth.

For demand, we start by calculating the current percentage of organizations with job openings – this represents the expected share of organizations that will have hiring demand. Among organizations surveyed, most (83%) indicated that they had open cybersecurity positions. Next, average hires are estimated. To make the number more precise, we used information across company sizes and combined these estimates to extrapolate future staffing needs for the total market (all business entities) using data from various government sources.1

Our calculation of supply includes new entrants to the field – academic and nonacademic alike – linked to secondary market data. We also took into account the number and rate of professionals who historically have shifted into roles with more cybersecurity responsibilities, combining primary survey data with secondary market data.²
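The demand-and-supply arithmetic described above can be sketched in a few lines. Every figure below is a hypothetical placeholder, not one of the study's actual inputs, which come from survey responses and the secondary sources cited in the footnotes:

```python
# Illustrative sketch of the workforce gap calculation described above.
# All input figures are hypothetical; the study derives its real inputs
# from survey data and secondary market/government sources.

def estimate_demand(total_orgs: int, share_hiring: float,
                    avg_openings_per_org: float) -> float:
    """Expected open positions across the whole market:
    (number of organizations) x (share with openings) x (average openings)."""
    return total_orgs * share_hiring * avg_openings_per_org

def estimate_supply(new_entrants: int, role_shifters: int) -> int:
    """New entrants to the field plus professionals shifting
    into roles with more cybersecurity responsibilities."""
    return new_entrants + role_shifters

demand = estimate_demand(total_orgs=500_000, share_hiring=0.83,
                         avg_openings_per_org=2.5)
supply = estimate_supply(new_entrants=400_000, role_shifters=250_000)
print(f"Estimated gap: {demand - supply:,.0f}")  # -> Estimated gap: 387,500
```

The point of the sketch is the structure, not the numbers: because demand is modeled from the share of organizations hiring and their expected openings, it captures growth-driven need rather than only counting posted vacancies.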

What Does the Gap Mean?

Our research does not suggest that there are 2.93 million cybersecurity job postings open right now. The (ISC)² Cybersecurity Workforce Study gap is an assessment of the demand for skilled cyber professionals, based on input from the cybersecurity workforce on the front lines every day. But what does this actually mean for the workforce?

Globally, 37% of professionals stated that the lack of skilled and experienced cybersecurity personnel was their top job concern. Additionally, 63% of respondents said their organizations have a shortage of staff dedicated to cybersecurity, and 60% said their organizations were at moderate or extreme risk of cyberattacks due to that shortage.

Our industry is painfully aware of the challenges organizations face when staffing qualified cyber teams. The purpose of measuring and sharing the gap is not to shout that the sky is falling, but rather to build awareness of the need for talent and training, advocate for solutions that will benefit the workforce, and ultimately inspire a safe and secure cyber world.

For more on the impact of the gap, and other findings from the study, download the full report.

1: U.S. Census Bureau – Geography Area Series: County Business Patterns by Employment Size Class (2015 Business Patterns), Statistics Canada, Instituto Nacional De Estadistica Y Geografia (Mexico), Office for National Statistics (U.K.), EU Commission and Statistisches Bundesamt (Destatis) (France and Germany), Australian Bureau of Statistics, Statistics Bureau, Ministry of Internal Affairs and Communications (Japan)

²: Wharton School of the University of Pennsylvania, CompTIA Cyberstates 2018, Cybersecurity Workforce Alliance (Australia)

Source: https://blog.isc2.org/isc2_blog/2018/10/workforce-study-methodology-and-defining-the-gap.html

[(ISC)² Blog]

Key Considerations for Assessing GDPR Compliance

The European Union General Data Protection Regulation (GDPR), which took full effect in May this year, solidifies the protection of data subjects’ “personal data,” harmonizes the data privacy laws across Europe and protects and empowers EU citizens’ data privacy, in addition to changing the way data is managed and handled by organizations.

GDPR affects people across the globe. Its scope is wide-ranging and can apply to many global institutions with operations in Europe. GDPR has also given data regulators more power, backed by severe potential financial penalties for non-compliance (a maximum of 4 percent of annual global turnover or €20 million, whichever is higher).
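The penalty cap in that parenthetical can be expressed as a one-line calculation. This is purely illustrative: actual fines are set case by case by regulators, not computed by formula.

```python
# Sketch of the GDPR upper-tier administrative fine cap (Article 83(5)):
# the higher of EUR 20 million or 4% of total worldwide annual turnover.
# Illustrative only -- regulators set actual fines case by case.

def gdpr_max_fine(annual_global_turnover_eur: float) -> float:
    """Return the maximum possible upper-tier fine in euros."""
    return max(20_000_000, 0.04 * annual_global_turnover_eur)

# A company with EUR 2 billion turnover faces a cap of 4% of turnover:
print(f"{gdpr_max_fine(2_000_000_000):,.0f}")  # -> 80,000,000
# A smaller company is still exposed to the EUR 20 million floor:
print(f"{gdpr_max_fine(100_000_000):,.0f}")  # -> 20,000,000
```

Note that the €20 million floor means smaller organizations cannot assume their exposure scales down with their revenue.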

A few of the key things to know about GDPR are:

  • The regulation governs how institutions collect, record, use, disclose, store, alter, disseminate, and process the personal data of individuals in the EU.
  • If a breach involves personal data, the Data Protection Authorities must be notified within 72 hours.
  • It governs the rights of data subjects, including rights to access, rectification, erasure, restriction of processing, data portability, and rights in relation to automated decision-making and profiling.

How do I assess my GDPR compliance?
All of these are essential reasons for institutions to ensure that the proper governance and tactical steps are taken to comply with GDPR. The GDPR Audit Program Bundle developed by ISACA does just this by providing institutions with a guide for assessing, validating, and reinforcing the GDPR requirements by which institutions must abide. The audit program provides enterprises with a baseline focused on several key areas and their respective sub-processes, covering all key components of GDPR, including:

  • Data governance
  • Acquiring, identifying and classifying personal data
  • Managing personal data risk
  • Managing personal data security
  • Managing the personal data supply chain
  • Managing incidents and breaches, and creating and maintaining awareness
  • Establishing a data privacy organization within your institution

Also included are key testing steps, covering control category types and frequency, to help facilitate effective discussion and analysis as they fit your institution. The important thing to remember is that there is no single right way to become GDPR-compliant. However, a robust and thorough review of your institution’s data processing environment is required to establish a proper baseline for assessing compliance and successfully executing a GDPR compliance program.

Editor’s note: ISACA has addressed both general and particular audit perspectives for GDPR through its new GDPR Audit Program Bundle. Download the audit program bundle here. Access a complimentary white paper, “How To Audit GDPR,” here.

Mohammed J. Khan, CISA, CRISC, CIPM, Global Audit Head – IT, Privacy, Medical Device Cybersecurity

Source: https://www.isaca.org/Knowledge-Center/Blog/Lists/Posts/Post.aspx?ID=1091

[ISACA Now Blog]

The Path to Improved Cybersecurity Culture

The recent ISACA-CMMI Institute cybersecurity culture research illustrates the accomplishments and gaps that are seen in organizations’ cybersecurity culture. The survey-driven research focuses on culture and continuous improvement, both essential components to a successful cyber risk management program.

In this blog post, I will highlight some of the survey’s findings and then discuss ways you can improve your organization’s cybersecurity culture.

Some positive steps I noticed:

  • 75% of organizations are getting management more involved with cybersecurity culture
  • Most organizations can identify business benefits realized through better cybersecurity
  • 87% think that better cybersecurity would improve profitability or viability

Some gaps:

  • 60% of organizations do not have very successful employee buy-in
  • 42% of firms do not have a cybersecurity culture plan
  • 55% think the CISO owns cybersecurity culture

Achieving a strong cybersecurity culture requires action on many fronts: people, process, technology and outside partners. Culture is people and process; technology and outside partners are supporting players. Details matter. It is great that most organizations are getting management more involved. However, it is important that the C-level regularly communicates the importance of security to management and employees; a single annual communication to all employees will not work.

Continuous, incremental improvement is vital. In fact, the root of the word “culture” is “to grow.” Incremental improvement applies to both overall culture and specific elements, like risk management. An effective risk management program is the basis for a good cybersecurity culture.

What factors inhibit continuous improvement of risk management programs (and the associated cybersecurity culture)? Humans can grow, but do not accept dire reports of impending disaster – think of Cassandra and the Trojan Horse. Humans may, however, accept incremental adjustments in risk awareness or mitigations. Another reason risk management programs fail to get support is that the CISO is not seen as a “business partner” by other top executives. A promising metric for me was that 87% of respondents believe that better security can lead to better business outcomes. CISOs need to speak in terms of business benefits in order to be business partners with other CXOs. CISOs also need to build personal relationships with their C-level peers.

Process is the next critical piece of the cultural puzzle. I am not talking about cybersecurity processes like “patch management” or “privileged identity management”; I am referring to the processes used to build a cybersecurity culture. One thing I noticed in the survey is that 55% of respondents think the CISO is responsible for corporate cybersecurity culture, and only 6% assign this to HR. I believe that any cultural change must be supported by a partnership involving HR or other “people-focused” centers of influence. Cybersecurity culture is really no different from any other type of culture, and established cultural transformation processes can be harnessed for cybersecurity. Businesses have been changing and reviving cultures for years; there is no need to reinvent the wheel.

One resource for cultural transformation is John Kotter’s eight-step model for transformation. Cultural change is the last step in the transformation process. It is preceded by defining a sense of urgency, forming a powerful coalition and five additional enabling steps. Another model for organizational change is Jay Galbraith’s Star model. He highlights the five functions needed in designing an organization: strategy, structure, processes, rewards and people.

These functions can be utilized to create or transform the security organization and culture that you want in your business.

Frederick Scholl, Ph.D., Cyber Security Program Director, Quinnipiac University

Source: https://www.isaca.org/Knowledge-Center/Blog/Lists/Posts/Post.aspx?ID=1089

[ISACA Now Blog]
