Business, Compliance, Cyber Security, Data Privacy

Data Privacy law coming to California in 2020

Data privacy, specifically as it pertains to our lives online, has been a staple topic of discussion, particularly since the General Data Protection Regulation (GDPR) went into effect in the European Union in 2018. The United States is now poised to see a similar, albeit different, regulation in the state of California.

What is the California Consumer Privacy Act? 

The California Consumer Privacy Act (CCPA) was signed into law in June 2018 and takes effect on January 1, 2020.

The CCPA applies to all residents of California, providing them with the following:

  • The right to know what personal data is being collected;
  • The right to access their personal data;
  • The right to know whether this data is sold or disclosed, and to whom;
  • The right to opt out of the sale of personal data;
  • The right to equal service and prices, even if they exercise privacy rights.

Essentially, this bill grants the average person more control over the information that companies may collect on them, and it protects them from being denied service if they choose to exercise privacy rights. 

It is important to note how the CCPA defines “personal information”:

“Information that identifies, relates to, describes, is capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household such as a real name, alias, postal address, unique personal identifier, online identifier, Internet Protocol address, email address, account name, social security number, driver’s license number, passport number, or other similar identifiers.”

Another significant detail to be aware of is that the CCPA does not consider publicly available information to be personal information protected under the bill.

Who does the CCPA apply to?

  • Any for-profit entity that does business in California, collects consumers’ personal data, and meets at least one of the following three thresholds (illustrated in the sketch after this list):
    • Has annual gross revenues in excess of 25 million USD;
    • Possesses the personal information of more than 50,000 consumers, households, or devices; or
    • Earns more than half of its annual revenue from selling consumers’ personal information
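
To make these thresholds concrete, here is a minimal, purely illustrative Python sketch of the applicability check described above. The function and parameter names are our own inventions for the example, and this is not legal advice:

```python
# Purely illustrative sketch (not legal advice): a simplified reading of the
# CCPA applicability thresholds described above. Function and parameter names
# are our own inventions for the example.

def ccpa_may_apply(does_business_in_ca: bool,
                   is_for_profit: bool,
                   annual_gross_revenue_usd: float,
                   consumer_household_device_records: int,
                   share_of_revenue_from_selling_pi: float) -> bool:
    """Return True if any one of the three alternative thresholds is met."""
    if not (does_business_in_ca and is_for_profit):
        return False
    meets_revenue = annual_gross_revenue_usd > 25_000_000
    meets_volume = consumer_household_device_records > 50_000
    meets_selling = share_of_revenue_from_selling_pi > 0.5
    return meets_revenue or meets_volume or meets_selling

# Example: a small for-profit analytics firm operating in California that
# holds records on 120,000 consumers meets the volume threshold.
print(ccpa_may_apply(True, True, 4_000_000, 120_000, 0.1))  # True
```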

The words “digital privacy law” and “California” in the same sentence most likely conjure images of Silicon Valley companies. However, this regulation will probably not be world-changing for giants like Facebook and Google, due in part to their existing GDPR compliance, which makes it easier to meet regulations like the CCPA.

Some Similarities & Differences between the CCPA and GDPR

  • Similarities
    • Both regulations only protect natural persons (individuals) and not legal persons
    • Both require companies to demonstrate, after a data breach, that they took reasonable steps to protect that data
    • Both apply to organizations that might not have a presence in their respective jurisdictions but offer goods or services in the region
  • Differences
    • CCPA
      • Requires companies to provide an opt-out to data sharing
      • Penalties are assessed as fines on a per-violation basis
      • Does not cover certain categories of personal data (e.g. health information) because they are already covered under different US regulations, such as HIPAA.
      • Does not require organizations to hire data protection officers or conduct impact assessments
      • Protects a “consumer” who is “a natural person who is a California resident”
      • Obligations apply specifically to “businesses” that are for-profit, collect consumer personal information and meet certain qualifying thresholds as mentioned above
      • Also applies to any entity that controls or is controlled by the business in question. No obligations are directed specifically at “service providers”
    • GDPR
      • Requires companies to provide an opt-in to data sharing
      • Penalties focus on up to 4% of annual global revenue
      • Protects a “data subject” who is “an identified or identifiable natural person” – meaning that an individual does not specifically need EU residency or citizenship if the controller processing their data is located in the EU. If the controller is located outside the EU, the citizenship/residence condition applies
      • Obligations apply to “controllers”, which could be natural or legal persons in addition to business entities, whether the activity is for-profit or not
      • Obligations also apply to processors, which are entities that process personal data on behalf of controllers.

If you have any questions or comments about whether CCPA will impact you or your business, you can always reach out to our team at Centry for help by emailing info@centry.global! 

Business, Compliance, Cyber Security, Data Privacy, Information Security

Google fined €50 million for data privacy violations; what can we learn from it?

On January 21, 2019, Google was fined €50 million for violating the European Union’s General Data Protection Regulation (GDPR) by the French supervisory authority for data protection (CNIL).

GDPR, which went into effect on May 25, 2018, was designed to provide individuals in the EU with greater control over their personal data, including rights over what data they choose to share and how that information is retained by organizations. It required organizations that collect this data to obtain “clear and affirmative consent” for data collection, to provide privacy policies in clear and understandable language, to inform individuals when their data is compromised, to allow them to transfer their data to other organizations, and to honor the “right to be forgotten”, the ability to request that their data be deleted.

The fifty-million-euro fine facing Google was the moment the data privacy industry had been waiting for, as GDPR had long promised steep costs for those found to be in violation of its data privacy rules.

CNIL reported that Google failed to fully disclose to users how their personal information is collected and what happens to it, in addition to not properly obtaining user consent for displaying personalized ads.

Although Google had made changes to comply with GDPR, the CNIL said in a statement that “the infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services, and almost unlimited possible combinations.” They added that the violations were continuous breaches of the regulation, and “not a one-off, time-limited infringement.”

CNIL began investigating Google on the day GDPR took effect, in response to concerns raised by two privacy activist groups, None of Your Business and La Quadrature du Net. These groups filed complaints with CNIL, claiming that Google did not have a valid legal basis under GDPR to process personal data for personalized and targeted ads.

These groups have also filed privacy complaints against Facebook and its subsidiaries, including Instagram and WhatsApp.

In the course of its investigation, CNIL found that when users created Google accounts on Android smartphones, the company’s practices violated GDPR in two ways: transparency and the legal basis for ads personalization. CNIL additionally found that the notices Google provided to users about what type of information it sought were not easily accessible.

A key principle of GDPR is that users should be able to easily locate and fully understand the extent of the data processing operations carried out by organizations. Here Google was found wanting: CNIL described its terms as “too generic and vague in manner.” Overall, CNIL concluded that “the information communicated is not clear enough” for the average user to understand that consent is the legal basis of the processing operations behind ads personalization. In other words, if you do not provide consent to have your information processed for personalized ads, the company legally cannot do it. Per GDPR, consent for data processing must be “unambiguous” with “clear affirmative action from the user.”

Google responded to the fine with a statement affirming its commitment to meeting transparency expectations and consent requirements of GDPR, and that it is “studying the decision to determine our next steps.”

Despite the fact that companies were given a two-year window to comply with the regulation, many were still not compliant when it went into effect on May 25, 2018. Other companies made only limited efforts to become compliant, choosing to wait until the first major fine was issued to see how serious enforcement would be. If some of these companies were hoping to get a pass from another national data protection authority, those authorities’ decisions will now almost certainly be assessed against CNIL’s approach.

Consent requirements seem to be the greatest obstacle for companies struggling with GDPR, especially where transparency and accessibility are concerned. Under GDPR, companies cannot hand out a single consent form covering a bundle of data uses. That has long been standard industry practice, and it is part of the reason that Google was found to be in violation of GDPR.

To avoid facing similar fines in the future, companies should review how they obtain consent to collect users’ personal information. Each use of data requires its own consent: users must be able to agree or decline separately to each way their information will be used.
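
As a rough illustration of what per-purpose consent could look like in practice, the sketch below (our own simplified example, not a prescribed GDPR implementation) records each processing purpose separately and defaults to no consent:

```python
# Illustrative sketch: per-purpose consent recorded separately for each use of
# a user's data. The purposes and field names are hypothetical examples.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    purpose: str        # e.g. "ads_personalization", "analytics"
    granted: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def may_process(records, user_id: str, purpose: str) -> bool:
    """Process data for a purpose only if the user explicitly opted in."""
    relevant = [r for r in records if r.user_id == user_id and r.purpose == purpose]
    # The most recent decision wins; the default is no consent (no bundled opt-in).
    return max(relevant, key=lambda r: r.timestamp).granted if relevant else False

consents = [
    ConsentRecord("u123", "analytics", True),
    ConsentRecord("u123", "ads_personalization", False),
]
print(may_process(consents, "u123", "ads_personalization"))  # False
print(may_process(consents, "u123", "analytics"))            # True
```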

This level of transparency is essential, and it requires a departure from previously accepted business practices.

If you have any questions or comments pertaining to GDPR or this article, feel free to contact us at info@centry.global. Be sure to follow us on Twitter @CentryGlobal and subscribe for more content like this!

This article was written by Kristina Weber, Content Manager of Centry Global.

Business, Compliance, Cyber Security, Data Breach, Information Security, Security

The Future of AI, Security, & Privacy

Artificial Intelligence is a subject that is not just for researchers and engineers; it is something everyone should be concerned with.

Martin Ford, author of Architects of Intelligence, describes his findings on the future of AI in an interview with Forbes.

The main takeaway from Ford’s research, which included interviews with more than twenty experts in the field, is that everyone agrees that the future of AI is going to be disruptive. Not everyone agrees on whether this will be a positive or negative disruption, but the technology will have a massive impact on society nonetheless.

Most of the experts concluded that the most real and immediate threats are going to be to cyber security, privacy, political systems, and the possibility of weaponizing AI.

AI is a very useful tool for gathering information, owing to its speed, the scale of data it can process, and, of course, the automation it enables. It is the most efficient way to process a large volume of information in a short time frame, as it can work faster than human analysts. That said, it comes with drawbacks: we have started to see that its algorithms are not immune to gender and race bias in areas such as hiring and facial recognition software. Ford suggests that regulation is necessary in the immediate future, which will require a continuing conversation about AI in the political sphere.

AI-based consumer products are vulnerable to data exploitation, and that risk has only risen as we have become more dependent on digital technology in our day-to-day lives. AI can be used to identify and monitor user habits across multiple devices, even if your personal data is anonymized when it becomes part of a larger data set. Anonymized data can be sold to anyone for any purpose; the idea is that since the data has been scrubbed, it cannot be used to identify individuals and is therefore safe to use for analysis or sale.

However, between open source information and increasingly powerful computing, it is now possible to re-identify anonymized data. The reality is that you don’t need that much information about a person to be able to identify them. For example, much of the population of the United States can be identified by the combination of their date of birth, gender, and zip code alone.
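
As a concrete illustration of this linkage risk, the following sketch uses fabricated data to show how an “anonymized” record can be matched back to a name by joining on date of birth, gender, and ZIP code:

```python
# Illustrative sketch with fabricated data: linking an "anonymized" record
# back to an identity via quasi-identifiers (birth date, gender, ZIP code).

# A public or leaked dataset that includes names (e.g. a voter roll).
public_records = [
    {"name": "A. Smith", "dob": "1984-03-02", "gender": "F", "zip": "94105"},
    {"name": "B. Jones", "dob": "1990-07-19", "gender": "M", "zip": "94110"},
]

# An "anonymized" dataset with names removed but quasi-identifiers intact.
anonymized = [
    {"dob": "1984-03-02", "gender": "F", "zip": "94105", "diagnosis": "asthma"},
]

def reidentify(anon_row, public_rows):
    """Return public records that match on all three quasi-identifiers."""
    keys = ("dob", "gender", "zip")
    return [p for p in public_rows if all(p[k] == anon_row[k] for k in keys)]

for row in anonymized:
    matches = reidentify(row, public_records)
    if len(matches) == 1:  # a unique match re-identifies the person
        print(matches[0]["name"], "->", row["diagnosis"])  # A. Smith -> asthma
```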

With consent-based regulations such as GDPR addressing the right to digital privacy, it is clear that people want to know how their information is used, why, and how it can affect their lives, and that they want control over how it is used.

This article was written by Kristina Weber, Content Supervisor of Centry Ltd. For more content like this, be sure to subscribe to our blog, which updates every other Friday with articles related to the security industry!

Business, Cyber Security, Data Breach, Information Security, Risk Management, Security

Security Predictions for 2019

The predictions for 2018 that we shared last year landed on the points of data protection and cyber security, while straying from others, most notably cryptocurrencies. Bitcoin was a hot topic in 2017, surging to values that had people everywhere kicking themselves for not investing sooner. What followed was an epidemic of articles predicting a global acceptance of cryptocurrencies. That balloon popped when the cryptocurrency market crashed in early 2018, and many have quietly walked back their cryptocurrency hype since.

Continuing the tradition, here are a few insights into the forecast for 2019:

Supply Chain Attacks. While these attacks can occur in any sector of the economy with a supply chain, the industries that most commonly experience them include pharmaceuticals, biotechnology, hospitality, entertainment, and media. Manufacturing operations are attractive targets to adversaries, due in part to their broad potential attack surface. With increasing reliance on the supply chain, there is a wealth of information that could be obtained from organizations that have not taken appropriate steps to secure themselves. For more information on cyber security in the supply chain, read our article here.

Further development of consumer privacy laws. Last year saw the launch of the European Union’s GDPR, which marked the first major regulatory move toward protecting consumer information. Soon after, California passed its own version, the California Consumer Privacy Act of 2018, which is slated to go into effect on January 1, 2020. A draft federal privacy bill for the United States may arrive early in 2019 following concerns over a number of privacy breaches.

Continuing adoption of artificial intelligence across wider society. From Alexa to politics, AI will continue to spread across industries and uses. Chinese companies have announced intentions to develop their own AI processing chips to avoid reliance on US manufacturers such as Intel and Nvidia. There is rising concern that AI technology could increasingly be used by authoritarian regimes to restrict personal freedoms. As AI continues to spread its proverbial wings, we could see a move toward “transparent AI”: an effort to gain consumer trust by being clear about how AI uses human data and why. There is, of course, the worry that the rise of AI will create a jobless future; Gartner, however, suggests the opposite, predicting that artificial intelligence will create more jobs than it eliminates.

Big data breaches will push companies to tighten login security. We might see a concerted effort by the security industry to replace username/password logins altogether, pushing toward an alternative solution as an industry standard. Biometrics, such as facial recognition or fingerprint logins, are certainly on the rise.

Digital skimming will become more prevalent. The trick of card skimming has moved to the digital world, where attackers go after websites that process payments. The growth of online shopping has made checkout pages attractive targets; British Airways and Ticketmaster were two high-profile cases of this. The British Airways case was particularly alarming, as airlines in general have access to a wide breadth of information, ranging from birth dates and passport details to payment information and more. Although the airline was able to confirm that no travel data was stolen in the attack, it nonetheless remains a cautionary tale.

This article was written by Kristina Weber. For more content like this, be sure to subscribe to Centry Blog for bi-weekly articles related to the security industry. Follow us on Twitter @CentryLTD and @CentryCyber!