Business, Compliance, Cyber Security, Data Privacy, Information Security

Google fined €50 million for data privacy violations; what can we learn from it?

On January 21, 2019, the French data protection authority (CNIL) fined Google €50 million for violating the European Union’s General Data Protection Regulation (GDPR).

GDPR, which went into effect on May 25, 2018, was designed to give EU citizens greater control over their personal data: what they choose to share and how organizations retain it. It requires organizations that collect the data of EU citizens to obtain “clear and affirmative consent” for data collection, write privacy policies in clear and understandable language, inform individuals when their data is compromised, allow them to transfer their data to other organizations, and honor the “right to be forgotten”, the right to request that one’s data be deleted.

The fifty-million-euro fine was the moment the data privacy industry had been waiting for, as GDPR had long promised steep costs for those found in violation of its data privacy rules.

CNIL reported that Google failed to fully disclose to users how their personal information is collected and what happens to it, in addition to not properly obtaining user consent for displaying personalized ads.

Although Google had made changes to comply with GDPR, the CNIL said in a statement that “the infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services, and almost unlimited possible combinations.” They added that the violations were continuous breaches of the regulation, and “not a one-off, time-limited infringement.”

CNIL began investigating Google on the day of the GDPR deadline in response to concerns raised by two privacy activist groups, None of Your Business and La Quadrature du Net. These groups filed complaints with CNIL, claiming that Google did not have a valid legal basis under GDPR to process personal data for personalized and targeted ads.

These groups have also filed privacy complaints against Facebook and its subsidiaries, including Instagram and WhatsApp.

In the course of its investigation, CNIL found that when users created Google accounts on Android smartphones, the company’s practices violated GDPR in two areas: transparency and the legal basis for ads personalization. CNIL additionally found that the notices Google provided to users about what type of information it sought were not easily accessible.

A fundamental principle of GDPR is that users must be able to easily locate and fully understand the extent of the data processing operations carried out by organizations. Here Google was found wanting: its terms were described as “too generic and vague in manner.” Overall, CNIL concluded that “the information communicated is not clear enough” for the average user to understand that the legal basis of processing operations for ads personalization is consent. In other words, if you do not consent to having your information processed for personalized ads, the company legally cannot do it. Per GDPR, consent for data processing must be “unambiguous” with “clear affirmative action from the user.”

Google responded to the fine with a statement affirming its commitment to meeting transparency expectations and consent requirements of GDPR, and that it is “studying the decision to determine our next steps.”

Although companies were given a two-year window to comply with the regulation, many were not compliant by the time it took effect on May 25, 2018. Others made only limited efforts, choosing to wait for the first major fine to see how serious enforcement would be. Any company hoping for a pass from another national data protection authority should expect that authority’s decisions to be critically assessed against CNIL’s approach.

Consent seems to be the greatest obstacle for companies struggling with GDPR’s requirements, especially where transparency and accessibility are concerned. Under GDPR, companies cannot hand out a single consent form covering a bundle of data uses. That had long been standard industry practice, and it is part of the reason Google was found to be in violation of GDPR.

To avoid similar fines in the future, companies should review how they obtain consent to collect users’ personal information. Each processing purpose requires its own consent: users must be able to accept or decline each way their information will be used.
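As a rough sketch of what purpose-level consent might look like in practice, consider the record below. This is illustrative only; the purpose names and structure are assumptions made for the example, not drawn from GDPR itself or from any real consent-management product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical purposes; a real list would come from your own processing activities.
PURPOSES = ("ads_personalization", "analytics", "product_emails")

@dataclass
class ConsentRecord:
    """One explicit yes/no decision per processing purpose, per user."""
    user_id: str
    # No purpose is pre-ticked: a missing decision means no consent.
    decisions: dict = field(default_factory=dict)

    def grant(self, purpose: str) -> None:
        self.decisions[purpose] = ("granted", datetime.now(timezone.utc))

    def withdraw(self, purpose: str) -> None:
        self.decisions[purpose] = ("withdrawn", datetime.now(timezone.utc))

    def allows(self, purpose: str) -> bool:
        status, _ = self.decisions.get(purpose, ("none", None))
        return status == "granted"

record = ConsentRecord(user_id="u-123")
record.grant("analytics")
assert record.allows("analytics")
assert not record.allows("ads_personalization")  # consent to one purpose never implies another
```

The key property is that consenting to one purpose never implies consenting to another, and the default for every purpose is no consent.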

This level of transparency is essential, and achieving it means moving away from previously accepted business practices.

If you have any questions or comments pertaining to GDPR or this article, feel free to contact us at info@centry.global. Be sure to follow us on Twitter @CentryGlobal and subscribe for more content like this!

This article was written by Kristina Weber, Content Manager of Centry Global.

Business, Compliance, Data Breach, Fraud, Information Security, Risk Management, Security

Centry Quick Check Program for Corporate Due Diligence

New technology has revolutionized corporate investigations and changed the way we go about them. There’s greater efficiency, new insights, and broader reach. However, the downside is that this technology can lull both investigators and clients into a false sense of security.

Computers can provide us with information, but people are still better at evaluating data within context, such as identifying how useful the information is and what it is relevant to. In short, technology can’t yet replicate human analysis – and yet we continue to see a growing dependence upon it for exactly that.

The Value of Professional Investigators in Corporate Due Diligence

In countries with robust public records, this dependence on automated scanning and investigative tech is particularly evident. Investors and corporations still recognize the value of actual investigators in challenging regions across the globe where public records may be less accessible or accurate, but when it comes to due diligence inside the US and Canada, for example, companies are increasingly drawn by the promise of these low-level automated scans.

However, it’s important to consider that these surface-level scans cannot provide a broad understanding of an investigated subject. Software-driven data harvests, conducted without the analytical power of the human mind, can expose businesses to risks they may be unaware of, including reputational risk, fraud, money laundering, and more.

Most of these automated scans lack media coverage of the target, whether on social platforms or in journalistic content. This surface-level research cannot hope to provide a clear and accurate picture of a subject, and it certainly would not satisfy judicial officials if something were to go wrong.

For example, single-location local records checks cannot account for whether a person has moved cities. Nor would they pick up whether the subject has faced allegations of criminal activity, something a media assessment can identify. Furthermore, media research can illustrate any extreme political views or associations that an investor or company might not want to be linked with.

The experience of a professional who has done hundreds, if not thousands, of due diligence investigations is highly valuable. They are more likely to be able to provide context around findings that may initially seem adverse, such as whether a particular practice is typical for a given industry, and they might pick up contextual clues that uncover a previously overlooked detail.

Companies seeking to save money by purchasing an automated scan with no human involvement could unknowingly be setting themselves up for significant risk in the future.

Our Answer: Investigator Driven Quick Checks for Individuals and Companies

Propelled by increased regulatory concerns among corporate entities and a market made more competitive by offers of automated checks, Centry Global has formulated an answer to the question of how to marry meaningful analysis with efficiency in due diligence investigations: our Quick Check (QC) program.

What to Expect from a Centry QC

The QC program combines an identity review, sanctions screening, compliance check, and media research into a single, well-organized background check package on either individuals or companies with a turnaround time of 5-7 business days.

Quick Check of a Company

  • Identity Review
    • Key financial figures
    • Risk Level
    • Beneficial Owners and Senior Management
  • Compliance Review
    • Sanctions and Watchlists Screening
  • Social/Adverse Media Review
  • Analysis and/or Recommendations

Quick Check of an Individual

  • Identity Review
    • Shareholdings and Directorships
  • Compliance Review
    • Sanctions and Watchlists Screening
    • Politically Exposed Persons Screening
    • Litigations Check
  • Social/Adverse Media Review
  • Analysis and/or Recommendations

For more information on these Quick Checks, please feel free to contact us at info@centry.global or on our LinkedIn, Facebook and Twitter pages!

Information Security, Risk Management, Security, Social Media

Tactical Catfishing

Most of us think of ‘catfishing’ in the context of someone using a fake profile, usually on some dating app, to trick unsuspecting people. Maybe they do it for manipulation and blackmailing purposes, or to scam people out of money.

Now, however, a social engineering drill conducted by the NATO Strategic Communications Centre of Excellence (NATO StratCom COE) has shown that these catfishing tactics can be used on soldiers to glean sensitive information about battalion locations, troop movements, and personal details.

The operation used the catfishing technique to set up fake social media pages and accounts on Facebook and Instagram with the intent of fooling military personnel. This clandestine operation, designed to take place over the course of a month, was arranged by a “red team” based out of NATO’s StratCom Centre of Excellence in Latvia.

The falsified Facebook pages were designed to look like pages that service members use to connect with each other: one seemed to be geared toward a large-scale military exercise in Europe, and a number of the group members were accounts posing as real service members.

In truth, however, these were fake accounts created by StratCom researchers to test how deeply they could influence the soldiers’ real-world actions through social engineering. Using Facebook advertising to recruit members to these pages, the research group was able to penetrate the ranks of NATO soldiers, using fake profiles to befriend and manipulate them into providing sensitive information about military operations and their personal lives.

The point of the exercise was to answer three questions:

  1. What kind of information can be found out about a military exercise just from open source data?
  2. What can be found out about the soldiers just from open source data?
  3. Can any of this data be used to influence the soldiers against their given orders?

Open source data refers to any information that can be found through public avenues such as social media platforms, dating profiles, public government data, and more.

The researchers found that you can, indeed, learn a great deal from open source data, and yes, that information can be used to influence members of the armed forces. The experiment underscores just how much personal information is up for grabs online, especially as our lives are increasingly shaped by our digital footprints.

Perhaps even more troubling is that even those best positioned to resist such tactics still fell for them, illustrating how easily the average person, with no training in digital privacy, could be deceived.

Many of the details about how exactly the operation was conducted remain classified, such as precisely where it took place and who was impacted. The research group that ran the drill did so with the approval of the military, but obviously service members were not aware of what was happening.

The researchers obtained a wide range of information from the soldiers, including the locations of battalions, troop movements, photographs of equipment, personal contact information, and even sensitive details about their personal lives that could be used for blackmail, such as the presence of married individuals on dating sites.

Instagram in particular was found to be useful for identifying personal information related to the soldiers, while Facebook’s suggested friends feature was key in recruiting members to the fake pages.

Representatives of the NATO StratCom COE stated that the decision to launch the exercise was made in the wake of the Cambridge Analytica scandal and Mark Zuckerberg’s appearance before U.S. Congress last year.

A quote from the report says:

“Overall, we identified a significant number of people taking part in the exercise and managed to identify all members of certain units, pinpoint the exact locations of several battalions, gain knowledge of troop movements to and from exercises, and discover the dates of active phases of the exercises.

“The level of personal information we found was very detailed and enabled us to instill undesirable behaviour during the exercise.”

Military personnel are often the target of scams like catfishing. Recently, a massive blackmailing scheme that affected more than 440 service members was uncovered in South Carolina, where a group of inmates had allegedly used fake personas on online dating services to manipulate the service members. This just goes to show that it’s not just finances at risk through catfishing, but security overall.

Facebook has taken a decidedly firm stance against the proliferation of fake pages and accounts designed to manipulate the public. The company prohibits what it calls “coordinated inauthentic behavior”, and has bolstered its safety and security team over the past year in an effort to combat phishing and other types of social scams.

But the success of StratCom’s endeavor suggests that Facebook’s crackdown is not yet complete. Of the fake pages created, one was shut down within hours, while the others took weeks to be addressed after being reported. Some of the fake profiles still remain.

One thing to keep in mind is just how small-scale this experiment was in relation to the massive yield of information. Three fake pages and five profiles were all it took to identify more than 150 soldiers and obtain all of that sensitive information. This is tiny in comparison to the coordinated efforts of bad actors that utilize hundreds of accounts, profiles, and pages. One can imagine just how much data could be obtained through those schemes.

As a result of the study, the researchers suggested some changes Facebook could make to help prevent malign operations of a similar nature. For example, if the company established tighter controls over the Suggested Friends tool, it would not be quite as easy to identify members of a given group.

Digital privacy is especially important: the picture we present of ourselves across different social media platforms can help others build a clear idea of who we are, which can in turn be used against us through manipulation and social engineering.

The use of social media to gather mission-sensitive information is going to be a significant challenge for the foreseeable future. The researchers suggest that we ought to put more pressure on social media companies to address vulnerabilities like these, which could be exploited against national security broadly or against individuals directly.

Centry Global offers a service for identity verification of online profiles. If you suspect you may be at risk of being manipulated, contact us at www.datecheckonline.com!

This article was written by Kristina Weber, Content Manager of Centry Global. For more content like this, be sure to follow us on Twitter @CentryGlobal and subscribe to Centry Blog for bi-weekly updates.

Business, Compliance, Cyber Security, Data Breach, Information Security, Security

The Future of AI, Security, & Privacy

Artificial Intelligence is a subject that is not just for researchers and engineers; it is something everyone should be concerned with.

Martin Ford, author of Architects of Intelligence, describes his findings on the future of AI in an interview with Forbes.

The main takeaway from Ford’s research, which included interviews with more than twenty experts in the field, is that everyone agrees that the future of AI is going to be disruptive. Not everyone agrees on whether this will be a positive or negative disruption, but the technology will have a massive impact on society nonetheless.

Most of the experts concluded that the most real and immediate threats are going to be to cyber security, privacy, political systems, and the possibility of weaponizing AI.

AI is a very useful tool for gathering information, owing to its speed, the scale of data it can process, and, of course, its automation. It is the most efficient way to process a large volume of information in a short time frame, as it can work faster than human analysts. That said, it comes with drawbacks: we have started to see that its algorithms are not immune to gender and race bias in areas such as hiring and facial recognition software. Ford suggests that regulation is necessary in the immediate future, which will require a continuing conversation about AI in the political sphere.

AI-based consumer products are vulnerable to data exploitation, and that risk has only risen as we have become more dependent on digital technology in our day-to-day lives. AI can be used to identify and monitor user habits across multiple devices, even if your personal data is anonymized when it becomes part of a larger data set. Anonymized data can be sold to anyone for any purpose; the idea is that since the data has been scrubbed, it cannot be used to identify individuals and is therefore safe to use for analysis or sale.

However, between open source information and increasingly powerful computing, it is now possible to re-identify anonymized data. The reality is that you don’t need that much information about a person to be able to identify them. For example, much of the population of the United States can be identified by the combination of their date of birth, gender, and zip code alone.
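As a toy illustration of how such a re-identification works, the sketch below joins a hypothetical “scrubbed” data set to a public record on exactly those three quasi-identifiers. All records and field names are fabricated for the example; real linkage attacks work on far larger data sets but follow the same logic.

```python
# Toy re-identification: link "anonymized" rows to named individuals by joining
# on quasi-identifiers (date of birth, gender, ZIP code). Data is fabricated.

anonymized_rows = [
    {"dob": "1987-04-12", "gender": "F", "zip": "02139", "diagnosis": "asthma"},
    {"dob": "1990-11-03", "gender": "M", "zip": "60614", "diagnosis": "diabetes"},
]

public_records = [
    {"name": "Jane Doe", "dob": "1987-04-12", "gender": "F", "zip": "02139"},
    {"name": "John Roe", "dob": "1990-11-03", "gender": "M", "zip": "60614"},
]

QUASI_IDENTIFIERS = ("dob", "gender", "zip")

def reidentify(anon_rows, public_rows):
    """Yield (name, diagnosis) pairs where the quasi-identifier key is unique."""
    index = {}
    for person in public_rows:
        key = tuple(person[k] for k in QUASI_IDENTIFIERS)
        index.setdefault(key, []).append(person["name"])
    for row in anon_rows:
        matches = index.get(tuple(row[k] for k in QUASI_IDENTIFIERS), [])
        if len(matches) == 1:  # a unique match re-identifies the row
            yield matches[0], row["diagnosis"]

for name, diagnosis in reidentify(anonymized_rows, public_records):
    print(f"{name} -> {diagnosis}")
```

The scrubbed data set never contained a name, yet every row that maps to a unique (dob, gender, zip) combination is re-identified the moment a second, named data set becomes available.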

With consent-based regulations such as GDPR enshrining the right to digital privacy, it is clear that people want to know how their information is used, why, and how it can affect their lives, and that they want control over how it is used.

This article was written by Kristina Weber, Content Supervisor of Centry Ltd. For more content like this, be sure to subscribe to our blog, which updates every other Friday with articles related to the security industry!