Google fined €50 million for data privacy violations; what can we learn from it?

On January 21, 2019, Google was fined €50 million by France's data protection authority, the CNIL, for violating the European Union's General Data Protection Regulation (GDPR).

GDPR, which went into effect on May 25, 2018, was designed to give EU citizens greater control over their personal data, including what they choose to share and how organizations retain it. It requires organizations that collect the data of EU citizens to obtain "clear and affirmative consent" for data collection, write privacy policies in clear and understandable language, inform individuals when their data is compromised, allow them to transfer their data to other organizations, and honor the "right to be forgotten", the ability to request that their data be deleted.

The fifty-million-euro fine facing Google was the moment the data privacy industry had been waiting for, as GDPR had long promised steep costs for those found in violation of its data privacy rules.

CNIL reported that Google failed to fully disclose to users how their personal information is collected and what happens to it, in addition to not properly obtaining user consent for displaying personalized ads.

Although Google had made changes to comply with GDPR, the CNIL said in a statement that "the infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services, and almost unlimited possible combinations." The authority added that the violations were continuous breaches of the regulation, "not a one-off, time-limited infringement."

CNIL began investigating Google on the day GDPR took effect, in response to complaints from two privacy activist groups, None of Your Business and La Quadrature du Net. These groups claimed that Google did not have a valid legal basis under GDPR to process personal data for personalized and targeted ads.

These groups have also filed privacy complaints against Facebook and its subsidiaries, including Instagram and WhatsApp.

In the course of its investigation, CNIL found that when users created Google accounts on Android smartphones, the company's practices violated GDPR in two ways: a lack of transparency and the absence of a valid legal basis for ads personalization. CNIL additionally found that the notices Google provided to users about what type of information it sought were not easily accessible.

A key principle of GDPR is that users must be able to easily locate and fully understand the extent of the data processing operations carried out by organizations. Here Google was found wanting: CNIL described its terms as "too generic and vague in manner." Overall, CNIL concluded that "the information communicated is not clear enough" for the average user to understand that the legal basis of processing operations for ads personalization is consent. In other words, if you do not consent to having your information processed for personalized ads, the company legally cannot do it. Per GDPR, consent for data processing must be "unambiguous" with "clear affirmative action from the user."

Google responded to the fine with a statement affirming its commitment to meeting transparency expectations and consent requirements of GDPR, and that it is “studying the decision to determine our next steps.”

Despite the fact that companies were given a two-year window to comply with the regulation, many were not compliant by the time it took effect on May 25, 2018. Others made only limited efforts, choosing to wait until the first major fine was issued to see how serious enforcement would be. Companies hoping for a pass from another national data protection authority should expect that authority's decisions to be critically assessed against CNIL's approach.

Consent seems to be the greatest obstacle for companies struggling to meet GDPR's requirements, especially where transparency and accessibility are concerned. Under GDPR, companies cannot hand out a single consent form covering a bundle of data uses. That has long been standard industry practice, and it is part of the reason Google was found to be in violation of GDPR.

To avoid similar fines in the future, companies should review how they obtain consent to collect users' personal information. Each use of the data requires its own consent: users must be able to agree or disagree to each purpose for which their information will be processed, as sketched below.
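As a rough illustration of what purpose-by-purpose consent can look like in practice, here is a minimal Python sketch. The purpose names, class, and fields are hypothetical and not taken from GDPR or any vendor's API; the point is simply that consent is recorded per purpose, and the absence of an explicit "yes" is treated as "no".

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical processing purposes an organization might ask about.
PURPOSES = ("personalized_ads", "analytics", "email_marketing")

@dataclass
class ConsentRecord:
    """One explicit yes/no decision per processing purpose, with a timestamp for audit."""
    user_id: str
    decisions: dict = field(default_factory=dict)  # purpose -> bool
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def grant(self, purpose: str) -> None:
        self.decisions[purpose] = True

    def refuse(self, purpose: str) -> None:
        self.decisions[purpose] = False

    def may_process(self, purpose: str) -> bool:
        # Unambiguous consent: no recorded "yes" means no processing.
        return self.decisions.get(purpose, False)

# Usage: the user agreed to analytics but not to personalized ads.
consent = ConsentRecord(user_id="example-user")
consent.grant("analytics")
consent.refuse("personalized_ads")
assert consent.may_process("analytics") is True
assert consent.may_process("personalized_ads") is False
assert consent.may_process("email_marketing") is False  # never asked, so no consent
```

The design choice here mirrors the regulation's intent: a bundled "accept all" flag cannot be expressed in this structure, because every purpose must be answered separately.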

This level of transparency is essential, and it requires a change from previously accepted business practices.

If you have any questions or comments pertaining to GDPR or this article, feel free to contact us at info@centry.global. Be sure to follow us on Twitter @CentryGlobal and subscribe for more content like this!

This article was written by Kristina Weber, Content Manager of Centry Global.

Centry Quick Check Program for Corporate Due Diligence

New technology has revolutionized corporate investigations and changed the way we go about them. There’s greater efficiency, new insights, and broader reach. However, the downside is that this technology can lull both investigators and clients into a false sense of security.

Computers can provide us with information, but people are still better at evaluating data within context, such as identifying how useful the information is and what it is relevant to. In short, technology can’t yet replicate human analysis – and yet we continue to see a growing dependence upon it for exactly that.

The Value of Professional Investigators in Corporate Due Diligence

In countries with robust public records, this dependence on automated scanning and investigative technology is particularly evident. Investors and corporations still recognize the value of actual investigators in challenging regions where public records may be inaccessible or inaccurate, but when it comes to due diligence inside the US and Canada, for example, companies are increasingly drawn by the promise of these low-level automated scans.

However, it is important to consider that these surface-level scans cannot provide a full understanding of an investigated subject. Software-driven data harvests conducted without the analytical power of the human mind can expose businesses to risks they may be unaware of, including reputational damage, fraud, money laundering, and more.

Most of these automated scans lack media coverage of the target, whether on social platforms or in journalistic content. Such surface-level research cannot hope to provide a clear and accurate picture of a subject, and it certainly would not satisfy judicial officials if something were to go wrong.

For example, a single-location local records check cannot account for whether a person has moved cities. It would also not pick up information about whether the subject has faced allegations of criminal activity, which can be identified through a media assessment. Furthermore, media research can reveal extreme political views or other topics that an investor or company might not want to be associated with.

The experience of a professional who has conducted hundreds, if not thousands, of due diligence investigations is highly valuable. Such an investigator is more likely to provide context around findings that may initially seem adverse, such as whether a particular practice is typical for a given industry, and may pick up contextual clues that uncover a previously overlooked detail.

Companies seeking to save money by purchasing an automated scan with no human analysis could unknowingly be setting themselves up for major risk in the future.

Our Answer: Investigator Driven Quick Checks for Individuals and Companies

Propelled by growing regulatory concerns among corporate entities and an increasingly competitive market of automated checks, Centry Global has formulated an answer to the question of how to marry meaningful analysis with efficiency in due diligence investigations: our Quick Check (QC) program.

What to Expect from a Centry QC

The QC program combines an identity review, sanctions screening, compliance check, and media research into a single, well-organized background check package on either individuals or companies with a turnaround time of 5-7 business days.

Quick Check of a Company

  • Identity Review
    • Key financial figures
    • Risk Level
    • Beneficial Owners and Senior Management
  • Compliance Review
    • Sanctions and Watchlists Screening
  • Social/Adverse Media Review
  • Analysis and/or Recommendations

Quick Check of an Individual

  • Identity Review
    • Shareholdings and Directorships
  • Compliance Review
    • Sanctions and Watchlists Screening
    • Politically Exposed Persons Screening
    • Litigations Check
  • Social/Adverse Media Review
  • Analysis and/or Recommendations

For more information on these Quick Checks, please feel free to contact us at info@centry.global or on our LinkedIn, Facebook and Twitter pages!

Tactical Catfishing

Most of us think of ‘catfishing’ in the context of someone using a fake profile, usually on some dating app, to trick unsuspecting people. Maybe they do it for manipulation and blackmailing purposes, or to scam people out of money.

Now, however, a social engineering drill conducted by the NATO Strategic Communications Centre of Excellence (NATO StratCom COE) has shown us that these catfishing tactics can be used on soldiers to glean sensitive information about things like battalion locations, troop movements, and personal details.

The operation used the catfishing technique to set up fake social media pages and accounts on Facebook and Instagram with the intent of fooling military personnel. This clandestine operation, designed to take place over the course of a month, was arranged by a “red team” based out of NATO’s StratCom Center of Excellence in Latvia.

The falsified Facebook pages were designed to look like pages that service members use to connect with each other – one seemed to be geared toward a large scale military exercise in Europe and a number of the group members were accounts that appeared to be real service members.

In truth, these were fake accounts created by StratCom researchers to test how deeply they could influence the soldiers' real-world actions through social engineering. Using Facebook advertising to recruit members to these pages, the research group was able to infiltrate the ranks of NATO soldiers, using fake profiles to befriend and manipulate them into providing sensitive information about military operations and their personal lives.

The point of the exercise was to answer three questions:

  1. What kind of information can be found out about a military exercise just from open source data?
  2. What can be found out about the soldiers just from open source data?
  3. Can any of this data be used to influence the soldiers against their given orders?

Open source data refers to any information that can be found through public sources such as social media platforms, dating profiles, public government data, and more.

The researchers found that you can, indeed, find out a lot of information from open source data – and yes, the information can be used to influence members of the armed forces. The experiment emphasizes just how much personal information is ‘open season’ online, especially as our lives are increasingly impacted by our digital footprints.

Perhaps even more troubling is that even those best positioned to resist such tactics still fell for them, which illustrates just how easily the average person with no digital privacy experience could be deceived.

Many of the details about how exactly the operation was conducted remain classified, such as precisely where it took place and who was impacted. The research group that ran the drill did so with the approval of the military, but obviously service members were not aware of what was happening.

The researchers obtained a wide range of information from the soldiers, including the locations of battalions, troop movements, photographs of equipment, personal contact information, and even sensitive details about their personal lives that could be used for blackmail, such as the presence of married individuals on dating sites.

Instagram in particular was found to be useful for identifying personal information related to the soldiers, while Facebook’s suggested friends feature was key in recruiting members to the fake pages.

Representatives of the NATO StratCom COE stated that the decision to launch the exercise was made in the wake of the Cambridge Analytica scandal and Mark Zuckerberg’s appearance before U.S. Congress last year.

A quote from the report says:

“Overall, we identified a significant number of people taking part in the exercise and managed to identify all members of certain units, pinpoint the exact locations of several battalions, gain knowledge of troop movements to and from exercises, and discover the dates of active phases of the exercises.

“The level of personal information we found was very detailed and enabled us to instill undesirable behaviour during the exercise.”

Military personnel are often the target of scams like catfishing. Recently, a massive blackmailing scheme that affected more than 440 service members was uncovered in South Carolina, where a group of inmates had allegedly used fake personas on online dating services to manipulate the service members. This just goes to show that it’s not just finances at risk through catfishing, but security overall.

Facebook has taken a decidedly firm stance against the proliferation of fake pages and accounts designed to manipulate the public. The company prohibits what it calls “coordinated inauthentic behavior”, and has bolstered its safety and security team over the past year in an effort to combat phishing and other types of social scams.

But the success of StratCom's exercise suggests that Facebook's efforts to crack down on this behavior are not yet fully effective. Of the fake pages created, one was shut down within hours, while the others took weeks to be addressed after being reported. Some of the fake profiles still remain.

One thing to keep in mind is just how small-scale this experiment was in relation to the massive yield of information. Three fake pages and five profiles were all it took to identify more than 150 soldiers and obtain all of that sensitive information. This is tiny in comparison to the coordinated efforts of bad actors that utilize hundreds of accounts, profiles, and pages. One can imagine just how much data could be obtained through those schemes.

As a result of the study, the researchers suggested some changes Facebook could make to help prevent malign operations of a similar nature. For example, if the company established tighter controls over the Suggested Friends tool, it would not be quite as easy to identify members of a given group.

Digital privacy is especially important: the picture we present of ourselves across different social media platforms can help others build a clear idea of who we are, which can then be used against us through manipulation and social engineering.

The use of social media to gather mission sensitive information is going to be a significant challenge for the foreseeable future. The researchers suggest that we ought to put more pressure on social media to address vulnerabilities like these that could be used in broad strokes against national security or individuals directly.

Centry Global has a service for identity verification of online profiles. If you suspect you may be at risk for being manipulated, contact us at www.datecheckonline.com!

This article was written by Kristina Weber, Content Manager of Centry Global. For more content like this, be sure to follow us on Twitter @CentryGlobal and subscribe to Centry Blog for bi-weekly updates.

The Future of AI, Security, & Privacy

Artificial Intelligence is a subject that is not just for researchers and engineers; it is something everyone should be concerned with.

Martin Ford, author of Architects of Intelligence, describes his findings on the future of AI in an interview with Forbes.

The main takeaway from Ford’s research, which included interviews with more than twenty experts in the field, is that everyone agrees that the future of AI is going to be disruptive. Not everyone agrees on whether this will be a positive or negative disruption, but the technology will have a massive impact on society nonetheless.

Most of the experts concluded that the most real and immediate threats are going to be to cyber security, privacy, political systems, and the possibility of weaponizing AI.

AI is a very useful tool for gathering information, owing to its speed, the scale of data it can process, and, of course, its automation. It is the most efficient way to process a large volume of information in a short time frame, as it can work faster than human analysts. That said, it comes with drawbacks: we have started to see that its algorithms are not immune to gender and race bias in areas such as hiring and facial recognition software. Ford suggests that regulation is necessary in the immediate future, which will require a continuing conversation about AI in the political sphere.

AI-based consumer products are vulnerable to data exploitation, and that risk has only risen as we have become more dependent on digital technology in our day-to-day lives. AI can be used to identify and monitor user habits across multiple devices, even if your personal data is anonymized when it becomes part of a larger data set. Anonymized data can be sold to anyone for any purpose; the idea is that since the data has been scrubbed, it cannot be used to identify individuals and is therefore safe to use for analysis or sale.

However, between open source information and increasingly powerful computing, it is now possible to re-identify anonymized data. The reality is that you do not need much information about a person to identify them: for example, much of the population of the United States can be identified by the combination of their date of birth, gender, and ZIP code alone, as the sketch below illustrates.
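To make the point concrete, here is a minimal Python sketch of how such re-identification can work in principle. All records, names, and the "voter roll" source are invented for illustration; the technique is simply a join between a scrubbed data set and a public one on the quasi-identifiers mentioned above.

```python
# An "anonymized" record set (names removed) joined against a public record
# set on date of birth, gender, and ZIP code. All data below is fictional.

anonymized_health_records = [
    {"dob": "1984-03-14", "gender": "F", "zip": "02139", "diagnosis": "asthma"},
    {"dob": "1990-07-02", "gender": "M", "zip": "73301", "diagnosis": "diabetes"},
]

public_voter_rolls = [
    {"name": "Jane Example", "dob": "1984-03-14", "gender": "F", "zip": "02139"},
    {"name": "John Sample",  "dob": "1975-11-30", "gender": "M", "zip": "10001"},
]

QUASI_IDENTIFIERS = ("dob", "gender", "zip")

def key(record):
    """The combination of quasi-identifiers acts as a de facto identity."""
    return tuple(record[field] for field in QUASI_IDENTIFIERS)

# Index the public data by quasi-identifier combination.
voters_by_key = {key(v): v["name"] for v in public_voter_rolls}

# Any anonymized record whose combination matches is re-identified.
for record in anonymized_health_records:
    match = voters_by_key.get(key(record))
    if match:
        print(f"Re-identified {match}: {record['diagnosis']}")
# Prints: Re-identified Jane Example: asthma
```

The scrubbed data set never contained a name, yet one line of matching recovers it, which is why removing direct identifiers alone is not a guarantee of anonymity.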

With consent-based regulations such as GDPR addressing the right to digital privacy, it is clear that people want to know how their information is used, why, and how it can affect their lives. It is also clear that they want control over how that information is used.

This article was written by Kristina Weber, Content Supervisor of Centry Ltd. For more content like this, be sure to subscribe to our blog, which updates every other Friday with articles related to the security industry!

Security Predictions for 2019

The predictions for 2018 that we shared last year landed near the mark on data protection and cyber security, while straying on others – most notably cryptocurrencies. Bitcoin was a hot topic in 2017, surging to values that had people everywhere kicking themselves for not investing sooner. What followed was an epidemic of articles predicting global acceptance of cryptocurrencies. That balloon popped when the cryptocurrency market crashed in early 2018, and many have quietly walked back their cryptocurrency hype since.

Continuing the tradition, here are a few insights into the forecast for 2019:

Supply chain attacks. While these threats can occur in every sector of the economy that depends on supply chains, the industries that most commonly experience them include pharmaceuticals, biotechnology, hospitality, entertainment, and media. Manufacturing operations are attractive targets, due in part to their broad potential attack surface. With increasing reliance on the supply chain, there is a wealth of information that could be obtained if organizations have not taken appropriate steps to secure themselves. For more information on cyber security in the supply chain, read our article here.

Further development of consumer privacy laws. Last year saw the launch of the European Union's GDPR, which marked the first big regulatory move toward protecting consumer information. Soon after, California passed its own version, the California Consumer Privacy Act of 2018, which is slated to go into effect at the end of 2019. A draft of a federal privacy bill for the United States may arrive in early 2019, following concerns over a number of privacy breaches.

Continuing adoption of artificial intelligence across wider society. From Alexa to politics, AI will continue to spread across industries and uses. Chinese companies have announced intentions to develop AI processing chips to avoid reliance on US-manufactured Intel and Nvidia hardware. There is rising concern that AI technology could increasingly be used by authoritarian regimes to restrict personal freedoms. As AI continues to spread its proverbial wings, we could see a move toward "transparent AI": an effort to gain consumer trust by being clear about how AI uses human data and why. There is, of course, the worry that the rise of AI will create a jobless future; however, Gartner suggests the opposite, predicting that artificial intelligence will create more jobs than it eliminates.

Big data breaches will push companies to tighten login security. We might see a concerted effort by the security industry to replace usernames and passwords altogether, pushing toward an alternative solution as an industry standard. Biometrics, such as facial recognition or fingerprint logins, are certainly on the rise.

Digital skimming will become more prevalent. The trick of card skimming has moved to the digital world, where attackers go after websites that process payments. The growth of online shopping has made checkout pages attractive targets; British Airways and Ticketmaster were two high-profile cases. The British Airways case was particularly alarming, as airlines have access to a wide breadth of information ranging from birthdates and passport details to payment information and more. Although the airline was able to confirm that no travel data was stolen in the attack, it nonetheless remains a cautionary tale.

This article was written by Kristina Weber. For more content like this, be sure to subscribe to Centry Blog for bi-weekly articles related to the security industry. Follow us on Twitter @CentryLTD and @CentryCyber!

2018 Year in Review

As 2018 comes to a close, we reflect on those moments throughout the year that defined the times yet to come. For Centry, 2018 was a year that brought us great joys like the opening of our new branch in Mexico City and establishment of the ASIS Ukraine chapter, but also times of mourning after our colleague, Mr. Rachid Boukhari, passed away in June. Above all, it has been a journey, and one we are grateful to undertake for the mark we make on this world.

From our Centry family to yours, we wish our readers love and joy over the holidays, and a happy new year!

In keeping with the tradition of our year’s end articles on Centry Blog, we put together a list of some of our most-read stories from 2018 below.

January

Centry’s GDPR Guide

Our GDPR guide breaks down exactly what the EU's General Data Protection Regulation is all about. This article was highlighted on TWiT live in an interview with our CTO Dave Ehman!

February

The Next Gold Rush: Renewable Energy

The Renewable Energy industry just might be the next gold rush for businesses and investors alike. This time, we aren’t hiking into the Klondike for gold; individuals and organizations alike are turning their eyes toward the broader world, looking out for opportunities to make good on this booming initiative.

March

Hidden Sanctions Risk: North Korean ties to Africa

The connection between Namibia and North Korea stands as but one example among many similar stories. It began in the 1960s, when several African countries started the struggle for independence from colonialism. During this vulnerable time period, North Korea invested time and money in these revolutions, where the political ties eventually grew into commercial relationships.

April

Human Trafficking in the European Union

Over the course of the past two decades, the European Union has been making an increased effort to understand and address the heinous crime of human trafficking. The most recent publication of statistics from Eurostat concerning registered victims and suspected traffickers revealed that a number of non-EU nationals are trafficked into member states, primarily from Nigeria.

This week's article on Centry Blog examines just one facet of this deep and complex issue by analyzing Nigerian campus cults, the international response, and global business responses.

May

Fake Social Media Profiles and What To Do If You Are Being Impersonated Online

False accounts are prevalent across social media, mainly used for phishing purposes. Whether it’s a bot or malicious actor threatening your account, we put together an instructional guide for those moments that you notice you have a seemingly second profile, not of your own making.

June

Supply Chain Security Introductory Guide

Having a secure logistics supply chain can save your company millions in terms of assets and reputation, and here at Centry, we have the know-how to help you. This article serves as an introductory guide to security in the supply chain.

July

Typosquatters

Sometimes fat-finger errors can lead to more than just an autocorrect goof. Some scammers have figured out how to lay traps surrounding these common mistakes.

August

Common Security Dos and Don’ts

Our article on Common Security Dos and Don’ts covers what you and your business can do to prevent costly breaches of data and trust.

September

Golden Visa for sale! Now on special offer for the 1%

In some countries, you can buy your way to citizenship. European passports and Schengen visas are the most desired traveling documents in the world. Not only do they grant the most traveling freedom, they give access to a safe and stable living environment, with free speech, in a market that can fulfill all your needs. Many EU countries have taken advantage of this by offering entry in exchange for investment. This kind of activity is commonly referred to as a Golden Visa Program.

October

5 Basic Digital Privacy Tips for the Average Person

Digital privacy is for everyone. But it is also a massive topic that can be very easy to get lost in, especially if you're new to it. However, you don't need to be a security expert, nor do you need any particular reason, to want to bolster your privacy on the internet.

November

What is Social Engineering?

Social engineering is a growing threat to individuals and businesses alike. In this article, we look into what social engineering is, the ways it can manifest, and what you can do to protect yourself.

December

Cyber Security in the Supply Chain

Your company might have a rigorous cyber security policy and thorough training for all its personnel. But what happens when the security vulnerability comes from a trusted source in the supply chain?

Security professionals must now consider not only the possible vulnerabilities of their own network, but also those of their suppliers' networks, their suppliers' suppliers' networks, and so on.

We hope you have enjoyed Centry Blog this year. For more content like this, be sure to subscribe and follow us on Twitter @CentryLTD! We will see you in 2019!

What is Social Engineering?

One of the most common methods of fraud is social engineering. This refers to a calculated deception that targets people in order to obtain sensitive information related to their business, identity, or finances.

There are two main categories of social engineering: (a) Mass Fraud, which is mostly comprised of basic techniques meant to scam a high quantity of people; and (b) Targeted Fraud, which is a highly-specialized method of fraud that singles out a specific individual or company.

The majority of these schemes follow the same general path. They usually begin with gathering information on a topic or target. Once enough information has been obtained, scammers focus on developing a false sense of security and trust with their target. In mass fraud, this could mean replicating the design of a Netflix customer service email; in targeted fraud, it could mean building enough friendly rapport with an individual over the phone that they feel comfortable providing more and more information. Once this trust has been established, scammers can exploit the identified vulnerabilities and ultimately execute the scam.

Social engineering works because it preys on our instinct to trust.

Let’s say you are at work and receive a call or email from a “colleague” asking for some sort of account number or other piece of information related to the business. If you haven’t had any training on your company’s confidentiality policy, you might not think twice about providing this person the information they ask for. After all, they might seem trustworthy, or talk about things in a way that would give you no reason to suspect they aren’t a fellow coworker. That’s because they have meticulously studied how to prop up the illusion.

These types of attacks are common; all you need to do is look at the news to find examples. Just recently it was found that hackers connected to the Russian government were impersonating US State Department employees and sending emails with downloadable attachments. These attachments would then install software that could provide the hackers access to internal systems.

These fraud attempts aren’t just work-related. They can target you at home, too.

The Internal Revenue Service (IRS) of the United States recently issued a warning about a new tax-related scam. A surge of emails has been impersonating the IRS and using "tax transcripts" as bait to trick users into opening documents that contain malware. The malware behind this scam, Emotet, has historically been associated with posing as financial institutions to encourage people to download malicious attachments. The IRS recommends that anyone who receives one of these emails delete it or forward it to phishing@irs.gov.

So how can you protect yourself?

Individuals can take the time to be vigilant about unfamiliar calls and emails. Social engineering won't always be a single attempt; it could be repeated calls over years that slowly harvest the information needed to execute a scam. When in doubt, double-check with the source and avoid providing personal information. Meanwhile, companies can develop a guide for handling sensitive information to avoid blunders with impostors posing as employees. With sufficient training, employees can be taught to recognize different types of fraud and follow an established plan for handling it should they come across it.

This article was written by Kristina Weber of Centry Global. For more content like this, subscribe to our blog and follow us on Twitter @CentryLTD!