Data Privacy Law Coming to California in 2020

Data privacy, specifically as it pertains to our lives online, has been a staple topic of discussion, particularly since the General Data Protection Regulation (GDPR) went into effect in the European Union in 2018. The United States is now poised to see a similar, albeit distinct, regulation in the state of California.

What is the California Consumer Privacy Act? 

The California Consumer Privacy Act (CCPA) was signed into law in June 2018 and takes effect on January 1, 2020.

The CCPA applies to all residents of California, providing them with the following:

  • The right to know what personal data is being collected;
  • The right to access their personal data;
  • The right to know whether this data is sold or disclosed, and to whom;
  • The right to opt out of the sale of personal data;
  • The right to equal service and prices, even if they exercise privacy rights.

Essentially, this bill grants the average person more control over the information that companies may collect on them, and it protects them from being denied service if they choose to exercise privacy rights. 

Important to note is the way the CCPA defines “personal information”, which is as follows:

“Information that identifies, relates to, describes, is capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household such as a real name, alias, postal address, unique personal identifier, online identifier, Internet Protocol address, email address, account name, social security number, driver’s license number, passport number, or other similar identifiers.”

Another significant detail to be aware of is that the CCPA does not consider Publicly Available Information to be personal information that is protected under the bill.

Who does the CCPA apply to?

  • Any for-profit business that collects consumers’ personal data, does business in California, and meets at least one of the following three thresholds:
    • Has annual gross revenues in excess of 25 million USD;
    • Possesses the personal information of 50,000 or more consumers, households, or devices; or
    • Earns more than half of its annual revenue from selling consumers’ personal information.
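As a rough sketch, the applicability test above can be expressed as a simple predicate. This is illustrative only: the function and parameter names are our own, and whether the CCPA actually applies to a given business is a legal question, not a programming one.

```python
# Hedged sketch of the CCPA applicability thresholds described above.
# The threshold values mirror the statute; everything else is illustrative.

def ccpa_applies(does_business_in_ca: bool,
                 annual_gross_revenue_usd: float,
                 consumer_records_held: int,
                 revenue_share_from_selling_data: float) -> bool:
    """True if a for-profit business meets at least one CCPA threshold."""
    if not does_business_in_ca:
        return False
    return (annual_gross_revenue_usd > 25_000_000       # > $25M gross revenue
            or consumer_records_held >= 50_000          # 50k+ consumers/households/devices
            or revenue_share_from_selling_data > 0.5)   # majority of revenue from data sales

# A small ad broker earning 60% of its revenue from selling data is covered:
print(ccpa_applies(True, 2_000_000, 10_000, 0.60))  # True
```

Note that the thresholds are disjunctive: a business well under the revenue bar can still be covered if it trades heavily in personal data.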

The words “digital privacy law” and “California” in the same sentence most likely conjure images of Silicon Valley companies. However, this regulation is unlikely to be world-changing for giants like Facebook and Google, in part because their existing GDPR compliance makes it easier to comply with regulations like the CCPA.

Some Similarities & Differences between the CCPA and GDPR

  • Similarities
    • Both regulations only protect natural persons (individuals) and not legal persons
    • Both require companies to demonstrate, after a data breach, that they took reasonable steps to protect the data
    • Both apply to organizations that may have no presence in the respective jurisdiction but offer goods or services in the region
  • Differences
    • CCPA
      • Requires companies to provide an opt-out to data sharing
      • Penalties are done on a per-violation fine basis
      • Does not cover certain categories of personal data (e.g. health information) because they are already covered under different US regulations, such as HIPAA.
      • Does not require organizations to hire data protection officers or conduct impact assessments
      • Protects a “consumer” who is “a natural person who is a California resident”
      • Obligations apply specifically to “businesses” that are for-profit, collect consumer personal information and meet certain qualifying thresholds as mentioned above
      • Also applies to any entity that controls or is controlled by the business in question. No obligations are directed specifically at “service providers”
    • GDPR
      • Requires companies to provide an opt-in to data sharing
      • Penalties focus on up to 4% of annual global revenue
      • Protects a “data subject” who is “an identified or identifiable natural person” – meaning that an individual does not specifically need EU residency or citizenship if the controller processing their data is located in the EU. If the controller is located outside the EU, the citizenship/residence condition applies
      • Obligations apply to “controllers”, which could be natural or legal persons in addition to business entities, whether the activity is for-profit or not
      • Obligations also apply to processors, which are entities that process personal data on behalf of controllers.
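The contrast between the two penalty models is easiest to see with numbers. Under the CCPA, civil penalties are capped per violation (up to $2,500, or $7,500 when intentional); under GDPR, the upper tier is the greater of €20 million or 4% of annual global turnover. A back-of-the-envelope sketch, where the function names and scenario figures are ours:

```python
# Back-of-the-envelope comparison of the two penalty models.
# Statutory caps: CCPA $2,500/violation ($7,500 if intentional);
# GDPR upper tier: greater of EUR 20M or 4% of annual global turnover.

def ccpa_max_fine_usd(violations: int, intentional: bool = False) -> int:
    per_violation = 7_500 if intentional else 2_500
    return violations * per_violation

def gdpr_max_fine_eur(annual_global_revenue_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_global_revenue_eur)

# 10,000 intentional CCPA violations vs. a EUR 1bn-revenue company under GDPR:
print(ccpa_max_fine_usd(10_000, intentional=True))  # 75000000
print(gdpr_max_fine_eur(1_000_000_000))             # 40000000.0
```

The practical consequence: CCPA exposure scales with the number of affected consumers, while GDPR exposure scales with the size of the company.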

If you have any questions or comments about whether CCPA will impact you or your business, you can always reach out to our team at Centry for help by emailing info@centry.global! 

Google fined €50 million for data privacy violations; what can we learn from it?

On January 21, 2019, Google was fined €50 million by France’s data protection authority (CNIL) for violating the European Union’s General Data Protection Regulation (GDPR).

GDPR, which went into effect on May 25, 2018, was designed to give EU citizens greater control over their personal data, including what data they choose to share and how that information is retained by organizations. It required organizations that collect data on EU citizens to obtain “clear and affirmative consent” for data collection, write privacy policies in clear and understandable language, inform individuals when their data was compromised, allow them to transfer their data to other organizations, and honor the “right to be forgotten” – the ability to request that their data be deleted.

The fifty-million-euro fine facing Google was the moment the data privacy industry had been waiting for, as GDPR had long promised steep costs for those found in violation of its data privacy rules.

CNIL reported that Google failed to fully disclose to users how their personal information is collected and what happens to it, in addition to not properly obtaining user consent for displaying personalized ads.

Although Google had made changes to comply with GDPR, the CNIL said in a statement that “the infringements observed deprive the users of essential guarantees regarding processing operations that can reveal important parts of their private life since they are based on a huge amount of data, a wide variety of services, and almost unlimited possible combinations.” They added that the violations were continuous breaches of the regulation, and “not a one-off, time-limited infringement.”

CNIL began investigating Google on the day GDPR took effect, in response to concerns raised by two privacy activist groups, None of Your Business and La Quadrature du Net. These groups filed complaints with CNIL, claiming that Google did not have a valid legal basis under GDPR to process personal data for personalized and targeted ads.

These groups have also filed privacy complaints against Facebook and its subsidiaries, including Instagram and WhatsApp.

In the course of its investigation, CNIL found that when users created Google accounts on Android smartphones, the company’s practices violated GDPR in two ways: transparency and the legal basis for ads personalization. CNIL additionally found that the notices Google provided to users about what types of information it collected were not easily accessible.

A fundamental principle of GDPR is that users must be able to easily locate and fully understand the extent of the data processing operations carried out by organizations. Here Google was found wanting: its terms were described as “too generic and vague in manner.” Overall, CNIL concluded that “the information communicated is not clear enough” for the average user to understand that the legal basis of the processing operations for ads personalization is consent. In other words, if a user does not consent to having their information processed for personalized ads, the company legally cannot do it. Per GDPR, consent for data processing must be “unambiguous”, with “clear affirmative action from the user.”

Google responded to the fine with a statement affirming its commitment to meeting transparency expectations and consent requirements of GDPR, and that it is “studying the decision to determine our next steps.”

Despite the fact that companies were given a two-year window to comply with the regulation, many were not compliant when it took effect on May 25, 2018. Other companies made only limited efforts to become compliant, choosing to wait until the first major fine was issued to see how serious enforcement would be. Any company hoping for a more lenient line from another national data protection authority can expect that authority’s decisions to be critically measured against CNIL’s approach.

Consent requirements seem to be the greatest obstacle for companies struggling with GDPR, especially where transparency and accessibility are concerned. Under GDPR, companies cannot hand out a single consent form covering a bundle of data uses. That has long been standard industry practice, and it is part of the reason Google was found to be in violation of GDPR.

To avoid similar fines in the future, companies should review how they obtain consent to collect users’ personal information. Each data use requires its own consent: users must be able to agree or disagree to each way their information will be used.
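A minimal sketch of what per-use consent might look like in practice. The class and method names below are our own invention, not any specific framework’s API; the point is only that each processing purpose carries its own yes/no answer, and the absence of an answer means no.

```python
# Illustrative per-purpose consent record: each data use is consented to
# (or refused) independently, and unanswered purposes default to "no".

class ConsentRecord:
    def __init__(self):
        self._purposes = {}

    def grant(self, purpose: str) -> None:
        self._purposes[purpose] = True

    def refuse(self, purpose: str) -> None:
        self._purposes[purpose] = False

    def may_process(self, purpose: str) -> bool:
        # No recorded affirmative consent means no processing.
        return self._purposes.get(purpose, False)

consent = ConsentRecord()
consent.grant("order_fulfilment")
consent.refuse("ads_personalization")
print(consent.may_process("order_fulfilment"))     # True
print(consent.may_process("ads_personalization"))  # False
print(consent.may_process("analytics"))            # False (never asked)
```

Contrast this with the bundled model GDPR forbids, where one checkbox would silently flip every purpose to “yes” at once.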

This level of transparency is essential and it requires changing from previously accepted business practices.

If you have any questions or comments pertaining to GDPR or this article, feel free to contact us at info@centry.global. Be sure to follow us on Twitter @CentryGlobal and subscribe for more content like this!

This article was written by Kristina Weber, Content Manager of Centry Global.

Centry Quick Check Program for Corporate Due Diligence

New technology has revolutionized corporate investigations and changed the way we go about them. There’s greater efficiency, new insights, and broader reach. However, the downside is that this technology can lull both investigators and clients into a false sense of security.

Computers can provide us with information, but people are still better at evaluating data within context, such as identifying how useful the information is and what it is relevant to. In short, technology can’t yet replicate human analysis – and yet we continue to see a growing dependence upon it for exactly that.

The Value of Professional Investigators in Corporate Due Diligence

In countries with robust public records, this dependence on automated scanning and investigative technology is particularly evident. Investors and corporations still recognize the value of actual investigators in challenging regions across the globe where public records may be inaccessible or inaccurate; but when it comes to due diligence inside the US and Canada, for example, companies are increasingly drawn by the promise of these low-level automated scans.

However, it’s important to consider that these surface-level scans will not and cannot encompass a full understanding of an investigated subject. Software-driven data harvests conducted without the analytical power of the human mind could expose businesses to risks they may be unaware of, including reputational risk, fraud, money laundering, and more.

Most of these automated scans lack media coverage of the target, whether on social platforms or in journalistic content. This surface-level research cannot hope to provide a clear and accurate picture of a subject, and it certainly would not appease judicial officials if something were to go wrong.

For example, a single-location local records check cannot account for whether a person has moved cities. Nor would it pick up whether the subject has faced allegations of criminal activity, something that can be identified through a media assessment. Furthermore, media research can surface extreme political views or associations that an investor or company might not want to be linked with.

The experience of a professional who has conducted hundreds, if not thousands, of due diligence investigations is highly valuable. Such an investigator can provide context around findings that initially seem adverse – for instance, whether a particular practice is typical for an industry – or pick up contextual clues that uncover a previously overlooked detail.

Companies seeking to save money by purchasing an automated scan with no human involvement could unknowingly be setting themselves up for major risk in the future.

Our Answer: Investigator Driven Quick Checks for Individuals and Companies

Propelled by increased regulatory concerns among corporate entities and a more competitive environment amid the offers of automated checks, Centry Global has formulated an answer to the question of how to marry meaningful analysis to efficiency in due diligence investigations with our Quick Check (QC) program.

What to Expect from a Centry QC

The QC program combines an identity review, sanctions screening, compliance check, and media research into a single, well-organized background check package on either individuals or companies with a turnaround time of 5-7 business days.

Quick Check of a Company

  • Identity Review
    • Key financial figures
    • Risk Level
    • Beneficial Owners and Senior Management
  • Compliance Review
    • Sanctions and Watchlists Screening
  • Social/Adverse Media Review
  • Analysis and/or Recommendations

Quick Check of an Individual

  • Identity Review
    • Shareholdings and Directorships
  • Compliance Review
    • Sanctions and Watchlists Screening
    • Politically Exposed Persons Screening
    • Litigations Check
  • Social/Adverse Media Review
  • Analysis and/or Recommendations

For more information on these Quick Checks, please feel free to contact us at info@centry.global or on our LinkedIn, Facebook and Twitter pages!

Cryptocurrency OneCoin revealed to be $3bn pyramid scheme

An international pyramid scheme built around the marketing of the cryptocurrency OneCoin has now been revealed. Konstantin Ignatov, his sister Ruja Ignatova, and Mark Scott have been charged by the US Attorney’s Office for the Southern District of New York (SDNY) with wire fraud conspiracy, securities fraud, and money laundering.

OneCoin is a Bulgarian-based company that was founded in 2014 and is still active today. The company’s main operations depended upon selling educational cryptocurrency trading packages to its members, who in turn received commissions for recruiting others to purchase these packages. SDNY has identified this as a multi-level marketing structure and attributes the rapid growth of the OneCoin member network to it. The company claims to have more than 3 million members worldwide.

In a government press release, Manhattan attorney Geoffrey Berman said that the OneCoin leaders essentially created a multi-billion dollar company “based completely on lies and deceit.”

Leaders of OneCoin are also alleged to have lied to investors to inflate the value of a OneCoin from approximately $0.50 to over $30.00. This was just one facet of the misinformation perpetrated by the company’s leaders, which included claims that OneCoin is mined by company servers and recorded on a private blockchain; the investigation found that OneCoins are not mined with computer resources at all and that the blockchain claims were false.

So how damaging was this scheme? The SDNY claims that between 2014 and 2016 alone, OneCoin generated more than $3.7 billion in sales revenue and earned profits of approximately $2.6 billion. The investigation revealed that Ignatova and her co-founder created the business with the full intent of using it to defraud investors. In one email between OneCoin’s co-founders, Ignatova described her exit strategy, which was simple: take the money, run, and blame someone else.

Konstantin Ignatov was arrested on March 6, 2019 at LAX, while his sister remains at large. Ruja Ignatova could face up to 85 years in prison if found guilty on all counts, as she faces five separate charges. Mark Scott was arrested in Massachusetts on September 5, 2018 and faces up to 20 years in prison.

Many authorities across the globe have been notified of OneCoin’s fraudulent behaviours and have attempted to stop the company’s operations.

This article was written by Kristina Weber of Centry Global. For more content like this, be sure to subscribe to Centry Blog for bi-weekly articles related to the security and risk industries. Follow us on Twitter @CentryGlobal!

At the Cost of Life: The Counterfeit Pharmaceuticals Industry

Of all the fraudulent goods that circulate the world alongside their legitimate counterparts, counterfeit pharmaceuticals form one of the largest sectors. It is a booming business, with multiple sources estimating annual revenues between $200 and $300 billion. Worse, the stakes behind counterfeiting pharmaceuticals are higher than for fake handbags: people around the world depend on medication for their lives.

The World Health Organization defines a counterfeit drug broadly: any drug that has been deliberately or fraudulently mislabeled with respect to identity or source. The term ‘counterfeit pharmaceuticals’ is an umbrella that encompasses everything from changing expiration dates on packaging to altering the raw materials to removing the active ingredients from the medication.

Counterfeit medications make up roughly 10% of pharmaceuticals globally, but it’s important to note that this isn’t an even spread across countries. Developing countries face fraudulent pharmaceuticals on a wider scale than more heavily regulated developed countries do.

But even in those areas there are plenty of opportunities for criminals to take advantage of the system. Counterfeit pharma is an attractive industry to criminals – especially those with organized crime connections – as it can generate revenue rivaling the trafficking of drugs like heroin. Counterfeiters can charge near-market prices for big-ticket medicines like cancer treatments and insulin while mass-producing knockoffs cheaply, yielding substantial profits.

All drugs must undergo clinical trials to test their efficacy, quality, and potential side effects before they can be marketed to the public. These measures function as a safety valve to protect consumers. However, these regulations are not observed by the manufacturers of counterfeit products. Instead, they use substandard materials, leave out active ingredients, or otherwise tamper with the components. The end result can range from ineffective treatment to severe health problems or death.

The way pharmaceuticals move through the supply chain can leave them vulnerable. There are typically three major phases: manufacturing, distribution, and retail.

Usually the chemical compounds that compose the active ingredients of a drug are manufactured in places like China or India due to the relatively low cost of raw materials in those regions. Indeed, India itself is home to more than fifteen thousand illicit drug factories, which are estimated to supply approximately 75% of the world’s counterfeit drugs. Continuing the manufacturing process, those chemical compounds are made into their respective forms – whether that’s capsules, injections, creams, etc. – either in the country of origin or in USA/Europe.

The drugs are then shipped in large quantities to packing facilities, where they are prepared for the distribution phase. This part is particularly vulnerable, as this is an area where criminals can deposit convincing fakes to be combined with the legitimate drugs.   

That said, counterfeiting can occur outside the distribution chain as well because of internet and mail order markets, street vendors, etc. This is especially common in areas where medicine can be hard to acquire – counterfeit pharmaceuticals are a big problem in the developing world. Some pharmacists in these developing regions are compelled to buy from the cheapest – but not necessarily the safest – suppliers in order to compete with the street market.

Lotions, creams, and oils are often counterfeited, as they are relatively easy to make and sell through illegitimate suppliers. Production of these goods is generally less regulated than pills or injections, because knockoff ointments are likely to be less damaging than their oral counterparts. That said, a significant number of counterfeit injectables have been found within the legitimate pharmaceutical supply chain; these tend to cost more and can have deadly effects. Overall, though, anything profitable is at risk of being counterfeited – from treatments for AIDS, cancer, and diabetes to antibiotics and more. Theoretically, every patient is at risk, so counterfeit pharmaceuticals are a problem that should concern everyone.

The human toll is enormous, as a study by the World Health Organization calculated that up to 72,000 deaths from childhood pneumonia could have been attributed to the use of antibiotics that had reduced activity, and that number climbs to 169,000 if the drugs had no active ingredients at all. These low quality drugs also add to the danger of antibiotic resistance, which threatens to undermine the power of these life-saving medicines in the future.

Three international security organizations – Interpol, the Institute for Security Studies, and the Global Initiative against Transnational Organized Crime – have called for an overhaul of the regulatory, enforcement, and education systems for medical supply chains in Africa specifically, in order to reduce the spread of counterfeit pharmaceuticals across the continent. The three organizations commissioned a study, “The rise of counterfeit pharmaceuticals in Africa”, under a larger EU-funded project dedicated to Enhancing Africa’s Response to Transnational Organized Crime (ENACT). The study found that counterfeit pharmaceuticals account for nearly 30% of drugs on the African market.

To address the proliferation of fraudulent pharmaceuticals, anti-counterfeiting measures are emerging around the world. For example, the U.S. Drug Supply Chain Security Act requires pharmaceutical companies to add serial numbers to all packages over the next few years, which should help track the movement of medications through the supply chain. Several non-profits have been founded to combat the issue as well. Finding a lasting solution, however, will require sustained commitment not only at the national level but internationally, among health organizations, law enforcement, healthcare stakeholders, and the pharmaceutical industry as a whole.

This article was written by Kristina Weber of Centry Global. For more content like this, subscribe to our blog for bi-weekly articles related to the security industry and follow us on Twitter with our new handle @CentryGlobal!

The Future of AI, Security, & Privacy

Artificial Intelligence is a subject that is not just for researchers and engineers; it is something everyone should be concerned with.

Martin Ford, author of Architects of Intelligence, describes his findings on the future of AI in an interview with Forbes.

The main takeaway from Ford’s research, which included interviews with more than twenty experts in the field, is that everyone agrees that the future of AI is going to be disruptive. Not everyone agrees on whether this will be a positive or negative disruption, but the technology will have a massive impact on society nonetheless.

Most of the experts concluded that the most real and immediate threats are going to be to cyber security, privacy, political systems, and the possibility of weaponizing AI.

AI is a very useful tool for gathering information, owing to its speed, the scale of data it can process, and, of course, automation. It is the most efficient way to process a large volume of information in a short time frame, as it works faster than human analysts. That said, it comes with drawbacks: we have started to see that its algorithms are not immune to gender and race bias in areas such as hiring and facial recognition software. Ford suggests that regulation is necessary in the immediate future, which will require a continuing conversation about AI in the political sphere.

AI-based consumer products are vulnerable to data exploitation, and that risk has only risen as we have become more dependent on digital technology in our day-to-day lives. AI can be used to identify and monitor user habits across multiple devices, even if your personal data is anonymized when it becomes part of a larger data set. Anonymized data can be sold to anyone for any purpose; the idea is that since the data has been scrubbed, it cannot be used to identify individuals and is therefore safe to analyze or sell.

However, between open source information and increasingly powerful computing, it is now possible to re-identify anonymized data. The reality is that you don’t need that much information about a person to be able to identify them. For example, much of the population of the United States can be identified by the combination of their date of birth, gender, and zip code alone.
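A toy illustration of the point: the records below are fabricated, but even with names stripped, every row is unique on (birth date, gender, ZIP) alone – exactly the combination that makes much of the US population identifiable.

```python
# Fabricated "anonymized" records: no names, yet each row is unique on
# the quasi-identifier triple (birth date, gender, ZIP code).
from collections import Counter

anonymized = [
    ("1985-03-12", "F", "94110"),
    ("1985-03-12", "M", "94110"),
    ("1990-07-01", "F", "10001"),
    ("1990-07-01", "F", "94110"),
]

counts = Counter(anonymized)
unique_rows = [row for row, n in counts.items() if n == 1]
print(f"{len(unique_rows)} of {len(anonymized)} records are unique")  # 4 of 4
```

Any row that is unique on these three fields can be re-identified by joining against a public dataset – a voter roll, for instance – that carries the same fields alongside names.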

With consent-based regulations such as GDPR establishing a right to digital privacy, it is clear that people want to know how their information is used, why, and how it can affect their lives – and that they want control over how it is used.

This article was written by Kristina Weber, Content Supervisor of Centry Ltd. For more content like this, be sure to subscribe to our blog, which updates every other Friday with articles related to the security industry!

Security Predictions for 2019

The predictions for 2018 that we shared last year landed on some points – data protection and cyber security – while straying on others, most notably cryptocurrencies. Bitcoin was a hot topic in 2017, surging to values that had people everywhere kicking themselves for not investing sooner. What followed was an epidemic of articles predicting global acceptance of cryptocurrencies. That balloon popped when the cryptocurrency market crashed in early 2018, and it seems that many have quietly walked back their cryptocurrency hype since.

Continuing the tradition, here are a few insights into the forecast for 2019:

Supply Chain Attacks. While these threats can occur in any sector of the economy that relies on supply chains, the industries that most commonly experience these attacks include pharmaceuticals, biotechnology, hospitality, entertainment, and media. Manufacturing operations are attractive targets to adversaries, due in part to their broad potential attack surface. With increasing reliance on the supply chain, there is a wealth of information that could be obtained from organizations that have not taken appropriate steps to secure themselves. For more information on cyber security in the supply chain, read our article here.

Further development of consumer privacy laws. Last year saw the launch of the European Union’s GDPR, which marked the first big regulatory move toward protecting consumer information. Soon after, California passed its own analogue, the California Consumer Privacy Act of 2018, which is slated to take effect on January 1, 2020. A draft federal privacy bill for the United States may arrive in early 2019 following concerns over a number of privacy breaches.

Continuing adoption of artificial intelligence across wider society. From Alexa to politics, AI will continue to spread across industries and uses. Chinese companies have announced intentions to develop AI processing chips to avoid reliance on US-manufactured Intel and Nvidia hardware. There is rising concern that AI technology could increasingly be used by authoritarian regimes to restrict personal freedoms. As AI continues to spread its proverbial wings, we could see a move toward “transparent AI” – an effort to gain consumer trust by being clear about how AI uses human data and why. Of course, there is always the worry that the rise of AI will create a jobless future; Gartner, however, suggests the opposite: that artificial intelligence will create more jobs than it eliminates.

Big data breaches will push companies to tighten login security. We might see a concerted effort by the security industry to replace usernames and passwords altogether, pushing toward an alternative solution as an industry standard. Biometrics – for example, facial recognition or fingerprint logins – are certainly on the rise.

Digital skimming will become more prevalent. The trick of card skimming has moved to the digital world, where attackers go after websites that process payments. The growth of online shopping has made checkout pages attractive targets; British Airways and Ticketmaster were two high-profile cases. The British Airways case was particularly alarming, as airlines have access to a wide breadth of information, from birthdates and passport details to payment information and more. Although the airline confirmed that no travel data was stolen in the attack, it nonetheless remains a cautionary tale.

This article was written by Kristina Weber. For more content like this, be sure to subscribe to Centry Blog for bi-weekly articles related to the security industry. Follow us on Twitter @CentryLTD and @CentryCyber!