The Secure Times

An online forum of the ABA Section of Antitrust Law's Privacy and Information Security Committee


Caution: Your Company’s Biggest Privacy Threat is…the FTC

Technology companies – from startups to megacorporations – should not overlook an old privacy foe: the Federal Trade Commission (FTC).  Since its inception in 2002, the FTC’s data security program has picked up significant steam.  In the last two years, the FTC has made headlines for its hefty privacy-related fines against Google and the photo-sharing social network Path.  In January 2014 alone, the agency settled with a whopping 15 companies for privacy violations.  What is more, many of these companies’ practices were not purposefully deceptive or unfair; rather, the violations stemmed from a mere failure to invest the time and security resources needed to protect data.

Vested with comprehensive authority and unburdened by certain hurdles that class actions face, the FTC appears poised for more action.  The FTC’s authority in the privacy context originates from the Federal Trade Commission Act (FTC Act) and is quite broad.  Simply put, the agency may investigate “unfair or deceptive acts or practices in or affecting commerce.”  In addition to this general authority, the FTC may investigate privacy violations and breaches under numerous sets of rules, including the Children’s Online Privacy Protection Act (COPPA), the Fair Credit Reporting Act (FCRA) and its Disposal Rule, the Gramm-Leach-Bliley Act (GLB), and the Telemarketing and Consumer Fraud and Abuse Prevention Act.  Nor is the FTC hampered by the requirements of private class action litigation.  For example, successful privacy class actions often must establish that consumers were harmed by a data breach (as in In re Barnes & Noble Pin Pad Litigation), that consumers actually relied on a company’s promises to keep their information confidential (as in In re Apple iPhone Application Litigation), or that the litigation will not be burdened with consumer-specific issues (such as whether a user impliedly consented to the disclosure, as in In re: Google Inc. Gmail Litigation).

The FTC has often focused on companies failing to adhere to their own stated policies, which the FTC considers a “deceptive” practice.  More recently, the FTC settled with the maker of one of the most popular Android apps, “Brightest Flashlight Free.”  While the app informed users that it collected their data, it allegedly failed to disclose that the data would be shared with third parties.  And though the bottom of the license agreement offered consumers an opportunity to click “Accept” or “Refuse,” the app had allegedly already been collecting and sending information (such as location and the unique device identifier) before receiving acceptance.  Just last week, the FTC settled with Fandango for failing to adequately secure data transmitted through its mobile app, in contravention of its promise to users.  The FTC alleged that Fandango disabled a critical security process, known as SSL certificate validation, which would have verified that its app’s communications were secure.  As another example, the FTC recently settled with the maker of a camera device used in homes for a variety of purposes, including baby monitoring and security.  The device allows its video to be accessed from any internet connection.  The devices are alleged to have “had faulty software that left them open to online viewing, and in some instances listening, by anyone with the cameras’ Internet address.”
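For readers curious about what “disabling SSL certificate validation” means in practice, the sketch below is purely illustrative — it is not Fandango’s actual code (the function name is invented), but it shows, using Python’s standard ssl module, the difference between a client that validates server certificates and one that accepts any certificate at all:

```python
import ssl

def make_tls_context(validate_certificates: bool = True) -> ssl.SSLContext:
    """Build a client-side TLS context.

    validate_certificates=True is the safe default: the TLS handshake
    fails unless the server's certificate chains to a trusted CA and
    matches the hostname. Passing False reproduces the kind of lapse
    the FTC alleged: any certificate is accepted, so an attacker on
    the network path can impersonate the server and read data
    (such as credit card numbers) in transit.
    """
    if validate_certificates:
        return ssl.create_default_context()  # chain + hostname checks on
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.check_hostname = False       # skip hostname matching
    context.verify_mode = ssl.CERT_NONE  # skip certificate chain validation
    return context

safe = make_tls_context(True)
assert safe.verify_mode == ssl.CERT_REQUIRED and safe.check_hostname

unsafe = make_tls_context(False)
assert unsafe.verify_mode == ssl.CERT_NONE  # validation disabled
```

Note that the order of the last two assignments matters: Python refuses to set `verify_mode = ssl.CERT_NONE` while `check_hostname` is still enabled, a small guardrail against exactly this mistake.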

Companies have also been targeted for even slight deviations from their stated policies.  For example, the FTC recently reached settlements with BitTorrent and the Denver Broncos.  Both entities were charged with falsely claiming they held certifications under the U.S.-EU Safe Harbor Framework.  In reality, the entities had received the certifications but failed to renew them.  The Safe Harbor is a streamlined process for U.S. companies that receive or process personally identifiable information, directly or indirectly, from Europe to comply with European privacy law.  Self-certifying to the Framework also assures EU organizations that a company provides “adequate” privacy protection.

Perhaps most surprising to companies is the FTC’s assertion that it may require them to have reasonable data protection policies in place, even if the company never promised consumers it would safeguard their data.  Failure to secure data, according to the FTC, is an “unfair” practice under the FTC Act.  For example, the FTC recently settled with Accretive Health, a company that handles medical data and patient financial information.  Among other things, Accretive was alleged to have transported laptops containing private information in an unsafe manner, leading to the theft of a laptop that had been left in a locked compartment of an employee’s car.  The FTC is estimated to have brought over 20 similar cases, all but one of which settled before any meaningful litigation.  The exception: a case against Wyndham Hotels.  There, the FTC alleged that Wyndham failed to adequately protect consumer data collected by its member hotels.  According to the FTC, hackers repeatedly accessed the data due to the company’s misconfigured software, weak passwords, and insecure servers.  Though Wyndham’s privacy policy never technically promised that the information would remain secure, the FTC faulted it for the lapse anyway.  Wyndham has challenged the FTC’s position in federal court, and a decision is expected soon.

Being the target of an FTC action is no walk in the park.  In addition to paying attorneys’ fees, the FTC often demands significant remedial measures.  For instance, the company may be required to (1) create privacy programs and protocols, (2) notify affected consumers, (3) delete private consumer data, (4) hire third-party auditors, and (5) subject itself to continual FTC oversight for 20 years.  What is more, if a company becomes a repeat offender and violates its agreement not to engage in future privacy violations, it will face significant fines.  Google, for example, was required to pay $22.5 million for violating a previous settlement with the FTC.

All told, technology companies should not feel emboldened by recent class action victories in the privacy context.  To avoid an FTC investigation, they should carefully review their data handling practices to ensure that those practices accord with their privacy policies.  Further, they would be wise to invest the resources required to safeguard data and to regularly ensure that their security methods are state of the art.



Google Avoids Class Certification in Gmail Litigation

On March 18, 2014, Judge Koh in the Northern District of California denied Plaintiffs’ Motion for Class Certification in the In re: Google Inc. Gmail Litigation matter, Case No. 13-MD-02430-LHK. The case involved allegations of unlawful wiretapping in Google’s operation of its Gmail email service. Plaintiffs alleged that, without obtaining proper consent, Google unlawfully read the content of emails, extracted concepts from the emails, and used metadata from emails to create secret user profiles.

Among other things, obtaining class certification requires a plaintiff to demonstrate that class issues will predominate over individual issues. In this case, Judge Koh’s opinion focused almost exclusively on the issue of predominance. The Court noted that the predominance inquiry “tests whether proposed classes are sufficiently cohesive to warrant adjudication by representation.” Opinion (“Op.”) at 23 (citations omitted). The Court further emphasized that the predominance inquiry “is a holistic one, in which the Court considers whether overall, considering the issues to be litigated, common issues will predominate.” Op. at 24.

The Court in the Gmail litigation noted that the existence of consent is a common defense to all of Plaintiffs’ claims. Consent can either be express, or it can be implied “based on whether the surrounding circumstances demonstrate that the party whose communications were intercepted knew of such interceptions.” Op. at 26. The decision explained how common issues would not predominate with respect to determining whether any particular class member consented to Google’s alleged conduct.

The Court briefly addressed whether the issue of express consent could be practically litigated on a class-wide basis, but the opinion focused largely on the issue of implied consent. The Court noted that implied consent “is an intensely factual question that requires consideration of the circumstances surrounding the interception to divine whether the party whose communication was intercepted was on notice that the communication would be intercepted.” Op. at 30. Google contended that implied consent would require individual inquiries into what each person knew, pointing to a plethora of information surrounding the scanning of Gmail emails, including: (1) Google’s Terms of Service; (2) Google’s multiple Privacy Policies; (3) Google’s product-specific Privacy Policies; (4) Google’s Help pages; (5) Google’s webpages on targeted advertising; (6) disclosures in the Gmail interface; (7) media reporting of Gmail’s launch and how Google “scans” email messages; (8) media reports regarding Google’s advertising system; and (9) media reports of litigation concerning Gmail email scanning. The Court thus agreed with Google that there was a “panoply of sources from which email users could have learned of Google’s interceptions other than Google’s TOS and Privacy Policies.” Op. at 33. With all these different means by which a user could have learned of the scanning practices (and provided implied consent to them), the issue of consent would overwhelmingly require individualized inquiries, thus precluding class certification.

This opinion demonstrates a key defense to class action claims where implied consent is at issue. Any class action defendant’s assessment of risk should include an early calculation of the likelihood of class certification, and that calculation should inform litigation strategy throughout the case. Google consistently litigated the matter to highlight class certification difficulties surrounding consent, and ultimately obtained a significant victory in defeating class certification.


PHI Whack-a-Mole

Well, the newest Ponemon Study confirms what the news keeps reporting:  health care organizations continue to be plagued by PHI security incidents.  In fact, 90 percent of healthcare organizations surveyed reported experiencing breaches.  And 45 percent agree that they have inadequate policies and procedures in place to effectively detect or prevent PHI security incidents.  


So what’s causing these breaches?  According to the Ponemon Institute’s fourth annual Patient Privacy & Data Security Study, patient data security and privacy threats are new and expanding.  For health care organizations, it is a bit like playing whack-a-mole. First place among causes of actual PHI breaches goes to lost or stolen computing devices (49 percent), followed by employee error (46 percent) and third-party mishaps (41 percent). The rate of data breaches caused by a malicious insider or hacker has doubled, from 20 percent of all incidents to 40 percent, since the first Ponemon Institute study four years ago.  This last statistic is of special concern to Dr. Ponemon:  “The latest trend we are seeing is the uptick in criminal attacks on hospitals…. With millions of new patients entering the U.S. healthcare system under the Affordable Care Act, patient records have become a smorgasbord for criminals.”  The problem, of course, is that cybercriminals are becoming ever more sophisticated in their malicious tactics, a huge challenge for often financially strapped healthcare organizations to address.


What’s keeping healthcare executives up at night?  Seventy-five percent worry most about employee negligence, in particular BYOD (bring your own device).  While 88 percent of organizations allow employees and medical staff to use their own devices, more than half are not confident that those devices are secure, and 38 percent have no procedures to secure the devices or prevent them from accessing sensitive information.


Business associates (BAs) are worrying these executives as well.  As the healthcare environment expands, with more healthcare organizations relying on BAs for services such as IT, claims processing and benefits management, greater risks emerge.  Less than one-third of those surveyed expressed confidence that their BAs are protecting patient data in the ways mandated by the HIPAA Final Rule.


So, is there any good news?  Are any moles actually getting whacked?  Fortunately, more than half of the health care organizations surveyed (55 percent) believe they have effective policies and procedures in place to prevent or quickly detect unauthorized patient data access, loss or theft.  This confidence is not unfounded, as the actual number and cost of data breaches have slightly declined from prior study results.  Further, while the 2013 report noted that 45 percent of healthcare organizations had experienced more than five incidents in the prior two years, this year that percentage declined to 38 percent.  So, slow but steady progress. The challenge, according to Ponemon, is that organizations need to embed a culture of compliance to stem the security risks.  Healthcare organizations must implement tools, processes and software that automate and streamline the practice of protecting PHI—these are the mallets health care organizations need to whack those data security moles.


Google ‘Glasstiquette’ and Other Google Glass Issues

I was sitting at an airport café a few weeks ago and witnessed a group of friends having fun, talking and laughing together around a table. Next to them sat a middle-aged businessman wearing Google Glass.

One member of the group spotted the glasses and started a conversation with their wearer. The businessman obliged, saying he liked them very much, and then said: “Oh, and by the way, I just took a picture of you!”

The mood of the group subtly changed. They politely ended the conversation and then, very slowly, so as not to be rude, all turned their backs to the businessman.

I had just witnessed a Google Glass breach of etiquette, and, for what it’s worth, the culprit did not seem to be aware of it.

Etiquette for a Wearable Computers World

Do we need new rules for living in a society where the person next to us in line at the supermarket checkout, or worse, behind us at the pharmacy while we pick up our prescriptions, may be able to photograph or film us without our even noticing?

Google itself seems to believe we do, as it recently released a guide for ‘Google Glass Explorers,’ the people who have been given the opportunity to test Google Glass, which is still not available for sale.

Well, it seems that the businessman I witnessed at the airport broke one of the Do’s of the guide: ‘Ask for permission.’ While this seems like common sense if one is taking a photograph, how could we possibly ask for permission to film a crowd? Should we ask every single person for permission?

Google Glass and Law Enforcement

Knowing the answer to this question may be a matter of personal safety. A woman claimed this week that she was attacked and robbed in a San Francisco bar while wearing Google Glass, and, according to her, precisely because she was wearing it.

It seems that she was later able to retrieve her Google Glass. Interestingly, its camera had apparently recorded the incident, showing a man ripping the glasses off her face. Could the recording be used as evidence?

Indeed, Google Glass may be used by law enforcement officials in the near future. New York’s finest, the NY Police Department, is experimenting with two pairs of Google Glass to find out whether they could be used for police work. That could raise some interesting Fourth Amendment issues.

On the other hand, Google Glass may become the pet peeve of police officers in charge of enforcing road security.

A woman got a ticket in San Diego last October for speeding, but also for wearing Google Glass, as Section 27602(a) of the California Vehicle Code forbids driving a motor vehicle if “a television receiver, a video monitor, or a television or video screen, or any other similar means of visually displaying a television broadcast or video signal that produces entertainment or business applications, is operating and is located in the motor vehicle at a point forward of the back of the driver’s seat, or is operating and the monitor, screen, or display is visible to the driver while driving the motor vehicle.”

The case was later dismissed, as the judge did not find enough evidence to prove beyond a reasonable doubt that Google Glass was switched on while its wearer was driving.

Is it safe to wear Google Glass while driving? It may soon be banned by legislators, just as texting and using a telephone while driving are forbidden in most states. Some states, such as Illinois, have already introduced bills to that effect. It remains to be seen whether they will be enacted; Reuters reported this week that Google is lobbying to stop these bills from becoming law.

Facial Recognition and Google Glass

Even though Google has not (yet) developed a facial recognition app for Google Glass, another company, NameTag, has done so. It would allow Google Glass wearers to pick a suitable date from a crowd of strangers by scanning the social media information of the people around them. That triggered concerns from Senator Al Franken (D-Minnesota), who sent a letter on February 5th to the president of NameTag, expressing his “deep concerns” about the new app and urging him to delay its launch “until best practices for facial recognition technology are established.”

This is not the first time legislators have expressed concerns over the privacy issues raised by Google Glass, and it probably won’t be the last.