The Secure Times

An online forum of the ABA Section of Antitrust Law's Privacy and Information Security Committee



FTC Chairwoman Edith Ramirez Comments on Data Security for the Internet of Things

Happy New Year! For many, the holidays included exciting new gadgets. Whether it’s a new fitness tracker, a smart thermostat, or a smart glucose meter, these new connected devices have arrived, and new products are on the horizon. These products, termed the “Internet of Things” by privacy professionals, are broadly defined as products that can connect to a network.

On January 6, 2015, FTC Chairwoman Edith Ramirez delivered the opening remarks at the International Consumer Electronics Show, during which she spoke on security issues surrounding the Internet of Things (“IoT”). Chairwoman Ramirez discussed what she viewed as three key risks to consumer privacy, along with suggested industry solutions to mitigate those risks.

IoT Risks to Consumer Privacy
The first privacy and security risk Chairwoman Ramirez identified was that connected devices engage in “ubiquitous data collection.” Because these devices can collect personal information, including our habits, location, and physical condition, the resulting data can be used to build rich profiles of consumer preferences and behavior.

The second risk Chairwoman Ramirez identified was the possible unexpected use of consumer data acquired through connected devices. As an example, she asked whether data from a smart TV’s tracking of consumer television habits could be combined with other data to enable businesses to engage in targeted advertising or even exacerbate socio-economic disparities.

The third risk she identified was that connected devices can be hijacked, leading to misuse of personal information.

Suggested Industry Solutions
To combat the risks identified above, Chairwoman Ramirez suggested three solutions for the IoT industry. First, IoT companies should engage in “Security by Design,” meaning that IoT products should be built from the outset with security as a priority and that IoT companies should implement technical and administrative measures to ensure reasonable security. Chairwoman Ramirez identified five aspects of Security by Design:

  • conduct a privacy or security risk assessment as part of the design process;
  • test security measures before products launch;
  • use smart defaults—such as requiring consumers to change default passwords in the set-up process (a brief illustrative sketch follows this list);
  • consider encryption, particularly for the storage and transmission of sensitive information, such as health data; and
  • monitor products throughout their life cycle and, to the extent possible, patch known vulnerabilities.
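
To make the “smart defaults” bullet concrete, the following minimal sketch (in Python) shows a set-up routine that refuses to complete until the factory default password is replaced. The factory default value, password length rule, and hashing parameters are illustrative assumptions for this sketch, not requirements drawn from the Chairwoman’s remarks.

```python
import hashlib
import os

FACTORY_DEFAULT = "admin"  # hypothetical factory-set credential


def hash_password(password: str, salt: bytes) -> bytes:
    # PBKDF2 parameters here are illustrative, not a recommendation.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)


def complete_setup(new_password: str) -> dict:
    """Finish device set-up only after the factory default password is changed."""
    if new_password == FACTORY_DEFAULT:
        raise ValueError("Set-up cannot complete with the factory default password.")
    if len(new_password) < 12:
        raise ValueError("Choose a password of at least 12 characters.")
    salt = os.urandom(16)
    return {"salt": salt, "password_hash": hash_password(new_password, salt)}


# Example: the default credential is rejected, a user-chosen password succeeds.
try:
    complete_setup("admin")
except ValueError as err:
    print(err)
credentials = complete_setup("correct horse battery staple")
```

The point is simply that the insecure default never survives set-up; everything else about the flow would be product-specific.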

Second, Chairwoman Ramirez suggested that companies that collect personal information should engage in data minimization, that is, collect only the data needed for a specific purpose and safely dispose of that data afterward. Chairwoman Ramirez also urged companies to de-identify consumer data where possible.

Finally, Chairwoman Ramirez suggested that IoT companies provide notice and choice to consumers for unexpected collection or uses of their data. As an example, Chairwoman Ramirez stated that if IoT companies are sharing data from a smart thermostat or fitness band with data brokers or marketing firms, those companies should provide consumers with a “simple notice of the proposed uses of their data and a way to consent.”

Although not official FTC statements, these remarks by Chairwoman Ramirez provide valuable insight into how the Federal Trade Commission may regulate connected devices in the future. Companies in the IoT space should monitor further developments closely and review their data collection, security, and sharing practices accordingly.



2014 Verizon Data Breach Report Paints a Sobering Picture of the Information Security Landscape

The 2014 Verizon Data Breach Investigations Report (DBIR) was released on April 22, providing just the sort of deep empirical analysis of cybersecurity incidents we’ve come to expect from this annual report. The primary messages of this year’s DBIR are the targeting of web applications, continued weaknesses in payment systems, and nine categories of attack patterns that cover almost all recorded incidents. Further, despite the attention paid to last year’s enormous data breach at Target, this year’s data shows that attacks against point of sale (POS) systems are actually decreasing somewhat. Perhaps most importantly, the thread running throughout this year’s DBIR is the need for increased education and better digital hygiene.

Each year’s DBIR is compiled from data on breaches and incidents investigated by Verizon, law enforcement organizations, and private-sector contributors. This year, Verizon condensed its analysis into nine attack patterns that describe nearly all observed breaches. Within each of these patterns, Verizon cites the software and vectors attackers are exploiting, as well as other important statistics such as time to discovery and remediation. The nine attack patterns listed in the DBIR are POS intrusions, web application attacks, insider misuse, physical theft/loss, miscellaneous errors, crimeware, card skimmers, denial-of-service (DoS) attacks, and cyber-espionage. Within any given industry vertical, most attacks can be characterized by only three of the nine categories.

Attacks on web applications were by far the most common threat type observed last year, with 35% of all confirmed incidents linked to web application security problems. This figure represents a significant increase over the three-year average of 21% of data breaches attributable to web application attacks. The DBIR states that nearly two-thirds of attackers targeting web applications are motivated by ideology, while financial incentives drive another third. Financially motivated attacks are most likely to target organizations in the financial and retail industries. These attacks tend to focus on user interfaces like those at online banking or payment sites, either by exploiting an underlying weakness in the application itself or by using stolen user credentials. To mitigate the use of stolen credentials, the DBIR advises companies to consider implementing some form of two-factor authentication, a recommendation made to combat several attack types in this year’s report.
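
As a rough illustration of the two-factor authentication the DBIR recommends, the sketch below uses the third-party pyotp library to enroll a user in time-based one-time passwords (TOTP) and to verify a code at login. The account name, issuer, and enrollment flow are hypothetical; the DBIR does not prescribe any particular implementation.

```python
# pip install pyotp  (third-party library; one possible approach among many)
import pyotp

# Enrollment: generate a per-user secret and share it with the user's
# authenticator app, for example via a QR code of the provisioning URI.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
uri = totp.provisioning_uri(name="user@example.com", issuer_name="ExampleCo")

# Login: after the password check, require the current six-digit code.
submitted_code = totp.now()           # in practice, typed in by the user
print(totp.verify(submitted_code))    # True for a current code; stale or wrong codes fail
```

Even this simple second factor means a stolen password alone is no longer enough to log in, which is the damage-limiting effect the report describes.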

The 2014 DBIR contains a wide array of detailed advice for companies who wish to do a better job of mitigating these threats. The bulk of this advice can be condensed into the following categories:

  • Be vigilant: Organizations often only find out about security breaches when they get a call from the police or a customer. Log files and change management systems can give you early warning.
  • Make your people your first line of defense:  Teach staff about the importance of security, how to spot the signs of an attack, and what to do when they see something suspicious.
  • Keep data on a ‘need to know’ basis: Limit access to the systems staff need to do their jobs, and make sure that you have processes in place to revoke access when people change roles or leave.
  • Patch promptly: Attackers often gain access using the simplest attack methods, ones that you could guard against simply with a well-configured IT environment and up-to-date anti-virus.
  • Encrypt sensitive data: Then if data is lost or stolen, it’s much harder for a criminal to use (a brief illustrative sketch follows this list).
  • Use two-factor authentication: This won’t reduce the risk of passwords being stolen, but it can limit the damage that can be done with lost or stolen credentials.
  • Don’t forget physical security: Not all data thefts happen online. Criminals will tamper with computers or payment terminals or steal boxes of printouts.
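
For the “encrypt sensitive data” item above, here is a minimal sketch using the third-party cryptography package’s Fernet recipe (symmetric, authenticated encryption). The record contents are invented for illustration, and in a real deployment the key would be generated once and kept in a secrets manager or hardware module, never stored alongside the data it protects.

```python
# pip install cryptography  (third-party library; one reasonable option)
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice: created once, stored separately
fernet = Fernet(key)

record = b"member-id: 12345; diagnosis: ..."   # hypothetical sensitive record
ciphertext = fernet.encrypt(record)            # the form that gets written to disk
plaintext = fernet.decrypt(ciphertext)         # recoverable only with the key
assert plaintext == record
```

If the ciphertext is later lost or stolen without the key, it is of little use to the thief, which is the whole point of the recommendation.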

These recommendations are further broken down by industry in the DBIR, but they largely come down to a liberal application of “elbow grease” on the part of companies and organizations. Executing on cybersecurity plans requires diligence and a determination to keep abreast of continual changes to the threat landscape, and often requires a shift in culture within a company. But with the FTC taking a more aggressive interest in data breaches, not to mention the possibility of civil suits as a response to less-than-adequate data security measures, companies and organizations would do well to make cybersecurity a top priority from the C-Suite on down.



FTC v. Wyndham Update, Part 3

In earlier updates, we provided background on and tracked the progress (and the unique circumstances) of FTC v. Wyndham Worldwide Corp., et al. On April 7, New Jersey District Court Judge Esther Salas issued a highly anticipated opinion in a case that will likely have broad implications in the realms of privacy and data security. In its motion to dismiss, Wyndham argued that the FTC had no authority to assert a claim in the data security context, that the FTC must first formally promulgate data security regulations before bringing such a claim, and that the FTC’s pleadings of consumer harm were insufficient to support its claims. The Wyndham court sided with the FTC on all of these arguments and denied Wyndham’s motion to dismiss.




Ninth Circuit Holds Actual Injury Not Required For Article III Standing under FCRA

On February 4, 2014, in Robins v. Spokeo, Inc., the Ninth Circuit reversed a district court and held that a plaintiff had standing to pursue a claim for damages under the Fair Credit Reporting Act (FCRA).

Spokeo is a data broker that operates a “people search” website that allows users to obtain information about other individuals, including contact information, marital status, age, occupation, economic health, and wealth level.  The complaint asserted that Spokeo violated a number of provisions of the FCRA – for example, that the company, as an alleged “consumer reporting agency,” failed to follow reasonable procedures to assure the requisite accuracy of information about consumers and failed to provide required notices to providers and users of information.  With respect to harm, the named plaintiff, bringing the action on behalf of a putative class, asserted that Spokeo had provided inaccurate information about him – namely, that he had a graduate degree and was wealthy – which diminished his employment prospects and led to anxiety and stress about his damaged ability to obtain work.

The Ninth Circuit easily dispensed with the challenge to standing as a statutory matter.  The court reasoned that because the FCRA provides a private right of action that does not require proof of actual damages, the statute likewise does not require a plaintiff to plead actual damages to have standing.

As for Article III injury-in-fact, the Ninth Circuit required no more in the way of pleading actual damages.  The court explained that, first, a plaintiff must allege that his statutory rights have been violated.  Second, the statutory rights at issue must protect against “individual, rather than collective, harm.”  The plaintiff alleged that he personally was injured by Spokeo’s provision of inaccurate information about him.  And his “personal interests in the handling of his . . . information are individualized rather than collective,” and therefore constitute “concrete de facto injuries.”  As for causation and redressability, once again, the statutory cause of action controlled:  the alleged violation of a statutory provision “caused” the violation of a right conferred by that provision.  Likewise, the court reasoned that statutory damages are presumed to redress the alleged injury.

The Ninth Circuit’s Spokeo decision follows the reasoning of Beaudry v. TeleCheck Services, Inc., 579 F.3d 702 (6th Cir. 2009).  At the same time, such statutory cases stand in contrast to the many – but by no means all – class actions where plaintiffs have struggled to plead injury-in-fact to pursue state common law claims seeking damages following the loss of personal information in a data breach.  See, e.g., Reilly v. Ceridian Corp., 664 F.3d 38 (3d Cir. 2011) (dismissing complaint for lack of standing); Key v. DSW Inc., 454 F. Supp. 2d 684 (S.D. Ohio 2006) (same); but see, e.g., Pisciotta v. Old Nat’l Bancorp, 499 F.3d 629 (7th Cir. 2007) (holding plaintiff demonstrated standing).



FTC v. Wyndham Update, Part 2

Update (April 10, 2014): For a more recent update on this case, please see this post.

Since our last update, there has been some interesting activity in the matter concerning the Federal Trade Commission’s (FTC) complaint against Wyndham Worldwide, currently pending before the U.S. District Court for the District of New Jersey, and this seems a good time for an update on those proceedings. This case has drawn considerable attention, mainly due to Wyndham’s challenge to the FTC’s authority to bring a data security enforcement action on unfairness grounds under Section 5 of the FTC Act. The outcome in this matter may very well have a profound effect on the FTC’s ability to regulate data security.

When we last discussed this matter, Wyndham’s motions to dismiss the FTC’s complaint, asserting, inter alia, that the FTC lacked the legal authority to enforce data security standards for private businesses, were before the court. On November 7, 2013, Judge Esther Salas heard oral argument on these motions. Wyndham opened by citing FDA v. Brown & Williamson Tobacco Corp., 529 U.S. 120 (2000), as support for its proposition that the FTC’s data security standards exceeded the agency’s authority. Judge Salas remained skeptical during the hearing, stating that she thought Brown & Williamson was distinguishable.

Wyndham further argued that the FTC’s informal data security guidelines are insufficient and do not provide fair notice of what is required under Section 5. Wyndham questioned both the FTC’s authority and its expertise in this area. The FTC countered by asserting that its data security guidelines, which include best practices and past consent decrees, put businesses on notice of what is required to meet a standard of reasonableness.

Following oral argument, Judge Salas denied Wyndham’s motion to stay discovery proceedings, but did not immediately address Wyndham’s other pending motions to dismiss. On December 27, Judge Salas ordered the parties to submit a supplemental, joint letter brief to the Court, addressing the two outstanding motions to dismiss. The parties filed their joint letter brief on January 21.

In the brief, Wyndham once again argued that the FTC lacks the statutory authority to regulate data security practices for every American company. Wyndham pointed out that Congress has limited the FTC’s data security power to only certain, well-defined areas, citing the Fair Credit Reporting Act (FCRA), Gramm-Leach-Bliley Act (GLBA), and the Children’s Online Privacy Protection Act (COPPA) as evidence of these boundaries. Wyndham dismissed the FTC’s argument that FCRA, GLBA, and COPPA merely supplement the FTC’s existing data security authority as “revisionist history.”

In addition, Wyndham rejected as “a far-fetched reconstruction” of Congressional intent the theory, first raised by the FTC at oral argument, that Congress “understood Section 5 to provide the FTC with general police power over data-security matters” but enacted FCRA, GLBA, and COPPA “for the limited purpose of freeing the Commission from the need to prove substantial consumer injury in specific contexts.” Wyndham cited the context and legislative history of the FCRA, GLBA, and COPPA as further proof that the FTC is exceeding its authority, stating that Congress enacted these statutes “precisely because it believed that data security was not covered by existing statutory provisions, including Section 5 of the FTC Act.” (emphasis in original).

Finally, Wyndham reasserted that, even if the FTC is correct in its understanding of the statutes, it has not provided businesses the fair notice required by the Due Process Clause. Wyndham pointed out that the “FTC has not published any rules, regulations, or guidelines explaining to businesses what data-security protections they must employ to comply with the FTC’s interpretation of Section 5 of the FTC Act.” This has been a growing concern among U.S. businesses, which face a daily struggle against data breaches and other information security incidents and are unsure of what “reasonable data security practices” might mean.

In its section of the brief, the FTC responded by asserting that “Section 5 of the FTC Act applies by its terms to all unfair commercial practices,” and “is not susceptible to a ‘data security’ exception.” The FTC highlighted the recent LabMD and Verizon decisions as supporting its argument for statutory authority.

The FTC also reiterated its position that the FCRA, GLBA, and COPPA permit the FTC to enforce these statutes using “additional enforcement tools,” which differentiate them from the FTC Act. Further, the FTC argued that where the FTC Act “merely authorizes,” the FCRA, GLBA, and COPPA “affirmatively compel the FTC to use its authority in particular ways” in certain contexts, which does not “divest the [FTC] of its preexisting and much broader authority to protect consumers against ‘unfair’ practices.”

Finally, the FTC pointed out that, while the court did not request additional briefing on the due process question, it felt obliged to respond to Wyndham’s claim on this point. The FTC asserted that to follow Wyndham’s argument would “undermine 100 years of FTC precedent” and would “crash headlong” into Supreme Court precedent regarding Section 5 of the FTC Act.

Of note, the FTC has also been making similar arguments before Congress, where it has expressed its support for new data security legislation. In hearings held before the House Energy and Commerce Committee, the FTC emphasized its ongoing efforts to promote data security through civil law enforcement, education, and policy initiatives.

The court accepted the parties’ joint brief and submissions of supplemental authority on January 23, and granted Wyndham’s request to submit a five-page letter brief responding to the substantive issues raised by the FTC’s inclusion of the LabMD and Verizon decisions. Wyndham filed this brief on January 29, citing multiple negative responses to these decisions in the press as evidence of a breach of the “fundamental principles of fair notice” that “imposes substantial costs on business.”

Judge Salas has not yet ruled on these arguments, but we will certainly be keeping a very close eye on these proceedings and their implications for FTC regulation of data security standards.



Tributes and a Call to Action – Remembering Aaron Swartz

A year ago, on January 11, 2013, 26-year-old internet activist Aaron Swartz committed suicide while facing up to 35 years in prison and up to $1 million in fines. Charges against him included violations of the Computer Fraud and Abuse Act (CFAA) as a result of “unauthorized access” for downloading millions of academic articles while on MIT’s network. On this first anniversary of his death, a reinvigorated call to action is taking place. The Electronic Frontier Foundation (EFF) has launched a “Remembering Aaron” campaign and is reactivating efforts to reform the CFAA, activists are invoking his name for an upcoming day of action against NSA surveillance, “The Day We Fight Back,” to be held on February 11, lawmakers are demanding answers from the Justice Department about its treatment of Swartz, and a host of articles and other tributes are appearing across the internet.

The EFF’s Remembering Aaron campaign includes a tribute to Swartz’s legacy and kicks off a month of action against censorship and surveillance and in support of open access. The EFF is reinvigorating efforts to reform the CFAA, encouraging supporters to send a letter to their legislative representatives that criticizes the law for its “vague language” and “heavy-handed penalties,” and for its disregard of whether an act was done to further the public good. The letter calls for “three critical fixes: first, terms of service violations must not be considered crimes. Second, if a user is allowed to access information, it should not be a crime to access that data in a new or innovative way — which means commonplace computing techniques that protect privacy or help test security cannot be illegal. And finally, penalties must be made proportionate to offenses: minor violations should be met with minor penalties.”

In addition to calls to change the CFAA, activists are also calling for a protest against laws and systems that enable government surveillance to run unchecked. Specifically, a mass movement against government surveillance is being organized by a heavy-hitting group of organizations including the EFF; the organization Swartz co-founded, Demand Progress; Fight for the Future; Reddit; and Mozilla. Organized for February 11, 2014, “The Day We Fight Back Against Mass Surveillance” invokes Swartz’s legacy in its call for a day of mass protest against government surveillance: “If Aaron were alive, he’d be on the front lines, fighting against a world in which governments observe, collect, and analyze our every digital action.” In a show of support for the planned protest, on the day before the first anniversary of Swartz’s death, Anonymous defaced MIT’s SSL-enabled Cogeneration Project page, displaying a page that called viewers to “Remember the day we fight back.”

In addition to the reinvigorated fight against laws that are abusive or easily abused, the key players in Swartz’s prosecution, the DOJ and MIT, are also coming under scrutiny. On Friday, January 10, a bipartisan group of eight lawmakers – Sens. John Cornyn, R-Texas; Ron Wyden, D-Ore.; Jeff Flake, R-Ariz.; and Reps. Darrell Issa, R-Calif.; James Sensenbrenner, R-Wis.; Alan Grayson, D-Fla.; Zoe Lofgren, D-Calif.; and Jared Polis, D-Colo. – sent Attorney General Eric Holder a letter calling out inconsistencies between the DOJ’s and MIT’s reports and the DOJ’s lack of forthrightness and transparency. Additionally, the letter issues this demand: “In March, you testified that Mr. Swartz’s case was ‘a good use of prosecutorial discretion.’  We respectfully disagree. We hope your response to this letter is fulsome, which would help re-build confidence about the willingness of the Department to examine itself where prosecutorial conduct is concerned.” In Boston Magazine’s Losing Aaron, Bob Swartz, Aaron’s father, voices his deep disappointment in MIT and articulates specific ways in which he believes the institution was complicit in the DOJ’s draconian prosecution that contributed to Aaron’s suicide.

Additional tributes to Swartz this month include a documentary by Brian Knappenberger, The Internet’s Own Boy: The Story of Aaron Swartz, which will play at the Sundance Film Festival beginning this week. In Wired Magazine’s article, One Year Later, Web Legends Honor Aaron Swartz, author Angela Watercutter notes that “Swartz’s fight for rights online has only been brought more intensely into focus in the year since his death, largely due to NSA whistleblower Edward Snowden. To see him talk about government spying in [Knappenberger’s] documentary at a time before the Snowden leaks is especially chilling now.” Further, in Knappenberger’s forthcoming documentary, web visionaries, including founders of the World Wide Web and Creative Commons, speak of Swartz’s work and legacy:

“I think Aaron was trying to make the world work – he was trying to fix it…  he was a bit ahead of his time.” – Tim Berners-Lee

“He was just doing what he thought was right to produce a world that was better.” – Lawrence Lessig

 



Before Liftoff, Drones Must Maneuver Through Privacy Laws

Unmanned aerial vehicles, better known as drones, are expected to revolutionize the way companies deliver packages to their customers.  Some also imagine these small aircraft delivering pizzas to a customer’s home or nachos to a fan at a ballgame.  Researchers are even investigating the possibility of using drones to assist farmers with monitoring their crops.  Before drone technology takes flight, however, it will have to maneuver through privacy laws.

The Federal Aviation Administration (FAA) is the agency charged with developing rules, including privacy rules, for private individuals and companies to operate drones in national airspace.  While the precise breadth of FAA rules is not entirely clear, a framework is beginning to develop.  When the FAA recently announced test sites for drones, it also noted that test site operators must: (1) comply with existing federal and state privacy laws, (2) have publicly available privacy policies and a written plan for data use and retention, and (3) conduct a review of privacy practices that allows for public comment.  When soliciting public comment on these test-site privacy rules, the FAA received a wide spectrum of feedback, ranging from suggestions that the agency must articulate precise elements of what constitutes a privacy violation to assertions that the agency is not equipped (and therefore should not attempt) to regulate privacy at all.  It appears that the FAA settled on a middle ground of requiring drones to comply with existing privacy law, which is largely a matter of individual state regulation.

Accordingly, state privacy laws are likely to be the critical privacy hurdle to commercial drone use.  It appears that only four states have thus far expressly addressed the use of private drones (as distinguished from drones used by public agencies, such as law enforcement).  Idaho and Texas generally prohibit civilians from using a drone to take photographs of private property.  They also restrict photography of any individual – even in public view – by such a drone.  Oregon prohibits drones from flying less than 400 feet above a person’s property once that person has requested that they not do so.  The fourth state, Illinois, restricts the use of drones that interfere with hunting and fishing activities.

As for the other states, they may simply be getting up to speed on the technology.  On the other hand, many of these states have considered or enacted laws restricting the use of drones by the police.  Because these laws are silent on the use of private drones, one could argue that these states intentionally chose not to regulate private drones (and, accordingly, that existing laws regarding the use of aircraft or other public cameras govern the use of private drones).

Even where a state has passed a drone-related privacy law, that law may very well be challenged on constitutional or other grounds.  For instance – to the extent they prohibit photography of public areas or of objects and people in plain view – the Idaho and Texas laws may raise First Amendment questions.  As described in Hurley v. Irish-American, a photographer generally receives First Amendment protection when taking public photos if he or she “possessed a message to be communicated” and “an audience to receive that message, regardless of the medium in which the message is to be expressed.”  Under this test, in Porat v. Lincoln Towers Community Association, a photo hobbyist taking pictures for aesthetic and recreational purposes was denied First Amendment protection.  In contrast, in Pomykacz v. Borough of West Wildwood, a “citizen activist” – whose pictures were taken out of concern about an affair between a town’s mayor and a police officer – was found to have First Amendment protection.  To be sure, however, the Supreme Court has acknowledged that “even in a public forum the government may impose reasonable restrictions on the time, place, or manner of protected speech, provided the restrictions are justified without reference to the content of the regulated speech, that they are narrowly tailored to serve a significant governmental interest, and that they leave open ample alternative channels for communication of the information.”  For example, under this premise, some courts have upheld restrictions on public access to crime and accident scenes.  All told, we may see drone users assert First Amendment protection for photographs taken of public areas.

Another future legal challenge may involve the question of who owns the airspace above private property.  In United States v. Causby, the Supreme Court appeared to reject the idea of private ownership of airspace.  More specifically, it held that government aircraft flying over private land do not amount to a government “taking,” or seizure of private property, unless the flights are so low and frequent that they constitute an immediate interference with the enjoyment of the land.  In other words, under Causby, the landowner owns the airspace necessary to use and enjoy the land.  But the Court declined to draw a specific line.  At the moment, it is unclear whether Oregon’s law – restricting drones within 400 feet of a home – is consistent with this principle.

Lastly, we may see a legal challenge asserting that certain state privacy laws (such as the Idaho or Texas law, or others that disallow drone use altogether) are preempted, or trumped, by federal law.  Congress’s intent to impliedly preempt state law may be inferred (1) from a pervasive scheme of federal regulation that leaves no room for the states to supplement, or (2) where Congress’s actions touch a field in which the federal interest is so dominant that the federal system will be assumed to preclude enforcement of state laws on that subject.  Applied here, one could argue that Congress has entrusted the FAA with sole authority for creating a scheme to regulate the narrow field of national airspace, and drones in particular.  Additionally, the argument goes, the federal government has a dominant interest in regulating national airspace, as demonstrated by the creation of the FAA and numerous other aircraft regulations.  Under the preemption line of reasoning, state privacy laws may be better focused on regulating data gathered by the drone rather than the space where the drone may fly or the actions the drone may take while in that space (e.g., taking pictures).

All told, before official drone liftoff, companies employing drones will have to wait for final FAA rules on privacy.  Whether those final rules will track the test site rules discussed above is not certain; they will likely depend on the public comments received regarding the drone test sites.  Assuming the final rules track the test site rules, companies using commercial drones should focus on compliance with the various state privacy laws.  But, as noted above, we may see constitutional challenges to those laws along the way.  Stay tuned.



What’s More Challenging? Establishing Privacy Class Action Standing, or Climbing Mount Kilimanjaro?

Two opinions recently issued from the Northern District of California have important implications for parties litigating privacy class actions. Both opinions highlight the evolving jurisprudence around establishing standing in consumer privacy lawsuits.

In re Apple iPhone Application Litigation

On November 25, 2013, Judge Lucy Koh granted Apple’s motion for summary judgment on all of plaintiffs’ claims in In re Apple iPhone Application Litigation, 11-MD-02250-LHK (N.D. Cal. Nov. 25, 2013). Plaintiffs alleged that Apple violated its Privacy Policy by allowing third parties to access iPhone users’ personal information. Based on those alleged misrepresentations, plaintiffs claimed they overpaid for their iPhones and that their iPhones’ performance suffered. Plaintiffs also alleged that Apple violated its Software License Agreement (“SLA”) when it falsely represented that customers could prevent Apple from collecting geolocation information by turning off the iPhone’s Location Services setting. Plaintiffs alleged that, contrary to this representation, Apple continued to collect certain geolocation information from iPhone users even if those users had turned the Location Services setting off. Based on the alleged SLA misrepresentations, plaintiffs claimed they overpaid for their iPhones and suffered reduced iPhone performance. Plaintiffs argued that Apple’s alleged conduct violated California’s unfair competition law (“UCL”) and the Consumer Legal Remedies Act (“CLRA”).

Judge Koh disagreed, finding that plaintiffs failed to create a genuine issue of material fact concerning their standing under Article III, the UCL, and the CLRA. Judge Koh held that plaintiffs presented enough evidence of injury—namely, that plaintiffs purportedly overpaid for their iPhones and suffered reduced iPhone performance. However, Judge Koh held that plaintiffs could not establish that such injury was causally linked to Apple’s alleged misrepresentations. Judge Koh ruled that actual reliance was essential for standing: plaintiffs must have (1) seen the misrepresentations and (2) acted on them. Judge Koh noted that none of the plaintiffs had even seen the alleged misrepresentations prior to purchasing their iPhones, or at any time thereafter. Because none of the plaintiffs had seen the misrepresentations, they could not have relied upon them. Without reliance, Judge Koh held, plaintiffs’ claims could not survive.

In re Google, Inc. Privacy Policy Litigation

On December 3, 2013, Judge Paul Grewal granted Google’s motion to dismiss in In re Google, Inc. Privacy Policy Litigation, Case No. C-12-01382-PSG (N.D. Cal. Dec. 3, 2013), but not for lack of standing. The claims stemmed from Google’s change in its privacy policies. Before March 1, 2012, Google maintained separate privacy policies for each of its products, and those policies purportedly stated that Google would only use a user’s personally identifiable information (“PII”) for that particular product. Google then introduced a new privacy policy informing consumers that it would commingle data between products. Plaintiffs contended that the new privacy policy violated Google’s prior privacy policies. Plaintiffs also alleged that Google shared PII with third parties to allow those third parties to develop apps for Google Play.

In assessing standing, Judge Grewal noted that “injury-in-fact has proven to be a significant barrier to entry,” and that establishing standing in the Northern District of California is akin to climbing Mount Kilimanjaro. Notwithstanding the high burden, Judge Grewal found that plaintiffs adequately alleged standing.

Plaintiffs alleged standing based on (1) commingling of personally identifiable information; (2) direct economic injury; and (3) statutory violations. With respect to the commingling argument, plaintiffs contended that Google never compensated plaintiffs for the value associated with commingling PII amongst different Google products. Judge Grewal rejected this argument, noting that a plaintiff may not establish standing by pointing to a defendant’s profit; rather, plaintiff must actually suffer damages as a result of defendant’s conduct.

With respect to plaintiffs’ allegations of direct economic injury, Judge Grewal held that those allegations sufficed to confer standing. Plaintiffs argued they suffered direct economic injuries because of reduced performance of Android devices (plaintiffs had to pay for the battery power used by Google to send data to third parties). Plaintiffs also argued that they overpaid for their phones and had to buy different phones because of Google’s practices. These allegations sufficed to establish injury. Based on Judge Koh’s opinion in Apple, one key issue in the Google case will likely be whether any of the plaintiffs actually read and relied upon Google’s privacy policies.

Finally, Judge Grewal found that standing could be premised on the alleged violation of statutory rights. This ruling is consistent with the trend in other federal courts. Though Judge Grewal ultimately dismissed the complaint for failure to state a claim, the opinion’s discussion of standing will be informative to both the plaintiff and defense bars in privacy litigation.

The Apple and Google lawsuits represent a fraction of the many lawsuits seeking to recover damages and/or injunctive relief for the improper collection and/or use of consumer information. Establishing standing remains a difficult hurdle for plaintiffs in consumer privacy lawsuits, though courts are increasingly accepting standing arguments based on statutory violations and allegations of economic injuries. The Apple decision is on appeal, so we will see if the Ninth Circuit sheds further light on issues of standing in privacy lawsuits.



GAO Data Broker Report Calls for Comprehensive Privacy Law

On November 15, 2013, the U.S. Government Accountability Office (GAO) released a report on the statutory protections available to consumers with respect to data brokers’ use of their data for marketing purposes.

The GAO report canvasses the existing federal consumer legal protections applicable to information resellers and finds them wanting with regard to the use of the data for marketing purposes.  Specifically, the GAO concluded the following:

– “[I]nformation about an individual’s physical and mental health, income and assets, mobile telephone numbers, shopping habits, personal interests, political affiliations, and sexual habits and orientation,” can legally be collected, shared, and used for marketing purposes.  The report notes limits on HIPAA’s applicability to health-related marketing lists used by e-health websites.

– Although some industry participants have stated that current privacy laws are adequate – particularly in light of self-regulatory measures – there are gaps in the current statutory privacy framework that do not fully address “changes in technology and marketplace practices that fundamentally have altered the nature and extent to which personal information is being shared with third parties.”

– Current law is often out of step with the fair information practice principles.

According to the GAO, Congress should therefore consider strengthening the current consumer privacy framework in relation to consumer data used for marketing while not unduly inhibiting the benefits to industry and consumers from data sharing.  In doing so, Congress should consider:

– the adequacy of consumers’ ability to access, correct, and control their personal information in circumstances beyond those currently accorded under FCRA;

– whether there should be additional controls on the types of personal or sensitive information that may be collected and shared;

– changes needed, if any, in the permitted sources and methods for data collection; and

– privacy controls related to new technologies, such as web tracking and mobile devices.

GAO Report at 19, 46-47.

The GAO Report is the most recent expression of support for comprehensive privacy legislation from within the federal government.  In this regard, the report echoes the Obama Administration’s 2012 Privacy Blueprint and the FTC’s 2012 Privacy Report, both of which called for baseline privacy legislation.  The FTC Privacy Report also reiterated the agency’s support for a privacy law targeted to data brokers.  The GAO Report, by contrast, implies that a general privacy law could suffice to address the issues raised by data brokers.