The Secure Times

An online forum of the ABA Section of Antitrust Law's Privacy and Information Security Committee



Another Court Limits What Constitutes an ATDS Under the TCPA: Is it Safe to Say a Toaster is Not an Autodialer?

The TCPA remains a hotbed of class action litigation, with new cases being filed across the country. The viability of cases frequently turns on whether defendant used an automatic telephone dialing system (“ATDS” or “autodialer”). Given the broad definition provided by the FCC, many courts have held that any device that has the mere capacity to autodial falls within the definition of an ATDS. As one well-known company expressed in an amicus brief on the issue, such a broad interpretation of ATDS would encompass any device, including a toaster oven.

Last week, a Northern District of California court acknowledged this absurdity when it granted summary judgment for GroupMe, Inc. (“GroupMe”) on a TCPA claim, finding there was no triable issue of fact as to whether GroupMe used an autodialer. Glauser v. GroupMe, Inc., 2015 WL 475111 (N.D. Cal. Feb. 4, 2015). The case involved GroupMe’s group messaging application, which lets users create groups whose members can all send text messages to one another. One user created a “Poker” group and added plaintiff to that group. Plaintiff received two text messages welcoming him to the group and explaining how he could opt out. Plaintiff did not respond, but received more text messages in which group members discussed their availability for a poker game. Because plaintiff had not responded, GroupMe sent him a text message notifying him that he would be removed from the group unless he replied. Plaintiff indeed responded “In,” which reinstated him as a member of the group.

Despite joining the “Poker” group, plaintiff cried foul and sued under the TCPA. GroupMe moved for summary judgment on the issue of whether it used an autodialer. The court made three findings in ruling in favor of GroupMe. First, the district court agreed that whether equipment has the “capacity” to autodial depends on the device’s present capacity, not its potential capacity, noting that to accept a “potential capacity” argument would impermissibly allow the TCPA to capture common devices, such as smartphones. Second, the court held that autodialers include not just dialers that can generate numbers randomly or sequentially, but also predictive dialers. Finally, the court ruled that an autodialer must have the capacity to dial numbers without human intervention. Because all of GroupMe’s text messages were triggered by the original GroupMe user’s creation of the “Poker” group, human intervention was necessary, and GroupMe did not use an autodialer. Absent an autodialer, plaintiff’s case fell on summary judgment.

The GroupMe case demonstrates that TCPA actions may be brought for nearly any conduct involving text messages. Despite affirmatively joining in the “Poker” group and reaping the benefits of the GroupMe service, plaintiff still sued. The ruling also demonstrates that more courts are inclined to scale back what constitutes an ATDS under the TCPA. The FCC may weigh in on the issue, as petitions remain pending. Until then, expect more uncertainty on the scope of an ATDS under the TCPA.



FTC Chairwoman Edith Ramirez Comments on Data Security for the Internet of Things

Happy New Year! For many, the holidays included exciting new gadgets. Whether it’s a new fitness tracker, a smart thermostat, or a smart glucose meter, these new connected devices have arrived, and new products are on the horizon. These products, termed the “Internet of Things” by privacy professionals, are broadly defined as products that can connect to a network.

On January 6, 2015, FTC Chairwoman Edith Ramirez delivered the opening remarks at the International Consumer Electronics Show, during which she spoke on security issues surrounding the Internet of Things (“IoT”). Chairwoman Ramirez discussed what she viewed as three key risks to consumer privacy, along with suggested industry solutions to mitigate those risks.

IoT Risks to Consumer Privacy
The first privacy and security risk of connected devices Chairwoman Ramirez identified was that connected devices engage in “ubiquitous data collection.” Because these devices can potentially collect personal information, including our habits, location, and physical condition, the data can lead to rich profiles of consumer preferences and behavior.

The second risk Chairwoman Ramirez identified was the possible unexpected use of consumer data acquired through connected devices. As an example, she asked whether data from a smart TV’s tracking of consumer television habits could be combined with other data to enable businesses to engage in targeted advertising or even exacerbate socio-economic disparities.

The third risk she identified was that connected devices can be hijacked, leading to misuse of personal information.

Suggested Industry Solutions
To combat the risks identified above, Chairwoman Ramirez suggested three solutions for the IoT industry. First, IoT companies should engage in “Security by Design,” namely that IoT products should be built initially with a priority on security, and that IoT companies should implement technical and administrative measures to ensure reasonable security. Chairwoman Ramirez identified five aspects of Security by Design:

  • conduct a privacy or security risk assessment as part of the design process;
  • test security measures before products launch;
  • use smart defaults—such as requiring consumers to change default passwords in the set-up process;
  • consider encryption, particularly for the storage and transmission of sensitive information, such as health data; and
  • monitor products throughout their life cycle and, to the extent possible, patch known vulnerabilities.

Second, Chairwoman Ramirez suggested that companies that collect personal information should engage in data minimization, viz. that they should collect only the data needed for a specific purpose and then safely destroy that data afterwards. Chairwoman Ramirez also urged companies to de-identify consumer data where possible.

Finally, Chairwoman Ramirez suggested that IoT companies provide notice and choice to consumers for unexpected collection or uses of their data. As an example, Chairwoman Ramirez stated that if IoT companies are sharing data from a smart thermostat or fitness band with data brokers or marketing firms, those companies should provide consumers with a “simple notice of the proposed uses of their data and a way to consent.”

Although not official FTC statements, these remarks by Chairwoman Ramirez provide valuable insight into how the Federal Trade Commission may regulate connected devices in the future. Companies in the IoT space should monitor further developments closely and review their data collection, security, and sharing practices accordingly.



Google Avoids Class Certification in Gmail Litigation

On March 18, 2014, Judge Koh in the Northern District of California denied Plaintiffs’ Motion for Class Certification in the In re: Google Inc. Gmail Litigation matter, Case No. 13-MD-02430-LHK. The case involved allegations of unlawful wiretapping in Google’s operation of its Gmail email service. Plaintiffs alleged that, without obtaining proper consent, Google unlawfully read the content of emails, extracted concepts from the emails, and used metadata from emails to create secret user profiles.

Among other things, obtaining class certification requires a plaintiff to demonstrate that class issues will predominate over individual issues. In this case, Judge Koh’s opinion focused almost exclusively on the issue of predominance. The Court noted that the predominance inquiry “tests whether proposed classes are sufficiently cohesive to warrant adjudication by representation.” Opinion (“Op.”) at 23 (citations omitted). The Court further emphasized that the predominance inquiry “is a holistic one, in which the Court considers whether overall, considering the issues to be litigated, common issues will predominate.” Op. at 24.

The Court in the Gmail litigation noted how the existence of consent is a common defense to all of Plaintiffs’ claims. Consent can either be express, or it can be implied “based on whether the surrounding circumstances demonstrate that the party whose communications were intercepted knew of such interceptions.” Op. at 26. The decision explained how common issues would not predominate with respect to a determination of whether any particular class member consented to Google’s alleged conduct.

The Court briefly addressed whether the issue of express consent could be practically litigated on a class-wide basis, but the opinion focused largely on the issue of implied consent. The Court noted that implied consent “is an intensely factual question that requires consideration of the circumstances surrounding the interception to divine whether the party whose communication was intercepted was on notice that the communication would be intercepted.” Op. at 30. Google contended that implied consent would require individual inquiries into what each person knew. Google pointed to a plethora of information surrounding the scanning of Gmail emails including:  (1) Google’s Terms of Service; (2) Google’s multiple Privacy Policies; (3) Google’s product-specific Privacy Policies; (4) Google’s Help pages; (5) Google’s webpages on targeted advertising; (6) disclosures in the Gmail interface; (7) media reporting of Gmail’s launch and how Google “scans” email messages; (8) media reports regarding Google’s advertising system; and (9) media reports of litigation concerning Gmail email scanning. The Court thus agreed with Google that there was a “panoply of sources from which email users could have learned of Google’s interceptions other than Google’s TOS and Privacy Policies.” Op. at 33. With all these different means by which a user could have learned of the scanning practices (and provided implied consent to the practice) the issue of consent would overwhelmingly require individualized inquiries and thus precluded class certification.

This opinion demonstrates a key defense to class action claims where implied consent is at issue. Any class action defendant’s assessment of risk should include an early calculation of the likelihood of class certification, and that calculation should inform litigation strategy throughout the case. Google consistently litigated the matter to highlight class certification difficulties surrounding consent, and ultimately obtained a significant victory in defeating class certification.



What’s More Challenging? Establishing Privacy Class Action Standing, or Climbing Mount Kilimanjaro?

Two opinions recently issued from the Northern District of California have important implications for parties litigating privacy class actions. Both opinions highlight the evolving jurisprudence on establishing standing in consumer privacy lawsuits.

In re Apple iPhone Application Litigation

On November 25, 2013, Judge Lucy Koh granted Apple’s motion for summary judgment on all of plaintiffs’ claims in In re Apple iPhone Application Litigation, 11-MD-02250-LHK (N.D. Cal. Nov. 25, 2013). Plaintiffs alleged that Apple violated its Privacy Policy by allowing third parties to access iPhone users’ personal information. Based on those misrepresentations, plaintiffs claimed they overpaid for their iPhones, and that their iPhones’ performance suffered. Plaintiffs also alleged that Apple violated its Software License Agreement (“SLA”) when it falsely represented that customers could prevent Apple from collecting geolocation information by turning off the iPhone’s Location Services setting. Plaintiffs alleged that, contrary to this representation, Apple continued to collect certain geolocation information from iPhone users even if those users had turned the Location Services setting off. Based on the SLA misrepresentations, plaintiffs alleged they overpaid for their iPhones and suffered reduced iPhone performance. Plaintiffs argued that Apple’s alleged conduct constituted a violation of California’s unfair competition law (“UCL”) and the Consumer Legal Remedies Act (“CLRA”).

Judge Koh disagreed, finding that plaintiffs failed to create a genuine issue of material fact concerning their standing under Article III, the UCL, and the CLRA. Judge Koh held that plaintiffs presented enough evidence of injury: that plaintiffs purportedly overpaid for their iPhones and suffered reduced iPhone performance. However, Judge Koh held that plaintiffs could not establish that such injury was causally linked to Apple’s alleged misrepresentations. Judge Koh ruled that actual reliance was essential for standing. Accordingly, plaintiffs must have (1) seen the misrepresentations and (2) acted on those misrepresentations. Judge Koh noted that none of the plaintiffs had even seen the alleged misrepresentations prior to purchasing their iPhones, or at any time thereafter. Because none of the plaintiffs had seen the misrepresentations, they could not have relied upon them. Without reliance, Judge Koh held, plaintiffs’ claims could not survive.

In re Google, Inc. Privacy Policy Litigation

On December 3, 2013, Judge Paul Grewal granted Google’s motion to dismiss in In re Google, Inc. Privacy Policy Litigation, Case No. C-12-01382-PSG (N.D. Cal. Dec. 3, 2013), but not based on lack of standing. The claims stemmed from Google’s change in its privacy policies. Before March 1, 2012, Google maintained separate privacy policies for each of its products, and those policies purportedly stated that Google would only use a user’s personally-identifying information for that particular product. Google then introduced a new privacy policy informing consumers that it would commingle data between products. Plaintiffs contend that the new privacy policy violated Google’s prior privacy policies. Plaintiffs also alleged that Google shared PII with third parties to allow third parties to develop apps for Google Play.

In assessing standing, Judge Grewal noted that “injury-in-fact has proven to be a significant barrier to entry,” and that establishing standing in the Northern District of California is akin to climbing Mount Kilimanjaro. Notwithstanding the high burden, Judge Grewal found that plaintiffs adequately alleged standing.

Plaintiffs alleged standing based on (1) commingling of personally identifiable information; (2) direct economic injury; and (3) statutory violations. With respect to the commingling argument, plaintiffs contended that Google never compensated plaintiffs for the value associated with commingling PII amongst different Google products. Judge Grewal rejected this argument, noting that a plaintiff may not establish standing by pointing to a defendant’s profit; rather, plaintiff must actually suffer damages as a result of defendant’s conduct.

With respect to plaintiffs’ allegations of direct economic injury, Judge Grewal held that those allegations sufficed to confer standing. Plaintiffs argued they suffered direct economic injuries because of reduced performance of Android devices (plaintiffs had to pay for the battery power used by Google to send data to third parties). Plaintiffs also argued that they overpaid for their phones and had to buy different phones because of Google’s practices. These allegations sufficed to establish injury. Based on Judge Koh’s opinion in Apple, one key issue in the Google case will likely be whether any of the plaintiffs actually read and relied upon Google’s privacy policies.

Finally, Judge Grewal found that standing could be premised on the alleged violation of statutory rights. This ruling is consistent with the trend in other federal courts. Though Judge Grewal ultimately dismissed the complaint for failure to state a claim, the opinion’s discussion of standing will be informative to both the plaintiff and defense bars in privacy litigation.

The Apple and Google lawsuits represent a fraction of the many lawsuits seeking to recover damages and/or injunctive relief for the improper collection and/or use of consumer information. Establishing standing remains a difficult hurdle for plaintiffs in consumer privacy lawsuits, though courts are increasingly accepting standing arguments based on statutory violations and allegations of economic injuries. The Apple decision is on appeal, so we will see if the Ninth Circuit sheds further light on issues of standing in privacy lawsuits.



Amendments to CalOPPA Allow Minors to “Erase” Information from the Internet and Also Restrict Advertising Practices to Minors

On September 23, 2013, California Governor Jerry Brown signed SB568 into law, which adds new provisions to the California Online Privacy Protection Act. Officially called “Privacy Rights for California Minors in the Digital World,” the bill has already garnered the nickname of the “Internet Eraser Law,” because it affords California minors the ability to remove content or information previously posted on a Web site. The bill also imposes restrictions on advertising to California minors.

California Minors’ Right to Remove Online Content

Effective January 1, 2015, the bill requires online operators to provide a means by which California minors may remove online information posted by that minor. Online operators can elect to allow a minor to directly remove such information or can alternatively remove such information at a minor’s request. The bill further requires that online operators notify California minors of the right to remove previously-posted information.

Online operators do not need to allow removal of information in certain circumstances, including where (1) the content or information was posted by a third party; (2) state or federal law requires the operator or third party to retain such content or information; or (3) the operator anonymizes the content or information. The bill further clarifies that online operators need only remove the information from public view; the bill does not require wholesale deletion of the information from the online operator’s servers.

New Restrictions on Advertising to California Minors

Also effective January 1, 2015, the bill places new restrictions on advertising to California minors. The bill prohibits online services directed to minors from advertising certain products, including alcohol, firearms, tobacco, and tanning services. It further prohibits online operators from allowing third parties (e.g. advertising networks or plug-ins) to advertise certain products to minors. And where an advertising service is notified that a particular site is directed to minors, the bill restricts the types of products that can be advertised by that advertising service to minors.

Implications

Given the sheer number of California minors, these amendments to CalOPPA will likely have vast implications for online service providers. First, the bill extends not just to Web sites, but also to mobile apps, which is consistent with a general trend of governmental scrutiny of mobile apps. Online service providers should expect regulation of mobile apps to increase, as both California and the Federal Trade Commission have issued publications indicating concerns over mobile app privacy. Second, the bill also reflects an increased focus on privacy of children and minors. Developers should consider these privacy issues when designing Web sites and mobile apps, and design such products with the flexibility needed to adapt to changing legislation. Thus, any business involved in the online space should carefully review these amendments and ensure compliance before the January 1, 2015 deadline.