The Secure Times

An online forum of the ABA Section of Antitrust Law's Privacy and Information Security Committee



More International Reactions to the US Massive Intelligence Collection

The revelation of the large-scale US intelligence collection program continues to bring about international reactions.  

United Nations

The Third Committee of the United Nations (Social, Humanitarian and Cultural) approved on November 26 a draft resolution on “The right to privacy in the digital age.” The draft will now advance to General Assembly voting.

The General Assembly would call upon Member States to respect the right to privacy and to take measures to prevent its violation. Member States would also have to “review their procedures, practices and legislation regarding the surveillance of communications, their interception and collection of personal data, including mass surveillance, interception and collection, with a view to upholding the right to privacy by ensuring the full and effective implementation of all their obligations under international human rights law.”

The General Assembly would also request the United Nations High Commissioner for Human Rights to write a report on the right to privacy in the context of domestic and extraterritorial surveillance and/or interception of digital communications and collection of personal data, including on a mass scale.

European Union Commission

In the European Union, the EU Commission today published a communication on “Rebuilding Trust in EU-US Data Flows.” The Commission noted that “the standard of protection of personal data must be addressed in its proper context, without affecting other dimensions of EU-US relations.”

This is why data protection standards will not be negotiated within the Transatlantic Trade and Investment Partnership (TTIP).

The Commission noted in the introduction that trust in the EU-US relationship “has been negatively affected and needs to be restored,” and that “[m]ass surveillance of private communication, be it of citizens, enterprises or political leaders, is unacceptable.”

The communication identified six steps which should be taken to restore trust in transatlantic data transfers:

Implement the EU Data Protection Reform

The proposed regulation has a wide territorial scope since companies not established in the EU would have to apply it if they offer goods and services to European consumers or monitor their behavior.

The regulation would also provide “clear rules on the obligations and liabilities of data processors such as cloud providers.” Surveillance programs affect data stored in the cloud, and companies providing cloud services that are asked to provide personal data to foreign authorities would not be able “to escape their responsibility” by arguing that they are mere data processors, not data controllers.

Making the Safe Harbor Safe

The Safe Harbor scheme has several weaknesses, which lead to competitive disadvantages. For instance, some self-certified Safe Harbor members do not comply with its principles in practice. Also, some EU countries may decide to cease data transfers on the basis of Safe Harbor altogether.

Therefore, “the current implementation of Safe Harbor cannot be maintained.” However, it should be strengthened, not canceled.

The scheme would be more effective if certified companies had more transparent privacy policies and if affordable dispute resolution mechanisms were available to EU citizens.

Strengthening Data Protection Safeguards in the Law Enforcement Area

The current negotiations between the US and the EU on an “umbrella agreement” for transfers and processing of data in the context of police and judicial cooperation must be concluded quickly.

Using the existing Mutual Legal Assistance and Sectoral Agreements to Obtain Data

The Commission expressed hope that the US would commit that personal data held by private companies located in the EU will not be directly accessed and transferred to US law enforcement authorities outside of formal channels of cooperation, such as the Passenger Name Records Agreement and Terrorist Financing Tracking Program.

Addressing European Concerns in the On-Going U.S. Reform Process

President Obama has announced a review of U.S. national security authorities’ activities. This process should also benefit EU citizens by providing an opportunity to address the EU concerns about US intelligence collection programs.

Safeguards available to US citizens should be extended to EU citizens not resident in the US, transparency should be increased, and oversight should be strengthened.

Promoting Privacy Standards Internationally

The U.S. should accede to the Council of Europe’s Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (“Convention 108”), which is open to countries that are not members of the Council of Europe. The US has already acceded to the 2001 Convention on Cybercrime.

The press release about this communication is available here.



ACS Panel: Is It Possible for the Constitution to Keep Pace with New Technologies?

On November 21, the American Constitution Society (ACS) presented a panel on “The Constitution and Privacy in a Time of Innovation.” The participants were Stephen Vladeck of American University, Chris Calabrese of the ACLU, James Grimmelmann of the University of Maryland, and Orin Kerr of The George Washington University; the moderator was Dahlia Lithwick of Slate. The video of the panel is here.

The ACS asked the question: is it possible for the Constitution to keep pace with new technologies? The question could not have been more topical, as the panel took place just a day after it was revealed that the government has been collecting Internet metadata for many years.

Orin Kerr started by reminding the audience that the Fourth Amendment was originally applied to breaking into homes, arresting people, and seizing property in the physical world. Over the years, however, the Fourth Amendment has also been applied to new issues.

Professor Kerr noted that “every age is an age of innovation.” In the 1920s, the issue was how the Fourth Amendment would apply to automobiles and telephones; in the 1960s, how it would apply to phone booths; and in the 1980s, how it would apply to aerial surveillance. Today, the issues are DNA, email, and GPS devices. Professor Kerr believes that we’ll see more cases similar to the 2012 U.S. v. Jones GPS case in the future.

Chris Calabrese regretted that the Supreme Court has dodged the issue of the records kept about each of us: not the content of our messages, but the metadata. This is “envelope information,” such as who we call and when. Mr. Calabrese gave the example of an individual calling a suicide hotline: this would be a very sensitive piece of information. However, the Supreme Court still follows the third-party doctrine, as stated in Smith v. Maryland: if you share information with a third party, it is no longer protected by the Fourth Amendment.

U.S. v. Jones is the first case where the Supreme Court addressed a form of metadata, location information. Mr. Calabrese stated that as “we live in a world of records,” where we constantly create records, it is thus essential to know who owns these records and how they are accessed. These are issues that both the Supreme Court and Congress must address. 

James Grimmelmann reminded the audience that some third parties holding our information now actively engage in information gathering, unlike telephone companies, which play a somewhat passive role. He gave Facebook as an example of a company actively trying to maximize information to gain advertising revenue and fuel activity on its site. To do so, it gives users incentives to share information.

Professor Grimmelmann pointed out that Facebook is aware that users are concerned about how their information is being shared with the government, which may chill their willingness to share information on Facebook. Mr. Calabrese later said that, according to a survey, the public dislikes corporate collection of data even more than government collection of data, as it perceives the government as at least giving something back.

Professor Grimmelmann also believes that surveillance by drones in public places will be an important issue, as the government will know and aggregate the whereabouts of individuals on a massive scale. Mr. Calabrese noted that we appear in public, sure, but in “an impermanent way,” and that would no longer be the case if our public presence were constantly recorded and analyzed. This is a pervasive intrusion on our private lives.

Stephen Vladeck addressed the issue of whether the Fourth Amendment is affected by NSA surveillance. He reminded the audience that FISA functioned in a space unoccupied by the Supreme Court. FISA was amended in 2001 and in 2008, giving the government authority to implement programs such as PRISM. What is the role of the Fourth Amendment in these surveillance issues?

Professor Kerr does not believe the Fourth Amendment has a role to play there, under the third party doctrine, but Professor Vladeck proposed to differentiate between records that we individually share with third parties, from the fairly new ability for the government to aggregate that data.

For Professor Vladeck, the Fourth Amendment can regulate surveillance, as only the government is capable of such massive surveillance. However, Professor Kerr does not believe that the Fourth Amendment protects metadata against searches, as it only protects against government invasion into one’s private space. Metadata searches should be regulated by statute, but this is not the role of the Fourth Amendment. Reading the Jones concurrences, however, it seems that the Supreme Court may soon introduce a new category of search, one that could possibly cover metadata.

That may happen this term, as Professor Kerr believes the Supreme Court will take up the issue of whether the police may search a cell phone incident to an arrest and, if they can, how far the search may go. That would give the Supreme Court an opportunity to once again address new technologies and the Fourth Amendment.



FTC Examines Internet of Things, Privacy, and Security in Recent Workshop

On November 19, 2013, the Federal Trade Commission held a day-long workshop, “Internet of Things: Privacy and Security in a Connected World” on the privacy implications concerning devices such as cars, home appliances, fitness equipment, and other machines that are able to gather data and connect to the internet. For consumers, these devices can help track health, remotely monitor aging family members, reduce utility bills, and even send alerts to buy more milk.

Ubiquitous Internet of Things

Technological advances and new business models centered around the internet of things have taken off. It has been reported that crowd-sourcing start-up Quirky has teamed up with GE to develop connected-home products. Another start-up company is developing tracking technology through GPS-embedded tags. On November 20, 2013, Qualcomm announced that it has developed a line of chips for the internet of things space. It has been argued that companies should adjust their business models and use the internet of things to connect to customers. These developments present the FTC with the challenge of keeping up with technology to protect consumers and the competitive landscape.

In her remarks, Chairwoman Edith Ramirez emphasized how ubiquitous smart devices have become. Five years ago, she remarked, there were already more “things” than people connected to the Internet; by 2015, an estimated twenty-five billion things will be connected to the Internet, and by 2020, an estimated fifty billion. Commissioner Maureen Ohlhausen, in her remarks later in the workshop, stated that the FTC will conduct policy research to understand the effects that technological advances and innovative business models concerning the internet of things have on consumers and the marketplace.

Privacy and Security Challenges

Chairwoman Ramirez noted privacy and security challenges presented by the internet of things. Privacy risks are present since devices connected to the internet can collect, compile, and transmit information about consumers in ways that may not have been expected. When aggregated, the data pieces collected by devices present “a deeply personal and startlingly complete picture of each of us.” Security risks are present since “any device connected to the Internet is potentially vulnerable to hijack.” Indeed, these risks have been reported and present real concerns.

Chairwoman Ramirez noted that the FTC will be vigilant in bringing enforcement actions against companies who fail to properly safeguard consumers from security breaches. She noted as an example the FTC’s first enforcement foray into the internet of things, against TRENDnet, for failing to properly design and test the software of its internet-connected security cameras, leaving consumers vulnerable to a hacker who accessed the live feeds from 700 cameras and made them available on the Internet. When it encounters consumer harm, Commissioner Ohlhausen stated, the FTC will use its traditional enforcement tools to challenge any potential threats that arise, much as it has done in the data security, mobile, and big data spaces.

Chairwoman Ramirez said that companies that take part in the internet of things ecosystem are “stewards of the consumer data” and that “with big data comes big responsibility.” The FTC has published a number of best practices that Chairwoman Ramirez identified as useful for companies in the internet of things space: (1) privacy by design—privacy protections built in from the outset, (2) simplified consumer choice—allowing consumers to control their data, and (3) transparency—disclosure of what information the devices collect and how it is being used.

FTC Report Forthcoming

The FTC will produce a report on what it has learned from the November 19 workshop and provide further recommendations about best practices. The FTC report can educate consumers and businesses on how to maximize consumer benefits and avoid or minimize any identified risks. Commissioner Ohlhausen stressed that the FTC should identify whether existing laws and existing regulatory structures, including self-regulation, are sufficient to address potential harms.

Vint Cerf of Google, who gave the keynote presentation, advised that rather than relying on regulations to protect privacy, social conventions should be developed. He stated that “while regulation might be helpful, an awful lot of the problems that we experience with privacy is a result of our own behavior.”

The same day as the workshop, the Future of Privacy Forum released a white paper arguing for an updated privacy paradigm for the internet of things that focuses not on how information is collected and communicated but on how organizations use personally identifiable information.

The FTC will continue to accept comments until January 10, 2014.



State Attorneys General Enter into $17 Million Settlement With Google Over Safari Web Tracking

Thirty-six states and the District of Columbia recently entered into a $17 million settlement and injunction regarding Google’s use of third-party tracking cookies on Safari web browsers. The settlement puts to bed the Attorneys General’s allegations that Google misled consumers and violated state consumer protection and privacy laws by circumventing Safari’s default privacy settings.

Third-party cookies are small files placed on a web browser, often by advertising networks, that track users as they visit websites and gather data on users’ browsing activities. Google is a search engine that owns an ad network, DoubleClick. Apple’s Safari web browser has a default privacy setting that blocks third-party cookies that track a consumer’s browsing history, including those used by DoubleClick.
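To make the mechanism concrete, here is a toy, purely illustrative simulation (not Google’s or any ad network’s actual code) of how a single cookie scoped to an ad network’s domain lets that network link one user’s visits across unrelated sites:

```python
# Toy simulation of third-party tracking cookies -- illustrative only.
class Browser:
    def __init__(self):
        self.cookie_jar = {}  # one cookie per domain, as browsers scope them

    def visit(self, page_domain, ad_domain, ad_network_log):
        # The page embeds an ad from the ad network's domain, so the browser
        # sends back (or first sets) the cookie scoped to that domain.
        uid = self.cookie_jar.setdefault(ad_domain, "uid-42")
        ad_network_log.setdefault(uid, []).append(page_domain)

browser, log = Browser(), {}
browser.visit("news.example", "ads.example", log)
browser.visit("shop.example", "ads.example", log)
# The ad network now links both visits to the same identifier:
# log == {"uid-42": ["news.example", "shop.example"]}
```

Safari’s default setting, in effect, refuses to set the `ads.example` cookie in a third-party context, which is precisely what the altered DoubleClick code worked around.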

Starting in June 2011, Google altered its DoubleClick coding to circumvent Safari’s default cookie-blocking privacy settings without users’ knowledge or consent. The practice ended in February 2012, after Stanford researcher Jonathan Mayer discovered that Google and three other online-advertising companies had circumvented Safari’s cookie blocking. In addition to tracking without consent, Google misled Safari users by suggesting that they need not install a Google opt-out plugin to block third-party advertising cookies because Safari already blocks all third-party cookies by default.

Google’s settlement with the states follows Google’s $22.5 million settlement with the Federal Trade Commission reached in August 2012 to settle the FTC’s allegation that Google violated an earlier FTC order by misrepresenting to Safari users that it would not place cookies on their browsers.  



GAO Data Broker Report Calls for Comprehensive Privacy Law

On November 15, 2013, the U.S. Government Accountability Office released a report on the statutory legal protections for consumers with regard to the use of data for marketing purposes by data brokers.

The GAO report canvasses the existing federal consumer legal protections applicable to information resellers and finds them wanting with regard to the use of the data for marketing purposes.  Specifically, the GAO concluded the following:

– “[I]nformation about an individual’s physical and mental health, income and assets, mobile telephone numbers, shopping habits, personal interests, political affiliations, and sexual habits and orientation,” can legally be collected, shared, and used for marketing purposes.  The report notes limits on HIPAA’s applicability to health-related marketing lists used by e-health websites.

– Although some industry participants have stated that current privacy laws are adequate – particularly in light of self-regulatory measures – there are gaps in the current statutory privacy framework that do not fully address “changes in technology and marketplace practices that fundamentally have altered the nature and extent to which personal information is being shared with third parties.”

– Current law is often out of step with the fair information practice principles.

According to the GAO, Congress should therefore consider strengthening the current consumer privacy framework in relation to consumer data used for marketing while not unduly inhibiting the benefits to industry and consumers from data sharing.  In doing so, Congress should consider:

– the adequacy of consumers’ ability to access, correct, and control their personal information in circumstances beyond those currently accorded under FCRA;

– whether there should be additional controls on the types of personal or sensitive information that may be collected and shared;

– changes needed, if any, in the permitted sources and methods for data collection; and

– privacy controls related to new technologies, such as web tracking and mobile devices.

GAO Report at 19, 46-47.

The GAO Report is the most recent expression of support for comprehensive privacy legislation from within the federal government.  In this regard, the report echoes the Obama Administration’s 2012 Privacy Blueprint and the FTC’s 2012 Privacy Report, both of which called for baseline privacy legislation.  The FTC Privacy Report also reiterated the agency’s support for a privacy law targeted to data brokers.  The GAO Report, by contrast, implies that a general privacy law could suffice to address the issues raised by data brokers.



Legislators Propose COPPA Expansion through Do Not Track Kids Act

It was less than a year ago that the Federal Trade Commission announced amendments to the regulations implementing the Children’s Online Privacy Protection Act (COPPA), which went into effect on July 1, 2013.  But COPPA has never covered teenagers, and a bipartisan group of senators and congressmen seeks to change that.  On November 14, 2013, Senator Edward Markey (D-MA) and Representative Joseph Barton (R-TX), with Senator Mark Kirk (R-IL) and Representative Bobby Rush (D-IL), introduced the Do Not Track Kids Act of 2013 (S. 1700, H.R. 3481), which would amend COPPA and introduce additional provisions to govern the collection and use of teens’ personal information.

Broadly stated, the bipartisan-sponsored legislation would:

  • Prohibit online properties from collecting personal and geo-location information from anyone 13 to 15 years old without the user’s consent;
  • Require consent of a parent or teen before sending targeted advertising to children and teens;
  • Require adherence to a “Digital Marketing Bill of Rights” for teens that encompasses the fair information practice principles of collection and retention limitations, purpose specification, data accuracy, access, and security;
  • Create an “eraser button” (or a “right to be forgotten” – the more elegant name by which it is known in Europe) by requiring covered online companies to permit users to remove publicly available personal information and content they have posted, when technologically feasible; and
  • Require the FTC to issue implementing regulations enforceable by both the FTC and state attorneys general.  The new COPPA prohibitions would be enforceable by the FTC against telecommunications carriers, thereby effectuating a limited repeal of the “common carrier exemption” to the FTC’s jurisdiction.

It was a 2011 iteration of the Markey-Barton Do Not Track Kids Act, which did not advance in the last Congress, that introduced the concept of an “eraser button” for teens.  California has since seized on the idea and run with it.  As previously discussed in the Secure Times blog, California recently enacted an “eraser button” for California minors, which goes into effect on January 1, 2015.  This is but one illustration of California’s recent willingness to take more aggressive action on privacy issues than Congress, while utilizing ideas trumpeted in Congress or elsewhere at the national level.



PCI Council Releases Version 3.0

Last week, the PCI Council released version 3.0 of the PCI Data Security Standard (PCI DSS) and the Payment Application Data Security Standard (PA-DSS).  This most recent version is intended to help organizations “make payment security part of their business-as-usual activities” by introducing more flexibility.  Proposed changes for version 3.0 were released in August.

Overall updates to the standards include recommendations for making PCI DSS part of every day business processes, best practices for maintaining PCI DSS compliance, additional guidance built into the standard, and enhanced testing procedures.  Several new requirements were added as well, including:

  • 5.1.2 – evaluate evolving malware threats for any systems not considered to be commonly affected
  • 8.2.3 – combine minimum password complexity and strength requirements into one
  • 8.6 – where other authentication mechanisms like tokens, smart cards, certificates, etc., are used, these mechanisms must be linked to an individual account and ensure only the intended user can gain access
  • 9.3 – control physical access to sensitive areas for onsite personnel
  • 11.5.1 – implement a process to respond to alerts generated by the change-detection mechanism
  • 12.8.5 – maintain information about which PCI DSS requirements are managed by each service provider, and which are managed by the entity

Best practices were also added, which will become requirements on July 1, 2015:

  • 8.5.1 – use unique authentication credentials for each customer for service providers with remote access to customer premises
  • 9.9 – protect devices that capture payment card data via direct physical interaction with the card from tampering and substitution
  • 11.3 and 11.3.4 – implement a methodology for penetration testing; if segmentation is used to isolate the cardholder data environment from other networks, perform penetration tests to verify that the segmentation methods are operational and effective
  • 12.9 – for service providers, provide the written agreement or acknowledgment to their customers as specified by 12.8.2

Revisions to the PA-DSS include new requirements that payment application developers verify the integrity of source code during the development process, that vendors incorporate risk assessment techniques into their software development process, and that applicable vendor personnel receive information security training at least annually.

Version 3.0 becomes effective January 1, 2014.  Version 2.0 remains in use until the end of 2014 to give organizations time to transition to the revised standards.



The Adobe Data Breach and Recurring Questions of Software Liability

In recent weeks, news and analysis of the data breach announced by Adobe in early October has revealed the problem to be possibly much worse than early reports had estimated. When Adobe first detected the breach, its investigations revealed that “certain information relating to 2.9 million Adobe customers, including customer names, encrypted credit or debit card numbers, expiration dates, and other information relating to customer orders” had been stolen through a series of sophisticated attacks on Adobe’s networks. Adobe immediately began an internal review and notified customers of steps they could take to protect their data. Security researchers have since discovered, however, that more than 150 million user accounts may have been compromised in this breach. While I make no assertions regarding any potential claims related to this breach, I believe the facts of this incident can help convey the difficulties inherent in the ongoing debate over liability in cybersecurity incidents.

The question of whether software companies should be held liable for damages due to incidents involving security vulnerabilities or software bugs has been kicked around by scholars and commentators since the 1980s—centuries ago in Internet time—with no real resolution to show for it. Over the past month, Jane Chong has written a series of articles for the New Republic which revives the debate, and argues that software vendors who do not take adequate precautions to limit defects in their code should bear a greater share of the liability burden when these defects result in actual damages. This argument may seem reasonable on its face, but a particular aspect of the recent Adobe data breach illustrates some of the complexities found in the details that should be considered a crucial part of this debate. Namely, how do we define “adequate” or “reasonable” when it comes to writing secure software?

As Adobe correctly pointed out in their initial announcement, the password data stolen during the data breach was encrypted. For most non-programmers, this would appear to be a reasonable measure to protect sensitive customer data. The catch here lies in two core tenets of information security: First, cryptography and information security are not the same thing, and second, securing software of any complexity is not easy.

When Adobe encrypted their customer passwords, they used a well-known encryption algorithm called Triple DES (3DES) in what is called ECB mode. The potential problem is not in the encryption algorithm, however, but in its application. Information security researchers have strongly discouraged the use of cryptographic algorithms like 3DES—especially in the mode Adobe implemented—for encrypting stored passwords, since it uses a single encryption key. Once a hacker cracks the key, all of the passwords become readable. In addition, since 3DES in ECB mode will always give the same encrypted text when using the same plain text, this enables hackers to use guessing techniques to uncover certain passwords. These techniques are made easier by users who use easily guessed passwords like “123456” (used by two million Adobe customers). When you consider that many Adobe customers use the same password for multiple different logins, which may include banks, health care organizations, or other accounts where sensitive information may be accessed, one can see the value of this Adobe customer data to hackers.
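The difference between a deterministic scheme and the salted approach that researchers recommend can be sketched in a few lines. This is not Adobe’s actual code; an unsalted hash stands in for keyed 3DES-ECB encryption (both are deterministic: same input, same stored value), and `pbkdf2_hmac` illustrates the salted alternative:

```python
import hashlib
import os

def deterministic_store(password):
    # Stand-in for 3DES-ECB under one fixed key: the same input always
    # yields the same output, so every user who picked "123456" has an
    # identical record in a stolen database -- duplicates are visible.
    return hashlib.sha256(password).digest()

def salted_store(password):
    # Per-user random salt: identical passwords produce different records,
    # and each guess must be re-computed against every record separately.
    salt = os.urandom(16)
    return salt, hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

pw = b"123456"
assert deterministic_store(pw) == deterministic_store(pw)  # duplicates align
_, h1 = salted_store(pw)
_, h2 = salted_store(pw)
assert h1 != h2  # same password, different stored values
```

The first property is what let researchers spot the two million identical “123456” records in the leaked Adobe data; a salted scheme would have made those duplicates invisible.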

From an Adobe customer’s perspective, it may seem reasonable that Adobe bear some of the liability for any damages that might result from this incident. After all, the customer might reason, Adobe’s network was breached, so Adobe did not do enough to protect customer data. On the other hand, Adobe could justifiably point out that it had taken reasonable precautions to protect their networks, including encrypting the sensitive data, and it was only due to a particularly sophisticated attack that the data was stolen. Further, Adobe could argue, if a customer used an easily guessed password for multiple logins, there is nothing Adobe can do to prevent this behavior—how could it be expected to be liable for digital carelessness on the part of its customers?

These questions will not be answered in a few paragraphs here, of course, but it is clear that any discussion of software liability is not necessarily analogous to product liability theories in other industries, like airlines or cars. Rather, software engineering has its own unique considerations, and we should be careful not to slip too easily into convenient metaphors when considering questions of software liability. Secure software development can be difficult; we should expect no less for questions of law related to this industry.



Mobile Location Analytics Companies Agree to Code of Conduct

U.S. Senator Charles Schumer, the Future of Privacy Forum (“FPF”), a Washington, D.C.-based think tank, and a group of location analytics companies, including Euclid, Mexia Interactive, Radius Networks, Brickstream, Turnstyle Solutions, and SOLOMO, released a Code of Conduct to promote customer privacy and transparency for mobile location analytics.

Mobile location analytics technology, which allows stores to analyze shoppers’ behavior based on information collected from the shoppers’ cell phones, has faced a string of negative press in the last several months.  The location analytics companies gather Wi-Fi and Bluetooth MAC address signals to monitor shoppers’ movements around the store, providing feedback such as how long shoppers wait in line at the check-out, how effectively a window display draws customers into the store, and how many people who browse actually make a purchase.  Retailers argue that the technology provides them with the same type of behavioral data that is already being collected from shoppers when they browse retail sites online.  Customer advocates, on the other hand, raise concerns about the invasive nature of the tracking service, particularly as most customers aren’t aware that the tracking is taking place. Senator Schumer has been one of the most vocal critics of mobile location analytics services, calling it an “unfair or deceptive” trade practice to fail to notify shoppers that their movements are being tracked or to give them a chance to opt out of the practice.  In an open letter to the FTC in July 2013, Sen. Schumer described the technology thus:

“Retailers do not ever receive affirmative consent from the customer for [location analytics] tracking, and the only options for a customer to not be tracked are to turn off their phone’s Wi-Fi or to leave the phone at home. Geophysical location data about a person is obviously highly sensitive; however, retailers are collecting this information anonymously without consent.”

In response, a group of leading mobile location analytics companies agreed to a Code of Conduct developed in collaboration with Sen. Schumer and the Future of Privacy Forum to govern mobile location analytics services.   Under the Code:

  • A participating mobile location analytics firm will “take reasonable steps to require” participating retailers to provide customer notice through clear, in-store signage; to use a standard symbol or icon to indicate the collection of mobile location analytics data; and to direct customers to an industry education and opt-out website. (For example, “To learn about use of customer location and your choices, visit www.smartstoreprivacy.com” would be acceptable language for in-store signage.)
  • The mobile location analytics company will provide a detailed disclosure in its privacy policy about the use and collection of data it collects in-store, which should be separate from the disclosure of information collected through the company’s website.
  • Customers must be given the choice to opt out of tracking.  The mobile location analytics company will post a link in its privacy policy to the industry site, which provides a central opt-out.  A notice telling customers to turn off their mobile device or to deactivate its Wi-Fi signal is not considered sufficient “choice” under the Code.
  • The notice and choice requirements do not apply if the information collected is not unique to an individual device or user, or is promptly aggregated so as not to be unique to a device or user, and individual-level data is not retained. If a mobile location analytics firm records device-level information, it must provide customer choice even if it shares only aggregate information with retail clients.
  • A customer’s affirmative consent is required if: (1) personal information will be linked to a mobile device identifier, or (2) a customer will be contacted based on the analytics information.
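The aggregation exemption above turns on whether retained data is unique to a device. A common industry approach (illustrative only; the Code does not prescribe an implementation, and the salt scheme and zone names here are assumptions) is to one-way hash each MAC address and keep only per-zone counts, discarding the device-level rows:

```python
import hashlib
from collections import Counter

# Hypothetical device-level records: (MAC address, store zone).
records = [
    ("aa:bb:cc:00:11:22", "checkout"),
    ("dd:ee:ff:33:44:55", "checkout"),
    ("aa:bb:cc:00:11:22", "entrance"),
]

# A salt rotated periodically, so hashes cannot be linked across periods.
SALT = b"rotate-me-daily"

def hash_mac(mac):
    """One-way hash so the raw hardware identifier is never stored."""
    return hashlib.sha256(SALT + mac.encode()).hexdigest()[:16]

def aggregate(records):
    """Collapse device-level rows into per-zone unique-visitor counts."""
    seen = set()
    unique_visitors = Counter()
    for mac, zone in records:
        key = (hash_mac(mac), zone)
        if key not in seen:
            seen.add(key)
            unique_visitors[zone] += 1
    return dict(unique_visitors)

counts = aggregate(records)  # {"checkout": 2, "entrance": 1}
```

Note that hashing alone does not trigger the exemption: as the Code makes clear, a firm that retains device-level records (even hashed ones) must still provide customer choice; only promptly aggregated, non-unique data is exempt.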

The FTC has supported the self-regulatory process and provided feedback on the Code during the drafting negotiations.  “It’s great that industry has recognized consumer concerns about invisible tracking in retail space and has taken a positive step forward in developing a self-regulatory code of conduct,” Jessica Rich, Director of the FTC’s Bureau of Consumer Protection, told Politico.

Some critics, however, feel that the Code does not go far enough.  The notice provision is weak, as it relies on retailers to provide the in-store signage.  Notably, retailers were not party to the negotiations that developed the Code of Conduct, and no retailer has publicly agreed to post signs in its stores.  Given the history (Nordstrom was forced to drop its mobile location analytics pilot program after bad press from customers who complained upon seeing posted signs), retailers are likely to want in-store signage to be as inconspicuous as possible.

The next time you’re out shopping, keep your eyes peeled for in-store signage.  Are your retailers watching you? 



FTC v. Wyndham Update

Edit (Feb. 5, 2014): For a more recent update on this case, please see this post.

On November 1, Maureen Ohlhausen, a Commissioner at the Federal Trade Commission (FTC), held an “ask me (almost) anything” (AMAA) session on Reddit. There were no real surprises in the questions Commissioner Ohlhausen answered, and the AMAA format is not well suited to lengthy responses. One interesting topic that did arise, however, was the FTC’s complaint against Wyndham Worldwide Corporation and Wyndham’s subsequent motion to dismiss the FTC action against it. Commissioner Ohlhausen declined to discuss the ongoing litigation, but asserted generally that the FTC has the authority to bring such actions under Section 5 of the FTC Act, 15 U.S.C. § 45. While there were no unexpected revelations in the Commissioner’s response, I thought it presented an excellent opportunity to bring everyone up to speed on the Wyndham litigation.

On June 26, 2012, the Federal Trade Commission (FTC) filed a complaint in federal district court in Arizona against Wyndham Worldwide Corporation, alleging that Wyndham “fail[ed] to maintain reasonable security” on its computer networks, which led to a data breach resulting in the theft of payment card data for hundreds of thousands of Wyndham customers and more than $10.6 million in fraudulent charges on customers’ accounts.  Specifically, the complaint alleged that Wyndham engaged in deceptive business practices in violation of Section 5 of the FTC Act by misrepresenting the security measures it undertook to protect customers’ personal information. The complaint also alleged that Wyndham’s failure to provide reasonable data security was an unfair trade practice, also in violation of Section 5.

On August 27, 2012, Wyndham responded by filing a motion to dismiss the FTC’s complaint, asserting, inter alia, that the FTC lacked the statutory authority to “establish data-security standards for the private sector and enforce those standards in federal court,” thus challenging the FTC’s authority to bring the unfairness count under the FTC Act. In its October 1, 2012 response, the FTC asked the court to reject Wyndham’s arguments, stating that its complaint alleged a number of specific security failures on Wyndham’s part that resulted in two violations of the FTC Act. The case was transferred to the District of New Jersey on March 25, 2013, and Wyndham’s motions to dismiss were denied. On April 26, Wyndham once again moved to dismiss the FTC’s complaint, again asserting that the FTC lacks the legal authority under Section 5 of the FTC Act to set data security standards for private businesses.

At stake in this litigation is the FTC’s ability to bring enforcement actions against companies that suffer a data breach due to a lack of “reasonable security.” What is unique in this case is Wyndham’s decision to fight the FTC in court rather than settle, as other companies have done when faced with similar allegations. For example, in 2006 the FTC hit ChoicePoint Inc. with a $10 million penalty over a data breach in which over 180,000 payment card numbers were stolen. The FTC has also pursued high-profile companies such as Twitter, HTC, and Google on similar facts and law; those actions resulted in out-of-court settlements.

If Wyndham’s pending motions to dismiss are denied and the FTC ultimately prevails in this case, the FTC will likely continue to bring these actions, and businesses will likely see an increased level of scrutiny applied to their network security. If, however, Wyndham succeeds and the FTC case against it is dismissed, public policy questions regarding data security will likely fall back to Congress to resolve.

Oral argument on the pending motions to dismiss is scheduled for November 7. No doubt many parties will be following these proceedings with great interest.