The Secure Times

An online forum of the ABA Section of Antitrust Law's Privacy and Information Security Committee



The Adobe Data Breach and Recurring Questions of Software Liability

In recent weeks, news and analysis of the data breach announced by Adobe in early October have revealed the problem to be possibly much worse than early reports had estimated. When Adobe first detected the breach, its investigations revealed that “certain information relating to 2.9 million Adobe customers, including customer names, encrypted credit or debit card numbers, expiration dates, and other information relating to customer orders” had been stolen through a series of sophisticated attacks on Adobe’s networks. Adobe immediately began an internal review and notified customers of steps they could take to protect their data. Security researchers have since discovered, however, that more than 150 million user accounts may have been compromised in this breach. While I make no assertions regarding any potential claims related to this breach, I believe the facts of this incident can help convey the difficulties inherent in the ongoing debate over liability in cybersecurity incidents.

The question of whether software companies should be held liable for damages due to incidents involving security vulnerabilities or software bugs has been kicked around by scholars and commentators since the 1980s—centuries ago in Internet time—with no real resolution to show for it. Over the past month, Jane Chong has written a series of articles for the New Republic that revives the debate, arguing that software vendors who do not take adequate precautions to limit defects in their code should bear a greater share of the liability burden when those defects result in actual damages. This argument may seem reasonable on its face, but a particular aspect of the recent Adobe data breach illustrates the complexities lurking in the details of this debate. Namely, how do we define “adequate” or “reasonable” when it comes to writing secure software?

As Adobe correctly pointed out in its initial announcement, the password data stolen during the data breach was encrypted. For most non-programmers, this would appear to be a reasonable measure to protect sensitive customer data. The catch here lies in two core tenets of information security: first, cryptography and information security are not the same thing, and second, securing software of any complexity is not easy.

When Adobe encrypted its customers’ passwords, it used a well-known encryption algorithm called Triple DES (3DES) in what is known as ECB mode. The potential problem lies not in the encryption algorithm itself, however, but in its application. Information security researchers have strongly discouraged the use of reversible encryption algorithms like 3DES for storing passwords, especially in the mode Adobe implemented, because the scheme relies on a single encryption key: once a hacker cracks that key, all of the passwords become readable. In addition, because 3DES in ECB mode always produces the same encrypted text for the same plain text, hackers can use guessing techniques to uncover certain passwords. These techniques are made easier by users who choose easily guessed passwords like “123456” (used by two million Adobe customers). When you consider that many Adobe customers reuse the same password for multiple different logins, which may include banks, health care organizations, or other accounts where sensitive information may be accessed, one can see the value of this Adobe customer data to hackers.
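The weakness the researchers describe can be sketched in a few lines of Python. This is a simplified illustration, not Adobe’s actual scheme: it substitutes a fast hash function for 3DES purely to demonstrate the deterministic property (same input, same output) that makes single-key ECB encryption dangerous for stored passwords, and the function names and iteration count are illustrative.

```python
import hashlib
import os

def deterministic(password: str) -> str:
    # Stand-in for a single-key, deterministic scheme like 3DES in ECB
    # mode: identical passwords always yield identical output, so every
    # account that chose "123456" is instantly visible in a leaked database.
    return hashlib.sha256(password.encode()).hexdigest()

def salted_hash(password: str) -> tuple[bytes, bytes]:
    # Recommended practice for stored passwords: a random per-user salt
    # plus a deliberately slow key-derivation function (PBKDF2 here), so
    # identical passwords produce different stored values.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

# Two users who picked the same weak password:
print(deterministic("123456") == deterministic("123456"))  # True
_, d1 = salted_hash("123456")
_, d2 = salted_hash("123456")
print(d1 == d2)  # False: salting breaks the pattern attackers exploit
```

In the first scheme, an attacker who obtains the database can sort the stored values, see that millions of entries are identical, and guess that the most common one is “123456”; per-user salting defeats exactly that frequency analysis.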

From an Adobe customer’s perspective, it may seem reasonable that Adobe bear some of the liability for any damages that might result from this incident. After all, the customer might reason, Adobe’s network was breached, so Adobe did not do enough to protect customer data. On the other hand, Adobe could justifiably point out that it had taken reasonable precautions to protect their networks, including encrypting the sensitive data, and it was only due to a particularly sophisticated attack that the data was stolen. Further, Adobe could argue, if a customer used an easily guessed password for multiple logins, there is nothing Adobe can do to prevent this behavior—how could it be expected to be liable for digital carelessness on the part of its customers?

These questions will not be answered in a few paragraphs here, of course, but it is clear that any discussion of software liability is not necessarily analogous to product liability theories in other industries, like airlines or cars. Rather, software engineering has its own unique considerations, and we should be careful not to slip too easily into convenient metaphors when considering questions of software liability. Secure software development can be difficult; we should expect no less for questions of law related to this industry.



Mobile Location Analytics Companies Agree to Code of Conduct

U.S. Senator Charles Schumer, the Future of Privacy Forum (“FPF”), a Washington, D.C.-based think tank, and a group of location analytics companies, including Euclid, Mexia Interactive, Radius Networks, Brickstream, Turnstyle Solutions and SOLOMO, released a Code of Conduct to promote customer privacy and transparency for mobile location analytics.

Mobile location analytics technology, which allows stores to analyze shoppers’ behavior based on information collected from the shoppers’ cell phones, has faced a string of negative press in the last several months. The location analytics companies gather Wi-Fi and Bluetooth MAC address signals to monitor shoppers’ movements around the store, providing feedback such as how long shoppers wait in line at the check-out, how effectively a window display draws customers into the store, and how many people who browse actually make a purchase. Retailers argue that the technology provides them with the same type of behavioral data that is already being collected from shoppers when they browse retail sites online. Customer advocates, on the other hand, raise concerns about the invasive nature of the tracking service, particularly as most customers aren’t aware that the tracking is taking place. Senator Schumer has been one of the most vocal critics of mobile location analytics services, calling it an “unfair or deceptive” trade practice to fail to notify shoppers that their movements are being tracked or to give them a chance to opt out of the practice. In an open letter to the FTC in July 2013, Sen. Schumer described the technology thus:

“Retailers do not ever receive affirmative consent from the customer for [location analytics] tracking, and the only options for a customer to not be tracked are to turn off their phone’s Wi-Fi or to leave the phone at home. Geophysical location data about a person is obviously highly sensitive; however, retailers are collecting this information anonymously without consent.”

In response, a group of leading mobile location analytics companies agreed to a Code of Conduct developed in collaboration with Sen. Schumer and the Future of Privacy Forum to govern mobile location analytics services.   Under the Code:

  • A participating mobile location analytics firm will “take reasonable steps to require” participating retailers to provide customer notice through clear in-store signage, to use a standard symbol or icon indicating the collection of mobile location analytics data, and to direct customers to an industry education and opt-out website. (For example, “To learn about use of customer location and your choices, visit www.smartstoreprivacy.com” would be acceptable language for in-store signage.)
  • The mobile location analytics company will provide a detailed disclosure in its privacy policy about the use and collection of data it collects in-store, which should be separate from the disclosure of information collected through the company’s website.
  • Customers must be allowed the choice to opt-out of tracking.  The mobile location analytics company will post a link in its privacy policy to the industry site which provides a central opt-out.  A notice telling customers to turn off their mobile device or to deactivate the Wi-Fi signal is not considered sufficient “choice” under the Code.
  • The notice and choice requirements do not apply if the information collected is not unique to an individual device or user, or it is promptly aggregated so as not to be unique to a device or user, and individual level data is not retained. If a mobile location analytics firm records device-level information, even if it only shares aggregate information with retail clients, it must provide customer choice.
  • A customer’s affirmative consent is required if: (1) personal information will be linked to a mobile device identifier, or (2) a customer will be contacted based on the analytic information.

The FTC has offered support for the self-regulatory process and provided feedback on the Code during the drafting negotiations. “It’s great that industry has recognized customer concerns about invisible tracking in retail space and has taken a positive step forward in developing a self-regulatory code of conduct,” FTC Director of Consumer Protection Jessica Rich told Politico.

Some critics, however, feel that the Code does not go far enough. The notice provision is weak, as it relies on retailers to provide in-store signage for customers. Notably, retailers were not party to the negotiations that developed the Code of Conduct, and no retailer has publicly agreed to post signs in its stores. Given the history (retailer Nordstrom was forced to drop its mobile location analytics pilot program in response to bad press from customers who complained after seeing posted signs), retailers are likely to want in-store signage to be as inconspicuous as possible.

The next time you’re out shopping, keep your eyes peeled for in-store signage.  Are your retailers watching you? 



FTC v. Wyndham Update

Edit (Feb. 5, 2014): For a more recent update on this case, please see this post.

On November 1, Maureen Ohlhausen, a Commissioner at the Federal Trade Commission (FTC), held an “ask me (almost) anything” (AMAA) session on Reddit. There were no real surprises in the questions Commissioner Ohlhausen answered, and the AMAA format is not well-suited to lengthy responses. One interesting topic that did arise, however, was the FTC’s complaint against Wyndham Worldwide Corporation, and Wyndham’s subsequent filing of a motion to dismiss the FTC action against them. Commissioner Ohlhausen declined to discuss the ongoing litigation, but asserted generally that the FTC has the authority to bring such actions under Section 5 of the FTC Act, 15 U.S.C. § 45. While there were no unexpected revelations in the Commissioner’s response, I thought it presented an excellent opportunity to bring everyone up to speed on the Wyndham litigation.

On June 26, 2012, the Federal Trade Commission (FTC) filed a complaint in Arizona Federal District Court against Wyndham Worldwide Corporation, alleging that Wyndham “fail[ed] to maintain reasonable security” on their computer networks, which led to a data breach resulting in the theft of payment card data for hundreds of thousands of Wyndham customers, and more than $10.6 million in fraudulent charges on customers’ accounts.  Specifically, the complaint alleged that Wyndham engaged in deceptive business practices in violation of Section 5 of the FTC Act by misrepresenting the security measures it undertook to protect customers’ personal information. The complaint also alleged that Wyndham’s failure to provide reasonable data security is an unfair trade practice, also in violation of Section 5.

On August 27, 2012, Wyndham responded by filing a motion to dismiss the FTC’s complaint, asserting, inter alia, that the FTC lacked the statutory authority to “establish data-security standards for the private sector and enforce those standards in federal court,” thus challenging the FTC’s authority to bring the unfairness count under the FTC Act. In its October 1, 2012 response, the FTC asked the court to reject Wyndham’s arguments, stating that its complaint alleged a number of specific security failures on the part of Wyndham, which resulted in two violations of the FTC Act. The case was transferred to the District of New Jersey on March 25, 2013, and Wyndham’s motions to dismiss were denied. On April 26, Wyndham once again filed motions to dismiss the FTC’s complaint, again asserting that the FTC lacks the legal authority to set data security standards for private businesses under Section 5 of the FTC Act.

At stake in this litigation is the FTC’s ability to bring enforcement claims against companies that suffer a data breach due to a lack of “reasonable security.” What is unique in this case is Wyndham’s decision to fight the FTC action in court rather than make efforts to settle, as other companies have done when faced with similar allegations by the FTC. For example, in 2006, the FTC hit ChoicePoint Inc. with a $10 million penalty over a data breach in which over 180,000 payment card numbers were stolen. The FTC has also gone after such high-profile companies as Twitter, HTC, and Google based on similar facts and law. These actions resulted in out-of-court settlements.

If Wyndham’s pending motions to dismiss are denied, and the FTC ultimately prevails in this case, it is likely that the FTC will continue to bring these actions, and businesses will likely see an increased level of scrutiny applied to their network security. If, however, Wyndham succeeds and the FTC case against them is dismissed, public policy questions regarding data security will likely fall back to Congress to resolve.

Oral argument on the pending motions to dismiss is scheduled for November 7. No doubt many parties will be following these proceedings with great interest.



NIST Updates Proposed National Cybersecurity Framework

As noted earlier on this blog, President Obama issued a sweeping Cybersecurity Executive Order in February, which called for the development of a national cybersecurity framework to mitigate risks to federal agencies and critical infrastructure. On October 22, the National Institute of Standards and Technology (NIST) published a Preliminary Cybersecurity Framework, which is a revision to their Draft Cybersecurity Framework published in August. The preliminary framework is the result of a series of public workshops and input from more than 3,000 individuals and organizations on standards and best practices.

According to NIST Director Patrick Gallagher, the goal of the NIST framework is to “turn today’s best [security] practices into common practices,” and to create a set of security guidelines for businesses to protect themselves from evolving cybersecurity threats. Adoption of the NIST framework, however, would be voluntary for companies, since NIST is a non-regulatory agency within the Department of Commerce.

Despite the voluntary nature of the framework, it has received a fair measure of criticism from businesses concerned that these standards will increase negligence liability once regulatory agencies establish requirements based on these standards. It is likely that courts will rely—at least in part—on any such standards to help define what “reasonable” cybersecurity measures are.

This concern is not without merit. Courts have struggled to define reasonable cybersecurity, and these struggles have taken on greater urgency as questions mount about the vulnerability of the nation’s critical infrastructure and the protection of private information. The principal problem is the moving target that “reasonable security” presents to businesses and individuals. To address some of these questions, NIST has published standards such as the Security and Privacy Controls for Federal Information Systems and Organizations, the final revision of which was released in April. Other sources, such as the Uniform Commercial Code, also attempt to address reasonable security, but their language is too often frustratingly vague.

In an effort to address some of these concerns, the preliminary framework has increased the flexibility of its standards. For example, NIST has removed the word “should” from the updated draft and added a paragraph that gives organizations greater latitude in their security implementations:

Appendix B contains a methodology to protect privacy and civil liberties for a cybersecurity program as required under the Executive Order. Organizations may already have processes for addressing privacy risks such as a process for conducting privacy impact assessments. The privacy methodology is designed to complement such processes by highlighting privacy considerations and risks that organizations should be aware of when using cybersecurity measures or controls. As organizations review and select relevant categories from the Framework Core, they should review the corresponding category section in the privacy methodology. These considerations provide organizations with flexibility in determining how to manage privacy risk.

On the other hand, privacy groups have objected to the framework’s lack of requirements, and have called for protections for civil liberties as well as a commitment to civilian control of cybersecurity. Advocacy groups have also questioned reports that the National Security Agency (NSA) has directed NIST to reduce key security standards. NIST has not yet commented on any NSA involvement in the development of the framework, but has initiated an internal audit to review its own method for guidance development.

On October 29, NIST opened a 45-day public comment period, with plans to release the final version of the framework in February 2014. NIST will also host a workshop to discuss the state of the framework at North Carolina State University on November 14th and 15th. While it is unlikely that every stakeholder group will be completely satisfied with the final version of the framework, a strengthening of the nation’s critical infrastructure in the form of mutually agreed-upon, reasonable standards will surely be welcome.



Direct Marketing Association Launches “Data Protection Alliance”

On October 29, 2013, the Direct Marketing Association (“DMA”) announced the launch of a new initiative, the Data Protection Alliance, which it describes as “a legislative coalition that will focus specifically on ensuring that effective regulation and legislation protects the value of the Data-Driven Marketing Economy far into the future.” In its announcement release, the DMA reports the results of a study it commissioned on the economic impact of what it calls “the responsible use of consumer data” on “data-driven innovation.” According to the DMA, the study indicated that regulation which “impeded responsible exchange of data across the Data-Driven Marketing Economy” would cause substantial damage to U.S. economic growth and employment. Instead of such regulation, the DMA asks Congress to focus on its “Five Fundamentals for the Future”:

  1. Pass a national data security and breach notification law;

  2. Preempt state laws that endanger the value of data;

  3. Prohibit privacy class action suits and fund Federal Trade Commission enforcement;

  4. Reform the Electronic Communications Privacy Act (ECPA); and

  5. Preserve robust self-regulation for the Data-Driven Marketing Economy.

The DMA is explicitly concerned with its members’ interests, as any trade group would be, and this report and new Data Protection Alliance are far from the only views being expressed as to the need for legislation and regulation to alter the current balance between individual control and commercial use of personal information. Given the size and influence of the DMA and its members, though, this announcement provides useful information on the framing of the ongoing debate in the United States and elsewhere over privacy regulation.