The Secure Times

An online forum of the ABA Section of Antitrust Law's Privacy and Information Security Committee



Caution: Your Company’s Biggest Privacy Threat is…the FTC

Technology companies – from startups to megacorporations – should not overlook an old privacy foe: the Federal Trade Commission (FTC).  Since its inception in 2002, the FTC’s data security program has steadily gained momentum.  In the last two years, the FTC has made headlines for hefty privacy-related fines against Google and the photo-sharing social network Path.  In January 2014 alone, the agency settled with a whopping fifteen companies for privacy violations.  What is more, many of these companies’ practices were not purposefully deceptive or unfair; rather, the violations stemmed from a mere failure to invest the time and security resources needed to protect data.

Vested with comprehensive authority and unburdened by certain hurdles that class actions face, the FTC appears poised for more action.  The FTC’s authority in the privacy context originates from the Federal Trade Commission Act (FTC Act) and is quite broad.  Simply put, it may investigate “unfair or deceptive acts or practices in or affecting commerce.”  In addition to this general authority, the FTC may investigate privacy violations and breaches under numerous statutes, including the Children’s Online Privacy Protection Act (COPPA), the Fair Credit Reporting Act (FCRA) and its Disposal Rule, the Gramm-Leach-Bliley Act (GLB), and the Telemarketing and Consumer Fraud and Abuse Prevention Act.  Nor is the FTC hampered by the requirements of private class action litigation.  For example, successful privacy class actions often must establish that consumers were harmed by a data breach (as in In re Barnes & Noble Pin Pad Litigation), that consumers actually relied on a company’s promises to keep the information confidential (as in In re Apple iPhone Application Litigation), or that the litigation will not be burdened with consumer-specific issues (such as whether the user impliedly consented to the disclosure, as in In re: Google Inc. Gmail Litigation).

The FTC has often focused on companies that fail to adhere to their own stated policies, which the FTC considers a “deceptive” practice.  More recently, the FTC settled with the maker of one of the most popular Android apps, “Brightest Flashlight Free.”  While the app informed users that it collected their data, it allegedly failed to disclose that the data would be shared with third parties.  And though the bottom of the license agreement offered consumers an opportunity to click “Accept” or “Refuse,” the app allegedly was already collecting and sending information (such as the device’s location and unique device identifier) before receiving acceptance.  Just last week, the FTC settled with Fandango for failing to adequately secure data transmitted through its mobile app, in contravention of its promise to users.  The FTC alleged that Fandango disabled a critical security process, known as SSL certificate validation, which would have verified that its app’s communications were secure.  As another example, the FTC recently settled with the maker of a camera device used in homes for a variety of purposes, including baby monitoring and security, whose video can be accessed from any internet connection.  The devices allegedly “had faulty software that left them open to online viewing, and in some instances listening, by anyone with the cameras’ Internet address.”
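SSL certificate validation is the check that confirms a server’s certificate actually belongs to the host an app believes it is contacting. As an illustrative sketch only (not Fandango’s actual code, which was a mobile app), Python’s standard `ssl` module shows the difference between a context that validates certificates and one with validation switched off:

```python
import ssl

# A default context performs full certificate validation: it checks
# the certificate chain and confirms that the certificate matches
# the hostname the application thinks it is contacting.
secure_ctx = ssl.create_default_context()
assert secure_ctx.check_hostname is True
assert secure_ctx.verify_mode == ssl.CERT_REQUIRED

# Switching validation off (the kind of lapse alleged here) leaves
# traffic encrypted but accepts ANY certificate, including an
# attacker's, enabling man-in-the-middle interception. Note that
# check_hostname must be disabled before verify_mode can be set
# to CERT_NONE.
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False
insecure_ctx.verify_mode = ssl.CERT_NONE
assert insecure_ctx.verify_mode == ssl.CERT_NONE
```

The point of the sketch is that the connection still "looks" encrypted either way; only the validating context guards against an impostor server.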

Companies have also been targeted for even slight deviations from their stated policies.  For example, the FTC recently reached settlements with BitTorrent and the Denver Broncos, charging that the entities falsely claimed to hold certifications under the U.S.-EU Safe Harbor framework.  In reality, the entities had received the certifications but failed to renew them.  The Safe Harbor is a streamlined process by which U.S. companies that receive or process personally identifiable information from Europe, directly or indirectly, can comply with European privacy law.  Self-certifying to the U.S.-EU Safe Harbor Framework also signals to EU organizations that the certifying company provides “adequate” privacy protection.

Perhaps most surprising to companies is the FTC’s assertion that it may require them to have reasonable data protection policies in place, even if they never promised consumers they would safeguard the data.  Failure to secure data, according to the FTC, is an “unfair” practice under the FTC Act.  For example, the FTC recently settled with Accretive Health, a company that handles medical data and patient financial information.  Among other things, Accretive allegedly transported laptops containing private information in an unsafe manner, leading to the theft of a laptop left in a locked compartment of an employee’s car.  The FTC is estimated to have brought over 20 cases of this type, but all but one settled before any meaningful litigation.  The exception is a case against Wyndham Hotels, in which the FTC alleges that Wyndham failed to adequately protect consumer data collected by its member hotels.  According to the FTC, hackers repeatedly accessed the data due to the company’s misconfigured software, weak passwords, and insecure servers.  Though Wyndham’s privacy policy did not technically promise that the information would remain secure, the FTC faulted it for the lapse anyway.  Wyndham has challenged the FTC’s position in federal court, and a decision is expected soon.

Being the target of an FTC action is no walk in the park.  In addition to paying attorneys’ fees, the targeted company often faces significant remedial demands.  For instance, it may be required to (1) create privacy programs and protocols, (2) notify affected consumers, (3) delete private consumer data, (4) hire third-party auditors, and (5) subject itself to continual FTC oversight for 20 years.  What is more, a repeat offender that violates its agreement to refrain from future privacy violations faces significant fines: Google, for example, was required to pay $22.5 million for violating a previous settlement with the FTC.

All told, technology companies should not feel emboldened by recent class action victories in the privacy context.  To avoid FTC investigation, they should carefully review their data handling practices to ensure that those practices accord with their privacy policies.  Further, they would be wise to invest the resources needed to safeguard data and to regularly confirm that their security methods are state of the art.


The Adobe Data Breach and Recurring Questions of Software Liability

In recent weeks, news and analysis of the data breach announced by Adobe in early October has revealed the problem to be possibly much worse than early reports had estimated. When Adobe first detected the breach, its investigations revealed that “certain information relating to 2.9 million Adobe customers, including customer names, encrypted credit or debit card numbers, expiration dates, and other information relating to customer orders” had been stolen through a series of sophisticated attacks on Adobe’s networks. Adobe immediately began an internal review and notified customers of steps they could take to protect their data. Security researchers have since discovered, however, that more than 150 million user accounts may have been compromised in this breach. While I make no assertions regarding any potential claims related to this breach, I believe the facts of this incident can help convey the difficulties inherent in the ongoing debate over liability in cybersecurity incidents.

The question of whether software companies should be held liable for damages arising from security vulnerabilities or software bugs has been kicked around by scholars and commentators since the 1980s—centuries ago in Internet time—with no real resolution to show for it. Over the past month, Jane Chong has written a series of articles for the New Republic that revives the debate and argues that software vendors who do not take adequate precautions to limit defects in their code should bear a greater share of the liability burden when those defects result in actual damages. This argument may seem reasonable on its face, but a particular aspect of the recent Adobe data breach illustrates some of the complexities that should be considered a crucial part of this debate. Namely, how do we define “adequate” or “reasonable” when it comes to writing secure software?

As Adobe correctly pointed out in its initial announcement, the password data stolen during the breach was encrypted. To most non-programmers, this would appear to be a reasonable measure to protect sensitive customer data. The catch lies in two core tenets of information security: first, cryptography and information security are not the same thing, and second, securing software of any complexity is not easy.

When Adobe encrypted its customer passwords, it used a well-known encryption algorithm called Triple DES (3DES) in what is called ECB mode. The potential problem is not in the encryption algorithm itself, however, but in its application. Information security researchers have long discouraged reversible encryption schemes like 3DES—especially in the mode Adobe implemented—for storing passwords, recommending salted, one-way hashing instead. Reversible encryption depends on a single encryption key: once a hacker cracks or steals that key, all of the passwords become readable. In addition, because 3DES in ECB mode always produces the same ciphertext from the same plaintext, hackers can use guessing techniques to uncover certain passwords even without the key. These techniques are made easier by users who choose easily guessed passwords like “123456” (used by two million Adobe customers). When you consider that many Adobe customers use the same password for multiple different logins, which may include banks, health care organizations, or other accounts where sensitive information may be accessed, one can see the value of this Adobe customer data to hackers.
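The weakness described above can be sketched in a few lines of standard-library Python. The toy cipher below is hypothetical (a simple XOR, not real 3DES) and exists only to make ECB’s determinism visible; the salted PBKDF2 hashing shown afterward is the kind of approach researchers recommend instead:

```python
import hashlib
import os

# Toy stand-in for a single-key block cipher in ECB mode.
# This is NOT 3DES -- just an XOR illustration of ECB's core flaw:
# with one fixed key, identical plaintext always yields identical
# ciphertext.
KEY = b"secretk!"  # hypothetical fixed key; 8-byte blocks

def toy_ecb_encrypt(data: bytes) -> bytes:
    padded = data.ljust(-(-len(data) // 8) * 8, b"\x00")  # pad to block size
    return bytes(b ^ KEY[i % 8] for i, b in enumerate(padded))

# Two accounts with the same password store identical ciphertext,
# so an attacker can group accounts by password -- and exploit known
# popular choices like "123456" -- without cracking the key at all.
assert toy_ecb_encrypt(b"123456") == toy_ecb_encrypt(b"123456")

# Salted, slow hashing (PBKDF2, from the standard library) avoids
# both problems: a per-user random salt makes identical passwords
# hash to different values, and there is no decryption key to steal.
def hash_password(password: bytes) -> tuple[bytes, bytes]:
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    return salt, digest

salt1, digest1 = hash_password(b"123456")
salt2, digest2 = hash_password(b"123456")
assert digest1 != digest2  # same password, different stored values
```

With the salted scheme, verifying a login means re-deriving the digest from the stored salt and the submitted password; nothing in the database reveals which users share a password.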

From an Adobe customer’s perspective, it may seem reasonable that Adobe bear some of the liability for any damages that might result from this incident. After all, the customer might reason, Adobe’s network was breached, so Adobe did not do enough to protect customer data. On the other hand, Adobe could justifiably point out that it had taken reasonable precautions to protect its networks, including encrypting the sensitive data, and that it was only due to a particularly sophisticated attack that the data was stolen. Further, Adobe could argue, if a customer used an easily guessed password for multiple logins, there is nothing Adobe can do to prevent that behavior—how could it be expected to be liable for digital carelessness on the part of its customers?

These questions will not be answered in a few paragraphs here, of course, but it is clear that any discussion of software liability is not necessarily analogous to product liability theories in other industries, like airlines or cars. Rather, software engineering has its own unique considerations, and we should be careful not to slip too easily into convenient metaphors when considering questions of software liability. Secure software development can be difficult; we should expect no less for questions of law related to this industry.