The Secure Times

An online forum of the ABA Section of Antitrust Law's Privacy and Information Security Committee


FTC Chairwoman Edith Ramirez Comments on Data Security for the Internet of Things

Happy New Year! For many, the holidays included exciting new gadgets. Whether it’s a fitness tracker, a smart thermostat, or a smart glucose meter, these connected devices have arrived, and more are on the horizon. Such products, termed the “Internet of Things” by privacy professionals, are broadly defined as products that can connect to a network.

On January 6, 2015, FTC Chairwoman Edith Ramirez delivered the opening remarks at the International Consumer Electronics Show, during which she spoke on security issues surrounding the Internet of Things (“IoT”). Chairwoman Ramirez discussed what she viewed as three key risks to consumer privacy, along with suggested industry solutions to mitigate those risks.

IoT Risks to Consumer Privacy
The first risk Chairwoman Ramirez identified was that connected devices engage in “ubiquitous data collection.” Because these devices can collect personal information, including our habits, locations, and physical condition, the resulting data can be assembled into rich profiles of consumer preferences and behavior.

The second risk Chairwoman Ramirez identified was the possible unexpected use of consumer data acquired through connected devices. As an example, she asked whether data from a smart TV’s tracking of consumer television habits could be combined with other data to enable businesses to engage in targeted advertising or even exacerbate socio-economic disparities.

The third risk she identified was that connected devices can be hijacked, leading to misuse of personal information.

Suggested Industry Solutions
To combat the risks identified above, Chairwoman Ramirez suggested three solutions for the IoT industry. First, IoT companies should engage in “Security by Design”: IoT products should be built with security as a priority from the start, and IoT companies should implement technical and administrative measures to ensure reasonable security. Chairwoman Ramirez identified five aspects of Security by Design:

  • conduct a privacy or security risk assessment as part of the design process;
  • test security measures before products launch;
  • use smart defaults—such as requiring consumers to change default passwords in the set-up process (see the sketch after this list);
  • consider encryption, particularly for the storage and transmission of sensitive information, such as health data; and
  • monitor products throughout their life cycle and, to the extent possible, patch known vulnerabilities.
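
To make the “smart defaults” bullet concrete, here is a minimal, hypothetical sketch of a device set-up step that refuses to finish until the factory password is replaced, and that stores only a salted hash of the new one. The default list, length rule, and parameters are illustrative assumptions, not any vendor’s actual implementation.

    import hashlib
    import os

    FACTORY_DEFAULTS = {"admin", "password", "12345"}  # illustrative defaults

    def set_initial_password(candidate: str) -> tuple[bytes, bytes]:
        """Reject factory defaults and store a salted hash, never plaintext."""
        if candidate.lower() in FACTORY_DEFAULTS or len(candidate) < 8:
            raise ValueError("Choose a new password of at least 8 characters.")
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", candidate.encode(), salt, 200_000)
        return salt, digest  # persist both; the plaintext is never written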

Second, Chairwoman Ramirez suggested that companies that collect personal information should engage in data minimization, that is, collect only the data needed for a specific purpose and safely destroy that data afterwards. Chairwoman Ramirez also urged companies to de-identify consumer data where possible.
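
As a rough illustration of what minimization plus de-identification can look like in practice, the sketch below keeps only the fields needed for a hypothetical glucose-meter reading and replaces the user ID with a keyed hash. The field names and key are made up for illustration, and the keyed-hash pseudonym is one common de-identification technique, not one Chairwoman Ramirez specifically prescribed.

    import hashlib
    import hmac

    NEEDED_FIELDS = {"glucose_mg_dl", "timestamp"}     # purpose-specific whitelist
    PSEUDONYM_KEY = b"rotate-me-and-store-separately"  # illustrative secret key

    def minimize(reading: dict) -> dict:
        """Drop unneeded fields; swap the user ID for a keyed-hash pseudonym."""
        slim = {k: v for k, v in reading.items() if k in NEEDED_FIELDS}
        slim["subject"] = hmac.new(
            PSEUDONYM_KEY, reading["user_id"].encode(), hashlib.sha256
        ).hexdigest()[:16]  # links a user's records without exposing identity
        return slim

    print(minimize({"user_id": "alice@example.com", "glucose_mg_dl": 105,
                    "timestamp": "2015-01-06T09:00:00Z", "gps": "40.7,-74.0"}))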

Finally, Chairwoman Ramirez suggested that IoT companies provide notice and choice to consumers for unexpected collection or uses of their data. As an example, Chairwoman Ramirez stated that if IoT companies are sharing data from a smart thermostat or fitness band with data brokers or marketing firms, those companies should provide consumers with a “simple notice of the proposed uses of their data and a way to consent.”

Although not official FTC statements, these remarks by Chairwoman Ramirez provide valuable insight into how the Federal Trade Commission may regulate connected devices in the future. Companies in the IoT space should monitor further developments closely and review their data collection, security, and sharing practices accordingly.


Canada’s Anti-Spam Law (CASL) – New Guidance on Providing Apps and Software

Canada’s Anti-Spam Law (CASL) targets more than just email and text messages 

In our previous post, we explained that Canada’s Anti-Spam Law (CASL) entered into force on July 1, 2014 with respect to email, text and other “commercial electronic messages”.

CASL also targets “malware”.  It prohibits installing a “computer program” – including an app, widget, software, or other executable data – on a computer system (e.g. computer, device) unless the program is installed with consent and complies with disclosure requirements.  The provisions in CASL related to the installation of computer programs will come into force on January 15, 2015.

Application outside Canada

Like CASL’s email and text message provisions, the Act’s “computer program” installation provisions apply to persons outside Canada.  A person contravenes the computer program provisions if the computer system (computer, device) is located in Canada at the relevant time (or if the person is in Canada or is acting under the direction of a person in Canada).  We wrote about CASL’s application outside of Canada here.

Penalties

The maximum penalty under CASL is $10 million for a violation of the Act by a corporation.  In certain circumstances, a person may enter into an “undertaking” to avoid a Notice of Violation.  Moreover, a private right of action is available to individuals as of July 1, 2017.

CASL’s broad scope leads to fundamental questions – how does it apply?

The broad legal terms “computer program”, “computer system”, and “install or cause to be installed” have raised many fundamental questions among industry stakeholders.  The CRTC – the Canadian authority charged with administering this new regime – seems to have gotten the message.  The first part of the CRTC’s response to FAQ #1 in its interpretation document CASL Requirements for Installing Computer Programs is “First off, don’t panic”.

New CRTC Guidance 

The CRTC has clarified some, but not all, of the questions that industry stakeholders have raised.  The CRTC’s guidance does clarify the following points.

  • Self-installed software is not covered under CASL.  CASL does not apply to owners or authorized users who are installing software on their own computer systems – for example, personal devices such as computers, mobile devices or tablets.
  • CASL does not apply to “offline installations”, for example, where a person installs software from a CD or DVD purchased at a store.
  • Where consent is required, it may be obtained from an employee (in an employment context); from the lessee of a computer (in a lease context); or from an individual (e.g. in a family context) where that individual has the “sole use” of the computer.
  • An “update or upgrade” – which benefits from blanket consent in certain cases under CASL – is “generally a replacement of software with a newer or better version”, or a version change.
  • Grandfathering – if a program (software, app, etc.) was installed on a person’s computer system before January 15, 2015, then consent to updates or upgrades is implied until January 15, 2018 – unless the person opts out of future updates or upgrades.

Who is liable?

CRTC staff have clarified that as between the software developer and the software vendor (the “platform”), both may be liable under CASL.  To determine liability, the CRTC proposes to examine the following factors, on a case-by-case basis:

  • was their action a necessary cause leading to the installation?
  • was their action reasonably proximate to the installation?
  • was their action sufficiently important toward the end result of causing the installation of the computer program?

CRTC and Industry Canada staff have indicated that they will be publishing additional FAQs, in response to ongoing industry stakeholder questions.

See:  Step-by-Step: How CASL applies to software, apps and other “computer programs”

See also:  fightspam.gc.ca  and consider signing up for information updates through the site.


Windows XP End of Life Poses Risks to the Significant Percentage of Companies Still Tied to the Platform

On April 8, Microsoft officially ended all support and ceased providing updates for its Windows XP operating system. This “end of life” (EOL) announcement is not uncommon with software platforms: continued support of aging software (XP is over 12 1/2 years old) becomes too expensive or too impractical, and users are encouraged to upgrade to a newer version. This all makes sense on the surface. As we’ve seen time and time again, software, especially large, complex software like operating systems, tends not to age well. Due to the sheer complexity of systems like XP, retrofitting patches to fix errors and vulnerabilities can be quite difficult, and may even lead to unintended consequences (i.e., more bugs). Thus, over time, software companies may urge their customers to migrate to the (relatively) clean slate provided by upgraded versions of their software.

The XP EOL announcement came as no surprise; Microsoft has been urging customers to start planning for upgrades since it terminated all retail sales of the operating system in 2008. But according to recent statistics from Net Applications, nearly 28% of Internet users are still running some version of Windows XP. Even worse, this figure does not include computers running XP that aren’t used for web browsing, e.g., servers, point-of-sale (POS) systems, medical systems, industrial systems, security systems, and ATMs. Those still running XP include large organizations such as banks and governments, which, due mainly to their size and conservative technology adoption policies, take more time to migrate away from software platforms, especially those that provide core services, such as operating systems. This has led to multi-million dollar agreements under which Microsoft will continue to support these organizations in the short term.

But what about those companies and organizations that don’t have the wherewithal to negotiate individual support contracts with Microsoft? These smaller companies often don’t have the depth of IT support required to keep up with upgrades, and some organizations may not even be aware they’re still running XP within their networks. For these companies, the fact that Microsoft will no longer provide public patches for future vulnerabilities could prove to be a serious problem.
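
For organizations unsure whether XP is still lurking somewhere on the network, even a crude inventory check is a start. The sketch below flags end-of-life hosts in a simple asset list; the CSV layout (hostname,os_version) and the version strings are assumptions for illustration, not the output of any particular discovery tool.

    import csv

    EOL_MARKERS = ("Windows XP", "Windows NT 5.1")  # assumed inventory strings

    def find_xp_hosts(inventory_path: str) -> list[str]:
        """Return hostnames whose recorded OS matches an end-of-life marker."""
        flagged = []
        with open(inventory_path, newline="") as f:
            for row in csv.DictReader(f):  # expects hostname,os_version columns
                if any(marker in row["os_version"] for marker in EOL_MARKERS):
                    flagged.append(row["hostname"])
        return flagged

    for host in find_xp_hosts("assets.csv"):
        print("End-of-life platform still in service:", host)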

The first example of this problem showed up this week. On Monday, a new “zero-day” vulnerability in Microsoft’s Internet Explorer (IE) web browser was announced. The vulnerability is quite serious: it could allow remote code execution on a user’s computer, and attacks exploiting it had already been detected in the wild. Technology news sources referred to the bug as the first sign of the “XPocalypse,” in which users and organizations still running the unsupported platform would be left to the wolves, so to speak.

Yesterday, Microsoft took the unusual step of issuing a patch for this IE vulnerability for all of its platforms, including the “unsupported” Windows XP. While this step may have averted disaster for XP users–at least for the time being–many technology experts are warning that providing retroactive support for EOL platforms will not solve the larger problem of a significant number of users running aging, vulnerable software. This should concern not only the companies still running XP, but the entire Internet ecosystem, since compromised computer systems are often repurposed as platforms for further attacks.

It’s still too early to tell whether any of the dire predictions of the so-called XPocalypse will come to pass. Some cynics have pointed out that we are not likely to see a sudden surge of attacks on XP, since XP has been quite vulnerable to attack for some time, even when it was supported. Either way, companies would do well to make software security a priority, from the C-Suite on down. Companies are coming to realize that many (or most) of them are actually in the software business, as so much of their operation depends on the software that sits behind the scenes. There may come a time when the FTC views the continued use of unsupported XP as a failure to take reasonable security measures. Adopting a wait-and-see approach to software security is bound to make a potentially bad situation even worse.


2014 Verizon Data Breach Report Paints a Sobering Picture of the Information Security Landscape

The 2014 Verizon Data Breach Investigations Report (DBIR) was released on April 22, providing just the sort of deep empirical analysis of cybersecurity incidents we’ve come to expect from this annual report. The primary messages of this year’s DBIR are the targeting of web applications, continued weaknesses in payment systems, and nine categories of attack patterns that cover almost all recorded incidents. Further, despite the attention paid to last year’s enormous data breach at Target, this year’s data shows that attacks against point-of-sale (POS) systems are actually decreasing somewhat. Perhaps most importantly, the thread running throughout this year’s DBIR is the need for increased education on, and application of, digital hygiene.

Each year’s DBIR is compiled from data on breaches and incidents investigated by Verizon, law enforcement organizations, and other private sector contributors. This year, Verizon condensed its analysis into nine attack patterns common to all observed breaches. Within each of these patterns, Verizon cites the software and vectors attackers are exploiting, as well as other important statistics such as time to discovery and remediation. The nine attack patterns listed in the DBIR are POS intrusions, web application attacks, insider misuse, physical theft/loss, miscellaneous errors, crimeware, card skimmers, denial-of-service (DoS) attacks, and cyber-espionage. Within most industry verticals, the bulk of attacks fall into just three of the nine categories.

Attacks on web applications were by far the most common threat type observed last year, with 35% of all confirmed incidents linked to web application security problems. This represents a significant increase over the three-year average of 21% of data breaches stemming from web application attacks. The DBIR states that nearly two thirds of attackers targeting web applications are motivated by ideology, while financial incentives drive another third. Financially motivated attacks are most likely to target organizations in the financial and retail industries. These attacks tend to focus on user interfaces like those at online banking or payment sites, either by exploiting some underlying weakness in the application itself or by using stolen user credentials. To mitigate the use of stolen credentials, the DBIR advised companies to consider implementing some form of two-factor authentication, a recommendation made to combat several attack types in this year’s report.

The 2014 DBIR contains a wide array of detailed advice for companies that wish to do a better job of mitigating these threats. The bulk of this advice can be condensed into the following categories:

  • Be vigilant: Organizations often find out about security breaches only when they get a call from the police or a customer. Log files and change management systems can give you early warning.
  • Make your people your first line of defense: Teach staff about the importance of security, how to spot the signs of an attack, and what to do when they see something suspicious.
  • Keep data on a “need to know” basis: Limit access to the systems staff need to do their jobs, and make sure that you have processes in place to revoke access when people change roles or leave.
  • Patch promptly: Attackers often gain access using the simplest attack methods, ones that you could guard against with a well-configured IT environment and up-to-date anti-virus software.
  • Encrypt sensitive data: Then, if data is lost or stolen, it is much harder for a criminal to use.
  • Use two-factor authentication: This won’t reduce the risk of passwords being stolen, but it can limit the damage that can be done with lost or stolen credentials (a brief sketch follows this list).
  • Don’t forget physical security: Not all data thefts happen online. Criminals will tamper with computers or payment terminals or steal boxes of printouts.
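
Because two-factor authentication recurs throughout the DBIR’s advice, a concrete illustration may help. The sketch below derives time-based one-time passwords (RFC 6238), the mechanism behind many authenticator apps, using only Python’s standard library; the Base32 secret is a made-up example, not a real credential.

    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32: str, digits: int = 6, period: int = 30) -> str:
        """Compute the current RFC 6238 time-based one-time password."""
        key = base64.b32decode(secret_b32, casefold=True)
        counter = struct.pack(">Q", int(time.time()) // period)
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation, per RFC 4226
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Server and authenticator app share the secret once; thereafter each side
    # derives the same short-lived code independently, so a stolen password
    # alone is not enough to log in.
    print(totp("JBSWY3DPEHPK3PXP"))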

These recommendations are further broken down by industry in the DBIR, but they largely come down to a liberal application of “elbow grease” on the part of companies and organizations. Executing on cybersecurity plans requires diligence and a determination to keep abreast of continual changes to the threat landscape, and often requires a shift in culture within a company. But with the FTC taking a more aggressive interest in data breaches, not to mention the possibility of civil suits in response to less-than-adequate data security measures, companies and organizations would do well to make cybersecurity a top priority, from the C-Suite on down.


FTC v. Wyndham Update, Part 3

In earlier updates, we’ve provided background and tracked the progress (and the unique circumstances) of FTC v. Wyndham Worldwide Corp., et al. On April 7, New Jersey District Court Judge Esther Salas issued a highly anticipated opinion in a case that will likely have broad implications in the realms of privacy and data security. In its motion to dismiss, Wyndham argued that the FTC had no authority to assert a claim in the data security context, that the FTC must first formally promulgate data security regulations before bringing such a claim, and that the FTC’s pleadings of consumer harm were insufficient to support its claims. The Wyndham court sided with the FTC on all of these arguments and denied Wyndham’s motion to dismiss.


NIST Eliminates Privacy Appendix from Cybersecurity Framework

In a January 15, 2014 update, the National Institute of Standards and Technology (“NIST”) announced that it would eliminate contentious privacy provisions in Appendix B of the Preliminary Cybersecurity Framework.  The appendix was originally intended “to protect individual privacy and civil liberties” as part of the February 2013 Executive Order 13636, which required NIST to establish a framework to manage cybersecurity risk.  The proposed privacy provisions generated widespread controversy, however, because “the methodology did not reflect consensus private sector practices and therefore might limit use of the Framework.”  As a result, NIST determined that the appendix “did not generate sufficient support through the comments to be included in the final Framework.”

In place of a separate privacy appendix, NIST stated that it would incorporate an alternative methodology proposed on behalf of several industry sectors.  This substitute approach eliminates references to specific privacy standards, such as Fair Information Practice Principles (FIPPs), given the current lack of consensus regarding such standards.  Instead, the Framework will provide “more narrowed and focused” guidance in the “How To Use” section that requires companies to consider privacy implications and address them as appropriate.  The high-level measures now include ensuring proper privacy training, reviewing any monitoring activities, and evaluating any privacy concerns that arise when information (such as threat data) is shared outside the company.  According to NIST, this approach will “allow organizations to better incorporate general privacy principles when implementing a cybersecurity program.”

Although eliminating the privacy appendix in favor of more general guidance was the only definitive change that NIST announced, the update also noted several other common issues raised in public comments.  These topics – which include reaching consensus on what “adoption” of the Framework entails and the use of “Framework Implementation Tiers” to assess the strength of a company’s cybersecurity program – will remain key areas of debate once the Cybersecurity Framework is released on February 13, 2014.

Although the Framework is slated for release in just a few weeks (and will be available here), NIST made clear that it is intended to be a “living document” that will need to be “update[d] and refine[d] . . . based on lessons learned through use as well as integration of new standards, guidelines, and practices that become available.”  NIST also explained that it intends to continue serving as the “convener” for such changes until the document can be transitioned to a non-government organization, but will issue a roadmap with more details soon. 


Before Liftoff, Drones Must Maneuver Through Privacy Laws

Unmanned aerial vehicles, better known as drones, are expected to revolutionize the way companies deliver packages to their customers.  Some also imagine these small aircraft delivering pizzas to a customer’s home or nachos to a fan at a ballgame.  Researchers are even investigating the possibility of using drones to assist farmers with monitoring their crops.  Before drone technology takes flight, however, it will have to maneuver through privacy laws.

The Federal Aviation Administration (FAA) is the agency charged with developing rules, including privacy rules, for private individuals and companies to operate drones in national airspace.  While the precise breadth of FAA rules is not entirely clear, a framework is beginning to develop.  When the FAA recently announced test sites for drones, it also noted that test site operators must: (1) comply with existing federal and state privacy laws, (2) have publicly available privacy policies and a written plan for data use and retention, and (3) conduct a review of privacy practices that allows for public comment.  When soliciting public comment on these test-site privacy rules, the FAA received a wide spectrum of feedback, ranging from suggestions that the agency must articulate precise elements of what constitutes a privacy violation, to assertions that the agency was not equipped (and therefore should not attempt) to regulate privacy at all.  It appears that the FAA settled on a middle ground of requiring drones to comply with existing privacy law, which is largely regulated by individual states.

Accordingly, state privacy laws are likely to be the critical privacy hurdle to commercial drone use.  It appears that only four states have thus far expressly addressed the use of private drones (as distinguished from drones used by public agencies, such as law enforcement).  Idaho and Texas generally prohibit civilians from using a drone to take photographs of private property.  They also restrict photography of any individual – even in public view – by such a drone.  Oregon bars drones from flying less than 400 feet above a person’s property once that person has requested that they not do so.  The fourth state, Illinois, restricts the use of drones that interfere with hunting and fishing activities.

As for the other states, they may simply be getting up to speed on the technology.  On the other hand, many of these states have considered or enacted laws restricting the use of drones by the police.  Because these laws are silent on the use of private drones, one could argue that these states intentionally chose not to regulate private drones (and that, accordingly, existing laws governing the use of aircraft or other public cameras apply to private drones).

Even where a state has passed a drone-related privacy law, that law may very well be challenged on constitutional or other grounds.  For instance – to the extent they prohibit photography of public areas or of objects and people in plain view – the Idaho and Texas laws may raise First Amendment questions.  As described in Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, a photographer generally receives First Amendment protection when taking public photos if he or she “possessed a message to be communicated” and “an audience to receive that message, regardless of the medium in which the message is to be expressed.”  Under this test, in Porat v. Lincoln Towers Community Association, a photo hobbyist taking pictures for aesthetic and recreational purposes was denied First Amendment protection.  In contrast, in Pomykacz v. Borough of West Wildwood, a “citizen activist” – whose pictures were taken out of concern about an affair between a town’s mayor and a police officer – was found to have First Amendment protection.  To be sure, however, the Supreme Court has acknowledged that “even in a public forum the government may impose reasonable restrictions on the time, place, or manner of protected speech, provided the restrictions are justified without reference to the content of the regulated speech, that they are narrowly tailored to serve a significant governmental interest, and that they leave open ample alternative channels for communication of the information.”  For example, under this premise, some courts have upheld restrictions on public access to crime and accident scenes.  All told, we may see drone users assert First Amendment protection for photographs taken of public areas.

Another future legal challenge may involve the question of who owns the airspace above private property.  In United States v. Causby, the Supreme Court appeared to reject the idea of private ownership of airspace.  More specifically, it held that government aircraft flying over private land do not amount to a government “taking”, or seizure of private property, unless the flights are so low and frequent that they constitute an immediate interference with the enjoyment of the land.  In other words, under Causby, the landowner owns the airspace necessary to use and enjoy the land.  But the Court declined to draw a specific line.  At the moment, it is unclear whether Oregon’s law – restricting drones within 400 feet of a home – is consistent with this principle.

Lastly, we may see a legal challenge asserting that certain state privacy laws (such as the Idaho or Texas laws, or others that disallow drone use altogether) are preempted, or trumped, by federal law.  Congress’s intent to impliedly preempt state law may be inferred (1) from a pervasive scheme of federal regulation that leaves no room for the states to supplement, or (2) where Congress’s actions touch a field in which the federal interest is so dominant that the federal system will be assumed to preclude enforcement of state laws on that subject.  Applied here, one could argue that Congress has entrusted the FAA with sole authority for regulating the narrow field of national airspace, and drones in particular.  Additionally, the argument goes, the federal government has a dominant interest in regulating national airspace, as demonstrated by the creation of the FAA and numerous other aircraft regulations.  Under this line of reasoning, state privacy laws may be better focused on regulating the data gathered by a drone rather than the space where the drone may fly or the actions the drone may take while in that space (e.g., taking pictures).

All told, before official drone liftoff, companies employing drones will have to wait for final FAA rules on privacy.  Whether those final rules will track the test-site rules discussed above is not certain; they will likely depend on the public comments the drone test sites receive.  Assuming the final rules do track the test-site rules, companies using commercial drones should focus on compliance with the various state privacy laws.  But, as noted above, we may see constitutional challenges to these laws along the way.  Stay tuned.


The Adobe Data Breach and Recurring Questions of Software Liability

In recent weeks, news and analysis of the data breach announced by Adobe in early October has revealed the problem to be possibly much worse than early reports had estimated. When Adobe first detected the breach, its investigations revealed that “certain information relating to 2.9 million Adobe customers, including customer names, encrypted credit or debit card numbers, expiration dates, and other information relating to customer orders” had been stolen through a series of sophisticated attacks on Adobe’s networks. Adobe immediately began an internal review and notified customers of steps they could take to protect their data. Security researchers have since discovered, however, that more than 150 million user accounts may have been compromised in this breach. While I make no assertions regarding any potential claims related to this breach, I believe the facts of this incident can help convey the difficulties inherent in the ongoing debate over liability in cybersecurity incidents.

The question of whether software companies should be held liable for damages due to incidents involving security vulnerabilities or software bugs has been kicked around by scholars and commentators since the 1980s—centuries ago in Internet time—with no real resolution to show for it. Over the past month, Jane Chong has written a series of articles for the New Republic which revives the debate, and argues that software vendors who do not take adequate precautions to limit defects in their code should bear a greater share of the liability burden when these defects result in actual damages. This argument may seem reasonable on its face, but a particular aspect of the recent Adobe data breach illustrates some of the complexities found in the details that should be considered a crucial part of this debate. Namely, how do we define “adequate” or “reasonable” when it comes to writing secure software?

As Adobe correctly pointed out in their initial announcement, the password data stolen during the data breach was encrypted. For most non-programmers, this would appear to be a reasonable measure to protect sensitive customer data. The catch here lies in two core tenets of information security: First, cryptography and information security are not the same thing, and second, securing software of any complexity is not easy.

When Adobe encrypted its customer passwords, it used a well-known encryption algorithm called Triple DES (3DES) in what is called ECB mode. The potential problem is not in the encryption algorithm, however, but in its application. Information security researchers have strongly discouraged the use of cryptographic algorithms like 3DES, especially in the mode Adobe implemented, for encrypting stored passwords, since this approach uses a single encryption key. Once a hacker cracks the key, all of the passwords become readable. In addition, since 3DES in ECB mode always produces the same encrypted text from the same plain text, hackers can use guessing techniques to uncover certain passwords. These techniques are made easier by users who choose easily guessed passwords like “123456” (used by some two million Adobe customers). When one considers that many Adobe customers use the same password for multiple logins, which may include banks, health care organizations, or other accounts where sensitive information may be accessed, one can see the value of this Adobe customer data to hackers.
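
The contrast is easy to demonstrate. The sketch below, which assumes the third-party pycryptodome package for the 3DES half, shows that ECB mode under a single shared key maps identical passwords to identical ciphertexts, while a salted, slow hash gives every user a distinct stored value. The key and passwords are illustrative; this is not Adobe’s actual code.

    import hashlib
    import os

    from Crypto.Cipher import DES3              # pycryptodome
    from Crypto.Random import get_random_bytes

    KEY = DES3.adjust_key_parity(get_random_bytes(24))  # one key for everyone

    def encrypt_ecb(password: bytes) -> bytes:
        """3DES-ECB with naive zero padding, roughly the pattern at issue."""
        padded = password.ljust(8 * ((len(password) + 7) // 8), b"\0")
        return DES3.new(KEY, DES3.MODE_ECB).encrypt(padded)

    # Deterministic: two users with the same password share a ciphertext.
    print(encrypt_ecb(b"123456") == encrypt_ecb(b"123456"))  # True

    def hash_password(password: bytes) -> tuple[bytes, bytes]:
        """Salted PBKDF2: the same password stores differently per user."""
        salt = os.urandom(16)
        return salt, hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

    print(hash_password(b"123456")[1] == hash_password(b"123456")[1])  # False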

From an Adobe customer’s perspective, it may seem reasonable that Adobe bear some of the liability for any damages that might result from this incident. After all, the customer might reason, Adobe’s network was breached, so Adobe did not do enough to protect customer data. On the other hand, Adobe could justifiably point out that it had taken reasonable precautions to protect its networks, including encrypting the sensitive data, and that it was only due to a particularly sophisticated attack that the data was stolen. Further, Adobe could argue, if a customer uses an easily guessed password for multiple logins, there is nothing Adobe can do to prevent this behavior—how could it be expected to be liable for digital carelessness on the part of its customers?

These questions will not be answered in a few paragraphs here, of course, but it is clear that software liability is not necessarily analogous to product liability theories in other industries, like airlines or cars. Rather, software engineering has its own unique considerations, and we should be careful not to slip too easily into convenient metaphors when considering questions of software liability. Secure software development can be difficult; we should expect no less of the questions of law related to this industry.


FTC v. Wyndham Update

Edit (Feb. 5, 2014): For a more recent update on this case, please see this post.

On November 1, Maureen Ohlhausen, a Commissioner at the Federal Trade Commission (FTC), held an “ask me (almost) anything” (AMAA) session on Reddit. There were no real surprises in the questions Commissioner Ohlhausen answered, and the AMAA format is not well-suited to lengthy responses. One interesting topic that did arise, however, was the FTC’s complaint against Wyndham Worldwide Corporation, and Wyndham’s subsequent motion to dismiss the FTC action against it. Commissioner Ohlhausen declined to discuss the ongoing litigation, but asserted generally that the FTC has the authority to bring such actions under Section 5 of the FTC Act, 15 U.S.C. § 45. While there were no unexpected revelations in the Commissioner’s response, I thought it presented an excellent opportunity to bring everyone up to speed on the Wyndham litigation.

On June 26, 2012, the Federal Trade Commission (FTC) filed a complaint in Arizona Federal District Court against Wyndham Worldwide Corporation, alleging that Wyndham “fail[ed] to maintain reasonable security” on their computer networks, which led to a data breach resulting in the theft of payment card data for hundreds of thousands of Wyndham customers, and more than $10.6 million in fraudulent charges on customers’ accounts.  Specifically, the complaint alleged that Wyndham engaged in deceptive business practices in violation of Section 5 of the FTC Act by misrepresenting the security measures it undertook to protect customers’ personal information. The complaint also alleged that Wyndham’s failure to provide reasonable data security is an unfair trade practice, also in violation of Section 5.

On August 27, 2012, Wyndham responded by filing a motion to dismiss the FTC’s complaint, asserting, inter alia, that the FTC lacked the statutory authority to “establish data-security standards for the private sector and enforce those standards in federal court,” thus challenging the FTC’s authority to bring the unfairness count under the FTC Act. In its October 1, 2012 response, the FTC asked the court to reject Wyndham’s arguments, stating that the FTC’s complaint alleged a number of specific security failures on the part of Wyndham, which resulted in two violations of the FTC Act. The case was transferred to the Federal District of New Jersey on March 25, 2013, and Wyndham’s motions to dismiss were denied. On April 26, Wyndham once again filed motions to dismiss the FTC’s complaint, again asserting that the FTC lacked the legal authority to set data security standards for private businesses under Section 5 of the FTC Act.

At stake in this litigation is the FTC’s ability to bring enforcement claims against companies that suffer a data breach due to a lack of “reasonable security.” What is unique in this case is Wyndham’s decision to fight the FTC action in court rather than settle, as other companies have done when faced with similar allegations by the FTC. For example, in 2006, the FTC hit ChoicePoint Inc. with a $10 million penalty over a data breach in which the personal information of more than 160,000 consumers was compromised. The FTC has also gone after such high-profile companies as Twitter, HTC, and Google based on similar facts and law. These actions resulted in out-of-court settlements.

If Wyndham’s pending motions to dismiss are denied, and the FTC ultimately prevails in this case, it is likely that the FTC will continue to bring these actions, and businesses will likely see an increased level of scrutiny applied to their network security. If, however, Wyndham succeeds and the FTC case against them is dismissed, public policy questions regarding data security will likely fall back to Congress to resolve.

Oral argument on the pending motions to dismiss is scheduled for November 7. No doubt many parties will be following these proceedings with great interest.


NIST Updates Proposed National Cybersecurity Framework

As noted earlier on this blog, President Obama issued a sweeping Cybersecurity Executive Order in February, which called for the development of a national cybersecurity framework to mitigate risks to federal agencies and critical infrastructure. On October 22, the National Institute of Standards and Technology (NIST) published a Preliminary Cybersecurity Framework, a revision of the Draft Cybersecurity Framework it published in August. The preliminary framework is the result of a series of public workshops and input from more than 3,000 individuals and organizations on standards and best practices.

According to NIST Director Patrick Gallagher, the goal of the NIST framework is to “turn today’s best [security] practices into common practices,” and to create a set of security guidelines for businesses to protect themselves from evolving cybersecurity threats. Adoption of the NIST framework, however, would be voluntary for companies, since NIST is a non-regulatory agency within the Department of Commerce.

Despite the voluntary nature of the framework, it has received a fair measure of criticism from businesses concerned that the standards will increase negligence liability once regulatory agencies establish requirements based on them. It is likely that courts will rely—at least in part—on any such standards to help define what “reasonable” cybersecurity measures are.

This concern is not without merit. Courts have struggled with the definition of reasonable cybersecurity, and these struggles have taken on greater urgency as questions about the vulnerability of the nation’s critical infrastructure and the protection of private information multiply. The principal problem is the moving target that “reasonable security” presents to businesses and individuals. In order to address some of these questions, NIST has published standards such as the Security and Privacy Controls for Federal Information Systems and Organizations, the final revision of which was released in April. Other sources, such as the Uniform Commercial Code, also attempt to address reasonable security, but their language is too often frustratingly vague.

In an effort to address some of these concerns, the preliminary framework has increased the flexibility of its standards. For example, NIST has removed use of the word “should” from the updated draft, and added a paragraph that gives organizations greater options in their security implementations:

Appendix B contains a methodology to protect privacy and civil liberties for a cybersecurity program as required under the Executive Order. Organizations may already have processes for addressing privacy risks such as a process for conducting privacy impact assessments. The privacy methodology is designed to complement such processes by highlighting privacy considerations and risks that organizations should be aware of when using cybersecurity measures or controls. As organizations review and select relevant categories from the Framework Core, they should review the corresponding category section in the privacy methodology. These considerations provide organizations with flexibility in determining how to manage privacy risk.

On the other hand, privacy groups have objected to the framework’s lack of requirements, and have called for protections for civil liberties as well as a commitment to civilian control of cybersecurity. Advocacy groups have also questioned reports that the National Security Agency (NSA) has directed NIST to reduce key security standards. NIST has not yet commented on any NSA involvement in the development of the framework, but has initiated an internal audit to review its own method for guidance development.

On October 29, NIST opened a 45-day public comment period, with plans to release the final version of the framework in February 2014. NIST will also host a workshop to discuss the state of the framework at North Carolina State University on November 14th and 15th. While it is unlikely that every stakeholder group will be completely satisfied with the final version of the framework, a strengthening of the nation’s critical infrastructure in the form of mutually agreed-upon, reasonable standards will surely be welcome.