The Secure Times

An online forum of the ABA Section of Antitrust Law's Privacy and Information Security Committee



FTC Chairwoman Edith Ramirez Comments on Data Security for the Internet of Things

Happy New Year! For many, the holidays included exciting new gadgets. Whether it’s a new fitness tracker, a smart thermostat, or a smart glucose meter, connected devices have arrived, and more are on the horizon. These products make up what privacy professionals call the “Internet of Things,” broadly defined as everyday products that can connect to a network.

On January 6, 2015, FTC Chairwoman Edith Ramirez delivered the opening remarks at the International Consumer Electronics Show, during which she spoke on security issues surrounding the Internet of Things (“IoT”). Chairwoman Ramirez discussed what she viewed as three key risks to consumer privacy, along with suggested industry solutions to mitigate those risks.

IoT Risks to Consumer Privacy
The first privacy and security risk Chairwoman Ramirez identified was that connected devices engage in “ubiquitous data collection.” Because these devices can collect personal information, including our habits, location, and physical condition, the resulting data can be assembled into rich profiles of consumer preferences and behavior.

The second risk Chairwoman Ramirez identified was the possible unexpected use of consumer data acquired through connected devices. As an example, she asked whether data from a smart TV’s tracking of consumer television habits could be combined with other data to enable businesses to engage in targeted advertising or even exacerbate socio-economic disparities.

The third risk she identified was that connected devices can be hijacked, leading to misuse of personal information.

Suggested Industry Solutions
To combat the risks identified above, Chairwoman Ramirez suggested three solutions for the IoT industry. First, IoT companies should engage in “Security by Design”: security should be a priority from the moment a product is first designed, and IoT companies should implement technical and administrative measures to ensure reasonable security. Chairwoman Ramirez identified five aspects of Security by Design (two of which are illustrated in the sketch after this list):

  • conduct a privacy or security risk assessment as part of the design process;
  • test security measures before products launch;
  • use smart defaults—such as requiring consumers to change default passwords in the set-up process;
  • consider encryption, particularly for the storage and transmission of sensitive information, such as health data; and
  • monitor products throughout their life cycle and, to the extent possible, patch known vulnerabilities.
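
To make this concrete, the following is a minimal sketch, in Python, of two of the practices above: smart defaults (refusing to complete set-up until the factory password is changed) and encryption of sensitive data at rest. The device fields and the use of the third-party cryptography library are illustrative assumptions rather than a prescribed implementation; a real product would also need key management, secure transport, and a tested update channel.

    from cryptography.fernet import Fernet

    FACTORY_DEFAULT_PASSWORD = "admin"  # hypothetical shipped default

    def complete_setup(chosen_password):
        """Refuse to finish set-up until the factory default is replaced."""
        if chosen_password == FACTORY_DEFAULT_PASSWORD:
            raise ValueError("Choose a new, non-default password to finish set-up.")
        # ... continue provisioning the device ...

    # Encrypt sensitive readings (e.g., health data) before storing them.
    key = Fernet.generate_key()       # in practice, kept in a secure key store
    cipher = Fernet(key)
    reading = b'{"glucose_mg_dl": 104, "time": "2015-01-06T09:00:00Z"}'
    stored = cipher.encrypt(reading)  # ciphertext is safe to write to disk
    assert cipher.decrypt(stored) == reading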

Second, Chairwoman Ramirez suggested that companies that collect personal information should engage in data minimization, viz. that they should collect only the data needed for a specific purpose and then safely destroy that data afterwards. Chairwoman Ramirez also urged companies to de-identify consumer data where possible.
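
As an illustration, here is a minimal sketch of what data minimization and de-identification might look like in code, assuming a hypothetical fitness-tracker record. Note that dropping and hashing identifiers is only a first step; quasi-identifiers such as location traces can still permit re-identification, so this is a starting point rather than a complete de-identification method.

    import hashlib

    NEEDED_FIELDS = {"steps", "date"}  # collect only what the feature requires

    def minimize(record):
        """Keep only the fields needed for the stated purpose."""
        return {k: v for k, v in record.items() if k in NEEDED_FIELDS}

    def pseudonymize(user_id, salt):
        """Replace a direct identifier with a salted one-way hash."""
        return hashlib.sha256(salt + user_id.encode()).hexdigest()

    raw = {"user_id": "alice@example.com", "steps": 9421,
           "date": "2015-01-06", "gps_trace": [(45.52, -122.68)]}
    clean = minimize(raw)  # user_id and gps_trace are dropped
    clean["pseudonym"] = pseudonymize(raw["user_id"], b"per-dataset-salt")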

Finally, Chairwoman Ramirez suggested that IoT companies provide notice and choice to consumers for unexpected collection or uses of their data. As an example, Chairwoman Ramirez stated that if IoT companies are sharing data from a smart thermostat or fitness band with data brokers or marketing firms, those companies should provide consumers with a “simple notice of the proposed uses of their data and a way to consent.”

Although not official FTC statements, these remarks by Chairwoman Ramirez provide valuable insight into how the Federal Trade Commission may regulate connected devices in the future. Companies in the IoT space should monitor further developments closely and review their data collection, security, and sharing practices accordingly.


Canada’s Anti-Spam Law (CASL) – New Guidance on Providing Apps and Software

Canada’s Anti-Spam Law (CASL) targets more than just email and text messages 

In our previous post, we explained that on July 1, 2014, Canada’s Anti-Spam Law (CASL) entered into force with respect to email, text, and other “commercial electronic messages”.

CASL also targets “malware”.  It prohibits installing a “computer program” – including an app, widget, software, or other executable data – on a computer system (e.g. computer, device) unless the program is installed with consent and complies with disclosure requirements.  The provisions in CASL related to the installation of computer programs will come into force on January 15, 2015.

Application outside Canada

Like CASL’s email and text message provisions, the Act’s “computer program” installation provisions apply to persons outside Canada.  A person contravenes the computer program provisions if the computer system (computer, device) is located in Canada at the relevant time (or if the person is in Canada or is acting under the direction of a person in Canada).  We wrote about CASL’s application outside of Canada here.

Penalties

The maximum penalty under CASL is $10 million for a violation of the Act by a corporation.  In certain circumstances, a person may enter into an “undertaking” to avoid a Notice of Violation.  Moreover, a private right of action will be available to individuals as of July 1, 2017.

CASL’s broad scope leads to fundamental questions – how does it apply?

The broad legal terms “computer program”, “computer system”, and “install or cause to be installed” have raised many fundamental questions among industry stakeholders.  The CRTC – the Canadian authority charged with administering this new regime – seems to have gotten the message.  The first words of the CRTC’s response to FAQ #1 in its interpretation document, CASL Requirements for Installing Computer Programs, are “First off, don’t panic”.

New CRTC Guidance 

The CRTC has clarified some, but not all, of the questions that industry stakeholders have raised.  The CRTC’s guidance does clarify the following.

  • Self-installed software is not covered under CASL.  CASL does not apply to owners or authorized users who are installing software on their own computer systems – for example, personal devices such as computers, mobile devices or tablets.
  • CASL does not apply to “offline installations”, for example, where a person installs software from a CD or DVD purchased at a store.
  • Where consent is required, it may be obtained from an employee (in an employment context); from the lessee of a computer (in a lease context); or from an individual (e.g. in a family context) where that individual has the “sole use” of the computer.
  • An “update or upgrade” – which benefits from blanket consent in certain cases under CASL – is “generally a replacement of software with a newer or better version”, or a version change.
  • Grandfathering – if a program (software, app, etc.) was installed on a person’s computer system before January 15, 2015, then consent to updates or upgrades is implied until January 15, 2018 – unless the person opts out of future updates or upgrades.

Who is liable?

CRTC staff have clarified that as between the software developer and the software vendor (the “platform”), both may be liable under CASL.  To determine liability, the CRTC proposes to examine the following factors, on a case-by-case basis:

  • was their action a necessary cause leading to the installation?
  • was their action reasonably proximate to the installation?
  • was their action sufficiently important toward the end result of causing the installation of the computer program?

CRTC and Industry Canada staff have indicated that they will be publishing additional FAQs, in response to ongoing industry stakeholder questions.

See:  Step-by-Step: How CASL applies to software, apps and other “computer programs”

See also:  fightspam.gc.ca  and consider signing up for information updates through the site.



Windows XP End of Life Poses Risks to the Significant Percentage of Companies Still Tied to the Platform

On April 8, Microsoft officially ended all support and ceased providing updates for its Windows XP operating system. This “end of life” (EOL) announcement is not uncommon with software platforms: continued support of aging software (XP is over 12 1/2 years old) eventually becomes too expensive or too impractical, and users are encouraged to upgrade to a newer version. This all makes sense on the surface. As we’ve seen time and time again, software--especially large, complex pieces of software like operating systems--tends not to age well. Due to the sheer complexity of systems like XP, retrofitting patches to fix errors and vulnerabilities can be quite difficult, and may even introduce unintended consequences (i.e., more bugs). Thus, over time, software companies may urge their customers to migrate to the (relatively) clean slate provided by upgraded versions of their software.

The XP EOL announcement came as no surprise; Microsoft has been urging customers to start planning for upgrades since it terminated all retail sales of the operating system in 2008. Yet according to recent statistics from Net Applications, nearly 28% of Internet users are still running some version of Windows XP. Even worse, this figure does not include XP machines that aren’t used for web browsing, e.g., servers, point-of-sale (POS) systems, medical systems, industrial systems, security systems, and ATMs. Those still on XP include large organizations such as banks and governments, which, due mainly to their size and conservative technology adoption policies, take more time to migrate away from software platforms, especially those that provide core services, such as operating systems. This has led to multi-million dollar agreements under which Microsoft will continue supporting these organizations in the short term.

But what about those companies and organizations that don’t have the wherewithal to negotiate individual support contracts with Microsoft? These smaller companies often lack the depth of IT support required to keep up with such migrations, and some organizations may not even be aware they’re still running XP within their networks (a simple check is sketched below). For these companies, the fact that Microsoft will no longer provide public patches for future vulnerabilities could prove to be a serious problem.
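
As a minimal sketch, a script like the following could be run across a host inventory to flag machines still on XP; it assumes Python is available on each host, whereas real asset-discovery tools would typically query the network instead.

    import platform

    def is_windows_xp():
        """Return True when the local host reports Windows XP."""
        return platform.system() == "Windows" and platform.release() == "XP"

    if is_windows_xp():
        print("WARNING: this host runs Windows XP and no longer receives "
              "public security patches; plan a migration.")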

The first example of this problem surfaced this week. On Monday, a new “zero-day” vulnerability in Microsoft’s Internet Explorer (IE) web browser was announced. The vulnerability is quite serious: it could allow remote code execution on a user’s computer, and attacks exploiting it had already been detected in the wild. Technology news sources referred to the bug as the first sign of the “XPocalypse,” in which users and organizations still running the unsupported platform would be left to the wolves, so to speak.

Yesterday, Microsoft took the unusual step of issuing a patch for this IE vulnerability for all of its platforms, including the “unsupported” Windows XP. While this step may have averted disaster for XP users–at least for the time being–many technology experts are warning that providing retroactive support for EOL platforms will not solve the larger problem of a significant number of users running aging, vulnerable software. This should concern not only the companies still running XP, but the entire Internet ecosystem, since compromised computer systems are often repurposed as platforms for further attacks.

It’s still too early to tell whether any of the dire predictions of the so-called XPocalypse will come to pass. Some cynics have pointed out that we are not likely to see a sudden surge of attacks on XP, since XP has been quite vulnerable to attack for some time, even when it was supported. Either way, companies would do well to make software security a priority, from the C-Suite on down. Companies are coming to realize that many (or most) of them are actually in the software business, as so much of their operation depends on the software that sits behind the scenes. There may come a time when the FTC views the continued use of unsupported XP as a failure to take reasonable security measures. Adopting a wait-and-see approach to software security is bound to make a potentially bad situation even worse.



2014 Verizon Data Breach Report Paints a Sobering Picture of the Information Security Landscape

The 2014 Verizon Data Breach Investigations Report (DBIR) was released on April 22, providing just the sort of deep empirical analysis of cybersecurity incidents we’ve come to expect from this annual report. The primary messages of this year’s DBIR are the increased targeting of web applications, continued weaknesses in payment systems, and nine categories of attack patterns that cover almost all recorded incidents. Further, despite the attention paid to last year’s enormous data breach at Target, this year’s data shows that attacks against point-of-sale (POS) systems are actually decreasing somewhat. Perhaps most importantly, the thread running throughout this year’s DBIR is the need for increased education and better digital hygiene.

Each year’s DBIR is compiled from data on breaches and incidents investigated by Verizon, law enforcement organizations, and other private sector contributors. This year, Verizon condensed its analysis into nine attack patterns that cover nearly all observed breaches. Within each pattern, Verizon cites the software and vectors attackers are exploiting, as well as other important statistics, such as time to discovery and remediation. The nine attack patterns listed in the DBIR are POS intrusions, web application attacks, insider misuse, physical theft/loss, miscellaneous errors, crimeware, card skimmers, denial-of-service (DoS) attacks, and cyber-espionage. Within any given industry vertical, most attacks fall into just three of the nine categories.

Attacks on web applications were by far the most common threat type observed last year, with 35% of all confirmed incidents linked to web application security problems. This figure represents a significant increase over the three-year average of 21% of data breaches attributable to web application attacks. The DBIR states that nearly two thirds of attackers targeting web applications are motivated by ideology, while financial incentives drive most of the remainder. Financially motivated attacks are most likely to target organizations in the financial and retail industries. These attacks tend to focus on user-facing interfaces, such as online banking or payment sites, either by exploiting some underlying weakness in the application itself or by using stolen user credentials. To mitigate the use of stolen credentials, the DBIR advises companies to consider implementing some form of two-factor authentication, a recommendation made to combat several attack types in this year’s report.
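
To illustrate one common form of two-factor authentication, here is a minimal sketch of time-based one-time passwords (TOTP) using the third-party pyotp library. Secret provisioning, rate limiting, and fallback flows are omitted and would be required in practice.

    import pyotp  # pip install pyotp

    secret = pyotp.random_base32()  # shared once with the user's authenticator app
    totp = pyotp.TOTP(secret)

    code_from_user = totp.now()     # in reality, typed in from the user's phone
    assert totp.verify(code_from_user), "second factor did not match"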

The 2014 DBIR contains a wide array of detailed advice for companies who wish to do a better job of mitigating these threats. The bulk of this advice can be condensed into the following categories:

  • Be vigilant: Organizations often find out about security breaches only when they get a call from the police or a customer. Log files and change management systems can give you early warning (a simple log-scanning sketch follows this list).
  • Make your people your first line of defense: Teach staff about the importance of security, how to spot the signs of an attack, and what to do when they see something suspicious.
  • Keep data on a ‘need to know’ basis: Give staff access only to the systems they need to do their jobs, and make sure you have processes in place to revoke access when people change roles or leave.
  • Patch promptly: Attackers often gain access using the simplest attack methods, ones you could guard against with a well-configured IT environment and up-to-date anti-virus software.
  • Encrypt sensitive data: Then, if data is lost or stolen, it is much harder for a criminal to use.
  • Use two-factor authentication: This won’t reduce the risk of passwords being stolen, but it can limit the damage that can be done with lost or stolen credentials.
  • Don’t forget physical security: Not all data thefts happen online; criminals will tamper with computers or payment terminals, or steal boxes of printouts.
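
As a sketch of the vigilance point above, the following scans an SSH-style authentication log for repeated failures from a single address. The log path, line format, and alert threshold are all assumptions; a real deployment would rely on a SIEM or log-management system.

    import re
    from collections import Counter

    FAILED = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")

    failures = Counter()
    with open("auth.log") as log:  # hypothetical log file
        for line in log:
            match = FAILED.search(line)
            if match:
                failures[match.group(1)] += 1

    for ip, count in failures.items():
        if count >= 10:  # arbitrary alert threshold
            print("ALERT: %d failed logins from %s" % (count, ip))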

These recommendations are further broken down by industry in the DBIR, but they largely come down to a liberal application of “elbow grease” on the part of companies and organizations. Executing on cybersecurity plans requires diligence and a determination to keep abreast of continual changes in the threat landscape, and it often requires a shift in a company’s culture. But with the FTC taking a more aggressive interest in data breaches, not to mention the possibility of civil suits in response to less-than-adequate data security measures, companies and organizations would do well to make cybersecurity a top priority from the C-Suite on down.



FTC v. Wyndham Update, Part 3

In earlier updates, we’ve provided background and tracked the progress (and the unique circumstances) of FTC v. Wyndham Worldwide Corp., et al. On April 7, New Jersey District Court Judge Esther Salas issued a highly anticipated opinion in a case that will likely have broad implications in the realms of privacy and data security. In its motion to dismiss, Wyndham argued that the FTC had no authority to assert a claim in the data security context, that the FTC must first formally promulgate data security regulations before bringing such a claim, and that the FTC’s pleadings of consumer harm were insufficient to support its claims. The Wyndham court sided with the FTC on all of these arguments and denied Wyndham’s motion to dismiss.


NIST Eliminates Privacy Appendix from Cybersecurity Framework

In a January 15, 2014 update, the National Institute of Standards and Technology (“NIST”) announced that it would eliminate contentious privacy provisions in Appendix B of the Preliminary Cybersecurity Framework.  The appendix was originally intended “to protect individual privacy and civil liberties” as part of Executive Order 13636 of February 2013, which required NIST to establish a framework to manage cybersecurity risk.  The proposed privacy provisions generated widespread controversy, however, because “the methodology did not reflect consensus private sector practices and therefore might limit use of the Framework.”  As a result, NIST determined that the appendix “did not generate sufficient support through the comments to be included in the final Framework.”

In place of a separate privacy appendix, NIST stated that it would incorporate an alternative methodology proposed on behalf of several industry sectors.  This substitute approach eliminates references to specific privacy standards, such as Fair Information Practice Principles (FIPPs), given the current lack of consensus regarding such standards.  Instead, the Framework will provide “more narrowed and focused” guidance in the “How To Use” section that requires companies to consider privacy implications and address them as appropriate.  The high-level measures now include ensuring proper privacy training, reviewing any monitoring activities, and evaluating any privacy concerns that arise when information (such as threat data) is shared outside the company.  According to NIST, this approach will “allow organizations to better incorporate general privacy principles when implementing a cybersecurity program.”

Although eliminating the privacy appendix in favor of more general guidance was the only definitive change that NIST announced, the update also noted several other common issues raised in public comments.  These topics – which include reaching consensus on what “adoption” of the Framework entails and the use of “Framework Implementation Tiers” to assess the strength of a company’s cybersecurity program – will remain key areas of debate once the Cybersecurity Framework is released on February 13, 2014.

Although the Framework is slated for release in just a few weeks (and will be available here), NIST made clear that it is intended to be a “living document” that will need to be “update[d] and refine[d] . . . based on lessons learned through use as well as integration of new standards, guidelines, and practices that become available.”  NIST also explained that it intends to continue serving as the “convener” for such changes until the document can be transitioned to a non-government organization, but will issue a roadmap with more details soon. 



Before Liftoff, Drones Must Maneuver Through Privacy Laws

Unmanned aerial vehicles, better known as drones, are expected to revolutionize the way companies deliver packages to their customers.  Some also imagine these small aircraft delivering pizzas to a customer’s home or nachos to a fan at a ballgame.  Researchers are even investigating the possibility of using drones to help farmers monitor their crops.  Before drone technology takes flight, however, it will have to maneuver through privacy laws.

The Federal Aviation Administration (FAA) is the agency charged with developing rules, including privacy rules, for private individuals and companies operating drones in national airspace.  While the precise breadth of the FAA’s rules is not yet clear, a framework is beginning to develop.  When the FAA recently announced test sites for drones, it also noted that test site operators must: (1) comply with existing federal and state privacy laws, (2) have publicly available privacy policies and a written plan for data use and retention, and (3) conduct a review of privacy practices that allows for public comment.  When the FAA solicited public comment on these test-site privacy rules, it received a wide spectrum of feedback, ranging from suggestions that the agency must articulate the precise elements of a privacy violation to assertions that the agency was not equipped (and therefore should not attempt) to regulate privacy at all.  It appears that the FAA settled on a middle ground, requiring drones to comply with existing privacy law, which is largely a matter of individual state regulation.

Accordingly, state privacy laws are likely to be the critical privacy hurdle to commercial drone use.  It appears that only four states have thus far expressly addressed the use of private drones (as distinguished from drones used by public agencies, such as law enforcement).  Idaho and Texas generally prohibit civilians from using a drone to take photographs of private property; they also restrict drone photography of any individual, even one in public view.  Oregon allows a person to demand that drones fly no lower than 400 feet above his or her property.  The fourth state, Illinois, prohibits using drones to interfere with hunting and fishing activities.

As for the other states, they may simply be getting up to speed on the technology.  On the other hand, many of these states have considered or enacted laws restricting the use of drones by police.  Because those laws are silent on the use of private drones, one could argue that these states intentionally chose not to regulate private drones (and that, accordingly, existing laws governing the use of aircraft or other public cameras apply to private drones).

Even where a state has passed a drone-related privacy law, that law may well be challenged on constitutional or other grounds.  For instance – to the extent they prohibit photography of public areas or of objects and people in plain view – the Idaho and Texas laws may raise First Amendment questions.  As described in Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, a photographer generally receives First Amendment protection when taking public photos if he or she “possessed a message to be communicated” and “an audience to receive that message, regardless of the medium in which the message is to be expressed.”  Under this test, in Porat v. Lincoln Towers Community Association, a photo hobbyist taking pictures for aesthetic and recreational purposes was denied First Amendment protection.  In contrast, in Pomykacz v. Borough of West Wildwood, a “citizen activist” – whose pictures were taken out of concern about an affair between a town’s mayor and a police officer – was found to have First Amendment protection.  To be sure, however, the Supreme Court has acknowledged that “even in a public forum the government may impose reasonable restrictions on the time, place, or manner of protected speech, provided the restrictions are justified without reference to the content of the regulated speech, that they are narrowly tailored to serve a significant governmental interest, and that they leave open ample alternative channels for communication of the information.”  For example, under this premise, some courts have upheld restrictions on public access to crime and accident scenes.  All told, we may see drone users assert First Amendment protection for photographs taken of public areas.

Another future legal challenge may involve the question of who owns the airspace above private property.  In United States v. Causby, the Supreme Court appeared to reject the idea of private ownership of airspace.  More specifically, it held that government aircraft flying over private land do not amount to a government “taking”, or seizure of private property, unless the flights are so low and frequent that they constitute an immediate interference with the enjoyment of the land.  In other words, under Causby, the landowner owns the airspace necessary to use and enjoy the land, but the Court declined to draw a specific line.  At the moment, it is unclear whether Oregon’s law – restricting drones within 400 feet of a property – is consistent with this principle.

Lastly, we may see a legal challenge asserting that certain state privacy laws (such as the Idaho or Texas laws, or others that disallow drone use altogether) are preempted, or trumped, by federal law.  Congress’s intent to impliedly preempt state law may be inferred (1) from a pervasive scheme of federal regulation that leaves no room for the states to supplement, or (2) where Congress’s actions touch a field in which the federal interest is so dominant that the federal system will be assumed to preclude enforcement of state laws on the subject.  Applied here, one could argue that Congress has entrusted the FAA with sole authority to create a scheme for regulating the narrow field of national airspace, and drones in particular.  Additionally, the argument goes, the federal government has a dominant interest in regulating national airspace, as demonstrated by the creation of the FAA and numerous other aircraft regulations.  Under this line of reasoning, state privacy laws may be better focused on regulating the data gathered by a drone rather than the space where the drone may fly or the actions it may take while in that space (e.g., taking pictures).

All told, before official drone liftoff, companies employing drones will have to wait for final FAA rules on privacy.  Whether those final rules will track the test-site rules discussed above is not certain; they will likely depend on the public comments received at the drone test sites.  Assuming the final rules do track the test-site rules, companies using commercial drones should focus on compliance with the various state privacy laws.  But, as noted above, we may see constitutional challenges to those laws along the way.  Stay tuned.