The Secure Times

An online forum of the ABA Section of Antitrust Law's Privacy and Information Security Committee



Washington State May Soon Regulate Personal Information Collection by Drones

Two Washington State bills are addressing the issue of government surveillance using drones, and the potential negative impact this could have on privacy.

The first bill, HB 1771, is a bipartisan bill sponsored by Rep. David Taylor, R-Moxee, which was introduced last year. It calls drones a “public unmanned aircraft system.”

HB 2789 is also sponsored by Rep. David Taylor. It calls drones “extraordinary sensing devices,” and its Section 3(1) would require that government use of drones be “conducted in a transparent manner that is open to public scrutiny.”

Calling drones “devices” instead of “aircraft” has significance for a State famous for its aeronautics industry. Indeed, while HB 1771 passed the House last week, HB 2789 still lingers in Committee.

A Very Broad Definition of Personal Information

HB 2789 and HB 1771 both define “personal information” quite broadly, as it would encompass not only a social security or I.D. number, but also “medical history, ancestry, religion, political ideology, or criminal or employment record.”

Interestingly, it would also encompass information that can be “a basis for inferring personal characteristics” such as “the record of the person’s presence, registration, or membership in an organization or activity, or admission to an institution” or even, “things done by or to such person,” a definition that is so broad that it may encompass just about anything that ever happens to an individual. This definition recognizes that drone surveillance allows for a 24/7 surveillance society.

Personal information also means IP and trade secret information.

Illegal Collection of Data by Drones Must be “Minimized”

Under section 4 of HB 2789, disclosure of personal information acquired by a drone must be conducted in a way that minimizes unauthorized collection and disclosure of personal information. It reprises the words of Section 5 of HB 1771, only replacing “public unmanned aircraft system” with “extraordinary sensing device.”

I am not sure that I interpreted section 4 correctly, so here is the full text:

All operations of an extraordinary sensing device or disclosure of personal information about any person acquired through the operation of an extraordinary sensing device must be conducted in such a way as to minimize the collection and disclosure of personal information not authorized under this chapter.

So the standard is not complete avoidance of unauthorized collection of personal information, but rather minimization of illegal collection. The wording may reflect the understanding of the legislature that, because of the amazing volume of data that may potentially be collected by drones, including “things done by or to such person,” it would be unrealistic to set a standard of complete avoidance of data collection.

Maybe this “minimizing” standard set by HB 1771 and HB 2789 is a glimpse of the standards for future data protection law…

Warrant Needed to Collect Personal Information by Drones

Under Section 5 of HB 2789, a drone could collect personal information only pursuant to a search warrant, which could not exceed a period of ten days.

The standard to obtain a warrant under Section 5(3)(c) of HB 2789 and Section 6(2)(c) of HB 1771 would be “specific and articulable facts demonstrating probable cause to believe that there has been, is, or will be criminal activity.”

Under Section 5(3)(d) of HB 2789, a petition for a search warrant would also have to include a statement that “other methods of data collection have been investigated and found to be either cost prohibitive or pose an unacceptable safety risk to a law enforcement officer or to the public.”

So drones should be, at least for now, still considered an extraordinary method to be used in criminal investigations. Such a statement would not, however, be necessary under HB 1771.

A warrant could not exceed ten days under Section 5(5) of HB 2789, but could not exceed 48 hours under Section 6(4) of HB 1771, and thus HB 1771 would be much more protective of civil liberties. However, as we saw, it is unlikely that HB 1771 will ever be enacted into law.

Warrant Not Needed in Case of an Emergency

Both bills would authorize some warrantless use of drones.

Under Section 7 of HB 2789, a warrant would not be needed if a law enforcement officer “reasonably determines that an emergency situation exists [involving] criminal activity and presents immediate danger of death or serious physical injury to any person,” and that the use of a drone is thus necessary.

Under Section 8 of HB 1771, it would only be necessary for the law enforcement officer to “reasonably determine that an emergency situation exists that involves immediate danger of death or serious physical injury to any person” requiring the use of a drone, without a pre-determination of criminal activity.

But even if an emergency situation does not involve criminal activity, section 8 of HB 2789 allows for the use of drones without a warrant if there is “immediate danger of death or serious physical injury to any person,” which would require the use of drones in order “to reduce the danger of death or serious physical injury.”

However, such use would only be authorized if it could be reasonably determined that such use of drones “does not intend to collect personal information and is unlikely to accidentally collect personal information,” and also that such use is not done “for purposes of regulatory enforcement.”

Both bills require that an application for a warrant be made within 48 hours after the warrantless use of a drone.

Fruits of the Poisonous Drone

Under section 10 of both HB 2789 and HB 1771, neither personal information acquired illegally by a drone nor any evidence derived from it could be used as evidence in a court of law or by state authorities.

Handling Personal Information Lawfully Collected

Even if personal information has been lawfully collected by drones, such information may not be copied or disclosed for any other purpose than the one for which it has been collected, “unless there is probable cause that the personal information is evidence of criminal activity.”

If there is no such evidence, the information must be deleted within 30 days if it was collected pursuant to a warrant, and within 10 days if it was incidentally collected, under section 11 of HB 2789; it would have to be deleted within 24 hours under section 11 of HB 1771.

Drone regulation is a new legal issue, but Washington would not be the first State to regulate it. Many other States have introduced similar proposals, though often unsuccessfully. Florida, Idaho, Illinois, Montana, Oregon, Tennessee, Texas and Virginia have all enacted laws regulating the use of drones for surveillance purposes, and North Carolina has enacted a two-year moratorium. It remains to be seen if and when federal legislation will be enacted.




Warrant Needed In Massachusetts to Obtain Cell Phone Records

The Massachusetts Supreme Judicial Court ruled 5-2 on February 18 in Commonwealth v. Augustine that the government must first obtain a warrant supported by probable cause before obtaining two weeks’ worth of historical cell site location information (CSLI).

Defendant had been indicted for the 2004 murder of his former girlfriend. During the investigation, the prosecution filed for an order to obtain CSLI from the suspect’s cellular service provider, but the order was filed under 18 U.S.C. § 2703(d) of the Stored Communications Act (SCA). Under that law, the government does not need to show probable cause, but only needs to show specific and articulable facts showing “that there are reasonable grounds to believe that the contents of a wire or electronic communication, or the records or other information sought, are relevant and material to an ongoing criminal investigation.”

The order was granted by the Superior Court in September 2004. Defendant was indicted by a grand jury in 2011, and filed a motion to suppress evidence associated with his cell phone in November 2012.

A judge from the Superior Court granted his motion to suppress, reasoning that this was a search under article 14 of the Massachusetts Declaration of Rights – which is similar to the Fourth Amendment to the U.S. Constitution – and thus a search warrant was required.

The Commonwealth of Massachusetts appealed, arguing that the CSLI was a business record, held by a third party, and that the defendant had no expectation of privacy in this information as he had voluntarily revealed it to a third party.

This argument did not convince the Massachusetts Supreme Judicial Court, which instead ruled that the defendant had an expectation of privacy in the CSLI and that the prosecution therefore needed to obtain a warrant based on probable cause to obtain this information.

The Third Party Doctrine

Why did the court find that the defendant had an expectation of privacy in his CSLI, even though this information was known by a third party, his cell phone service provider?

Under the U.S. Supreme Court’s third-party doctrine, as stated in the 1976 U.S. v. Miller case and the 1979 Smith v. Maryland case, a defendant has no reasonable expectation of privacy in information revealed to third parties.

In Miller, the Supreme Court found that the defendant had no expectation of privacy in his bank records, as they were “business records of the banks.” Similarly, in Smith v. Maryland, the Supreme Court held that installing and using a telephone pen register was not a “search” under the Fourth Amendment, and thus no warrant was required, because the defendant had no expectation of privacy in the phone numbers he had dialed.

First, the Massachusetts Supreme Judicial Court recognized that article 14 of the Massachusetts Declaration of Rights affords more protection than the Fourth Amendment to the U.S. Constitution.


Then, the Supreme Judicial Court distinguished Miller and Smith, finding a “significant difference” between those two cases and the case at hand. The Court noted that “the digital age has altered dramatically the societal landscape from the 1970s.”

In Smith, the defendant had taken an affirmative step when dialing the numbers that were later communicated to the prosecution by the telephone company; he had to do so in order to use his telephone service. As such, Smith had “[identified] a discrete item of information…like a telephone number (or a check or deposit slip as in Miller) and then transmit it to the provider.”

But cell phone users do not transmit their data to their cell phone company in order to use the service. Instead, “CSLI is purely a function and product of cellular telephone technology, created by the provider’s system network.”

The court also noted that, while using a landline may only indicate that a particular party is at home, CSLI provides a detailed report of an individual’s whereabouts. The Massachusetts court quoted the State v. Earls case from the New Jersey Supreme Court, which stated that using a cell phone to determine the location of its owner “is akin to using a tracking device and can function as a substitute for 24/7 surveillance.”

As CSLI is business information “substantively different from the types of information and records contemplated by Smith and Miller,” the court concluded that it “would be inappropriate to apply the third-party doctrine to CSLI.” However, the court added that it saw “no reason to change [its] view that the third-party doctrine applies to traditional telephone records.”

Obtaining CSLI from a Cell Phone Provider is a Search and Thus Requires a Warrant

The court then proceeded to answer the question of whether the government needed a warrant to access the CSLI.

As CSLI informs law enforcement about the whereabouts of an individual, the Massachusetts Supreme Judicial Court compared it to electronic monitoring devices such as a GPS tracker. It noted that “it is only when such tracking takes place over extended periods of time that the cumulative nature of the information collected implicates a privacy interest on the part of the individual who is the target of the tracking,” quoting the Supreme Court’s U.S. v. Jones case, in which Justice Sotomayor and Justice Alito both noted in their concurring opinions that the length of a GPS surveillance is relevant to determining whether or not the monitored individual has an expectation of privacy.

The Massachusetts Supreme Judicial Court found the duration of the period for which the government sought historical CSLI to be relevant. The government may obtain historical CSLI under the SCA standard of specific and articulable facts only if the time period is “too brief to implicate the person’s reasonable privacy interest,” and the two-week period covered in this case exceeded that threshold.

The court’s ruling was about article 14 of the Massachusetts Declaration of Rights. The Supreme Court has not yet considered the issue of whether obtaining CSLI is a search under the Fourth Amendment. Since courts are split on this issue, it is likely that the Supreme Court will answer the question of whether a warrant is required to obtain cell phone location records quite soon. 



New COPPA Compliance Mechanisms Now Available

The FTC recently approved a new COPPA safe harbor program and a new method for obtaining parental consent, providing flexibility to companies striving to comply with COPPA obligations.

The revisions to the COPPA rule that took effect July 2013 expanded COPPA provisions in several ways, including by expanding the definition of “personal information” and clarifying that third party operators are also subject to COPPA compliance obligations.  The revised rules also imposed stricter requirements for companies wishing to provide COPPA safe harbor certification and created a mechanism through which companies could submit approval for new methods of obtaining parental consent.

Safe Harbor.  Websites that participate in an FTC-approved COPPA safe harbor program will generally be subject to review and disciplinary actions under the program guidelines rather than be subject to a formal FTC investigation and enforcement action.  In the amended Rule, the FTC imposed stricter requirements for companies wishing to provide safe harbor certification programs. A potential safe harbor program provider must now provide, during the approval process, extensive documentation about the program’s requirements and the organization’s capability to oversee the program; after approval, the program must submit annual reports to the FTC.

On February 12, the FTC announced its approval of the kidSAFE Seal Safe Harbor program, which is designed for child-friendly websites and applications, including kid-targeted games, educational sites, virtual worlds, social networks, mobile apps, tablet devices and other similar interactive services and technologies.

The FTC approved the kidSAFE seal safe harbor program after determining that it had (1) a requirement that participants in the safe harbor program implement substantially similar requirements that provide the same or greater protection for children as those contained in the COPPA Rule; (2) an effective, mandatory mechanism for independent assessment of the safe harbor program participants’ compliance with the guidelines; and (3) disciplinary actions for noncompliance by safe harbor participants.

The kidSAFE Seal program is the first safe harbor program approved under the amended version of the Rule.  The program joins five other safe harbor certifications previously approved by the FTC: the Children’s Advertising Review Unit of the BBB, the Entertainment Software Rating Board, TRUSTe, Privo Inc. and Aristotle International, Inc.

Parental Verification Methods.  The FTC recently approved a new authentication method proposed by Imperium, LLC for verifying the identity of parents who consent to the collection of their children’s data.  Imperium proposed a “knowledge-based authentication system” for its identity verification system ChildGuardOnline, which verifies a user’s identity by asking a series of out-of-wallet challenge questions (i.e., questions whose answers cannot be determined merely by looking in a person’s wallet).  Knowledge-based authentication systems are already used by entities that handle sensitive information, such as financial institutions and credit bureaus.  The FTC found this was a reliable method of verification because the questions were sufficiently difficult that a child age 12 or under in the parent’s household could not reasonably ascertain the answers, and noted that knowledge-based authentication has already proven reliable in the marketplace in other contexts.

Previously, the FTC had rejected an application by AssertID Inc. for its ConsentID product, which proposed to verify parental identity by asking that “friends” on the parent’s social media sites vouch for the parent-child relationship. The FTC found that this method was not “reasonably calculated in light of available technology” to ensure the person providing consent was the child’s parent, and that the process could easily be circumvented by children who create fake social media accounts.  To date, the Imperium methodology of parental consent verification is the only method approved by the FTC that was not in the text of the Rule itself.  The other methods for verifying parental consent provided in the text of the Rule are (a) requesting that consent be provided by a written form returned by mail, fax or scanned email; (b) requesting a credit or debit card in connection with a monetary transaction; (c) requesting that the parent call a toll-free phone number; (d) connecting with the parent via video-conference; or (e) checking a form of ID against a government database.

The FTC recently closed its public comment period for another proposed verification system submitted by iVeriFly. The iVeriFly methodology builds on a knowledge-based authentication system similar to the method proposed by Imperium, wherein the program scans non-FCRA consumer databases to generate out-of-wallet questions for the parent to answer. If the parent answers the questions correctly, the iVeriFly system then places a call to the parent requesting that consent be provided through a series of telephone key presses.



FTC Mobile Device Tracking Seminar Highlight – Customer Trust

On Wednesday, February 19, the FTC hosted the first event of its Spring Privacy Series on emerging consumer privacy issues, focusing on mobile device tracking, specifically as it is used to track consumers’ movements throughout and around retail stores.

After a mobile location tracking demonstration by FTC Chief Technologist Latanya Sweeney, Ashkan Soltani provided an overview of the technology. A panel discussion followed, including representatives from The National Retail Federation (NRF), Electronic Frontier Foundation (EFF),  analytics companies iInside and Mexia Interactive, and design firm Create with Context.

According to panelists, retailers and other users of mobile location tracking technology are strongly incented to maintain consumer trust because consumer loyalty is on the line. The way to consumer trust is transparency – transparency about what data are being collected, what’s being done with the data, etc.  Enabling transparency, however, is challenging because it is difficult to capture consumer awareness. There are three different ways to enable awareness: explicit, implicit and ambient. Studies have shown that explicit awareness, or signage, is ineffective. Consumers do not notice signs when they enter a retail establishment. And including signage on a mobile device is a limited communication tool because although 84% of shoppers use smartphones in a store, only 11% have it visible at any given time. The reason? Their hands are busy; holding a device while shopping is not practical. Also, the amount of detail that explicit signage is attempting to communicate poses a challenge. Including information on what is being collected, how it is being used, giving an opt out option and conveying consumer benefits is a substantial amount of print for a consumer to read and process while she is trying to get in a store, find items, and get out.  

Implicit awareness arises when a consumer understands that information is being collected about her because she is receiving a benefit or service that directly leverages that information. For example, when a user accesses Google maps to get directions, she understands that the map application is collecting her location information. The more a user receives a benefit from the tracking functionality and can intuit that information is being collected to provide that benefit, the higher the level of implicit awareness. Finally, ambient awareness arises from input on the periphery; it’s not directly a part of the consumer’s experience but she may be aware of it. An example of ambient awareness is the handicapped sign; people see it and immediately understand the message it communicates. At the same time, it’s not front-and-center like explicit awareness. Designers are working on creating a universal “My Data” icon that would immediately communicate to consumers that their data is being collected. To date, over 300 iterations have been tested and the icon is still in development.  Using the three types of awareness together is a long-term goal that would strengthen transparency.

Panelists were clear that the use of mobile location tracking technology is limited to collecting information in the aggregate and looking for trends, not profiling individuals. This, coupled with the fact that the information being collected is hashed MAC addresses, not actual MAC addresses or consumers’ individual names, should increase consumer confidence. There was debate among panel participants about whether the collection of information should be limited at the outset, or whether full collection should take place but with strong governance controls in place, such as not sharing data across multiple clients. Participants also addressed the fact that location data can be the most sensitive type of data of all, as it provides for multiple inferences regarding habits and associations, and thus yields the most insights.

The panel concluded with the NRF restating that retailers are using information collected only in the aggregate, are doing the best they can, and are providing choice to consumers so they can opt out across the industry (Future of Privacy Forum Mobile Analytics Opt Out (Beta)); EFF calling for a change to the underlying technology so devices don’t have unique identifiers; a call for good design to enable transparency and trust; and strong governance to protect the data being collected.

For more information about governance around mobile device location tracking, read about the Future of Privacy Forum Mobile Analytics Code of Conduct in Emily Tabatabai’s Secure Times blog on the topic, as well as view the actual Code.



Reactions to NIST’s Final Cybersecurity Framework – The Good and the Bad (but no ugly)

On February 12, 2014, the National Institute of Standards and Technology (“NIST”) issued the final Framework for Improving Critical Infrastructure Cybersecurity (“Framework”).  Issuance of the Framework was required by the Obama Administration’s February 2013 executive order, Improving Critical Infrastructure Cybersecurity (the “Executive Order”), aimed at increasing the overall resilience of U.S. critical infrastructure, which is highly dependent upon cyber systems.

Businesses and industry groups have lauded the NIST effort for providing a:

  • Highly inclusive and collaborative approach. In developing the Framework, NIST (1) engaged with over 3,000 individuals and organizations, working with them to identify and discuss current cybersecurity standards, best practices and guidelines; and (2) received over 200 responses, many of which were comprehensive reports, to its Request for Information. This approach will continue as NIST solicits feedback for future updates and iterations of the Framework. Also, one would expect to see further collaboration and communication between industries and government, and among businesses, via the Critical Infrastructure Cyber Community C³ (“C Cubed”) Voluntary Program, established as a partnership between the Department of Homeland Security (DHS) and the critical infrastructure community to help promote implementation and awareness of the Framework.
  • Common cybersecurity language. According to Tom Gann of McAfee the Framework provides, for the first time, a common language for technologists and executives and board members to communicate and make appropriate risk-based decisions. “This is important in and of itself, because C-suite executives can’t manage risk if they don’t explicitly understand what the risks are.” [1]
  • Useful tool. The Framework has also been praised for providing organizations with a tool that allows them to get an honest picture of where they are from a security perspective as well as identify what they want their security program to look like. According to McAfee’s Gann, “The delta between the two provides a means to identify a roadmap for improvement… and communication.”
  • Flexible approach. The Framework avoids a one-size-fits-all approach, recognizing that organizations will continue to have unique risks, including different threats, vulnerabilities, and risk tolerances, and that they will vary in how they implement the Framework. By employing an approach based on principles and best practices,  the Framework is applicable to organizations regardless of size, degree of cybersecurity risk or sophistication. Additionally, it makes sense of the current panoply of approaches to cybersecurity by organizing and structuring the standards, guidelines, and practices that are working effectively in industry today.

At the same time, business and industry groups have leveled some strong criticism at NIST and the Framework for several reasons, including:

  • Threats are largely ignored. One industry pundit, Bob Gourley, has harsh words for NIST: “By ignoring the prodigious threats we face and the threat information functions organizations should put in place NIST has shown an incredibly high naivete, with the result being a framework unable to improve critical infrastructure cyber security.” [2]
  • Cloud risks are dismissed. There is no attempt to address cloud-related risks. In fact, the word “cloud” is mentioned only once in the entire Framework, and only by way of an example. Such neglect seems foolhardy in light of Forrester Research’s prediction that cloud will take off in 2014 as it is integrated into existing IT portfolios and becomes a de-facto way of doing business.[3]
  • Economic realities are not addressed. The Framework in no way addresses how to establish a cybersecurity program that is cost effective or supported by economic incentives. Cybersecurity programs are not cheap and this oversight will have real consequences for companies struggling to get a program off the ground or strengthen a currently existing program. Internet Security Alliance President Larry Clinton cautions, “If we don’t make real progress in these areas quickly all the work that went into developing the NIST framework will go to waste.”[4]  A voluntary program that doesn’t account for economic reality will quickly find itself with no volunteers.
  • Too simplistic. The Framework is presented in such an elementary, high-level format that it is unlikely to be of any use to sophisticated companies already employing robust cybersecurity defense programs. At the same time, because it is voluntary, it is unlikely to compel less sophisticated companies without a program to adopt the Framework.

Given NIST’s collaborative and inclusive approach to developing the Framework and its continued solicitation of feedback, there seems to be reason for hope that the criticisms leveled against the current Framework will eventually be addressed.



Ninth Circuit Holds Actual Injury Not Required For Article III Standing under FCRA

On February 4, 2014, in Robins v. Spokeo, Inc., the Ninth Circuit reversed a district court and held that a plaintiff had standing to pursue a claim for damages under the Fair Credit Reporting Act (FCRA).

Spokeo is a data broker that operates a “people search” website that allows users to obtain information about other individuals, including contact information, marital status, age, occupation, economic health, and wealth level.  The complaint asserted that Spokeo, as an alleged “consumer reporting agency,” violated a number of provisions of the FCRA, including by failing to follow reasonable procedures to assure the requisite accuracy of information about consumers and failing to provide notices to providers and users of information.  With respect to harm, the named plaintiff, bringing the action on behalf of a putative class, asserted that Spokeo had provided inaccurate information about him – namely, that he had a graduate degree and was wealthy – which diminished his employment prospects and led to anxiety and stress about his damaged ability to obtain work.

The Ninth Circuit easily dispensed with the challenge to standing as a statutory matter.  The court reasoned that because the FCRA provides a private right of action that does not require proof of actual damages, the statute likewise does not require a plaintiff to plead actual damages to have standing.

As for Article III injury-in-fact, the Ninth Circuit required no more in the way of pleading actual damages.  The court explained that, first, a plaintiff must allege that his statutory rights have been violated.  Second, the statutory rights at issue must protect against “individual, rather than collective, harm.”  The plaintiff alleged that he personally was injured by Spokeo’s provision of inaccurate information about him.  And his “personal interests in the handling of his . . . information are individualized rather than collective,” and therefore constitute “concrete de facto injuries.”  As for causation and redressability, once again, the statutory cause of action controlled:  the alleged violation of a statutory provision “caused” the violation of a right conferred by that provision.  Likewise, the court reasoned that statutory damages are presumed to redress the alleged injury.

The Ninth Circuit’s Spokeo decision follows the reasoning of Beaudry v. TeleCheck Services, Inc., 579 F.3d 702 (6th Cir. 2009).  At the same time, such statutory cases stand in contrast to the many – but by no means all – class actions where plaintiffs have struggled to plead injury-in-fact to pursue state common law claims seeking damages following the loss of personal information in a data breach.  See, e.g., Reilly v. Ceridian Corp., 664 F.3d 38 (3d Cir. 2011) (dismissing complaint for lack of standing); Key v. DSW Inc., 454 F. Supp. 2d 684 (S.D. Ohio 2006) (same); but see, e.g., Pisciotta v. Old Nat’l Bancorp, 499 F.3d 629 (7th Cir. 2007) (holding plaintiff demonstrated standing).