The Secure Times

An online forum of the ABA Section of Antitrust Law's Privacy and Information Security Committee



2014 Verizon Data Breach Report Paints a Sobering Picture of the Information Security Landscape

The 2014 Verizon Data Breach Investigations Report (DBIR) was released on April 22, providing just the sort of deep empirical analysis of cybersecurity incidents we’ve come to expect from this annual report. The primary messages of this year’s DBIR are the targeting of web applications, continued weaknesses in payment systems, and nine categories of attack patterns that cover almost all recorded incidents. Further, despite the attention paid to last year’s enormous data breach at Target, this year’s data shows that attacks against point of sale (POS) systems are actually decreasing somewhat. Perhaps most importantly, the thread running throughout this year’s DBIR is the need for broader education on, and more consistent application of, basic digital hygiene.

Each year’s DBIR is compiled based on data from breaches and incidents investigated by Verizon, law enforcement organizations, and other private sector contributors. This year, Verizon condensed their analysis to nine attack patterns common to all observed breaches. Within each of these patterns, Verizon cites the software and vectors attackers are exploiting, as well as other important statistics such as time to discovery and remediation. The nine attack patterns listed in the DBIR are POS intrusions, web application attacks, insider misuse, physical theft/loss, miscellaneous errors, crimeware, card skimmers, denial-of-service (DoS) attacks, and cyber-espionage. Within industry verticals, most attacks can be characterized by only three of the nine categories.

Attacks on web applications were by far the most common threat type observed last year, with 35% of all confirmed incidents linked to web application security problems. This represents a significant increase over the three-year average of 21% of data breaches attributed to web application attacks. The DBIR states that nearly two thirds of attackers targeting web applications are motivated by ideology, while financial incentives drive another third. Attacks for financial reasons are most likely to target organizations in the financial and retail industries. These attacks tend to focus on user interfaces like those at online banking or payment sites, either by exploiting some underlying weakness in the application itself or by using stolen user credentials. To mitigate the use of stolen credentials, the DBIR advises companies to consider implementing some form of two-factor authentication, a recommendation made to combat several attack types in this year’s report.

The 2014 DBIR contains a wide array of detailed advice for companies who wish to do a better job of mitigating these threats. The bulk of this advice can be condensed into the following categories:

  • Be vigilant: Organizations often find out about security breaches only when they get a call from the police or a customer. Log files and change management systems can give you early warning.
  • Make your people your first line of defense: Teach staff about the importance of security, how to spot the signs of an attack, and what to do when they see something suspicious.
  • Keep data on a ‘need to know’ basis: Limit access to the systems staff need to do their jobs, and make sure that you have processes in place to revoke access when people change roles or leave.
  • Patch promptly: Attackers often gain access using the simplest attack methods, ones that you could guard against simply with a well-configured IT environment and up-to-date anti-virus software.
  • Encrypt sensitive data: Then, if data is lost or stolen, it is much harder for a criminal to use.
  • Use two-factor authentication: This won’t reduce the risk of passwords being stolen, but it can limit the damage that can be done with lost or stolen credentials.
  • Don’t forget physical security: Not all data thefts happen online. Criminals will tamper with computers or payment terminals, or steal boxes of printouts.
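For readers curious about what the two-factor authentication recommendation involves under the hood: most authenticator apps implement the standard TOTP algorithm (RFC 6238), which derives a short-lived code from a shared secret and the current time. The following is a minimal illustrative sketch in Python, not code from the DBIR; the function names `hotp` and `totp` are our own.

```python
import base64
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian counter, then dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of the last byte picks the 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP with a counter derived from the current 30-second window."""
    return hotp(secret, int(time.time()) // step, digits)
```

Because the server and the user’s device compute the same code independently, a password stolen in a breach is useless on its own — precisely the damage-limiting effect the DBIR describes.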

These recommendations are further broken down by industry in the DBIR, but they largely come down to a liberal application of “elbow grease” on the part of companies and organizations. Executing on cyber security plans requires diligence and a determination to keep abreast of continual changes to the threat landscape, and often requires a shift in culture within a company. But with the FTC taking a more aggressive interest in data breaches, not to mention the possibility of civil suits as a response to less-than-adequate data security measures, companies and organizations would do well to make cyber security a top priority from the C-Suite on down.



The FTC “Pins” Cole Haan on Pinterest Campaign: Disclosure of Contest Driving Endorsement of Products Required

The rise of social media for contests and marketing campaigns has captured the attention of the Federal Trade Commission (FTC), particularly campaigns that provide for contest entry based on what amounts to social media endorsements.  “Like Company XYZ now to enter!”  The FTC is taking stock and beginning to weigh in on this relatively recent practice.  Just ask Cole Haan.  Late last month, the FTC sent the popular shoemaker a letter marking the end of its investigation into a marketing campaign that turned on “pinning” Cole Haan products for entry into a contest.  In it, the FTC concluded that Cole Haan needed to do more to disclose the connection between the contestants’ “pins” and the company’s contest.

It all started last year when Cole Haan launched its Wandering Sole marketing campaign.  Cole Haan encouraged consumers to create Pinterest boards that included five shoe images from Cole Haan’s own Pinterest board and another five images of the contestants’ favorite places to wander.  Whoever created the board that the company dubbed most creative would win a $1,000 shopping spree.  To identify the contestants, Cole Haan asked that the Pinterest users include the hashtag #WanderingSole in the description of their images.

According to the FTC, Cole Haan allegedly created a “deceptive” situation with its Pinterest campaign because consumers may not have realized that the authors of the pinned content were receiving incentives for their endorsements, even if that incentive was the mere chance to win a contest.  Notably, the FTC determined that Cole Haan’s request that contestants include the contest-specific hashtag was insufficient to overcome the potential for deception.

The foundation for the FTC’s letter lies in 16 C.F.R. Part 255, Guides Concerning the Use of Endorsements and Testimonials in Advertising.  In § 255.5, the FTC explains that companies must “fully disclose” any connection between the company and an endorser of its products when that connection “might materially affect the weight or credibility of the endorsement.”  As for social media uses, the Guides specifically acknowledge the difficulty of determining the link between an individual’s Internet activity and a manufacturer’s marketing activity.  The FTC specifically points out that the marketer “presumably would not have initiated the process that led to the endorsements being made in these new media had it not concluded that a financial benefit would accrue from doing so.”  The importance of the FTC’s Cole Haan letter is that it explicitly states that a pin on a Pinterest board constitutes an “endorsement” and a contest entry constitutes a “connection” between the company and the endorser under § 255.5.  The FTC deliberately publicized its closing letter as a means to put companies on notice of its interpretation of the endorsement guidelines in connection with Pinterest contests.  The message is not that social media contests need to stop, but rather that they must be plainly disclosed for what they are. What the FTC considers an adequate disclosure, however, remains to be seen.  #StayTuned.

Cheryl A. Falvey of Crowell & Moring, LLP contributed to this post.



Caution: Your Company’s Biggest Privacy Threat is…the FTC

Technology companies – from startups to megacorporations – should not overlook an old privacy foe: the Federal Trade Commission (FTC).  Since its inception in 2002, the FTC’s data security program has significantly picked up steam.  In the last two years, the FTC has made headlines for its hefty privacy-related fines against Google and photo-sharing social network, Path.  In January 2014 alone, the agency settled with a whopping 15 companies for privacy violations.  What is more, many of these companies’ practices were not purposefully deceptive or unfair; rather, the violations stemmed from a mere failure to invest the time and security resources needed to protect data.

Vested with comprehensive authority and unburdened by certain hurdles that class actions face, the FTC appears poised for more action.  The FTC’s basis for its authority in the privacy context originates from the Federal Trade Commission Act (FTC Act) and is quite broad.  Simply put, it may investigate “unfair and deceptive acts and practices in or affecting commerce.”  In addition to this general authority, the FTC has authority to investigate privacy violations and breaches under numerous sets of rules, including the Children’s Online Privacy Protection Act (COPPA), the Fair Credit Reporting Act including disposal (FCRA), the Gramm-Leach-Bliley Act (GLB), and the Telemarketing and Consumer Fraud and Abuse Prevention Act.  Nor is the FTC hampered by the requirements of private class action litigation.  For example, successful privacy class actions often must establish that consumers were harmed by a data breach (as in In re Barnes & Noble Pin Pad Litigation), consumers actually relied on a company’s promises to keep the information confidential (as in In re Apple iPhone Application Litigation), or the litigation will not be burdened with consumer-specific issues (such as whether the user impliedly consented to the disclosure, as in In re: Google Inc. Gmail Litigation).

The FTC has often focused on companies failing to adhere to their own stated policies, which the FTC considers a “deceptive” practice.  More recently, the FTC settled with the maker of one of the most popular Android apps, “Brightest Flashlight Free.”  While the app informed users that it collected their data, it is alleged to have failed to disclose that the data would be shared with third parties.  And though the bottom of the license agreement offered consumers an opportunity to click to “Accept” or “Refuse,” the app is alleged to have already been collecting and sending information (such as the location and the unique device identifier) prior to receiving acceptance.  Just last week, the FTC settled with Fandango for failing to adequately secure data transmitted through its mobile app, in contravention of its promise to users.  The FTC alleged that Fandango disabled a critical security process, known as SSL certificate validation, which would have verified that its app’s communications were secure.  As another example, the FTC recently settled with the maker of a camera device used in homes for a variety of purposes, including baby monitoring and security.  The device allows its video to be accessed from any internet connection.  The devices are alleged to have “had faulty software that left them open to online viewing, and in some instances listening, by anyone with the cameras’ Internet address.”
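To see why disabling SSL certificate validation matters, it helps to compare a properly validated TLS connection with the validation-disabled pattern at issue in cases like Fandango’s. The sketch below is generic and illustrative only — it is not Fandango’s actual code, and `get_server_cert` is a name we made up — using Python’s standard library:

```python
import socket
import ssl

def get_server_cert(host: str, port: int = 443) -> dict:
    """Open a TLS connection with full certificate and hostname validation."""
    # create_default_context() enables CERT_REQUIRED and hostname checking
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

# The anti-pattern: validation switched off. Any machine between the app and
# the server can now present its own certificate and read or alter the traffic.
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False
insecure_ctx.verify_mode = ssl.CERT_NONE
```

The FTC’s point was that the secure configuration is the default; turning validation off requires an affirmative step by the developer.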

Companies have also been targeted for even slight deviations from their stated policies.  For example, the FTC recently reached settlements with BitTorrent and the Denver Broncos.  The entities were blamed for falsely claiming they held certifications under the U.S.-EU Safe Harbor framework.  In reality, the entities had received the certifications but failed to renew them.  The safe harbor is a streamlined process for US companies (that receive or process personally identifiable information either directly or indirectly from Europe) to comply with European privacy law.  Self-certifying to the U.S.-EU Safe Harbor Framework also ensures that EU organizations know that the organization provides “adequate” privacy protection.

Perhaps most surprising to companies is the FTC’s assertion that it may require them to have reasonable data protection policies in place (even if the company never promised consumers it would safeguard the data).  Failure to secure data, according to the FTC, is an “unfair” practice under the FTC Act.  For example, the FTC recently settled with Accretive Health, a company that handles medical data and patient-financial information.  Among other things, Accretive was alleged to have transported laptops with private information in an unsafe manner, leading to a laptop (placed in a locked compartment of an employee’s car) being stolen.  It is estimated that the FTC has brought over 20 similar cases, but all but one settled before any meaningful litigation.  The one: a case against Wyndham Hotels.  There, the FTC has alleged that Wyndham failed to adequately protect consumer data collected by its member hotels.  According to the FTC, hackers repeatedly accessed the data due to the company’s misconfigured software, weak passwords, and insecure servers.  Though Wyndham’s Privacy Policy did not technically promise that the information would remain secure, the FTC faulted it for the lapse anyway.  Wyndham has challenged the FTC’s position in federal court and a decision is expected soon.

Being a target of an FTC action is no walk in the park.  In addition to paying for attorney fees, the FTC often demands significant remedial measures.  For instance, the company may be asked to (1) create privacy programs and protocols, (2) notify affected consumers, (3) delete private consumer data, (4) hire third-party auditors, and (5) subject itself to continual oversight by the FTC for 20 years.  What is more, if a company ever becomes a repeat offender and violates its agreement not to engage in future privacy violations, it will face significant fines by the FTC.  In this regard, for example, Google was required to pay $22.5 million for violating a previous settlement with the FTC.

All told, technology companies should not feel emboldened by recent class action victories in the privacy context.  To avoid FTC investigation, they should carefully review their data handling practices to ensure that they are in accord with their privacy policy.  Further, they would be wise to invest in the necessary resources required to safeguard data and regularly ensure that their methods are state of the art.




Google Avoids Class Certification in Gmail Litigation

On March 18, 2014, Judge Koh in the Northern District of California denied Plaintiffs’ Motion for Class Certification in the In re: Google Inc. Gmail Litigation matter, Case No. 13-MD-02430-LHK. The case involved allegations of unlawful wiretapping in Google’s operation of its Gmail email service. Plaintiffs alleged that, without obtaining proper consent, Google unlawfully read the content of emails, extracted concepts from the emails, and used metadata from emails to create secret user profiles.

Among other things, obtaining class certification requires a plaintiff to demonstrate that class issues will predominate over individual issues. In this case, Judge Koh’s opinion focused almost exclusively on the issue of predominance. The Court noted that the predominance inquiry “tests whether proposed classes are sufficiently cohesive to warrant adjudication by representation.” Opinion (“Op.”) at 23 (citations omitted). The Court further emphasized that the predominance inquiry “is a holistic one, in which the Court considers whether overall, considering the issues to be litigated, common issues will predominate.” Op. at 24.

The Court in the Gmail litigation noted how the existence of consent is a common defense to all of Plaintiffs’ claims. Consent can either be express, or it can be implied “based on whether the surrounding circumstances demonstrate that the party whose communications were intercepted knew of such interceptions.” Op. at 26. The decision explained how common issues would not predominate with respect to a determination of whether any particular class member consented to Google’s alleged conduct.

The Court briefly addressed whether the issue of express consent could be practically litigated on a class-wide basis, but the opinion focused largely on the issue of implied consent. The Court noted that implied consent “is an intensely factual question that requires consideration of the circumstances surrounding the interception to divine whether the party whose communication was intercepted was on notice that the communication would be intercepted.” Op. at 30. Google contended that implied consent would require individual inquiries into what each person knew. Google pointed to a plethora of information surrounding the scanning of Gmail emails including:  (1) Google’s Terms of Service; (2) Google’s multiple Privacy Policies; (3) Google’s product-specific Privacy Policies; (4) Google’s Help pages; (5) Google’s webpages on targeted advertising; (6) disclosures in the Gmail interface; (7) media reporting of Gmail’s launch and how Google “scans” email messages; (8) media reports regarding Google’s advertising system; and (9) media reports of litigation concerning Gmail email scanning. The Court thus agreed with Google that there was a “panoply of sources from which email users could have learned of Google’s interceptions other than Google’s TOS and Privacy Policies.” Op. at 33. With all these different means by which a user could have learned of the scanning practices (and provided implied consent to the practice), the issue of consent would overwhelmingly require individualized inquiries, thus precluding class certification.

This opinion demonstrates a key defense to class action claims where implied consent is at issue. Any class action defendant’s assessment of risk should include an early calculation of the likelihood of class certification, and that calculation should inform litigation strategy throughout the case. Google consistently litigated the matter to highlight class certification difficulties surrounding consent, and ultimately obtained a significant victory in defeating class certification.



PHI Whack-a-Mole

Well, the newest Ponemon Study confirms what the news keeps reporting:  health care organizations continue to be plagued by PHI security incidents.  In fact, 90 percent of healthcare organizations surveyed reported experiencing breaches.  And 45 percent agree that they have inadequate policies and procedures in place to effectively detect or prevent PHI security incidents.  


So what’s causing these breaches?  According to the Ponemon Institute’s fourth annual Patient Privacy & Data Security Study, patient data security and privacy threats are new and expanding.  For the health care organizations, it is a bit like playing whack-a-mole. First place for types of actual PHI breach goes to lost or stolen computing devices (49 percent), followed by employee error (46 percent), and third-party mishap (41 percent). The rate of data breaches caused by a malicious insider or hacker has doubled from 20 percent of all incidents to 40 percent since the first Ponemon Institute study four years ago.  This last statistic is of special concern to Dr. Ponemon:  “The latest trend we are seeing is the uptick in criminal attacks on hospitals…. With millions of new patients entering the U.S. healthcare system under the Affordable Care Act, patient records have become a smorgasbord for criminals.”  The problem, of course, is that the cybercriminals are becoming more and more sophisticated with their malicious tactics and this is a huge challenge to address for the often financially strapped healthcare organizations.


What’s keeping healthcare executives up at night?  75 percent worry the most about employee negligence; in particular, BYOD (bring your own device).  While 88 percent of the organizations allow employees and medical staff to use their own devices, more than half are not confident that those devices are secure, and 38 percent have no procedures to secure the devices or prevent them from accessing sensitive information.


Business associates (BAs) are worrying these executives as well.  As the healthcare environment expands, with more healthcare organizations reliant on BAs for services such as IT, claims processing and benefits management, greater risks emerge.  Less than one-third of those surveyed expressed confidence that their BAs are protecting patient data in ways mandated by the HIPAA Final Rule.


So, is there any good news?  Are any moles actually getting whacked?  Fortunately, more than half of the health care organizations surveyed (55 percent) believe they do have effective policies and procedures in place to prevent or quickly detect unauthorized patient data access, loss or theft.  This is not unfounded, as the actual number and cost of data breaches have slightly declined from prior study results.  Further, while the 2013 Report noted that 45 percent of the healthcare organizations had more than five incidents in the last two years, this year that percentage declined to 38 percent.  So, slow but steady progress. The challenge, according to Ponemon, is that organizations need to embed a culture of compliance to stem the security risks.  Healthcare organizations must implement tools, processes and software that automate and streamline the practice of protecting PHI—these are the mallets health care organizations need to whack those data security moles.



Google ‘Glasstiquette’ and Other Google Glass Issues

I was sitting at an airport café a few weeks ago, and witnessed a group of friends having fun, speaking and laughing together around a table. Next to them was a middle-aged businessman wearing Google Glass.

One of the members of the group spotted the glasses and started a conversation with their wearer. The businessman obliged, saying he liked them very much, and then said: “Oh, and by the way, I just took a picture of you!”

The mood of the group subtly changed. They politely ended the conversation and then, very slowly, so as not to be rude, all turned their backs to the businessman.

I had just witnessed a Google Glass breach of etiquette, and, for what it’s worth, the culprit did not seem to be aware of it.

Etiquette for a World of Wearable Computers

Do we need new rules to live in a society where the person next to us in line at the supermarket counter, or worse, behind us at the pharmacy while we pick up drugs, may be able to photograph or film us without us even noticing?

Google itself seems to believe we do, as it recently released a guide for ‘Google Glass Explorers,’ people who have been given the opportunity to test Google Glass, which is still not available for sale.

Well, it seems that the businessman I witnessed at the airport broke one of the Do’s of the guide: ‘Ask for permission.’ While this seems like common sense if one is taking a photograph, how could we possibly ask for permission to film a crowd? Should we ask every single person for permission?

Google Glass and Law Enforcement

Knowing the answer to this question may be a matter of personal safety. A woman claimed this week that she was attacked and robbed in a San Francisco bar while wearing Google Glass, and, according to her, it was because she was wearing it.

It seems that she was later able to retrieve her Google Glass. Interestingly, its camera had apparently recorded the incident, showing a man ripping the glasses off her face. Could the recording be used as evidence?

Indeed, Google Glass may be used by law enforcement officials in the near future. New York’s finest, the NY Police Department, is experimenting with two pairs of Google Glass to find out whether they could be used for police work. That could raise some interesting Fourth Amendment issues.

On the other hand, Google Glass may become the pet peeve of police officers in charge of enforcing road security.

A woman got a ticket in San Diego last October for speeding, but also for wearing Google Glass, as Section 27602(a) of the California Vehicle Code forbids driving a motor vehicle if “a television receiver, a video monitor, or a television or video screen, or any other similar means of visually displaying a television broadcast or video signal that produces entertainment or business applications, is operating and is located in the motor vehicle at a point forward of the back of the driver’s seat, or is operating and the monitor, screen, or display is visible to the driver while driving the motor vehicle.”

The case was later dismissed, as the judge did not find enough evidence to prove beyond a reasonable doubt that Google Glass was switched on while its wearer was driving.

Is it safe to wear Google Glass while driving? It may soon be banned by legislators, just as texting and using a telephone while driving are forbidden in most states. Some states, such as Illinois, have already introduced bills to that effect. It remains to be seen whether they will be enacted; Reuters reported this week that Google is lobbying to stop these bills from becoming law.

Facial Recognition and Google Glass

Even though Google has not (yet) developed a facial recognition app for Google Glass, another company, NameTag, has done so. It would allow Google Glass wearers to pick a suitable date from a crowd of strangers by scanning the social media information of people around them. That triggered concerns from Senator Al Franken (D-Minnesota), who sent a letter on February 5th to the President of NameTag, expressing his “deep concerns” about the new app and urging him to delay its launch “until best practices for facial recognition technology are established.”

It is not the first time legislators have expressed concerns over the privacy issues raised by Google Glass, and it probably won’t be the last.



Washington State May Soon Regulate Personal Information Collection by Drones

Two Washington State bills address the issue of government surveillance using drones and the potential negative impact it could have on privacy.

The first bill, HB 1771, is a bipartisan bill sponsored by Rep. David Taylor, R-Moxee, which was introduced last year. It calls drones “public unmanned aircraft systems.”

HB 2789 is also sponsored by Rep. David Taylor. It calls drones “extraordinary sensing devices,” and its Section 3(1) would require government use of drones to be “conducted in a transparent manner that is open to public scrutiny.”

Calling drones “devices” instead of “aircraft” has significance for a State famous for its aeronautics industry.  Indeed, while HB 1771 passed the House last week, HB 2789 still lingers in Committee.

A Very Broad Definition of Personal Information

HB 2789 and HB 1771 both define “personal information” quite broadly: it would encompass not only a social security or I.D. number, but also “medical history, ancestry, religion, political ideology, or criminal or employment record.”

Interestingly, it would also encompass information that can be “a basis for inferring personal characteristics” such as “the record of the person’s presence, registration, or membership in an organization or activity, or admission to an institution” or even, “things done by or to such person,” a definition that is so broad that it may encompass just about anything that ever happens to an individual. This definition recognizes that drone surveillance allows for a 24/7 surveillance society.

Personal information also includes intellectual property and trade secret information.

Illegal Collection of Data by Drones Must be “Minimized”

Under section 4 of HB 2789, disclosure of personal information acquired by a drone must be conducted in a way that minimizes unauthorized collection and disclosure of personal information. It reprises the wording of Section 5 of HB 1771, only replacing “public unmanned aircraft system” with “extraordinary sensing device.”

I am not sure that I interpreted section 4 correctly, so here is the full text:

All operations of an extraordinary sensing device or disclosure of personal information about any person acquired through the operation of an extraordinary sensing device must be conducted in such a way as to minimize the collection and disclosure of personal information not authorized under this chapter.

So the standard is not complete avoidance of unauthorized collection of personal information, but instead minimization of illegal collection. The wording may reflect the legislature’s understanding that, because of the enormous volume of data that drones may potentially collect, including “things done by or to such person,” it would be unrealistic to set a standard of complete avoidance of data collection.

Maybe this “minimizing” standard set by HB 1771 and HB 2789 is a glimpse of the standards of future data protection law…

Warrant Needed to Collect Personal Information by Drones

Under Section 5 of HB 2789, a drone could collect personal information pursuant to a search warrant, which could not exceed a period of ten days.

The standard to obtain a warrant under Section 5(3)(c) of HB 2789 and Section 6(2)(c) of HB 1771 would be “specific and articulable facts demonstrating probable cause to believe that there has been, is, or will be criminal activity.”

Under Section 5(3)(d) of HB 2789, a petition for a search warrant would also have to include a statement that “other methods of data collection have been investigated and found to be either cost prohibitive or pose an unacceptable safety risk to a law enforcement officer or to the public.”

So drones should, at least for now, still be considered an extraordinary method to be used in criminal investigations.  Such a statement would not be necessary, though, under HB 1771.

A warrant could not exceed ten days under Section 5(5) of HB 2789, but could not exceed 48 hours under Section 6(4) of HB 1771; HB 1771 would thus be much more protective of civil liberties. However, as we saw, it is unlikely that HB 1771 will ever be enacted into law.

Warrant Not Needed in Case of an Emergency

Both bills would authorize some warrantless use of drones.

Under Section 7 of HB 2789, a warrant would not be needed if a law enforcement officer “reasonably determines that an emergency situation exists [involving] criminal activity and presents immediate danger of death or serious physical injury to any person,” and that the use of a drone is thus necessary.

Under Section 8 of HB 1771, it would only be necessary for the law enforcement officer to “reasonably determine that an emergency situation exists that involves immediate danger of death or serious physical injury to any person” which would require the use of drone, without requiring a pre-determination of criminal activity.

But even if an emergency situation does not involve criminal activity, section 8 of HB 2789 allows for the use of drones without a warrant if there is “immediate danger of death or serious physical injury to any person,” which would require the use of drones in order “to reduce the danger of death or serious physical injury.”

However, such use would only be authorized if it could be reasonably determined that such use of drones “does not intend to collect personal information and is unlikely to accidentally collect personal information,” and also that such use is not done “for purposes of regulatory enforcement.”

Both bills require that an application for a warrant be made within 48 hours after the warrantless use of a drone.

Fruits of the Poisonous Drone

Under section 10 of HB 2789 and section 10 of HB 1771, no personal information acquired illegally by a drone, nor any evidence derived from it, could be used as evidence in a court of law or by state authorities.

Handling Personal Information Lawfully Collected

Even if personal information has been lawfully collected by drones, such information may not be copied or disclosed for any other purpose than the one for which it has been collected, “unless there is probable cause that the personal information is evidence of criminal activity.”

If there is no such evidence, under section 11 of HB 2789 the information must be deleted within 30 days if it was collected pursuant to a warrant, and within 10 days if it was incidentally collected; under section 11 of HB 1771, it would have to be deleted within 24 hours.

Drone regulation is a new legal issue, but Washington would not be the first State to regulate it. Many other States have introduced similar proposals, often unsuccessfully. But Florida, Idaho, Illinois, Montana, Oregon, Tennessee, Texas and Virginia have all enacted laws regulating the use of drones for surveillance purposes, and North Carolina has enacted a two-year moratorium. It remains to be seen if and when federal legislation will be enacted.