Behavioral targeting is the process of collecting consumer online data by placing a persistent cookie on the consumer’s computer, which then tracks their cyber wanderings. This data is collected and aggregated, allowing marketers to fine-tune which online advertisements are likely to interest a particular consumer.
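To make the mechanism concrete, here is a minimal sketch of how an ad network might issue such a persistent identifier, using Python's standard library. The domain and cookie name are hypothetical; the point is that one `Set-Cookie` response header plants an ID the browser will echo back on every later request, letting visits be linked together.

```python
from http.cookies import SimpleCookie
import uuid

# Hypothetical ad-network response: assign a random, persistent
# identifier so later requests from this browser can be correlated.
cookie = SimpleCookie()
cookie["tracker_id"] = uuid.uuid4().hex          # random ID, illustration only
cookie["tracker_id"]["domain"] = ".ads.example.com"  # any page embedding this domain
cookie["tracker_id"]["path"] = "/"
cookie["tracker_id"]["max-age"] = 60 * 60 * 24 * 365  # persists for a year

# The header the ad server would emit; the browser stores the cookie
# and sends it back with each subsequent request to the domain.
header = cookie.output(header="Set-Cookie:")
print(header)
```

The "persistent" part is simply the long `Max-Age`: without it, the cookie would vanish when the browser closes, and the tracking profile with it.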
The Washington Post reported this week that the House Subcommittee on Commerce, Trade, and Consumer Protection is planning a hearing in early December on Internet privacy. The hearing will include testimony from Web firms on the idea of a Do Not Track registry.
H.R. 5777, the Best Practices Act, is a bill introduced by subcommittee Chairman Bobby Rush, which would require entities collecting or storing data containing personal or sensitive information to inform consumers about which information is collected and for what purpose. However, the bill would provide an exception for entities either storing this information about fewer than 15,000 individuals, or collecting this information about fewer than 10,000 individuals during any 12-month period.
The idea of a “Do Not Track” registry is not new in Washington. Several consumer groups advocated as early as 2007 for the Federal Trade Commission (FTC) to create a Do Not Track registry. Just as the FTC created a “Do Not Call” list to protect consumers against unwanted phone calls by allowing them to opt out, a “Do Not Track” registry would allow consumers to opt out of having their data collected when surfing the Web.
However, FTC Chairman Jon D. Leibowitz mentioned last July, during a U.S. Senate committee on commerce, science, and transportation hearing on consumer online privacy, that the FTC was assessing whether it was technologically feasible to implement a “Do Not Track” system through browsers. Such a system would allow consumers to easily opt out of cookies tracking their cyberspace activities, and would be run either through the FTC or through a private-sector entity (see archived webcast @ 58:50). So the FTC seems to advocate a technological solution, not the creation of a registry.
Should browsers be the solution to protect consumers against online tracking? Researchers at the Stanford Law School Center for Internet and Society and the Security Laboratory at the Stanford Department of Computer Science introduced “DoNotTrack.us,” a universal Web tracking opt-out system that would work by adding an HTTP header to browser requests, indicating that the user does not wish to be tracked.
The researchers note on the site that “[c]ompliance with Do Not Track could be purely voluntary, enforced by industry self-regulation, or mandated by state or federal law. We do not take a position on these alternatives.”
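The mechanics of the proposal are strikingly simple: the browser attaches one extra header to every outgoing request, and honoring it is left entirely to the receiving site or ad network. A sketch in Python showing what such a request would look like (the Stanford prototype used a header along the lines of `X-Do-Not-Track`; `DNT: 1` is the name the effort later converged on):

```python
import urllib.request

# Build a request carrying the opt-out preference. A browser
# implementing the proposal would add this header to every request.
req = urllib.request.Request("http://www.example.com/",
                             headers={"DNT": "1"})

# urllib normalizes header names to capitalized form internally.
dnt_value = req.get_header("Dnt")
print(dnt_value)  # the preference the server is asked to honor
```

Because the header merely expresses a preference, its effect depends on the enforcement question the researchers leave open: voluntary compliance, self-regulation, or legal mandate.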
This issue is also currently debated in the European Union. The Article 29 Working Party Group (WP29) adopted Opinion 2/2010 on online behavioral advertising. It noted that Article 5(3) of the 1995 Data Protection Directive “requires obtaining informed consent to lawfully store information or to gain access to information stored in the terminal equipment of a subscriber or user.… [T]racking cookies are ‘information’ stored in the data subject’s terminal equipment and… they are accessed by advertising network providers when data subjects visit a partner website…. Hence, any storage of cookies… and any subsequent use of previously stored cookies to gain access to data subjects’ information will have to comply with Article 5(3).” The WP29 also stated that “consent must be obtained before the cookie is placed and/or information stored in the user’s terminal equipment is collected… and… informed consent can only be obtained if prior information about the sending and purposes of the cookie has been given to the user.”
We may see both the U.S. and the E.U. adopt a technological solution. They may differ, though, on whether such a solution should be enforced by governments or remain a best practice. The debate is still open on both sides of the Atlantic.
The Equal Employment Opportunity Commission (EEOC) has issued a final rule to implement Title II of the Genetic Information Nondiscrimination Act of 2008 (GINA), which takes effect January 10, 2011. These regulations, which include a section-by-section analysis of GINA, amend 29 CFR chapter XIV by adding part 1635.
GINA was enacted to protect job candidates and employees against discrimination based on their genetic information, and to restrict acquisition and disclosure of this information. Title II of GINA required the EEOC to issue implementing regulations, and these have just been published in the Federal Register.
Section 1635.9(c) deals with GINA’s relation with HIPAA Privacy Regulations. GINA section 206(c) provides that Title II of GINA does not apply to uses and disclosures of health information governed by the HIPAA Privacy Rule. Therefore, states section 1635.11(d), “entities subject to the HIPAA Privacy Rule must continue to apply the requirements of the HIPAA Privacy Rule, and not the requirements of GINA Title II and these implementing regulations, to genetic information that is protected health information. For example, if a hospital subject to the HIPAA Privacy Rule treats a patient who is also an employee of the hospital, any genetic information that is obtained or created by the hospital in its role as a health care provider is protected health information and is subject to the requirements of the HIPAA Privacy Rule and not those of GINA.”
However, if the covered entity acts as an employer, any genetic information obtained by the entity in its capacity as an employer is subject to GINA Title II and the EEOC rule.
The final rule also slightly modifies the language of GINA on Purpose, following comments made by the American Civil Liberties Union, Coalition for Genetic Fairness, Genetic Alliance and the Genetics and Public Policy Center (see Section 1635.1). GINA used to refer to the “deliberate acquisition” of genetic information as being prohibited, but this reference has been removed, as the EEOC agreed with these organizations that a covered entity may violate GINA even without having a specific intent to acquire information. Some organizations had pointed out that a covered entity may engage in acts that would present a heightened risk, “even without a specific intention to do so, such as when they… access sources of information (e.g., certain types of databases, Web sites or social networking sites that are likely to contain genetic information about individuals).”
Indeed, there has been quite a bit of reporting lately on the use by employers, including police departments, of social media sites to gather information about job candidates. How should we interpret the wording of the regulations, “likely to contain genetic information about individuals”? GINA section 201(4) defines genetic information as information from genetic tests, genetic tests of family members, family medical history, and an individual’s or a family member’s request for or receipt of genetic services. So posting on a wall a message such as “Mom just got tested for the breast cancer gene” qualifies as “genetic information” under GINA. Therefore, every single social networking site is “likely to contain genetic information about individuals.”
However, Section 1635.8(b) provides for several inadvertent acquisition exceptions to the general prohibition on acquiring genetic information. One of these inadvertent acquisition exceptions is stated in Section 1635.8(b)(4)(ii)(D), and applies when a “manager, supervisor, union representative, or employment agency representative inadvertently learns genetic information from a social media platform which he or she was given permission to access by the creator of the profile at issue (e.g., a supervisor and employee are connected on a social networking site and the employee provides family medical history on his page).” Therefore, whether the employee or the candidate has accepted the covered entity (or one of its agents) as a “friend” or a contact will determine whether the acquisition of genetic information on a social networking site will be inadvertent or not. Should this be one more reason not to “friend” your employer?
It had been widely anticipated that Facebook would announce today a Facebook email service, similar to gmail.com or yahoo.com. Almost, but not quite.
Instead, Mark Zuckerberg, who founded the company, announced today at a conference broadcast live on Facebook a new Facebook messaging service designed to allow users to “seamlessly integrate” all the messages they send across different channels: texts and SMS, emails, and IMs. As Mr. Zuckerberg described it, “it is a messaging system that includes email as part of it.” Mr. Zuckerberg was careful to point out that it is not a Facebook email messaging service, nor was it designed to rival Gmail. Actually, according to the founder of Facebook, the modern messaging system will not be email.
From Facebook’s blog:
“You decide how you want to talk to your friends: via SMS, chat, email or Messages. They will receive your message through whatever medium or device is convenient for them, and you can both have a conversation in real time. You shouldn’t have to remember who prefers IM over email or worry about which technology to use. Simply choose their name and type a message.”
Some features of the new product resemble email. Indeed, Facebook will provide an email address, firstname.lastname@example.org. Users will be able to send and receive messages to and from everybody, regardless of whether people are their “friends” or even Facebook users. Users will be able to forward messages and add people to a conversational thread. The system will support sending file attachments.
The email service provided by Facebook will be integrated in the user’s account, and Mr. Zuckerberg noted that synching this service with other email systems “is on the roadmap.” How will the privacy of the messages sent through this new messaging service be protected? This was not specifically addressed during the press conference.
However, answering a question from the audience, Mr. Zuckerberg said that the new service will not target advertisements based on the content of the message. Also, people will be able to decide which information will not be stored.
One remembers that, when Google unveiled Buzz, its social networking service, in February 2010, it opted all Gmail users in to Buzz, and allegedly made private data belonging to Gmail users publicly available without their knowledge or authorization, leading to a class action privacy lawsuit.
It remains to be seen if users will be satisfied that the level of privacy and controls offered by this new service is sufficient for them to entrust it with all their daily messages.
The Obama administration recently announced that it is preparing a report that will be issued by the U.S. Commerce Department regarding Internet privacy regulation. The Commerce Department’s report is intended to outline the Obama administration’s approach to regulating Internet privacy and steps that should be taken to protect consumers’ online privacy. While the report purportedly does not recommend specific legislation, it does indicate that self-regulation is not as robust and effective as the administration believes privacy protection should be and that Internet privacy protection should be strengthened. The Commerce Department’s report is expected to be released in the next few weeks.
This announcement follows the creation by the White House of a National Science and Technology Council Subcommittee on Privacy and Internet Policy, comprised of representatives from federal departments, agencies, and offices, including the Department of Commerce, the Federal Trade Commission (“FTC”), and the Federal Communications Commission (“FCC”). In addition to the Obama Administration’s efforts regarding privacy protection, the FTC has indicated that it will release a comprehensive report by the end of the year regarding the “Exploring Privacy” roundtables that were hosted by the Commission in fall 2009 and early 2010. The report will also contain recommendations for privacy protection and changes to the FTC’s privacy protection framework. Further, Representative Joe Barton (R-TX), currently the ranking minority member of the House of Representatives Committee on Energy and Commerce, has indicated that he intends to support tougher Internet privacy policies when Congress begins its January 2011 session.
On November 8, 2010, the Connecticut Insurance Commissioner announced that the Connecticut Insurance Department has reached a settlement with Health Net of Connecticut (“Health Net”) over Health Net’s actions during a 2009 data breach affecting approximately 500,000 Connecticut residents. The Insurance Department alleged that Health Net failed to safeguard personal health information and other personally identifiable information, including Social Security numbers and bank account numbers, of Connecticut residents from misuse by third parties and failed to timely notify affected residents of the data breach. The settlement requires Health Net to pay $375,000 in penalties. Since the 2009 data breach, Health Net has also provided credit monitoring protection for two years to affected Connecticut residents and agreed to improve data and information security standards to better protect information from unauthorized disclosure. Health Net’s settlement with the Insurance Department is in addition to a settlement between Health Net and the Connecticut Attorney General reached in July 2010, which required Health Net to pay the state $250,000. The Connecticut Attorney General had alleged that Health Net violated Connecticut data breach and data safeguard laws, as well as the federal Health Insurance Portability and Accountability Act (“HIPAA”).
On November 4, 2010, the European Commission (“Commission”) released proposed revisions to the European Union’s (“EU”) Data Protection Directive. The purpose of the Commission’s proposed revisions is to “set out a strategy on how to protect individuals’ data in all policy areas, including law enforcement, while reducing red tape for business and guaranteeing the free circulation of data within the EU.” Key recommendations include:
- Requiring entities that collect consumers’ personal information to inform consumers “in a clear and transparent way” about how their data will be used;
- Providing consumers the “right to be forgotten” by allowing consumers to fully delete digital information, such as social networking profiles, upon request or after there is no longer a legitimate purpose to retain the data;
- Ensuring consumers are guaranteed data portability, which would allow a consumer to transfer his or her personal data (e.g., e-mail lists, photos, documents) from one application or service to another application or service without hindrance from the data controllers;
- Requiring entities that experience a data breach affecting personal information to notify individuals whose information is compromised;
- Strengthening penalties for violations of the EU’s privacy rules, such as imposing criminal penalties for serious law breaches and allowing data protection authorities and civil society associations to bring an action for violations of data protection provisions before the national courts; and
- Expanding the role of the EU’s Article 29 Working Party—a committee comprised of data protection authorities from the EU’s 27 member states—to help ensure that EU privacy laws are applied consistently across member countries.
Businesses, privacy advocates, and other interested parties can submit comments regarding the Commission’s proposed privacy law changes until January 15, 2011. More information regarding the Commission’s proposed revisions can be found in the Commission’s press release announcing its changes.
Yesterday, Google emailed its Gmail users to announce that a settlement had been reached in the class action suit over Google Buzz. Google launched its social networking site, Buzz, in February 2010, and automatically enrolled all Gmail users in Buzz. A lawsuit was filed by some Buzz users alleging that Buzz exposed personal data, including personal contact information, place of residence, occupation and users’ most frequent Gmail contacts, without appropriate user consent. The complaint also states that Buzz located pictures, video, text and other data that its Gmail users had posted to other websites such as YouTube and automatically sent those posts to the email accounts of users’ frequent email contacts without the users’ knowledge or consent.
Last Friday, the Indiana Attorney General’s office announced that it filed suit against WellPoint, Inc. for failure to comply with Indiana’s notice of breach law. The suit alleges that WellPoint did not notify 32,000 affected customers or the Attorney General’s office in a timely manner. Indiana’s law requires that breached entities provide notice to affected individuals and the Attorney General’s office without unreasonable delay. Ind. Code § 24-4.9.
The Attorney General alleges that between October 2009 and March 2010, WellPoint used an unsecured website in order to accept applications for insurance. These applications asked for individuals’ social security numbers, financial information and health records. Because the website was not secure, this information was available to the public online.
The complaint also states that WellPoint was notified on February 22, 2010 and March 8, 2010 that these applications and personal information were viewable on its website. However, WellPoint did not start notification of affected individuals until June 18, 2010. The Attorney General is seeking $300,000 in civil penalties.