Posts Tagged ‘Privacy’

CWAG Panel touches on the challenges of Privacy 3.0

Yesterday, the Conference of Western Attorneys General (“CWAG”) hosted a superb panel entitled “Privacy 3.0 – Emerging Enforcement & Policy Issues” at their annual meeting in Santa Fe, NM.  Featured on the panel were FTC Commissioner Julie Brill, Assistant AG Shannon Smith of the Washington Attorney General’s Office, Professor Chris Hoofnagle of UC Berkeley Law School and Professor Paul Ohm of the University of Colorado’s Law School.

The panelists discussed the enforcement approach to privacy and data security in the 1.0 (notice and choice) and 2.0 (harm-based analysis) eras – and how this approach may need to change in the current age given continuing challenges: the emergence of scholarship showing that “anonymization” is a fallacy, the continuing struggle to create clarity around key terms used in privacy, and the need to educate consumers about basic privacy concepts.  The panel also discussed the States’ approach to some of these developments – such as the Massachusetts data law.

You can view the full webcast on CWAG’s site.

Disclosure: I worked with CWAG to help pull this panel together.


Tipping Towards an Opt-In

A few years ago, I became an instant fan of Malcolm Gladwell’s groundbreaking book – “The Tipping Point” – which builds on an epidemiological theory that, in aggregate, “little things” can make a big difference.  Since then, I’ve observed the phenomenon play out on the policy stage several times – financial reform and healthcare are two immediate examples that come to mind – and I wonder whether the theory has any application to what’s currently happening with online privacy.  I think it does – particularly if you view a tipping point in scientific terms, i.e. the point at which an object is displaced from a state of stable equilibrium by a series of successive events into a new equilibrium state qualitatively different from the first.

To say that online privacy was ever in a state of stable equilibrium is a stretch.  We are, however, approaching the end of the current era in online advertising and marketing – an era in which companies captured personal and confidential data from users, and then monetized that data to sell ads back to those very same users, often without the users’ authorization or knowledge. That equilibrium has been threatened by many events in the last few months – market developments, consumer outcry and regulatory attention all converging to catapult data privacy and security onto the national agenda and into the mainstream press.  Some commentators, such as Jeff Chester, have characterized these events as a perfect storm; I see them a bit differently – not a storm, but a series of occurrences that finally “tipped” the issue, as companies attempted to push the privacy envelope with various features that compromised users’ privacy (and in some cases their express wishes to keep their data private).  Each of these features involved sharing data with a third party, and not surprisingly, each triggered a privacy outcry – because none provided a meaningful way for users to opt out before personal data was exposed.

It’s amazing to think that most of these pivotal events only happened during the last three months.  To recap:

February 9, 2010 – Google launches Google Buzz, and overnight, transforms users’ Gmail accounts into social networking pages, exposing personal contacts.  Google later remedies the situation by making the feature opt-in.

April 27, 2010 – Four Democratic Senators, led by Chuck Schumer of New York, send a letter to Facebook CEO Mark Zuckerberg complaining about the privacy impact of Facebook services, including its instant personalization feature (which exposed user profile data without authorization at launch).  Senator Schumer follows up his letter with a formal request urging the FTC to investigate Facebook.  Facebook eventually announces new privacy controls.

May 5, 2010 – EPIC and a coalition of other advocacy organizations file this complaint, urging the FTC to investigate Facebook.  In the complaint, they assert that “Facebook’s business practices directly impact more American consumers than any other social network service in the United States.”

May 14, 2010 – Google announces, via a post on their policy blog, that their Street View cars have inadvertently been capturing payload data from open WiFi networks – in violation of US, European and other global data protection laws – for over three years.

May 21, 2010 – The Wall Street Journal reports that a group of social networking sites – including Facebook, MySpace and Digg – routinely share user profile data with advertisers, despite public assurances to the contrary.

The result? With each successive product or feature launch, the privacy debate is now tipping towards a privacy regime that could be much stricter than anything we’ve seen before – a requirement that companies get a user’s affirmative opt-in to any use of personal data for advertising and marketing purposes.

Privacy nerds may want to revisit the words of David Vladeck, head of the FTC’s Bureau of Consumer Protection, in a New York Times interview that took place last August, i.e. before the privacy mishaps of the last three months.  When asked whether the FTC would mandate an opt-in standard for user disclosures, Mr. Vladeck responded:

“The empirical evidence we’re seeing is that disclosures on their own don’t work, particularly disclosures that are long, they’re written by lawyers, and they’re written largely as a defense to liability cases. Maybe we’re moving into a post-disclosure environment. But there has to be greater transparency about what’s going on. Until I see evidence otherwise, we have to presume that most people don’t understand, and the burden is going to be on industry to persuade us that people really are well informed about this.”

The emphasis on transparency becomes even more important with the impending rollout of the FTC’s privacy framework this summer.  Will the FTC make an affirmative opt-in mandatory in all instances where personal data is being shared with a third party?  Clearly, an opt-in is one of the best ways to ensure transparency, and to give users meaningful notice about what data is being collected.  The question is whether an opt-in requirement would be so cumbersome that it would turn users off the service altogether.  For instance, would an opt-in be required just once – before the feature first launches – or each successive time it is used?

Also, it’s unclear whether the FTC’s framework will derive strength (or weakness) from a federal privacy law, if such a law does indeed pass this session.  Critics on both sides have mostly panned the House legislation, i.e. the Boucher-Stearns bill, but there is news of another, more stringent bill being drafted by Senator Schumer, who, as outlined earlier, reached his tipping point with Facebook.

I saved my most important “little thing” for last. Even if you don’t believe the privacy debate has reached a tipping point yet, consider this: in June, the Supreme Court will issue its decision in City of Ontario v. Quon. This is the first time the Supremes have considered the crucial question of what expectation of privacy users have in their electronic communications.  Their decision will most likely impact any regulatory or legislative scheme around privacy currently being proposed by the federal agencies or Congress.  Most importantly, a Supreme Court decision that finds an expectation of privacy in electronic communications will almost certainly translate into increased obligations for companies that deal in these types of communications and data.  A tipping point?  Absolutely.  In fact, such a decision would signal something much bigger (to quote another popular book title) – a Game Change for advertising and marketing on the web.

Note to Facebook: Privacy is Personalization Too

April 29, 2010

Just last week, Facebook introduced “instant personalization” – a feature that extends your Facebook experience via familiar Facebook “plug-ins” (Activity Feed, the “Like” and “Recommend” buttons) to partner websites such as Yelp.  Already, the feature is drawing much criticism – this time, from a quartet of Democratic senators – Begich (AK), Bennet (CO), Franken (MN) and Schumer (NY) – who are urging Facebook to change its policies on sharing user data with third parties. Their letter to Facebook founder Mark Zuckerberg highlights three main concerns: Facebook’s continually elastic definition of what it considers personal information, the storage of Facebook user profile data with advertisers and other third parties, and the aforementioned “instant personalization” feature. The Senators acknowledge the FTC’s role in examining the issue but also advocate that Facebook take “swift and productive steps to alleviate the concerns of its users” while FTC regulation is pending.  On Monday, Senator Schumer followed up with an open letter urging the FTC to investigate Facebook’s privacy practices.

Instant personalization is the latest Facebook feature to draw flak for its perceived impact on privacy.  It’s actually a very cool technology, designed for people who want to publicly share their likes and dislikes with their Facebook network.  It works by sharing certain Facebook profile data with a partner site.  The feature is personalization defined – every user who uses Facebook plug-ins on a partner site will have a different experience based on who their Facebook friends are.

A recent post on the Facebook blog describes the process:

“At a technical level, social plugins work when external websites put an iframe from [Facebook.com] on their site—as if they were agreeing to give Facebook some real estate on their website. If you are logged into Facebook, the Facebook iframe can recognize you and show personalized content within the plugin as if the visitor were on [Facebook.com] directly. Even though the iframe is not on Facebook, it is designed with all the privacy protections as if it were (emphasis added).”

Note the last sentence of that excerpt – it seems to suggest that as a Facebook user, you don’t have to worry about privacy whether you are on Facebook or using a Facebook plug-in on another site.  So what’s the flap about?  What is fueling the concerns with Facebook’s privacy practices – from the letter from the Democratic Senators, to ongoing concerns from EPIC, to this thoughtfully penned article from a Facebook user who also happens to work for PC World?

I think it comes down to notice – especially to users.  Facebook debuted “instant personalization” as an opt-out feature that automatically exposed a user’s Facebook profile data to partner sites. This has raised concerns with regulators, and with certain Facebook users too – just take a look at the comments to this recent Facebook blog post on the topic. To further complicate things, Facebook makes it particularly difficult to opt out of the instant personalization features.

With this latest move, Facebook reaches outside its walled garden to extend its reach across the web – I almost think of it now as the world’s largest social platform (not network).  Consider, for instance, that it took Microsoft nearly twenty years to reach an install base of 1 billion for Windows; Facebook, now approaching 500 million users, will probably reach that number in less than a decade. As Facebook continues to evolve its platform strategy, its processes – particularly around informing users about what it plans to do with their profile data – must be better defined.  I think this goes beyond a static privacy policy – it may even involve engaging select users at the beta stage to identify privacy concerns in advance (like whether to launch a feature as opt-in or opt-out).  Earning the trust of your ecosystem is essential for any platform company, and when it comes to Facebook, users are an essential part of the ecosystem equation.

For the most part, Facebook users divulge data about themselves with the expectation that this data will be used on Facebook only; sharing that same data with other sites, even if it’s via a Facebook plug-in, is clearly not part of that expectation.  If Facebook wants to use user profile data for secondary purposes, it should first get the user’s permission to do so.  Such a system honors a user’s privacy preferences – which are also a personalization of sorts.  And when it comes to privacy, Facebook should be doing everything it can to ensure that this type of personalization is preserved.

Today at the ABA: Expanding the FTC’s Role through Financial Reform

April 22, 2010

I have also posted this entry to the ABA’s Secure Times blog.

The big question being debated at this morning’s session on financial reform legislation and the proposed Consumer Financial Protection Agency/Bureau: how will the legislation impact the FTC’s authority, both in terms of rulemaking and imposition of civil penalties?

In December 2009, the House passed the “Wall Street Reform and Consumer Protection Act of 2009” (HR 4173).  An important provision in the bill would strip the FTC of its powers to regulate consumer financial protection – while also expanding the agency’s powers in two key ways: first, by giving the FTC “APA” rulemaking authority for areas that fall within its jurisdiction, and second, by giving the agency greater latitude to assess civil penalties for unfair and deceptive practices.

These amendments will surely impact FTC enforcement of online advertising, marketing, privacy, and data security.  For instance, violations under the FTC’s expanded authority could trigger civil penalties even in the absence of an FTC order. Civil penalties could also be assessed in antitrust cases brought by the FTC that include a consumer protection claim.

In addition, the HR 4173 language that expands the FTC’s authority would impose liability on companies that “substantially assist” in an unlawful act, even if the company does not have direct knowledge of, or responsibility for, the violation.  This provision will probably raise some serious concerns for companies currently enjoying a safe harbor under the Communications Decency Act.

Today, FTC rulemaking jurisdiction comes in two flavors – “APA” rulemaking under certain laws as prescribed by Congress, e.g. the Children’s Online Privacy Protection Act, and general rulemaking authority under the 1975 Magnuson-Moss Act.  Under the latter, the FTC can only regulate “prevalent” unfair and deceptive acts, and must justify that regulation with “substantial evidence.”  The key difference between these two types of rulemaking emerges during judicial review: a court can overturn an FTC regulation under Magnuson-Moss if the rule lacks a substantial evidentiary record to support it.  In contrast, FTC regulations enacted under the APA rulemaking scheme, such as those implementing COPPA, can only be overturned if the agency was “arbitrary or capricious” in enacting the rule – a much higher standard. As former FTC Chairman Muris explained in his presentation at the panel, Magnuson-Moss gives the FTC authority to act only when a problem occurs often enough to justify a rule, or when a problem has a common cause in a sufficient number of cases.

Current FTC Chairman Jon Leibowitz, supported by President Obama and the Administration, has strongly advocated for an expansion of the FTC’s authority, stating that it is “critical” for the FTC to carry out its mission of protecting consumers.  In particular, Leibowitz has argued that the procedural requirements of Magnuson-Moss – such as the requirement that a practice be prevalent before the agency can act – make FTC rulemaking more burdensome than at most other federal agencies. Although the relevant amendments expanding the FTC’s power are missing from the Senate version of the legislation, it is widely expected that these differences will be worked out in conference.  Financial reform legislation appears to be on a fast track – earlier today, a Senate panel approved the bill, and both Republicans and Democrats have indicated that passage is likely.

The CFPA would be a new independent federal agency – the composition of which would vary depending on whether you are looking at the House (5 members and a Director for two years) or Senate Bill (5 members).  Its enactment would strip the FTC and other federal banking agencies of their federal consumer protection powers under a number of laws, including the Electronic Funds Transfer Act, the Equal Credit Opportunity Act, the Fair Credit Reporting Act, the Fair Debt Collection Practices Act, the Home Mortgage Disclosure Act, the Real Estate Settlement Procedures Act, the Secure and Fair Enforcement for Mortgage Licensing Act, the Truth in Lending Act and the Truth in Savings Act.   In short, any product or service that results from or is related to engaging in a financial activity and that is to be used by a consumer “primarily for personal, family or household purposes” will come under the new agency’s purview.

At today’s session, we saw differing viewpoints from both Tim Muris, former FTC Chairman, and Julie Brill, incoming FTC Commissioner, on this current push to expand the FTC’s authority under financial reform legislation.

Former Chairman Muris views the FTC’s current role as important, and he sees FTC rulemaking as relevant in certain areas – e.g. the do-not-call rules.  He is concerned about the current proposals to expand the FTC’s authority because the agency often lacks industry-specific knowledge and expertise (I see this most recently in the area of privacy, where the FTC is currently gleaning this knowledge through its Exploring Privacy roundtable series). Muris also thinks the agency’s rulemaking authority under Magnuson-Moss is more than sufficient, as it imposes an obligation on the agency to be clear about its proposed theories while focusing its evidence on key questions.  He cites the agency’s recent business opportunity rulemaking as an instance where the FTC initially proposed a broad rule that would have swept in legitimate businesses along with fraudulent ones.  The FTC eventually narrowed its proposed business opportunity rule after the public comment process.

On civil penalties, Muris thinks these are appropriate only when a company violates an FTC order or rule.  He sees blanket civil penalty authority as a mistake that may have unintended consequences – such as a penalty’s impact on a firm’s stock price.  He’s also concerned that the standard of review laid out in the financial reform legislation will return the FTC’s definition of unfairness to its pre-1994 definition, i.e. the Sperry & Hutchinson or “cigarette rule,” which defines an unfair practice as one that is injurious to consumers, violates established public policy, or is unethical or unscrupulous.  As many know, Congress amended the FTC Act in 1994 to specify that an unfair act or practice is one that causes or is likely to cause substantial injury to consumers that is not reasonably avoidable and is not outweighed by countervailing benefits to consumers or competition.

Providing a counterpoint to Muris’ remarks, FTC Commissioner Julie Brill, speaking “on behalf of herself,” is generally in favor of expanding the FTC’s authority.  She sees the FTC as both a law enforcement and a regulatory agency.  She views civil penalties as just “one of the arrows” in the FTC’s quiver – not to be used in every instance, but as appropriate.  As a law enforcer, she does not see the FTC’s request for civil penalty authority as unusual, since most state AGs already have this type of authority.  To view such penalties as “automatic” is particularly misleading to her, since the FTC would only be able to obtain them after judicial review in court. On the regulatory side, she notes that APA rulemaking is enjoyed by most other federal agencies, and points out that APA rules under the proposed amendments would likewise be subject to review by a judge in court. Finally, she views civil penalties as helpful in quantifying equitable remedies to compensate consumers for their injury – e.g. disgorgement or restitution for data breach violations.

Taking a broader view of the situation, Brill sees an expansion of the FTC’s authority as a way to make the agency’s enforcement efforts more effective – which benefits both consumers and competition in the long run. She also feels that consumers want an agency that has the right enforcement tools – not an “emasculated” FTC – and finds it surprising that the issue is even being debated, given the events of the financial meltdown and the current economic recession.

On the subject of FTC regulation, Brill is strongly in favor of an update, noting that rulemaking under Magnuson-Moss can often take eight to ten years.  She recalls comments she made on the hearing aid rule as an Assistant AG in Vermont in 1992 – rules that have yet to be issued, nearly 20 years later.  Her statements suggest that expanded rulemaking authority might give companies in dynamic industries – such as technology – FTC regulation that actually keeps pace with innovation.

The question of course, is whether such FTC regulation would also stifle innovation preemptively.  Companies have started to take note of the recent push to expand the FTC’s power, and it is likely that the topic will continue to be debated fiercely in the coming weeks as financial reform legislation comes to a vote. Some have even expressed concerns that such an expansion of the FTC’s rulemaking authority could impact funding and investment in technology and Internet companies by both Wall Street and Silicon Valley VCs.  For more, take a look at this transcript of the Progress & Freedom Foundation’s recent forum entitled “Supersizing the FTC.”

Blogging (or Live-Blogging) at the ABA Spring Antitrust Meeting

April 16, 2010

Depending on wireless access, I will either be blogging or live-blogging certain tracks of the 2010 ABA Spring Antitrust Meeting, which will be held next week, April 21 – 23, in Washington DC.  The posts will appear on the ABA’s Secure Times blog and here, at the Balancing Act.

Please check back here on April 21st – I look forward to your review of my posts and of course, I always welcome comments!

The consumer protection and private advertising tracks for this year’s ABA Spring Antitrust Meeting are laid out below:


8:45-11:45: Antitrust and Consumer Protection Fundamentals

9:00-10:30: Handling State Attorney General Advertising Cases: Substantive & Procedural Questions

10:45-12: It’s Not Easy Being Green: Environmental Claims, Standards and Deception (this session will focus on third-party certification)

2-3:30: Administrative Litigation at the FTC – Navigating the Shifting Procedural Waters

2-3:30: Is Nothing Typical?  Applying the New Standards in the Revised FTC Testimonial Guides

3:45-5:15: Economics & Consumer Protection Law

3:45-5:15: False Claims of IP Protection: Competition & Consumer Protection Perspectives


8:15-9:45: Consumer Financial Protection:  Assessing the New Landscape

8:15-9:45: False Advertising Litigation: The Lanham Act Preliminary Injunction Hearing

1:30-3: Enforcement Priorities in Advertising Law


8:15-9:45: Changing Standards for Certifying Class Actions (the panel will address both antitrust and consumer protection standards)

8:15-9:45: Security & Privacy in the Cloud: Developing the Right Framework for Service Providers, Business Customers, and Consumers


Filtering and Sniffing after Comcast v. FCC

The topic du jour is definitely the DC Circuit’s decision in Comcast v. FCC – and what it will mean for net neutrality and the FCC’s plan to regulate broadband access and consumer protections on the Internet. The decision – which holds that the FCC does not have the authority under current law to regulate how ISPs police traffic on their networks – will most certainly impact the FCC’s implementation of its recently announced broadband plan.  Already, the agency has decided not to pursue certain cybersecurity, privacy and consumer protection policies in the wake of yesterday’s decision.

The DC Circuit crafted a careful and narrow decision, which will probably survive appeal (if indeed the FCC chooses to take that route).  The ruling focuses predominantly on whether the FCC had explicit or implicit jurisdiction to regulate Comcast.  It leaves open, however, the question of when and how ISPs can use filtering technologies to detect content.

All ISPs employ filtering technologies to some degree.  These technologies filter or “sniff” data packets while they are en route to their final destination.  A packet usually consists of two parts: control information and user data (also known as the payload). Shallow packet inspection involves examination of the control information – to allow an ISP to route content to the right server, for example.  A deeper form of packet inspection allows the ISP to check for viruses and other malware that might be attached or embedded in content.  But it’s deep packet inspection that concerns most privacy advocates.  This is a more intense process that allows the ISP to literally peer in and scan the payload portion of the packet – to serve ads, or to track user behavior.  The NSA uses the technology in its terrorism surveillance efforts. Certain governments – such as China – use deep packet inspection to censor content.
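To make the distinction concrete, here is a minimal Python sketch – emphatically not a real network stack or any ISP’s actual system. The 20-byte IPv4 header layout and the BitTorrent handshake string are real; the packet, addresses, and function names are fabricated for illustration. Shallow inspection reads only the header fields needed to route the packet, while deep inspection peers past the header into the payload itself:

```python
import struct

def shallow_inspect(packet: bytes) -> dict:
    """Read only the control information (the IPv4 header) - enough to route."""
    version_ihl, _tos, total_len = struct.unpack("!BBH", packet[:4])
    proto = packet[9]                                  # e.g. 6 = TCP
    src = ".".join(str(b) for b in packet[12:16])
    dst = ".".join(str(b) for b in packet[16:20])
    return {"version": version_ihl >> 4, "protocol": proto,
            "src": src, "dst": dst, "length": total_len}

def deep_inspect(packet: bytes, signature: bytes) -> bool:
    """Scan the payload (everything after the 20-byte header) for a signature."""
    return signature in packet[20:]

# Build a fake packet: a 20-byte IPv4-style header followed by a payload
# containing the opening bytes of a BitTorrent handshake.
payload = b"\x13BitTorrent protocol" + b"\x00" * 8
header = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0, 20 + len(payload),   # version/IHL, TOS, total length
                     0, 0,                          # identification, flags/fragment
                     64, 6, 0,                      # TTL, protocol (6 = TCP), checksum
                     bytes([10, 0, 0, 1]),          # fabricated source address
                     bytes([93, 184, 216, 34]))     # fabricated destination address
packet = header + payload

info = shallow_inspect(packet)                       # routing needs only this much
print(info["src"], "->", info["dst"], "protocol", info["protocol"])
print(deep_inspect(packet, b"BitTorrent protocol"))  # True: DPI sees the payload
```

The point of the sketch is that nothing in `shallow_inspect` touches the payload – it is the envelope-reading step every router performs – whereas `deep_inspect` is the mail-opening step that a throttling or ad-targeting system would need.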

The FCC complaint that led to yesterday’s decision was filed against Comcast by Free Press, Public Knowledge, and several other advocacy groups and academics.  It alleged that the company was using some sort of deeper detection technology to “throttle” consumers’ use of P2P applications like BitTorrent. Comcast defended its actions, stating that it had a right to slow access when network resources are scarce, because applications like BitTorrent consume large amounts of bandwidth.  In their FCC petition, the Free Press / Public Knowledge coalition disagreed, stating that Comcast had violated FCC policy entitling consumers to “access lawful Internet content… and run applications and user services of their choice.”

The FCC investigated the matter and sought public comment.  Its investigation showed that Comcast was restricting user downloads of large files 24/7 – even when network resources were not scarce.  There was no suggestion by the FCC that Comcast used deep packet inspection technology to restrict downloads – although some have noted the company’s relationship with Sandvine, a company specializing in “network policy control solutions.”

After concluding its investigation, the FCC issued an Order stating that it had the jurisdiction to regulate Comcast’s network management practices.  Furthermore, the agency decided to resolve the issue through adjudication – not through the notice-and-comment cycle of typical FCC rulemaking.  In his press release announcing the Order, then-FCC Chairman Kevin Martin analogized the situation to the US Postal Service opening your mail and then deciding, based on its contents, either to delay delivering your mail or not to deliver it at all.

Comcast challenged the Order in the DC Circuit, resulting in yesterday’s decision.  Where does the agency go from here? FCC Chairman Genachowski had stated that the agency would find other legal authority to pursue its stated goals if it lost its case against Comcast.  With a precedent like yesterday’s decision on the books, the agency will almost certainly need Congress to pass a law giving it that explicit authority.  Enabling legislation of this type could take years – particularly given the special interests involved in this debate.  In the meantime, perhaps the FCC should think about reversing its 2002 decision deregulating broadband – a suggestion made by Public Knowledge’s Gigi Sohn on the PBS NewsHour earlier this evening.

Until these issues are resolved, the legality of using filtering or sniffing technologies to block or slow down unusually large files on networks remains unclear. Comcast, in its muted response to the DC Circuit decision, indicates that it intends to keep working with the FCC to “…preserve a vibrant and open Internet.”  Will this intention change if its merger with NBC Universal is approved?  Would the merged entity employ such techniques to ensure that its own content is not being downloaded illegally?  Active network management can rob an ISP of its safe harbor for copyright infringement under the DMCA – but what happens if the network provider and the copyright owner are one and the same?

This case started with consumer complaints.  Yet the court’s decision does not provide any guidance for consumers who believe they are being denied efficient broadband access due to their web surfing appetites. Professor Paul Ohm of the University of Colorado has an interesting approach – in this delightful article on net neutrality and privacy.  In it, he suggests that consumers look to the ECPA to address concerns with ISP discrimination against large file downloads.  ECPA, of course, is the focus of its own reform effort – the move to secure digital due process.  Many of the same companies that support ECPA reform also support net neutrality.  Perhaps the move to reform ECPA will provide a way to obtain some legal guidance on how and when packet sniffing technologies can be used by ISPs to filter content.  But even with such clarity, the question of which agency should regulate the Internet – the FCC, the FTC or even the Department of Commerce – will remain open.


No 4th Amendment Protection for Backups & Delivered Email in the 11th Circuit

March 17, 2010

From Professor Orin Kerr, a terrific blog post and analysis of the Eleventh Circuit’s recent decision in Rehberg v. Paulk, which held that there is no expectation of privacy in copies of delivered emails stored by your ISP. The Court cited Sixth and Second Circuit precedents to find no 4th Amendment protection for delivered email (as Professor Kerr’s post indicates, however, the issue of when an email has been delivered is still unsettled).

Professor Kerr’s analysis focuses on the Court’s differing treatment of the email copy that was delivered vs. the backup copy stored with the plaintiff’s ISP (just because the delivered copy lost protection, the ISP copy should not have).  While I agree, I also think the Court’s decision might have been very different had it analyzed the issue under the Stored Communications Act, and not the 4th Amendment. The SCA proscribes unlawful access to communications held in “electronic storage,” which the statute defines as:

“…any temporary, intermediate storage of a wire or electronic communication incidental to the electronic transmission thereof; and

any storage of such communication by an electronic communication service for purposes of backup protection of such communication;”

Under this definition, the email copy at issue in Rehberg should fall squarely within the backup exception to the SCA.

The Rehberg decision conflicts with the Ninth Circuit’s holding in Theofel v. Farey-Jones, where the court found that copies of emails stored with an ISP post-transmission were protected from disclosure under the backup exception to the SCA.

Arguably, Rehberg has further complicated the split in the Circuits on privacy protection – statutory or under the 4th Amendment – for email.  It’s unclear when the Supreme Court will have a chance to review this particular issue.  Perhaps we will get some guidance from the Court’s upcoming review of City of Ontario v. Quon.  Quon is a 9th Circuit case involving a related issue – the expectation of privacy in text messages.  The facts are complicated, since the case involves texts sent by a government employee, but it is one of the few decisions to find privacy protections for electronic communications.  This is definitely a case to watch, as a ruling in Quon will likely impact how courts view privacy for other electronic communications, especially email.
