Archive for the ‘New Technologies’ Category

Thinking About Location-Aware Apps?

October 19, 2010 Leave a comment

Then please, check out this white paper I co-authored recently with Janet Jaiswal, Director of Enterprise at TRUSTe.  TRUSTe is one of the few organizations that offers a privacy certification for mobile apps and services.  Over the last few months, I’ve been working with TRUSTe on their mobile privacy certification program. In this white paper, Janet and I zero in on geo-location – exploring the market dynamics, privacy concerns and some best practices around location-aware apps and services, and the hot new area of “geo-marketing.”


I’m With Scalia on Quon

This past week, after reading and re-reading the Supreme Court’s decision in City of Ontario, California v. Quon several times, I’ve come to a conclusion that is surprising for me: I am in agreement with Justice Scalia, not the majority, when it comes to Quon.  In fact, I think Scalia hit it right on the head when he wrote in his concurrence describing the majority opinion:

“…In saying why it is not saying more, the Court says much more than it should.”

The Quon court clearly wrestled with whether to address the big question posed in this case – whether employees have a reasonable expectation of privacy in certain electronic communications, i.e., messages sent through a government employer-issued pager.  This intellectual wrestling, and the majority’s thoughtful debate on the role of technology in the workplace, are particularly revealing.

For instance, the Court noted that it’s not just technology that’s evolving rapidly – it’s society’s view on what is acceptable behavior vis-à-vis these technologies that is also changing at a rapid pace.  The Court considered the position put forward in the amicus brief submitted by the ACLU, EFF, CDT and others – that in today’s workplace, employers expect and tolerate the personal use of office equipment and technology by employees (and therefore, employees should have a reasonable expectation of privacy in personal messages sent through an employer’s equipment or technology).  In contrast, the Court also contemplated the efficiency of a rule that reserves office equipment and technology for work use only – since mobile phones are now affordable and widely used, shouldn’t employees be required to purchase their own device for personal use at work?   Unfortunately, the analysis stopped here; the Court for instance didn’t distinguish between mobile phones (for voice and text), which remain affordable, vs. smartphones (for email and webmail), which remain fairly expensive.

In the end, the Court chose to sidestep the important question of whether employees have a reasonable expectation of privacy in electronic communications made in the government workplace. It concluded that it needed to “proceed with care” in determining the issue; it also cautioned the judiciary against “elaborating too fully” when evaluating “the Fourth Amendment implications” of technology in the employer-employee context.  And then, most surprisingly, the Court admitted that unlike past decisions (specifically referencing the phone booth in Katz), it could no longer rely on its “own knowledge and experience” to conclude that there was a reasonable expectation of privacy in text messages sent via a government-owned pager.  Wow.  While Supreme Court justices were using phone booths in 1967, they aren’t using their pagers (or mobile phones) to send text messages in 2010.  Is technology advancing at a rate that is too fast for the highest court in the land?  Opportunities to decide important issues like this come before the Court only so often.  The disapproval in Justice Scalia’s concurrence is hard to miss:

“Applying the Fourth Amendment to new technologies may sometimes be difficult, but when it is necessary to decide a case, we have no choice.”

Yet by trying so hard to say nothing, the Court actually says something fairly significant.  By dithering over whether to use the Court’s precedent in O’Connor v. Ortega, but then using it anyway to determine that Jeff Quon had a reasonable expectation of privacy in text messages he sent from his work pager, the Court has – as Scalia puts it – “inadvertently” boosted the “operational realities” standard and case-by-case approach of the O’Connor court (the Court ultimately determined that even though there was a reasonable expectation of privacy, the City’s search was reasonable).  The facts here are telling – in one month, Jeff Quon, a member of the SWAT team at the Ontario, California police department, sent or received 456 messages, of which only 57 were work-related.  These messages were transmitted via a pager that Jeff Quon received from the City so that he and fellow SWAT team members could “mobilize and respond to emergency situations.”  In fact, the case began when the City – faced with recurring overages for Quon and another employee – requested text message transcripts from the City’s wireless provider to determine whether the City’s current limit for text messages needed to be raised.

Regardless, it’s likely that the O’Connor rule will have more of an influence on privacy – and workplace privacy in particular – after the Quon decision.  Scalia insists that the Court’s ruling in Quon will automatically drive lower courts to the “operational realities” of the O’Connor test.   Perhaps he’s right – but a case-by-case approach in privacy analysis is almost inevitable in the absence of a federal privacy framework and bright line rules.   Context will continue to be a key part of the privacy analysis.  It’s already an important part of the United States’ “sectoral” approach to privacy.  For instance, personal data is even more sensitive when it’s being handled for medical purposes (see HIPAA for more details).  Similarly, there’s been a lot of concern about geo-location technologies recently – even though these technologies have been around for years, it is their emergence on mobile phones, and the ability to combine a user’s profile data with their geo-location data, that has raised privacy concerns.

Scalia also predicts lots of litigation around the question of whether employees have a reasonable expectation of privacy in the workplace (what did the workplace policy say, was there one, etc.). That may be right – but it could also just be a natural evolution of the case-by-case approach outlined above – the lack of a bright line rule will always result in lots of litigation about what the standard should be.  There are, however, at least two counseling gems to be found in Quon that are worth considering in anticipation of such litigation.

  1. Make sure your company’s privacy policy covers all types of electronic communications that occur on work equipment. The City of Ontario had a written policy that covered email, computer and Internet use – but didn’t explicitly cover pager use (although Jeff Quon’s boss did attempt to extend the written policy verbally to pager use in staff meetings).  And remember, the Court did find that Jeff Quon had a reasonable expectation of privacy in the text messages sent via his work pager.
  2. Make sure your company’s work policy matches up to its actual practice. Even though the City issued pagers to Jeff Quon and other SWAT team members with the understanding that they were intended for work use, the City sent a contrary message when it allowed employees to use those same pagers for personal use (so long as the employee picked up the extra charges for exceeding usage).  To send a clear message, and match its written policy to actual practice, the City should have banned personal use of the SWAT pagers altogether.

Of course Scalia was pushing for a bright line rule in this case – one that would have applied the Fourth Amendment to all employers – not just government.  At a time when companies are eager for privacy guidance, are bright line rules for workplace privacy needed?  This is the only part of the Scalia concurrence that I’ve struggled with.  On the one hand, we don’t want to have inflexible standards that ignore the realities of the present-day workplace.  On the other, having a rule that says there is no reasonable expectation of privacy when the workplace privacy policy says otherwise (assuming that policy is communicated properly), could be very helpful for compliance – especially when you consider the companies struggling to navigate the maze of court rulings, FTC rules and guidance, and sectoral laws that comprise US privacy law.  The need becomes even more compelling as once again, we are faced with the real possibility that Congress will not pass a federal privacy law this year – even as the FTC looks to Congress for guidance on how to shape a regulatory framework around privacy and security.

Which brings me to my final point of agreement with Scalia.  When it comes to Quon, the Court chose not to address the important question – in short, they punted.  Or as Scalia puts it:

“… The-times-they-are-a-changin’ is a feeble excuse for disregard of duty.”

FTC announces its 30th data protection settlement – with Twitter

Remember when hackers got into Twitter’s databases, allowing them to send out phony tweets from the likes of then President-elect Obama and certain Fox news anchors? Well, the FTC was definitely paying attention to that incident.   Today, the agency announced an investigation and settlement of Twitter’s data security practices – its first-ever case against a social networking service (and 30th data security case to date).  The term of the settlement is 20 years, and Twitter will be required to set up a comprehensive data security program, implementing privacy and security controls in its systems and workplace.  The company will also be subject to an independent, third-party audit of its practices every other year for the next 10 years. Read all the details on the FTC’s site.

Announcing: The Internet of Things

For those of us who believe that technology is a foundation for all industry – a horizontal feature, not a vertical – the recent interest in The Internet of Things is particularly vindicating.  The Internet of Things, or “IoT” for short, describes the larger network that is created by the inter-connection of the traditional Internet and items you use every day in the offline world.  IoT was mostly relegated to geek speak when it was first coined by a virtual team of RFID technology researchers in 1999, but lately it’s been experiencing a resurgence – as a pithy way to define the interconnected network of online and offline worlds, especially when the scenario involves RFID chips embedded in everyday items, allowing them to “talk” to the Internet.  Real world application? A user can communicate with and control an RFID-enabled item – a car or kitchen appliance, for example – using their computer or mobile phone.  This is the idea behind some of those really helpful mobile apps – like the one that turns your phone into a remote oven control.

At the moment, the IoT is really just a concept that is being researched and beta’ed. A recent presentation by Don Caprio of McKenna Aldridge suggests that in a decade the picture will be very different, and there will be billions of devices connected to the Internet worldwide.  In the US, the impact will be significant; Caprio estimates that each American will own at least 50 internet-enabled items, most of which will be “tagged” to that specific user.  Currently, the Internet could not handle this volume of web-enabled device traffic.  For IoT to progress from vision to reality, there must be “widespread deployment” of the next version of internet protocol – IPv6 – which will allow for billions of additional, unique IP addresses.   This will involve considerable investment by the private sector – investment that probably won’t happen until regulators decide how to approach the big policy issues implicated by the IoT’s expansion – interoperability, data flows across jurisdictions, copyright, and of course, privacy.  This probably sounds familiar – the scenario is similar to the current discussions around what should constitute effective regulation of cloud computing.
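The address-space arithmetic behind that “widespread deployment” claim is easy to check.  A minimal sketch (the 310 million population figure is my own rough 2010-era estimate, not a number from Caprio’s presentation):

```python
# IPv4 addresses are 32 bits; IPv6 addresses are 128 bits.
ipv4_space = 2 ** 32    # about 4.3 billion possible addresses
ipv6_space = 2 ** 128   # about 3.4 x 10^38 possible addresses

# Caprio's estimate: at least 50 internet-enabled items per American.
# Assuming roughly 310 million Americans, US devices alone would
# outstrip the entire IPv4 address space.
us_devices = 50 * 310_000_000

print(us_devices > ipv4_space)   # True: IPv4 can't even cover the US alone
print(ipv6_space // ipv4_space)  # 2**96 times more addresses under IPv6
```

Even with generous slack in the per-person estimate, IPv4 exhausts quickly – which is why the “considerable investment” in IPv6 deployment is a precondition for the IoT, not an optimization.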

The implications for marketers and retailers are enormous.  The persistent harvesting of data at all points of customer contact – website, RFID-enabled device and retail stores – will help online marketers develop comprehensive customer profiles that will be worth their weight in gold to advertisers.  Retailers are already aware of the possibilities; Macy’s and Best Buy recently signed development deals for mobile, location-based applications that would “enhance the brick and mortar experience” by providing access to discounts and other promotions to offline shoppers.

But the privacy concerns posed by the IoT are enormous too.  A recent New York Times article discussed the probability of “online redlining” – where products and services are offered to some, but not all, customers, based on usage data and statistical predictions.  Another recent article talks about how retailers use data to track spending habits – truly, the era of big brother surveillance in retail has already arrived.  These are just two examples of where the connection between online and offline worlds could result in more consumer harm than benefit.

Regulators are starting to wake up to the connection.  Last week, Neelie Kroes, Commissioner for the EU’s Digital Agenda, spoke publicly about how an IoT might unfold and what policies would advance its development.  Her remarks followed on the recent announcements of the EU’s Digital Agenda – which include increasing R&D for the relevant technologies that support advanced Internet deployment and adoption, and increasing user trust in broadband technologies.  I have yet to see the IoT referenced publicly by a US regulator (I don’t recall it being raised in the recent FTC privacy workshops – but correct me if I’m wrong).  The issue is addressed to some degree in the discussion draft of federal privacy legislation that’s currently being circulated by Congressmen Rick Boucher and Cliff Stearns; the bill’s requirements apply equally to online and offline activities, and require an explicit user “opt-in” before location-based information can be used for ad-targeting.

If anything, the IoT and the myriad privacy concerns raised by tracking an individual’s use of everyday items are a helpful reminder that privacy concerns are not limited to the online world alone.   Indeed, if we want users to trust in this growing network of online and offline systems, then regulators must start treating the IoT as part of the Internet as a whole. And while online-specific issues – most prominently data privacy and security – have shaped much of the discussion around what constitutes adequate “information” privacy, it’s likely that offline issues will begin to influence the discussion.  As Jennifer Barrett of Acxiom, one of the nation’s largest data brokers, recently stated, “the clear distinction that we had a number of years ago between online and offline [marketing methods] is blurring.”

What’s next? Perhaps we’ll look to Commissioner Kroes for the last word.  When asked whether she had a specific plan to support the IoT, Kroes borrowed another term – this time from Spanish author Antonio Machado – saying simply:  “We have no road. We make the road by walking.”  I think that’s code for more discussions, more workshops, and definitely more speeches on the topic.

Note to Facebook: Privacy is Personalization Too

April 29, 2010 Leave a comment

Just last week, Facebook introduced “instant personalization” – a feature that extends your Facebook experience via familiar Facebook “plug-ins” (Activity Feed, the “Like” and “Recommend” buttons) – to partner websites such as Yelp.  Already, the feature is drawing much criticism – this time, from a quartet of Democratic senators – Begich (AK), Bennet (CO), Franken (MN) and Schumer (NY) – who are urging Facebook to change its policies on sharing user data with third parties. Their letter to Facebook founder Mark Zuckerberg highlights three main concerns: Facebook’s continually elastic definition of what it considers personal information, the storage of Facebook user profile data with advertisers and other third parties, and the aforementioned “instant personalization” feature. The Senators acknowledge the FTC’s role in examining the issue but also advocate that Facebook take “swift and productive steps to alleviate the concerns of its users” while FTC regulation is pending.  On Monday, Senator Schumer followed up with an open letter that urges the FTC to investigate Facebook’s privacy practices.

Instant personalization is the latest Facebook feature to draw flak for its perceived impact on privacy.  It’s actually a very cool technology, designed for people who want to publicly share their likes and dislikes with their Facebook network.  It works by sharing certain Facebook profile data with a partner site.  The feature is personalization defined – every user using Facebook plug-ins on a partner site will have a different experience based on who they are friends with on Facebook.

A recent post on the Facebook blog describes the process:

“At a technical level, social plugins work when external websites put an iframe from Facebook.com on their site—as if they were agreeing to give Facebook some real estate on their website. If you are logged into Facebook, the Facebook iframe can recognize you and show personalized content within the plugin as if the visitor were on Facebook.com directly. Even though the iframe is not on Facebook, it is designed with all the privacy protections as if it were (emphasis added).”

Note the last sentence of that excerpt – it seems to suggest that as a Facebook user, you don’t have to worry about privacy whether you are on Facebook, or using a Facebook plug-in on another site.  So what’s the flap about?  What is fuelling the concerns with Facebook’s privacy practices – from the letter from Democratic Senators, to ongoing concerns from EPIC, to this thoughtfully penned article from a Facebook user who also happens to work for PC World?
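The mechanics of that iframe recognition are worth spelling out.  Here is a minimal sketch of the idea – all names and data stores below are hypothetical, not Facebook’s actual API.  The point is that because the plugin iframe is served from the social network’s own domain, the browser sends that domain’s session cookie along with the request, so the provider can recognize a logged-in user even though the surrounding page belongs to someone else:

```python
# Toy in-memory stand-ins for the provider's session and social-graph stores.
SESSIONS = {"abc123": "jeff"}              # session cookie -> logged-in user
FRIEND_LIKES = {"jeff": ["Ana", "Raj"]}    # user -> friends who liked this page

def render_plugin(request_cookies: dict) -> str:
    """Return the plugin text for one iframe request, personalized if a
    session cookie identifies a logged-in user."""
    user = SESSIONS.get(request_cookies.get("session_id"))
    if user is None:
        return "Like"                      # not logged in: generic button
    friends = FRIEND_LIKES.get(user, [])
    if not friends:
        return "Be the first to like this."
    return ", ".join(friends) + " like this."

print(render_plugin({"session_id": "abc123"}))  # "Ana, Raj like this."
print(render_plugin({}))                        # "Like"
```

Two visitors to the same partner page thus see different content – which is exactly why the feature is “personalization defined,” and also why the partner site never needs to receive the profile data directly for the plugin to work.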

I think it has to come down to notice – especially to users.  Facebook debuted “instant personalization” as an opt-out feature that automatically exposed a user’s Facebook profile data to partner sites. This has raised concerns with regulators, and certain Facebook users too – just take a look at the comments to this recent Facebook blog on the topic. To further complicate things, Facebook makes it particularly difficult to opt out of the instant personalization features.

With this latest move, Facebook reaches outside its walled garden to extend its reach across the web – I almost think of it now as the world’s largest social platform (not network).  Consider for instance that it took Microsoft nearly twenty years to reach an install base of 1 billion for Windows; Facebook, now approaching 500 million users, will probably reach that number in less than a decade. As Facebook continues to evolve its platform strategy, its processes – particularly around informing users about what it plans to do with their profile data – must be better defined.  I think this goes beyond a static privacy policy – it may even involve engaging select users at the beta stage to pre-determine privacy concerns (like whether to launch a feature as opt-in or opt-out).  Building trust with your ecosystem is essential for any platform company, and when it comes to Facebook, users are an essential part of the ecosystem equation.

For the most part, Facebook users divulge data about themselves with the expectation that this data will be used on Facebook only; sharing that same data with other sites, even if it’s via a Facebook plug-in, is clearly not part of that expectation.  If Facebook wants to use user profile data for secondary purposes, it should first get the user’s permission to do so.  Such a system honors a user’s privacy preferences – which are also a personalization of sorts.  And when it comes to privacy, Facebook should be doing everything it can to ensure that this type of personalization is preserved.

The Law Struggles to Keep Up – ECPA, SCA and Privacy in Electronic Communications

March 3, 2010 1 comment

Two federal laws enacted before the advent of the World Wide Web are at the heart of class action complaints against Facebook and Google for violations of online privacy. It’s time to brush up on the Electronic Communications Privacy Act (ECPA) and the Stored Communications Act (SCA) – federal statutes that prevent intrusion by government and private actors into electronic communications. The ECPA works by preventing the interception and unauthorized access of electronic communications – such as email, texts and user keystrokes – by government agencies and law enforcement. The SCA is an “act within an act,” as it is essentially Title II of the ECPA.  It regulates unauthorized access of electronic communications by service providers such as ISPs.

Plaintiffs in the Facebook class actions allege that the company violated both statutes when changes in its privacy policy led “unwary Users into inadvertently revealing large amounts of information about themselves.”  Similarly, class plaintiff Eva Hibnick (an enterprising Harvard 2L) alleges violations of both statutes in her complaint against Google for the unauthorized disclosure of users’ personal information during the launch of the company’s social networking product, Google Buzz, in February.

It’s likely that the courts will struggle with the application of the ECPA and SCA in both of these cases. A pivotal question will be how the courts interpret the “consent” exception to ECPA in these cases, i.e., whether use of Facebook’s or Google’s service indicated user consent to the disclosure of personal information in the case of either Facebook’s privacy policy changes or the launch of Google Buzz.

Both ECPA and the SCA are a legacy of a time when two-way, not real-time, communication was the norm. The application of both statutes to email provides an illustrative example.  Under ECPA and the SCA, communication service providers are treated differently depending on whether they are “transmitters” or “storage facilities.” This is an important distinction for telephonic communication, but not so important for email – particularly web-based email that is stored on your provider’s server. Courts have interpreted ECPA to find greater protection for unopened email in transit to a computer, as opposed to unopened email sitting on your computer’s hard drive or provider’s server (from a user privacy perspective, is there a difference?).  Under the SCA, some courts have distinguished between pre- and post-transmission storage, even though the SCA defines “electronic storage” as “any temporary, intermediate storage of a wire or electronic communication incidental to the electronic transmission.”  Luckily, the Ninth Circuit rejected this distinction in the 2004 case of Theofel v. Farey-Jones.

The ECPA and the SCA represent yet another example of how the law has failed to keep up with technology.  Indeed, both statutes have been the focus of much criticism, with several experts calling for ECPA reform and amendments to the SCA.

More importantly, neither statute gives us the answer to a question that has remained unanswered for too long – do users have a reasonable expectation of privacy in web communications such as emails, blogs and posts? Even though we live in the age of email and instant communication, the contents of an email sent from the privacy of your own home have less constitutional protection than a conversation in a public phone booth.

The Supreme Court’s 1967 decision in Katz v. United States – finding that privacy attaches to a person, not a place – has yet to be extended to electronic communications. Could the court’s dicta, stating that “[w]herever a man may be, he is entitled to know that he will remain free from unreasonable searches and seizures,” have some applicability to communication on the web in 2010? Will the recent class actions against Facebook and Google evolve into long-standing litigation that provides the Supreme Court the opportunity to consider the application of Katz to electronic, real-time communication?  With persistent litigants and the right rulings, it could happen.

In the meantime, legislators should seriously consider a redraft of both ECPA and the SCA – one that ushers both these important statutes into the Internet age.

Toyota Recall: Are the Feds Due for Business Model Changes?

February 24, 2010 Leave a comment

A thought occurred to me while listening to Rep. Henry Waxman during the recent congressional hearings on Toyota’s recalls – is the federal government due for some business-model change?

I’m referring to the Department of Transportation (DOT) and the National Highway Traffic Safety Administration (NHTSA) in particular here, and the revelations this week that a potential cause of the sudden acceleration in certain Toyota models was defects in the car’s electronics system – and not sticky brake pedals.  Waxman was laser-focused on the issue, reminding me that we often drive our biggest and most expensive computer – our cars.

In fact, most newer cars feature sophisticated electronics systems that control most of the vehicle’s functions.  In Toyota’s vehicles, the technology is known as ETCS-i (Electronic Throttle Control System with intelligence).  ETCS-i replaces the mechanical link between the accelerator and the engine throttle with an electronics system; when the accelerator pedal is pressed, electronic signals are sent to the car’s electronics system, which in turn regulates the engine throttle to control the flow of air and fuel into the engine (for more, check out this paper by Professor Raj Rajkumar of Carnegie Mellon University).
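The signal path described above can be sketched in a few lines of code.  This is a purely hypothetical illustration of a throttle-by-wire mapping – not Toyota’s actual ETCS-i logic – and the sensor voltage range is an assumed value:

```python
# Hypothetical throttle-by-wire sketch (illustrative only, not ETCS-i code).
# The pedal position arrives as an electronic signal and is mapped to a
# throttle opening; a range check rejects implausible sensor readings --
# the kind of fault a purely mechanical linkage could never produce.

def throttle_command(pedal_sensor_volts: float) -> float:
    """Map an assumed 0.5-4.5 V pedal sensor signal to a throttle opening
    between 0.0 (closed) and 1.0 (wide open)."""
    V_MIN, V_MAX = 0.5, 4.5
    if not (V_MIN <= pedal_sensor_volts <= V_MAX):
        # Out-of-range reading suggests a sensor or wiring fault:
        # fail safe by closing the throttle rather than trusting the signal.
        return 0.0
    return (pedal_sensor_volts - V_MIN) / (V_MAX - V_MIN)

print(throttle_command(2.5))  # 0.5 -- pedal halfway down
print(throttle_command(5.2))  # 0.0 -- implausible signal, throttle closed
```

The sketch also shows why software is central to the recall debate: whether a fault produces sudden acceleration or a safe shutdown depends entirely on how code like the range check above is written and validated.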

Toyota US President James Lentz insisted that the cause of the sudden acceleration problem was mechanical, not electronic.  But the questions persist. The latest prominent voice on the topic is Apple co-founder Steve Wozniak, owner of “many models of Prius” that have been recalled.  He traces the problem to the software used in the recalled Toyota cars.

Transportation Secretary Ray LaHood was visibly defensive about his agency’s response to what has become one of the biggest recalls in automobile history, and DOT has started looking into the electronics systems as a possible cause of the problem with recalled Toyota cars.  But some suggest that this is a matter of whether DOT and NHTSA have the necessary resources to pinpoint problems of this type.  Certainly this is the view of Joan Claybrook, a former NHTSA administrator, who insists that 18 investigators in NHTSA’s Office of Defects Investigation are simply not enough to handle the more than 30,000 defect complaints that the agency receives each year.

But a bigger question persists. Indeed, technological innovation is forcing a dramatic business model change in so many other industries – consider what is happening in healthcare or the newspaper industry, for instance.  The US government, however, continues to view technology as a separate “vertical.” This is best illustrated by how we approach privacy, with different statutes covering the use of information for health, financial or credit reporting purposes in the absence of a national privacy law.

With technology providing a renewed foundation for so many traditional industries, should the feds establish and fund an office of technology, dedicated to regulating the implementation of technology in important products, particularly those that could impact the health and safety of consumers?  Such an agency would work with industry-specific agencies – like DOT and NHTSA – to address defect complaints, while also having a unique perspective on how technological innovation is impacting our economy.  This agency could also house our efforts to define privacy in the electronic age, address cybercrime and set out minimum requirements for handling sensitive data (assuming, of course, that we can get national laws passed in each of these key areas).

In this way, our regulatory system would more closely mimic the massive shift that technology is forcing in our industries today.  Business model change shouldn’t just be reserved for private industry.  As this latest recall shows us, the federal government could benefit from a little business model change too.