
Archive for the ‘Regulation’ Category

An FTC decision on Endorsements is Reverb-ing with Me…

September 15, 2010

I know, it's been ages since I last posted to the Balancing Act. It hasn't been just a question of time; it's also been a question of whether it was permissible to write about certain subjects under the FTC's recently revised Endorsement Guides. That's why I've been putting some thought into how the Endorsement Guides actually work in practice when it comes to blogging about a topic that your client – or your client's client – may have an interest in.

Late last month, I was helped along in my thinking by the FTC's first administrative decision under the revised Endorsement Guides – against Reverb Communications. Reverb is a company with a niche practice – it provides marketing and PR for video game companies that develop for the iPhone platform. According to the FTC's complaint, Reverb's fee often included a percentage of the sales of its clients' gaming apps, giving Reverb an added incentive to boost those sales in the iTunes App Store. Reverb's enterprising owner, Tracie Snitker, along with other Reverb employees, became regular visitors to the comments section of the iTunes store. Posing as customers, they posted positive comments about their clients' products to encourage sales.

Although the Reverb comments were mostly generic and unimaginative ("One of the Best," "Really Cool Game"), they managed to attract the FTC's attention. One wonders whether the vigilant Apple, patrolling the iTunes store for other violations, helped the investigation along here… But back to Reverb. Of course, the Commission found against this kind of practice, stating that endorsers must disclose any "material connection," i.e., "any relationship that materially affects the weight or credibility of any endorsement and would not be reasonably expected by consumers." And when it comes to secondary liability, the Commission reminds us that under the Endorsement Guides, "someone who receives cash or in-kind payment to review a product or service, should disclose the material connection the reviewer shares with the seller of the product or service. This applies to employees of both the seller and the seller's advertising agency."

Applying Reverb to this blog, I realize that it's becoming harder to identify which areas I can and can't write about. Many of the issues I want to blog about are also issues of paramount importance to my clients – and in some cases, their clients. The collective policy interests of these companies pretty much engulf the universe of issues that I've been writing about in the Balancing Act. Recently, for instance, I wanted to write about a Third Circuit case that found no reasonable expectation of privacy in certain kinds of geo-location data (this was in the context of a government request under ECPA). I realized, however, that if I did so, I would also need to disclose my material connections to certain clients, particularly those developing location-based apps and services, for whom the treatment of geo-location data – especially by the courts – is a very important issue. I didn't want to do this, especially since some of these projects are still under development and remain trade secrets.

All of this adds up to one conclusion – I've decided to discontinue the Balancing Act, at least in the analytical, longer-post format that I've been maintaining for over a year now. If you'll forgive the pun, Reverb is a decision that's "reverb"erating with me – especially when it comes to this blog.

Thanks for taking the time to read the Balancing Act during the past year; thanks also for the comments on specific posts.  I have learned so much from this experience – not just in terms of the substance, but also in observing, firsthand, how blogging and other web technologies are transforming how we read and get news.

As the ancient Greek orator Demosthenes once said, "small opportunities are often the beginning of great enterprises." The same could be said of the many blogs and websites that populate the Internet today. Web technologies – on your computer, phone or other wireless device – are ushering in a revolution that will change the human experience forever. Just look at the potential impact of web technologies on the publishing industry alone, resulting in access to more analysis, news and other content than ever before. The last time we had a revolution of this magnitude – one that shook society at its core – was when Gutenberg invented the printing press. With access to the printed word, literacy rates rose and people began to read and form opinions for themselves. The Reformation and the Enlightenment followed, and the rest is history.

I think the Web has the potential to be at least as significant as the printing press, if not more so. So this is definitely a story that is "to be continued." I'll continue to enjoy observing the Web's evolution across multiple platforms (desktop, mobile), as well as all of the attendant issues that will necessarily crop up when government seeks to restrain that evolution in the interest of public policy. And I still plan to blog – especially on other blogs – and will cross-post to the Balancing Act. You should see me posting on the ABA's Secure Times blog in the near future (I was recently appointed a Vice Chair of the ABA's Privacy & Data Security Committee for the 2010-2011 year).

So I’m sure I’ll see you out there – especially if you follow these issues on a regular basis.  Again, thanks for taking the time to read the Balancing Act.

Categories: Regulation

Tipping Towards an Opt-In

A few years ago, I became an instant fan of Malcolm Gladwell's groundbreaking book, "The Tipping Point," which is built on an epidemiological theory that, in aggregate, "little things" can make a big difference. Since then, I've observed the phenomenon play out on the policy stage several times – financial reform and healthcare are two immediate examples that come to mind – and I wonder whether the theory has any application to what's happening with online privacy today. I think it does – particularly if you view a tipping point in scientific terms, i.e., the point at which, through a series of successive events, an object is displaced from a state of stable equilibrium into a new equilibrium state qualitatively different from the first.

To say that online privacy was ever in a state of stable equilibrium is a stretch. We are, however, approaching the end of the current era in online advertising and marketing – an era in which companies captured personal and confidential data from users, then monetized that data to sell ads back to those very same users, often without the users' authorization or knowledge. That equilibrium has been threatened by many events in the last few months – market developments, consumer outcry and regulatory attention all converging to catapult data privacy and security onto the national agenda and into the mainstream press. Some commentators, such as Jeff Chester, have characterized these events as a perfect storm; I see them a bit differently – not a storm, but a series of occurrences that finally "tipped" the issue, as companies attempted to push the privacy envelope with features that compromised users' privacy (and in some cases their express wishes to keep their data private). Each of these features involved sharing data with a third party, and not surprisingly, each triggered a privacy outcry – because none provided a meaningful way for users to opt out before personal data was exposed.

It’s amazing to think that most of these pivotal events only happened during the last three months.  To recap:

February 9, 2010 – Google launches Google Buzz and, overnight, transforms users' Gmail accounts into social networking pages, exposing personal contacts. Google later remedies the situation by making the feature opt-in.

April 27, 2010 – Four Democratic Senators, led by Chuck Schumer of New York, send a letter to Facebook CEO Mark Zuckerberg complaining about the privacy impact of Facebook services, including its instant personalization feature (which exposed user profile data without authorization at launch). Senator Schumer follows up his letter with a formal request urging the FTC to investigate Facebook. Facebook eventually announces new privacy controls.

May 5, 2010 – EPIC and a coalition of other advocacy organizations file this complaint urging the FTC to investigate Facebook. In the complaint, they assert that "Facebook's business practices directly impact more American consumers than any other social network service in the United States."

May 14, 2010 – Google announces, via a post on its policy blog, that its Street View cars have been inadvertently capturing payload data from open WiFi networks – in violation of US, European and other global data protection laws – for over three years.

May 21, 2010 – The Wall Street Journal reports that a group of social networking sites – including Facebook, MySpace and Digg – routinely share user profile data with advertisers, despite public assurances to the contrary.

The result? With each successive product or feature launch, the privacy debate is now tipping towards a privacy regime that could be much stricter than anything we’ve seen before – a requirement that companies get a user’s affirmative opt-in to any use of personal data for advertising and marketing purposes.

Privacy nerds may want to revisit the words of David Vladeck, head of the FTC's Bureau of Consumer Protection, in a New York Times interview that took place last August – i.e., before the privacy mishaps of the last three months. When asked whether the FTC would mandate an opt-in standard for user disclosures, Mr. Vladeck responded:

“The empirical evidence we’re seeing is that disclosures on their own don’t work, particularly disclosures that are long, they’re written by lawyers, and they’re written largely as a defense to liability cases. Maybe we’re moving into a post-disclosure environment. But there has to be greater transparency about what’s going on. Until I see evidence otherwise, we have to presume that most people don’t understand, and the burden is going to be on industry to persuade us that people really are well informed about this.”

The emphasis on transparency becomes even more important with the impending rollout of the FTC's privacy framework this summer. Will the FTC make an affirmative opt-in mandatory in all instances where personal data is shared with a third party? Clearly, an opt-in is one of the best ways to ensure transparency and to give users meaningful notice about what data is being collected. The question is whether an opt-in requirement would be so cumbersome that it would turn users off the service altogether. For instance, would an opt-in be required just once, before the feature is first launched, or each time it launches?

Also, it's unclear whether the FTC's framework will derive strength (or weakness) from a federal privacy law, if such a law does indeed pass this session. Critics on both sides have mostly panned the House legislation, i.e., the Boucher-Stearns bill, but there is news of another, more stringent bill being drafted by Senator Schumer, who, as outlined earlier, reached his own tipping point with Facebook.

I saved my most important "little thing" for last. Even if you believe that the privacy debate has yet to reach a tipping point, consider this: in June, the Supreme Court will issue its decision in City of Ontario v. Quon. This is the first time the Supremes have considered the crucial question of what expectation of privacy users have in their electronic communications. Their decision will most likely impact any regulatory or legislative scheme around privacy currently being proposed by the federal agencies or Congress. Most importantly, a Supreme Court decision that finds an expectation of privacy in electronic communications will almost certainly translate into increased obligations for companies that deal in these types of electronic communications and data. A tipping point? Absolutely. In fact, such a decision would signal something much bigger (to quote another popular book title) – a Game Change for advertising and marketing on the web.

Dartmouth Study Finds P2P Networks Hemorrhaging Sensitive Data

While peer-to-peer may be a good metaphor for human interaction – social networking comes to mind – it isn't always the greatest model for sharing sensitive information. Your medical history, for instance, shouldn't be shared with others on a P2P network. Is this happening? Absolutely. A study presented this week by Professor Eric Johnson of Dartmouth's Tuck School of Business describes how researchers found mounds of sensitive medical data on popular P2P networks: medical histories, contact information, insurance details, treatment data, diagnoses and psychiatric evaluations – all mixed in with the song and movie downloads that usually make up the traffic on these networks.

So how is this sensitive medical data getting onto P2P networks in the first place? Primarily through an employee's computer – the employee downloads a P2P application onto her work machine, and then uses that same machine to process sensitive medical data at work. Sometimes the employee takes work home, making edits to a spreadsheet on her home computer (yes, a hospital-generated spreadsheet containing SSNs and other personally identifiable information for employees was among the documents the Dartmouth researchers found). In both cases, the user configures the P2P application incorrectly, making all of her personal data visible to other users on the P2P network. Once that happens, the data is a prime target for cybercriminals and fraudsters who engage in identity theft. Sensitive medical data is a particularly lucrative prize. As Professor Johnson put it: "For criminals to profit, they don't need to 'steal' an identity, but only to borrow it for a few days, while they bill the insurance carrier thousands of dollars for fabricated medical bills."

This is a potential area of concern for companies that are covered by HIPAA and deal with sensitive medical data online. But although HIPAA and the FTC's Health Breach Notification Rule set out requirements for what companies must do in the case of a "breach" of sensitive medical data, they give little guidance on what policies companies could implement internally to prevent such breaches in the first place. Some may view this as a nod to self-regulation, but the truth is there are "best practices" that both HHS and the FTC could endorse. A simple best practice that addresses the "data hemorrhaging" Professor Johnson alludes to in his study would be an internal policy against the use of P2P applications on machines that also handle sensitive medical data. Another best practice – companies that deal with this type of data should consider partnering with regulators and health care providers to educate patients on the importance of securing their medical data, and on how certain file-sharing technologies, when configured incorrectly, can facilitate medical ID theft. Already, there's collateral for such an effort – the FTC's tips to deter medical ID theft, which could be required patient reading (along with those HIPAA notices).

Categories: Regulation

Toyota Recall: Are the Feds Due for Business Model Changes?

February 24, 2010

A thought occurred to me while listening to Rep. Henry Waxman during the recent congressional hearings on Toyota’s recalls – is the federal government due for some business-model change?

I'm referring to the Department of Transportation (DOT) and the National Highway Traffic Safety Administration (NHTSA) in particular, and to the revelations this week that a potential cause of the sudden acceleration in certain Toyota models was a defect in the cars' electronics systems – not sticky accelerator pedals. Waxman was laser-focused on the issue, reminding me that we often drive our biggest and most expensive computers – our cars.

In fact, most newer cars feature sophisticated electronics systems that control most of the vehicle's functions. In Toyota's vehicles, the technology is known as ETCS-i (Electronic Throttle Control System with intelligence). ETCS-i replaces the mechanical link between the accelerator and the engine throttle with an electronics system: when the accelerator pedal is pressed, electronic signals are sent to the car's electronics system, which in turn regulates the engine throttle to allow fuel to enter the engine (for more, check out this paper by Professor Raj Rajkumar of Carnegie Mellon University).
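The "car as computer" point is easy to see in code. What follows is a minimal, purely illustrative sketch of a drive-by-wire control loop in Python (hypothetical function names, thresholds and values of my own invention, not Toyota's actual ETCS-i implementation), showing how a pedal reading becomes a throttle command in software rather than through a mechanical linkage.

```python
# Purely illustrative drive-by-wire sketch; hypothetical names and values,
# not Toyota's ETCS-i code.

def read_pedal_sensors() -> tuple[float, float]:
    """Return two redundant pedal-position readings in the range 0.0-1.0.
    (Drive-by-wire systems typically use redundant sensors so a single
    fault can be detected.)"""
    # Stub values for the sketch; a real ECU would read analog inputs here.
    return 0.42, 0.43

def compute_throttle_command(primary: float, secondary: float) -> float:
    """Map pedal position to a throttle-plate opening, with a basic sanity check."""
    if abs(primary - secondary) > 0.05:
        # Sensor disagreement: fail safe by closing the throttle.
        return 0.0
    pedal = (primary + secondary) / 2
    return max(0.0, min(1.0, pedal))  # clamp to a valid throttle opening

def control_loop_step() -> float:
    """One loop iteration: read the sensors and compute the throttle command."""
    primary, secondary = read_pedal_sensors()
    throttle = compute_throttle_command(primary, secondary)
    # In a real ECU this command would drive the throttle-plate actuator.
    return throttle

if __name__ == "__main__":
    print(f"Throttle command: {control_loop_step():.2f}")
```

The point of the sketch is simply that every step between the driver's foot and the engine is software, which is why questions about electronics and code defects belong in a recall investigation at all.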

Toyota US President James Lentz insisted that the cause of the sudden acceleration problem was mechanical, not electronic. But the questions persist. The latest prominent voice on the topic is Apple co-founder Steve Wozniak, owner of "many models of Prius" that have been recalled. He pins the problem on the software used in the recalled Toyota cars.

Transportation Secretary Ray LaHood was visibly defensive about his agency's response to what has become one of the biggest recalls in automobile history, and DOT has started looking into the electronics systems as a possible cause of the problems with the recalled Toyota cars. But some suggest this is really a question of whether DOT and NHTSA have the resources necessary to pinpoint problems of this type. Certainly this is the view of Joan Claybrook, a former NHTSA administrator, who insists that the 18 investigators in NHTSA's Office of Defects Investigation are simply not enough to handle the more than 30,000 defect complaints the agency receives each year.

But a bigger question persists. Technological innovation is forcing dramatic business-model change in so many other industries – consider what is happening in healthcare or the newspaper business, for instance. The US government, however, continues to view technology as a separate "vertical." This is best illustrated by how we approach privacy: in the absence of a national privacy law, different statutes cover the use of information for health, financial or credit-reporting purposes.

With technology providing a renewed foundation for so many traditional industries, should the feds establish and fund an office of technology, dedicated to regulating the implementation of technology in important products, particularly those that could impact the health and safety of consumers? Such an agency would work with industry-specific agencies – like DOT and NHTSA – to address defect complaints, while also having a unique perspective on how technological innovation is impacting our economy. This agency could also house our efforts to define privacy in the electronic age, address cybercrime and set out minimum requirements for handling sensitive data (assuming, of course, that we can get national laws passed in each of these key areas).

In this way, our regulatory system would more closely mimic the massive shift that technology is forcing in our industries today.  Business model change shouldn’t just be reserved for private industry.  As this latest recall shows us, the federal government could benefit from a little business model change too.

A Cry for Regulation

January 30, 2010

A bizarre thing happened at the FTC's second Exploring Privacy Workshop, which was held in Berkeley this week: many of the web's most popular companies – several of whom were featured panelists – were publicly urging the FTC to regulate the web. As the day-long workshop progressed, it became clear that we have reached a point in the Internet's evolution where regulatory guidance is critical. For a company whose very business model relies on data mining of some sort, predictability regarding data security and online privacy rules is fast becoming a need, not a want.

The FTC understands these concerns and has been particularly responsive during the last few months, reaching out to stakeholders – web companies, academics and consumer advocacy organizations – all of which were well represented at this week's workshop. Based on the day's discussions (you can see a replay of my live blog here), participants fell into two camps: one urging clear guidelines and self-regulation, the other wanting more mandates and enforcement. Then there's the FTC's current view – as discussed in this recent New York Times interview with FTC Chairman Jon Leibowitz and David Vladeck, the head of the FTC's Bureau of Consumer Protection.

One thing everyone does agree on is the need for more and better consumer education – particularly around data flows. With that kind of education, the need to regulate data security and online privacy so stringently may be alleviated. For instance, much has been made, during this workshop and in previous discussions, about the failure of privacy notices. I wonder, however, how much of that failure stems from consumers simply not understanding the significance of an opt-in or opt-out, especially when it comes to their personal or identifiable data.

Obviously, there’s a joint role here for all stakeholders – an educated consumer is your best customer (to paraphrase the Syms slogan).  Companies should be thinking about ways to partner with regulators on public education initiatives – just take a look at what the alcohol industry has done by partnering with state AGs on underage drinking awareness campaigns.

The FTC's third Exploring Privacy workshop will be held on March 17th in Washington, DC. Here are the questions posed by the FTC in anticipation of this final workshop:

  • How can we best achieve accountability for best practices or standards for commercial handling of consumer data?  Can consumer access to and correction of their data be made cost effective?  Are there specific accountability or enforcement regimes that are particularly effective?
  • What potential benefits and concerns are raised by emerging business models built around the collection and use of consumer health information?  What, if any, legal protections do consumers expect apply to their personal health information when they conduct online searches, respond to surveys or quizzes, seek medical advice online, participate in chat groups or health networks, or otherwise?
  • Should “sensitive” information be treated or handled differently than other consumer information?  How do we determine what information is “sensitive”?  What standards should apply to the collection and uses of such information?  Should information about children and teenagers be subject to different standards and, if so, what should they be?
Categories: Regulation

Live Blogging the FTC’s Privacy Workshop

January 27, 2010

I will be live-blogging the FTC's next Exploring Privacy workshop, which will be held at Boalt Law School on the UC-Berkeley campus tomorrow at 8:30 a.m. You can find the agenda for the workshop here.

Please check back here at 8:30 a.m. tomorrow (the 28th) and click on the link below to access the live blog.

Click Here

Categories: Regulation

You’ve Been Tagged (and now you know it)

January 27, 2010

In a clever and clearly self-regulatory move, the Future of Privacy Forum and a coalition of companies have come up with a symbol that lets you know when advertisers are using your behavioral data and demographics to serve you ads. Read more about it in today's New York Times.

Categories: Regulation