Thinking About Location-Aware Apps?

October 19, 2010

Then please check out this white paper I co-authored recently with Janet Jaiswal, Director of Enterprise at TRUSTe.  TRUSTe is one of the few organizations that offers a privacy certification for mobile apps and services.  Over the last few months, I’ve been working with TRUSTe on their mobile privacy certification program. In this white paper, Janet and I zero in on geo-location – exploring the market dynamics, privacy concerns and some best practices around location-aware apps and services, and the hot new area of “geo-marketing.”

An FTC decision on Endorsements is Reverb-ing with Me…

September 15, 2010

I know, it’s been ages since I last posted to the Balancing Act.  It’s not been just a question of time; it’s also been a question of whether it was permissible to write about certain subjects under the FTC’s recently revised Endorsement Guides. Which is why I’ve been putting some thought into how the Endorsement Guides actually work in practice when it comes to blogging about a topic that your client – or your client’s client – may have an interest in.

Late last month, I was helped along in my thinking by the FTC’s first administrative decision under the revised Endorsement Guides – against Reverb Communications. Reverb is a company with a niche practice – it provides marketing and PR for video game companies that develop for the iPhone platform.  According to the FTC’s complaint, Reverb’s fee often included a percentage of the sales of its clients’ gaming apps, giving Reverb added incentive to boost those sales in the iTunes Store.  Reverb’s enterprising owner, Tracie Snitker, along with other Reverb employees, became regular visitors to the comments section of the iTunes Store.  Posing as customers, they posted positive comments about their clients’ products to encourage their sale.

Although the Reverb comments were mostly generic and unimaginative (“One of the Best,” “Really Cool Game”), they managed to attract the FTC’s attention.  One wonders if the vigilant Apple, patrolling the iTunes Store for other violations, helped the investigation along here… But back to Reverb.  Of course, the Commission found against this kind of practice, stating that endorsers must disclose any “material connection,” i.e., “any relationship that materially affects the weight or credibility of any endorsement and would not be reasonably expected by consumers.”  And when it comes to secondary liability, the Commission reminds us that under the Endorsement Guides, “someone who receives cash or in-kind payment to review a product or service, should disclose the material connection the reviewer shares with the seller of the product or service.  This applies to employees of both the seller and the seller’s advertising agency.”

Applying Reverb to this blog, I realize that it’s becoming harder to identify which areas I can and can’t write about.  Many of the issues I want to blog on are also issues of paramount importance to my clients – and in some cases, their clients.  The collective policy interests of these companies pretty much engulf the universe of issues that I’ve been writing about in the Balancing Act.  Recently, for instance, I wanted to write about a Third Circuit case that found no reasonable expectation of privacy in certain kinds of geo-location data (this was in the context of a government request, under ECPA).  I realized, however, that if I did so, I would also need to disclose my material connections to certain clients, particularly those developing location-based apps and services, for whom the treatment of geo-location data – especially by the courts – is a very important issue.  I didn’t want to do this, especially since some of these projects are still under development and remain trade secrets.

All of this adds up to one conclusion – I’ve decided to discontinue the Balancing Act – at least in the analytical, longer post format that I’ve been maintaining for over a year now. If you’ll forgive the pun, Reverb is a decision that’s “reverb”erating with me – especially when it comes to this blog.

Thanks for taking the time to read the Balancing Act during the past year; thanks also for the comments on specific posts.  I have learned so much from this experience – not just in terms of the substance, but also in observing, firsthand, how blogging and other web technologies are transforming how we read and get news.

As the ancient Greek orator Demosthenes once said, “small opportunities are often the beginning of great enterprises.” This could also be said of the many blogs and websites that populate the Internet today.  Web technologies – on your computer, phone or other wireless devices – are ushering in a revolution that will change the human experience forever.  Just look at the potential impact of web technologies in the publishing industry alone – resulting in access to more analysis, news and other content than ever before.  The last time we had a revolution of this magnitude – one that shook society at its core – was when Gutenberg invented the printing press.  As a result of having access to the printed word, literacy rates rose and people began to read and form opinions for themselves.  The Enlightenment and Reformation followed, and the rest is history.

I think the Web has the potential to be at least as significant as the printing press, if not more so.  So this is definitely a story that is “to be continued.” I’ll continue to enjoy observing the Web’s evolution across multiple platforms (desktop, mobile), as well as all of the attendant issues that will necessarily crop up when government seeks to restrain that evolution in the interest of public policy.  And I still plan to blog – especially on other blogs – and will cross-post to the Balancing Act.  You should see me posting on the ABA’s Secure Times blog in the near future (I was recently appointed a Vice Chair of the ABA’s Privacy & Data Security Committee for the 2010-2011 year).

So I’m sure I’ll see you out there – especially if you follow these issues on a regular basis.  Again, thanks for taking the time to read the Balancing Act.


CWAG Panel touches on the challenges of Privacy 3.0

Yesterday, the Conference of Western Attorneys General (“CWAG”) hosted a superb panel entitled “Privacy 3.0 – Emerging Enforcement & Policy Issues” at their annual meeting in Santa Fe, NM.  Featured on the panel were FTC Commissioner Julie Brill, Assistant AG Shannon Smith of the Washington Attorney General’s Office, Professor Chris Hoofnagle of UC Berkeley Law School and Professor Paul Ohm of the University of Colorado’s Law School.

The panelists discussed the enforcement approach to privacy and data security in the 1.0 (notice and choice) and 2.0 (harm-based analysis) eras – and how this approach may need to change in the current age given continuing challenges: the emergence of scholarship showing that “anonymization” is a fallacy, the continuing struggle to create clarity around key terms used in privacy, and the need to educate consumers about basic privacy concepts.  The panel also discussed the States’ approach to some of these developments – such as the Massachusetts data law.

You can view the full webcast on CWAG’s site.

Disclosure: I worked with CWAG to help pull this panel together.

I’m With Scalia on Quon

This past week, after reading and re-reading the Supreme Court’s decision in City of Ontario, California v. Quon, I’ve come to a conclusion that surprises me: I am in agreement with Justice Scalia, not the majority, when it comes to Quon.  In fact, I think Scalia hit it right on the head when he wrote in his concurrence, describing the majority opinion:

“…In saying why it is not saying more, the Court says much more than it should.”

The Quon court clearly wrestled with whether to address the big question posed in this case – whether employees have a reasonable expectation of privacy in certain electronic communications, i.e., messages sent through a government employer-issued pager.  This intellectual wrestling, and the majority’s thoughtful debate on the role of technology in the workplace, are particularly revealing.

For instance, the Court noted that it’s not just technology that’s evolving rapidly – society’s view of what is acceptable behavior vis-à-vis these technologies is also changing at a rapid pace.  The Court considered the position put forward in the amicus brief submitted by the ACLU, EFF, CDT and others – that in today’s workplace, employers expect and tolerate the personal use of office equipment and technology by employees (and therefore, employees should have a reasonable expectation of privacy in personal messages sent through an employer’s equipment or technology).  In contrast, the Court also contemplated the efficiency of a rule that reserves office equipment and technology for work use only – since mobile phones are now affordable and widely used, shouldn’t employees be required to purchase their own device for personal use at work?   Unfortunately, the analysis stopped here; the Court, for instance, didn’t distinguish between mobile phones (for voice and text), which remain affordable, and smartphones (for email and webmail), which remain fairly expensive.

In the end, the Court chose to sidestep the important question of whether employees have a reasonable expectation of privacy in electronic communications made in the government workplace. It concluded that it needed to “proceed with care” in determining the issue; it also cautioned the judiciary against “elaborating too fully” when evaluating “the Fourth Amendment implications” of technology in the employer-employee context.  And then, most surprisingly, the Court admitted that unlike in past decisions (specifically referencing the phone booth in Katz), it could no longer rely on its “own knowledge and experience” to conclude that there was a reasonable expectation of privacy in text messages sent via a government-owned pager.  Wow.  While Supreme Court justices were using phone booths in 1967, they aren’t using their pagers (or mobile phones) to send text messages in 2010.  Is technology advancing at a rate that is too fast for the highest court in the land?  Opportunities to decide important issues like this come before the Court only so often.  The disapproval in Justice Scalia’s concurrence is hard to miss:

“Applying the Fourth Amendment to new technologies may sometimes be difficult, but when it is necessary to decide a case, we have no choice.”

Yet by trying so hard to say nothing, the Court actually says something fairly significant.  By dithering over whether to use the Court’s precedent in O’Connor v. Ortega, but then using it anyway to determine that Jeff Quon had a reasonable expectation of privacy in text messages he sent from his work pager, the Court has – as Scalia puts it – “inadvertently” boosted the “operational realities” standard and case-by-case approach of the O’Connor court (the Court ultimately determined that even though there was a reasonable expectation of privacy, the City’s search was reasonable).  The facts here are telling – in one month, Jeff Quon, a member of the SWAT team at the Ontario, California police department, sent or received 456 messages, of which only 57 were work-related.  These messages were transmitted via a pager that Quon received from the City so that he and fellow SWAT team members could “mobilize and respond to emergency situations.”  In fact, the case began when the City – faced with recurring overages for Quon and another employee – requested text message transcripts from the City’s wireless provider to determine whether the City’s current limit for text messages needed to be raised.

Regardless, it’s likely that the O’Connor rule will have more of an influence on privacy – and workplace privacy in particular – after the Quon decision.  Scalia insists that the Court’s ruling in Quon will automatically drive lower courts to the “operational realities” of the O’Connor test.   Perhaps he’s right – but a case-by-case approach in privacy analysis is almost inevitable in the absence of a federal privacy framework and bright line rules.   Context will continue to be a key part of the privacy analysis.  It’s already an important part of the United States’ “sectoral” approach to privacy.  For instance, personal data is even more sensitive when it’s being handled for medical purposes (see HIPAA for more details).  Similarly, there’s been a lot of concern about geo-location technologies recently – even though these technologies have been around for years, it is their emergence on mobile phones, and the ability to combine a user’s profile data with their geo-location data, that has raised privacy concerns.

Scalia also predicts lots of litigation around the question of whether employees have a reasonable expectation of privacy in the workplace (what did the workplace policy say, was there one, etc.). That may be right – but it could also just be a natural evolution of the case-by-case approach outlined above – the lack of a bright line rule will always result in lots of litigation about what the standard should be.   There are, however, at least two counseling gems to be found in Quon that are worth considering in anticipation of such litigation.

  1. Make sure your company’s privacy policy covers all types of electronic communications that occur on work equipment. The City of Ontario had a written policy that covered email, computer and Internet use – but didn’t explicitly cover pager use (although Jeff Quon’s boss did attempt to extend the written policy verbally to pager use in staff meetings).  And remember, the Court did find that Jeff Quon had a reasonable expectation of privacy in the text messages sent via his work pager.
  2. Make sure your company’s work policy matches up to its actual practice. Even though the City issued pagers to Jeff Quon and other SWAT team members with the understanding that they were intended for work use, the City sent a contrary message when it allowed employees to use those same pagers for personal use (so long as the employee picked up the extra charges for exceeding usage).  To send a clear message, and match its written policy to actual practice, the City should have banned personal use of the SWAT pagers altogether.

Of course, Scalia was pushing for a bright line rule in this case – one that would have applied the Fourth Amendment to all employers, not just government.  At a time when companies are eager for privacy guidance, are bright line rules for workplace privacy needed?  This is the only part of the Scalia concurrence that I’ve struggled with.  On the one hand, we don’t want inflexible standards that ignore the realities of the present-day workplace.  On the other, a rule that says there is no reasonable expectation of privacy when the workplace privacy policy says otherwise (assuming that policy is communicated properly) could be very helpful for compliance – especially when you consider the companies struggling to navigate the maze of court rulings, FTC rules and guidance, and sectoral laws that comprise US privacy law.  The need becomes even more compelling as, once again, we are faced with the real possibility that Congress will not pass a federal privacy law this year – even as the FTC looks to Congress for guidance on how to shape a regulatory framework around privacy and security.

Which brings me to my final point of agreement with Scalia.  When it comes to Quon, the Court chose not to address the important question – in short, it punted.  Or as Scalia puts it:

“… The-times-they-are-a-changin’ is a feeble excuse for disregard of duty.”

FTC announces its 30th data protection settlement – with Twitter

Remember when hackers got into Twitter’s databases, allowing them to send out phony tweets from the likes of then President-elect Obama and certain Fox news anchors? Well, the FTC was definitely paying attention to that incident.   Today, the agency announced an investigation and settlement of Twitter’s data security practices – its first-ever case against a social networking service (and 30th data security case to date).  The term of the settlement is 20 years, and Twitter will be required to set up a comprehensive data security program, implementing privacy and security controls in its systems and workplace.  The company will also be subject to an independent, third-party audit of its practices every other year for the next 10 years. Read all the details on the FTC’s site.

Announcing: The Internet of Things

For those of us who believe that technology is a foundation for all industry – a horizontal feature, not a vertical – the recent interest in The Internet of Things is particularly vindicating.  The Internet of Things, or “IoT” for short, describes the larger network created by the interconnection of the traditional Internet and items you use every day in the offline world.  IoT was mostly relegated to geek speak when it was first coined by a virtual team of RFID technology researchers in 1999, but lately it’s been experiencing a resurgence – as a pithy way to describe the interconnected network of online and offline worlds, especially when the scenario involves RFID chips embedded in everyday items, allowing them to “talk” to the Internet.  Real world application? A user can communicate with and control an RFID-enabled item – a car or kitchen appliance, for example – using their computer or mobile phone.  This is the idea behind some of those really helpful mobile apps – like the one that turns your phone into a remote oven control.
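
To make that concrete, here’s a minimal sketch of what a “remote oven control” app might do under the hood – send a simple command to a network-connected appliance.  Everything here (the device address, endpoint and command format) is hypothetical; real products each define their own interfaces.

    # Hypothetical sketch: controlling a network-connected oven over HTTP.
    # The address, endpoint and command schema are invented for illustration.
    import requests

    OVEN_URL = "http://192.168.1.42/api/oven"  # imaginary local device endpoint

    def preheat(temperature_f: int) -> bool:
        """Ask the oven to preheat; True if the device acknowledged."""
        resp = requests.post(
            OVEN_URL,
            json={"command": "preheat", "temperature_f": temperature_f},
            timeout=5,
        )
        return resp.status_code == 200

    if __name__ == "__main__":
        print("Oven acknowledged:", preheat(375))

The same pattern – a small command vocabulary exposed over the network – is what lets a phone, or any other connected device, stand in as the remote control.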

At the moment, the IoT is really just a concept being researched and beta’ed. A recent presentation by Dan Caprio of McKenna Long & Aldridge suggests that in a decade the picture will be very different, with billions of devices connected to the Internet worldwide.  In the US, the impact will be significant; Caprio estimates that each American will own at least 50 internet-enabled items, most of which will be “tagged” to that specific user.  Currently, the Internet could not handle this volume of web-enabled device traffic.  For IoT to progress from vision to reality, there must be “widespread deployment” of the next version of the internet protocol – IPv6 – which will allow for billions of additional, unique IP addresses.   This will involve considerable investment by the private sector – investment that probably won’t happen until regulators decide how to approach the big policy issues implicated by the IoT’s expansion – interoperability, data flows across jurisdictions, copyright, and of course, privacy.  This probably sounds familiar – the scenario is similar to the current discussions around what should constitute effective regulation of cloud computing.
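
The address math makes the point quickly.  Here’s a back-of-the-envelope sketch in Python, using Caprio’s 50-devices-per-person estimate and a rough 2010 US population figure (my assumption):

    # Why IoT growth depends on IPv6: compare address spaces to device counts.
    ipv4_space = 2 ** 32    # ~4.3 billion unique addresses
    ipv6_space = 2 ** 128   # ~3.4 * 10**38 unique addresses

    us_population = 310_000_000     # approximate, 2010
    devices = us_population * 50    # Caprio's estimate: 15.5 billion US items

    print(f"{devices:,} US devices vs {ipv4_space:,} IPv4 addresses")
    print("IPv4 sufficient?", devices < ipv4_space)  # False - the US alone overflows IPv4
    print("IPv6 sufficient?", devices < ipv6_space)  # True, by an astronomical margin

The US alone would need more than three times the entire IPv4 address space – before counting the rest of the world.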

The implications for marketers and retailers are enormous.  The persistent harvesting of data at all points of customer contact – website, RFID-enabled device and retail stores – will help online marketers develop comprehensive customer profiles that will be worth their weight in gold to advertisers.  Retailers are already aware of the possibilities; Macy’s and Best Buy recently signed development deals for mobile, location-based applications that would “enhance the brick and mortar experience” by providing access to discounts and other promotions to offline shoppers.

But the privacy concerns posed by the IoT are enormous too.  A recent New York Times article discussed the probability of “online redlining” – where products and services are offered to some, but not all, customers, based on usage data and statistical predictions.  Another recent article talks about how retailers use data to track spending habits – truly, the era of Big Brother surveillance in retail has already arrived.  These are just two examples of where the connection between online and offline worlds could result in more consumer harm than benefit.

Regulators are starting to wake up to the connection.  Last week, Neelie Kroes, Commissioner for the EU’s Digital Agenda, spoke publicly about how an IoT might unfold and what policies would advance its development.  Her remarks followed the recent announcements of the EU’s Digital Agenda – which include increasing R&D for the technologies that support advanced Internet deployment and adoption, and increasing user trust in broadband technologies.  I have yet to see the IoT referenced publicly by a US regulator (I don’t recall it being raised in the recent FTC privacy workshops – but correct me if I’m wrong).  The issue is addressed to some degree in the discussion draft of federal privacy legislation that’s currently being circulated by Congressmen Rick Boucher and Cliff Stearns; the bill’s requirements apply equally to online and offline activities, and require an explicit user “opt-in” before location-based information is used for ad-targeting.

If anything, the IoT and the myriad privacy concerns raised by tracking an individual’s use of everyday items are a helpful reminder that privacy concerns are not limited to the online world alone.   Indeed, if we want users to trust in this growing network of online and offline systems, then regulators must start treating the IoT as part of the Internet as a whole. And while online-specific issues – most prominently data privacy and security – have shaped much of the discussion around what constitutes adequate “information” privacy, it’s likely that offline issues will begin to influence the discussion.  As Jennifer Barrett of Acxiom, one of the nation’s largest data brokers, recently stated, “the clear distinction that we had a number of years ago between online and offline [marketing methods] is blurring.”

What’s next? Perhaps we’ll look to Commissioner Kroes for the last word.  When asked whether she had a specific plan to support the IoT, Kroes borrowed a phrase – this time from Spanish poet Antonio Machado – saying simply: “We have no road. We make the road by walking.”  I think that’s code for more discussions, more workshops, and definitely more speeches on the topic.

Tipping Towards an Opt-In

A few years ago, I became an instant fan of Malcolm Gladwell’s groundbreaking book – “The Tipping Point” – based on an epidemiological theory that says that, in aggregate, “little things” can make a big difference.  Since then, I’ve observed the phenomenon play out on the policy stage several times – financial reform and healthcare are two immediate examples that come to mind – and I wonder if the theory has any application to what’s currently happening with online privacy.  I think it does – particularly if you view a tipping point in scientific terms, i.e., the point at which a series of successive events displaces an object from a state of stable equilibrium into a new equilibrium state qualitatively dissimilar from the first.

To say that online privacy was ever in a state of stable equilibrium is a stretch.  We are, however, approaching the end of a current era in online advertising and marketing – an era in which companies captured personal and confidential data from users, and then monetized that data to sell ads back to those very same users, often without the users’ authorization or knowledge. That state of equilibrium has been threatened by many events in the last few months – market developments, consumer outcry and regulatory attention all converging to catapult data privacy and security onto the national agenda and into the mainstream press.  Some commentators, such as Jeff Chester, have characterized these events as a perfect storm; I see them a bit differently – not a storm, but a series of occurrences that finally “tipped” the issue, as companies attempted to push the privacy envelope with various features that compromised users’ privacy (and in some cases defied their express wishes to keep their data private).  Each of these features involved sharing data with a third party, and not surprisingly, each triggered a privacy outcry – because none provided a meaningful way for users to opt out before personal data was exposed.

It’s amazing to think that most of these pivotal events only happened during the last three months.  To recap:

February 9, 2010 – Google launches Google Buzz, and overnight, transforms users’ Gmail accounts into social networking pages, exposing personal contacts.  Google later remedies the situation by making the feature opt-in.

April 27, 2010 – Four Democratic Senators, led by Chuck Schumer of New York, send a letter to Facebook CEO Mark Zuckerberg complaining about the privacy impact of Facebook services, including its instant personalization feature (which exposed user profile data without authorization at launch).  Senator Schumer follows up his letter with a formal request urging the FTC to investigate Facebook.  Facebook eventually announces new privacy controls.

May 5, 2010 – EPIC and a coalition of other advocacy organizations file this complaint, urging the FTC to investigate Facebook.  In the complaint, they assert that “Facebook’s business practices directly impact more American consumers than any other social network service in the United States.”

May 14, 2010 – Google announces, via a post on its policy blog, that its Street View cars have inadvertently been capturing payload data from open WiFi networks – in violation of US, European and other global data protection laws – for over three years.

May 21, 2010 – The Wall Street Journal reports that a group of social networking sites – including Facebook, MySpace and Digg – routinely share user profile data with advertisers, despite public assurances to the contrary.

The result? With each successive product or feature launch, the privacy debate is now tipping towards a privacy regime that could be much stricter than anything we’ve seen before – a requirement that companies get a user’s affirmative opt-in to any use of personal data for advertising and marketing purposes.

Privacy nerds may want to revisit the words of David Vladeck, head of the FTC’s Bureau of Consumer Protection, in a New York Times interview that took place last August, i.e., before the privacy mishaps of the last three months.  When asked whether the FTC would mandate an opt-in standard for user disclosures, Mr. Vladeck responded:

“The empirical evidence we’re seeing is that disclosures on their own don’t work, particularly disclosures that are long, they’re written by lawyers, and they’re written largely as a defense to liability cases. Maybe we’re moving into a post-disclosure environment. But there has to be greater transparency about what’s going on. Until I see evidence otherwise, we have to presume that most people don’t understand, and the burden is going to be on industry to persuade us that people really are well informed about this.”

The emphasis on transparency becomes even more important with the impending rollout of the FTC’s privacy framework this summer.  Will the FTC make an affirmative opt-in mandatory in all instances where personal data is being shared with a third party?  Clearly, an opt-in is one of the best ways to ensure transparency, and to give users meaningful notice about what data is being collected.  The question is whether an opt-in requirement would be so cumbersome it would turn users off the service altogether.  For instance, would an opt-in be required just once – before the feature is first launched – or each time the feature is used?

Also, it’s unclear whether the FTC’s framework will derive strength (or weakness) from a federal privacy law, if such a law does indeed pass this session.  Critics on both sides have mostly panned the House legislation, i.e., the Boucher-Stearns bill, but there is news of another, more stringent bill being drafted by Senator Schumer – who, as outlined earlier, reached his tipping point with Facebook.

I saved my most important “little thing” for last. Even if you don’t believe that the privacy debate has reached a tipping point, consider this: in June, the Supreme Court will issue its decision in City of Ontario v. Quon. This is the first time that the Supremes have considered the crucial question of what expectation of privacy users have in their electronic communications.  Their decision will most likely impact any regulatory or legislative scheme around privacy currently being proposed by the federal agencies or Congress.  Most importantly, a Supreme Court decision that finds an expectation of privacy in electronic communications will almost certainly translate into increased obligations for companies that deal in these types of electronic communications and data.  A tipping point?  Absolutely.  In fact, such a decision would signal something much bigger (to quote another popular book title) – a Game Change for advertising and marketing on the web.

Dartmouth Study Finds P2P Networks Hemorrhaging Sensitive Data

While peer-to-peer may be a good metaphor for human interaction – social networking comes to mind – it may not always be the greatest model for the sharing of sensitive information.   Your medical history, for instance, shouldn’t be shared with others on a P2P network.  Is this happening? Absolutely.  A study presented this week by Professor Eric Johnson of Dartmouth’s Tuck School of Business describes how researchers found mounds of sensitive medical data on popular P2P networks: medical history, contact information, insurance details, treatment data, diagnoses and psychiatric evaluations – all mixed in with the song and movie downloads that usually make up the traffic on these networks.

So, how is this sensitive medical data getting on P2P networks in the first place?  Primarily through an employee’s computer – the employee downloads a P2P application on her work machine, and then uses that same machine to process sensitive medical data at work.  Sometimes the employee takes work home, making edits to a spreadsheet on her home computer (yes, a hospital-generated spreadsheet containing SSNs and other personally identifiable information for employees was one of the documents that the Dartmouth researchers found).  In both cases, the user configures the P2P application incorrectly, making all of their personal data visible to other users on the P2P network.  Once that happens, the data is a prime target for cybercriminals and fraudsters who engage in identity theft.  Sensitive medical data is a particularly lucrative prize.  As Professor Johnson put it: “For criminals to profit, they don’t need to ‘steal’ an identity, but only to borrow it for a few days, while they bill the insurance carrier thousands of dollars for fabricated medical bills.”
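
To see how easily this happens, consider a toy sketch of the misconfiguration itself: a P2P client whose “shared folder” is set to the user’s entire home directory rather than a dedicated downloads folder.  (The folder layout and file names below are invented for illustration.)

    # Toy illustration of an overly broad P2P share. A real client would
    # serve every file under SHARED_ROOT to any peer on the network.
    from pathlib import Path

    SHARED_ROOT = Path.home()  # the misconfiguration: sharing everything
    SENSITIVE_HINTS = ("ssn", "patient", "medical", "insurance", "diagnosis")

    def exposed_sensitive_files(shared_root: Path) -> list[Path]:
        """Files under the share whose names suggest sensitive content."""
        return [
            p for p in shared_root.rglob("*")
            if p.is_file() and any(h in p.name.lower() for h in SENSITIVE_HINTS)
        ]

    # A safer default confines sharing to one dedicated folder, e.g.:
    # SHARED_ROOT = Path.home() / "P2P-Downloads"
    for f in exposed_sensitive_files(SHARED_ROOT):
        print("Visible to every peer:", f)

One wrong setting, and the hospital spreadsheet edited at home becomes just another file served to the network.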

Arguably, this is an area of concern for companies that are covered by HIPAA and deal with sensitive medical data online. But although HIPAA and the FTC’s Health Breach Notification Rule set out requirements for what companies need to do in case of a “breach” of sensitive medical data, they give little guidance to companies on what policies they could be implementing internally to prevent such breaches in the first place. Some may view this as a nod to self-regulation, but the truth is there are “best practices” that both HHS and the FTC could endorse.  A simple best practice that addresses the “data hemorrhaging” Professor Johnson alludes to in his study would be an internal policy against the use of P2P applications on machines that also handle sensitive medical data.  Another best practice: companies that deal with this type of data should consider partnering with regulators and health care providers to educate patients on the importance of securing their medical data – and how certain file-sharing technologies, when configured incorrectly, can promote medical ID theft.  Already, there’s collateral for such an effort – the FTC’s tips to deter medical ID theft, which could be required patient reading (along with those HIPAA notices).


Unintended Consequences? The Multistate Case Against Craigslist

Yesterday’s news about a Connecticut subpoena issued to Craigslist reminds us that sometimes, actions may have unintended consequences.

In November 2008, Craigslist negotiated a groundbreaking settlement with 43 states led by Connecticut AG Dick Blumenthal, concerning illegal use of its online listing services.  Specifically, Craigslist agreed to wall off the “erotic services” or adult ads section of its site and implement the appropriate age-verification procedures (via credit card).  Then, going a very charitable step further, Craigslist also agreed to donate 100% of its net revenue from the adult area of its site to charity (and have that net revenue verified by an external auditor as part of the process).

Fast forward to May 2010, and it’s déjà vu all over again. Led by Connecticut, a group of 39 states is investigating the prevalence of prostitution ads on Craigslist, claiming that the site is earning “huge profits” from the sale of such ads.   According to a detailed subpoena issued to Craigslist yesterday, the multistate group is “seeking evidence that the company is fulfilling its public promise to fight advertisements for prostitution and other illegal activity.”

Ironically, the age and credit card verification procedures Craigslist was required to implement for its adult-only section in 2008 have boosted that section’s growth.  According to the multistate group, Craigslist now makes over $36 million from hosting “adult and possibly illegal ads.”

The allegations – that the company may be withholding substantial amounts of money from charities – seem a bit disjointed.  This is Craigslist – a company that has always emphasized free over profit and has yet to fully exploit its enormous streams of traffic. Even though he knows that an IPO would make him a billionaire, Craigslist founder Craig Newmark refuses to take the company public.  Remember that unflattering article in Wired last August?  Clearly, this is a company that appears indifferent to income.

And yet, reading the subpoena requests (some of which are reproduced in the CT AGO’s press release), it is clear that the States are pursuing a theory that Craigslist is reneging on its settlement promise, and not donating all of the profits from its adult ad sales to charity.  I hope their allegations do not have an unintended consequence.  The damage to Craigslist’s reputation within its ecosystem – especially if these allegations are proven false – could prove particularly costly.

Note to Facebook: Privacy is Personalization Too

April 29, 2010

Just last week, Facebook introduced “instant personalization” – a feature that extends your Facebook experience via familiar Facebook “plug-ins” (Activity Feed, the “Like” and “Recommend” buttons) – to partner websites such as Yelp and nfl.com.  Already, the feature is drawing much criticism – this time, from a Democratic quartet of senators – Begich (AK), Bennet (CO), Franken (MN) and Schumer (NY) – who are urging Facebook to change its policies on sharing user data with third parties. Their letter to Facebook founder Mark Zuckerberg highlights three main concerns: Facebook’s continually elastic definition of what it considers personal information, the storage of Facebook user profile data with advertisers and other third parties, and the aforementioned “instant personalization” feature. The Senators acknowledge the FTC’s role in examining the issue but also advocate that Facebook take “swift and productive steps to alleviate the concerns of its users” while FTC regulation is pending.  On Monday, Senator Schumer followed up with an open letter urging the FTC to investigate Facebook’s privacy practices.

Instant personalization is the latest Facebook feature to draw flak for its perceived impact on privacy.  It’s actually a very cool technology, designed for people who want to publicly share their likes and dislikes with their Facebook network.  It works by sharing certain Facebook profile data with a partner site.  The feature is personalization defined – every user using Facebook plug-ins on a partner site will have a different experience based on who they are friends with on Facebook.

A recent post on the Facebook blog describes the process:

“At a technical level, social plugins work when external websites put an iframe from Facebook.com on their site—as if they were agreeing to give Facebook some real estate on their website. If you are logged into Facebook, the Facebook iframe can recognize you and show personalized content within the plugin as if the visitor were on Facebook.com directly. Even though the iframe is not on Facebook, it is designed with all the privacy protections as if it were (emphasis added).”
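
In other words, the partner page hands Facebook a small window, and Facebook fills it with content tailored to whoever is logged in.  Stripped of the web plumbing, the personalization step is conceptually simple – here’s a toy sketch (all names and data invented; not Facebook’s actual API):

    # Conceptual toy: per-visitor personalization on a partner site.
    # Each visitor sees only their own friends' activity.
    def personalize(visitor, friends, activity):
        """Return the activity items from the visitor's friends."""
        return [
            f"{friend} {activity[friend]}"
            for friend in friends.get(visitor, [])
            if friend in activity
        ]

    friends = {"alice": ["bob", "carol"]}
    activity = {"bob": "liked 'Best Pizza in SF'", "dave": "recommended nfl.com"}

    # Alice sees Bob's activity; another visitor would see something different.
    print(personalize("alice", friends, activity))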

Note the last sentence of the Facebook excerpt above – it seems to suggest that, as a Facebook user, you don’t have to worry about privacy whether you are on Facebook or using a Facebook plug-in on another site.  So what’s the flap about?  What is fuelling the concern over Facebook’s privacy practices – from the Democratic Senators’ letter, to the ongoing objections from EPIC, to this thoughtfully penned article from a Facebook user who also happens to work for PC World?

I think it has to come down to notice – especially to users.  Facebook debuted “instant personalization” as an opt-out feature that automatically exposed a user’s Facebook profile data to partner sites. This has raised concerns with regulators, and with certain Facebook users too – just take a look at the comments to this recent Facebook blog post on the topic. To further complicate things, Facebook makes it particularly difficult to opt out of the instant personalization feature.

With this latest move, Facebook reaches outside its walled garden to extend its reach across the web – I almost think of it now as the world’s largest social platform (not network).  Consider, for instance, that it took Microsoft nearly twenty years to reach an install base of 1 billion for Windows; Facebook, now approaching 500 million users, will probably reach that number in less than a decade. As Facebook continues to evolve its platform strategy, its processes – particularly around informing users about what it plans to do with their profile data – must be better defined.  I think this goes beyond a static privacy policy – it may even involve engaging select users at the beta stage to surface privacy concerns early (like whether to launch a feature as opt-in or opt-out).  Earning the trust of your ecosystem is essential for any platform company, and when it comes to Facebook, users are an essential part of the ecosystem equation.

For the most part, Facebook users divulge data about themselves with the expectation that this data will be used on Facebook only; sharing that same data with other sites, even if it’s via a Facebook plug-in, is clearly not part of that expectation.  If Facebook wants to use user profile data for secondary purposes, it should first get the user’s permission to do so.  Such a system honors a user’s privacy preferences – which are also a personalization of sorts.  And when it comes to privacy, Facebook should be doing everything it can to ensure that this type of personalization is preserved.