Archive for the ‘policy’ Category

New Facebook Groups Considered Somewhat Harmful

Thursday, October 7th, 2010

I always think of things I should have added in the hour after making a post. Sigh. Here goes…

The situation is perhaps not so different from mailing lists, Google groups or any number of similar systems. I can set up one of those and add people to them without their consent — even people who are not my friends. Even people whom I don’t know and who don’t know me. Such email-oriented lists can also have public membership lists. The only check on this is that most mailing list frameworks send a notice to people being added informing them of the action. But many frameworks allow the list owner to suppress such notifications.

But still, Facebook seems different, based on how the rest of it is configured and on how people use it. I believe a common expectation is that if you are listed as a member of an open or closed group, you are a willing member.

When you get a notification that you are now a member of the Facebook group Crazy people who smell bad, you can leave the group immediately. But we have Facebook friends, many of them in fact, who only check in once a month or even less frequently. Notifications of their being added to a group will probably be missed.

Facebook should fix this by requiring that anyone added to a group confirm that they want to be in the group before they become members. After fixing it, there’s lots more that can be done to make Facebook groups a powerful way for assured information sharing.
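To make the suggestion concrete, here is a minimal sketch of what an opt-in flow could look like (all names are made up for illustration; this is not Facebook’s API). The key invariant: an add only creates a pending invitation, and nobody appears on a member list without confirming.

```python
# Hypothetical sketch of opt-in group membership. Adding someone
# only records a pending invitation; they are listed as a member
# only after explicitly confirming.

class Group:
    def __init__(self, name):
        self.name = name
        self.members = set()   # confirmed, publicly listed members
        self.pending = set()   # invited, not yet confirmed, never listed

    def invite(self, inviter, invitee):
        # Only an existing member may extend an invitation.
        if inviter not in self.members:
            raise PermissionError(f"{inviter} is not a member of {self.name}")
        self.pending.add(invitee)   # triggers a notification, nothing more

    def confirm(self, invitee):
        # Membership becomes visible only after explicit consent.
        if invitee in self.pending:
            self.pending.remove(invitee)
            self.members.add(invitee)

    def decline(self, invitee):
        self.pending.discard(invitee)


g = Group("Model Trains of Maryland")
g.members.add("alice")
g.invite("alice", "bob")
assert "bob" not in g.members   # bob is not listed until he confirms
g.confirm("bob")
assert "bob" in g.members
```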

New Facebook Groups Considered Harmful

Thursday, October 7th, 2010

Facebook has rolled out a new version of Groups, announced on the Facebook blog.

“Until now, Facebook has made it easy to share with all of your friends or with everyone, but there hasn’t been a simple way to create and maintain a space for sharing with the small communities of people in your life, like your roommates, classmates, co-workers and family.

Today we’re announcing a completely overhauled, brand new version of Groups. It’s a simple way to stay up to date with small groups of your friends and to share things with only them in a private space. The default setting is Closed, which means only members see what’s going on in a group.”

There are three kinds of groups: open, closed and secret. Open groups have public membership listings and public content. Closed groups have public membership but private content. For secret groups, both the membership and content are private.
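In other words, the three types differ in exactly two visibility bits, something like this (an illustrative model, not Facebook’s code):

```python
# The three group types as two visibility flags (illustration only).
from dataclasses import dataclass

@dataclass(frozen=True)
class GroupVisibility:
    members_public: bool   # can non-members see who belongs?
    content_public: bool   # can non-members read the posts?

GROUP_TYPES = {
    "open":   GroupVisibility(members_public=True,  content_public=True),
    "closed": GroupVisibility(members_public=True,  content_public=False),
    "secret": GroupVisibility(members_public=False, content_public=False),
}
```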

A key part of the idea is that the group members collectively define who is in the group, spreading the work of setting up and maintaining the group over many people.

But a serious issue with the new Facebook group framework is that a member can unilaterally add any of their friends to a group. No confirmation is required from the person being added. This was raised as an issue by Jason Calacanis.

The constraint that one can only add a Facebook friend to a group one already belongs to does offer some protection against ending up in unwanted groups (e.g., groups created by spammers). But it could still lead to problems. I could, for example, create a closed group named Crazy people who smell bad and add all of my friends without their consent. Since the group is closed rather than secret, anyone can see who is in it. Worse yet, I could then leave the group. (By the way, let me know if you want to join any of these groups.)

While this might just be an annoying prank, it could spin out of control — what might happen if one of your so-called friends adds you to the new, closed “Al-Qaeda lovers” group?

The good news is that this should be easy to fix. After all, Facebook does require confirmation for the friend relation and has a mechanism for recommending that friends like pages or try apps. Either mechanism would work for inviting others to join groups.

We have started working with a new group-centric secure information sharing model being developed by Ravi Sandhu and others as a foundation for better access and privacy controls in social media systems. It seems like a great match.

See update.

TaintDroid catches Android apps that leak private user data

Thursday, September 30th, 2010

Ars Technica has an article on bad Android apps, Some Android apps caught covertly sending GPS data to advertisers.

“The results of a study conducted by researchers from Duke University, Penn State University, and Intel Labs have revealed that a significant number of popular Android applications transmit private user data to advertising networks without explicitly asking or informing the user. The researchers developed a piece of software called TaintDroid that uses dynamic taint analysis to detect and report when applications are sending potentially sensitive information to remote servers.

They used TaintDroid to test 30 popular free Android applications selected at random from the Android market and found that half were sending private information to advertising servers, including the user’s location and phone number. In some cases, they found that applications were relaying GPS coordinates to remote advertising network servers as frequently as every 30 seconds, even when not displaying advertisements. These findings raise concern about the extent to which mobile platforms can insulate users from unwanted invasions of privacy.”

TaintDroid is an experimental system that “analyses how private information is obtained and released by applications ‘downloaded’ to consumer phones”. A paper on the system will be presented at the 2010 USENIX Symposium on Operating Systems Design and Implementation (OSDI) next month.
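The real system instruments Android’s Dalvik virtual machine, but the core idea of dynamic taint analysis is easy to illustrate: values read from sensitive sources carry tags that propagate through computation and are checked when data reaches a sink such as the network. A toy version in Python (not TaintDroid’s code):

```python
# Toy dynamic taint tracking in the spirit of TaintDroid (which
# actually instruments the Dalvik VM; this is not its code).

LOCATION = "location"

class Tainted:
    """A value bundled with the set of sensitive sources it came from."""
    def __init__(self, value, tags=frozenset()):
        self.value, self.tags = value, frozenset(tags)

    def __add__(self, other):
        # Propagation rule: a result carries the union of its inputs' tags.
        if isinstance(other, Tainted):
            return Tainted(self.value + other.value, self.tags | other.tags)
        return Tainted(self.value + other, self.tags)

def get_gps():   # a taint source
    return Tainted("39.25,-76.71", {LOCATION})

def network_send(host, data):   # a taint sink
    if isinstance(data, Tainted) and data.tags:
        print(f"ALERT: sending {sorted(data.tags)} to {host}")
    # ...actual transmission would go here...

msg = Tainted("pos=") + get_gps()
network_send("ads.example.com", msg)   # ALERT: sending ['location'] to ...
```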

TaintDroid: An Information-Flow Tracking System for Realtime Privacy Monitoring on Smartphones, William Enck, Peter Gilbert, Byung-gon Chun, Landon P. Cox, Jaeyeon Jung, Patrick McDaniel, and Anmol N. Sheth, OSDI, October 2010.

The project, Realtime Privacy Monitoring on Smartphones, has a good overview site with a FAQ and demo.

This is just one example of a rich and complex area full of trade-offs. We want our systems and devices to be smarter and to really understand us — our preferences, context, activities, interests, intentions, and pretty much everything short of our hopes and dreams. We then want them to use this knowledge to better serve us — selecting music, turning the ringer on and off, alerting us to relevant news, etc. Developing this technology is neither easy nor cheap and the developers have to profit from creating it. Extracting personal information that can be used or sold is one model — just as Google and others do to provide better ad placement on the Web.

Here’s a quote from the Ars Technica article that resonated with me.

“As Google says in its list of best practices that developers should adopt for data collection, providing users with easy access to a clear and unambiguous privacy policy is really important.”

We, and many others, are trying to prepare for the next step — when users can define their own privacy policies and these will be understood and enforced by their devices.
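Continuing the toy example above, user-defined policies could be consulted at exactly the sinks that taint tracking monitors. A hedged guess at what that might look like (the vocabulary is invented):

```python
# Hypothetical user-authored privacy rules, checked at a network sink.
from dataclasses import dataclass

@dataclass
class Rule:
    data_tag: str      # e.g., "location"
    destination: str   # domain suffix the rule applies to
    allow: bool

# A user's policy: never share location with this ad network.
my_policy = [Rule("location", "ads.example.com", allow=False)]

def permitted(policy, tags, host):
    for rule in policy:
        if rule.data_tag in tags and host.endswith(rule.destination):
            return rule.allow
    return True   # default-allow; a cautious user might flip this

assert not permitted(my_policy, {"location"}, "ads.example.com")
assert permitted(my_policy, {"phone-id"}, "ads.example.com")
```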

Usability determines password policy

Monday, August 16th, 2010

Some online sites let you use any old five-character string as your password for as long as you like. Others force you to pick a new password every six months and it has to match a complicated set of requirements — at least eight characters, mixed case, containing digits, letters, punctuation and at least one umlaut. Also, it better not contain any substrings that are legal Scrabble words or match any past password you’ve used since the Bush 41 administration.

A recent paper by two researchers from Microsoft concludes that an organization’s usability requirements are the main factor determining the complexity of its password policy.

Dinei Florencio and Cormac Herley, Where Do Security Policies Come From?, Symposium on Usable Privacy and Security (SOUPS), 14–16 July 2010, Redmond.

We examine the password policies of 75 different websites. Our goal is to understand the enormous diversity of requirements: some will accept simple six-character passwords, while others impose rules of great complexity on their users. We compare different features of the sites to find which characteristics are correlated with stronger policies. Our results are surprising: greater security demands do not appear to be a factor. The size of the site, the number of users, the value of the assets protected and the frequency of attacks show no correlation with strength. In fact we find the reverse: some of the largest, most attacked sites with greatest assets allow relatively weak passwords. Instead, we find that those sites that accept advertising, purchase sponsored links and where the user has a choice show strong inverse correlation with strength.

We conclude that the sites with the most restrictive password policies do not have greater security concerns, they are simply better insulated from the consequences of poor usability. Online retailers and sites that sell advertising must compete vigorously for users and traffic. In contrast to government and university sites, poor usability is a luxury they cannot afford. This in turn suggests that much of the extra strength demanded by the more restrictive policies is superfluous: it causes considerable inconvenience for negligible security improvement.
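A rough way to quantify what “stronger policy” means here is the minimum entropy a policy forces: the weakest password the rules will accept, measured in bits as N times log2(C) for minimum length N over a character set of size C. For example:

```python
# Back-of-the-envelope strength of the *weakest* password a policy
# accepts: minimum length N over a charset of size C gives N*log2(C) bits.
import math

def min_policy_strength_bits(min_length, charset_size):
    return min_length * math.log2(charset_size)

print(min_policy_strength_bits(6, 26))            # lowercase-only: ~28.2 bits
print(min_policy_strength_bits(8, 26 + 26 + 10))  # mixed case + digits: ~47.6 bits
```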

h/t Bruce Schneier

An ontology of social media data for better privacy policies

Sunday, August 15th, 2010

Privacy continues to be an important topic surrounding social media systems. A big part of the problem is that virtually all of us have a difficult time thinking about what information about us is exposed and to whom and for how long. As UMBC colleague Zeynep Tufekci points out, our intuitions in such matters come from experiences in the physical world, a place whose physics differs considerably from the cyber world.

Bruce Schneier offered a taxonomy of social networking data in a short article in the July/August issue of IEEE Security & Privacy. A version of the article, A Taxonomy of Social Networking Data, is available on his site.

“Below is my taxonomy of social networking data, which I first presented at the Internet Governance Forum meeting last November, and again — revised — at an OECD workshop on the role of Internet intermediaries in June.

  • Service data is the data you give to a social networking site in order to use it. Such data might include your legal name, your age, and your credit-card number.
  • Disclosed data is what you post on your own pages: blog entries, photographs, messages, comments, and so on.
  • Entrusted data is what you post on other people’s pages. It’s basically the same stuff as disclosed data, but the difference is that you don’t have control over the data once you post it — another user does.
  • Incidental data is what other people post about you: a paragraph about you that someone else writes, a picture of you that someone else takes and posts. Again, it’s basically the same stuff as disclosed data, but the difference is that you don’t have control over it, and you didn’t create it in the first place.
  • Behavioral data is data the site collects about your habits by recording what you do and who you do it with. It might include games you play, topics you write about, news articles you access (and what that says about your political leanings), and so on.
  • Derived data is data about you that is derived from all the other data. For example, if 80 percent of your friends self-identify as gay, you’re likely gay yourself.”

I think most of us understand the first two categories and can easily choose or specify a privacy policy to control access to information in them. The rest, however, are more difficult to think about and can lead to a lot of confusion when people are setting up their privacy preferences.

As an example, I saw some nice work at the 2010 IEEE International Symposium on Policies for Distributed Systems and Networks on “Collaborative Privacy Policy Authoring in a Social Networking Context” by Ryan Wishart et al. from Imperial College that addressed the problem of incidental data in Facebook. For example, if I post a picture and tag others in it, each of the tagged people can contribute additional policy constraints that can narrow access to it.
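One way to think about the underlying idea (a sketch, not the paper’s actual algorithm): each stakeholder in a photo contributes a set of permitted viewers, and the effective audience is the intersection, so a tagged person can narrow access but never widen it.

```python
# Effective audience of a tagged photo as the intersection of every
# stakeholder's permitted viewers (illustrative sketch only).
from functools import reduce

def effective_audience(stakeholder_policies):
    return reduce(set.intersection, stakeholder_policies)

policies = [
    {"alice", "bob", "carol", "dave"},   # uploader's audience
    {"alice", "bob", "eve"},             # a tagged friend's constraint
]
print(effective_audience(policies))      # {'alice', 'bob'}
```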

Lorrie Cranor gave an invited talk at the workshop on Building a Better Privacy Policy and made the point that even P3P privacy policies are difficult for people to comprehend.

Having a simple ontology for social media data could help us move forward toward better privacy controls for online social media systems. I like Schneier’s broad categories and wonder what a more complete treatment defined using Semantic Web languages might be like.
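As a starting point, here is what the skeleton of such a treatment might look like, sketched with Python’s rdflib (the namespace is invented; a fuller ontology would add properties linking each category to its subject, creator and controller):

```python
# Schneier's taxonomy as a minimal OWL class hierarchy via rdflib.
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import OWL

SMD = Namespace("http://example.org/social-media-data#")  # invented
g = Graph()
g.bind("smd", SMD)

g.add((SMD.SocialNetworkingData, RDF.type, OWL.Class))
for name, comment in [
    ("ServiceData",    "Data given to the site in order to use it."),
    ("DisclosedData",  "What you post on your own pages."),
    ("EntrustedData",  "What you post on other people's pages."),
    ("IncidentalData", "What other people post about you."),
    ("BehavioralData", "What the site records about your habits."),
    ("DerivedData",    "Data about you inferred from all the rest."),
]:
    cls = SMD[name]
    g.add((cls, RDF.type, OWL.Class))
    g.add((cls, RDFS.subClassOf, SMD.SocialNetworkingData))
    g.add((cls, RDFS.comment, Literal(comment)))

print(g.serialize(format="turtle"))
```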

FaceBook default privacy policies changing

Wednesday, July 1st, 2009

FaceBook is changing how it manages privacy starting today. After reading last week’s post on the FaceBook blog, More Ways to Share in the Publisher, and a followup note on ReadWriteWeb, A Closer Look at Facebook’s New Privacy Options, I thought I understood: Facebook was sharing more but only for people who have made their profiles public. From the official FaceBook post:

“We’ve received some questions in the comments about default privacy settings for this beta. Nothing has changed with your default privacy settings. The beta is only open to people who already chose to set their profile and status privacy to “Everyone.” For those people, the default for sharing from the Publisher will be the same. If you have your default privacy set to anything else—such as “Friends and Networks” or “Friends Only”—you are not part of this beta.”

But today the New York Times has an article, The Day Facebook Changed: Messages to Become Public by Default that clearly says more is coming (emphasis added):

“By default, all your messages on Facebook will soon be naked visible to the world. The company is starting by rolling out the feature to people who had already set their profiles as public, but it will come to everyone soon. You’ll be able each time you publish a message to change that message’s privacy setting and from that drop down there’s a link to change your default setting.

But most people will not change the setting. Facebook messages are about to be publicly visible. A whole lot of people are going to hate it. When ex-lovers, bosses, moms, stalkers, cops, creeps and others find out what people have been posting on Facebook – the reprimand that “well, you could have changed your default setting” is not going to sit well with people.”

But it will come to everyone soon! That’s a big change if true. I hope that there is some clarification soon from FaceBook. I, for one, am left confused.

In fact, as the ReadWriteWeb post notes, the FaceBook privacy policy interface is confusing and not easy to use.

“Unfortunately, it’s very difficult to manage the new privacy settings as they are currently constituted. Several members of our staff struggled to make changes to message-specific and default privacy settings really stick. The feature is confusing if not outright broken. A lot of messages intended for limited distribution are going to be sent out wider than the author intended. That’s not good.”

This is an important thing to get right.

Murat Kantarcioglu on Facebook Privacy Issues

Monday, June 22nd, 2009

KDAF-TV in Dallas/Fort Worth did a story on privacy and social media featuring an interview with Murat Kantarcioglu.

“Online Social Networks are redefining privacy and personal security, but how much of your personal life have you already given up? A professor at UT Dallas says chances are you’ve given up more than you know.”

Semantic Web and Policy

Tuesday, January 13th, 2009

Elsevier has made the January 2009 Journal of Web Semantics special issue on the Semantic Web and Policy our new sample issue, which means that its papers are freely available online until a new sample issue is selected. The special issue editors, Lalana Kagal, Tim Berners-Lee and James Hendler, wrote in the introduction:

“As Semantic Web technologies mature and become more accepted by researchers and developers alike, the widespread growth of the Semantic Web seems inevitable. However, this growth is currently hampered by the lack of well-defined security protocols and specifications. Though the Web does include fairly robust security mechanisms, they do not translate appropriately to the Semantic Web as they do not support autonomous machine access to data and resources and usually require some kind of human input. Also, the ease of retrieval and aggregation of distributed information made possible by the Semantic Web raises privacy questions as it is not always possible to prevent misuse of sensitive information. In order to realize its full potential as a powerful distributed model for publishing, utilizing, and extending information, it is important to develop security and privacy mechanisms for the Semantic Web. Policy frameworks built around machine-understandable policy languages, with their promise of flexibility, expressivity and automatable enforcement, appear to be the obvious choice.

It is clear that these two technologies – Semantic Web and Policy – complement each other and together will give rise to security infrastructures that provide more flexible management, are able to accommodate heterogeneous information, have improved communication, and are able to dynamically adapt to variations in the environment. These infrastructures could be used for a wide spectrum of applications ranging from network management, quality of information, to security, privacy and trust. This special issue of the Journal of Web Semantics is focused on the impact of Semantic Web technologies on policy management, and the specification, analysis and application of these Semantic Web-based policy frameworks.”
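To make the promise of “automatable enforcement” concrete, here is a toy of the idea, with an invented vocabulary: the policy itself is machine-readable data, so an autonomous agent can decide access without human input.

```python
# Toy machine-readable policy: facts and permissions are RDF triples,
# and access decisions are computed from them (invented vocabulary).
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/policy#")
g = Graph()
g.add((EX.crawler42, RDF.type, EX.TrustedAgent))   # crawler42 is trusted
g.add((EX.TrustedAgent, EX.mayRead, EX.calendar))  # trusted agents may read

def may_read(agent, resource):
    # Permit iff some class the agent belongs to is granted access.
    return any(
        (cls, EX.mayRead, resource) in g
        for cls in g.objects(agent, RDF.type)
    )

print(may_read(EX.crawler42, EX.calendar))   # True
print(may_read(EX.stranger7, EX.calendar))   # False
```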

In addition to the editors’ Introduction, the special issue includes five papers: