Archive for the ‘Uncategorized’ Category

How Rapleaf is eroding our privacy on the Web

Monday, October 25th, 2010

RapLeaf knows what you did last summer.

The Wall Street Journal continues its exploration of how our privacy is eroding on the Web in a new article by Emily Steel — A Web Pioneer Profiles Users by Name. The article profiles the San Francisco startup RapLeaf, which defines its vision as follows.

“We want every person to have a meaningful, personalized experience – whether online or offline. We want you see the right content at the right time, every time. We want you to get better, more personalized service. To achieve this, we help Fortune 2000 companies gain insight into their customers, engage them more meaningfully, and deliver the right message at the right time. We also help consumers understand their online footprint.”

RapLeaf ties email addresses to profiles containing information about people and uses those profiles to target advertisements for its clients. The article shows the information collected for one person, Linda Twombly of Nashua, NH, and what some of the coded information means.

RapLeaf does allow you to see the information it has collected about you, but you have to create a RapLeaf account to do so. You might be surprised by how well it knows you. Visit this page to see if your browser has RapLeaf cookies; you can also use it to opt your email addresses out of the RapLeaf system.

To be fair, RapLeaf and other companies are not doing anything illegal and mainly collect information that people choose to make public on the Web. However, their use of cookies does allow them to aggregate and integrate information about individuals and to associate that information with email addresses, Facebook UIDs and dozens of other identifiers. The information can be used to help Web-based systems serve you better — but their idea of serving you better is likely to involve peppering you with targeted ads.

How RapLeaf collects information about Web users

Anatomy of a cyber crime

Saturday, July 4th, 2009

Brian Krebs’ most recent Security Fix post in the Washington Post, PC Invader Costs Ky. County $415,000, goes into some detail on how cyber criminals stole $415K from a county in Kentucky.

“Cyber criminals based in Ukraine stole $415,000 from the coffers of Bullitt County, Kentucky this week. The crooks were aided by more than two dozen co-conspirators in the United States, as well as a strain of malicious software capable of defeating online security measures put in place by many banks.”

The particulars, both technical and social, were fascinating and helped me to better understand how these things happen.

Gates puts NSA in charge of USCYBERCOM

Wednesday, June 24th, 2009

The NYT reports in New Military Command for Cyberspace that the DoD has put the NSA in charge of a unified U.S. Cyber Command to oversee the protection of military networks against cyber threats.

“Defense Secretary Robert M. Gates on Tuesday ordered the creation of the military’s first headquarters designed to coordinate Pentagon efforts in the emerging battlefield of cyberspace and computer-network security, officials said. Pentagon officials said Mr. Gates intends to nominate Lt. Gen. Keith Alexander, currently director of the National Security Agency, for a fourth star and to take on the top job at the new organization, to be called Cybercom. The new command’s mission will be to coordinate the day-to-day operation — and protection — of military and Pentagon computer networks.”

USCYBERCOM is a subordinate unified command under the US Strategic Command.

Misunderstood Wall Between Intelligence and Law Enforcement

Saturday, June 20th, 2009

A 2004 report prepared for the 9/11 Commission examined the oft-cited problem that various information sharing policies resulted in a “wall” between intelligence and law enforcement agencies. That 3-page report was recently declassified.

“Legal Barriers to Information Sharing: The Erection of a Wall Between Intelligence and Law Enforcement Investigations”, Commission on Terrorist Attacks Upon the United States, Staff Monograph, Barbara A. Grewe, Senior Counsel for Special Projects, 20 August 2004.

One of the study’s conclusions is that there was no legal reason why intelligence information could not have been shared before 9/11, but that many people believed such legal restrictions existed.

“The information sharing failures in the summer of 2001 were not the result of legal barriers but of the failure of individuals to understand that the barriers did not apply to the facts at hand. Simply put, there was no legal reason why the information could not have been shared.”

So, if the applicable laws and official policies were not the problem, changing them doesn’t directly address it. To me, this report points out an opportunity for those of us working on computer-based policy systems.

We can and should create prototype systems that can help humans by (1) automatically rendering opinions about what existing policies allow and prohibit; (2) providing good explanations for those opinions; (3) letting people examine the reasoning and data provenance underlying the opinions and explore related situations through alternate assumptions and counterfactuals; and (4) collecting feedback from people for analysis and to drive the evolution of the policy systems.

This is a tall order, but there is a lot of prior work on explanation in expert systems that we can draw on.
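To make the four capabilities concrete, here is a minimal sketch of what such a policy-opinion service might look like. Everything in it — the Rule and Opinion structures, the toy "fisa-wall" rule, and the default-permit stance — is invented for illustration; it is not any deployed policy engine.

```python
# Hypothetical sketch of a policy-opinion service along the lines of points
# (1)-(3) above. The rule names, data structures, and sample policy are all
# invented for illustration.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str             # identifier used in explanations
    source: str           # provenance: where the rule comes from
    condition: Callable   # facts -> bool
    effect: str           # "permit" or "deny"

@dataclass
class Opinion:
    decision: str
    explanation: list     # (2) human-readable reasons
    provenance: list      # (3) sources behind the decision

def render_opinion(rules, facts):
    """(1) Automatically render an opinion about what the policies allow."""
    explanation, provenance = [], []
    decision = "permit"   # default-permit, for this toy example only
    for rule in rules:
        if rule.condition(facts):
            explanation.append(f"{rule.name}: applies, effect={rule.effect}")
            provenance.append(rule.source)
            if rule.effect == "deny":
                decision = "deny"
        else:
            explanation.append(f"{rule.name}: does not apply")
    return Opinion(decision, explanation, provenance)

def what_if(rules, facts, **changes):
    """(3) Explore counterfactuals by re-running under altered assumptions."""
    return render_opinion(rules, {**facts, **changes})

# A toy information-sharing rule: sharing is barred only for FISA-derived
# material sent to criminal prosecutors (a made-up reading, for the demo).
rules = [
    Rule("fisa-wall", "50 U.S.C. 1806 (toy reading)",
         lambda f: f["origin"] == "fisa" and f["recipient"] == "prosecutor",
         "deny"),
]

facts = {"origin": "open-source", "recipient": "prosecutor"}
print(render_opinion(rules, facts).decision)          # permit
print(what_if(rules, facts, origin="fisa").decision)  # deny
```

The point of the counterfactual helper is exactly the 9/11 lesson above: an analyst who believed the wall applied could have asked the system and seen, with an explanation, that it did not.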

(spotted on FAS Secrecy News)

Researchers ask Google to make HTTPS the default

Thursday, June 18th, 2009

A group of 37 researchers in information security and privacy law have sent an open letter to Google encouraging the company to enable HTTPS as the default protocol for Google Mail, Docs, and Calendar. Doing so, the letter says, will protect Google customers’ communications from theft and snooping.

“Google uses industry-standard Hypertext Transfer Protocol Secure (HTTPS) encryption technology to protect customers’ login information. However, encryption is not enabled by default to protect other information transmitted by users of Google Mail, Docs or Calendar. As a result, Google customers who compose email, documents, spreadsheets, presentations and calendar plans from a public connection (such as open wireless networks in coffee shops, libraries, and schools) face a very real risk of data theft and snooping, even by unsophisticated attackers. Tools to steal information are widely available on the Internet.

Google supports HTTPS encryption for the entire Gmail, Docs or Calendar session. However, this is disabled by default, and the configuration option controlling this security mechanism is not easy to discover. Few users know the risks they face when logging into Google’s Web applications from an unsecured network, and Google’s existing efforts are little help.

Support for HTTPS is built into every Web browser and is widely used in the finance and health industries to protect consumers’ sensitive information. Google even uses HTTPS encryption, enabled by default, to protect customers using Google Voice, Health, AdSense and Adwords.

Rather than forcing users of Gmail, Docs and Calendar to “opt-in” to adequate security, Google should make security and privacy the default.”

See the post on Wired’s Threat Level blog for more information and a quote from Google in response.

“Google responded Tuesday morning, saying that it is already ahead of the pack by even offering HTTPS, and that the company is looking into whether it would make sense to turn it on as the default for all Gmail users.

“Free, always-on HTTPS is pretty unusual in the e-mail business, particularly for a free e-mail service,” Google engineer Alma Whitten wrote Tuesday morning on Google’s security blog. “It’s something we’d like to see all major webmail services provide.”

The company is planning a trial in which small samples of different types of Gmail users will be shifted to a default HTTPS to see how fast things load, how happy users are, and what networks or computer setups fare badly, according to Whitten.

“Unless there are negative effects on the user experience or it’s otherwise impractical, we intend to turn on HTTPS by default more broadly, hopefully for all Gmail users,” Whitten wrote, noting that the extra cost associated with the computing power needed for encryption was not holding the company back.”

Privacy and the law

Sunday, May 3rd, 2009

The ABA Journal news blog has a post, Fordham Law Class Collects Personal Info About Scalia; Supreme Ct. Justice Is Steamed, on privacy and the law — or at least one very famous lawyer: U.S. Supreme Court Justice Antonin Scalia. Joel Reidenberg teaches a course on information privacy law at Fordham University and illustrates the scale of the problem empirically.

“Last year, when law professor Joel Reidenberg wanted to show his Fordham University class how readily private information is available on the Internet, he assigned a group project. It was collecting personal information from the Web about himself. This year, after U.S. Supreme Court Justice Antonin Scalia made public comments that seemingly may have questioned the need for more protection of private information, Reidenberg assigned the same project. Except this time Scalia was the subject, the prof explains to the ABA Journal in a telephone interview.

His class turned in a 15-page dossier that included not only Scalia’s home address, home phone number and home value, but his food and movie preferences, his wife’s personal e-mail address and photos of his grandchildren, reports Above the Law.

And, as Scalia himself made clear in a statement to Above the Law, he isn’t happy about the invasion of his privacy: “Professor Reidenberg’s exercise is an example of perfectly legal, abominably poor judgment. Since he was not teaching a course in judgment, I presume he felt no responsibility to display any,” the justice says, among other comments.

Scantegrity cryptographic voting system to be used in binding governmental election

Thursday, April 2nd, 2009

This November will be the first time any end-to-end cryptographic system will be used in a binding governmental election.

UMBC Professor Alan Sherman and his students have been helping develop the Scantegrity open source election verification technology for optical scan voting systems. It uses privacy preserving confirmation numbers to allow each voter to verify her vote is counted and that all the votes were counted correctly.
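The following toy sketch shows why a confirmation number can prove inclusion in the tally without revealing the vote. To be clear, this is not the actual Scantegrity II protocol — the real system uses cryptographic commitments and invisible-ink codes, and the function names here are invented — it only illustrates the voter-facing check described above.

```python
# Toy illustration of confirmation-number checking. NOT the real
# Scantegrity II protocol (which uses cryptographic commitments and
# invisible-ink codes); it only shows why a random code can prove a
# vote was counted without revealing what the vote was.

import secrets

def issue_ballot(choice, records):
    """Polling place: record the vote and hand the voter a random code."""
    code = secrets.token_hex(3).upper()  # e.g. 'A3F09C'; unlinked to choice
    records[code] = choice               # kept private by election officials
    return code

def published_codes(records):
    """Officials publish only the set of codes included in the tally."""
    return set(records)

def voter_check(code, public_codes):
    """At home: presence of the code proves inclusion; the vote stays secret."""
    return code in public_codes

records = {}
my_code = issue_ballot("candidate-2", records)
print(voter_check(my_code, published_codes(records)))  # True
```

The published set contains only codes, so an observer learns that your ballot was counted but nothing about how you voted — the property the newsletter article below emphasizes.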

The group has been working with Takoma Park MD to use this in a binding governmental election later this year. Alan recently wrote:

“On Saturday April 11, there will be a mock election in Takoma Park, MD, using the Scantegrity II high-integrity voting system being developed in part at the UMBC Cyber Defense Lab. Anyone is welcome to come and vote – polls will be open 10am-2pm in the Community Center at 7500 Maple Ave. This mock election is preparation for the Nov 2009 municipal election in Takoma Park which will also use Scantegrity – the first time any end-to-end cryptographic system will have been used in a binding governmental election.”

Here’s the text of a short article on the election from the April 2009 Takoma Park newsletter.

This Arbor Day: Plant the Seeds for Election Verifiability

Election integrity is a major issue both nationally and internationally. During the City’s annual Arbor Day celebration, Takoma Park will try out what may be one solution. From 10 a.m. until 2 p.m. on April 11, City residents and their families and friends are invited to participate in a mock election administered by the City and its Board of Elections. The point of this mock election is to give voters an opportunity to test out and provide feedback to the City on the voting system it will use in the November 2009 municipal elections.

First among the many characteristics that set this system apart from those previously used by the City is that voters will be able to confirm that their ballots were counted.

As part of their ballot, voters will receive a confirmation code that they can write down, take home and check online to make sure their votes were counted. The confirmation number does not say how you voted and your vote remains private. What it does say, however, is that your vote is included in the final tally and that the machine read your vote correctly.

The system is paper-based and works like an optical scan voting system, making it easy to use. The only difference is that when you vote, instead of a completely black bubble, you will see the confirmation number appear as shown in the illustration above.

Writing down and checking the confirmation number is optional. So, this Arbor Day, while enjoying the festivities, drop by the Community Center Azalea Room to see how the system works. Try it out, ask questions, give feedback, and enjoy the refreshments!

To obtain more information on the Arbor Day Mock Election, visit the City’s website at www.takomaparkmd.gov. Questions may also be addressed to the City Clerk’s office at 301-891-7267 or

Persistent Identifiers for Scientific Data Provenance

Monday, February 23rd, 2009

In this week’s ebiquity meeting (10:00am EDT Wed 2/25, ITE 325), Curt Tilmes will talk on “Persistent Identifiers for Earth Science Provenance”.

Historically, published scientific research could include a description of an experiment that an independent party could use to reproduce it with the same results, confirming the research. Modern research in earth science often depends on terabytes of data captured from remote sensing instruments and on complex computer algorithms that undergo numerous changes over the years. A single result can reflect the work of hundreds of individuals over decades. Representing the measurements, algorithms and all the other artifacts of experimentation leading to that result becomes a daunting problem, and a key to handling it is a good scheme for persistent identifiers.

Persistent identifiers seem like a simple problem: just make a good URL and don’t change it [1]. This sounds good in theory but is difficult to maintain forever. Many other schemes have been proposed to attack various aspects of the identification problem, with various advantages and disadvantages. I will introduce this topic and briefly describe some of the concerns with using identifiers in the context described above, along with some of the characteristics of various identifier schemes.

The presentation will be streamed live via

References and some identifier schemes

[1] Cool URIs Don’t Change
[2] Naming and Addressing: URIs, URLs, …
[3] Object Identifier (OID)
[4] The Digital Object Identifier (DOI) System
[5] Persistent Uniform Resource Locator
[6] A Universally Unique IDentifier (UUID) URN Namespace
[7] XRI (Extensible Resource Identifier)
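Of the schemes listed above, UUID URNs ([6]) are the simplest to demonstrate, since minting one requires no registry. The sketch below (Python standard library only) mints one; note what it does not give you — resolution back to the data, which is exactly what schemes like DOIs and PURLs layer a resolver on top to provide.

```python
# Minting a UUID-based URN (scheme [6] above). A UUID gives global
# uniqueness with no central registry, but no resolution: nothing maps
# the URN back to the dataset it names. Resolver-backed schemes such as
# DOIs [4] and PURLs [5] exist to fill that gap.

import uuid

def mint_urn() -> str:
    """Return a globally unique, location-independent identifier."""
    return uuid.uuid4().urn  # e.g. 'urn:uuid:6fa459ea-ee8a-...'

ident = mint_urn()
print(ident)
```

A data product could carry such a URN in its metadata forever, independent of where the files happen to live.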

Ravi Sandhu elected AAAS fellow

Friday, January 2nd, 2009

Ravi Sandhu (University of Texas at San Antonio) was elected a Fellow of the American Association for the Advancement of Science. Sandhu, professor and Lutcher Brown Endowed Chair in Cyber Security, serves as the executive director of UTSA’s Institute for Cyber Security. Ravi’s election cites him for “distinguished contributions to cyber security, including seminal role-based access control and usage control models, and for professional leadership in research journals and conferences.” The American Association for the Advancement of Science is the world’s largest general scientific society and the publisher of the journal Science.

How the Srizbi botnet escaped destruction to spam again

Sunday, November 30th, 2008

Just like Freddy Krueger, botnets are hard to kill.

In a series of posts on his Security Fix blog, Brian Krebs provides a good explanation of how the Srizbi botnet was able to come back to life after being killed (we thought!) earlier this month.

“The botnet Srizbi was knocked offline Nov. 11 along with Web-hosting firm McColo, which Internet security experts say hosted machines that controlled the flow of 75 percent of the world’s spam. One security firm, FireEye, thought it had found a way to prevent the botnet from coming back online by registering domain names it thought Srizbi was likely to target. But when that approach became too costly for the firm, they had to abandon their efforts.”

In an example of good distributed-programming design, the botnet had a backup plan in case its control servers were taken down.

“The malware contained a mathematical algorithm that generates a random but unique Web site domain name that the bots would be instructed to check for new instructions and software updates from its authors. Shortly after McColo was taken offline, researchers at FireEye said they deciphered the instructions that told computers infected with Srizbi which domains to seek out. FireEye researchers thought this presented a unique opportunity: If they could figure out what those rescue domains would be going forward, anyone could register or otherwise set aside those domains to prevent the Srizbi authors from regaining control over their massive herd of infected machines.”
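A generic sketch of such a domain-generation algorithm is below. The hash construction, constants, and label scheme are invented; this is not the actual Srizbi algorithm that FireEye reverse-engineered. The point is only that the bots and their operators can derive the same rendezvous domains independently — and so can any defender who recovers the algorithm.

```python
# Generic domain-generation sketch, NOT the real Srizbi algorithm.
# Bot and botmaster both run the same deterministic function over the
# date, so they agree on rendezvous domains without any communication.

import hashlib
from datetime import date

def rescue_domains(day: date, count: int = 4):
    """Derive the candidate rendezvous domains for a given day."""
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.md5(seed).hexdigest()
        # map hex digits to lowercase letters to form a plausible label
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:10])
        domains.append(label + ".com")
    return domains

# A defender who knows the algorithm can enumerate tomorrow's domains
# and register them first -- exactly the race FireEye was running.
print(rescue_domains(date(2008, 11, 25)))
```

The defense FireEye attempted follows directly: run the same function for future dates and register the output before the botmasters do, which is why the effort's cost grew with every passing day.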

Unfortunately, FireEye did not have the resources to carry out its plan and was forced to abandon it, but not before seeking help from other companies and organizations with deeper pockets.

“A week ago, FireEye researcher Lanstein said they were looking for someone else to register the domain names that the Srizbi bots might try to contact to revive themselves. He said they approached other companies such as VeriSign Inc. and Microsoft Corp. After FireEye abandoned its efforts, some other members of the computer security community said they reached out for help from the United States Computer Emergency Readiness Team, or US-CERT, a partnership between the Department of Homeland Security and the private sector to combat cybersecurity threats.”

File this one under opportunity, lost.