Archive for July, 2010

Privacy vulnerability in Apple Safari

Thursday, July 22nd, 2010

Apple’s Safari browser has a privacy vulnerability that lets web sites you visit extract your personal information (e.g., name, address, phone number) from your computer’s address book. The fix is to turn off Safari’s web form autofill feature, which is enabled by default (Preferences > AutoFill > AutoFill web forms).



It’s an interesting JavaScript exploit that does not seem to affect other browsers.

JWS special issue on Provenance and Semantic Web

Monday, July 19th, 2010

Journal of Web Semantics Special Issue on
Using Provenance in the Semantic Web

Editors: Yolanda Gil, University of Southern California’s Information Sciences Institute and Paul Groth, Free University of Amsterdam

The Web is a decentralized system full of information provided by diverse open sources of varying quality. For any given question there will be a multitude of answers offered, raising the need for assessing their relative value and for making decisions about which sources to trust. In order to make effective use of the Web, we routinely evaluate the information we get, the sources that provided it, and the processes that produced it. A trust layer was always present in the Web architecture, and Berners-Lee envisioned an “oh-yeah?” button in the browser to check the sources of an assertion. The Semantic Web raises these questions in the context of automated applications (e.g., reasoners, aggregators, or agents), whether trying to answer questions using the Linked Data cloud, using a mashup appropriately, or determining trust on a social network. Provenance is therefore an important aspect of the Web that becomes crucial in Semantic Web research.

This special issue on Using Provenance in the Semantic Web of the Journal of Web Semantics aims to collect representative research in handling provenance while using and reasoning about information and resources on the web. Provenance has been addressed in a variety of areas in computer science targeting specific contexts, such as databases and scientific workflows. Provenance is important in a variety of contexts, including open science, open government, and intellectual property and copyright. Provenance requirements must be understood for specific kinds of Web resources, such as documents, services, ontologies, workflows, and datasets.

We seek high quality submissions that describe recent projects, articulate research challenges, or put forward synergistic perspectives on provenance. We solicit submissions that advance the Semantic Web through exploiting provenance, addressing research issues including:

  • representing provenance
  • relating provenance to the underlying data and information
  • managing provenance in a distributed web
  • reasoning about trust based on provenance
  • handling incomplete provenance
  • taking advantage of the web’s structure for provenance

Submissions may focus on uses of provenance in the Semantic Web for:

  • linked data
  • social networking
  • data integration
  • inference from diverse sources
  • trust and proof

Papers may also focus on application areas, highlighting the challenges and benefits of using provenance:

  • provenance in open science
  • provenance in open government
  • provenance in copyright and intellectual property for documents
  • provenance in web publishing

Important Dates

We aim for an efficient publication cycle in order to guarantee prompt availability of the published results. We will review papers on a rolling basis as they are submitted, and we explicitly encourage submissions well before the submission deadline. Submit papers online at the journal’s Elsevier Web site.

  • Submission deadline: 5 September 2010
  • Author notification: 15 December 2010
  • Revisions submitted: 1 February 2011
  • Final decisions: 15 March 2011
  • Publication: 1 April 2011

Submission guidelines

The Journal of Web Semantics solicits original scientific contributions of high quality. Following the overall mission of the journal, we emphasize the publication of papers that combine theories, methods and experiments from different subject areas in order to deliver innovative semantic methods and applications. The publication of large-scale experiments and their analysis is also encouraged to clearly illustrate scenarios and methods that introduce semantics into existing Web interfaces, contents and services. Submission of your manuscript is welcome provided that it, or any translation of it, has not been copyrighted or published and is not being submitted for publication elsewhere. Upon acceptance of an article, the author(s) will be asked to transfer copyright of the article to the publisher; this transfer will ensure the widest possible dissemination of information. Manuscripts should be prepared for publication in accordance with the instructions given in the “Guide for Authors” (available from the publisher); details can be found online. The submission and review process will be carried out using Elsevier’s Web-based EES system. Final decisions on accepted papers will be approved by an editor-in-chief.

About the Journal of Web Semantics

The Journal of Web Semantics has been published by Elsevier since 2003. It is an interdisciplinary journal covering research and applications in the subject areas that contribute to the development of a knowledge-intensive and intelligent service Web. These areas include knowledge technologies, ontologies, agents, databases and the semantic grid; disciplines such as information retrieval, language technology, human-computer interaction and knowledge discovery are of major relevance as well. All aspects of Semantic Web development are covered. The current Editors-in-Chief are Tim Finin, Riichiro Mizoguchi and Steffen Staab. For full editorial information, see our site.

The Journal of Web Semantics offers to its authors and readers:

  • Professional publishing support from Elsevier staff
  • Indexing by Thomson Reuters’ Web of Science
  • An impact factor of 3.41, the third highest of the 92 titles in Thomson Reuters’ category “Computer Science, Information Systems”

Creating more secure cloud computing environments

Saturday, July 10th, 2010


The Air Force recently highlighted some of our AISL MURI research done at the University of Texas at Dallas on developing solutions for maintaining privacy in cloud computing environments.

The work is part of a three year project funded by the Air Force Office of Scientific Research aimed at understanding the fundamentals of information sharing and developing new approaches to making it easier to do so securely.

Dr. Bhavani Thuraisingham has put together a team of researchers from the UTD School of Management and its School of Economics, Policy and Political Sciences to investigate information sharing with consideration to confidentiality and privacy in cloud computing.

“We truly need an interdisciplinary approach for this,” she said. “For example, proper economic incentives need to be combined with secure tools to enable assured information sharing.”

Thuraisingham noted that cloud computing is increasingly being used to process large amounts of information. Because of this, existing technologies are being adapted for that environment and extended to ensure the security of such systems.

To achieve their goals, the researchers are inserting new security code directly into software programs to monitor for and prevent intrusions. They add further protection by encrypting sensitive data so that it cannot be recovered in its original form without the encryption keys. They are also using the Chinese Wall model, a set of policies that grants or denies access to information based on what data a user has previously viewed.
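The Chinese Wall model mentioned above can be sketched as a simple policy check: once a user has read data belonging to one party in a conflict-of-interest class, access to a competing party's data in the same class is denied. A minimal illustration (the class and dataset names are hypothetical, not from the UTD project):

```python
# Minimal sketch of a Chinese Wall access policy. Access to a dataset is
# allowed unless the user has already read data from a *different* owner
# in the same conflict-of-interest class.

class ChineseWall:
    def __init__(self, conflict_classes):
        # conflict_classes: {class_name: {dataset_name: owner}}
        self.conflict_classes = conflict_classes
        self.history = {}  # user -> set of (class_name, owner) already read

    def can_read(self, user, dataset):
        for cls, datasets in self.conflict_classes.items():
            if dataset in datasets:
                owner = datasets[dataset]
                for c, o in self.history.get(user, set()):
                    if c == cls and o != owner:
                        return False  # conflicting owner in the same class
        return True

    def read(self, user, dataset):
        if not self.can_read(user, dataset):
            raise PermissionError(f"{user} is blocked from {dataset}")
        for cls, datasets in self.conflict_classes.items():
            if dataset in datasets:
                self.history.setdefault(user, set()).add((cls, datasets[dataset]))

# Hypothetical example: two competing banks in one conflict class.
wall = ChineseWall({"banks": {"acct_a": "BankA", "acct_b": "BankB"}})
wall.read("alice", "acct_a")             # allowed: no prior conflicting access
print(wall.can_read("alice", "acct_b"))  # False: BankB conflicts with BankA
```

Note that the policy is history-based rather than static: which datasets a user may see depends on what they have already read, which is what distinguishes Chinese Wall from ordinary access-control lists.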

The scientists are using prototype systems that can store semantic web data in an encrypted form and query it securely using a web service that provides reliable capacity in the cloud. They have also introduced secure software and hardware attached to a database system that performs security functions.

Assured information sharing in cloud computing is daunting, but Thuraisingham and her team are creating both a framework and incentives that will be beneficial to the Air Force, other branches of the military and the private sector.

The next step for Thuraisingham and her fellow researchers is examining how their framework operates in practice.

“We plan to run some experiments using online social network applications to see how various security and incentive measures affect information sharing,” she said.

Thuraisingham is especially glad that AFOSR had the vision to fund such an initiative that is now becoming international in its scope.

“We are now organizing a collaborative, international dimension to this project by involving researchers from King’s College, University of London, the University of Insubria in Italy and UTD related to secure query processing strategies,” said AFOSR program manager Dr. Robert Herklotz.

USCYBERCOM secret revealed

Thursday, July 8th, 2010
USCYBERCOM logo.

The secret message embedded in the USCYBERCOM logo

     9ec4c12949a4f31474f299058ce2b22a

is what the md5sum function returns when applied to the string that is USCYBERCOM’s official mission statement. Here’s a demonstration of this fact done on a Mac. On Linux, use the md5sum command instead of md5.

~> echo -n "USCYBERCOM plans, coordinates, integrates, \
synchronizes and conducts activities to: direct the \
operations and defense of specified Department of \
Defense information networks and; prepare to, and when \
directed, conduct full spectrum military cyberspace \
operations in order to enable actions in all domains, \
ensure US/Allied freedom of action in cyberspace and \
deny the same to our adversaries." | md5
9ec4c12949a4f31474f299058ce2b22a
~>

md5sum is a standard Unix command that computes a 128-bit “fingerprint” of a string of any length. It is a hash function with the property that it’s very unlikely that any two non-identical strings encountered by accident will have the same md5sum value. (Deliberate MD5 collisions can now be constructed, so it is no longer recommended for security-critical uses, but it serves fine as a fingerprint here.) Such functions have many uses in cryptography.
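The same fingerprint can be computed programmatically; here is a short Python sketch using the standard hashlib module, with a generic input string rather than the full mission statement:

```python
import hashlib

# MD5 maps a string of any length to a 128-bit (16-byte) digest,
# conventionally printed as 32 hexadecimal characters.
digest = hashlib.md5(b"hello world").hexdigest()
print(digest)       # 5eb63bbbe01eeed093cb22bb8f5acdc3
print(len(digest))  # 32 hex characters = 128 bits

# Changing even one character of the input yields an unrelated digest.
print(hashlib.md5(b"hello world!").hexdigest())
```

Feeding in the exact mission-statement string from the transcript above would reproduce the 9ec4c129… value in the logo.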

Thanks to Ian Soboroff for spotting the answer on Slashdot and forwarding it.

Someone familiar with md5 would recognize that the secret string has the length and character mix of an md5 value: 32 hexadecimal characters. Each of the possible hex characters (0123456789abcdef) represents four bits, so 32 of them encode 128 bits.
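The four-bits-per-character arithmetic is easy to check in a couple of lines of Python:

```python
# Each hexadecimal character encodes exactly four bits.
for ch in "09af":
    print(ch, "->", format(int(ch, 16), "04b"))
# 0 -> 0000, 9 -> 1001, a -> 1010, f -> 1111

# So the 32-character code in the logo carries 32 * 4 = 128 bits.
code = "9ec4c12949a4f31474f299058ce2b22a"
print(len(code) * 4)  # 128
```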

We’ll leave it as an exercise for the reader to compute the 128-bit sequence that our secret code corresponds to.

Cyber Command embeds encrypted message in USCYBERCOM logo

Wednesday, July 7th, 2010
USCYBERCOM logo.

Cyber Command (USCYBERCOM) is the new unit in the US Department of Defense that is responsible for the “defense of specified Department of Defense information networks” and, when needed, to “conduct full-spectrum military cyberspace operations in order to enable actions in all domains, ensure freedom of action in cyberspace for the U.S. and its allies, and deny the same to adversaries.”

Their logo has an encrypted message in its inner gold ring:

          9ec4c12949a4f31474f299058ce2b22a

An article in Wired quotes a USCYBERCOM source:

“It is not just random numbers and does ‘decode’ to something specific,” a Cyber Command source tells Danger Room. “I believe it is specifically detailed in the official heraldry for the unit symbol.”

“While there a few different proposals during the design phase, in the end the choice was obvious and something necessary for every military unit,” the source adds. “The mission.”

Here’s your chance to use those skills you learned in CMSC 443. Wired is offering a T-shirt to the first person who can crack the code. With that hint in hand, go crack this code open. E-mail us your best guess, or leave it in the comments below. Our Cyber Command source will confirm the right answer. And the first person to get it gets his/her choice of a Danger Room T-shirt. USCYBERCOM might offer you a job.

ICWSM best paper award for work on study of online social dynamics

Thursday, July 1st, 2010

A paper by AISL co-PI Lada Adamic and her students received a best paper award from the Fourth International Conference on Weblogs and Social Media. The paper studied how online social structures affected economic activity in Second Life, a massively multiplayer virtual world that allows its users to create and trade virtual objects and commodities.

The rise of online social environments like Second Life is important for information sharing for two reasons. First, they provide researchers with an opportunity to easily collect vast amounts of data about the behavior of real people. Such data is invaluable in developing and testing new models to better understand the factors that underlie information sharing behavior. Second, online social environments have become an important way that people interact to share information, so understanding how they work and how they can be better managed matters.

Dr. Adamic and her students estimated the strength of social ties in Second Life using the frequency of chatting between pairs of users. They found that free items are more likely to be exchanged as the strength of the tie increases and that social ties particularly play a significant role in paid transactions for sellers with a moderately sized customer base. They also developed a novel method of visualizing the transaction activities.

Eytan Bakshy, Matthew Simmons, David Huffaker, ChunYuen Teng, Lada Adamic, The Social Dynamics of Economic Activity in a Virtual World, Fourth International AAAI Conference on Weblogs and Social Media, May 2010.

This paper examines social structures underlying economic activity in Second Life (SL), a massively multiplayer virtual world that allows users to create and trade virtual objects and commodities. We find that users conduct many of their transactions both within their social networks and within groups. Using frequency of chat as a proxy of tie strength, we observe that free items are more likely to be exchanged as the strength of the tie increases. Social ties particularly play a significant role in paid transactions for sellers with a moderately sized customer base. We further find that sellers enjoying repeat business are likely to be selling to niche markets, because their customers tend to be contained in a smaller number of groups. But while social structure and interaction can help explain a seller’s revenues and repeat business, they provide little information for forecasting a seller’s future performance. Our quantitative analysis is complemented by a novel method of visualizing the transaction activity of a seller, including revenue, customer base growth, and repeat business.