Informatics professor receives $2.4 million to keep computer users safe

January 7, 2013

What to do with that dreaded pop-up warning, “Secure Connection Failed. The certificate is not trusted”? Continue, view the security certificate or, tempting fate, add an exception and press forward?

An Indiana University Bloomington professor in the School of Informatics and Computing is helping make such decisions easier. L. Jean Camp, whose research focuses on privacy and trust in technology, has been awarded more than $2.4 million by the U.S. Department of Homeland Security’s Cyber Security Division to give people the information they need to stop a range of attacks.

An IU team led by Camp will develop user-centered security software that reduces cyber-attacks by making sure people have the information they need to support a security decision when they need it. Instead of relying on annual training in how to spot a phishing email, the computer will ask, when you open an attachment, whether you realize it comes from outside the company. If you still want to open it, Camp says, “We will limit the document’s ability to change the machine. In contrast, people today are asked about every document, or asked something inscrutable like, ‘enable macros?'”
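The external-attachment check described above could be sketched roughly as follows. This is an illustrative assumption, not the CUTS software itself; the company domain, helper names, and the read-only "sandbox" stand-in are all hypothetical.

```python
# Hypothetical sketch of a context-aware attachment warning: warn only
# when the sender is external, and open external documents with reduced
# privileges instead of asking about every document.

COMPANY_DOMAIN = "example.com"  # assumed company domain for illustration

def sender_is_external(sender_address: str, company_domain: str = COMPANY_DOMAIN) -> bool:
    """Return True when the sender's mail domain is outside the company."""
    domain = sender_address.rsplit("@", 1)[-1].lower()
    return domain != company_domain and not domain.endswith("." + company_domain)

def open_attachment(sender: str, confirm) -> str:
    """Ask only for external senders; limit what external documents can do."""
    if not sender_is_external(sender):
        return "opened with normal privileges"
    if not confirm(f"This attachment is from outside the company ({sender}). Open anyway?"):
        return "not opened"
    # "Limit the document's ability to change the machine" is modeled here
    # as opening in a restricted, read-only mode.
    return "opened read-only in a restricted sandbox"
```

For example, `open_attachment("alice@example.com", confirm=lambda q: True)` opens silently with normal privileges, while an external sender triggers the question first.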

People often fail to see the dangers in simple actions such as downloading files, or they disable security functions in the belief that they slow the computer down. Computers also fail to surface pertinent facts at decision time: for instance, flagging that you are entering a bank password at a site you have never visited before.
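That never-visited-site signal could be sketched like this. The history set and the password-field detection are assumptions made for the example, not details of the project's implementation.

```python
# Illustrative sketch: raise a warning only when a password is being
# entered at a host absent from the user's browsing history.

def first_visit_warning(hostname: str, visited_hosts: set,
                        entering_password: bool):
    """Return a warning string for password entry at a never-visited host,
    or None when no warning is needed."""
    if entering_password and hostname not in visited_hosts:
        return (f"You are entering a password at {hostname}, "
                "a site you have never visited before. Is this really your bank?")
    return None
```

A look-alike phishing domain would trigger the warning, while the real bank site, already in the history, would not.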

With Camp’s technology, the computer will tell you what you need to know to spot a fraud, at the time you need to know it. The Department of Homeland Security project, called CUTS: Coordinating User and Technical Security, aims to address the human factors that are often overlooked when security systems fail, in contexts such as banking, Web browsing, or working from home. Joining Camp as a co-principal investigator on the research is Jim Blythe of the Information Sciences Institute at the University of Southern California.

By understanding the mental models people bring to computing, the researchers will address how problems are identified and how information about those problems is communicated. They can then put forth a prototype that knows when and how much to communicate to the user about the problem, automating responses that are intuitive and timely.

“Unfortunately today, user involvement appears to be required too often and usually in terms that non-technical users have difficulty understanding,” Camp said. “Security decision-making lacks effective decision support.”

People don’t want to be warned too often, and the biases that we have in place — amount of security knowledge, the cost of distraction in making a security decision versus the time-dependent value of the advice — have to be identified and weighed into the models.
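The trade-off described above, weighing the cost of distraction against the time-dependent value of the advice, can be caricatured as a tiny expected-value rule. All numbers and names here are illustrative assumptions, not the project's actual model.

```python
# Toy model of the warn-or-stay-silent trade-off: interrupt the user only
# when the expected harm avoided exceeds the cost of the distraction.

def should_warn(p_attack: float, harm: float, distraction_cost: float) -> bool:
    """Warn when risk-weighted harm outweighs the interruption's cost."""
    return p_attack * harm > distraction_cost

# A rare but costly attack justifies a warning (expected harm 10 > cost 5);
# a rare, low-stakes one does not (expected harm 1 < cost 5).
should_warn(0.001, 10_000, 5)
should_warn(0.001, 1_000, 5)
```

A real system would estimate these quantities per user and per decision, which is exactly why the biases the researchers describe must be identified and measured first.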

“We’ll be designing with simulations of human error in play,” Camp says. “When a system fails systematically because of human behavior, then it is the fault of the system, not the human.”
