Science of Security (SoS) Newsletter (2015 - Issue 6)
Each issue of the SoS Newsletter highlights achievements in current research, as conducted by various global members of the Science of Security (SoS) community. All presented materials are openly available and may link to the original work or web page for the respective program. The SoS Newsletter aims to showcase the great deal of exciting work going on in the security community, and hopes to serve as a portal between colleagues, research projects, and opportunities.
Please feel free to click on any section of the Newsletter below, which will bring you to its corresponding subsection:
Publications of Interest
The Publications of Interest section provides available abstracts and links for suggested academic and industry literature discussing specific topics and research problems in the field of SoS. Please check back regularly for new information, or sign up for the CPSVO-SoS Mailing List.
Table of Contents
Science of Security (SoS) Newsletter (2015 - Issue 6)
(ID#:15-5928)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
In The News |
This section features topical, current news items of interest to the international security community. These articles and highlights are selected from various popular science and security magazines, newspapers, and online sources.
US News
"Update: Chrysler recalls 1.4M vehicles after Jeep hack", Computerworld, 24 July 2015. [Online]. Following the demonstration of security weaknesses in the 2015 Jeep Cherokee by security experts Charlie Miller and Chris Valasek, Fiat Chrysler Automobiles has issued a recall for 1.4 million of their vehicles. The researchers demonstrated that they were able to, via cellular connection, gain access to the vehicle's entertainment system and from there move to more critical systems. (ID#: 15-50404) See http://www.computerworld.com/article/2952186/mobile-security/chrysler-recalls-14m-vehicles-after-jeep-hack.html
"Trojan Porn-Clicker Infests Android Apps for Hundreds of Thousands of Downloads", InfoSecurity Magazine, 24 July 2015. [Online]. The Google Play Store is under assault by various malicious Android apps, which deliver porn clicker Trojans. Masquerading as harmless video games, the trojans manage to evade Google's malware filter and are downloaded thousands of times before being removed. (ID#: 15-50401) See http://www.infosecurity-magazine.com/news/trojan-pornclicker-infests-android/
"100% of Tested Smartwatches have Big Vulnerabilities", InfoSecurity Magazine, 24 July 2015. [Online]. A part of the IoT trend, the recent advent of smartwatches has created a new niche for security vulnerabilities. After testing ten different smartwatches, HP found that every single one had security holes, including "insufficient authentication, lack of encryption and privacy concerns." Wearable devices are expected to become much more common, notably in the business world, because of their convenience. (ID#: 15-50402) See http://www.infosecurity-magazine.com/news/100-tested-smartwatches-have-big/
"IT security spending priorities don't match threats", GCN, 23 July 2105. [Online; Blog archive]. A survey at the 2015 Black Hat conference indicates that professionals working in security feel that the issues they think are the most critical -- namely, phishing, social network exploits or other forms of social engineering -- are not what they spend the most time addressing in the office. There is a similar disconnect between budget priorities and the latest security threats, according to the survey. (ID#: 15-50408) See http://gcn.com/articles/2015/07/23/black-hat-survey.aspx?admgarea=TC_SecCybersSec
"OPM bringing users back on to e-QIP in increments", SC Magazine, 23 July 2015. [Online]. Late last month, The Office of Personnel Management (OPM) discovered that their e-QIP system, which they use to submit background investigation forms, had a vulnerability that warranted shutting it down. After extensive testing, the OPM has begun incrementally restoring user access to the system. The OPM was recently awarded a $21 million budget boost, which will be used to fix similar issues. (ID#: 15-50397) See http://www.scmagazine.com/users-gaining-access-to-opm-background-investigation-processing-system/article/428232/
"WordPress gets a patch for critical XSS flaw", Computerworld, 23 July 2015. [Online]. Blog hosting website WordPress has released a patch for a critical cross-site scripting (XSS) vulnerability that would allow a hacker to use a compromised non-administrator user account to execute a "complete website takeover". Wordpress-hosted sites are valuable to attackers, who can use them to host malware and launch DDoS attacks. (ID#: 15-50405) See http://www.computerworld.com/article/2951771/security/wordpress-gets-a-patch-for-critical-xss-flaw.html
"5 arrested in JPMorgan hacking case", Computerworld, 22 July 2014. [Online]. Last year, stolen login credentials led to a high-profile breach of JPMorgan, in which information about 76 million households was stolen when the attackers accessed 90 of JPMorgan's servers. U.S. law enforcement has arrested five individuals who were allegedly involved in the breach. Three of the five were arrested on grounds of stock manipulation, with the other two being accused of operating an illegal Bitcoin exchange. (ID#: 15-50406) See http://www.computerworld.com/article/2951215/legal/5-arrested-in-jpmorgan-hacking-case.html
"ICE unveils expanded cyber forensics lab", FCW, 22 July 2015. [Online]. The incredible capabilities of modern computing and internet have unfortunately created a new means by which cybercriminals can carry out their activities. To combat this, the Immigration and Customs Enforcement's Cyber Crimes Center (C3) undertook a massive upgrade to it's facilities. The new lab has new and advanced cyber forensics capabilities and data processing power, which will aid in thwarting child exploitation, drug trafficking, and other crimes. (ID#: 15-50410) See http://fcw.com/articles/2015/07/22/dhs-ice-expansion.aspx
"Cybercrime is paying with 1,425% return on investment", Cyber Defense Magazine, 22 June 2015. [Online]. A report by Trustwave indicates that the typical cybercriminal should expect to make a 1,425% return on the money they spend executing attacks. They found that ransomware and exploit kits are among the most common methods used to compromise a victim's systems, along with extortion and ransoms. CTB-Locker was found to be one of the more notable pieces of malware. (ID#: 15-50363) See http://www.cyberdefensemagazine.com/cybercrime-is-paying-with-1425-return-on-investment/
"Free security tools help detect Hacking Team malware", SC Magazine, 21 July 2015. [Online]. Rook Security and Facebook have both taken action against Hacking Team's malware by issuing free security tools to detect files that were revealed during the 400GB Hacking Team leak. The leaked information has been invaluable for creating such tools, as well as fixing vulnerabilities that Hacking Team had kept secret; however, hackers have been utilizing the information as well. (ID#: 15-50398) See http://www.scmagazine.com/rook-security-facebook-release-free-security-tools-in-response-to-hacking-team-leaks/article/427682/
"Senators Propose Bill to Tighten Vehicle Security", Security Magazine, 21 July 2015. [Online]. The Security and Privacy in Your Car (SPY Car) Act was filed by Senators Edward Markey and Richard Blumenthal in the hopes of standardizing and mandating countermeasures against vehicle cyber attacks. Under the legislation, cars would be required to have active countermeasures against hacking, and standards would be developed by the FTC on the privacy and transparency of collecting data from vehicles. (ID#: 15-50400) See http://www.securitymagazine.com/articles/86530-senators-propose-bill-to-tighten-vehicle-security
"Hacking Drones Close to Being Drawn up by Boeing and Hacking Team", Hacked, 21 July 2015. [Online]. Among the alleged 400GB of data leaked in from the Hacking Team hack is an email conversation Hacking Team and Boeing subsidiary Insitu, in which the two groups negotiated a deal to team up and put Hacking Team's Wi-Fi hacking technology onto an "airborne system", such as a multi-copter. Though negotiations went stale, this indicates that airborne malware is close to becoming a reality. (ID#: 15-50395) See https://hacked.com/hacking-drones-close-drawn-boeing-hacking-team/
"Phishing campaigns target US government agencies exploiting Hacking Team flaw CVE-2015-5119", Security Affairs, 20 July 2015. [Online]. The FBI is warning that phishers are targeting government agencies with CVE-2015-5119, a Adobe Flash vulnerability that was discovered as part of the Hacking Team breach. Adobe released a patch, but APT groups are still going after un-patched systems. (ID#: 15-50415) See http://securityaffairs.co/wordpress/38707/cyber-crime/phishing-cve-2015-5119.html
"More Retailers Hit by New Third-Party Breach?", GovInfoSecurity, 20 July 2015. [Online]. Large retailers including CVS, Rite-Aid, Sam's Club, and Walmart Canada are the suspected victims of a data breach tied to PNI Digital Media Inc., which provides online photo services for the retailers. Some of the retailers confirmed that payment card data was compromised. Third-party vendors are a persistent security issue for companies; exporting services to these external groups can mean losing complete control over the security of your data. (ID#: 15-50416) See http://www.govinfosecurity.com/more-retailers-hit-by-new-third-party-breach-a-8416#
"Army National Guard Exposes 850K Service Member Records", InfoSecurity Magazine, 18 July 2015. [Online]. It is often human error, not vulnerable computer systems, that lead to security incidents, the Army National Guard learned recently. Personal information of over 850,000 members was put at risk when an employee transferred files to a non-approved data center. According to Adam Levin, "This incident demonstrates once more that any system is only as secure as its weakest link and humans have proven yet again that we are the weakest link." (ID#: 15-50403) See http://www.infosecurity-magazine.com/news/national-guard-exposes-850k-member/
"Windows XP: The undead OS", GCN, 17 July 2015. [Online]. Many government agencies and other entities are still using Windows XP, for which Microsoft ended support back in April of 2015. Analysis indicates that as much as 17% of the worldwide desktop OS market share, which puts vast numbers of machines at much greater risk of suffering security incidents. Legacy systems are an item of concern for government in particular; for instance, it is suspected that XP systems could have played a role in the massive OPM breach this year. (ID#: 15-50409) See http://gcn.com/blogs/cybereye/2015/07/windows-xp-undead-os.aspx?admgarea=TC_SecCybersSec
"Softchoice finds 21% of servers still running on Windows Server 2003", Softchoice, 17 July 2015. [Online]. A recent analysis of nearly 100,000 servers by SoftChoice found that a worrying 21% of servers are using Windows Server 2003, for which support is being officially ended by Microsoft. A meager seven percent of organizations had no instances of old server operating systems running, which puts the other 93% (especially those running MS server 2003) at greater risk of becoming the victim of hacking. (ID#: 15-50399) See http://www.softchoice.com/about/press/2015/167
"4.5 Million UCLA Health Patients' Data Compromised In Cyber Attack", Forbes, 17 July 2015. [Online]. The UCLA Hospital System has announced that patient information, including Social Security numbers, dates of birth, and other personal data of 4.5 million patients was compromised. Though suspicious activity was noticed as far back as October 2014, it was not until May 2015 that the network had been breached. The investigation is still ongoing; it is not known whether any personal information was actually accessed. (ID#: 15-50413) See http://www.forbes.com/sites/katevinton/2015/07/17/4-5-million-ucla-health-patients-data-compromised-in-cyber-attack/?ss=Security
"United hackers given million free flight miles", BBC, 16 July 2015. [Online]. White hats everywhere have a new source of inspiration after United Airlines rewarded two ethical hackers with a million free miles each as part of their bug bounty program. The win-win scenario is a case study for the potential effectiveness of bug bounty programs during a time where cyber attacks and breaches on businesses are costing the U.S. economy more and more. (ID#: 15-50377) See http://www.bbc.com/news/technology-33552195
"Swipes, Taps and Cursor Movements Can Foil Cyberthieves", Tech News World, 16 July 2015. [Online]. Behavioral characteristics can be used to provide an extra layer of authentication, according the BioCatch. By analyzing the way in which a user interacts with a webpage or device, a behavioral profile can be built for any given user. When someone claiming to be you uses uncharacteristic actions (unusually hard screen taps, for example), identity thieves can be unmasked. (ID#: 15-50412) See http://www.technewsworld.com/story/82280.html
"Darkode Shutdown: FireEye Intern Accused Of Creating $65,000 Android Malware", Forbes, 15 July 2015. [Online]. The FBI and Europol confirmed that Darkode, a very popular cybercrime forum, was shut down. Twenty-eight individuals were arrested as part of the shut-down, including 20-year-old Morgan Culbertson, an ex-intern at FireEye who is accused of developing the sophisticated Android malware known as Dedroid. (ID#: 15-50414) See http://www.forbes.com/sites/thomasbrewster/2015/07/15/fireeye-intern-dendroid-charges/?ss=Security
"Vietnamese man gets 13 years for massive ID theft scheme", Computerworld, 14 July 2015. [Online]. Hieu Minh Ngo was convicted by a U.S. District Court of wire fraud and identity fraud, among others. Ngo tricked data aggregator Court Ventures into giving him access to a database of sensitive personal records. He sold this data, as well as data from other sources, to cybercriminals worldwide. Ngo will spend 13 years in prison. (ID#: 15-50407) See http://www.computerworld.com/article/2948219/data-security/vietnamese-man-gets-13-years-for-massive-id-theft-scheme.html
"How To Make Internet Voting Secure", Dark Reading, 10 July 2015. [Online]. The U.S. Vote Foundation has just commissioned a report detailing the steps necessary to make internet voting secure and effective. Because of the increasing popularity of remote voting, measures must be in place to keep it secure from interception and tampering. Threats from both malware and human error must be minimized, while also ensuring that the voter has his/her anonymity protected. (ID#: 15-50396) See http://www.darkreading.com/cloud/how-to-make-internet-voting-secure/d/d-id/1321262
"OPM Director Katherine Archuleta Resigns After Federal Data Breach Affects 25 Million Americans", Forbes, 10 July 2015. [Online]. Within 24 hours of the announcement that 21.5 million Americans had their information compromised (on top of the 4.2 million from the first breach), OPM Director Katherine Archuleta has resigned. In her resignation statement, Archuleta states that she hopes "new leadership" will be able to tackle the slew of issues facing the organization. (ID#: 15-50389) See http://www.forbes.com/sites/katevinton/2015/07/10/opm-director-katherine-archuleta-resigns-after-federal-data-breach-affects-25-million-americans/?ss=...
"The Insurance Industry's Unique Vantage Point On Cyber Security", Forbes, 10 July 2015. [Online]. Christopher Skroupa interviews Scott Kannry, CEO of Axio Global, on the state of the cyber security industry. Because of the way it's business model works, Kannry believes that the insurance industry has a "very unique vantage point", from which lessons can be learned about how to take the right approach to cyber threats. (ID#: 15-50390) See http://www.forbes.com/sites/christopherskroupa/2015/07/09/the-insurance-industrys-unique-vantage-point-on-cyber-security/?ss=Security
"Wi-Fi Password Sharing Feature in Windows 10 Raising Security Concerns", Information Security Buzz, 10 July 2015. [Online]. Window's newest operating system, Windows 10, has a Wi-Fi password sharing feature that has some security experts concerned. Wi-Fi Sense, as it is known, is set to share your Wi-Fi Network password with your contacts by default. However, as Tripwire's Manager of Security Research points out, "This doesn't decrease security, it simply makes an insecure action easier." (ID#: 15-50392) See http://www.informationsecuritybuzz.com/wi-fi-password-sharing-feature-in-windows-10-raising-security-concerns/
"Security Researchers Hack Politicians Over Public Wi-Fi", Infosecurity Magazine, 09 July 2015. [Online]. A group of security experts hacked three politicians in an effort to bring awareness to the insecurity of public Wi-Fi. The politicians fell for three different unsophisticated attacks: phishing, a VoIP call interception, and a "simple public Wi-Fi attack". The researchers hope that demonstrations like these will help educate lawmakers on cyber issues so they can use their positions to push technology like HTTPS encryption. (ID#: 15-50375) See http://www.infosecurity-magazine.com/news/security-researchers-hack/
"OpenSSL Security Advisory", OpenSSL.org, 09 July 2015. [Online]. The OpenSSL project has announced an alternative chains certificate forgery (CVE-2015-1793) vulnerability caused by a logic error in the certificate verification process that could allow an attacker to use invalid certificates. The issue affects versions 1.0.2c, 1.0.2b, 1.0.1n and 1.0.1o. (ID#: 15-50386) See https://mta.openssl.org/pipermail/openssl-announce/2015-July/000040.html
"Cyber attack on U.S. power grid could rack up $1 trillion in losses, study says", SC Magazine, 08 July 2015. [Online]. A study by the Centre for Risk Studies at Cambridge University and insurer Lloyd's of London concluded that the economic losses of a cyber attack could reach as much as one trillion USD, with the insurance industry alone suffering a hefty $21 billion in losses. American voters are becoming increasingly aware of the threat that cyber attacks pose, with 32% considering them a major threat, just under the 36% concerned with terrorism. (ID#: 15-50367) See http://www.scmagazine.com/american-voters-rank-cyber-attacks-second-biggest-threat/article/425391/
"Ransomware mimicks APT campaigns for first time", SC Magazine, 08 July 2015. [Online]. For the first time, ransomware has been observed using evasion techniques usually used only by Advanced Persistent Threat (APT) groups. "Operation Kofer", as it is known, is never the same on any given victim machine; it generates a new variant and different delivery/packaging methods for each and every individual victim. The malicious payload itself is not unique, though it is delivered with other "junk" to mask it's malicious nature. (ID#: 15-50368) See http://www.scmagazine.com/operation-kofer-identified-in-europe/article/425382/
"Mysterious Hacking Group Wild Neutron Returns to Wreak Havoc", Infosecurity Magazine, 08 July 2015. [Online]. Wild Neuron, a hacking group that gained notoriety when it attacked Apple, Facebook, Twitter and Microsoft in 2013, appears to be making a comeback. Using stolen code verification certificates and a Flash Player exploit, they are focusing on a set of targets that seems to indicate a financial motive. (ID#: 15-50376) See http://www.infosecurity-magazine.com/news/mysterious-hacking-group-wild/
"Adobe to Patch Hacking Team Flash Player Bug", Infosecurity Magazine, 08 July 2015. [Online]. Adobe will be issuing a patch for the flaw (CVE-2015-5119) that was discovered as part of the 400GB Hacking Team leak. The recent Hacking Team incident highlights the large demand for custom exploit kits and hacking software by "both sides of cyber-conflicts", as well as the significance of Adobe software as a "prime target" for such software. (ID#: 15-50378) See http://www.infosecurity-magazine.com/news/adobe-to-patch-hacking-team-flash/
"Defense secretary to renew call for cooperation with tech industry", Computerworld, 08 July 2015. [Online]. U.S. Defense Secretary Ashton Carter is pushing for increased cooperation between the technology industry and military, acknowledging the "decline in trust of the military" following the Snowden incident. Despite (or perhaps because of) the size of the U.S. armed forces, the Pentagon has a strong need for the innovation and talent that can only be provided by private, independent businesses. (ID#: 15-50379) See http://www.computerworld.com/article/2946013/encryption/defense-secretary-to-renew-call-for-cooperation-with-tech-industry.html
"Did hackers remotely execute 'unexplained' commands on German Patriot missile battery?", Computerworld, 08 July 2015. [Online]. Questions about a potential hacking of a German patriot missile battery are arising after the missile system executed "unexplained" commands. Whether it was an actual hacking incident of just technical glitch, the event raises concerns about the security of high-tech weapons and adds such weapons to the long list of increasingly automated, high-risk technologies that are potentially vulnerable to cyber attack. (ID#: 15-50380) See http://www.computerworld.com/article/2945383/cyberwarfare/did-hackers-remotely-execute-unexplained-commands-on-german-patriot-missile-battery.html
"Comey renews encryption plea on Capitol Hill", FCW, 08 July 2015. [Online]. The debate over law enforcement access to encrypted communications is not cooling down as FBI director James Comey reiterates the need for law enforcement agencies to thwart terror plots. Technology experts -- including two House members with CMSC degrees, as well as various members of industry and academia -- have insisted that putting a government backdoor in encryption compromises encryption entirely. (ID#: 15-50383) See http://fcw.com/articles/2015/07/08/comey-encryption-hearing.aspx
"Pentagon's Silicon Valley unit gets $1.75M for fiscal 2015", FCW, 08 July 2015. [Online]. In the hopes of fostering a relationship between the Pentagon and Silicon Valley, the Defense Department has opened a new, full-time Silicon Valley office. By strengthening the relationship between the Pentagon and tech sector, the Defense Department hopes to take advantage of the innovation provided by commercial entities, and to "serve as a broker between acquisition officials and tech executives". (ID#: 15-50384) See http://fcw.com/articles/2015/07/08/pentagon-fiscal-2015.aspx
"NYSE, United Airlines Shutdowns Spark Paranoia", Information Week, 08 July 2015. [Online]. The networks of the New York Stock Exchange and United Airlines both went down on the same morning due to technical issues, initially leading many to believe that malicious activity was responsible. United Airlines faced a 1 hour and 19 minute FAA ground stop due to the inability for the company to check passenger records with the no-fly list, while trading on NYSE's non-electronic exchanges were halted for five hours. (ID#: 15-50393) See http://www.informationweek.com/nyse-united-airlines-shutdowns-spark-paranoia/a/d-id/1321228?
"Is Isolating the Internet Key to Bulletproof Security?", Tech News World, 07 July 2015. [Online]. Since the advent of computing, the status quo for cybersecurity has been to detect malicious activity and then try to block it. However, a new paradigm for security could change everything: isolation. Menlo Security has developed an Isolation Platform that utilizes containers in the cloud to keep all internet activity -- not just malicious network activity -- away from an organization's systems. (ID#: 15-50385) See http://www.technewsworld.com/story/82245.html
"NIST drafts security building blocks", GCN, 07 July 2015. [Online; Blog archive]. The NIST is aiming to set the stage for an increase in email security and mobile security that via personal identity verification (PIV) credentials. These "building blocks" will be used to form the NIST Cybersecurity Practice Guides,a guide to implementing a cybersecurity reference design. (ID#: 15-50381) See http://gcn.com/blogs/pulse/2015/07/nist-cyber-building-blocks.aspx?admgarea=TC_SecCybersSec
"USD Creates New Cyber Security Center", Security Magazine, 07 July 2015. [Online]. The University of San Diego announced plans to create the USD Center for Cyber Security Engineering and Technology. The new center will support the Master of Science in Cyber Security Engineering as well as an online Master of Science in Cyber Security Information Technology Leadership, as well as several publicly-available certificate programs. Additionally, research and development of cyber security solutions will be supported under the new center. (ID#: 15-50372) See http://www.securitymagazine.com/articles/86507-usd-creates-new-cyber-security-center
"When 'int' is the new 'short' ", Project Zero, 07 July 2015. [Online]. Project Zero team member Mark Brand discovered an issue in Google Chrome's IOBuffer interface that allows "unsandboxed arbitrary code execution from a drive-by with a single bug", which is very serious. The culprit: a simple misuse of the int type to denote the size of a buffer instead of the more correct size_t type. Misuse of types, especially integer-related ones, is a common cause for vulnerabilities with using C/C++. (ID#: 15-50387) See http://googleprojectzero.blogspot.com/2015/07/when-int-is-new-short.html
"Hacking Team hacked; leaked documents confirm sale of software to Sudan and Ethiopia", SC Magazine, 06 July 2015. [Online]. Accusations of human rights violations against intrusion software development company "Hacking Team" seem to have been justified following the disclosure of internal documents indicating that the group sold software to the governments of Ethiopia and Sudan, enabling them to target journalists, hack private companies, and evade UN sanctions. (ID#: 15-50369) See http://www.scmagazine.com/hacking-team-systems-breached-and-docs-posted-online/article/424860/
"Oracle PeopleSoft attack could enable big data breaches", SC Magazine, 06 July 2015. [Online]. ERPScan researchers have discovered that 231 Oracle PeopleSoft systems are vulnerable to the TokenChpoken attack, which enables bad actors to gain full access to the PeopleSoft system. PeopleSoft software, which is used to manage resources like Social Security numbers and payment card data, could be of great value to attackers. ERPScan noted that Harvard, which suffered a data breach recently, is one of the 231. (ID#: 15-50370) See http://www.scmagazine.com/peoplesoft-systems-vulnerable-to-tokenchpoken/article/424863/
"Government credentials show up on paste sites", GCN, 06 July 2015. [Online; Blog archive]. A year-long investigation in 2014 by a security threat analyst found government credentials of 47 U.S. government agencies from 89 unique domains on a collection of public paste sites. Techniques like two-factor authentication can be used to mitigate the risk of credential leaks, though both government and private industry both have room to improve in this area -- the OPM breach and others like it are evidence of this. (ID#: 15-50382) See http://gcn.com/blogs/cybereye/2015/07/passwords-for-sale.aspx?admgarea=TC_SecCybersSec
"Mastercard Testing Facial Recognition Security App", Security Magazine, 03 July 2015. [Online]. MasterCard has plans to implement a new take on multi-factor authentication: online shoppers will need to take a facial scan when making a purchase. Fingerprint scanning will also be an option for MasterCard customers, and plans are even being made to develop and use heartbeat-recognition technology for identity verification. (ID#: 15-50373) See http://www.securitymagazine.com/articles/86499-mastercard-testing-facial-recognition-security-app
"Former Georgia-Pacific sysadmin charged with damaging protected computers", SC Magazine, 02 July 2015. [Online]. Louisianan Brian Johnson was arrested and indicted with charges of intentionally damaging computer systems of his former employer, Georgia-Pacific. Johnson allegedly began to attack the manufacturing company after he was fired from his position as IT specialist and sysadmin, resulting in over $5000 in losses. (ID#: 15-50371) See http://www.scmagazine.com/louisiana-man-arrested-for-damaging-employers-computers/article/424513/
"CryptoWall 3.0 Attacks via Google Drive", InfoSecurity Magazine, 02 July 2015. [Online]. Heimdal Security discovered the latest campaign of the third revision of the notorious CryptoWall malware. CryptoWall 3.0 uses the exceedingly common RIG exploit kit to facilitate drive-by download attacks on 45 websites. The initial payload is retrieved from Google Drive, at which point the rest of the malware is downloaded from compromised webpages. (ID#: 15-50365) See http://www.infosecurity-magazine.com/news/cryptowall-30-attacks-via-google/
"Harvard University announces network intrusion, possible data exposure", SC Magazine, 02 July 2015. [Online]. Login credentials for Harvard's Faculty of Arts and Sciences (FAS) and Central Administration information technology might be compromised, the university announced on June 19th. Harvard is requiring members of various programs and schools to change their passwords in order to prevent further malicious activity. Many universities -- even ivy league schools -- have dated security measures, making it easy for attackers to gain entry. (ID#: 15-50366) See http://www.scmagazine.com/harvard-login-credentials-may-have-been-exposed-in-breach/article/424500/
"Europol arrested members of a gang behind Zeus And SpyEye", Cyber Defense Magazine, 30 June 2015. [Online]. European law enforcement has arrested five suspected cybercriminals accused of developing and running the notorious Zeus and SpyEye botnets. The criminal group is estimated to have caused over 2 million Euros in damage. They appeared to have been a "structured and efficient organization", bringing the Gameover Zeus botnet back up after having been defeated by the FBI and Europol a month prior. (ID#: 15-50374) See http://www.cyberdefensemagazine.com/europol-arrested-members-of-a-gang-behind-zeus-and-spyeye/
International News
"The Dinosaurs of Cybersecurity Are Planes, Power Grids and Hospitals", Tech Crunch, 10 July 2015. [Online]. One of the most prominent risks in cybersecurity comes in the form of infrastructure and things like airplanes and hospitals. As these systems are compromised, patches are developed to remedy the problem. However patches are slow to roll out and take a great deal of time to develop. By the time patches are complete, often, the damage has already been done. (ID#: 15-60040)
See: http://techcrunch.com/2015/07/10/the-dinosaurs-of-cybersecurity-are-planes-power-grids-and-hospitals/
"Huge Global Sting Yields World's Biggest Hacker Bust Ever", CIO Today, 16 July 2015. [Online]. U.S. Attorney David J. Hickton said that of all the criminal forums online, Darkode was perhaps the most dangeous. Darkode was taken down by federal officials, and now all responsible parties will be facing charges. This is a major achievement in the fight against cyber crime as it was the largest take down in cyber crime history. (ID#: 15-60054)
See: http://www.cio-today.com/article/index.php?story_id=02300001CE4J
"Microsoft is Reportedly Planning to Buy an Israeli Cyber Security Firm for $320 Million", Business Insider, 20 July 2015. [Online]. A new report shows that Microsoft has a deal in place to purchase the Israeli cybersecurity company, Adallom. Adallom is expected to become Microsoft's cyber security center for the entirety of Israel. Adallom was founded in 2012 and has since grown to 80 employees. (ID#: 15-60041)
See: http://www.businessinsider.com/r-microsoft-to-buy-israeli-cyber-security-firm-adallom-report-2015-7
"Hackers Remotely Kill a Jeep on the Highway - With Me in it", Wired, 21 July 2015. [Online]. Charlie Miller and Chris Valase successfully hacked in to a Jeep Cherokee from a remote computer, all while the car was being driven miles away. The two were able to take full control of nearly everything from the windshield wipers and air conditioning to the steering wheel itself. They plan on releasing some of their findings at Black Hat in Las Vegas in August. (ID#: 15-60042)
See: http://www.wired.com/2015/07/hackers-remotely-kill-jeep-highway/
"Postal Service Lacked 'Cybersecurity Culture' Before Hack, Watchdog Says", NextGov, 22 July 2015. [Online]. Last November, the USPS fell victim to a cyber attack in which the data of 800,000 employees and nearly 3 million customers was stolen. Reports later revealed that only 1% of employees, compared to 80% on average, had completed any kind of cyber security awareness training. Since the attack the USPS has worked to improve their staffing, computing environment protections, awareness, and more. (ID#: 15-60045)
See: http://www.nextgov.com/cybersecurity/2015/07/postal-service-lacked-cybersecurity-culture-hack-watchdog-says/118371/
"Lawmakers Seek to Boost US Cyber Security", Voice of America News, 22 July 2015. [Online]. Congress is looking to pass a new cyber security bill to prevent another hack, following the OPM fallout. The bill would give the Department of Homeland Security the power to monitor certain federal networks, and upgrade their breach detection. Meanwhile, others are calling for President Obama to name the party responsible for the attack on the Office of Personnel Management. (ID#: 15-60047)
See: http://www.voanews.com/content/congress-cyber-security/2873734.html
"Japan to train thousands on cyber-security ahead of 2020 Olympics", SC Magazine, 23 July 2015. [Online]. Japan's Ministry of Internal Affairs and Communications plans to ask for 20 billion yend over the four years leading up to the 2020 Olympics. The money will be used to fund training for local authorities, schools, and businesses. During the course of the games, olympic cities are frequent targets for cyber attacks. (ID#: 15-60046)
See: http://www.scmagazine.com/japan-to-train-thousands-on-cyber-security-ahead-of-2020-olympics/article/428048/
"Fiat Chrysler Issues Massive Recall After Hack", Top Tech News, 24 July 2015. [Online]. Two hackers, Charlie Miller and Chris Valasek, were able to successfully hack into and assume total control of a Jeep Cherokee. The flaw was later revealed to be a vulnerability in the Uconnect system. As a result, Chrysler has issued a recall on all affected models. (ID#: 15-60051)
See: http://www.toptechnews.com/article/index.php?story_id=011000DNTS82
"Super-Scary Android Flaw Found", Tech News World, 28 July 2015. [Online]. A new flaw discovered in Android's Stagefright media engine should have user's concerned. The flaw, which affects 95% of users or 950 million phones, can be exploited without the user ever touching their phone. A specially crafted MMS message sent to a user's number is all it takes for their phone to be completely and totally compromised. Experts say that the exploit has not been seen in the public yet, however, that may change now that the vulnerabilities have been released to the public. (ID#: 15-60048)
See: http://www.technewsworld.com/story/82315.html
"Cybersecurity Bill Could 'Sweep Away' Internet Users' Privacy, Agency Warns", The Guardian, 3 August 2015. [Online]. A new revision of the Cybersecurity Information Sharing Act bill will be voted on by the Senate. The bill allows companies with large amounts of information to share it with the appropriate government agencies, who can then share the information as they see fit. The bill has turned a lot of attention to companies such as Google and Facebook who possess large amounts of user's data and online habits. (ID#: 15-60044)
See: http://www.theguardian.com/world/2015/aug/03/cisa-homeland-security-privacy-data-internet
"Hacking Victim JPMorgan Chasing Cybersecurity Fixes", Investors, 4 August 2015. [Online]. Last year, JP Morgan Chase suffered a cyber attack that compromised the contact information of roughly 76 million customers. Although no accounts or social security numbers were taken, the company is planning on taking measures to prevent another major attack. The bank says that theire cyber security budget will be increased from $250 million to $500 million in order to improve upon their analytics, testing and coverage. (ID#: 15- 60043)
See: http://news.investors.com/business/080415-764935-jpmorgan-chase-to-double-cybersecurity-spending.htm
"Homeland Official Asks Information Security Crowd to Start Building Trust with the Government", Newser, 6 August 2015. [Online]. A top official for the Obama administration spoke at the Black Hat convention about the need for trust between the government and the security community. He said that in order to prevent future cyber-attacks and to serve the greater good, the two groups need to come together. (ID#: 15-60052)
See: http://www.newser.com/article/8af6ee60ef9f41c7b52858a43f49f370/homeland-official-asks-information-security-crowd-to-start-building-trust-with-the-government.html
"Android Firms Team on Monthly Security Fixes", CIO Today, 7 August 2015. [Online]. Following the revelation that an exploit in Android's Stagefright media engine, Google, Samsung, and LG have all announced that they plan to push out security patches each month. They believe that these teams will help clear up the long wait times sometimes experienced when software patches are pushed out by the carriers. (ID#: 15-60050)
See: http://www.cio-today.com/article/index.php?story_id=020000OU8ELS
"Kaspersky Lab: Based In Russia, Doing Cybersecurity In The West", NPR, 10 August 2015. [Online]. One of the leading security software providers, Kaspersky Labs, has come in to question over their connection to the Russian government. Kaspersky has been adamant that there is no secret sharing going on with Russian officials and that it would be nothing but destroy the business he worked so hard to build. (ID#: 15-60053)
See: http://www.npr.org/sections/alltechconsidered/2015/08/10/431247980/kaspersky-lab-a-cybersecurity-leader-with-ties-to-russian-govt
"India, US Holding Talks to Step Up Cyber Security; Seek to Tap Digital Economies", India Times, 11 August 2015. [Online]. US and Indian officials are meeting to discuss how they can work together to better protect cyber space. Cooperation is becoming one of the biggest keys in defending against cyber attacks. In addition, India is seeking guidance as how to protect their transforming economy, which is quickly turning more digitalized like that of the US. (ID#: 15-60049)
See: http://economictimes.indiatimes.com/news/defence/india-us-holding-talks-to-step-up-cyber-security-seek-to-tap-digital-economies/articleshow/48430302.cms
(ID#: 15-5929)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Conferences |
The following pages provide highlights of Science of Security-related research presented at the following international conferences:
(ID#: 15-5930)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conference: Online Social Networks, 2014, Dublin, Ireland |
The Second ACM Conference on Online Social Networks was held October 1-2, 2014 in Dublin, Ireland.
Presentations from the sessions on privacy and anonymity, network identity, and security in social networks are cited here. Materials were recovered from the ACM Digital Library on March 2, 2015.
Mishari Almishari, Ekin Oguz, Gene Tsudik; Fighting Authorship Linkability With Crowdsourcing; COSN '14 Proceedings of the Second ACM Conference on Online Social Networks, October 2014, Pages 69-82.
Doi: 10.1145/2660460.2660486
Abstract: Massive amounts of contributed content -- including traditional literature, blogs, music, videos, reviews and tweets -- are available on the Internet today, with authors numbering in many millions. Textual information, such as product or service reviews, is an important and increasingly popular type of content that is being used as a foundation of many trendy community-based reviewing sites, such as TripAdvisor and Yelp. Some recent results have shown that, due partly to their specialized/topical nature, sets of reviews authored by the same person are readily linkable based on simple stylometric features. In practice, this means that individuals who author more than a few reviews under different accounts (whether within one site or across multiple sites) can be linked, which represents a significant loss of privacy. In this paper, we start by showing that the problem is actually worse than previously believed. We then explore ways to mitigate authorship linkability in community-based reviewing. We first attempt to harness the global power of crowdsourcing by engaging random strangers into the process of re-writing reviews. As our empirical results (obtained from Amazon Mechanical Turk) clearly demonstrate, crowdsourcing yields impressively sensible reviews that reflect sufficiently different stylometric characteristics such that prior stylometric linkability techniques become largely ineffective. We also consider using machine translation to automatically re-write reviews. Contrary to what was previously believed, our results show that translation decreases authorship linkability as the number of intermediate languages grows. Finally, we explore the combination of crowdsourcing and machine translation and report on results.
Keywords: author anonymization, author identification, author linkability, authorship attribution, crowdsourcing, stylometry (ID#:15-3944)
URL: http://doi.acm.org/10.1145/2660460.2660486
Sai Teja Peddinti, Keith W. Ross, Justin Cappos; "On the Internet, Nobody Knows You're A Dog": A Twitter Case Study Of Anonymity In Social Networks; COSN '14 Proceedings of the Second ACM Conference on Online Social Networks, October 2014, Pages 83-94. Doi: 10.1145/2660460.2660467
Abstract: Twitter does not impose a Real-Name policy for usernames, giving users the freedom to choose how they want to be identified. This results in some users being Identifiable (disclosing their full name) and some being Anonymous (disclosing neither their first nor last name). In this work we perform a large-scale analysis of Twitter to study the prevalence and behavior of Anonymous and Identifiable users. We employ Amazon Mechanical Turk (AMT) to classify Twitter users as Highly Identifiable, Identifiable, Partially Anonymous, and Anonymous. We find that a significant fraction of accounts are Anonymous or Partially Anonymous, demonstrating the importance of Anonymity in Twitter. We then select several broad topic categories that are widely considered sensitive--including pornography, escort services, sexual orientation, religious and racial hatred, online drugs, and guns--and find that there is a correlation between content sensitivity and a user's choice to be anonymous. Finally, we find that Anonymous users are generally less inhibited to be active participants, as they tweet more, lurk less, follow more accounts, and are more willing to expose their activity to the general public. To our knowledge, this is the first paper to conduct a large-scale data-driven analysis of user anonymity in online social networks.
Keywords: anonymity, behavioral analysis, online social networks, quantify, twitter (ID#:15-3945)
URL: http://doi.acm.org/10.1145/2660460.2660467
Emre Sarigol, David Garcia, Frank Schweitzer; Online Privacy as a Collective Phenomenon; COSN '14 Proceedings of the Second ACM conference on Online Social Networks, October 2014, Pages 95-106. Doi: 10.1145/2660460.2660470
Abstract: The problem of online privacy is often reduced to individual decisions to hide or reveal personal information in online social networks (OSNs). However, with the increasing use of OSNs, it becomes more important to understand the role of the social network in disclosing personal information that a user has not revealed voluntarily: How much of our private information do our friends disclose about us, and how much of our privacy is lost simply because of online social interaction? Without strong technical effort, an OSN may be able to exploit the assortativity of human private features, this way constructing shadow profiles with information that users chose not to share. Furthermore, because many users share their phone and email contact lists, this allows an OSN to create full shadow profiles for people who do not even have an account for this OSN. We empirically test the feasibility of constructing shadow profiles of sexual orientation for users and non-users, using data from more than 3 Million accounts of a single OSN. We quantify a lower bound for the predictive power derived from the social network of a user, to demonstrate how the predictability of sexual orientation increases with the size of this network and the tendency to share personal information. This allows us to define a privacy leak factor that links individual privacy loss with the decision of other individuals to disclose information. Our statistical analysis reveals that some individuals are at a higher risk of privacy loss, as prediction accuracy increases for users with a larger and more homogeneous first- and second-order neighborhood of their social network. While we do not provide evidence that shadow profiles exist at all, our results show that disclosing of private information is not restricted to an individual choice, but becomes a collective decision that has implications for policy and privacy regulation.
Keywords: prediction, privacy, shadow profiles (ID#:15-3946)
URL: http://doi.acm.org/10.1145/2660460.2660470
Luca Rossi, Mirco Musolesi; It's the Way You Check-In: Identifying Users in Location-Based Social Networks; COSN '14 Proceedings of the Second ACM conference on Online Social Networks, October 2014, Pages 215-226. Doi: 10.1145/2660460.2660485
Abstract: In recent years, the rapid spread of smartphones has led to the increasing popularity of Location-Based Social Networks (LBSNs). Although a number of research studies and articles in the press have shown the dangers of exposing personal location data, the inherent nature of LBSNs encourages users to publish information about their current location (i.e., their check-ins). The same is true for the majority of the most popular social networking websites, which offer the possibility of associating the current location of users to their posts and photos. Moreover, some LBSNs, such as Foursquare, let users tag their friends in their check-ins, thus potentially releasing location information of individuals that have no control over the published data. This raises additional privacy concerns for the management of location information in LBSNs. In this paper we propose and evaluate a series of techniques for the identification of users from their check-in data. More specifically, we first present two strategies according to which users are characterized by the spatio-temporal trajectory emerging from their check-ins over time and the frequency of visit to specific locations, respectively. In addition to these approaches, we also propose a hybrid strategy that is able to exploit both types of information. It is worth noting that these techniques can be applied to a more general class of problems where locations and social links of individuals are available in a given dataset. We evaluate our techniques by means of three real-world LBSNs datasets, demonstrating that a very limited amount of data points is sufficient to identify a user with a high degree of accuracy. For instance, we show that in some datasets we are able to classify more than 80% of the users correctly.
Keywords: location-based social networks, privacy, user identification (ID#:15-3947)
URL: http://doi.acm.org/10.1145/2660460.2660485
Ratan Dey, Madhurya Nangia, Keith W. Ross, Yong Liu; Estimating Heights From Photo Collections: A Data-Driven Approach; COSN '14 Proceedings of the Second ACM conference on Online Social Networks, October 2014, Pages 227-238. Doi: 10.1145/2660460.2660466
Abstract: A photo can potentially reveal a tremendous amount of information about an individual, including the individual's height, weight, gender, ethnicity, hair color, skin condition, interests, and wealth. A photo collection -- a set of inter-related photos including photos of many people appearing in two or more photos -- could potentially reveal a more vivid picture of the individuals in the collection. In this paper we consider the problem of estimating the heights of all the users in a photo collection, such as a collection of photos from a social network. The main ideas in our methodology are (i) for each individual photo, estimate the height differences among the people standing in the photo, (ii) from the photo collection, create a people graph, and combine this graph with the height difference estimates from the individual photos to generate height difference estimates among all the people in the collection, (iii) then use these height difference estimates, as well as an a priori distribution, to estimate the heights of all the people in the photo collection. Because many people will appear in multiple photos across the collection, height-difference estimates can be chained together, potentially reducing the errors in the estimates. To this end, we formulate a Maximum Likelihood Estimation (MLE) problem, which we show can be easily solved as a quadratic programming problem. Intuitively, this data-driven approach will improve as the number of photos and people in the collection increases. We apply the technique to estimating the heights of over 400 movie stars in the IMDb database and of about 30 graduate students.
Keywords: concept extraction, height estimate, image processing, maximum likelihood estimation, people graph, photo collection, privacy (ID#:15-3948)
URL: http://doi.acm.org/10.1145/2660460.2660466
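As a rough sketch of the estimation problem this abstract describes -- the notation below is ours for illustration and is not taken from the paper -- noisy pairwise height differences combined with a prior yield a quadratic objective:

% h_i: unknown height of person i; d_ij: height difference estimated from a
% photo containing both i and j, with noise variance sigma_ij^2; E: edges of
% the people graph; N(mu_0, sigma_0^2): the a priori height distribution.
\hat{h} \;=\; \arg\min_{h} \sum_{(i,j) \in E} \frac{\left(h_i - h_j - d_{ij}\right)^2}{\sigma_{ij}^2} \;+\; \sum_{i} \frac{\left(h_i - \mu_0\right)^2}{\sigma_0^2}

Under Gaussian noise assumptions this objective is quadratic in h, consistent with the abstract's remark that the MLE problem can be solved as a quadratic program, and chaining differences across the people graph is what lets accuracy improve as the collection grows.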
Arthi Ramachandran, Yunsung Kim, Augustin Chaintreau; "I Knew They Clicked When I Saw Them With Their Friends": Identifying Your Silent Web Visitors On Social Media; COSN '14 Proceedings of the Second ACM conference on Online Social Networks, October 2014, Pages 239-246. Doi: 10.1145/2660460.2660461
Abstract: An increasing fraction of users access content on the web from social media. Endorsements by microbloggers and public figures you connect with gradually replaces the curation originally in the hand of traditional media sources. One expects a social media provider to possess a unique ability to analyze audience and trends since they collect not only information about what you actively share, but also about what you silently watch. Your behavior in the latter seems safe from observations outside your online service provider, for privacy but also commercial reasons. In this paper, we show that supposing that your passive web visits are anonymous to your host is a fragile assumption, or alternatively that third parties -- content publishers or providers serving ads onto them -- can efficiently reconciliate visitors with their social media identities. What is remarkable in this technique is that it need no support from the social media provider, it seamlessly applies to visitors who never post or endorse content, and a visitor's public identity become known after a few clicks. This method combines properties of the public follower graph with posting behaviors and recent time-based inference, making it difficult to evade without drastic or time-wasting measures. It potentially offers researchers working on traffic datasets a new view into who access content or through which channels.
Keywords: data mining, privacy, social networks (ID#:15-3950)
URL: http://doi.acm.org/10.1145/2660460.2660461
Nicky Robinson, Joseph Bonneau; Cognitive Disconnect: Understanding Facebook Connect Login Permissions; COSN '14 Proceedings of the Second ACM conference on Online Social Networks, October 2014, Pages 247-258. Doi: 10.1145/2660460.2660471
Abstract: We study Facebook Connect's permissions system using crawling, experimentation, and user surveys. We find several areas in which it works differently than many users and developers expect. More permissions can be granted than developers intend. In particular, permissions that allow a site to post to the user's profile are granted on an all-or-nothing basis. While users generally understand what data sites can read from their profile, they generally do not understand the full extent of what sites can post. In the case of write permissions, we show that user expectations are influenced by the identity of the requesting site although this has no impact on what is actually enforced. We also find that users generally do not understand the way Facebook Connect permissions interact with Facebook's privacy settings. Our results suggest that users understand detailed, granular messages better than those that are broad and vague.
Keywords: facebook, online social networks, permissions, privacy (ID#:15-3951)
URL: http://doi.acm.org/10.1145/2660460.2660471
Ting-Kai Huang, Bruno Ribeiro, Harsha V. Madhyastha, Michalis Faloutsos; The Socio-Monetary Incentives Of Online Social Network Malware Campaigns; COSN '14 Proceedings of the Second ACM conference on Online Social Networks, October 2014, Pages 259-270. Doi: 10.1145/2660460.2660478
Abstract: Online social networks (OSNs) offer a rich medium of malware propagation. Unlike other forms of malware, OSN malware campaigns direct users to malicious websites that hijack their accounts, posting malicious messages on their behalf with the intent of luring their friends to the malicious website, thus triggering word-of-mouth infections that cascade through the network compromising thousands of accounts. But how are OSN users lured to click on the malicious links? In this work, we monitor 3.5 million Facebook accounts and explore the role of pure monetary, social, and combined socio-monetary psychological incentives in OSN malware campaigns. Among other findings we see that the majority of the malware campaigns rely on pure social incentives. However, we also observe that malware campaigns using socio-monetary incentives infect more accounts and last longer than campaigns with pure monetary or social incentives. The latter suggests the efficiency of an epidemic tactic surprisingly similar to the mechanism used by biological pathogens to cope with diverse gene pools.
Keywords: labor markets, monetary incentives, osn malware, social incentives (ID#:15-3952)
URL: http://doi.acm.org/10.1145/2660460.2660478
Pili Hu, Ronghai Yang, Yue Li, Wing Cheong Lau; Application Impersonation: Problems Of OAuth And API Design In Online Social Networks; COSN '14 Proceedings of the Second ACM conference on Online Social Networks, October 2014, Pages 271-278. Doi: 10.1145/2660460.2660463
Abstract: OAuth 2.0 protocol has enjoyed wide adoption by Online Social Network (OSN) providers since its inception. Although the security guideline of OAuth 2.0 is well discussed in RFC6749 and RFC6819, many real-world attacks due to the implementation specifics of OAuth 2.0 in various OSNs have been discovered. To our knowledge, previously discovered loopholes are all based on the misuse of OAuth and many of them rely on provider side or application side vulnerabilities/ faults beyond the scope of the OAuth protocol. It was generally believed that correct use of OAuth 2.0 is secure. In this paper, we show that OAuth 2.0 is intrinsically vulnerable to App impersonation attack due to its provision of multiple authorization flows and token types. We start by reviewing and analyzing the OAuth 2.0 protocol and some common API design problems found in many 1st tiered OSNs. We then propose the App impersonation attack and investigate its impact on 12 major OSN providers. We demonstrate that, App impersonation via OAuth 2.0, when combined with additional API design features/ deficiencies, make large-scale exploit and privacy-leak possible. For example, it becomes possible for an attacker to completely crawl a 200-million-user OSN within just one week and harvest data objects like the status list and friend list which are expected, by its users, to be private among only friends. We also propose fixes that can be readily deployed to tackle the OAuth2.0-based App impersonation problem.
Keywords: api design in osn, app impersonation attack, oauth 2.0, single sign on, social network privacy (ID#:15-3953)
URL: http://doi.acm.org/10.1145/2660460.2660463
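The root cause the authors describe is that a bearer token by itself says nothing about which app it was issued to, so a relying party that checks only the token's validity can be fed a token harvested by a different, malicious app. The sketch below illustrates the corresponding audience check; the introspection endpoint, field names, and app id are hypothetical stand-ins, not any specific OSN's API.

    import requests

    INTROSPECTION_URL = "https://osn.example/oauth/token_info"  # hypothetical endpoint
    MY_APP_ID = "my-app-id"  # the app id this backend registered with the OSN

    def verify_token(access_token):
        """Validate a bearer token before trusting any identity derived from it."""
        resp = requests.get(INTROSPECTION_URL, params={"access_token": access_token})
        resp.raise_for_status()
        info = resp.json()
        # Checking only validity/expiry is what enables app impersonation:
        if not info.get("valid", False):
            raise PermissionError("invalid or expired token")
        # The essential extra check: the token must have been issued to *this* app.
        if info.get("app_id") != MY_APP_ID:
            raise PermissionError("token was issued to a different app")
        return info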
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conferences: ACM Symposium on Information, Computer and Communications Security (ASIACCS) 2015, Singapore |
The 10th annual ACM Symposium on Information, Computer and Communications Security (ASIACCS) was held in Singapore, April 14-17, 2015. This year’s conference featured tracks on cyber-physical security and cryptography. The web page for the conference is at: http://icsd.i2r.a-star.edu.sg/asiaccs15/
Chris Y.T. Ma, David K.Y. Yau; “On Information-theoretic Measures for Quantifying Privacy Protection of Time-series Data;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 427-438. Doi: 10.1145/2714576.2714577
Abstract: Privacy protection of time-series data, such as traces of household electricity usage reported by smart meters, is of much practical importance. Solutions are available to improve data privacy by perturbing clear traces to produce noisy versions visible to adversaries, e.g., in battery-based load hiding (BLH) against non-intrusive load monitoring (NILM). A foundational task for research progress in the area is the definition of privacy measures that can truly evaluate the effectiveness of proposed protection methods. It is a difficult problem since resilience against any attack algorithms known to the designer is inconclusive, given that adversaries could discover or indeed already know stronger algorithms for attacks. A more basic measure is information-theoretic in nature, which quantifies the inherent information available for exploitation by an adversary, independent of how the adversary exploits it or indeed any assumed computational limitations of the adversary. In this paper, we analyze information-theoretic measures for privacy protection and apply them to several existing protection methods against NILM. We argue that although these measures abstract away the details of attacks, the kind of information the adversary considers plays a key role in the evaluation, and that a new measure of offline conditional entropy is better suited for evaluating the privacy of perturbed real-world time-series data, compared with other existing measures.
Keywords: conditional entropy, correlated time-series, privacy measure, privacy protection (ID#: 15-5577)
URL: http://doi.acm.org/10.1145/2714576.2714577
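As a concrete illustration of the kind of measure discussed above, the sketch below estimates an empirical conditional entropy H(X_t | X_{t-1}) for a quantized time series; a lower value means an adversary observing the previous sample has less residual uncertainty about the next one. This is an illustrative simplification, not the paper's offline conditional entropy measure, and the traces here are synthetic.

    import numpy as np
    from collections import Counter

    def empirical_conditional_entropy(trace, bins=8):
        """Estimate H(X_t | X_{t-1}) in bits for a quantized time series."""
        edges = np.histogram_bin_edges(trace, bins=bins)
        symbols = np.digitize(trace, edges[1:-1])        # quantize to bin ids
        pairs = Counter(zip(symbols[:-1], symbols[1:]))  # (previous, next) counts
        prev = Counter(symbols[:-1])
        n = len(symbols) - 1
        h = 0.0
        for (a, b), c in pairs.items():
            h -= (c / n) * np.log2(c / prev[a])          # -p(a,b) * log2 p(b|a)
        return h

    rng = np.random.default_rng(0)
    clear = np.sin(np.linspace(0, 20, 2000))             # stand-in for a clear load trace
    noisy = clear + rng.normal(0, 0.5, clear.shape)      # stand-in for a perturbed trace
    print(empirical_conditional_entropy(clear), empirical_conditional_entropy(noisy))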
Maryam Mehrnezhad, Ehsan Toreini, Siamak F. Shahandashti, Feng Hao; “TouchSignatures: Identification of User Touch Actions based on Mobile Sensors via JavaScript;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Page 673. Doi: 10.1145/2714576.2714650
Abstract: Conforming to the recent W3C specifications (www.w3.org/TR/orientation-event), modern mobile web browsers generally allow JavaScript code in a web page to access motion and orientation sensor data without the user's permission. The associated risks to user privacy are, however, not considered in the W3C specifications. In this work, for the first time, we show how user privacy can be compromised using device motion and orientation sensor data available in-browser, despite the fact that the data rate is 5 to 10 times slower than what is attainable in-app. We examine different browsers on the Android and iOS platforms, study their policies in granting permissions to JavaScript code with respect to access to motion and orientation sensor data, and identify multiple vulnerabilities. Based on our findings, we propose TouchSignatures, an implementation of an attack in which malicious JavaScript code on an inactive tab listens to such sensor data measurements. Based on these streams, TouchSignatures is able to distinguish the user's touch actions (e.g., tap, scroll, hold, and zoom) on an active tab, allowing the remote website to learn the client-side user activities. Finally, we demonstrate the practicality of this attack by collecting real-world user data and reporting high success rates using our proof-of-concept implementation.
Keywords: classifier, javascript attack, mobile browser, mobile sensors, touch actions, user privacy (ID#: 15-5578)
URL: http://doi.acm.org/10.1145/2714576.2714650
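The classification step of an attack like this can be reproduced in miniature: window the sensor stream, extract simple statistics per window, and train an off-the-shelf classifier. The sketch below uses synthetic windows and generic features purely to show the pipeline shape; it is not the authors' feature set or model.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    ACTIONS = ["tap", "scroll", "hold", "zoom"]

    def window_features(w):
        """w: (n_samples, 6) window of motion (x, y, z) and orientation (alpha, beta, gamma)."""
        return np.concatenate([w.mean(axis=0), w.std(axis=0),
                               np.abs(np.diff(w, axis=0)).mean(axis=0)])

    # Synthetic windows standing in for sensor data captured by in-browser JavaScript
    rng = np.random.default_rng(1)
    X = np.array([window_features(rng.normal(label, 1.0, size=(50, 6)))
                  for label in range(len(ACTIONS)) for _ in range(100)])
    y = np.repeat(np.arange(len(ACTIONS)), 100)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    print("accuracy on synthetic windows:", clf.score(X_te, y_te))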
Zhi-Kai Zhang, Michael Cheng Yi Cho, Shiuhpyng Shieh; “Emerging Security Threats and Countermeasures in IoT;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 1-6. Doi: 10.1145/2714576.2737091
Abstract: IoT (Internet of Things) diversifies the future Internet and has drawn much attention. As more and more gadgets (i.e., Things) are connected to the Internet, the huge amount of data exchanged has reached an unprecedented level. As sensitive and private information is exchanged between things, privacy becomes a major concern. Among many important issues, scalability, transparency, and reliability are considered as new challenges that differentiate IoT from the conventional Internet. In this paper, we enumerate the IoT communication scenarios and investigate the threats to the large-scale, unreliable, pervasive computing environment. To cope with these new challenges, the conventional security architecture will be revisited. In particular, various authentication schemes will be evaluated to ensure the confidentiality and integrity of the exchanged data.
Keywords: authentication, communication, iot, privacy, security (ID#: 15-5579)
URL: http://doi.acm.org/10.1145/2714576.2737091
Gokay Saldamli, Richard Chow, Hongxia Jin; “Albatross: A Privacy-Preserving Location Sharing System;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 1-6. Doi: 10.1145/2714576.2714640
Abstract: We describe an architecture and a trial implementation of a privacy-preserving location sharing system called Albatross. The system protects location information from the service provider and yet enables fine-grained location sharing. One main feature of the system is to protect an individual's social network structure. The pattern of location sharing preferences towards contacts can reveal this structure without any knowledge of the locations themselves. Albatross protects location sharing preferences through protocol unification and masking. Albatross has been implemented as a standalone solution, but the technology can also be integrated into location-based services to enhance privacy.
Keywords: location privacy, privacy, private location sharing (ID#: 15-5580)
URL: http://doi.acm.org/10.1145/2714576.2714640
Wei-Yen Day, Ninghui Li; “Differentially Private Publishing of High-dimensional Data Using Sensitivity Control;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 451-462. Doi: 10.1145/2714576.2714621
Abstract: In this paper, we present DPSense, an approach to publishing statistical information from datasets under differential privacy via sensitivity control. More specifically, we consider the problem of publishing column counts for high-dimensional datasets, such as query logs or the Netflix dataset. The key challenge is that, as the sensitivity is high, high-magnitude noise needs to be added to satisfy differential privacy. We explore how to effectively perform sensitivity control, i.e., limiting the contribution of each tuple in the dataset. We introduce a novel low-sensitivity quality function that enables one to effectively choose a contribution limit while satisfying differential privacy. Based on DPSense, we further propose an extension to correct the under-estimation bias, which we call DPSense-S. Experimental results show that our proposed approaches advance the state of the art for publishing noisy column counts and for finding the columns with the highest counts. Finally, we analyze and discuss the stability of DPSense and DPSense-S, which benefits from the high correlation between the quality function and error, as well as other insights into DPSense, DPSense-S, and existing approaches.
Keywords: differential privacy, high-dimensional data, private data publishing (ID#: 15-5581)
URL: http://doi.acm.org/10.1145/2714576.2714621
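The core idea of sensitivity control can be shown in a few lines: truncate each tuple's contribution to at most theta columns, which caps the L1 sensitivity of the count vector at theta, and then add Laplace noise calibrated to that cap. This is a bare-bones sketch of the general recipe, not DPSense's quality-function-driven choice of the contribution limit.

    import numpy as np

    def private_column_counts(rows, num_cols, theta, epsilon, seed=0):
        """rows: one set of column indices per tuple. Truncating each tuple to
        at most `theta` columns bounds the L1 sensitivity at theta, so Laplace
        noise with scale theta/epsilon gives epsilon-differential privacy."""
        rng = np.random.default_rng(seed)
        counts = np.zeros(num_cols)
        for row in rows:
            for c in sorted(row)[:theta]:   # contribution limiting (sensitivity control)
                counts[c] += 1
        return counts + rng.laplace(scale=theta / epsilon, size=num_cols)

    rows = [{0, 1, 2, 5}, {1, 2}, {2, 3, 4, 5, 6}, {2}]
    print(private_column_counts(rows, num_cols=8, theta=3, epsilon=1.0))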
Katerina Doka, Mingqiang Xue, Dimitrios Tsoumakos, Panagiotis Karras; “k-Anonymization by Freeform Generalization;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 451-462. Doi: 10.1145/2714576.2714590
Abstract: Syntactic data anonymization strives to (i) ensure that an adversary cannot identify an individual's record from published attributes with high probability, and (ii) provide high data utility. These mutually conflicting goals can be expressed as an optimization problem with privacy as the constraint and utility as the objective function. Conventional research using the k-anonymity model has resorted to publishing data in homogeneous generalized groups. A recently proposed alternative does not create such cliques; instead, it recasts data values in a heterogeneous manner, aiming for higher utility. Nevertheless, such works never defined the problem in the most general terms; thus, the utility gains they achieve are limited. In this paper, we propose a methodology that achieves the full potential of heterogeneity and gains higher utility while providing the same privacy guarantee. We formulate the problem of maximal-utility k-anonymization by freeform generalization as a network flow problem. We develop an optimal solution therefor using Mixed Integer Programming. Given the non-scalability of this solution, we develop an O(kn²) greedy algorithm that has no time-complexity disadvantage vis-à-vis previous approaches, an O(kn² log n) enhanced version thereof, and an O(kn³) adaptation of the Hungarian algorithm; these algorithms build a set of k perfect matchings from original to anonymized data, a novel approach to the problem. Moreover, our techniques can resist adversaries who may know the employed algorithms. Our experiments with real-world data verify that our schemes achieve near-optimal utility (with gains of up to 41%), while they can exploit parallelism and data partitioning, gaining an efficiency advantage over simpler methods.
Keywords: anonymization, freeform generalization, privacy (ID#: 15-5582)
URL: http://doi.acm.org/10.1145/2714576.2714590
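The paper's central construction is a set of k perfect matchings from original to anonymized records, found by assignment algorithms such as the Hungarian method. A toy version using SciPy's assignment solver is sketched below; the Euclidean cost matrix and the edge-retirement trick are illustrative simplifications of the paper's generalization-cost machinery.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    FORBIDDEN = 1e9  # large finite cost standing in for a forbidden edge

    def perfect_matchings(cost, k):
        """Build k edge-disjoint perfect matchings by repeatedly solving a
        minimum-cost assignment problem and retiring the used edges."""
        cost = cost.astype(float).copy()
        np.fill_diagonal(cost, FORBIDDEN)   # a record must not match itself
        matchings = []
        for _ in range(k):
            rows, cols = linear_sum_assignment(cost)
            matchings.append(dict(zip(rows, cols)))
            cost[rows, cols] = FORBIDDEN    # keep the matchings edge-disjoint
        return matchings

    rng = np.random.default_rng(0)
    pts = rng.random((6, 2))                # toy records as points in the plane
    cost = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
    for m in perfect_matchings(cost, k=2):
        print(m)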
Anirban Basu, Juan Camilo Corena, Jaideep Vaidya, Jon Crowcroft, Shinsaku Kiyomoto, Yung Shin Van Der Sype, Yutaka Miyake; “Practical Private One-way Anonymous Message Routing;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Page 665. Doi: 10.1145/2714576.2714641
Abstract: Opinions from people can either be biased or reflect low participation due to legitimate concerns about privacy and anonymity. To alleviate those concerns, the identity of a message sender should be disassociated from the message while the contents of the actual message should be hidden from any relaying nodes. We propose a novel message routing scheme based on probabilistic forwarding that guarantees message privacy and sender anonymity through additively homomorphic public-key encryption. Our scheme is applicable to anonymous surveys and microblogging.
Keywords: anonymity, privacy, routing (ID#: 15-5583)
URL: http://doi.acm.org/10.1145/2714576.2714641
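The probabilistic-forwarding idea can be simulated in a few lines: each relay flips a biased coin and either hands the (still-encrypted) message to a random peer or finally submits it, so the submitting node is unlinkable to the original sender. The sketch below models only the routing decision; the additively homomorphic encryption layer the paper uses for message privacy is omitted.

    import random

    def route(sender, nodes, p_forward=0.7, seed=0):
        """Return (submitting node, hop count) for one probabilistically
        forwarded message; the sender never submits directly with certainty."""
        rng = random.Random(seed)
        current, hops = sender, 0
        while rng.random() < p_forward:
            current = rng.choice([n for n in nodes if n != current])
            hops += 1
        return current, hops

    nodes = ["node%d" % i for i in range(10)]
    print(route("node0", nodes))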
Gorka Irazoqui, Mehmet Sinan Inci, Thomas Eisenbarth, Berk Sunar; “Lucky 13 Strikes Back;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 85-96. Doi: 10.1145/2714576.2714625
Abstract: In this work we show how the Lucky 13 attack can be resurrected in the cloud by gaining access to a virtual machine co-located with the target. Our version of the attack exploits distinguishable cache access times, enabled by VM deduplication, to detect dummy function calls that only happen in the case of an incorrectly CBC-padded TLS packet. Thereby, we obtain a new covert channel, not considered in the original paper, that enables the Lucky 13 attack. In fact, the new side channel is significantly more accurate, thus yielding a much more effective attack. We briefly survey prominent cryptographic libraries for this vulnerability. The attack currently succeeds in compromising PolarSSL, GnuTLS and CyaSSL on deduplication-enabled platforms, while the Lucky 13 patches in OpenSSL, Mozilla NSS and MatrixSSL are immune to this vulnerability. We conclude that any program that follows secret-data-dependent execution flow is exploitable by side-channel attacks, as shown in (but not limited to) our version of the Lucky 13 attack.
Keywords: cross-vm attacks, deduplication, lucky 13 attack, virtualization (ID#: 15-5584)
URL: http://doi.acm.org/10.1145/2714576.2714625
Ahmad-Reza Sadeghi, Lucas Davi, Per Larsen; “Securing Legacy Software against Real-World Code-Reuse Exploits: Utopia, Alchemy, or Possible Future?;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 55-61. Doi: 10.1145/2714576.2737090
Abstract: Exploitation of memory-corruption vulnerabilities in widely-used software has been a threat for over two decades and no end seems to be in sight. Since performance and backwards compatibility trump security concerns, popular programs such as web browsers, servers, and office suites still contain large amounts of untrusted legacy code written in error-prone languages such as C and C++. At the same time, modern exploits are evolving quickly and routinely incorporate sophisticated techniques such as code reuse and memory disclosure. As a result, they bypass all widely deployed countermeasures including data execution prevention (DEP) and code randomization such as address space layout randomization (ASLR). The good news is that the security community has recently introduced several promising prototype defenses that offer a more principled response to modern exploits. Even though these solutions have improved substantially over time, they are not perfect and weaknesses that allow bypasses are continually being discovered. Moreover, it remains to be seen whether these prototype defenses can be matured and integrated into operating systems, compilers, and other systems software. This paper provides a brief overview of current state-of-the-art exploitation and defense techniques against run-time exploits and elaborates on innovative research prototypes that may one day stem the tide of sophisticated exploits. We also provide a brief analysis and categorization of existing defensive techniques and ongoing work in the areas of code randomization and control-flow integrity, and cover both hardware and software-based solutions.
Keywords: control-flow integrity, fine-grained randomization, software exploitation (ID#: 15-5585)
URL: http://doi.acm.org/10.1145/2714576.2737090
Hua Deng, Qianhong Wu, Bo Qin, Willy Susilo, Joseph Liu, Wenchang Shi; “Asymmetric Cross-cryptosystem Re-encryption Applicable to Efficient and Secure Mobile Access to Outsourced Data;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 393-404. Doi: 10.1145/2714576.2714632
Abstract: With the increasing development of pervasive computing and wireless broadband communication, more mobile devices are used to access sensitive data stored on remote servers. In such applications, a practical issue emerges: how to exploit the abundant resources of a server so that file owners can enforce fine-grained access control over the remotely stored files, while enabling resource-limited mobile devices to easily access the protected data, especially if the storage server maintained by a third party is untrusted. This challenge mainly arises from the asymmetric capacity among the participants, i.e., the capacity-limited mobile devices and the resource-abundant server (and file owners equipped with fixed computers). To meet the security requirements in mobile access to sensitive data, we propose a new encryption paradigm, referred to as asymmetric cross-cryptosystem re-encryption (ACCRE), by leveraging the asymmetric capacity of the participants. In ACCRE, relatively lightweight identity-based encryption (IBE) is deployed in mobile devices, while resource-consuming but versatile identity-based broadcast encryption (IBBE) is deployed in servers and the fixed computers of the file owners. The core of ACCRE is a novel ciphertext conversion mechanism that allows an authorized proxy to convert a complicated IBBE ciphertext into a simple IBE ciphertext affordable to mobile devices, without leaking any sensitive information to the proxy. Following this paradigm, we propose an efficient ACCRE scheme with its security formally reduced to the security of the underlying IBE and IBBE schemes. Thorough theoretical analyses and extensive experiments confirm that the scheme imposes a very small cost on mobile devices to access encrypted data and is practical for securing mobile computing applications.
Keywords: data security, identity-based broadcast encryption, identity-based encryption, proxy re-encryption (ID#: 15-5586)
URL: http://doi.acm.org/10.1145/2714576.2714632
Fengwei Zhang, Kevin Leach, Haining Wang, Angelos Stavrou; “TrustLogin: Securing Password-Login on Commodity Operating Systems;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 333-344. Doi: 10.1145/2714576.2714614
Abstract: With the increasing prevalence of Web 2.0 and cloud computing, password-based logins play an increasingly important role on user-end systems. We use passwords to authenticate ourselves to countless applications and services. However, login credentials can be easily stolen by attackers. In this paper, we present a framework, TrustLogin, to secure password-based logins on commodity operating systems. TrustLogin leverages System Management Mode to protect login credentials from malware even when the OS is compromised. TrustLogin does not modify any system software on either the client or the server and is transparent to users, applications, and servers. We conduct two case studies of the framework on legacy and secure applications, and the experimental results demonstrate that TrustLogin is able to protect login credentials from real-world keyloggers on Windows and Linux platforms. TrustLogin is robust against spoofing attacks. Moreover, the experimental results also show that TrustLogin introduces a low overhead with the tested applications.
Keywords: keyloggers, login password, system management mode (ID#: 15-5587)
URL: http://doi.acm.org/10.1145/2714576.2714614
Haoyu Ma, Kangjie Lu, Xinjie Ma, Haining Zhang, Chunfu Jia, Debin Gao; “Software Watermarking using Return-Oriented Programming;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 369-380. Doi: 10.1145/2714576.2714582
Abstract: We propose a novel dynamic software watermarking design based on Return-Oriented Programming (ROP). Our design formats watermarking code into well-crafted data arrangements that look like normal data but could be triggered to execute. Once triggered, the pre-constructed ROP execution will recover the hidden watermark message. The proposed ROP-based watermarking technique is more stealthy and resilient over existing techniques since the watermarking code is allocated dynamically into data region and therefore out of reach of attacks based on code analysis. Evaluations show that our design not only achieves satisfying stealth and resilience, but also causes significantly lower overhead to the watermarked program.
Keywords: code obfuscation, return-oriented programming, reverse engineering, software watermarking (ID#: 15-5588)
URL: http://doi.acm.org/10.1145/2714576.2714582
Chung Hwan Kim, Sungjin Park, Junghwan Rhee, Jong-Jin Won, Taisook Han, Dongyan Xu; “CAFE: A Virtualization-Based Approach to Protecting Sensitive Cloud Application Logic Confidentiality;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 651-656. Doi: 10.1145/2714576.2714594
Abstract: Cloud application marketplaces of modern cloud infrastructures offer a new software deployment model, integrated with the cloud environment in its configuration and policies. However, similar to traditional software distribution, which has long suffered from software piracy and reverse engineering, cloud marketplaces face the same challenges that can deter the success of the evolving ecosystem of cloud software. We present a novel system named CAFE for cloud infrastructures where sensitive software logic can be executed with high secrecy, protected from any piracy or reverse engineering attempts in a virtual machine even when its operating system kernel is compromised. The key mechanism is an end-to-end framework for the execution of applications, which consists of the secure encryption and distribution of confidential application binary files, and runtime techniques to load, decrypt, and protect the program logic by isolating it from tenant virtual machines based on hypervisor-level techniques. We evaluate applications in several software categories commonly offered in cloud marketplaces, showing that strong confidential execution can be provided with only marginal changes (around 100-220 lines of code) and minimal performance overhead.
Keywords: cloud computing marketplace, code confidentiality protection, secure execution environment (ID#: 15-5589)
URL: http://doi.acm.org/10.1145/2714576.2714594
Enrico Bacis, Simone Mutti, Stefano Paraboschi; “AppPolicyModules: Mandatory Access Control for Third-Party Apps;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 309-320. Doi: 10.1145/2714576.2714626
Abstract: Android has recently introduced support for Mandatory Access Control, which extends the previous security services relying on the Android Permission Framework and on kernel-level Discretionary Access Control. This extension has been obtained with the use of SELinux and its adaptation to Android (SEAndroid). Currently, the use of the MAC model is limited to the protection of system resources. All the apps that are installed by users fall into a single undifferentiated domain, untrusted_app. We propose an extension of the architecture that permits associating with each app a dedicated MAC policy, contained in a dedicated appPolicyModule, in order to protect app resources even from malware with root privileges. A crucial difference with respect to the support for policy modules already available in some SELinux implementations is the need to constrain the policies in order to guarantee that an app policy is not able to manipulate the system policy. We present the security requirements that have to be satisfied by the support for modules and show that our solution satisfies these requirements. The support for appPolicyModules can also be the basis for the automatic generation of policies, with a stricter enforcement of Android permissions. A prototype has been implemented, and experimental results show a minimal performance overhead for app installation and runtime.
Keywords: administrative policies, android, app security, mandatory access control, policy modularity, selinux (ID#: 15-5590)
URL: http://doi.acm.org/10.1145/2714576.2714626
Jongho Won, Seung-Hyun Seo, Elisa Bertino; “A Secure Communication Protocol for Drones and Smart Objects;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 249-260. Doi: 10.1145/2714576.2714616
Abstract: In many envisioned drone-based applications, drones will communicate with many different smart objects, such as sensors and embedded devices. Securing such communications requires an effective and efficient encryption key establishment protocol. However, the design of such a protocol must take into account the constrained resources of smart objects and the mobility of drones. In this paper, a secure communication protocol between drones and smart objects is presented. To support the required security functions, such as authenticated key agreement, non-repudiation, and user revocation, we propose an efficient Certificateless Signcryption Tag Key Encapsulation Mechanism (eCLSC-TKEM). eCLSC-TKEM reduces the time required to establish a shared key between a drone and a smart object by minimizing the computational overhead at the smart object. Also, our protocol improves the drone's efficiency by utilizing dual channels, which allows many smart objects to concurrently execute eCLSC-TKEM. We evaluate our protocol on commercially available devices, namely AR.Drone2.0 and TelosB, using a parking management testbed. Our experimental results show that our protocol is much more efficient than other protocols.
Keywords: certificateless signcryption, drone communications (ID#: 15-5591)
URL: http://doi.acm.org/10.1145/2714576.2714616
Heqing Huang, Kai Chen, Chuangang Ren, Peng Liu, Sencun Zhu, Dinghao Wu; “Towards Discovering and Understanding Unexpected Hazards in Tailoring Antivirus Software for Android;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 7-18. Doi: 10.1145/2714576.2714589
Abstract: In its latest comparison of Android Virus Detectors (AVDs), the independent lab AV-TEST reports that they have around a 95% malware detection rate. This only indicates that current AVDs on Android have good malware signature databases. When the AVDs are deployed on the fast-evolving mobile system, their effectiveness should also be measured on their runtime behavior. Therefore, we perform a comprehensive analysis on the design of the top 30 AVDs tailored for Android. Our new understanding of the AVDs' design leads us to discover the hazards in adopting AVD solutions for Android, including hazards in malware scan (malScan) mechanisms and the engine update (engineUpdate). First, the malScan mechanisms of all the analyzed AVDs lack comprehensive and continuous scan coverage. To measure the seriousness of the identified hazards, we implement targeted evasions at certain times (e.g., the end of the scan) and locations (certain folders) and find that the evasions can work even under the assumption that the AVDs are equipped with "complete" virus definition files. Second, we discover that, during the engineUpdate, the Android system surprisingly nullifies all types of protections of the AVDs and leaves the system at high risk for a period of time. We confirmed the presence of this vulnerable program logic in all versions of Google Android source code and other vendor-customized system images. Since AVDs have about 650-1070 million downloads on the Google Play store, we immediately reported these hazards to AVD vendors across 16 countries. Google also confirmed our discovered hazard in the engineUpdate procedure, so feature enhancements might be included in later versions. Our research sheds light on the importance of adopting secure and preventive design strategies for AVDs and other mission-critical apps on fast-evolving mobile systems.
Keywords: anti-malware, malware, mobile, vulnerability measurement (ID#: 15-5592)
URL: http://doi.acm.org/10.1145/2714576.2714589
Nitin Chiluka, Nazareno Andrade, Johan Pouwelse, Henk Sips; “Social Networks Meet Distributed Systems: Towards a Robust Sybil Defense under Churn;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 507-518. Doi: 10.1145/2714576.2714606
Abstract: This paper examines the impact of heavy churn on the robustness of decentralized social network-based Sybil defense (SNSD) schemes. Our analysis reveals that (i) heavy churn disintegrates the social overlay network that is fundamental to these schemes into multiple disconnected components, resulting in poor network connectivity, and (ii) a naive solution that adds links from each node to all its 2-hop neighbors improves network connectivity but comes at a significant cost of poor attack resilience of these schemes. We propose a new design point in the trade-off between network connectivity and attack resilience of SNSD schemes, where each node adds links to only a selective few of all its 2-hop neighbors based on a minimum expansion contribution (MinEC) heuristic. Extensive evaluation through simulations shows that our approach fares as good as the naive 2-hop solution in terms of network connectivity, while making little compromise on the attack resilience. Moreover, our approach preserves the fast-mixing property that is fundamental to many SNSD schemes even at high levels of churn. This result suggests that existing and potential future SNSD schemes relying on this property can incorporate our approach into their designs with minimal changes.
Keywords: churn, social overlay network, sybil attack (ID#: 15-5593)
URL: http://doi.acm.org/10.1145/2714576.2714606
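A simplified version of the selective 2-hop linking is sketched below: instead of linking to every 2-hop neighbor (the naive repair that weakens attack resilience), a node greedily links to the few candidates that add the most new coverage. The marginal-coverage criterion here is an illustrative stand-in for the paper's minimum expansion contribution (MinEC) heuristic, and the random graph stands in for a real social overlay.

    import networkx as nx

    def select_2hop_links(G, node, budget):
        """Pick up to `budget` 2-hop neighbors that contribute the most
        not-yet-covered contacts, instead of linking to all of them."""
        one_hop = set(G[node])
        two_hop = {v for u in one_hop for v in G[u]} - one_hop - {node}
        covered, chosen = set(one_hop), []
        for _ in range(min(budget, len(two_hop))):
            best = max(two_hop - set(chosen),
                       key=lambda v: len(set(G[v]) - covered), default=None)
            if best is None:
                break
            chosen.append(best)
            covered |= set(G[best])
        return chosen

    G = nx.erdos_renyi_graph(100, 0.05, seed=42)   # stand-in social overlay
    print(select_2hop_links(G, node=0, budget=3))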
Marco Caselli, Emmanuele Zambon, Frank Kargl; “Sequence-aware Intrusion Detection in Industrial Control Systems;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 13-24. Doi: 10.1145/2732198.2732200
Abstract: Nowadays, several threats endanger cyber-physical systems. Among these systems, industrial control systems (ICS) operating on critical infrastructures have proven to be an attractive target for attackers. The case of Stuxnet not only showed that ICSs are vulnerable to cyber-attacks, but also that some of these attacks rely on understanding the processes behind the employed systems and using such knowledge to maximize the damage. This concept is commonly known as a "semantic attack". Our paper discusses a specific type of semantic attack involving "sequences of events". Common network intrusion detection systems (NIDS) generally search for single, unusual or "not permitted" operations. In our case, rather than a single malicious event, we show how a specific series of "permitted" operations can elude standard intrusion detection systems and still damage an infrastructure. Moreover, we present a possible approach to the development of a sequence-aware intrusion detection system (S-IDS). We propose an S-IDS reference architecture and discuss all the steps through its implementation. Finally, we test the S-IDS on real ICS traffic samples captured from a water treatment and purification facility.
Keywords: cyber-physical system, intrusion detection system, semantic attack, sequence attack (ID#: 15-5594)
URL: http://doi.acm.org/10.1145/2732198.2732200
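The essence of a sequence-aware detector is that it whitelists orderings, not just operations. A minimal sketch under that assumption: learn the n-grams of permitted operations from a clean trace, then flag any n-gram never seen in training, even when every operation in it is individually permitted. The event names are invented, and the paper's S-IDS architecture is considerably richer than this.

    class SequenceIDS:
        """Flag orderings of individually permitted operations that were never
        observed in clean traffic."""
        def __init__(self, n=3):
            self.n = n
            self.known = set()

        def _ngrams(self, trace):
            return zip(*(trace[i:] for i in range(self.n)))

        def train(self, trace):
            self.known.update(self._ngrams(trace))

        def alerts(self, trace):
            return [g for g in self._ngrams(trace) if g not in self.known]

    ids = SequenceIDS(n=3)
    ids.train(["open_valve", "read_level", "close_valve"] * 50)
    # Every command below is individually permitted, but the ordering is not:
    print(ids.alerts(["close_valve", "open_valve", "open_valve", "read_level"]))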
Dinesha Ranathunga, Matthew Roughan, Phil Kernick, Nick Falkner, Hung Nguyen; “Identifying the Missing Aspects of the ANSI/ISA Best Practices for Security Policy;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 13-24. Doi: 10.1145/2732198.2732201
Abstract: Firewall configuration is a critical activity for the Supervisory Control and Data Acquisition (SCADA) networks that control power stations, water distribution, factory automation, etc. The American National Standards Institute (ANSI) provides specifications for the best practices in developing high-level security policy [1]. However, firewalls continue to be configured manually, a common but error prone process. Automation can make designing firewall configurations more reliable and their deployment increasingly cost-effective. ANSI best practices lack specification in several key aspects needed to allow a firewall to be automatically configured. In this paper we discuss the missing aspects of the existing best practice specifications and propose solutions. We then apply our corrected best practice specifications to real SCADA firewall configurations and evaluate their usefulness for high-level automated specification of firewalls.
Keywords: firewall auto-configuration, scada network security, security policy, zone-conduit model (ID#: 15-5595)
URL: http://doi.acm.org/10.1145/2732198.2732201
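One way to make the zone-conduit portion of such best practices automatable is to treat the policy as data and compile it: every conduit whitelists the services allowed between a pair of zones, and everything else is denied. The sketch below shows only that compilation step, with invented zone names and services; it is not the specification language proposed in the paper.

    # Hypothetical zone-conduit policy: each conduit lists the services
    # permitted to flow between a pair of zones.
    policy = {
        ("corporate", "scada"): [("tcp", 502)],    # e.g., Modbus/TCP only
        ("scada", "historian"): [("tcp", 1433)],   # e.g., database replication
    }

    def generate_rules(policy):
        """Compile a zone-conduit policy into flat allow rules plus an explicit
        default deny, the invariant that manual edits tend to break."""
        rules = []
        for (src, dst), services in policy.items():
            for proto, port in services:
                rules.append("allow %s/%d from zone:%s to zone:%s" % (proto, port, src, dst))
        rules.append("deny all")
        return rules

    for rule in generate_rules(policy):
        print(rule)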
Ning Zhang, Kun Sun, Wenjing Lou, Y. Thomas Hou, Sushil Jajodia; “Now You See Me: Hide and Seek in Physical Address Space;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 321-331. Doi: 10.1145/2714576.2714600
Abstract: With the growing complexity of computing systems, memory-based forensic techniques are becoming instrumental in digital investigations. Digital forensic examiners can unravel what happened on a system by acquiring and inspecting in-memory data. Meanwhile, attackers have developed numerous anti-forensic mechanisms to defeat existing memory forensic techniques by manipulating system software such as the OS kernel. To counter anti-forensic techniques, some recent research suggests that the memory acquisition process can be trusted if the acquisition module has not been tampered with and all operations are performed without relying on any untrusted software, including the operating system. However, in this paper, we show that it is possible for malware to bypass current state-of-the-art trusted memory acquisition modules by manipulating the physical address space layout, which is shared between physical memory and I/O devices on x86 platforms. This fundamental design of the x86 platform enables an attacker to build an OS-agnostic anti-forensic system. Based on this finding, we propose Hidden in I/O Space (HIveS), which manipulates CPU registers to alter the physical address layout. The system uses a novel I/O Shadowing technique to lock a memory region named HIveS memory into I/O address space, so all operation requests to the HIveS memory will be redirected to the I/O bus instead of the memory controller. To access the HIveS memory, the attacker unlocks the memory by mapping it back into the memory address space. Two novel techniques, Blackbox Write and TLB Camouflage, are developed to further protect the unlocked HIveS memory against memory forensics while allowing attackers to access it. A HIveS prototype is built and tested against a set of memory acquisition tools for both Windows and Linux running on the x86 platform. Lastly, we propose potential countermeasures to detect and mitigate HIveS.
Keywords: digital forensics, memory acquisition, rootkits, system security (ID#: 15-5596)
URL: http://doi.acm.org/10.1145/2714576.2714600
Tsz Hon Yuen, Cong Zhang, Sherman S.M. Chow, Siu Ming Yiu; “Related Randomness Attacks for Public Key Cryptosystems;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 215-223. Doi: 10.1145/2714576.2714622
Abstract: We initiate the study of related randomness attacks in the face of a number of practical attacks in public key cryptography, ranging from active attacks like fault injection to passive attacks like software (mis)implementations in choosing random numbers. Our new definitions cover the well-known related-key attacks (RKA), where secret keys are related, and a number of new attacks, namely related encryption randomness attacks, related signing randomness attacks, and related public key attacks. We provide generic constructions for security against these attacks, which are efficiently built upon normal encryption and signature schemes, leveraging RKA-secure pseudorandom functions and generators.
Keywords: identity-based encryption, public key encryption, related-key attack, related-randomness attack, signatures (ID#: 15-5597)
URL: http://doi.acm.org/10.1145/2714576.2714622
David Nuñez, Isaac Agudo, Javier Lopez; “NTRUReEncrypt: An Efficient Proxy Re-Encryption Scheme Based on NTRU;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 179-189. Doi: 10.1145/2714576.2714585
Abstract: The use of alternative foundations for constructing more secure and efficient cryptographic schemes is a topic worth exploring. In the case of proxy re-encryption, the vast majority of schemes are based on number theoretic problems such as the discrete logarithm. In this paper we present NTRUReEncrypt, a new bidirectional and multihop proxy re-encryption scheme based on NTRU, a widely known lattice-based cryptosystem. We provide two versions of our scheme: the first one is based on the conventional NTRU encryption scheme and, although it lacks a security proof, remains as efficient as its predecessor; the second one is based on a variant of NTRU proposed by Stehlé and Steinfeld, which is proven CPA-secure under the hardness of the Ring-LWE problem. To the best of our knowledge, our proposals are the first proxy re-encryption schemes to be based on the NTRU primitive. In addition, we provide experimental results to show the efficiency of our proposal, as well as a comparison with previous proxy re-encryption schemes, which confirms that our first scheme outperforms the rest by an order of magnitude.
Keywords: lattice-based cryptography, ntru, proxy re-encryption (ID#: 15-5598)
URL: http://doi.acm.org/10.1145/2714576.2714585
Min Zheng, Hui Xue, Yulong Zhang, Tao Wei, John C.S. Lui; “Enpublic Apps: Security Threats Using iOS Enterprise and Developer Certificates;” ASIA CCS '15 Proceedings of the 10th ACM Symposium on Information, Computer and Communications Security, April 2015, Pages 463-474. Doi: 10.1145/2714576.2714593
Abstract: Compared with Android, the conventional wisdom is that iOS is more secure. However, both jailbroken and non-jailbroken iOS devices have a number of vulnerabilities. On iOS, apps need to interact with the underlying system using Application Programming Interfaces (APIs). Some of these APIs remain undocumented, and Apple forbids apps in the App Store from using them. These APIs, also known as "private APIs", provide powerful features to developers, yet they may have serious security consequences if misused. Furthermore, apps which use private APIs can bypass the App Store and use Apple's Enterprise/Developer Certificates for distribution. This poses a significant threat to the iOS ecosystem. So far, there has been no formal study of these apps and how private APIs are encapsulated in them. We call iOS apps that are distributed to the public using enterprise certificates "enpublic" apps. In this paper, we present the design and implementation of iAnalytics, which can automatically analyze "enpublic" apps' private API usages and vulnerabilities. Using iAnalytics, we crawled and analyzed 1,408 enpublic iOS apps. We discovered that 844 (60%) of the 1,408 apps use private APIs, 14 (1%) apps contain URL scheme vulnerabilities, and 901 (64%) enpublic apps transport sensitive information through unencrypted channels or store the information in plaintext on the phone. In addition, we summarize 25 private APIs which are crucial and security-sensitive on iOS 6/7/8, and we have filed one CVE (Common Vulnerabilities and Exposures) for iOS devices.
Keywords: enterprise certificate, ios, private apis (ID#: 15-5600)
URL: http://doi.acm.org/10.1145/2714576.2714593
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conferences: Workshop on IoT Privacy, Trust, and Security, 2015, Singapore |
The 2015 ACM Workshop on IoT Privacy, Trust, and Security was held 14-17 April 2015. The conference organizers say that, “at a basic level, the Internet of Things (IoT) refers simply to networked devices, but the IoT vision consists of a complex ecosystem that ranges from cloud backend services and big-data analytics to home, public, industrial, and wearable sensor devices and appliances. Architectures for these systems are in the formative stages, and IoTPTS 2015 gives researchers and practitioners a unique opportunity to ensure privacy, trust, and security are designed into these systems from the beginning.” For the inaugural year of the IoTPTS Workshop, there were 13 submissions worldwide from 12 countries and 4 continents. The final program contained 5 papers (representing an acceptance rate of 38%) and a keynote.
Ihor Vasyltsov, Seunghwan Lee; “Entropy Extraction from Bio-Signals in Healthcare IoT;” IoTPTS '15 Proceedings of the 1st ACM Workshop on IoT Privacy, Trust, and Security, April 2015, Pages 11-17. Doi: 10.1145/2732209.2732213
Abstract: In this paper, a theoretical approach to estimating the amount of entropy that can be extracted from heart-rate-based biomedical signals is considered. Mathematical models for estimating the values of min-entropy, Shannon entropy, and collision entropy have been created. This provides the theoretical background and estimates for the upper bound of entropy that can be extracted from the biomedical inter-pulse interval signal for usage in healthcare and biomedical applications. These results will be useful when estimating the security of healthcare systems and during the certification of devices.
Keywords: ecg, entropy, heart rate, hrv, inter-pulse interval, mathematical model, ppg (ID#: 15-5527)
URL: http://doi.acm.org/10.1145/2732209.2732213
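The three quantities the authors model can be estimated directly from a sample of quantized inter-pulse intervals, and they always satisfy min-entropy <= collision entropy <= Shannon entropy. The sketch below computes all three on synthetic data; the quantization step and the Gaussian stand-in for real IPIs are assumptions for illustration only.

    import numpy as np
    from collections import Counter

    def entropy_estimates(ipis_ms, bin_ms=4):
        """Shannon, min-, and collision entropy (bits per sample) of quantized
        inter-pulse intervals (IPIs), given in milliseconds."""
        symbols = [int(x // bin_ms) for x in ipis_ms]
        counts = Counter(symbols)
        p = np.array(list(counts.values())) / len(symbols)
        shannon = -np.sum(p * np.log2(p))
        h_min = -np.log2(p.max())
        collision = -np.log2(np.sum(p ** 2))
        return shannon, h_min, collision

    rng = np.random.default_rng(0)
    ipis = rng.normal(800, 40, size=5000)   # synthetic IPIs centered on 800 ms
    print(entropy_estimates(ipis))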
Tobias Rauter, Andrea Höller, Nermin Kajtazovic, Christian Kreiner; “Privilege-Based Remote Attestation: Towards Integrity Assurance for Lightweight Clients;” IoTPTS '15 Proceedings of the 1st ACM Workshop on IoT Privacy, Trust, and Security, April 2015, Pages 3-9. doi: 10.1145/2732209.2732211
Abstract: Remote attestation is used to assure the integrity of a trusted platform (prover) to a remote party (challenger). Traditionally, plain binary attestation (i.e., attesting the integrity of software by measuring their binaries) is the method of choice. Especially in the resource-constrained embedded domain with the ever-growing number of integrated services per platform, this approach is not feasible since the challenger has to know all possible 'good' configurations of the prover. In this work, a new approach based on software privileges is presented. It reduces the number of possible configurations the challenger has to know by ignoring all services on the prover that are not used by the challenger. For the ignored services, the challenger ensures that they do not have the privileges to manipulate the used services. To achieve this, the prover measures the privileges of its software modules by parsing their binaries for particular system API calls. The results show a significant reduction in need-to-know configurations. The implementation of the central system parts shows its practicability, especially if combined with a fine-grained system API.
Keywords: embedded systems, privilege classification, remote attestation, trusted computing (ID#: 15-5528)
URL: http://doi.acm.org/10.1145/2732209.2732211
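The measurement step, scanning a module's binary for particular system API calls and mapping them to privilege classes, can be caricatured in a few lines. The privilege map and the raw byte scan below are deliberately naive stand-ins (a real prover would parse the binary's imports properly), but they show how a challenger can clear an ignored service that simply lacks dangerous privileges.

    # Hypothetical mapping from system API names to privilege classes.
    PRIVILEGE_MAP = {
        b"socket": "network", b"connect": "network",
        b"fopen": "filesystem", b"unlink": "filesystem",
        b"execve": "process_control",
    }

    def measure_privileges(binary_path):
        """Prover side: report the privilege classes a module could exercise,
        based on which known API names appear in its binary."""
        data = open(binary_path, "rb").read()
        return {priv for name, priv in PRIVILEGE_MAP.items() if name in data}

    def attest(binary_path, allowed):
        """Challenger side: an ignored service passes if it holds no privilege
        that could manipulate the services the challenger actually uses."""
        used = measure_privileges(binary_path)
        return used <= allowed, used

    print(attest("/bin/ls", allowed={"filesystem"}))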
Feng Hao, Xun Yi, Liqun Chen, Siamak Fayyaz Shahandashti; “The Fairy-Ring Dance: Password Authenticated Key Exchange in a Group;” IoTPTS '15 Proceedings of the 1st ACM Workshop on IoT Privacy, Trust, and Security, April 2015, Pages 22-37. doi: 10.1145/2732209.2732212
Abstract: In this paper, we study Password Authenticated Key Exchange (PAKE) in a group. First, we present a generic "fairy-ring dance" construction that transforms any secure two-party PAKE scheme to a group PAKE protocol while preserving the round efficiency in the optimal way. Based on this generic construction, we present two concrete instantiations based on using SPEKE and J-PAKE as the underlying PAKE primitives respectively. The first protocol, called SPEKE+, accomplishes authenticated key exchange in a group with explicit key confirmation in just two rounds. This is more round-efficient than any existing group PAKE protocols in the literature. The second protocol, called J-PAKE+, requires one more round than SPEKE+, but is computationally faster. Finally, we present full implementations of SPEKE+ and J-PAKE+ with detailed performance measurements. Our experiments suggest that both protocols are feasible for practical applications in which the group size may vary from three to several dozen. This makes them useful, as we believe, for a wide range of applications - e.g., to bootstrap secure communication among a group of smart devices in the Internet of Things (IoT).
Keywords: group key exchange, j-pake, pake, speke (ID#: 15-5529)
URL: http://doi.acm.org/10.1145/2732209.2732212
Pawel Szalachowski, Adrian Perrig; “Lightweight Protection of Group Content Distribution;” IoTPTS '15 Proceedings of the 1st ACM Workshop on IoT Privacy, Trust, and Security, April 2015, Pages 35-42. doi: 10.1145/2732209.2732215
Abstract: Achieving security properties in distributed, hardware-limited, and unattended networks is a challenging task. This setting is challenging because an adversary can capture and physically compromise unattended nodes. In this setting, this paper presents one-way group communication protocols with strong security properties. In particular, how can messages be sent to a group of hardware-limited nodes with message secrecy and authenticity? We present several protocols and analyze them in terms of security, efficiency, and deployability. The resulting solutions are generic and can be useful in a variety of distributed systems.
Keywords: broadcast authentication, broadcast encryption, internet of things security, secure sensor networks (ID#: 15-5530)
URL: http://doi.acm.org/10.1145/2732209.2732215
Yong Ho Hwang; “IoT Security & Privacy: Threats and Challenges;” IoTPTS '15 Proceedings of the 1st ACM Workshop on IoT Privacy, Trust, and Security, April 2015, Page 1. doi: 10.1145/2732209.2732216
Abstract: The era of the Internet of Things (IoT) has already started and it will profoundly change our way of life. While IoT provides us many valuable benefits, IoT also exposes us to many different types of security threats in our daily life. Before the advent of IoT, most security threats were just related to information leakage and the loss of service. With IoT, security threats have become closely related to our non-virtual lives and they can directly influence physical security risk. The Internet of Things consists of various platforms and devices with different capabilities, and each system will need security solutions depending on its characteristics. There is a demand for security solutions that are able to support multi-profile platforms and provide equivalent security levels for various device interactions. In addition, user privacy will become more important in the IoT environment because a lot of personal information will be delivered and shared among connected things. Therefore, we need mechanisms to protect personal data and monitor their flow from things to the cloud. In this talk, we describe threats and concerns for security and privacy arising from IoT services, and introduce approaches to solve these security and privacy issues in the industrial field.
Keywords: data protection, internet of things, platform security, privacy protection (ID#: 15-5531)
URL: http://doi.acm.org/10.1145/2732209.2732216
Lihua Wang, Ryo Nojima, Shiho Moriai; “A Secure Automobile Information Sharing System;” IoTPTS '15 Proceedings of the 1st ACM Workshop on IoT Privacy, Trust, and Security, April 2015, Pages 19-26. doi: 10.1145/2732209.2732214
Abstract: Utilizing a previously described proxy re-encryption technique, we construct a secure storage system named PRINCESS (Proxy Re-encryption with INd-Cca security in an Encrypted file Storage System). With PRINCESS, files encrypted in accordance with their confidentiality levels can be shared among appointed users while remaining encrypted. Furthermore, we implement an automobile information-sharing system based on PRINCESS. With this system, location information obtained from a GPS and the vehicle data obtained via on-board diagnosis and Bluetooth can be shared flexibly and securely. By using this system, it is possible to share automobile information, such as the position and speed, and even the engine's rotational frequency, while ensuring user control and privacy. This system facilitates the potential for new services that require automobile information to be shared securely via cloud technology.
Keywords: automobile information sharing, cloud security, privacy, proxy re-encryption (ID#: 15-5532)
URL: http://doi.acm.org/10.1145/2732209.2732214
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conferences: EuroSec 15, Bordeaux, France |
The 2015 EuroSys conference was held April 17-24 at Bordeaux, France. This conference series brings together professionals from academia and industry and has a strong focus on systems research and development: operating systems, database systems, real-time systems, and middleware for networked, distributed, parallel, or embedded computing systems. EuroSys is a forum for discussing systems software research and development and its relation to hardware and applications. The conference web page is available at: http://eurosys2015.labri.fr/
Thanasis Petsas, Giorgos Tsirantonakis, Elias Athanasopoulos, Sotiris Ioannidis; “Two-Factor Authentication: Is The World Ready?: Quantifying 2FA Adoption;” EuroSec '15 Proceedings of the Eighth European Workshop on System Security, April 2015, Article No. 4. Doi: 10.1145/2751323.2751327
Abstract: As text-based passwords continue to be the dominant form of user identification today, services try to protect their customers by offering enhanced, and more secure, technologies for authentication. One of the most promising is two-factor authentication (2FA). 2FA raises the bar for the attacker significantly; however, it is still questionable whether the technology can be realistically adopted by the majority of Internet users. In this paper, we attempt a first study quantifying the adoption of 2FA in probably the largest existing provider, namely Google. To achieve this, we leverage the password-reminder process in a novel way to discover whether 2FA is enabled for a particular account, without annoying or affecting the account's owner. Our technique has many challenges to overcome, since it requires issuing thousands of password reminders en masse. In order to remain below the radar, and therefore avoid solving CAPTCHAs or having our hosts blocked, we leverage distributed systems, such as TOR and PlanetLab. After examining over 100,000 Google accounts, we conclude that 2FA has not yet been adopted by more than 6.4% of the users. Last but not least, as a side-effect of our technique, we are also able to exfiltrate private information, which could potentially be used for malicious purposes. Thus, in this paper we additionally present important findings for raising concerns about privacy risks in designing password reminders.
Keywords: adoption, authentication, password reminder, privacy leak, two-factor (ID#: 15-5570)
URL: http://doi.acm.org/10.1145/2751323.2751327
Davide Frey, Rachid Guerraoui, Anne-Marie Kermarrec, Antoine Rault; “Collaborative Filtering Under A Sybil Attack: Analysis Of A Privacy Threat;” EuroSec '15 Proceedings of the Eighth European Workshop on System Security, April 2015, Article No. 5. doi: 10.1145/2751323.2751328
Abstract: Recommenders have become a fundamental tool to navigate the huge amount of information available on the web. However, their ubiquitous presence comes with the risk of exposing sensitive user information. This paper explores this problem in the context of user-based collaborative filtering. We consider an active attacker equipped with externally available knowledge about the interests of users. The attacker creates fake identities based on this external knowledge and exploits the recommendations it receives to identify the items appreciated by a user. Our experiment on a real data trace shows that while the attack is effective, the inherent similarity between real users may be enough to protect at least part of their interests.
Keywords: collaborative filtering, privacy, recommender, sybil attack (ID#: 15-5571)
URL: http://doi.acm.org/10.1145/2751323.2751328
Hugo Gonzalez, Andi A. Kadir, Natalia Stakhanova, Abdullah J. Alzahrani, Ali A. Ghorbani; “Exploring Reverse Engineering Symptoms in Android Apps;” EuroSec '15 Proceedings of the Eighth European Workshop on System Security, April 2015, Article No. 7. Doi: 10.1145/2751323.2751330
Abstract: The appearance of the Android platform and its popularity has resulted in a sharp rise in the number of reported vulnerabilities and, consequently, in the number of mobile threats. Leveraging the openness of Android app markets and the lack of security testing, malware authors commonly plagiarize Android applications (e.g., through code reuse and repackaging), boosting the amount of malware on the markets and consequently the infection rate. In this study, we present AndroidSOO, a lightweight approach for the detection of repackaging symptoms in Android apps. We introduce and explore a novel and easily extractable attribute called String Offset Order. Extractable from the string identifier list in the .dex file, the method is able to pinpoint symptoms of reverse-engineered Android apps without the need for complex further analysis. We performed an extensive evaluation of the String Offset Order metric to assess its capabilities on datasets made available by three recent studies: the Android Malware Genome Project, DroidAnalytics, and Drebin. We also performed a large-scale study of over 5,000 Android applications extracted from the Google Play market and over 80,000 samples from the VirusTotal service.
Keywords: Android, malware, privacy (ID#: 15-5572)
URL: http://doi.acm.org/10.1145/2751323.2751330
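The String Offset Order attribute lends itself to a compact illustration. The sketch below is one plausible reading of the idea, not the authors' tool: it parses the string_ids table of a classes.dex file and checks whether the string-data offsets are in ascending order, since standard toolchain output keeps them sorted, and violations suggest the file was rebuilt after disassembly:

```python
# Hypothetical String Offset Order check over a .dex file's string_ids table.
import struct

def string_offsets_sorted(dex_path):
    with open(dex_path, "rb") as f:
        header = f.read(0x70)                                # fixed-size dex header
        size, off = struct.unpack_from("<II", header, 0x38)  # string_ids_size/off
        f.seek(off)
        offsets = struct.unpack("<%dI" % size, f.read(4 * size))
    return all(a <= b for a, b in zip(offsets, offsets[1:]))

# True  -> offsets ascending (typical of unmodified toolchain output)
# False -> reordered string table, a reverse-engineering symptom
print(string_offsets_sorted("classes.dex"))                  # path is a placeholder
```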
Jonathan Voris, Jill Jermyn, Nathaniel Boggs, Salvatore Stolfo; “Fox in the Trap: Thwarting Masqueraders via Automated Decoy Document Deployment;” EuroSec '15 Proceedings of the Eighth European Workshop on System Security, April 2015, Article No. 3. doi: 10.1145/2751323.2751326
Abstract: Organizations face a persistent challenge detecting malicious insiders as well as outside attackers who compromise legitimate credentials and then masquerade as insiders. No matter how good an organization's perimeter defenses are, eventually they will be compromised or betrayed from the inside. Monitored decoy documents (honey files with enticing names and content) are a promising approach to aid in the detection of malicious masqueraders and insiders. In this paper, we present a new technique for decoy document distribution that can be used to improve the scalability of insider detection. We develop a placement application that automates the deployment of decoy documents and we report on two user studies to evaluate its effectiveness. The first study indicates that our automated decoy distribution tool is capable of strategically placing decoy files in a way that offers comparable security to optimal manual deployment. In the second user study, we measure the frequency that normal users access decoy documents on their own systems and show that decoy files do not significantly interfere with normal user tasks.
Keywords: decoy, honey files, insider threat, masquerade detection (ID#: 15-5573)
URL: http://doi.acm.org/10.1145/2751323.2751326
Stephanos Matsumoto, Pawel Szalachowski, Adrian Perrig; “Deployment Challenges In Log-Based PKI Enhancements;” EuroSec '15 Proceedings of the Eighth European Workshop on System Security, April 2015, Article No. 1. doi: 10.1145/2751323.2751324
Abstract: Log-based PKI enhancements propose to improve the current TLS PKI by creating public logs to monitor CA operations, thus providing transparency and accountability. In this paper we take the first steps in studying the deployment process of log-based PKI enhancements in two ways. First, we model the influences that parties in the PKI have to incentivize one another to deploy a PKI enhancement, and determine that potential PKI enhancements should focus their initial efforts on convincing browser vendors to deploy. Second, as a promising vendor-based solution we propose deployment status filters, which use a Bloom filter to monitor deployment status and efficiently defend against downgrade attacks from the enhanced protocol to the current TLS PKI. Our results provide promising deployment strategies for log-based PKI enhancements and raise additional questions for further fruitful research.
Keywords: Bloom filters, deployment, public-key infrastructures (ID#: 15-5574)
URL: http://doi.acm.org/10.1145/2751323.2751324
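Since the proposed defense rests on a Bloom filter, a minimal sketch may help readers unfamiliar with the structure. The parameters, hash construction, and browser-side usage below are illustrative assumptions, not the paper's construction:

```python
# A minimal Bloom filter: a browser could hold a compact filter of domains
# known to deploy the enhanced protocol and refuse to downgrade for them.
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 20, k_hashes=7):
        self.m, self.k = m_bits, k_hashes
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(b"%d:%s" % (i, item.encode())).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

deployed = BloomFilter()
deployed.add("example.com")
# A connection to example.com offering only legacy TLS PKI is then suspicious:
print("example.com" in deployed)   # True (Bloom filters have no false negatives)
print("other.net" in deployed)     # almost certainly False
```

The design fits the downgrade-defense role well: false positives only cause an occasional unnecessary refusal to downgrade, while the absence of false negatives means a genuinely deployed domain is never silently downgraded.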
Jan Spooren, Davy Preuveneers, Wouter Joosen; “Mobile Device Fingerprinting Considered Harmful For Risk-Based Authentication;” EuroSec '15 Proceedings of the Eighth European Workshop on System Security, April 2015, Article No. 6. doi: 10.1145/2751323.2751329
Abstract: In this paper, we present a critical assessment of the use of device fingerprinting for risk-based authentication in a state-of-practice identity and access management system. Risk-based authentication automatically elevates the level of authentication whenever a particular risk threshold is exceeded. Contemporary identity and access management systems frequently leverage browser-based device fingerprints to recognize trusted devices of a certain individual. We analyzed the variability and the predictability of mobile device fingerprints. Our research shows that particularly for mobile devices the fingerprints carry a lot of similarity, even across models and brands, making them less reliable for risk assessment and step-up authentication.
Keywords: authentication, device fingerprinting, fraud detection, risk (ID#: 15-5575)
URL: http://doi.acm.org/10.1145/2751323.2751329
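To see why fingerprint similarity undermines risk assessment, consider the following toy comparison (the attribute names, values, and recognition threshold are invented). Two physically distinct handsets of the same popular model can produce near-identical fingerprints and therefore the same risk decision:

```python
# Toy fingerprint comparison: attribute-wise similarity between an enrolled
# device and an observed one. High scores from *different* devices are the
# failure mode the paper documents for mobile fingerprints.
fp_enrolled = {"ua": "Mozilla/5.0 (Linux; Android 4.4)", "screen": "360x640",
               "tz": "UTC+1", "fonts": "droid-sans", "plugins": ""}
fp_observed = {"ua": "Mozilla/5.0 (Linux; Android 4.4)", "screen": "360x640",
               "tz": "UTC+1", "fonts": "droid-sans", "plugins": ""}

def similarity(a, b):
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

THRESHOLD = 0.9   # hypothetical "recognized device" cut-off
score = similarity(fp_enrolled, fp_observed)
print(score, "-> skip step-up auth" if score >= THRESHOLD else "-> step up")
```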
Valentin Tudor, Magnus Almgren, Marina Papatriantafilou; “A Study on Data De-Pseudonymization in the Smart Grid;” EuroSec '15 Proceedings of the Eighth European Workshop on System Security, April 2015, Article No. 2. doi: 10.1145/2751323.2751325
Abstract: In the transition to the smart grid, electricity networks are becoming more data intensive, with more data-producing devices deployed, increasing both the opportunities and the challenges in how the collected data are used. For example, in the Advanced Metering Infrastructure (AMI) the devices and their corresponding data give more information about the operational parameters of the environment, but also details about the habits of the people living in the houses monitored by smart meters. Different anonymization techniques have been proposed to minimize privacy concerns, among them the use of pseudonyms. In this work we return to the question of the effectiveness of pseudonyms, by investigating how a previously reported methodology for de-pseudonymization performs given a more realistic and larger dataset than was previously used. We also propose our own simpler de-pseudonymization methodology and compare the results. Our results indicate, not surprisingly, that large realistic datasets are very important for properly understanding how an experimental method performs; results based on small datasets run the risk of not being generalizable. In particular, we show that the number of households re-identified by breaking pseudonyms depends on the size of the dataset and on the period during which the pseudonyms remain constant. In the setting of the smart grid, results will even vary with the season in which the dataset was captured. Knowing that relatively simple changes in the data collection procedure may significantly increase resistance to de-anonymization attacks will help future AMI deployments.
Keywords: AMI data de-pseudonymization, AMI privacy, smart grid data (ID#: 15-5576)
URL: http://doi.acm.org/10.1145/2751323.2751325
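The de-pseudonymization setting can be sketched in a few lines. The following simplified illustration (synthetic data; the paper works on real AMI traces and a previously reported methodology) links pseudonyms across two periods by matching each household's consumption profile to its nearest neighbor from the earlier period:

```python
# Simplified de-pseudonymization: stable household habits let an attacker
# re-link fresh pseudonyms to old ones via nearest-neighbor profile matching.
import numpy as np

rng = np.random.default_rng(0)
profiles_p1 = rng.random((50, 24))                                # 50 households, hourly means
profiles_p2 = profiles_p1 + 0.05 * rng.standard_normal((50, 24))  # next period, similar habits
perm = rng.permutation(50)                                        # pseudonym change
profiles_p2 = profiles_p2[perm]

# Link pseudonyms by nearest Euclidean distance between load profiles.
dists = np.linalg.norm(profiles_p1[:, None, :] - profiles_p2[None, :, :], axis=2)
match = dists.argmin(axis=1)                                      # best period-2 match per household
re_identified = (perm[match] == np.arange(50)).mean()
print("re-identification rate: %.0f%%" % (100 * re_identified))
```

The paper's finding follows directly from this picture: the longer a pseudonym stays constant (more data per profile) and the smaller the population, the easier the nearest-neighbor match becomes.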
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conferences: Human Computer Interaction (CHI 15), Korea |
The 33rd ACM Conference on Human Factors in Computing Systems was held on April 18-23, 2015 in Seoul, Korea. The conference web page is available at http://chi2015.acm.org/. The citations are on topics of interest directly related to the Science of Security community: human factors in cybersecurity.
Mahdi Nasrullah Al-Ameen, Matthew Wright, Shannon Scielzo; “Towards Making Random Passwords Memorable: Leveraging Users' Cognitive Ability Through Multiple Cues;” CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 2315-2324. doi: 10.1145/2702123.2702241
Abstract: Given the choice, users produce passwords reflecting common strategies and patterns that ease recall but offer uncertain and often weak security. System-assigned passwords provide measurable security but suffer from poor memorability. To address this usability-security tension, we argue that systems should assign random passwords but also help with memorization and recall. We investigate the feasibility of this approach with CuedR, a novel cued-recognition authentication scheme that provides users with multiple cues (visual, verbal, and spatial) and lets them choose the cues that best fit their learning process for later recognition of system-assigned keywords. In our lab study, all 37 of our participants could log in within three attempts one week after registration (mean login time: 38.0 seconds). A pilot study on using multiple CuedR passwords also showed 100% recall within three attempts. Based on our results, we suggest appropriate applications for CuedR, such as financial and e-commerce accounts.
Keywords: authentication, cued-recognition, usable security (ID#: 15-5601)
URL: http://doi.acm.org/10.1145/2702123.2702241
Emanuel von Zezschwitz, Alexander De Luca, Philipp Janssen, Heinrich Hussmann; “Easy to Draw, but Hard to Trace?: On the Observability of Grid-based (Un)lock Patterns;” CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 2339-2342. doi: 10.1145/2702123.2702202
Abstract: We performed a systematic evaluation of the shoulder surfing susceptibility of the Android pattern (un)lock. The results of an online study (n=298) enabled us to quantify the influence of pattern length, line visibility, number of knight moves, number of overlaps and number of intersections on observation resistance. The results show that all parameters have a highly significant influence, with line visibility and pattern length being most important. We discuss implications for real-world patterns and present a linear regression model that can predict the observability of a given pattern. The model can be used to provide proactive security measurements for (un)lock patterns, in analogy to password meters.
Keywords: authentication, observability, pattern, security (ID#: 15-5602)
URL: http://doi.acm.org/10.1145/2702123.2702202
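The paper's predictive model can be approximated in spirit as follows. This sketch fits an ordinary-least-squares model from pattern features to an observability score; the feature values, scores, and resulting coefficients are invented stand-ins for the study's data:

```python
# Hypothetical linear model: observability predicted from pattern features.
import numpy as np

# columns: length, lines_visible, knight_moves, overlaps, intersections
X = np.array([[4, 1, 0, 0, 0],
              [6, 1, 1, 0, 1],
              [8, 0, 2, 1, 2],
              [9, 0, 3, 2, 3]], dtype=float)
y = np.array([0.9, 0.6, 0.3, 0.15])   # fraction of observers reproducing it

X1 = np.hstack([np.ones((len(X), 1)), X])       # add intercept column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)   # ordinary least squares

def predict_observability(features):
    return float(np.r_[1.0, features] @ coef)

print(predict_observability([7, 0, 1, 1, 1]))   # score a candidate pattern
```

This is the structure that would let an "observability meter" warn users proactively, in analogy to a password meter.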
Hendrik Meutzner, Santosh Gupta, Dorothea Kolossa; “Constructing Secure Audio CAPTCHAs by Exploiting Differences between Humans and Machines;” CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 2335-2338. doi: 10.1145/2702123.2702127
Abstract: To prevent abuses of Internet services, CAPTCHAs are used to distinguish humans from programs where an audio-based scheme is beneficial to support visually impaired people. Previous studies show that most audio CAPTCHAs, albeit hard to solve for humans, are lacking security strength. In this work we propose an audio CAPTCHA that is far more robust against automated attacks than it is reported for current CAPTCHA schemes. The CAPTCHA exhibits a good trade-off between human usability and security. This is achieved by exploiting the fact that the human capabilities of language understanding and speech recognition are clearly superior compared to current machines. We evaluate the CAPTCHA security by using a state-of-the-art attack and assess the intelligibility by means of a large-scale listening experiment.
Keywords: audio captcha, humans vs. machines, security, usability, user studies, visual impairment, web accessibility (ID#: 15-5603)
URL: http://doi.acm.org/10.1145/2702123.2702127
Eric Gilbert; “Open Book: A Socially-inspired Cloaking Technique that Uses Lexical Abstraction to Transform Messages;” CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 477-486. doi: 10.1145/2702123.2702295
Abstract: Both governments and corporations routinely surveil computer-mediated communication (CMC). Technologists often suggest widespread encryption as a defense mechanism, but CMC encryption schemes have historically faced significant usability and adoption problems. Here, we introduce a novel technique called Open Book designed to address these two problems. Inspired by how people deal with eavesdroppers offline, Open Book uses data mining and natural language processing to transform CMC messages into ones that are vaguer than the original. Specifically, we present: 1) a greedy Open Book algorithm that cloaks messages by transforming them to resemble the average Internet message; 2) an open-source, browser-based instantiation of it called Read Me, designed for Gmail; and, 3) a set of experiments showing that intended recipients can decode Open Book messages, but that unintended human- and machine-recipients cannot. Finally, we reflect on some open questions raised by this approach, such as recognizability and future side-channel attacks.
Keywords: cmc, encryption, social media, usable security (ID#: 15-5604)
URL: http://doi.acm.org/10.1145/2702123.2702295
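The greedy cloaking idea can be illustrated with a toy transformation: replace each word rare enough to be identifying with a vaguer, more common stand-in. The frequency table, cutoff, and generalization map below are invented; Open Book derives such resources by mining large corpora:

```python
# Toy greedy cloaking: nudge a message toward the "average" one by
# abstracting away its rare, identifying words.
word_freq = {"meet": 9e-4, "tomorrow": 8e-4, "thing": 2e-3,
             "courthouse": 2e-7, "noon": 5e-5}
generalize = {"courthouse": "place", "noon": "later"}   # vaguer substitutes
RARITY_CUTOFF = 1e-4

def cloak(message):
    out = []
    for w in message.lower().split():
        if word_freq.get(w, 0.0) < RARITY_CUTOFF:       # too identifying
            out.append(generalize.get(w, "something"))
        else:
            out.append(w)
    return " ".join(out)

print(cloak("meet tomorrow courthouse noon"))
# -> "meet tomorrow place later": vaguer to eavesdroppers, but decodable
#    by an insider who shares the context.
```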
Serge Egelman, Eyal Peer; “Scaling the Security Wall: Developing a Security Behavior Intentions Scale (SeBIS);” CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 2873-2882. doi: 10.1145/2702123.2702249
Abstract: Despite the plethora of security advice and online education materials offered to end-users, there exists no standard measurement tool for end-user security behaviors. We present the creation of such a tool. We surveyed the most common computer security advice that experts offer to end-users in order to construct a set of Likert scale questions to probe the extent to which respondents claim to follow this advice. Using these questions, we iteratively surveyed a pool of 3,619 computer users to refine our question set such that each question was applicable to a large percentage of the population, exhibited adequate variance between respondents, and had high reliability (i.e., desirable psychometric properties). After performing both exploratory and confirmatory factor analysis, we identified a 16-item scale consisting of four sub-scales that measures attitudes towards choosing passwords, device securement, staying up-to-date, and proactive awareness.
Keywords: individual differences, psychometrics, security behavior (ID#: 15-5605)
URL: http://doi.acm.org/10.1145/2702123.2702249
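One standard reliability check used when refining such scales is Cronbach's alpha for internal consistency. The sketch below computes it for a hypothetical four-item sub-scale with invented Likert responses:

```python
# Cronbach's alpha for one sub-scale (toy Likert data; rows = respondents,
# columns = items). alpha = k/(k-1) * (1 - sum(item variances)/total variance)
import numpy as np

responses = np.array([[5, 4, 5, 4],
                      [2, 2, 3, 2],
                      [4, 4, 4, 5],
                      [1, 2, 1, 2],
                      [3, 3, 4, 3]], dtype=float)

k = responses.shape[1]
item_vars = responses.var(axis=0, ddof=1).sum()
total_var = responses.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars / total_var)
print("Cronbach's alpha = %.2f" % alpha)   # ~0.7+ is usually deemed reliable
```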
Adrienne Porter Felt, Alex Ainslie, Robert W. Reeder, Sunny Consolvo, Somas Thyagaraja, Alan Bettes, Helen Harris, Jeff Grimes; “Improving SSL Warnings: Comprehension and Adherence;” CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 2893-2902. doi: 10.1145/2702123.2702442
Abstract: Browsers warn users when the privacy of an SSL/TLS connection might be at risk. An ideal SSL warning would empower users to make informed decisions and, failing that, guide confused users to safety. Unfortunately, users struggle to understand and often disregard real SSL warnings. We report on the task of designing a new SSL warning, with the goal of improving comprehension and adherence. We designed a new SSL warning based on recommendations from warning literature and tested our proposal with microsurveys and a field experiment. We ultimately failed at our goal of a well-understood warning. However, nearly 30% more total users chose to remain safe after seeing our warning. We attribute this success to opinionated design, which promotes safety with visual cues. Subsequently, our proposal was released as the new Google Chrome SSL warning. We raise questions about warning comprehension advice and recommend that other warning designers use opinionated design.
Keywords: design, google consumer surveys, https, microsurveys, security, ssl, tls/ssl, warnings (ID#: 15-5606)
URL: http://doi.acm.org/10.1145/2702123.2702442
Youngbae Song, Geumhwan Cho, Seongyeol Oh, Hyoungshick Kim, Jun Ho Huh; “On the Effectiveness of Pattern Lock Strength Meters: Measuring the Strength of Real World Pattern Locks;” CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 2893-2902. doi: 10.1145/2702123.2702365
Abstract: We propose an effective pattern lock strength meter to help users choose stronger pattern locks on Android devices. To evaluate the effectiveness of the proposed meter with a real world dataset (i.e., with complete ecological validity), we created an Android application called EnCloud that allows users to encrypt their Dropbox files. 101 pattern locks generated by real EnCloud users were collected and analyzed, where some portion of the users were provided with the meter support. Our statistical analysis indicates that about 10% of the pattern locks that were generated without the meter support could be compromised through just 16 guessing attempts. As for the pattern locks that were generated with the meter support, that number goes up to 48 guessing attempts, showing significant improvement in security. Our recommendation is to implement a strength meter in the next version of Android.
Keywords: password, password strength meter, pattern lock, security (ID#: 15-5607)
URL: http://doi.acm.org/10.1145/2702123.2702365
Alina Hang, Alexander De Luca, Heinrich Hussmann; “I Know What You Did Last Week! Do You?: Dynamic Security Questions for Fallback Authentication on Smartphones;” CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 2893-2902. doi: 10.1145/2702123.2702131
Abstract: In this paper, we present the design and evaluation of dynamic security questions for fallback authentication. In case users lose access to their device, the system asks questions about their usage behavior (e.g. calls, text messages or app usage). We performed two consecutive user studies with real users and real adversaries to identify questions that work well in the sense that they are easy to answer for the genuine user, but hard to guess for an adversary. The results show that app installations and communication are the most promising categories of questions. Using three questions from the evaluated categories was sufficient to get an accuracy of 95.5% - 100%.
Keywords: dynamic security questions, fallback authentication (ID#: 15-5608)
URL: http://doi.acm.org/10.1145/2702123.2702131
Richard Shay, Lujo Bauer, Nicolas Christin, Lorrie Faith Cranor, Alain Forget, Saranga Komanduri, Michelle L. Mazurek, William Melicher, Sean M. Segreti, Blase Ur; “A Spoonful of Sugar?: The Impact of Guidance and Feedback on Password-Creation Behavior;” CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 2903-2912. doi: 10.1145/2702123.2702586
Abstract: Users often struggle to create passwords under strict requirements. To make this process easier, some providers present real-time feedback during password creation, indicating which requirements are not yet met. Other providers guide users through a multi-step password-creation process. Our 6,435-participant online study examines how feedback and guidance affect password security and usability. We find that real-time password-creation feedback can help users create strong passwords with fewer errors. We also find that although guiding participants through a three-step password-creation process can make creation easier, it may result in weaker passwords. Our results suggest that service providers should present password requirements with feedback to increase usability. However, the presentation of feedback and guidance must be carefully considered, since identical requirements can have different security and usability effects depending on presentation.
Keywords: authentication, password-composition policies, passwords, security policy, usable security (ID#: 15-5609)
URL: http://doi.acm.org/10.1145/2702123.2702586
Jason W. Clark, Peter Snyder, Damon McCoy, Chris Kanich; “'I Saw Images I Didn't Even Know I Had': Understanding User Perceptions of Cloud Storage Privacy;” CHI '15 Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, April 2015, Pages 1641-1644. doi: 10.1145/2702123.2702535
Abstract: Billions of people use cloud-based storage for personal files. While many are likely aware of the extent to which they store information in the cloud, it is unclear whether users are fully aware of what they are storing online. We recruited 30 research subjects from Craigslist to investigate how users interact with and understand the privacy issues of cloud storage. We studied this phenomenon through surveys, an interview, and custom software which lets users see and delete their photos stored in the cloud. We found that a majority of users stored private photos in the cloud that they did not intend to upload, and a large portion also chose to permanently delete some of the offending images. We believe our study highlights a mismatch between user expectation and reality. As cloud storage is plentiful and ubiquitous, effective tools for enabling risk self-assessment are necessary to protect users' privacy.
Keywords: cloud, privacy, security, threat modeling (ID#: 15-5610)
URL: http://doi.acm.org/10.1145/2702123.2702535
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
International Conferences: The Third International Conference on Computer, Communication, Control and Information Technology (C3IT), 2015, India |
The Third International Conference on Computer, Communication, Control and Information Technology (C3IT), 2015, was held 7-8 Feb. 2015 at Adisaptagram, Hooghly, West Bengal, India. C3IT's aims are to bring together leading academics, scientists, and researchers to exchange innovative ideas, experiences, and research outcomes in all areas of computer, communication, control, and information technology, and to improve international cooperation and collaborative research in these fields.
Haider, R., "Language-Based Security Analysis Of Database Applications," Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, pp. 1-4, 7-8 Feb. 2015. doi: 10.1109/C3IT.2015.7060109
Abstract: In today's information age, databases are at the heart of information systems. Unauthorized leakage of confidential database information, as computed by the associated database applications, may put the system at risk. Language-based information flow analysis is a promising field of research for detecting possible information leakage in software systems. So far, researchers have paid little attention to the case of applications embedding database languages. In this paper, we address the need for proper analysis of data manipulation languages, and we give an overview of the possible extension of language-based approaches to information systems supporting databases at the back-end.
Keywords: authorisation; database languages; database management systems; confidential database information; data manipulation language; database language; information system; language-based information flow analysis; language-based security analysis; unauthorized leakage; Abstracts; Context; Database languages; Databases; Information systems; Security; Semantics; Database Query Languages; Information System; Language-based Information Flow; Static Analysis (ID#: 15-5144)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060109&isnumber=7060104
Datta, B.; Tat, S.; Bandyopadhyay, S.K., "Robust High Capacity Audio Steganography Using Modulo Operator," Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, pp. 1-5, 7-8 Feb. 2015. doi: 10.1109/C3IT.2015.7060146
Abstract: A secure audio steganography technique is introduced in this paper. Here the modulo operator is used for hiding the target string. Both the embedding and extraction processes consist of two steps, which makes the method more robust. During preprocessing, the hexadecimal equivalent of the target string is computed four bits at a time, which increases the capacity of the cover media. The modulo operator is used during embedding, and the adjustment is done in such a way that distortion is reduced, which increases imperceptibility. The quality of the experimental results is analyzed by SNR and compared with the standard LSB and HLLAS techniques. Bits per sample is also calculated, which demonstrates the efficiency of the proposed technique.
Keywords: audio coding; distortion; steganography; SNR; audio steganography technique security; modulo operator; target string hexadecimal equivalent preprocessing; target string hiding;Conferences;Cryptography;Media;Receivers;Robustness;Signal to noise ratio; Standards; Audio Steganography; Cover Audio; Hexadecimal; Modulo Operator; Post-processing; Pre-processing; Stego Audio (ID#: 15-5145)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060146&isnumber=7060104
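The modulo-based embedding can be sketched as follows. This is a hedged reconstruction in the spirit of the abstract, not the paper's exact scheme: each cover sample is nudged to the nearest value whose remainder modulo 16 encodes one hexadecimal digit of the message, keeping per-sample distortion at most 8:

```python
# Illustrative modulo-operator embedding: one hex digit (4 bits) per sample.
def embed(samples, message):
    digits = [int(h, 16) for h in message.encode().hex()]   # 4 bits at a time
    stego = list(samples)
    for i, d in enumerate(digits):
        s = stego[i]
        base = s - (s % 16) + d            # candidate in the same modulo block
        if base > s + 8:                   # nearer candidate one block below
            base -= 16
        elif base < s - 8:                 # nearer candidate one block above
            base += 16
        stego[i] = base                    # distortion bounded by 8 per sample
    return stego

def extract(stego, n_chars):
    hex_str = "".join("%x" % (stego[i] % 16) for i in range(2 * n_chars))
    return bytes.fromhex(hex_str).decode()

cover = [1000, 1013, 987, 1002, 995, 1021, 1008, 990]   # toy audio samples
stego = embed(cover, "hi")                               # "hi" -> 4 hex digits
print(extract(stego, 2))                                 # -> "hi"
```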
Mishra, M.K.; Mukhopadhyay, S.; Biswas, G.P., "Architecture And Secure Implementation For Video Conferencing Technique," Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, pp. 1-6, 7-8 Feb. 2015. doi: 10.1109/C3IT.2015.7060168
Abstract: With the rapid development of various multimedia technologies, large amounts of multimedia data are generated and transmitted for commercial, medical, military, and other uses, and if they are not well protected, the data may be accessed by adversaries or unauthorized users. Thus, security and privacy protection of important messages have become important issues, and a number of techniques based on selective, entropy-based, and/or complete encryption have been developed. In this paper, we consider some existing cryptographic techniques, namely trusted third-party, RSA, GDH.2, RC4, etc., and integrate them in such a way that overall security protection for video conferencing is achieved. Both the required block diagrams and the protocols of the proposed scheme are provided, and a security and performance analysis shows that it is well secured, computation-efficient, and applicable to real-life operations.
Keywords: cryptographic protocols; data protection; public key cryptography; teleconferencing; video communication;GDH.2;RC4;RSA;cryptographic protocol; data privacy protection; data security; entropy-based encryption; group Diffie Hellman key agreement ;multimedia technology; trusted third-party; video conferencing technique secure implementation; Ciphers; Encryption; Standards; Streaming media; Video coding;GDH.2;H.264/AVC;RC4;group public key; video conference; video encryption (ID#: 15-5146)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060168&isnumber=7060104
Chowdhury, P.; Ray, S.; Mukherjee, D., "An Embedded Monitoring Unit For A Lead-Acid Battery With Reference To A PV System," Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, pp. 1-3, 7-8 Feb. 2015. doi: 10.1109/C3IT.2015.7060119
Abstract: The most significant Balance of System (BOS) unit for monitoring the health of a Photovoltaic (PV) system is the power conditioner unit. Here, the authors propose incorporating an additional controller unit for monitoring all possible states of the battery bank, leading to an enhanced energy-security mechanism for small decentralized PV applications.
Keywords: battery storage plants condition monitoring; energy security; photovoltaic power systems; PV system; Pb; balance of system unit; battery bank; controller; decentralized PV applications; embedded monitoring unit; enhanced energy security mechanism; health monitoring; lead-acid battery; photovoltaic system; Batteries; Discharges (electric);Light emitting diodes; Monitoring; Security; System-on-chip; Voltage control; Battery; Depth of Discharge (DOD);Energy Security; Energy Security Enhancing Mechanism (ESEM);Run Time to Empty (RTTE); State of Charge (ID#: 15-5147)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060119&isnumber=7060104
Amin, R.; Biswas, G.P., "Anonymity Preserving Secure Hash Function Based Authentication Scheme For Consumer USB Mass Storage Device," Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, pp. 1-6, 7-8 Feb. 2015. doi: 10.1109/C3IT.2015.7060190
Abstract: A USB (Universal Serial Bus) mass storage device makes storage accessible to a host computing device and enables file transfers after completing mutual authentication between the authentication server and the user. It is also a very popular device because of its portability, large storage capacity, and high transmission speed. To protect the privacy of a file transferred to a storage device, several security protocols have been proposed, but none of them is completely free from security weaknesses. Recently, He et al. proposed a multi-factor-based security protocol which is efficient, but the protocol is not applicable for practical implementation, as it does not provide a password change procedure, an essential phase in any password-based user authentication and key agreement protocol. As the computation and implementation of a cryptographic one-way hash function is more trouble-free than other existing cryptographic algorithms, we propose a lightweight and anonymity-preserving three-factor user authentication and key agreement protocol for consumer mass storage devices and analyze the proposed protocol using BAN logic. Furthermore, we present an informal security analysis of the proposed protocol and confirm that the protocol is free from security weaknesses and applicable for practical implementation.
Keywords: cryptographic protocols; file organisation; BAN logic; USB device; anonymity preserving secure hash function based authentication scheme; anonymity preserving three factor user authentication; authentication server; consumer USB mass storage device; consumer mass storage devices; cryptographic algorithms; cryptographic one-way hash function; file transfers; host computing device; informal security analysis; key agreement protocol; multifactor based security protocols; password based user authentication; password change procedure; storage capacity; universal serial bus mass storage device; Authentication; Cryptography; Protocols; Servers; Smart cards; Universal Serial Bus; Anonymity; Attack; File Secrecy; USB MSD; authentication (ID#: 15-5148)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060190&isnumber=7060104
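While the paper's three-factor protocol is more involved, the primitive it builds on, a cryptographic one-way hash, supports lightweight authentication along the following generic lines (a challenge-response sketch with illustrative names, not the authors' protocol):

```python
# Generic one-way-hash challenge-response: the device proves knowledge of a
# registered key without revealing it, using only cheap hash computations.
import hashlib, hmac, os

def h(*parts):
    return hashlib.sha256(b"|".join(parts)).digest()

shared_key = os.urandom(32)              # established at registration

nonce = os.urandom(16)                   # fresh server challenge per session
device_response = h(shared_key, nonce)   # computed on the device

server_expected = h(shared_key, nonce)   # recomputed by the server
print(hmac.compare_digest(device_response, server_expected))  # True -> authentic
```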
Das, S.; Dey, H.; Ghosh, R., "An Approach To Assess The Optimality Of Refining RC4," Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, pp. 1-6, 7-8 Feb. 2015. doi: 10.1109/C3IT.2015.7060139
Abstract: Though RC4 has proved itself to be a simple, fast, and robust stream cipher, and it is trusted by many organizations, a number of researchers have claimed that RC4 has weaknesses and biases in its internal state. To increase its security, some guidelines recommend discarding some initial bytes, like N, 2N, or more, from the RC4 key-stream (N is 256, generally). In this paper, the authors try to find the optimum number of bytes to be discarded to obtain a more secure RC4, by analyzing some variants of it. All the algorithms, including the original one, are analyzed with the NIST Statistical Test Suite, and it has been found that discarding more and more bytes is not necessary to increase the security of RC4.
Keywords: cryptography; statistical analysis; trusted computing; NIST statistical test suite; RC4 key stream;RC4 refining optimality;trusted robust stream cipher; Algorithm design and analysis; Ciphers; Generators; Hardware;NIST;NIST test suite;RC4 security; key stream generator; modified RC4; stream cipher (ID#: 15-5149)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060139&isnumber=7060104
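The variants under study follow the well-known "RC4-drop[N]" pattern: run the generator and discard the first N key-stream bytes before use. A standard implementation looks like this:

```python
# RC4 key-stream generation with the drop-N hardening evaluated in the paper.
def rc4_keystream(key, n, drop=256):
    S = list(range(256))
    j = 0
    for i in range(256):                       # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = []
    for _ in range(drop + n):                  # PRGA; first `drop` bytes unused
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out[drop:])

ks = rc4_keystream(b"secret key", 16, drop=256)   # RC4-drop[256]
print(ks.hex())
```

The paper's question amounts to choosing `drop` so that the key-stream passes statistical tests without paying for more discarded bytes than necessary.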
Barman, S.; Samanta, D.; Chattopadhyay, S., "Revocable Key Generation From Irrevocable Biometric Data For Symmetric Cryptography," Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, pp. 1-4, 7-8 Feb. 2015. doi: 10.1109/C3IT.2015.7060182
Abstract: A crypto-biometric system (CBS) is a combination of biometrics with cryptography to enhance network security. Biometrics is the most trustworthy measure for identifying a person uniquely by his or her behavioral and physiological characteristics. Cryptography is an effective means of securing information. The security of cryptography depends on the strength of the cryptographic key, and the strength of the key depends on its length. In traditional cryptography, the key is generated randomly and is very difficult to remember, as the key is not linked with the user. To address this limitation of cryptography, a CBS uses the biometric data of the user to bind the key with its owner; as the key is linked with the user's biometric data, the user does not need to remember it. Since biometric data is irrevocable, it becomes useless when compromised, and as a result the biometric-based key also becomes useless. In this approach, fingerprint features are used to generate a key for cryptographic applications. The key is revocable and easy to revoke when required. In our experiment, the FVC2004 fingerprint database is used to investigate the results.
Keywords: cryptography; fingerprint identification; CBS; FVC2004 fingerprint database; behavioral and physiological characteristics; biometric based key; biometric data; crypto-biometric system; cryptographic application; cryptographic key; fingerprint feature; information security; irrevocable biometric data; network security; revocable key generation; symmetric cryptography; Bioinformatics; Cryptography; Databases; Feature extraction; Fingerprint recognition; Iris recognition (ID#: 15-5150)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060182&isnumber=7060104
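One common way to obtain a revocable key from irrevocable biometric data, which may or may not match the paper's construction, is to derive the key from the quantized features together with a random, replaceable token; revocation then simply means issuing a new token. A simplified sketch follows (the feature encoding is invented, and a real system would need error-tolerant encoding such as a fuzzy extractor, since repeated scans of the same finger vary):

```python
# Revocable key derivation: hash quantized fingerprint features with a
# replaceable random token. Revoke by discarding the token, not the finger.
import hashlib, os

def derive_key(minutiae, token):
    # Quantized minutiae (x, y, angle) serialized into a canonical byte string.
    encoded = b"".join(b"%d,%d,%d;" % m for m in sorted(minutiae))
    return hashlib.pbkdf2_hmac("sha256", encoded, token, 100_000)

features = [(12, 40, 3), (55, 10, 1), (30, 77, 6)]   # toy fingerprint features
token = os.urandom(16)                                # stored on card/server

key = derive_key(features, token)                     # symmetric key material
print(key.hex())

# Revocation: discard the old token and derive a fresh key from the same finger.
new_key = derive_key(features, os.urandom(16))
```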
Mahto, D.; Yadav, D.K., "Enhancing Security Of One-Time Password Using Elliptic Curve Cryptography With Biometrics For E-Commerce Applications," Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, pp. 1-6, 7-8 Feb. 2015. doi: 10.1109/C3IT.2015.7060172
Abstract: Security of one-time passwords (OTP) is essential because nowadays most e-commerce transactions are performed with the help of this mechanism. OTP is used to counter replay attacks/eavesdropping, one type of attack on network-connected or isolated computing environments. For achieving a 112-bit security level, the Rivest-Shamir-Adleman (RSA) algorithm needs a key size of 2048 bits, while Elliptic Curve Cryptography (ECC) needs a key size of 224-255 bits. Another issue with most existing implementations of security models is the storage of secret keys. Cryptographic keys are often kept in an unsecured way, where they can either be guessed/social-engineered or obtained through brute-force attacks. This becomes a weak link and leads to integrity issues for sensitive data in a security model. To overcome the above problems, biometrics is combined with cryptography to develop a strong security model. This paper suggests an enhanced security model for an OTP system using ECC with palm-vein biometrics. This model also offers better security with a smaller key size than other prevalent public-key crypto-models. The cryptographic keys also need not be memorized or kept anywhere; they are generated as and when needed.
Keywords: authorisation; biometrics (access control);electronic commerce; public key cryptography; ECC; OTP; cryptographic keys; e-commerce; eavesdropping; elliptic curve cryptography; isolated computing environment; network-connected computing environment; one-time password; palm-vein biometrics; replay attack; security model; Biological system modeling; Biometrics (access control); Elliptic curve cryptography; Elliptic curves; Veins; Biometrics; Elliptic Curve Cryptography (ECC); One-Time Password; Online Banking; Palm Vein (ID#: 15-5152)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060172&isnumber=7060104
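To illustrate the key-size point, the sketch below derives a shared secret over a 256-bit curve via ECDH (which exceeds the ~112-bit security level that requires a 2048-bit RSA modulus) and turns it into an OTP with HOTP-style truncation. It uses the Python cryptography package; the palm-vein binding of the paper is not modeled, and the OTP construction here is a generic illustration rather than the authors' scheme:

```python
# ECDH shared secret on a 256-bit curve, used as the key for an HOTP-style OTP.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes, hmac

server_priv = ec.generate_private_key(ec.SECP256R1())
client_priv = ec.generate_private_key(ec.SECP256R1())

# Both sides derive the same shared secret via ECDH key agreement.
shared = client_priv.exchange(ec.ECDH(), server_priv.public_key())
assert shared == server_priv.exchange(ec.ECDH(), client_priv.public_key())

def otp(secret, counter, digits=6):
    mac = hmac.HMAC(secret, hashes.SHA256())
    mac.update(counter.to_bytes(8, "big"))
    digest = mac.finalize()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return code % (10 ** digits)

print("OTP for transaction 1: %06d" % otp(shared, 1))
```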
Ghosh, P.; Mitra, R., "Proposed GA-BFSS And Logistic Regression Based Intrusion Detection System," Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, pp. 1-6, 7-8 Feb. 2015. doi: 10.1109/C3IT.2015.7060117
Abstract: Enormous growth in Internet technology accelerates the sharing of limitless data, services, and resources. But along with the innumerable benefits of the Internet, a number of serious issues have also arisen regarding data security, system security, and user privacy. A number of intruders attempt to gain unauthorized access to computer networks. An Intrusion Detection System (IDS) is a strong strategy for providing security. In this paper, we have proposed an efficient IDS by selecting relevant features from the NSL-KDD dataset and using a Logistic Regression (LR) based classifier. To decrease memory space and learning time, a feature selection method is required. In this paper we have selected a number of feature sets, using the approach of a Genetic Algorithm (GA), with our proposed fitness score based on Mutual Correlation. From these feature sets, we have selected the fittest set of features using our proposed Best Feature Set Selection (BFSS) method. After selecting the most relevant features from the NSL-KDD dataset, we used LR-based classification. Thus, an efficient IDS is created by applying the concept of GA with BFSS for feature selection and LR for classification to detect network intrusions.
Keywords: feature selection; genetic algorithms; pattern classification; regression analysis; security of data; BFSS; GA; IDS; LR classifier; best feature set selection method; genetic algorithm; intrusion detection system; logistic regression; mutual correlation; Biological cells; Genetic algorithms; Intrusion detection; Logistics; Sociology; Statistics; Training; BFSS; GA; Gradient Descent; IDS; LR; Mutual Correlation; NSL-KDD (ID#: 15-5153)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060117&isnumber=7060104
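A compressed, hypothetical sketch of the pipeline follows: a genetic algorithm searches feature-subset bitmasks with a fitness that rewards correlation with the label and penalizes mutual correlation among the selected features, and the fittest subset feeds a logistic regression classifier (here via scikit-learn). The GA parameters, fitness details, and toy data are invented for illustration:

```python
# Toy GA-based feature selection feeding a logistic-regression classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.standard_normal((300, 12))                 # stand-in for NSL-KDD features
y = (X[:, 0] + X[:, 3] - X[:, 7] + 0.3 * rng.standard_normal(300) > 0).astype(int)

def fitness(mask):
    idx = np.flatnonzero(mask)
    if idx.size == 0:
        return -1.0
    rel = np.mean([abs(np.corrcoef(X[:, i], y)[0, 1]) for i in idx])
    red = 0.0
    if idx.size > 1:                               # mean off-diagonal correlation
        C = np.abs(np.corrcoef(X[:, idx], rowvar=False))
        red = (C.sum() - idx.size) / (idx.size * (idx.size - 1))
    return rel - red                               # relevant but not redundant

pop = rng.integers(0, 2, (20, X.shape[1]))         # 20 random bitmask chromosomes
for _ in range(30):                                # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]        # truncation selection
    cut = rng.integers(1, X.shape[1])              # one-point crossover
    children = np.vstack([np.r_[parents[i][:cut], parents[(i + 1) % 10][cut:]]
                          for i in range(10)])
    flip = rng.random(children.shape) < 0.05       # mutation
    pop = np.vstack([parents, np.where(flip, 1 - children, children)])

best = pop[np.argmax([fitness(m) for m in pop])]   # the "best feature set"
clf = LogisticRegression().fit(X[:, best.astype(bool)], y)
print("selected:", np.flatnonzero(best),
      "train acc:", clf.score(X[:, best.astype(bool)], y))
```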
Mondal, S.; Setua, S.K., "Extending Trust In Enterprise Systems," Computer, Communication, Control and Information Technology (C3IT), 2015 Third International Conference on, pp. 1-6, 7-8 Feb. 2015. doi: 10.1109/C3IT.2015.7060169
Abstract: Modern enterprises are facing more and more uncertainties and challenges from insecurity and context sensitivity. In view of information security, an enterprise is considered as a collection of assets and their interrelationships and how users use their rights to access the enterprise. These interrelationships may be built into the enterprise information infrastructure, as in the case of connection of hardware elements in network architecture, or in the installation of software or in the information assets. As a result, access to one element may enable access to another if they are connected. An enterprise may specify the conditions on how to access certain assets in certain mode (read, write etc.) as policies. The interconnection of assets, along with specified policies, may lead to vulnerabilities in the enterprise information system if misused. This paper presents a formal methodology for detection of vulnerabilities and threats to enterprise information systems.
Keywords: enterprise resource planning; information systems; security of data; enterprise information infrastructure; enterprise information system; information security; Additives; Authorization; Availability; Databases; Information systems; Permission; Enterprise information security; Policies; Security parameters; Vulnerability (ID#: 15-5154)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7060169&isnumber=7060104
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Lablet Presentations Quarterly Lablet Meeting at CMU July 2015 |
The Science of Security (SoS) quarterly Lablet meeting, sponsored by NSA, was hosted by the Carnegie Mellon University Lablet on July 14 and 15, 2015. Quarterly meetings are held to share research, coordinate, present interim findings, and stimulate thought and discussion about the Science of Security. Two panel sessions produced lively discussions about the nature of the Science of Security and the developing Science of Privacy. Each Lablet presented an update about its ongoing research that included a description of active projects, the number of researchers involved, papers published, the hard problems addressed, and future work.
University of Illinois Urbana Champaign (UIUC)
Dave Nicol and Bill Sanders, UIUC co-PIs, presented their overview. UIUC is actively engaged in five projects addressing metrics (primary), human behavior, and policy and resiliency. These projects are looking at data driven models of attacker behavior, human circumvention of security, and data driven model-based decision making.
The UIUC lablet has generated 21 publications. Two received best paper awards and one received a best poster award at the conferences where they were presented. Twenty students are supported.
Nicol described the “model centric” focus at Illinois, which aims both to develop data-driven predictive models and to test and verify them. Their attacker test bed is a key tool for this work, for both software and cyber-physical systems.
Illinois hosted the 2015 Hot SoS Conference in April and has reached out to new students through a graduate Science of Security seminar and by providing summer internships to graduate students. The four summer interns were a diverse group from two other universities. Their work will be presented in the poster session at Hot SoS 2016.
(ID#: 15-6132)
University of Maryland, College Park (UMD)
The Lablet PI, Jonathan Katz, presented UMD’s overview. UMD currently has 20 faculty, including 15 at Maryland from the departments of Computer Science, Electrical and Computer Engineering, Information Science, Criminology, and Reliability Engineering. In addition, they have five collaborators at other universities.
The ten projects currently underway support more than 15 PhD students and have generated more than a dozen publications. These efforts include workshops on data driven approaches to security and privacy. Their strengths, according to Katz, are human behavior and policy-governed collaboration and security metrics.
Katz highlighted three projects. The first looked at management of Public Key Infrastructures (PKIs). The big issue for PKIs is revocation: what happens to credentials when they are no longer useful or needed? The second looks at how users process security advice. How, for example, do they decide whose advice to take? UMD addresses these questions through semi-structured interviews, using the preliminary results to refine their questions; the refined instruments are then used to determine the credibility of sources. The third project looks at the development of empirical models for vulnerability exploits. Using real-world attack data, the researchers examine vulnerability counts, the attack surface, and the exercised attack surface, and have determined that fewer than 40% of known vulnerabilities are exploited. One interesting conclusion offered is that there are fewer exploited vulnerabilities as we go forward with new generations and versions of software.
(ID#: 15-6133)
Carnegie Mellon University (CMU)
Prof. Travis Breaux presented the CMU overview. CMU is focused on the hard problems of composability and human behavior. The Lablet supports ten students and four post-doctoral research positions.
Fifteen faculty and senior researchers participate along with seven collaborating universities. Within CMU, seven departments and three colleges are involved. Eleven projects are underway and seven major papers have been published to date.
Breaux described the CMU research focus on the hard problems of human factors and composability, which he defined as modular systems that don’t have to be reviewed: their security properties are known and will be retained when used to assemble or build more complex structures. They are addressing the problem through modeling. The challenge is to address both complexity and modularity, and humans are a big part of that challenge. The Lablet hosts the Security Behavior Observatory (SBO), which is collecting behavioral data from users to identify missing knowledge about how people react to and interact with security.
(ID#: 15-6134)
North Carolina State University (NCSU)
Prof. Laurie Williams, NCSU’s PI, offered the NCSU overview. NCSU is working on research projects addressing issues in resilience, policy, metrics, and human factors. Vulnerability and resilience prediction projects include data flow-based detectors, scalable enforcement of network security policies that are resilient, resiliency requirement writing, design and testing, assessment of security problems in open source software, and smart isolation in large-scale production computing infrastructures.
Some of the areas Williams highlighted included policy, human factors, and metrics. On policy, she cited work on formal specifications and analysis of critical norms and policies, and on a scientific understanding of policy complexity, including a human study of firewall complexity. Work on human factors includes warning of phishing attacks and identifying them, information-processing analysis of online deception detection (work being done by collaborating university Purdue), and leveraging cognitive function effects on input-device analytics, e.g., eye tracking and keystrokes. Metrics are one of the keys to a scientific understanding of security; the Lablet’s work looks specifically at attack surface and defense in depth, and at systematization of knowledge from intrusion detection models.
NCSU’s research has generated twenty publications with another nine undergoing peer and editorial review. Their collaboration has engaged 55 authors and 13 institutions.
The Lablet hosted an NSA strategy meeting on the Science of Privacy. This workshop shared hard problem strategy and research methods on this emerging topic.
(ID#: 15-6135)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Panel Presentations Quarterly Lablet Meeting at CMU July 2015 |
The Science of Security (SoS) quarterly Lablet meeting, sponsored by NSA, was hosted by the Carnegie Mellon University Lablet on July 14 and 15, 2015. Quarterly meetings are held to share research, coordinate, present interim findings, and stimulate thought and discussion about the Science of Security. Two panel sessions produced lively discussions about the nature of the Science of Security and the developing Science of Privacy.
Science of Security
The Science of Security panel addressed the concept and tenets of the emerging science of security (SoS). A panel of Lablet faculty members, led by moderator Bill Scherlis (CMU), offered observations about the nature of science generally and how modern cybersecurity issues relate. Following opening remarks by the panelists, a lively discussion among them and the audience took place. The panel consisted of Dave Nicol (UIUC), Carl Landwehr (GWU), Jonathan “Jono” Spring (CMU CERT), Emerson Murphy-Hill (NCSU), and Tudor Dumitras (UMD), with Bill Scherlis (CMU) serving as moderator.
In his opening remarks, Scherlis stated that much of what we do is modeling that we relate to artifacts and phenomena. Good models are good because they are resonant and support reasoning. But adversaries attempt to escape the model, that is, to find work-arounds; given this, what does it mean to speak of a Science of Security? Offering what he labelled a “provocative comment,” he said it is incorrect to claim the Science of Security will simply be more rigorous. He thinks focusing on the right dimensions will disproportionately improve what we do in practicality and productivity. For example, he says there is a robust consensus on the five hard problems. Another example is the emphasis on measurement and methodology. We are not focusing on all of cybersecurity; by focusing and being attentive to scope, we will do better in internal collaboration and in engineering better systems.
Carl Landwehr reviewed the history and evolution of cybersecurity thinking. Don Good of the University of Texas did work in 1986 that suggested a need to “review foundations of computer security.” Fifteen years later, Fred Chang, distinguished professor and scholar at SMU, asked what the science in security was. So the question has been asked for a while.
Landwehr avers it is engineering, not science, according to much of the literature, such as the MITRE JASON study. He clarifies that it is really a Science of Cybersecurity; security is a far broader topic. To be scientific, a subject must be empirical, testable, rigorous, broad, and incremental. There are different types of science: Aristotelian, Newtonian, Darwinian, and now behavioral economics.
Citing Herbert Simon’s 1969 “The Sciences of the Artificial” [n.b. The Sciences of the Artificial, MIT Press, Cambridge, Mass., 1st edition, 1969: “objects (real or symbolic) in the environment of the decision-maker influence choice as much as the intrinsic information-processing capabilities of the decision-maker”; the book explains “the principles of modeling complex systems, particularly the human information-processing system that we call the mind”], Landwehr drew one important distinction: science teaches about actual things, while engineering deals with artificial things. We must, according to Landwehr, make an effort to understand real problems in real systems and understand how experiments enhance knowledge.
Jonathan (“Jono”) Spring asked, “What is SoS? How does our understanding advance SoS?” His answer to the first is the second. He avers SoS is not just engineering, because philosophy is integral to science, and building community and building consensus are part of building science. SoS faces a unique confluence of obstacles: engineered mechanisms designed by someone else who may want to thwart us, and digital economics, with the oddness of zero marginal costs and non-rival goods.
Tudor Dumitras said a big component of SoS is moving toward systems that have measurable elements that allow us to capture the advantages and limitations of real world adversaries. Observation and experimentation give us insights and ways to defeat these adversaries. Measurement also allows correction for model drift and change over time. He added that there is value in reproducing older studies using modern data and techniques to validate those studies and use the knowledge gained to develop the principles.
Emerson Murphy-Hill said that cybersecurity to date has been like a radar gun. The radar gun generated the radar detector; the radar detector then generated the radar detector detector, and so on. Cybersecurity has been similarly linear in its development of tools and foils, extended on and on.
Dave Nicol said that the “piece of the elephant” that resonates with him is studying SoS as first-class objects. The primary question is “what is security?” He likens it to the Bell–LaPadula model of information flow. [n.b. The Bell–LaPadula Model is a state machine model used for enforcing access control in government and military applications.] Another piece of SoS is going from model to implementation. He says SoS is about studying security in a specific context, such as how to break a piece of cryptography.
Bill Scherlis quoted Tony Hoare: “we treat our software as a phenomenon of nature.” We do not control it, but merely build on a basis that is “tacit.”
Carl Landwehr countered that the notion that we cannot predict or control nature is at odds with predictability.
Bill Sanders commented that models are good: build them, learn from them, and then build better models.
Jono Spring commented that we can have the conversation across a range of models, but then asked whether we can translate among each other, or only to the one adjacent person who understands the jargon.
The panel wrapped up their discussion.
(ID#: 15-6136)
Science of Privacy
The second panel discussed the emerging question of the Science of Privacy in a fundamental way. As with the panel discussion about the Science of Security, the panelists offered opening remarks, and an interactive discussion with the audience ensued. The Science of Privacy panel included Adam Tagert (DoD), Travis Breaux (CMU), Munindar Singh (NCSU), Serge Egelman (Cal-Berkeley), and Lorrie Cranor (CMU), acting as moderator.
In introducing the panel, Bill Scherlis stated that privacy is “an issue of the day” and not as far along in its conceptual development as the Science of Security. This discussion was intended to stimulate thinking and discussion within the community about privacy and its relationship to cybersecurity.
In her opening remarks, Lorrie Cranor said it is typical to have a panel and also for panels to disagree about definition of privacy. The topic is current, but there is also a body of work about the nature of privacy, primarily from legal scholars and behaviorists.
Serge Egelman identified http://teachingprivacy.org as a source of materials for teachers and described plans for developing MOOC courseware. The material cited includes 10 principles. Egelman stated that one shouldn’t expect to find a single absolute definition of privacy; researchers should simply define how they are using the term for their specific work.
Adam Tagert stated that privacy is about information and not about the technical aspects. Considerations of privacy are about setting up rules so that information can be provided in just enough detail: data minimization.
Munindar Singh said that privacy, like security, is always about a human and correlates to human identities.
Travis Breaux offered a pluralistic view, suggesting that privacy has many different definitions; the researcher’s job is simply to state which one is being used. Privacy, he added, is trust-based: sharing information between two parties is one thing, but adding a third party creates a much greater problem. There is a need to focus.
An audience member asked: why didn’t anyone say it is about expectations? How do we reconcile these with other expectations? How do you guarantee these expectations are met?
Lorrie Cranor noted that the P3P project [n.b. The Platform for Privacy Preferences Project (P3P) enables websites to express their privacy practices in a standard format that can be retrieved automatically and interpreted easily by user agents; 2007] assumed the goal was to restrict the flow of information, but companies thought it was not about restricting data flows but about controlling or managing those flows. She countered with another open question: what metrics are there?
Serge Egelman: one metric is expectations; in the lab, give tests on privacy attitudes
Travis Breaux: We have to be able to measure privacy knowing that each member of the population has different privacy expectations. Work on design requirements shows that goals sometimes conflict. Systems are changing as well as users, so we will need to collect data for analysis.
Munindar Singh: measure reactions.
Adam Tagert: being transparent is really hard. With privacy, there is a need to get to a point where we can identify the privacy thresholds.
Travis Breaux: There is a lack of knowledge about predictability in current methods such as k-anonymity.
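To make the k-anonymity reference concrete: a dataset is k-anonymous when every combination of quasi-identifier values appears at least k times. The following is a minimal sketch of that check on invented records, offered here as an editorial illustration rather than anything presented by the panel.

```python
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """True if every quasi-identifier combination occurs at least k times."""
    combos = Counter(tuple(row[q] for q in quasi_identifiers) for row in rows)
    return all(count >= k for count in combos.values())

records = [
    {"zip": "15213", "age_band": "20-29", "diagnosis": "flu"},
    {"zip": "15213", "age_band": "20-29", "diagnosis": "cold"},
    {"zip": "15217", "age_band": "30-39", "diagnosis": "flu"},
]
print(is_k_anonymous(records, ["zip", "age_band"], k=2))  # False: one combo is unique
```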
Audience member: what is privacy about? Negative things happening. So we need to know what the outcomes are, what the consequences are, the likelihood of certain outcomes, and what the final things are that people care about.
Lorrie Cranor: privacy is context dependent.
Bill Scherlis: if there are no models, how can you figure out what is acceptable and what is not? Privacy is not monetized.
Travis Breaux: there is a variable market for information.
Audience member: back to expectations: how do we measure them, and are the formalisms we have created aligned with people’s expectations?
Serge Egelman: What is the difference between privacy and security? Is there any reason to care?
Carl Landwehr asked whether a science of privacy is possible without a definition.
Serge Egelman said we need consistent definitions and consistent metrics.
Munindar Singh suggested we need to scope the problem.
Jono Spring countered by asking whether SoS is all about definitions.
Wrap-up comments:
Adam Tagert: we need to get to a scientific understanding of it.
Serge Egelman: We need to do more to understand and inform expectations.
Munindar Singh: Privacy is about norms.
Travis Breaux: We can use engineering to “suss” out the problem and use that information to inform the science.
(ID#: 15-6137)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Technical Papers Presentations Quarterly Lablet Meeting at CMU July 2015 |
The Science of Security (SoS) quarterly Lablet meeting, sponsored by NSA, was hosted by the Carnegie Mellon University Lablet on July 14 and 15, 2015. Quarterly meetings are held to share research, coordinate, present interim findings, and stimulate thought and discussion about the Science of Security. Technical papers were presented on subjects related to the five Hard Problems in the Science of Security. Individual researchers from each Lablet and their teams presented materials from their ongoing work.
Alain Forget (CMU) “Early Results from the Security Behavior Observatory: An Infrastructure for Long Term Monitoring of Client Machines”
The Security Behavior Observatory (SBO) consists of a large panel of home users’ computers used for a longitudinal study of security behavior. The research goals are to collect data over a long period of time, to provide usage data for multiple research domains, and to gain ecologically valid insights into the most pressing usable-security challenges users face. Since it began a year ago, the SBO has met a number of milestones, including client development, secure server architecture, participant recruitment, and data ingestion.
The SBO now has about 50 clients, though the researchers had expected recruiting to be easier and numbers to be larger. Preliminary results identified the prevalence of unwanted software: they found roughly 3,000 distinct software product names on the client machines, including malware and suspicious software. They advised clients to use http://shouldiremoveit.com to identify malware. The researchers coded software as “suspicious” if it was adware, had a negative online reputation, changed settings, disguised itself, or was difficult to remove or uninstall. Using this rubric, they classified 18 programs as non-suspicious, 27 as malware, and 16 as suspicious. The SBO has looked only at Windows clients so far.
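As a hypothetical reconstruction of how such a rubric could be automated (not the SBO team’s actual coding tool), a program can be flagged as suspicious whenever it exhibits any of the listed traits; the trait names below are invented for the example.

```python
# Hypothetical trait labels mirroring the rubric described above.
SUSPICIOUS_TRAITS = {
    "adware", "negative_reputation", "changes_settings",
    "disguises_itself", "hard_to_remove",
}

def classify(program_traits: set[str], known_malware: bool = False) -> str:
    # Rough rubric: known malware first, then any suspicious trait, else benign.
    if known_malware:
        return "malware"
    if program_traits & SUSPICIOUS_TRAITS:
        return "suspicious"
    return "non-suspicious"

print(classify({"adware", "free_game"}))  # "suspicious"
print(classify({"free_game"}))            # "non-suspicious"
```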
(ID#: 15-6138)
Arbob Ahmad et al. (CMU) “Declassification and Authorization in Epistemic Logic”
This work addresses the hard problem of security policy and governance from the viewpoint of formal logic. Defining epistemic logic as a form of logic for reasoning about knowledge, the researchers aver that knowledge transfers can be extrapolated from a trace of the relevant reads and writes from memory. Non-interference is repositioned as the adequacy of the epistemic model of information flow, and non-interference and epistemic logic are compared in the study.
For example, authorized declassification permits limited violations of non-interference, provided a proof authorizing the violation is supplied according to some authorization policy. In practice this could be verified by a cryptographic key from an authorized authority; for example, a doctor may consent to the release of a medical record.
They argue that non-interference is too restrictive: it does not permit limited disclosure of information while the rest remains secret, it does not account for the authorization proofs that are essential to many policies, and its statement says, in effect, “if a low observer sees a high input, then false.” They want to generalize this to say that if a low observer sees a high input, then there is an authorization proof permitting that particular flow of knowledge.
In contrast, epistemic logic is a form of logic for reasoning about knowledge using common connectives, such as implication (A ⊃ B) and conjunction (A ∧ B), plus a knowledge modality [k]A, read “k knows A.” Knowledge transfers are extrapolated from a trace of the relevant reads and writes from memory, and the epistemic logic used is taken from the work of DeYoung and Pfenning (2009).
The authors provide an approach that combines epistemic logic and authorization logic. They are both embedded in the same logic so that a proof that a flow of knowledge is possible also includes the proof that it is authorized.
They conclude that epistemic logic can directly reason about how knowledge is transferred through the execution of a program, so that non-interference is repositioned as the adequacy of the epistemic model. Epistemic logic can also express the consequences of authorized declassification, so that the derivation that a flow of knowledge requiring declassification takes place includes the proof that the declassification is authorized.
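One way to picture the generalization the authors describe, sketched far more crudely than their epistemic logic: a flow from high-labeled data to a low observer is rejected unless it carries an authorization proof. The token-based “proof” below is an invented stand-in for the paper’s formal derivations.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuthProof:
    authority: str   # e.g., a doctor consenting to release
    source: str      # labeled datum being declassified
    recipient: str   # low observer permitted to learn it

TRUSTED_AUTHORITIES = {"doctor", "patient"}

def flow_allowed(source_label, recipient_clearance, proof=None):
    # Plain non-interference: low data, or a high recipient, is always fine...
    if source_label == "low" or recipient_clearance == "high":
        return True
    # ...a high-to-low flow needs an authorization proof from a trusted authority.
    # (A real checker would also verify the proof names this exact flow.)
    return proof is not None and proof.authority in TRUSTED_AUTHORITIES

record_release = AuthProof("doctor", "medical_record", "insurer")
print(flow_allowed("high", "low"))                        # False
print(flow_allowed("high", "low", proof=record_release))  # True
```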
(ID#: 15-6139)
Javier Cámara et al. (CMU) “Reasoning about Human Involvement in Self-Protecting Systems”
Addressing the hard problem of human behavior, the authors compare human oversight with fully automated approaches to system security. The problems, they say, are that modern software systems operate in constantly changing environments and that security must respond to a constant stream of new threats and vulnerabilities. Human oversight has scalability and timeliness issues. Current approaches to self-protection are agnostic to system specifics, are threat-specific, and ignore business context; application-level approaches are often designed as part of the system itself.
Their approach is to formally reason about human participation in adaptation; to reason about security in the context of other business concerns; to discriminate situations that should involve humans; and to focus on actuation. They conclude that humans are “better at providing context for protection mechanisms” than fully automated systems.
(ID#: 15-6140)
Kathleen M. Carley, et al. (CMU SEI CERT) “Characterizing Insider Threats Using Network Analytics”
This presentation showed an analysis of two case studies using network analytics and semi-automated metadata extracted from texts. The case studies were drawn from SEI CERT.
In the first case, metadata was extracted from texts semi-automatically and coded from the perspective of a “spy”: the roles of actors were coded and the attributes of a “spy” determined, using, for example, PFC Manning as a lone-wolf case and John Walker, by contrast, as one running a spy ring. In the second case, ENRON, data from network anomaly detection was used; there, covert actors were not top actors but interstitial ones, hiding in plain sight. From these cases, the researchers concluded that inadvertent leaks go down when an organization is under cyber-attack, which led to the hypothesis that insider threats are more likely in mesh and hierarchical structures and decline during attacks.
They determined there are differences in the ego networks of covert actors across the insider-threat, lone-wolf, and gang examples. This research shows patterns of behavior that can inform network analytics using machine learning. Their conclusions indicate it is hard to distinguish “good” employees from “threat” employees.
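Ego-network differences of this kind are commonly computed with standard graph tooling. The sketch below, using the networkx library with invented actors and features (not CERT’s actual metrics), compares the size and density of two actors’ ego networks.

```python
import networkx as nx

def ego_features(graph: nx.Graph, actor) -> dict:
    """Size and density of an actor's ego network (the actor plus its neighbors)."""
    ego = nx.ego_graph(graph, actor)
    return {
        "actor": actor,
        "size": ego.number_of_nodes(),
        "density": nx.density(ego),
    }

# Invented communication graph: a ring-leader-like actor and a lone-wolf-like one.
g = nx.Graph([
    ("ringleader", "a"), ("ringleader", "b"), ("ringleader", "c"),
    ("a", "b"),          # ring members also talk to each other
    ("lone_wolf", "d"),  # lone wolf has a single contact
])
print(ego_features(g, "ringleader"))  # size 4, density ~0.67
print(ego_features(g, "lone_wolf"))   # size 2
```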
(ID#: 15-6141)
Serge Egelman, UC Berkeley / ICSI (CMU team) “Individualizing Privacy and Security Mechanisms”
According to the presenter, systems are now designed for “the user”, who is “33 and has one ovary and one testicle.” [This description conveys the idea that these systems are not accurately identifying the user in a meaningful way.] In contrast, ad targeting can identify geographic, demographic, behavioral, and psychographic distinctions. From this, he infers that security can also be tailored. His hypothesis is that security mitigations can be optimized by tailoring them to the individual rather than to the “average” user.
His research demonstrates that the best approach will take into account the need for cognition, general decision-making styles, domain-specific risk attitude, the Barratt Impulsiveness Scale, and consideration of future consequences. Since there is no standardized psychometric scale in the security literature covering behavior and intentions, the team created one, labelled the Security Behavior Intentions Scale (SeBIS). SeBIS allows segmentation, and after segmentation comes customization. Targeted security mitigations may help us optimize, rather than continue to satisfice.
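Segmentation of the kind SeBIS enables is typically a clustering exercise over scale scores. The following sketch runs scikit-learn’s k-means on made-up scores for four SeBIS-style subscales; the subscale names and data are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical per-user scores on four SeBIS-style subscales
# (device securement, password generation, proactive awareness, updating).
scores = np.array([
    [4.5, 4.0, 3.8, 4.2],  # security-conscious users
    [4.2, 4.4, 4.0, 4.5],
    [1.8, 2.0, 2.2, 1.5],  # low-engagement users
    [2.0, 1.7, 2.5, 1.9],
])

segments = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores)
print(segments)  # e.g., [1 1 0 0]: two segments to target with different mitigations
```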
(ID#: 15-6142)
Favonia (CMU) “Logic Programming for Social Networking Sites”
Addressing the hard problem of human behavior, the presenter looked at modeling a social networking website in order to reason about its privacy properties, such as how IDs are generated and how lists of people are generated as IDs. To test the hypothesis, these features were built into a new logic programming language. The presenter concluded that the language has a sound and complete compilation to a known logic programming language created by DeYoung and Pfenning, and that logic programming of this kind facilitates modeling and reasoning.
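To convey the flavor of logic-programming-style privacy reasoning (a generic Datalog-like toy, not the presenter’s language), facts about friendship and post visibility can be closed under a rule such as “friends of the author may view friends-only posts”:

```python
# Datalog-flavored toy: derive who may view which posts from facts and one rule.
friends = {("alice", "bob"), ("bob", "alice"), ("alice", "carol"), ("carol", "alice")}
posts = {"p1": ("alice", "friends_only"), "p2": ("bob", "public")}

def may_view(viewer: str, post_id: str) -> bool:
    author, visibility = posts[post_id]
    if visibility == "public" or viewer == author:
        return True
    # Rule: friends_only posts are visible to the author's friends.
    return visibility == "friends_only" and (author, viewer) in friends

print(may_view("bob", "p1"))   # True: bob is alice's friend
print(may_view("dave", "p1"))  # False
```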
(ID#: 15-6143)
Lindsey McGowen (NC State) “Evaluating the Science of Security”
NCSU has continuing research under way to determine ways to evaluate the quality and placement of Science of Security research. The team is developing a tool to evaluate the quality of SoS work based on custom bibliometrics, because traditional citation-based bibliometrics are not appropriate for SoS evaluation: citations are a lagging indicator in a fast-paced field, and existing databases are incomplete and sometimes inaccurate. Expert-based evaluation is instead a more appropriate and accepted method of assessment in computer science.
This work is driven by the need to assess the potential impact of SoS research on the security community through expert-based assessment of publication venues (tier ranking). The tool will allow Lablets to demonstrate what percentage of their publications appear in top-tier venues, may be used to identify venues to target for future work, and will be shared with all Lablets for optional use.
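Once expert tier rankings exist, the headline computation is simple. A minimal sketch with hypothetical venue names and tiers (not NCSU’s actual rankings):

```python
# Hypothetical expert tier ranking of venues (1 = top tier).
VENUE_TIER = {"IEEE S&P": 1, "CCS": 1, "HotSoS": 2, "RegionalSecWorkshop": 3}

def top_tier_share(publication_venues: list[str]) -> float:
    """Percentage of publications appearing in tier-1 venues."""
    top = sum(1 for v in publication_venues if VENUE_TIER.get(v) == 1)
    return 100.0 * top / len(publication_venues)

pubs = ["IEEE S&P", "HotSoS", "CCS", "RegionalSecWorkshop"]
print(f"{top_tier_share(pubs):.0f}% in top-tier venues")  # 50%
```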
(ID#: 15-6144)
Emerson Murphy-Hill (NC State) “Developers’ Adoption and Use of Security Tools”
This presentation examined the adoption and use of security tools by software developers.
They learned that only 8% of developers use security tools and asked why the other 92% do not. Their approach began with a qualitative study consisting of interviews with developers; from these, they developed quantitative surveys.
They conducted 42 interviews, each about an hour long, with developers from US companies and personal contacts. They found developers thought security mattered less internally than externally, that free tools led to adoption, and that functionality is the top concern; that is, the task is more important than security. They also found a false belief that all tools are equally effective. In the survey round, they found, for security tool use, 16 frequent users, 48 occasional users, and 130 developers who never use security tools.
From their work, they concluded that social learning, that is, seeing others use the tools, makes a big difference; security importance alone is not a factor. In addition, they found that misconfiguration is a problem: when developers do use tools, they may misuse them, e.g., Spring Security annotations in Java applied with flawed methodologies. The researchers looked at 125 repositories and found 248 misconfiguration fixes, though they also recognized it is hard to distinguish between misconfigurations and enhancements.
(ID#: 15-6145)
Bill Sanders, et al. (UIUC), “Accounting for User Behavior in Predictive Cyber Security Models”
This study provided evidence of the importance of modeling human behavior for gaining insight into security analysis and assessment. Its overall goal is the development of the Mobius-SE Quantitative Security Evaluation Tool. The Mobius-SE security evaluation approach considers the characteristics and capabilities of adversaries in order to account for user behavior and its impact on system cyber security; it considers multi-step attacks, enables trade-off comparisons among alternatives, and measures the aspects of security important to owners and operators of the system.
This software development is informed by theories of human behavior. The large challenge is turning human behavior models into executable mathematical models that can be used for analysis: descriptive theories are closer to reality but harder to quantify, while normative theories are easier to quantify but can differ from real-world behavior. Their initial case study illustrates the use of bounded rationality and deterrence theory in the context of cyber security.
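A standard device for the bounded-rationality side of this trade-off is a softmax decision rule: rather than always choosing the payoff-maximizing attack step (the normative model), the modeled adversary chooses probabilistically, with a rationality parameter controlling how close behavior is to optimal. The sketch below is a generic illustration, not the Mobius-SE implementation; the action names and payoffs are invented.

```python
import math
import random

def softmax_choice(payoffs: dict[str, float], rationality: float) -> str:
    """Pick an action with probability proportional to exp(rationality * payoff).

    A very large rationality recovers the normative best-response model;
    small values model noisier, more 'human' decision making.
    """
    weights = {a: math.exp(rationality * p) for a, p in payoffs.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for action, w in weights.items():
        cumulative += w
        if r <= cumulative:
            return action
    return action  # fallback for floating-point edge cases

attack_payoffs = {"phish_user": 0.6, "exploit_vpn": 0.4, "give_up": 0.0}
print(softmax_choice(attack_payoffs, rationality=2.0))
```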
(ID#: 15-6146)
Russ Koppel (Penn, UIUC collaborator) “Progress, Problems, Publications, Plans and Promises of the Group Studying Passwords and Cyber Security Circumvention”
The presenter described some false assumptions held by security designers: that circumventions are rare or never happen, that they come only from outside, that they reflect laziness, and that they can be solved by technology. In contrast, this study shows that people routinely circumvent security controls or make uninformed decisions. The consequence of bad decision making and misuse of controls is ubiquitous circumvention that undermines the effectiveness of systems, corrodes belief in administrators, and creates an environment of workarounds. The research challenge is to develop metrics that enable security engineers to fix a broken system; semiotics (the study of signs and symbols) is one tool that works. Developers and users create workarounds because of the perceived importance of the task, perceived authority to act outside the rules, and perceived insensitivity or misunderstanding on the part of security designers and administrators. Some conclusions: password reset is the most common call to help desks; people don’t think about credentialing within the organization, assuming instead that the threat is all external; and people are just trying to get their work done.
(ID#: 15-6147)
Dave Levin (UMD), “Analyzing Certificate Management in the Web’s PKI”
Revocation is an issue in PKI distribution: if a compromised certificate is not revoked, it can continue to be accepted as valid. If the Certification Authority (CA) revokes the invalid certificate, the problem is solved, but in general certificates are not revoked, and many problems follow. The browser is supposed to check the certificate revocation list periodically to verify validity.
The researchers are looking at revocation in three ways: whether administrators revoke certificates when they should, whether browsers check the revocations, and what the hosting provider’s role is. Their data shows problems with reissuance of the same key: patched = 93%; revocations = 13%; reissued = 27%. The data suggest administrators aren’t doing what the PKI needs them to do. CAs have an incentive to reissue with the old key and a disincentive to revoke, since the storage and issuance of a new key costs the CAs. People aren’t revoking. For more, go to http://securepki.org.
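The browser-side check described above amounts to looking up a certificate’s serial number in the CA’s published CRL. A minimal sketch using the pyca/cryptography library, assuming PEM-encoded files on disk and omitting the signature and freshness checks a real client would perform:

```python
from cryptography import x509

def is_revoked(cert_pem: bytes, crl_pem: bytes) -> bool:
    """True if the certificate's serial number appears in the CRL."""
    cert = x509.load_pem_x509_certificate(cert_pem)
    crl = x509.load_pem_x509_crl(crl_pem)
    # A real client would also verify the CRL's signature and check its freshness.
    return crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None

# Hypothetical file names for illustration.
with open("site.pem", "rb") as c, open("ca.crl.pem", "rb") as r:
    print(is_revoked(c.read(), r.read()))
```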
(ID#: 15-6148)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Publications of Interest |
The Publications of Interest section contains bibliographical citations, abstracts if available and links on specific topics and research problems of interest to the Science of Security community.
How recent are these publications?
These bibliographies include recent scholarly research on topics which have been presented or published within the past year. Some represent updates from work presented in previous years, others are new topics.
How are topics selected?
The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness for current researchers.
How can I submit or suggest a publication?
Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.
Submissions and suggestions may be sent to: news@scienceofsecurity.net
(ID#:15-5931)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Analogical Transfer, 2014 |
Analogical transfer is a theory in psychology concerned with overcoming fixed ways of viewing particular problems or objects. In security, one manifestation of this problem is that system developers and administrators overlook critical security requirements for lack of tools and techniques that allow them to tailor security knowledge to their particular context. The works cited here use analogy and simulations to achieve breakthrough thinking. The topic relates to the hard problem of human factors in the Science of Security. These works were presented in 2014.
Ashwini Rao, Hanan Hibshi, Travis Breaux, Jean-Michel Lehker, Jianwei Niu; “Less Is More?: Investigating The Role Of Examples In Security Studies Using Analogical Transfer;” HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 7. Doi: 10.1145/2600176.2600182
Abstract: Information system developers and administrators often overlook critical security requirements and best practices. This may be due to lack of tools and techniques that allow practitioners to tailor security knowledge to their particular context. In order to explore the impact of new security methods, we must improve our ability to study the impact of security tools and methods on software and system development. In this paper, we present early findings of an experiment to assess the extent to which the number and type of examples used in security training stimuli can impact security problem solving. To motivate this research, we formulate hypotheses from analogical transfer theory in psychology. The independent variables include number of problem surfaces and schemas, and the dependent variable is the answer accuracy. Our study results do not show a statistically significant difference in performance when the number and types of examples are varied. We discuss the limitations, threats to validity and opportunities for future studies in this area.
Keywords: analogical transfer, human factors, psychology, security (ID#: 15-5697)
URL: http://doi.acm.org/10.1145/2600176.2600182
Lixiu Yu, Aniket Kittur, Robert E. Kraut; “Distributed Analogical Idea Generation: Inventing With Crowds;” CHI '14 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, April 2014, Pages 1245-1254. Doi: 10.1145/2556288.2557371
Abstract: Harnessing crowds can be a powerful mechanism for increasing innovation. However, current approaches to crowd innovation rely on large numbers of contributors generating ideas independently in an unstructured way. We introduce a new approach called distributed analogical idea generation, which aims to make idea generation more effective and less reliant on chance. Drawing from the literature in cognitive science on analogy and schema induction, our approach decomposes the creative process in a structured way amenable to using crowds. In three experiments we show that distributed analogical idea generation leads to better ideas than example-based approaches, and investigate the conditions under which crowds generate good schemas and ideas. Our results have implications for improving creativity and building systems for distributed crowd innovation.
Keywords: analogy, creativity, crowdsourcing, innovation, schema (ID#: 15-5698)
URL: http://doi.acm.org/10.1145/2556288.2557371
Lixiu Yu, Aniket Kittur, Robert E. Kraut; “Searching For Analogical Ideas with Crowds;” CHI '14 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, April 2014, Pages 1225-1234. Doi: 10.1145/2556288.2557378
Abstract: Seeking solutions from one domain to solve problems in another is an effective process of innovation. This process of analogy searching is difficult for both humans and machines. In this paper, we present a novel approach for re-presenting a problem in terms of its abstract structure, and then allowing people to use this structural representation to find analogies. We propose a crowdsourcing process that helps people navigate a large dataset to find analogies. Through two experiments, we show the benefits of using abstract structural representations to search for ideas that are analogous to a source problem, and that these analogies result in better solutions than alternative approaches. This work provides a useful method for finding analogies, and can streamline innovation for both novices and professional designers.
Keywords: analogy searching, creativity, crowdsourcing, schema (ID#: 15-5699)
URL: http://doi.acm.org/10.1145/2556288.2557378
Ian Dunwell, Sara de Freitas, Panagiotis Petridis, Maurice Hendrix, Sylvester Arnab, Petros Lameras, Craig Stewart; “A Game-Based Learning Approach To Road Safety: The Code Of Everand;” CHI '14 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, April 2014, Pages 3389-3398. Doi: 10.1145/2556288.2557281
Abstract: Game and gamification elements are increasingly seeing use as part of interface designs for applications seeking to engage and retain users whilst transferring information. This paper presents an evaluation of a game-based approach seeking to improve the road safety behaviour amongst children aged 9-15 within the UK, made available outside of a classroom context as an online, browser-based, free-to-play game. The paper reports on data for 99,683 players over 315,882 discrete logins, supplemented by results from a nationally-representative survey of children at UK schools (n=1,108), an incentivized survey of the player-base (n=1,028), and qualitative data obtained through a series of one-to-one interviews aged 9-14 (n=28). Analysis demonstrates the reach of the game to its target demographic, with 88.13% of players within the UK. A 3.94 male/female ratio was observed amongst players surveyed, with an age distribution across the target range of 9-15. Noting mean and median playtimes of 93 and 31 minutes (n=99,683), it is suggested such an approach to user engagement and retention can surpass typical contact times obtained through other forms of web-based content. The size of the player-base attracted to the game and players' qualitative feedback demonstrates the potential for serious games deployed on a national scale.
Keywords: attitudinal change, e-learning, game-based interfaces, gamification, road safety, serious games (ID#: 15-5700)
URL: http://doi.acm.org/10.1145/2556288.2557281
Dina Zayan, Michał Antkiewicz, Krzysztof Czarnecki; “Effects Of Using Examples On Structural Model Comprehension: A Controlled Experiment;” ICSE 2014 Proceedings of the 36th International Conference on Software Engineering, May 2014, Pages 955-966. Doi: 10.1145/2568225.2568270
Abstract: We present a controlled experiment for the empirical evaluation of Example-Driven Modeling (EDM), an approach that systematically uses examples for model comprehension and domain knowledge transfer. We conducted the experiment with 26 graduate and undergraduate students from electrical and computer engineering (ECE), computer science (CS), and software engineering (SE) programs at the University of Waterloo. The experiment involves a domain model, with UML class diagrams representing the domain abstractions and UML object diagrams representing examples of using these abstractions. The goal is to provide empirical evidence of the effects of suitable examples in model comprehension, compared to having model abstractions only, by having the participants perform model comprehension tasks. Our results show that EDM is superior to having model abstractions only, with an improvement of 39% for diagram completeness, 30% for questions completeness, 71% for efficiency, and a reduction of 80% for the number of mistakes. We provide qualitative results showing that participants receiving model abstractions augmented with examples experienced lower perceived difficulty in performing the comprehension tasks, higher perceived confidence in their tasks' solutions, and asked fewer clarifying domain questions, a reduction of 90%. We also present participants' feedback regarding the usefulness of the provided examples, their number and types, as well as, the use of partial examples.
Keywords: EDM, Structural domain model comprehension, abstraction, controlled experiment, example, example-driven modeling (ID#: 15-5701)
URL: http://doi.acm.org/10.1145/2568225.2568270
Paul André, Aniket Kittur, Steven P. Dow; “Crowd Synthesis: Extracting Categories And Clusters From Complex Data;” CSCW '14 Proceedings of the 17th ACM Conference On Computer Supported Cooperative Work & Social Computing, February 2014, Pages 989-998. Doi: 10.1145/2531602.2531653
Abstract: Analysts synthesize complex, qualitative data to uncover themes and concepts, but the process is time-consuming, cognitively taxing, and automated techniques show mixed success. Crowdsourcing could help this process through on-demand harnessing of flexible and powerful human cognition, but incurs other challenges including limited attention and expertise. Further, text data can be complex, high-dimensional, and ill-structured. We address two major challenges unsolved in prior crowd clustering work: scaffolding expertise for novice crowd workers, and creating consistent and accurate categories when each worker only sees a small portion of the data. To address these challenges we present an empirical study of a two-stage approach to enable crowds to create an accurate and useful overview of a dataset: A) we draw on cognitive theory to assess how re-representing data can shorten and focus the data on salient dimensions; and B) introduce an iterative clustering approach that provides workers a global overview of data. We demonstrate a classification-plus-context approach elicits the most accurate categories at the most useful level of abstraction.
Keywords: categorization, classification, clustering, crowd, synthesis (ID#: 15-5702)
URL: http://doi.acm.org/10.1145/2531602.2531653
Susan G. Campbell, J. Isaiah Harbison, Petra Bradley, Lelyn D. Saner; “Cognitive Engineering Analysis Training: Teaching Analysts To Use Expert Knowledge Structures As A Tool To Understanding;” HCBDR '14 Proceedings of the 2014 Workshop on Human Centered Big Data Research, April 2014, Pages 9. Doi: 10.1145/2609876.2609879
Abstract: One of the challenges of using big data to produce useful intelligence is that the task of intelligence analysis is hard to conceptualize and to learn. This extended abstract from the Human-Centered Big Data Research workshop describes a research program for eliciting experts' representations of problems in intelligence analysis and transferring those representations to other analysts. The program has five steps: (1) identify experts, (2) elicit experts' mental models, (3) represent experts' mental models, (4) create training to teach those mental models, and (5) include those mental models in tools designed to help analysts. Similar types of training, which use cognitive task analysis to produce curricula that allow novices to perform tasks using methods derived from expert performance, have been successful in other cognitively complex domains. We propose a way to use this kind of elicitation and training to extend expertise in intelligence analysis.
Keywords: Big Data, cognitive engineering, expert performance, expert representation, human cognition, instructional design, intelligence analysis, interface design, mental models (ID#: 15-5703)
URL: http://doi.acm.org/10.1145/2609876.2609879
Susannah B. F. Paletz; “Multidisciplinary Teamwork and Big Data;” HCBDR '14 Proceedings of the 2014 Workshop on Human Centered Big Data Research, April 2014, Pages 32. Doi: 10.1145/2609876.2609884
Abstract: In this presentation, I discuss four constructs vital to successful multidisciplinary teamwork: shared mental models, communicating unique information, conflict, and analogy. I highlight the literature and provide lessons learned for each.
Keywords: Teams, analogy, communication, conflict, disagreement, shared mental models, teamwork, unique information, unshared information (ID#: 15-5704)
URL: http://doi.acm.org/10.1145/2609876.2609884
Julie S. Hui, Michael D. Greenberg, Elizabeth M. Gerber; “Understanding the Role of Community In Crowdfunding Work;” CSCW '14 Proceedings of the 17th ACM Conference On Computer Supported Cooperative Work & Social Computing, February 2014, Pages 62-74. Doi: 10.1145/2531602.2531715
Abstract: Crowdfunding provides a new opportunity for entrepreneurs to launch ventures without having to rely on traditional funding mechanisms, such as banks and angel investing. Despite its rapid growth, we understand little about how crowdfunding users build ad hoc online communities to undertake this new way of performing entrepreneurial work. To better understand this phenomenon, we performed a qualitative study of 47 entrepreneurs who use crowdfunding platforms to raise funds for their projects. We identify community efforts to support crowdfunding work, such as providing mentorship to novices, giving feedback on campaign presentation, and building a repository of example projects to serve as models. We also identify where community efforts and technologies succeed and fail at supporting the work in order to inform the design of crowdfunding support tools and systems.
Keywords: community, crowd work, distributed work, entrepreneurship, crowdfunding, support tools (ID#: 15-5705)
URL: http://doi.acm.org/10.1145/2531602.2531715
Elie Raad, Joerg Evermann; “Is Ontology Alignment Like Analogy?: Knowledge Integration with LISA;” SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 294-301. Doi: 10.1145/2554850.2554853
Abstract: Ontologies are formal descriptions of a domain. With the growth of the semantic web, an increasing number of related ontologies with overlapping domain coverage are available. Their integration requires ontology alignment, a determination of which concepts in a source ontology are like concepts in a target ontology. This paper presents a novel approach to this problem by applying analogical reasoning, an area of cognitive science that has seen much recent work, to the ontology alignment problem. We investigate the performance of the LISA cognitive analogy algorithm and present results that show its performance relative to other algorithms.
Keywords: LISA, alignment, analogy, cognition, ontology (ID#: 15-5706)
URL: http://doi.acm.org/10.1145/2554850.2554853
Jens Kaasbøll; “Suitability of Diagrams for IT User Learning;” ISDOC '14 Proceedings of the International Conference on Information Systems and Design of Communication, May 2014, Pages 56-62. Doi: 10.1145/2618168.2618177
Abstract: Training and user documentation aim at people being able to use IT when returning from training. Such transfer is in general difficult to achieve. Based on a model of IT use learning, two types of diagrams in documentation were compared in a field study; instructions showing the sequence of how to carry out an operation by means of screen shots and structural models showing data structures without user interface elements. Instructions were in general favoured. Even if the instructions only to a small extent were presented with projector during training, the trainees stated that they learnt a lot from these presentations. The learning outcome might have been the existence of an operation and where in the software to locate it. While primarily intended as help for understanding data structures, the trainees also used structural models as guides for carrying out operations. Instructions in particular, but also structural models were utilised by the trainees after the training sessions, hence helping transfer. Trainers should include both types of models in courses.
Keywords: competence, conceptual model, mental model, skill, training, transfer, understanding, user documentation (ID#: 15-5707)
URL: http://doi.acm.org/10.1145/2618168.2618177
Carine Lallemand, Kerstin Bongard-Blanchy, Ioana Ocnarescu; “Enhancing the Design Process by Embedding HCI Research into Experience Triggers;” Ergo'IA '14 Proceedings of the 2014 Ergonomie et Informatique Avancée Conference - Design, Ergonomie et IHM: Quelle Articulation Pour La Co-Conception De L'interaction, October 2014, Pages 41-48. Doi: 10.1145/2671470.2671476
Abstract: Over the last decade, User Experience (UX) has become a core concept in the field of Human-Computer Interaction (HCI). Beyond the fact of understanding and assessing the User Experience derived from the use of interactive systems, practitioners and researchers from a wide range of disciplines are now facing the challenges of designing for User Experience. Some authors have pinpointed the existence of a gap between the theoretical knowledge developed in HCI Research and the practical knowledge actually used by designers to create rich experiences with interactive artefacts. A special focus of this paper is to translate theoretical work into experiential objects (or situations) called "Experience Triggers" [1]. Through their materiality, these artefacts bring emotions and sensations to the design process and designers can immerge into and understand the theories on experience. As a consequence of this immersion, the final product designed by the team is assumed to be more experiential. Experience Triggers are introduced here as a new tool for science-based UX design.
Keywords: HCI research, design, experience triggers, materiality, science-based design, user experience (ID#: 15-5708)
URL: http://doi.acm.org/10.1145/2671470.2671476
Michael Nebeling, Matthias Geel, Moira C. Norrie; “Engineering Information Management Tools by Example;” AVI '14 Proceedings of the 2014 International Working Conference on Advanced Visual Interfaces, May 2014, Pages 313-320. Doi: 10.1145/2598153.2598164
Abstract: While there are many established methodologies for information systems development, designing by example has not been formally explored and applied previously. Our work is also motivated by the desire to explore interface-driven development techniques that could complement existing approaches such as model-driven engineering with the goal of reducing the need for modelling and reengineering of existing applications and interfaces, while still supporting the development task. We explore the example-based technique for rapid development of powerful and flexible information management tools based on the example of Adobe Photoshop Lightroom, a system that was originally designed to support the workflow of digital photographers in a flexible way. We analyse experiments in which two new systems---one for managing collections of research papers and another for software project management---were developed based on the Lightroom paradigm. We derive a conceptual framework for engineering by example and assess the method by comparing it to traditional model-driven engineering.
Keywords: engineering by example, lightroom paradigm (ID#: 15-5709)
URL: http://doi.acm.org/10.1145/2598153.2598164
Hannu Jaakkola, Timo Mäkinen, Anna Eteläaho; “Open Data: Opportunities and Challenges;” CompSysTech '14 Proceedings of the 15th International Conference on Computer Systems and Technologies, June 2014, Pages 25-39. Doi: 10.1145/2659532.2659594
Abstract: Open data is seen as a promising source of new business, especially in the SME sector, in the form of new products, services and innovative solutions. High importance is seen also in fostering citizens' participation in political and social life and increasing the transparency of public authorities. The forerunners of the open data movement in the public sector are the USA and the UK, which started to open their public data resources in 2009. The first European Union open data related directive was drawn up as early as 2003; however progress in putting the idea into practice has been slow and adoptions by the wider member states are placed in the early 2010s. The beneficial use of open data in real applications has progressed hand in hand with the improvement of other ICT-related technologies. The (raw) data itself has no high value. The economic value comes from a balanced combination of high quality open (data) resources combined with the related value chain. This paper builds up a "big picture" of the role of open data in current society. The approach is analytical and it clarifies the topic from the viewpoints of both opportunities and challenges. The paper covers both general aspects related to open data and results of the research and regional development project conducted by the authors.
Keywords: big data, data analysis, networking, open data, public data (ID#: 15-5710)
URL: http://doi.acm.org/10.1145/2659532.2659594
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Artificial Intelligence and Privacy, 2014 |
John McCarthy coined the term "Artificial Intelligence" in 1955 and defined it as "the science and engineering of making intelligent machines" (as quoted in Poole, Mackworth & Goebel, 1998). AI research is highly technical and specialized, and has been characterized as "deeply divided into subfields that often fail to communicate with each other" (McCorduck, Pamela (2004), Machines Who Think, 2nd ed.). These divisions are attributed to both technical and social factors. The research cited here looks at the privacy implications of artificial intelligence, especially as applied to data mining. The work cited here was presented in 2014.
Jiajun Sun; Huadong Ma, "Privacy-Preserving Verifiable Incentive Mechanism For Online Crowdsourcing Markets," Computer Communication and Networks (ICCCN), 2014 23rd International Conference on, pp. 1-8, 4-7 Aug. 2014. doi: 10.1109/ICCCN.2014.6911794
Abstract: Mobile crowdsourcing is a new paradigm which leverages pervasive smartphones to efficiently collect and upload data, enabling numerous novel applications. Recently, a class of new mechanisms have been proposed to determine near-optimal prices of sensing tasks for online crowdsourcing markets, where users arrive online and the crowdsourcer has budget constraints. In particular, the mechanisms can motivate extensive users to participate in online crowdsourcing markets. Although it is so promising in real-life environments, there still exist many security and privacy challenges. In this paper, we present a heterogeneous-user based privacy-preserving verifiable incentive mechanism for online crowdsourcing markets with the budget constraint, not only to explore how to protect the privacy of the bids, selection preferences, and identity from participants, but also to make the verifiable payment between the crowdsourcer (the crowdsourcing organizer) and online sequential arrival users. Results indicate that our privacy-preserving verifiable mechanisms achieve the same results as the generic one without privacy preservation.
Keywords: data privacy; mobile computing; outsourcing; security of data; smart phones; budget constraint; heterogeneous-user based privacy-preserving verifiable incentive mechanism; mobile crowdsourcing; near-optimal prices; online crowdsourcing markets; online sequential arrival users; pervasive smartphones; privacy challenges; security challenges; sensing tasks; Artificial intelligence; Crowdsourcing; Mobile communication; Privacy; Public key; incentive mechanism; online crowdsourcing markets; privacy preservation; security verification (ID#: 15-5711)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911794&isnumber=6911704
YoungSae Kim; JinHee Han; YongSung Jeon, "Design And Requirements For Video Encryption In Intelligent Surveillance System," Information and Communication Technology Convergence (ICTC), 2014 International Conference on, pp. 763-764, 22-24 Oct. 2014. doi: 10.1109/ICTC.2014.6983281
Abstract: This paper presents the design and requirements of effective video encryption for intelligent surveillance systems. For this purpose, we design a new video encryption system and derive requirements for it in order to protect privacy and harm in surveillance videos.
Keywords: cryptography; data privacy; video surveillance; derive requirements; intelligent surveillance system; privacy protection; surveillance videos; video encryption; Artificial intelligence; Encryption; Event detection; Object detection; Privacy; Surveillance; intelligent surveillance; video classification; video encryption (ID#: 15-5712)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6983281&isnumber=6983064
Zuxing Li; Oechtering, T.J.; Jaldén, J., "Parallel Distributed Neyman-Pearson Detection With Privacy Constraints," Communications Workshops (ICC), 2014 IEEE International Conference on, pp. 765-770, 10-14 June 2014. doi: 10.1109/ICCW.2014.6881292
Abstract: In this paper, the privacy problem of a parallel distributed detection system vulnerable to an eavesdropper is proposed and studied in the Neyman-Pearson formulation. The privacy leakage is evaluated by a metric related to the Neyman-Pearson criterion. We will show that it is sufficient to consider a deterministic likelihood-ratio test for the optimal detection strategy at the eavesdropped sensor. This fundamental insight helps to simplify the problem to find the optimal privacy-constrained distributed detection system design. The trade-off between the detection performance and privacy leakage is illustrated in a numerical example.
Keywords: data privacy; maximum likelihood detection; parallel algorithms; telecommunication security; wireless sensor networks; deterministic likelihood ratio test; eavesdropped sensor; optimal privacy constrained distributed detection system design; parallel distributed Neyman-Pearson detection; privacy leakage evaluation; Artificial intelligence; Wireless communication (ID#: 15-5713)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881292&isnumber=6881162
Yuangang Yao; Xiaoyu Ma; Hui Liu; Jin Yi; Xianghui Zhao; Lin Liu, "A Semantic Knowledge Base Construction Method for Information Security," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 803-808, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.106
Abstract: Information security contains many concepts and knowledge entities. As the development of information technology, the complexity of increasing information security knowledge need an overview representation and organization for security analysis and risk evaluation. Ontology as a formal and shareable semantic model, which is often used to define domain knowledge schema, can also be applied for information security knowledge base construction. In this paper, we propose ontology knowledge base construction method for information security, discuss the ontology construction processes, and design the knowledge schema. The ontology contains main concepts in information security and related properties and relations about these concepts with semantics. It supplies related information, such as assets and weakness, to security management and analysis applications. We introduce each step of the proposed method, and valid it using a practical information security knowledge base development.
Keywords: knowledge based systems; ontologies (artificial intelligence); risk analysis; security of data; formal semantic model; information security analysis; information security knowledge base construction; information security knowledge base development; information technology; knowledge entities; ontology construction processes; ontology knowledge base construction method; risk evaluation; security analysis applications; security management applications; semantic knowledge base construction method; shareable semantic model; Data mining; Information security; Knowledge based systems; Ontologies; Semantics; information security; knowledge base; ontology construction; semantic web (ID#: 15-5714)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011330&isnumber=7011202
Yuxuan Luo; Jianjiang Feng; Jie Zhou, "Fingerprint Matching Based On Global Minutia Cylinder Code," Biometrics (IJCB), 2014 IEEE International Joint Conference on, pp. 1-8, Sept. 29-Oct. 2, 2014. doi: 10.1109/BTAS.2014.6996231
Abstract: Although minutia set based fingerprint matching algorithms have achieved good matching accuracy, developing a fingerprint recognition system that satisfies accuracy, efficiency and privacy requirements simultaneously remains a challenging problem. Fixed-length binary vector like IrisCode is considered to be an ideal representation to meet these requirements. However, existing fixed-length vector representations of fingerprints suffered from either low distinctiveness or misalignment problem. In this paper, we propose a discriminative fixed-length binary representation of fingerprints based on an extension of Minutia Cylinder Code. A machine learning based algorithm is proposed to mine reliable reference points to overcome the misalignment problem. Experimental results on public domain plain and rolled fingerprint databases demonstrate the effectiveness of the proposed approach.
Keywords: fingerprint identification; image matching; image representation; learning (artificial intelligence); vectors; IrisCode; fingerprint databases; fingerprint recognition system; fixed-length binary vector fingerprint representations; global minutia cylinder code; machine learning based algorithm; minutia set based fingerprint matching algorithms; privacy requirements; Abstracts; Filtering algorithms (ID#: 15-5715)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996231&isnumber=6996217
Bassily, R.; Smith, A.; Thakurta, A., "Private Empirical Risk Minimization: Efficient Algorithms and Tight Error Bounds," Foundations of Computer Science (FOCS), 2014 IEEE 55th Annual Symposium on, pp. 464-473, 18-21 Oct. 2014. doi: 10.1109/FOCS.2014.56
Abstract: Convex empirical risk minimization is a basic tool in machine learning and statistics. We provide new algorithms and matching lower bounds for differentially private convex empirical risk minimization assuming only that each data point's contribution to the loss function is Lipschitz and that the domain of optimization is bounded. We provide a separate set of algorithms and matching lower bounds for the setting in which the loss functions are known to also be strongly convex. Our algorithms run in polynomial time, and in some cases even match the optimal nonprivate running time (as measured by oracle complexity). We give separate algorithms (and lower bounds) for (ε, 0)- and (ε, δ)-differential privacy; perhaps surprisingly, the techniques used for designing optimal algorithms in the two cases are completely different. Our lower bounds apply even to very simple, smooth function families, such as linear and quadratic functions. This implies that algorithms from previous work can be used to obtain optimal error rates, under the additional assumption that the contribution of each data point to the loss function is smooth. We show that simple approaches to smoothing arbitrary loss functions (in order to apply previous techniques) do not yield optimal error rates. In particular, optimal algorithms were not previously known for problems such as training support vector machines and the high-dimensional median.
Keywords: computational complexity; convex programming; learning (artificial intelligence); minimisation; (ε, δ)-differential privacy; (ε, 0)-differential privacy; Lipschitz loss function; arbitrary loss function smoothing; machine learning; optimal nonprivate running time; oracle complexity; polynomial time; private convex empirical risk minimization; smooth function families; statistics; Algorithm design and analysis; Convex functions; Noise measurement; Optimization; Privacy; Risk management; Support vector machines (ID#: 15-5716)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6979031&isnumber=6978973
Wei Wang; Qian Zhang, "A Stochastic Game For Privacy Preserving Context Sensing On Mobile Phone," INFOCOM, 2014 Proceedings IEEE, pp. 2328-2336, April 27-May 2, 2014. doi: 10.1109/INFOCOM.2014.6848177
Abstract: The proliferation of sensor-equipped smartphones has enabled an increasing number of context-aware applications that provide personalized services based on users' contexts. However, most of these applications aggressively collect users sensing data without providing clear statements on the usage and disclosure strategies of such sensitive information, which raises severe privacy concerns and leads to some initial investigation on privacy preservation mechanisms design. While most prior studies have assumed static adversary models, we investigate the context dynamics and call attention to the existence of intelligent adversaries. In this paper, we first identify the context privacy problem with consideration of the context dynamics and malicious adversaries with capabilities of adjusting their attacking strategies, and then formulate the interactive competition between users and adversaries as a zero-sum stochastic game. In addition, we propose an efficient minimax learning algorithm to obtain the optimal defense strategy. Our evaluations on real smartphone context traces of 94 users validate the proposed algorithm.
Keywords: data privacy; learning (artificial intelligence); minimax techniques; smart phones; stochastic games; ubiquitous computing; attacking strategy; context dynamics; context privacy problem; context-aware application; disclosure strategy; intelligent adversary; interactive competition; minimax learning algorithm; mobile phone; optimal defense strategy; personalized services; privacy preservation mechanisms design; privacy preserving context sensing; sensor-equipped smartphones; static adversary model; user context; user sensing data; zero-sum stochastic game; Context; Context-aware services; Games; Privacy; Sensors; Smart phones; Stochastic processes (ID#: 15-5717)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848177&isnumber=6847911
Wenjuan Li; Weizhi Meng; Zhiyuan Tan; Yang Xiang, "Towards Designing an Email Classification System Using Multi-view Based Semi-supervised Learning," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 174-181, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.26
Abstract: The goal of email classification is to classify user emails into spam and legitimate ones. Many supervised learning algorithms have been invented in this domain to accomplish the task, and these algorithms require a large number of labeled training data. However, data labeling is a labor intensive task and requires in-depth domain knowledge. Thus, only a very small proportion of the data can be labeled in practice. This bottleneck greatly degrades the effectiveness of supervised email classification systems. In order to address this problem, in this work, we first identify some critical issues regarding supervised machine learning-based email classification. Then we propose an effective classification model based on multi-view disagreement-based semi-supervised learning. The motivation behind the attempt of using multi-view and semi-supervised learning is that multi-view can provide richer information for classification, which is often ignored by literature, and semi-supervised learning supplies with the capability of coping with labeled and unlabeled data. In the evaluation, we demonstrate that the multi-view data can improve the email classification than using a single view data, and that the proposed model working with our algorithm can achieve better performance as compared to the existing similar algorithms.
Keywords: learning (artificial intelligence); pattern classification; unsolicited e-mail; classification model; email classification system; labeled data; multiview data; multiview disagreement-based semisupervised learning; single view data; spam; unlabeled data; Data models; Electronic mail; Feature extraction; Semisupervised learning; Supervised learning; Support vector machines; Training; Email Classification; Machine Learning Applications; Multi-View; Network Security; Semi-Supervised Learning (ID#: 15-5718)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011248&isnumber=7011202
Peddinti, S.T.; Korolova, A.; Bursztein, E.; Sampemane, G., "Cloak and Swagger: Understanding Data Sensitivity through the Lens of User Anonymity," Security and Privacy (SP), 2014 IEEE Symposium on, pp. 493-508, 18-21 May 2014. doi: 10.1109/SP.2014.38
Abstract: Most of what we understand about data sensitivity is through user self-report (e.g., surveys), this paper is the first to use behavioral data to determine content sensitivity, via the clues that users give as to what information they consider private or sensitive through their use of privacy enhancing product features. We perform a large-scale analysis of user anonymity choices during their activity on Quora, a popular question-and-answer site. We identify categories of questions for which users are more likely to exercise anonymity and explore several machine learning approaches towards predicting whether a particular answer will be written anonymously. Our findings validate the viability of the proposed approach towards an automatic assessment of data sensitivity, show that data sensitivity is a nuanced measure that should be viewed on a continuum rather than as a binary concept, and advance the idea that machine learning over behavioral data can be effectively used in order to develop product features that can help keep users safe.
Keywords: data privacy; learning (artificial intelligence); Quora; automatic assessment; behavioral data; cloak; content sensitivity; data sensitivity; machine learning; privacy enhancing product features; question-and-answer site; swagger; user anonymity; user self-report; Context; Crawlers; Data privacy; Facebook; Privacy; Search engines; Sensitivity (ID#: 15-5719)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956583&isnumber=6956545
Sundaramoorthy, P.; Bhuvaneshwari, S.; Sreekrishna, M.; Selvam, M., "Ontology Based Classification Of User History In Obscured Web Search," Current Trends in Engineering and Technology (ICCTET), 2014 2nd International Conference on, pp. 258, 261, 8-8 July 2014. doi: 10.1109/ICCTET.2014.6966298
Abstract: User history includes user searches and other web activities such as searching, downloading, and extracting information. Normally, the user history is public and can be viewed by other users when the search history is not cleared. This can be avoided by signing in to the search engine account, which personalizes the search. Clicking a query in the user history takes the user to the corresponding web page. In this paper, we propose a model that relates user searches with the history at the personalized location and retrieves related information if a match is found. We present an algorithm, called the decision-making algorithm, to classify the content in the user history; the segregated results are placed into the corresponding directory. Extensive experiments demonstrate the efficiency and effectiveness of our construction.
Keywords: Internet; data privacy; decision making; ontologies (artificial intelligence); pattern classification; search engines; user interfaces; Decision making algorithm; Web page; downloading; information extraction; information retrieval; obscured Web search; ontology; search engine account; user history classification; Conferences; Data mining; History; Ontologies; Privacy; Search engines; Web search; Personalized web search; Search Engine; User history; ontology; semantic (ID#: 15-5720)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6966298&isnumber=6966253
Gokcen, Y.; Foroushani, V.A.; Zincir-Heywood, A.N., "Can We Identify NAT Behavior by Analyzing Traffic Flows?," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 132, 139, 17-18 May 2014. doi: 10.1109/SPW.2014.28
Abstract: It is shown in the literature that network address translation devices have become a convenient way to hide the source of malicious behaviors. In this research, we explore how far we can push a machine learning (ML) approach to identify such behaviors using only network flows. We evaluate our proposed approach on different traffic data sets against passive fingerprinting approaches and show that the performance of a machine learning approach is very promising even without using any payload (application layer) information.
Keywords: Internet; learning (artificial intelligence);telecommunication traffic; NAT behavior; machine learning; malicious behaviors; network address translation devices; passive fingerprinting approach; payload information; traffic flows; Browsers; Classification algorithms; Computers; Fingerprint recognition; IP networks; Internet; Payloads; Network address translation classification; machine learning; traffic analysis; traffic flows (ID#: 15-5721)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957296&isnumber=6957265
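The abstract does not list the paper's exact feature set, so the sketch below only illustrates the general recipe: train a flow-level classifier using no payload information. The synthetic features (mean packet size, duration, rate, distinct TTL count), the toy labeling rule, and the model choice are assumptions.

```python
# Hedged sketch: supervised flow classification in the spirit of the paper --
# flow-level features only (no payload); labels indicate NAT'd vs. direct traffic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic flows: [mean_pkt_size, flow_duration_s, pkts_per_s, distinct_ttl_values]
X = np.column_stack([
    rng.normal(600, 150, n), rng.exponential(5, n),
    rng.exponential(20, n), rng.integers(1, 4, n),
])
# Toy labeling rule: flows behind NAT tend to mix TTLs from multiple hosts.
y = (X[:, 3] > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
print("holdout accuracy:", clf.score(X_te, y_te))
```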
Priyadarshini, R.; Selvan, L.T.; Kanimozhi, S., "Performance Analysis Of Static Images And Solution For Dynamic Tracking In Private Cloud," Information Communication and Embedded Systems (ICICES), 2014 International Conference on, pp. 1, 6, 27-28 Feb. 2014. doi: 10.1109/ICICES.2014.7033788
Abstract: Nowadays, the World Wide Web has grown tremendously and become more complex because of the growing number of users and the content being added in varied formats. Cloud computing is used as a platform where large amounts of data or information can be stored easily on a pay-per-use model, as it offers more storage options. Hence the storage will not deal only with structured documents; unstructured documents must be stored as well. To accommodate this, there is a need to move to an unstructured database system such as MongoDB. MongoDB (from "humongous") is an open-source, scalable, high-performance, schema-free, document-oriented database. Technologies like cloud computing and SaaS (Software as a Service) are growing rapidly and replacing existing traditional applications; the disadvantages of maintaining, integrating, and acquiring traditional software are overcome by cloud applications. In this paper, a static application is first deployed into the private cloud environment as an image and its performance is evaluated. In the proposed system, a weblog is created with the feature of locating the exact source, retrieved dynamically from the web; the dynamic software application is bundled into an image and then deployed in the private cloud environment. In the existing system the search was done only on annotated words, but in the proposed system the search is done concept-wise, meaningfully, using machine learning techniques. The dynamic retrieval is done using metaheuristic techniques and performance is evaluated in the private cloud environment. Comparison between the static and dynamic applications is left as future work.
Keywords: Web sites; cloud computing; data privacy; database management systems; information retrieval; learning (artificial intelligence); MongoDB; SaaS; Weblog; World Wide Web; cloud computing; database system; document-oriented database; dynamic retrieval; dynamic tracking; machine learning; metaheuristic technique; open source database; private cloud; schema-free database; software as a service; static images; Cloud computing; Educational institutions; Electronic publishing; Information services; Semantic Web; Semantics; Automatic Metadata Extraction; Content Search; Locate Source Content; Machine Learning Algorithm; Metaheuristic Technique (ID#: 15-5722)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033788&isnumber=7033740
Divya, R.; Robin, C.R.R., "Onto-Search: An Ontology Based Personalized Mobile Search Engine," Green Computing Communication and Electrical Engineering (ICGCCEE), 2014 International Conference on, pp. 1, 4, 6-8 March 2014. doi: 10.1109/ICGCCEE.2014.6921422
Abstract: Web search is a frequent activity on Internet-connected devices, but it remains a nuisance on a mobile device, due to the default keypad and small screen, and because search results may be mostly irrelevant to the user's needs. Users need an efficient way to enter query terms and receive more precise information. In this paper, we propose a new web search personalization approach that captures the user's interests and preferences in the form of concepts by mining search results and their click-throughs. Onto-Search is based on a client-server model: heavy tasks such as training and re-ranking are done on the server, and to preserve privacy only the feature vectors are passed to the server. Location information is also taken into consideration, and GPS locations help reinforce search results. Finally, based on the derived ontology, an SVM is used for re-ranking future search results.
Keywords: Global Positioning System; Internet; client-server systems; data privacy; mobile computing; mobile handsets; ontologies (artificial intelligence);query processing; search engines; support vector machines; GPS locations; Internet connected devices; Onto-Search; Web search; Web search personalization approach; client server model; feature vectors; location information; mobile device; ontology SVM; ontology based personalized mobile search engine; precise information; privacy preservation; query terms; search result mining; user interests; user needs; user preferences; Global Positioning System; Mobile communication; Ontologies; Search engines; Servers; Vectors; Web search; Personalization; Re ranking search results; click through data; content ontology; location ontology; mobile search engine (ID#: 15-5723)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921422&isnumber=6920919
Peipei Sui; Tianyu Wo; Zhangle Wen; Xianxian Li, "Privacy Risks in Publication of Taxi GPS Data," High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, pp. 1189, 1196, 20-22 Aug. 2014. doi: 10.1109/HPCC.2014.195
Abstract: Taxis equipped with location sensing devices are increasingly popular. Such location traces can be used for traffic management, taxi dispatching, and improved city planning. However, trajectory data often contain detailed information about individuals, and disclosing such information may reveal their lifestyles, preferences, and sensitive personal information. We study the GPS data of more than 12,000 taxis in Beijing and find significant privacy risks associated with publishing taxi GPS data sets. In this paper, we first analyze the dataset along spatial and temporal dimensions. Second, we show that parking point information can re-identify anonymized trajectories of taxi drivers. Third, we find taxi GPS data can also expose passengers' privacy through origin and destination (OD) queries. As a result, more than 55% of trajectories can be re-identified with probability 1. Meanwhile, experimental results show that it is possible, using simple algorithms, to learn the destination of a target passenger from the naively anonymized GPS data.
Keywords: Global Positioning System; data privacy; learning (artificial intelligence); query processing; risk management; traffic engineering computing; Beijing; OD queries; city planning; learning; location sensing devices; location traces; naïve anonymized GPS data; origin and destination query; parking point information; privacy risk; spatial dimension; taxi GPS data publication; taxi dispatching; temporal dimensions; traffic management; Clustering algorithms; Data privacy; Global Positioning System; Privacy; Publishing; Trajectory; Vehicles; GPS data; origin and destination; parking point; privacy leakage (ID#: 15-5724)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056894&isnumber=7056577
Saravanan, M.; Thoufeeq, A.M.; Akshaya, S.; Jayasre Manchari, V.L., "Exploring New Privacy Approaches In A Scalable Classification Framework," Data Science and Advanced Analytics (DSAA), 2014 International Conference on, pp. 209, 215, Oct. 30 2014-Nov. 1 2014. doi: 10.1109/DSAA.2014.7058075
Abstract: Recent advancements in Information and Communication Technologies (ICT) enable many organizations to collect, store, and control massive amounts of various types of details of individuals from their regular transactions (credit card, mobile phone, smart meter, etc.). While using this wealth of information for personalized recommendations provides enormous opportunities for data mining (or machine learning) tasks, there is a need to address the challenge of preserving individuals' privacy when running predictive analytics on big data. Privacy Preserving Data Mining (PPDM) in these applications is particularly challenging, because it involves processing large volumes of complex, heterogeneous, and dynamic details of individuals. Ensuring that privacy-protected data remains useful in intended applications, such as building accurate data mining models or enabling complex analytic tasks, is essential. Differential privacy has been tried with a few of the PPDM methods and is immune to attacks with auxiliary information. In this paper, we propose a distributed implementation based on the MapReduce computing model for the C4.5 decision tree algorithm and run extensive experiments on three different datasets using a Hadoop cluster. The novelty of this work is to experiment with two different privacy methods: the first uses perturbed data with the decision tree algorithm for prediction in privacy-preserving data sharing, and the second applies raw data to a privacy-preserving decision tree algorithm for private data analysis. In addition, we propose a combination of the methods as a hybrid technique to maintain accuracy (utility) and privacy at an acceptable level. The proposed privacy approaches have two potential benefits in the context of data mining tasks: they allow service providers to outsource data mining tasks without exposing the raw data, and they allow data providers to share data access with third parties while limiting privacy risks.
Keywords: data mining; data privacy; decision trees; learning (artificial intelligence);C4.5 decision tree algorithm; Hadoop Cluster; ICT; big data; differential privacy; information and communication technologies; machine learning; map reduce computing model; personalized recommendation; privacy preserving data mining; private data analysis; scalable classification; Big data; Classification algorithms; Data privacy; Decision trees; Noise; Privacy; Scalability; Hybrid data privacy; Map Reduce Framework; Privacy Approaches; Privacy Preserving data Mining; Scalability (ID#: 15-5725)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058075&isnumber=7058031
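Of the two methods described, the first (perturb the records, then train an ordinary classifier) is simple enough to sketch. The snippet below is a minimal illustration assuming Laplace input perturbation, with illustrative epsilon and sensitivity values; it is not the paper's MapReduce C4.5 implementation.

```python
# Minimal sketch: perturb raw numeric records with Laplace noise before handing
# them to an off-the-shelf decision tree. epsilon and the per-attribute
# sensitivity are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def laplace_perturb(X, epsilon, sensitivity):
    b = sensitivity / epsilon   # Laplace scale per attribute
    return X + np.random.default_rng(0).laplace(0.0, b, size=X.shape)

X = np.random.default_rng(1).normal(size=(500, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
X_priv = laplace_perturb(X, epsilon=1.0, sensitivity=1.0)
tree = DecisionTreeClassifier(max_depth=4).fit(X_priv, y)
print("accuracy on original data:", tree.score(X, y))
```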
Ford, V.; Siraj, A.; Eberle, W., "Smart Grid Energy Fraud Detection Using Artificial Neural Networks," Computational Intelligence Applications in Smart Grid (CIASG), 2014 IEEE Symposium on, pp. 1, 6, 9-12 Dec. 2014. doi: 10.1109/CIASG.2014.7011557
Abstract: Energy fraud detection is a critical aspect of smart grid security and privacy preservation. Machine learning and data mining have been widely used by researchers for extensive intelligent analysis of data to recognize normal patterns of behavior such that deviations can be detected as anomalies. This paper discusses a novel application of a machine learning technique for examining the energy consumption data to report energy fraud using artificial neural networks and smart meter fine-grained data. Our approach achieves a higher energy fraud detection rate than similar works in this field. The proposed technique successfully identifies diverse forms of fraudulent activities resulting from unauthorized energy usage.
Keywords: data analysis; data mining; learning (artificial intelligence); neural nets; power system security; smart meters; smart power grids; artificial neural networks; data intelligent analysis; data mining; machine learning technique; smart grid energy fraud detection; smart grid privacy; smart grid security; smart meter fine-grained data; Data mining; Energy consumption; Energy measurement; Meteorology; Neural networks; Smart meters; Training; fraud detection; neural networks; smart meter data (ID#: 15-5726)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011557&isnumber=7011539
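A hedged sketch of the general approach: a small neural network trained on fine-grained consumption vectors to separate normal from fraudulent profiles. The synthetic "theft" pattern (scaling readings down to under-report usage) and the network shape are assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
honest = rng.normal(1.0, 0.2, size=(400, 48)).clip(0)          # 48 half-hourly readings
theft = honest[:200] * rng.uniform(0.2, 0.6, size=(200, 1))    # under-reported copies
X = np.vstack([honest, theft])
y = np.array([0] * 400 + [1] * 200)                            # 1 = fraudulent profile

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```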
Idrees, F.; Rajarajan, M., "Investigating The Android Intents And Permissions For Malware Detection," Wireless and Mobile Computing, Networking and Communications (WiMob), 2014 IEEE 10th International Conference on, pp. 354, 358, 8-10 Oct. 2014. doi: 10.1109/WiMOB.2014.6962194
Abstract: Mobile phones now handle our day-to-day scheduling, entertainment, information, and almost every aspect of life. With increasing human dependence on smart phones, threats against these devices have also increased exponentially. Almost all mobile apps play with the mobile user's privacy, beyond the targeted actions of malicious apps. Android applications use permissions to access different features and resources of the mobile device, along with intents to launch different activities. Various aspects of the permission framework have been studied, but sufficient attention has not been given to the intent framework. This work is the first of its kind to investigate the combined effects of permissions and intent filters to distinguish between malware and benign apps. This paper proposes a novel approach to identify malicious apps by analyzing the permission and intent patterns of Android apps, supplemented with machine learning algorithms for further classification of apps. The performance of the proposed approach has been validated by applying the technique to available malicious and benign samples collected from a number of sources.
Keywords: Android (operating system); data privacy; invasive software; learning (artificial intelligence); pattern classification; smart phones; Android applications; Android intents; Android permissions; benign apps; human dependence; machine learning algorithms; malicious apps; malware detection; mobile app classification; mobile device features; mobile device resources; mobile phones; mobile user privacy; permission framework; smart phones; Androids; Conferences; Humanoid robots; Malware; Mobile communication; Smart phones; classification; intents; malware detection; permission model (ID#: 15-5727)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6962194&isnumber=6962120
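The feature construction the abstract describes, rendered as a minimal sketch: each app becomes a binary vector over its declared permissions and intent filters, and a standard classifier separates malware from benign apps. The permission/intent vocabulary, toy apps, and labels below are hypothetical.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

apps = [
    {"perm:SEND_SMS": 1, "perm:READ_CONTACTS": 1, "intent:BOOT_COMPLETED": 1},
    {"perm:INTERNET": 1, "intent:MAIN": 1},
    {"perm:SEND_SMS": 1, "intent:SMS_RECEIVED": 1, "intent:BOOT_COMPLETED": 1},
    {"perm:INTERNET": 1, "perm:ACCESS_FINE_LOCATION": 1, "intent:MAIN": 1},
]
labels = [1, 0, 1, 0]   # 1 = malicious, 0 = benign (toy labels)

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(apps)             # binary permission/intent feature vectors
clf = LogisticRegression().fit(X, labels)
print(dict(zip(vec.get_feature_names_out(), clf.coef_[0].round(2))))
```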
Shanny, J.A.; Sudharson, K., "User Preferred Data Enquiry System Using Mobile Communications," Information Communication and Embedded Systems (ICICES), 2014 International Conference on, pp. 1, 5, 27-28 Feb. 2014. doi: 10.1109/ICICES.2014.7033943
Abstract: Mobile devices interact with users for various purposes, such as location services, road map services, and traffic information services, and connect users to various search engines. However, search queries are limited to a few short words, unlike those used when interacting with search engines through computers, which hinders good communication between the user and the server over a mobile phone. The proposed solution therefore provides better and faster result retrieval when querying a search engine through a mobile phone, by using the user's profile information in an authenticated way. An ontology-ranked keyword search algorithm is utilized to analyze and filter search queries and rank results accordingly. The user's search history is stored only locally, and search results are provided by the server with preference given to existing search-history information, categorized by mining the content and location information along with the user's profile. Ranking of results helps the end user reach the needed source very easily, proving to be very efficient. The proposed approach provides an innovative way of searching the data over input text, text patterns, spatial information related to the user's searches, user-type-specific search, and finally ontology-based search.
Keywords: data mining; information filtering; mobile communication; mobile computing; ontologies (artificial intelligence);query processing; search engines; text analysis; content mining; location information; mobile communications; mobile phones; ontology ranked keyword search algorithm; search engine history information; search engines; search query; search query analysis; search query filtering; spatial information; user preferred data enquiry system; user profile information; Educational institutions; Mobile communication; Ontologies; Privacy; Search engines; Servers; Smart phones; Clickthrough data; concept; location search; mobile search engine; ontology; personalization; user profiling (ID#: 15-5728)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033943&isnumber=7033740
Burke, M.-J.; Kayem, A.V.D.M., "K-Anonymity for Privacy Preserving Crime Data Publishing in Resource Constrained Environments," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp. 833, 840, 13-16 May 2014. doi: 10.1109/WAINA.2014.131
Abstract: Mobile crime report services have become a pervasive approach to enabling community-based crime reporting (CBCR) in developing nations. These services hold the advantage of facilitating law enforcement when resource constraints make standard crime investigation approaches challenging. However, CBCRs have failed to achieve widespread popularity in developing nations because of concerns for privacy: users are hesitant to make crime reports without strong guarantees of privacy preservation. Furthermore, the frequent lack of data mining expertise within law enforcement agencies implies that the reported data needs to be processed manually, which is time-consuming. In this paper we make two contributions to facilitate effective and efficient CBCR and crime data mining and to address the user privacy concern. The first is a practical framework for mobile CBCR; the second is a hybrid k-anonymity algorithm to guarantee privacy preservation of the reported crime data. We use a hierarchy-based generalization algorithm to classify the data and minimize information loss by optimizing the nodal degree of the classification tree. Results from our proof-of-concept implementation demonstrate that, in addition to guaranteeing privacy, our proposed scheme offers a classification accuracy of about 38% and a drop in information loss of nearly 50% over previous schemes when compared on various sizes of datasets. Performance-wise, we observe an average improvement of about 50 ms proportionate to the size of the dataset.
Keywords: criminal law; data mining; data privacy; generalisation (artificial intelligence);mobile computing; pattern classification; CBCR; classification accuracy; classification tree; community-based crime reporting; crime data mining; crime investigation approach; hierarchy-based generalization algorithm; k-anonymity; law enforcement; mobile crime report services; pervasive approach; privacy preserving crime data publishing; resource constrained environment; user privacy concern; Cloud computing; Data privacy; Encryption; Law enforcement; Mobile communication; Privacy; Anonymity; Developing Countries; Encryption; Information Loss; Public/Private Key Cryptography; Resource Constrained Environments (ID#: 15-5729)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844743&isnumber=6844560
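A minimal sketch of hierarchy-based generalization for k-anonymity on a single quasi-identifier (age): climb the generalization hierarchy until every equivalence class has at least k records. The hierarchy levels and k are illustrative; the paper's hybrid algorithm optimizes nodal degree over a full classification tree.

```python
from collections import Counter

def generalize(age, level):
    # level 0: exact age; 1: 10-year band; 2: suppressed
    if level == 0:
        return str(age)
    if level == 1:
        lo = (age // 10) * 10
        return f"{lo}-{lo + 9}"
    return "*"

def k_anonymize(ages, k):
    for level in range(3):   # climb until every group holds >= k records
        groups = Counter(generalize(a, level) for a in ages)
        if all(c >= k for c in groups.values()):
            return [generalize(a, level) for a in ages]
    return ["*"] * len(ages)

print(k_anonymize([23, 27, 25, 41, 44, 48], k=3))   # -> ['20-29', ..., '40-49']
```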
Vidyalakshmi, B.S.; Wong, R.K.; Ghanavati, M.; Chi Hung Chi, "Privacy as a Service in Social Network Communications," Services Computing (SCC), 2014 IEEE International Conference on, pp. 456, 463, June 27 2014-July 2 2014. doi: 10.1109/SCC.2014.67
Abstract: With the dispersal of information on social networks - both personally identifiable and general - comes the risk of this information falling into the wrong hands. Users are burdened with setting privacy on multiple social networks, each with a growing number of privacy settings, and the exponential growth of applications (apps) running on social networks has made privacy control increasingly difficult. This necessitates a privacy-as-a-service model, especially for social networks, to handle privacy across multiple applications and platforms. Privacy-aware information dispersal involves knowing who is receiving which pieces of our information. Our proposed service employs a supervised learning model to assist the user in spotting the unintended audience for a post. Different from previous work, we combine both the tie-strength and the context of the information as features in learning. Our evaluation using several classification techniques shows that the proposed method is effective and better than methods using either only tie-strength or only context for classification.
Keywords: Web services; data privacy; learning (artificial intelligence);pattern classification; social networking (online);classification techniques; information context; information tie-strength; privacy aware information dispersal; privacy control; privacy settings; privacy-as-a-service; social network communications; social network privacy; supervised learning model; Context; Context modeling; Education; Facebook; Feature extraction; Privacy; Privacy as a service; context; social networks; tie-strength (ID#: 15-5730)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6930567&isnumber=6930500
Shouwei Sun; Yizhang Jiang; Pengjiang Qian, "Transfer Learning Based Maximum Entropy Clustering," Information Science and Technology (ICIST), 2014 4th IEEE International Conference on, pp. 829, 832, 26-28 April 2014. doi: 10.1109/ICIST.2014.6920605
Abstract: The classical maximum entropy clustering (MEC) algorithm can only work on a single dataset, which might result in poor effectiveness when the dataset is too small. To resolve this problem, using the strategy of transfer learning, this paper proposes the novel transfer learning based maximum entropy clustering (TL_MEC) algorithm. TL_MEC employs the historical cluster centers and memberships of past data as references to guide clustering on the current data, which improves its performance in three aspects: clustering effectiveness, noise resistance, and privacy protection. Thus TL_MEC can work well on small datasets if enough historical data are available. The experimental studies verify and demonstrate the contributions of this work.
Keywords: data handling; learning (artificial intelligence); pattern clustering; TL_MEC algorithm; anti-noise; clustering effectiveness; historical data; novel transfer learning based maximum entropy clustering; privacy protection; Algorithm design and analysis; Clustering algorithms; Educational institutions; Entropy; Equations; Linear programming; Privacy; Knowledge Transfer; Maximum Entropy Clustering (MEC);Source domain privacy protection; Transfer Rules (ID#: 15-5731)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6920605&isnumber=6920317
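A rough sketch of MEC with a transfer term: historical cluster centers pull the current centers when the current dataset is small. The blending weight `lam` and the warm-start form are assumptions; the paper's exact transfer rules may differ.

```python
import numpy as np

def tl_mec(X, hist_centers, beta=1.0, lam=0.3, iters=50):
    V = hist_centers.copy().astype(float)                      # warm-start from history
    for _ in range(iters):
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)   # squared distances
        U = np.exp(-d2 / beta)
        U /= U.sum(axis=1, keepdims=True)                      # MEC memberships
        V_data = (U.T @ X) / U.sum(axis=0)[:, None]            # data-driven centers
        V = (1 - lam) * V_data + lam * hist_centers            # transfer pull
    return V, U

X = np.random.default_rng(0).normal(size=(30, 2))              # small current dataset
hist = np.array([[-1.0, -1.0], [1.0, 1.0]])                    # centers from past data
centers, memberships = tl_mec(X, hist)
print(centers)
```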
Boyang Wang; Ming Li; Chow, S.S.M.; Hui Li, "A Tale Of Two Clouds: Computing On Data Encrypted Under Multiple Keys," Communications and Network Security (CNS), 2014 IEEE Conference on, pp. 337, 345, 29-31 Oct. 2014. doi: 10.1109/CNS.2014.6997502
Abstract: Cloud computing provides a convenient platform for big data computation such as machine learning and data mining. However, privacy conscious users often encrypt their data with their own keys before uploading them to the cloud. Existing techniques for computation on encrypted data are either in the single key setting or far from practical. In this paper, we show how two non-colluding servers can leverage proxy re-encryption to jointly compute arithmetic functions over the ciphertexts of multiple users without learning the inputs, intermediate or final results. Moreover, the computation is non-interactive to users and only requires minimal server-to-server interactions. Experimental results demonstrate that our schemes significantly improve the efficiency of outsourced computation when compared to the existing approach.
Keywords: Big Data; cloud computing; cryptography; data mining; data privacy; learning (artificial intelligence); Big Data computation; arithmetic functions; ciphertexts; cloud computing; data encryption; data mining; machine learning; noncolluding servers; privacy; proxy reencryption; server-to-server interactions; Ash; Computational modeling; Encryption; Public key; Servers (ID#: 15-5732)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6997502&isnumber=6997445
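The paper's construction rests on proxy re-encryption, which is too heavy to reproduce here; the sketch below instead illustrates the underlying two-non-colluding-server paradigm with additive secret sharing, where neither server alone learns any input yet the servers jointly compute a sum.

```python
import secrets

P = 2**61 - 1   # a Mersenne prime, large enough for toy inputs

def share(x):
    r = secrets.randbelow(P)
    return r, (x - r) % P          # one share per server

inputs = [12, 30, 7]               # users' private values
shares_a, shares_b = zip(*(share(x) for x in inputs))

# Each server sums its own shares locally; the partial sums combine to the total.
sum_a = sum(shares_a) % P
sum_b = sum(shares_b) % P
print("joint sum:", (sum_a + sum_b) % P)   # 49, with no server seeing any input
```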
Patil, A.; Singh, S., "Differential Private Random Forest," Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, pp. 2623, 2630, 24-27 Sept. 2014. doi: 10.1109/ICACCI.2014.6968348
Abstract: Organizations, be they private or public, often collect personal information about the individuals who are their customers or clients. This personal information is private and sensitive and has to be secured from data mining algorithms that an adversary may apply to gain access to it. In this paper we consider the problem of securing such private and sensitive information when used in a random forest classifier, in the framework of differential privacy. We incorporate the concept of differential privacy into the classical random forest algorithm. Experimental results show that quality functions such as information gain, the max operator, and the Gini index give almost equal accuracy regardless of their sensitivity to the noise. Also, the accuracy of the classical random forest and the differentially private random forest is almost equal for different sizes of datasets. The proposed algorithm works for datasets with categorical as well as continuous attributes.
Keywords: data mining; data privacy; learning (artificial intelligence);Gini index; data mining algorithm; differential privacy; differential private random forest; information gain; max operator; personal information; private information; sensitive information; Accuracy; Data privacy; Indexes; Noise; Privacy; Sensitivity; Vegetation (ID#: 15-5733)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6968348&isnumber=6968191
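One standard ingredient of differentially private tree induction is selecting the split attribute with the exponential mechanism, so the choice itself leaks only epsilon-bounded information about the records. The sketch below shows that primitive with illustrative quality scores and sensitivity; the paper's full forest construction is not reproduced.

```python
import numpy as np

def exp_mech_choice(quality_scores, epsilon, sensitivity=1.0, rng=None):
    rng = rng or np.random.default_rng()
    scores = np.asarray(quality_scores, dtype=float)
    # standard exponential mechanism weights: exp(eps * q / (2 * sensitivity))
    w = np.exp(epsilon * (scores - scores.max()) / (2 * sensitivity))
    return rng.choice(len(scores), p=w / w.sum())

gains = [0.9, 0.5, 0.1]   # e.g., information gain per candidate attribute
picks = [exp_mech_choice(gains, epsilon=4.0, rng=np.random.default_rng(i))
         for i in range(1000)]
print(np.bincount(picks, minlength=3) / 1000)  # higher-gain attributes win more often
```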
Minghui Zhu, "Distributed Demand Response Algorithms Against Semi-Honest Adversaries," PES General Meeting | Conference & Exposition, 2014 IEEE, pp. 1, 5, 27-31 July 2014. doi: 10.1109/PESGM.2014.6939191
Abstract: This paper investigates two problems for demand response: demand allocation market and demand shedding market. By utilizing reinforcement learning, stochastic approximation and secure multi-party computation, we propose two distributed algorithms to solve the induced games respectively. The proposed algorithms are able to protect the privacy of the market participants, including the system operator and end users. The algorithm convergence is formally ensured and the algorithm performance is verified via numerical simulations.
Keywords: demand side management; learning (artificial intelligence);numerical analysis; power markets; stochastic games; demand allocation market; demand shedding market; distributed demand response algorithms; multiparty computation security; numerical simulation; reinforcement learning; stochastic approximation; Approximation algorithms; Games; Load management; Nash equilibrium; Pricing; Privacy; Resource management (ID#: 15-5734)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6939191&isnumber=6938773
Sicuranza, M.; Ciampi, M., "A Semantic Access Control for Easy Management of the Privacy for EHR Systems," P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), 2014 Ninth International Conference on, pp. 400, 405, 8-10 Nov. 2014. doi: 10.1109/3PGCIC.2014.84
Abstract: In recent years, the increasing use of ICT in healthcare has led to the generation of several healthcare information systems, such as Electronic Health Record systems, which enable the management and sharing of digital clinical data. Since clinical data is generally characterized by very sensitive information, such information systems have to be able to limit its sharing by enabling or denying access to the various healthcare users. To manage who can do what on such data, access control mechanisms are needed that can satisfy access policies defined by the patient in a dynamic manner. This paper presents a semantic access control designed for specifying flexible and fine-grained access policies in the HIS. The proposed model is based on an ontological approach able to increase the usability and feasibility of real information systems.
Keywords: authorisation; data privacy; electronic health records; health care; ontologies (artificial intelligence);semantic networks; EHR system; HIS; digital clinical data; electronic health record; healthcare information system; ontological formalization; privacy management; semantic access control; Access control; Context; Context modeling; Medical services; Organizations; Unified modeling language; Access Control Model; EHR; Ontology; Security (ID#: 15-5735)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024618&isnumber=7024297
Sadikin, M.F.; Kyas, M., "Security And Privacy Protocol For Emerging Smart RFID Applications," Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2014 15th IEEE/ACIS International Conference on, pp. 1, 7, June 30 2014-July 2 2014. doi: 10.1109/SNPD.2014.6888694
Abstract: The rise of smart RFID technology (i.e., sensor integration into RFID systems) has introduced various advantages in the context of location-awareness applications, ranging from low-cost implementation and maintenance to flexibility in supporting large-scale systems. Nevertheless, the use of such technology introduces tremendous security and privacy issues (e.g., unauthorized tracking, information leakage, cloning attacks, data manipulation, collision attacks, replay attacks, Denial-of-Service, etc.). On the other hand, the constrained nature of RFID applications makes security enforcement more complicated. This paper presents IMAKA-Tate: Identity protection, Mutual Authentication and Key Agreement using Tate pairing of an identity-based encryption method. It is designed to tackle the various challenges posed by the constrained nature of RFID applications by applying a lightweight cryptographic method with advanced-level 128-bit security protection. Indeed, our proposed solution protects the RFID system from various threats and preserves privacy by performing encryption early, including of the identity, even before authentication starts.
Keywords: data privacy; protocols; radiofrequency identification; telecommunication security; Denial-of-Service; RFID system; cloning attack; collision attack; data manipulation; identity based encryption method; identity protection; information leakage; key agreement; large-scale system; lightweight cryptographic method; location awareness applications; mutual authentication; privacy protocol; replay attack; security protection; security protocol; sensor integration; smart RFID applications; unauthorized tracking; Authentication; Cryptography; Payloads; Privacy; Protocols; Radiofrequency identification; Mutual Authentication; Privacy Preserving; Smart RFID Security (ID#: 15-5736)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6888694&isnumber=6888665
Fu Zu-feng; Wang Hai-ying; Wu Yong-wu, "Application Of Secure Multi-Party Computation In Linear Programming," Information Technology and Artificial Intelligence Conference (ITAIC), 2014 IEEE 7th Joint International, pp. 244, 248, 20-21 Dec. 2014. doi: 10.1109/ITAIC.2014.7065043
Abstract: Existing solutions to privacy-preserving linear programming can leak a user's private data when the dataset is small. In this paper, secure multi-party computation is generalized to the problem of privacy-preserving linear programming, and we present a computing protocol for it. The protocol is applied to the problem of linear programming with small, vertically distributed data: not only can the maximum value of the original linear program be calculated when an optimal solution exists, but the private data of all participants is also protected during the calculation.
Keywords: data privacy; linear programming; security of data; computing protocol; multiparty computation security; privacy preserving linear programming; Complexity theory; Data privacy; Linear programming; Privacy; Protocols; Security; Vectors; cryptography; linear programming; privacy preserving; secure multiparty computation (ID#: 15-5737)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7065043&isnumber=7064993
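A minimal sketch of the secure-sum primitive commonly used as a building block in secure multi-party computation (the paper's full LP protocol is more involved): the initiator masks its value with a random offset, each party adds its private value to the running total, and the initiator removes the mask at the end.

```python
import secrets

def secure_sum(private_values, modulus=2**61 - 1):
    r = secrets.randbelow(modulus)            # initiator's random mask
    running = (private_values[0] + r) % modulus
    for v in private_values[1:]:              # each party adds its own value only
        running = (running + v) % modulus
    return (running - r) % modulus            # initiator unmasks the total

print(secure_sum([10, 25, 7, 3]))             # -> 45
```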
Bhati, B.S.; Venkataram, P., "Data Privacy Preserving Scheme In MANETs," Internet Security (WorldCIS), 2014 World Congress on, pp. 22, 23, 8-10 Dec. 2014. doi: 10.1109/WorldCIS.2014.7028159
Abstract: Data privacy is one of the challenging issues in Mobile Ad hoc NETworks (MANETs), which are deployed in hostile environments to transfer sensitive data through multi-hop routing. Undesired disclosure of data can breach data privacy and can be used to launch several attacks. Many works achieve data privacy using approaches such as data transformation and data perturbation, but these approaches introduce high computational overheads and delays in a MANET. To minimize the computations in preserving data privacy, we propose a computational-intelligence-based data privacy scheme. The scheme uses a data anonymization approach, where rough set theory is used to determine the data attributes to be anonymized. Dynamically changing multiple routes are established between a sender and a receiver by selecting more than one trusted 1-hop neighbor node for data transfer in each routing step. Anonymity of the receiver is also discussed. The work has been simulated for different network sizes with several data transfers, and the results are quite encouraging.
Keywords: data privacy; mobile ad hoc networks; rough set theory; security of data; telecommunication network routing; telecommunication security; MANET; computation minimization; computational intelligence; computational overheads; data anonymization approach; data attributes; data perturbation; data privacy preserving scheme; data transfers; data transformation; delays; mobile adhoc networks; multihop routing; receiver anonymity; rough set theory; Artificial neural networks; Bandwidth; Batteries; Mobile ad hoc networks; Mobile computing; Anonymity; Data Attributes; Data Privacy; Mobile Adhoc Network; Rough Sets (ID#: 15-5738)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028159&isnumber=7027983
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Computing Theory and Privacy, 2014 |
Getting to the Science of Security will both require and generate fresh looks at computing theory. Privacy, too, is a research area with a theoretical underpinning worth researching. The material cited here was presented in 2014.
Wu Tianshui; Zhao Gang, "A New Security and Privacy Risk Assessment Model for Information System Considering Influence Relation of Risk Elements," Broadband and Wireless Computing, Communication and Applications (BWCCA), 2014 Ninth International Conference on, pp. 233, 238, 8-10 Nov. 2014. doi: 10.1109/BWCCA.2014.76
Abstract: Considering the influence relations among risk assessment elements and the uncertainty generated in the security and privacy risk assessment process, this paper proposes a new security and privacy risk assessment model for information systems based on DEMATEL-ANP combined with grey system theory. Following the standard risk assessment process, the model utilizes the DEMATEL method to identify risk assessment elements and evaluate comprehensive influence relations, then combines it with ANP to solve the weight distribution ratio of the subordinate elements of each evaluation element. Finally, the paper uses grey system theory to obtain a grey evaluation matrix and computes the final security and privacy risk level. Example simulations demonstrate that this is an effective method for information system security and privacy risk assessment: the model not only weighs the influence relations among the various evaluation factors in a practical evaluation system and reduces subjective evaluation, but also effectively mitigates the uncertainty of expert evaluation.
Keywords: data privacy; decision making; grey systems; information systems; risk management; security of data; DEMATEL-ANP; analytic network process; decision making trial-and-evaluation laboratory; final security; grey evaluation matrix; grey system theory; information system; privacy risk assessment model; privacy risk level; security risk assessment model; weight distribution ratio; Computational modeling; Indexes; Information security; Privacy; Risk management; analytic network process (ANP); decision making trial and evaluation laboratory (DEMATEL);grey system theory; risk assessment; security and privacy (ID#: 15-5616)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7016074&isnumber=7015998
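The DEMATEL step is mechanical enough to sketch: normalize a direct-influence matrix D and compute the total-relation matrix T = N(I - N)^(-1), from which prominence (r + c) and net cause/effect (r - c) indicators follow. The 3x3 influence ratings below are made up for illustration.

```python
import numpy as np

D = np.array([[0, 3, 2],     # expert-rated direct influence among 3 risk elements
              [1, 0, 3],
              [2, 1, 0]], dtype=float)
N = D / max(D.sum(axis=1).max(), D.sum(axis=0).max())   # normalize
T = N @ np.linalg.inv(np.eye(3) - N)                    # total-relation matrix

r, c = T.sum(axis=1), T.sum(axis=0)
print("prominence (r+c):", r + c)    # how central each element is
print("relation   (r-c):", r - c)    # net cause (+) vs. net effect (-)
```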
Qiuwei Yang; Changquan Cheng; Xiqiang Che, "A Cost-Aware Method of Privacy Protection for Multiple Cloud Service Requests," Computational Science and Engineering (CSE), 2014 IEEE 17th International Conference on, pp. 583, 590, 19-21 Dec. 2014. doi: 10.1109/CSE.2014.131
Abstract: In cloud computing environments, service requests usually carry sensitive information that is treated as private, and privacy leakage from cloud service requests has become a hotspot of cloud security research. Existing studies assumed that potential attackers only collect and process information from a single service request sequence, and they did not distinguish how much users care about different pieces of information. When applied directly to scenarios with multiple cloud service requests, their strategies cannot meet the protection needs, due to the limitations of their analytical perspective, and their cost also increases. In this paper, we propose a method of sensitive-information relation description and privacy measurement that caters to multiple cloud service requests, conduct privacy leakage risk assessment under this scenario based on D-S evidence theory, give a strategy of obfuscation choice and noise generation for multiple cloud service requests, and finally build a cost-aware privacy protection framework for them. Simulation and analysis show that our approach ensures the security of multiple service requests in the cloud environment without significantly increasing system overhead, and saves noise cost.
Keywords: cloud computing; data privacy; inference mechanisms; security of data; uncertainty handling; D-S evidence theory; cloud computing environment; cloud security; cloud service request privacy leakage problem; cloud service requests privacy protection; cost-aware method; cost-aware privacy protection framework; noise generation; obfuscation choice; privacy leakage risk assessment; privacy measurement; sensitive information relation description; Clouds; Correlation coefficient; Joints;Noise; Privacy; Risk management; Security; Cloud computing; D-S evidence theory; Multiple service requests; Privacy protection; Risk assessment (ID#: 15-5617)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023641&isnumber=7023510
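A small sketch of Dempster's combination rule from D-S evidence theory, the machinery the abstract uses for leakage risk assessment. The two mass functions over a {low, high} risk frame are illustrative.

```python
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, pa in m1.items():
        for b, pb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + pa * pb
            else:
                conflict += pa * pb          # mass on disjoint hypotheses
    return {k: v / (1 - conflict) for k, v in combined.items()}

LOW, HIGH = frozenset({"low"}), frozenset({"high"})
BOTH = LOW | HIGH                      # ignorance: mass on the whole frame
m1 = {HIGH: 0.6, BOTH: 0.4}            # evidence source 1
m2 = {HIGH: 0.5, LOW: 0.2, BOTH: 0.3}  # evidence source 2
print(dempster_combine(m1, m2))        # combined belief, conflict renormalized
```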
Yingxu Wang; Wiebe, V.J., "Big Data Analyses for Collective Opinion Elicitation in Social Networks," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 630, 637, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.81
Abstract: Big data are extremely large-scaled data in terms of quantity, complexity, semantics, distribution, and processing costs in computer science, cognitive informatics, web-based computing, cloud computing, and computational intelligence. Censuses and elections are a typical paradigm of big data engineering in modern digital democracy and social networks. This paper analyzes the mechanisms of voting systems and collective opinions using big data analysis technologies. A set of numerical and fuzzy models for collective opinion analyses is presented for applications in social networks, online voting, and general elections. A fundamental insight on the collective opinion equilibrium is revealed among electoral distributions and in voting systems. Fuzzy analysis methods for collective opinions are rigorously developed and applied in poll data mining, collective opinion determination, and quantitative electoral data processing.
Keywords: Big Data; cloud computing; computer science; data mining; fuzzy set theory; politics; social networking (online);Big Data analysis; Web-based computing; cloud computing; cognitive informatics; collective opinion determination; collective opinion elicitation; computational intelligence; computer science; digital democracy; fuzzy analysis; large-scaled data; poll data mining; quantitative electoral data processing; social networks; Algorithm design and analysis; Benchmark testing; Big data; Data models; Nominations and elections; Polynomials; Semantics; Big data; big data engineering; collective opinion; fuzzy models of big data; numerical methods; opinion poll; quantitative analyses; social networks; voting (ID#: 15-5618)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011305&isnumber=7011202
Duo Liu; Chung-Horng Lung; Seddigh, N.; Nandy, B., "Network Traffic Anomaly Detection Using Adaptive Density-Based Fuzzy Clustering," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 823, 830, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.109
Abstract: Fuzzy C-means (FCM) clustering has been used to distinguish communication network traffic outliers based on the uncommon statistical characteristics of network traffic data. The traditional FCM does not leverage spatial information in its analysis, which leads to inaccuracies in certain instances. To address this challenge, this paper proposes an adaptive fuzzy clustering technique based on existing possibilistic clustering algorithms. The proposed technique simultaneously considers distance, density, and the trend of density change of data instances in the membership degree calculation. Specifically, the membership degree is quickly updated when the distance or density is beyond a pre-defined threshold, or when the density change does not match the data distribution. In contrast, the traditional FCM updates its membership degree based only on the distance between data points and the cluster centroid. The proposed approach enables the clustering to reflect the inherently diverse nature of communication network traffic. Further, an adaptive threshold is introduced to speed up the iterative clustering process. The proposed algorithm has been evaluated via experiments using traffic from a real network; the results indicate that the adaptive fuzzy clustering reduces false negatives while improving true positives.
Keywords: data handling; fuzzy set theory; pattern clustering; statistical analysis; FCM clustering; adaptive density-based fuzzy clustering; data distribution; fuzzy C-means clustering; network traffic anomaly detection; network traffic data; spatial information; statistical characteristics; Conferences; Privacy; Security; Fuzzy C-means; Network anomaly detection; Partitional clustering; Possibilistic clustering (ID#: 15-5619)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011333&isnumber=7011202
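For reference, the baseline the paper extends: the standard FCM membership and centroid updates, driven by distance alone. The density and density-trend terms of the adaptive scheme are not reproduced here.

```python
import numpy as np

def fcm(X, n_clusters=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=-1) + 1e-12
        # u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
        Um = U ** m
        C = (Um.T @ X) / Um.sum(axis=0)[:, None]   # weighted centroid update
    return C, U

X = np.vstack([np.random.default_rng(1).normal(0, 0.5, (50, 2)),
               np.random.default_rng(2).normal(3, 0.5, (50, 2))])
centers, memberships = fcm(X)
print(centers.round(2))
```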
Li Lin; Tingting Liu; Jian Hu; Jianbiao Zhang, "A Privacy-Aware Cloud Service Selection Method Toward Data Life-Cycle," Parallel and Distributed Systems (ICPADS), 2014 20th IEEE International Conference on, pp. 752, 759, 16-19 Dec. 2014. doi: 10.1109/PADSW.2014.7097878
Abstract: Recent years have witnessed the rapid development of cloud computing, which delivers unique services to cloud customers in a pay-as-you-go, anytime, anywhere manner. However, a significant barrier to the adoption of cloud services is that users fear data leakage and loss of privacy if their sensitive data is processed in the cloud. Hence the cloud customer must be able to select appropriate services according to his or her privacy and security needs. In this paper, we propose a novel cloud service selection method called PCSS, in which a cloud service is estimated based on its capability of privacy protection (CoPP) covering the entire life-cycle of users' data. A scalable assessment index system with a 2-level hierarchy is constructed to analyze and quantify the CoPP of a cloud service: the first-level index comprises the stages of the data life-cycle, and the second-level index covers the privacy-aware security mechanisms at each stage. We employ a fuzzy comprehensive evaluation technique to compute the privacy-preserving value of each security mechanism, and an AHP-based approach to decide the impact weight of different security mechanisms on the CoPP of each stage. By calculating a comprehensive CoPP metric over all life-cycle stages, cloud services can be sorted and recommended to users. An example analysis is given, and the reasonableness of the proposed method is shown. Comprehensive experiments demonstrate the effectiveness of the proposed method in comparison with a baseline method on service selection performance.
Keywords: analytic hierarchy process; cloud computing; data privacy; fuzzy set theory;2-level hierarchy structure; AHP- based approach; CoPP; PCSS; analytic hierarchy process; capability of privacy protection; cloud computing; cloud customer; data leakage; first-level index; fuzzy comprehensive evaluation technique; privacy loss; privacy-aware cloud service selection method; privacy-aware security mechanisms; privacy-preserving value; scalable assessment index system; second-level index; security needs; service selection performance; user data life-cycle; Data privacy; Filtering; Phase locked loops; Privacy; Security; cloud service selection; data life-cycle; privacy-aware (ID#: 15-5620)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7097878&isnumber=7097773
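The AHP weighting step can be sketched directly: derive impact weights from a pairwise-comparison matrix via its principal eigenvector, then check consistency. The 3x3 judgments below are illustrative.

```python
import numpy as np

A = np.array([[1,   3,   5],     # pairwise judgments: mech1 vs mech2 vs mech3
              [1/3, 1,   2],
              [1/5, 1/2, 1]])
vals, vecs = np.linalg.eig(A)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()                            # principal eigenvector -> weights
print("impact weights:", w.round(3))       # e.g., ~[0.65, 0.23, 0.12]

# Consistency ratio check (random index RI = 0.58 for n = 3)
lam_max = np.max(np.real(vals))
CI = (lam_max - 3) / (3 - 1)
print("consistency ratio:", round(CI / 0.58, 3))   # should be < 0.1
```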
Bijral, S.; Mukhopadhyay, D., "Efficient Fuzzy Search Engine with B -Tree Search Mechanism," Information Technology (ICIT), 2014 International Conference on, pp. 118, 122, 22-24 Dec. 2014. doi: 10.1109/ICIT.2014.19
Abstract: Search engines play a vital role in day-to-day life on the internet; people use them to find content. Cloud computing is the computing concept in which data is stored and accessed with the help of a third-party server, called the cloud: data is not stored locally on our machines, and software and information are provided to the user on demand. Search queries are the most important part of searching for data on the internet. A search query consists of one or more keywords and is matched against the database exactly, so traditional searchable schemes do not tolerate minor typos and format inconsistencies, which happen quite frequently. This drawback makes the existing techniques unsuitable, and they offer very low efficiency. In this paper, I formulate for the first time the problem of effective fuzzy search by introducing tree search methodologies. I explore the benefits of B-trees in the search mechanism and use them for efficient keyword search. I have strictly taken security analysis into consideration so as to obtain a secure and privacy-preserving system.
Keywords: cloud computing; data privacy; fuzzy set theory; query processing; search engines; trees (mathematics); Internet; b-tree search mechanism; cloud computing; data searching; format inconsistencies; fuzzy search engine; keyword search; minor typos; privacy-preserving system; search query; secure system; security analysis; third party server; traditional searchable schemes; Cloud computing; Dictionaries; Encryption; Indexes; Information technology; Keyword search; B-Tree Search; Fuzzy keyword Search; Typos and format Inconsistencies (ID#: 15-5621)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033307&isnumber=7033273
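A hedged sketch of typo-tolerant keyword lookup: a sorted key list stands in for the B-tree index, an exact first-character range scan narrows candidates, and an edit-distance check admits minor typos. This illustrates the idea only; the paper's encrypted-search setting is not reproduced.

```python
import bisect

def edit_distance(a, b):
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def fuzzy_lookup(keys_sorted, query, max_dist=1):
    # narrow with the first character (typos rarely hit position 0), then verify
    lo = bisect.bisect_left(keys_sorted, query[0])
    hi = bisect.bisect_right(keys_sorted, query[0] + "\uffff")
    return [k for k in keys_sorted[lo:hi] if edit_distance(k, query) <= max_dist]

index = sorted(["privacy", "private", "privet", "security", "search"])
print(fuzzy_lookup(index, "privacu"))   # -> ['privacy']
```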
Bistarelli, S.; Santini, F., "Two Trust Networks In One: Using Bipolar Structures To Fuse Trust And Distrust," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp. 383, 390, 23-24 July 2014. doi: 10.1109/PST.2014.6890964
Abstract: In this paper we study weighted trust-networks, where each edge is associated with either a positive or negative score. Hence, we consider a distrust relationship as well, allowing a user to rate poor experiences with other individuals in his web of acquaintances. We propose an algorithm to compose two of such networks in a single one, in order to merge the knowledge obtained in two different communities of individuals (possibly partially-overlapping), through two different trust management-systems. Our algorithm is based on semiring algebraic-structures, in order to have a parametric computational-framework. Such composition can be adopted whenever two trust-based communities (with the same scope) need to be amalgamated: for instance, two competitor-companies that need to unify the trust-based knowledge on their (sub-) suppliers.
Keywords: algebra; network theory (graphs);trusted computing; bipolar structures; distrust relationship; semiring algebraic structures ;trust management systems; trust-based communities; trust-based knowledge; weighted trust networks; Communities; Complexity theory; Electronic mail; Lattices; Measurement; Periodic structures; Security (ID#: 15-5622)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890964&isnumber=6890911
Chi Chen; Chaogang Wang; Tengfei Yang; Dongdai Lin; Song Wang; Jiankun Hu, "Optional Multi-Biometric Cryptosystem Based On Fuzzy Extractor," Fuzzy Systems and Knowledge Discovery (FSKD), 2014 11th International Conference on, pp. 989, 994, 19-21 Aug. 2014. doi: 10.1109/FSKD.2014.6980974
Abstract: Following the wide use of smart devices, biometric cryptosystems are used to protect users' private data. However, biometric cryptosystems are rarely used in mobile cloud scenarios, because biometric sensors differ across devices. In this paper, an optional multi-biometric cryptosystem based on a fuzzy extractor and secret sharing technology is proposed. Each enrolled biometric modality generates a feature vector, which is put into a fuzzy extractor to get a stable codeword, namely a bit string. All the codewords are used to bind a random key based on a secret sharing method, and the key can be used to encrypt users' private data. During the verification phase, a subset of the enrolled biometric modalities is enough to recover the random key; therefore, the proposed scheme can provide a user with the same biometric key on different devices. In addition, experiments on a virtual multi-biometric database show that the novel concept of an optional multi-biometric cryptosystem is better than the corresponding uni-biometric cryptosystem in both matching accuracy and key entropy.
Keywords: biometrics (access control); cloud computing; cryptography; entropy; fuzzy set theory; mobile computing; vectors; bit-string; codewords; feature vector; fuzzy extractor; key entropy; mobile cloud; optional multibiometric cryptosystem; smart devices; users privacy data; Accuracy;Cryptography; Databases; Feature extraction; Fingerprint recognition; Iris recognition; cryptosystem; fuzzy extractor; key generation; mobile cloud; multi-biometric; secret share (ID#: 15-5623)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980974&isnumber=6980796
Yunmei Lu; Phoungphol, P.; Yanqing Zhang, "Privacy Aware Non-linear Support Vector Machine for Multi-source Big Data," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 783, 789, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.103
Abstract: In order to build reliable prediction models and attain high classification accuracy, assembling datasets from multiple databases maintained by different sources (such as different hospitals) has become increasingly common. However, assembling these composite datasets involves the disclosure of individuals' records, therefore many local owners are reluctant to share their data due to privacy concerns. This paper presents a framework for building a Privacy-Aware Non-linear Support Vector Machine (PAN-SVM) classifier using distributed data sources. The framework with three layers can do global classification based on distributed data sources and protect individuals' records at the same time. At the bottom layer, k-means clustering is used to select landmarks that will be used by the medium layer after they are encrypted by a secure sum protocol. The medium layer employs Nystrom low-rank approximation and kernel matrix decomposition techniques to construct a global SVM classifier which is accelerated at the top layer by employing a cutting-plane technique. Simulation results on multiple datasets indicate that the new framework can solve the classification problem on distributed data sources effectively and efficiently, and protect the privacy of individuals' data as well.
Keywords: approximation theory; data privacy; matrix algebra; support vector machines; Nystrom low-rank approximation; PAN-SVM classifier; assembling datasets; cutting plane technique; distributed data sources; global SVM classifier; kernel matrix decomposition techniques; multiple databases; multisource big data; privacy aware nonlinear support vector machine; secure sum protocol; Accuracy; Data models; Data privacy; Distributed databases; Kernel; Support vector machines; Training; Cutting-plane Method; Distributed data-mining; Low-rank Approximation; Matrix Decomposition; Multi-source Data; Privacy preserving; SVM (ID#: 15-5624)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011327&isnumber=7011202
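The Nystrom step in the middle layer is easy to sketch. The following numpy-only fragment picks k-means landmarks and forms the low-rank approximation K ≈ C W⁺ Cᵀ; it omits the secure-sum encryption of landmarks and the SVM training itself, and all parameter choices are illustrative.

```python
# Illustrative Nystrom low-rank kernel approximation with k-means landmarks,
# as used in the paper's middle layer (numpy-only sketch; the real PAN-SVM
# additionally encrypts the landmarks with a secure-sum protocol).
import numpy as np

def kmeans_landmarks(X, k, iters=10, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf(A, B, gamma=0.5):
    return np.exp(-gamma * ((A[:, None] - B[None]) ** 2).sum(-1))

X = np.random.default_rng(1).normal(size=(500, 5))
L = kmeans_landmarks(X, k=30)
C = rbf(X, L)                            # n x k cross-kernel
W = rbf(L, L)                            # k x k landmark kernel
K_approx = C @ np.linalg.pinv(W) @ C.T   # K is approximated by C W+ C^T
exact = rbf(X, X)
print("relative error:", np.linalg.norm(K_approx - exact) / np.linalg.norm(exact))
```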
Keith, M.J.; Babb, J.S.; Lowry, P.B., "A Longitudinal Study of Information Privacy on Mobile Devices," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 3149, 3158, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.391
Abstract: The real value of mobile applications is heavily dependent on consumers' trust in the privacy of their personal information and location data. However, research has generated few results based on actual information disclosure and even fewer based on longitudinal behavior. The purpose of this study is to execute a unique and authentic field experiment involving real risks and consumer behaviors regarding information disclosure over mobile devices. We compare two theoretical explanations of disclosure decisions: privacy calculus and prospect theory. Our results indicate that consumers are best modeled as "bounded" rational actors concerning their disclosure behavior. Also, actual information disclosure behavior over mobile applications is a more multifaceted issue than research has treated it thus far. For practice, mobile application providers should be aware that increasing the benefits of information disclosure via the app may have the counterintuitive effect of increasing perceived risk and reducing consumer disclosure.
Keywords: behavioural sciences; data privacy; mobile computing; risk management; security of data; bounded rational actors; consumer behaviors; consumer trust; disclosure decisions; information disclosure behavior; location data privacy; longitudinal behavior; longitudinal information privacy study; mobile applications; mobile devices; privacy calculus; prospect theory; Calculus; Educational institutions; Games; Mobile communication; Mobile handsets; Privacy; Social network services; information disclosure; location based-services; mobile application; privacy; privacy calculus; prospect theory; rationality (ID#: 15-5625)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758993&isnumber=6758592
Schonfeld, M.; Werner, M., "Distributed Privacy-Preserving Mean Estimation," Privacy and Security in Mobile Systems (PRISMS), 2014 International Conference on, pp. 1, 8, 11-14 May 2014. doi: 10.1109/PRISMS.2014.6970597
Abstract: Due to the rise of mobile computing and smartphones, a lot of information about groups has become accessible. This information must often be kept secret, hence distributed algorithms for privacy-preserving distribution estimation are needed. Most research currently focuses on privacy in a database, where a single entity has collected the secret information and privacy is ensured between query results and the database. In fully distributed systems such as sensor networks, it is often infeasible to move the data towards a central entity for processing; instead, distributed algorithms are needed. With this paper we propose a fully distributed, privacy-friendly, consensus-based approach. In our approach all nodes cooperate to generate a sufficiently random obfuscation of their secret values until the estimated and obfuscated values of the individual nodes can be safely published. Calculations can then be done on this replacement, which contains only non-secret values yet recovers some aspects (mean, standard deviation) of the original distribution.
Keywords: data privacy; database management systems; estimation theory; mobile computing; query processing; smart phones; database; distributed algorithms; distributed privacy-preserving mean estimation; information privacy; mobile computing; query results; secret information; smartphones; Distributed databases; Estimation; Peer-to-peer computing; Privacy; Public key; Standards (ID#: 15-5626)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970597&isnumber=6970591
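A toy version of the consensus-based obfuscation conveys the idea: if every pair of nodes agrees on a shared random value that one adds and the other subtracts, each published value is individually obfuscated while the sum, and hence the mean, is preserved. This is a simplification of the paper's protocol; the names and parameters below are assumptions.

```python
# Toy sketch of consensus-style obfuscation: each pair of nodes agrees on a
# random value that one adds and the other subtracts, so published values are
# individually obfuscated while the mean is preserved (a simplification of
# the paper's distributed protocol).
import random

def obfuscate(secret_values, scale=100.0, seed=42):
    rng = random.Random(seed)
    published = list(secret_values)
    n = len(published)
    for i in range(n):
        for j in range(i + 1, n):
            r = rng.uniform(-scale, scale)   # pairwise shared randomness
            published[i] += r
            published[j] -= r
    return published

values = [3.2, 7.9, 5.5, 4.1]                # the nodes' secret values
public = obfuscate(values)
assert abs(sum(public) / len(public) - sum(values) / len(values)) < 1e-9
```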
Andersen, A.; Yigzaw, K.Y.; Karlsen, R., "Privacy Preserving Health Data Processing," e-Health Networking, Applications and Services (Healthcom), 2014 IEEE 16th International Conference on, pp. 225, 230, 15-18 Oct. 2014. doi: 10.1109/HealthCom.2014.7001845
Abstract: The usage of electronic health data from different sources for statistical analysis requires a toolset in which the legal, security, and privacy concerns have been taken into consideration. The health data are typically located at different general practices and hospitals. The data analysis consists of local processing at these locations, and the locations become nodes in a computing graph. To address the legal, security, and privacy concerns, the proposed toolset for statistical analysis of health data uses a combination of secure multi-party computation (SMC) algorithms, symmetric and public key encryption, and a public key infrastructure (PKI) with certificates and a certificate authority (CA). The toolset should cover a wide range of data analyses with different data distributions; to achieve this, a large set of possible SMC algorithms and computing graphs has to be supported.
Keywords: authorisation; data analysis; data privacy; electronic health records; graph theory; public key cryptography; statistical analysis; CA; PKI; SMC algorithms; certificate authority; computing graph; data analysis; data privacy; electronic health data processing; public key encryption; public key infrastructure; secure multiparty computation; statistical analysis; Data privacy; Encryption; Privacy; Public key; Receivers; Snow (ID#: 15-5627)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7001845&isnumber=7001800
Fathabadi, Z.F.; Nogoorani, S.D.; Hemmatyar, A.M., "CR-SMTC: Privacy Preserving Collusion-Resistant Multi-Party Trust Computation," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 167, 172, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994042
Abstract: The ever-increasing use of trust and reputation models has posed new challenges in distributed environments. One of these challenges is the computation of trust while preserving the privacy of feedback providers. This is because some people may report a dishonest value due to social pressure or fear of the consequences. In this paper, we propose a privacy-preserving collusion-resistant multi-party trust computation scheme which uses data perturbation and homomorphic encryption to preserve the privacy of feedback. Our scheme consists of two protocols, for private summation (S-protocol) and inner product (P-protocol). Our protocols are resistant to collusion of up to m+1 and m+2 agents, respectively, where m is a configurable parameter. In addition, their computational complexities are O(nm) and O(n(m+h)), respectively, where n is the number of agents and h is the complexity of the homomorphic encryption algorithm. We compare our protocols with related work and show their superiority in terms of collusion-resilience probability as well as complexity.
Keywords: computational complexity; cryptographic protocols; data privacy; trusted computing; CR-SMTC; O(n(m+h)) computational complexity; O(nm) computational complexity; P-protocol; S-protocol; collusion resistant protocols; collusion-resilience probability; configurable parameter; data perturbation; dishonest value; distributed environments; feedback provider privacy preservation; homomorphic encryption; homomorphic encryption algorithm complexity; inner product protocols; privacy-preserving collusion-resistant multiparty trust computation scheme; private summation protocols; reputation model; social pressure; trust computation; trust model; Complexity theory; Computational modeling; Encryption; Privacy; Protocols; Resistance; collusion attack; computational trust; data perturbation; homomorphic encryption; privacy preservation (ID#: 15-5628)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994042&isnumber=6994006
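The private-summation idea behind the S-protocol can be illustrated with any additively homomorphic scheme. The sketch below uses a textbook Paillier cryptosystem, under which the product of ciphertexts decrypts to the sum of plaintexts; it is a stand-in chosen for clarity, not the paper's exact protocol, and the toy key size is far too small for real use.

```python
# Toy Paillier cryptosystem illustrating additively homomorphic private
# summation (textbook construction with tiny demo primes; not the paper's
# exact protocol and not secure at this key size).
from math import gcd
import secrets

p, q = 1789, 1847                 # toy primes; real keys use ~1024-bit p, q
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)

def encrypt(m):
    while True:
        r = secrets.randbelow(n - 1) + 1
        if gcd(r, n) == 1:
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Homomorphic property: the product of ciphertexts decrypts to the sum of
# plaintexts, so an aggregator can total feedback values it cannot read.
feedback = [4, 7, 2, 9]
total_ct = 1
for v in feedback:
    total_ct = (total_ct * encrypt(v)) % n2
assert decrypt(total_ct) == sum(feedback)
```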
Borges, F.; Martucci, L.A.; Beato, F.; Mühlhäuser, M., "Secure and Privacy-Friendly Public Key Generation and Certification," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 114, 121, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.19
Abstract: Digital societies increasingly rely on secure communication between parties. Certificate enrollment protocols are used by certificate authorities to issue public key certificates to clients. Key agreement protocols, such as Diffie-Hellman, are used to compute secret keys, using public keys as input, for establishing secure communication channels. Whenever the keys are generated by clients, the bootstrap process requires either (a) out-of-band verification for certification of the client-generated keys, or (b) a trusted server to generate both the public and secret parameters. This paper presents a novel constrained key agreement protocol, built upon a constrained Diffie-Hellman, which is used to generate a secure public-private key pair and to set up a certification environment without disclosing the private keys. In this way, the servers can guarantee that the generated key parameters are safe, and the clients do not disclose any secret information to the servers.
Keywords: cryptographic protocols; data privacy; private key cryptography; public key cryptography; telecommunication security; bootstrap process; certificate authorities; certificate enrollment protocols; certification environment; constrained Diffie-Hellman; digital societies; key agreement protocols; out-of-band verification; privacy-friendly public key generation; public key certificates; secret information; secret keys; secure communication channels; secure public-private key pair; Complexity theory; DH-HEMTs; Protocols; Public key; Servers; Zinc; Certification; Privacy; Protocol; Public Key Generation; Security (ID#: 15-5629)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011241&isnumber=7011202
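For reference, the unconstrained Diffie-Hellman exchange that the paper's constrained variant builds on looks like this; the group parameters below are demo values only (real deployments use vetted 2048-bit-plus groups), and the constrained protocol's server-side checks are not shown.

```python
# Plain (unconstrained) Diffie-Hellman key agreement for reference; the
# paper builds a constrained variant on top of this exchange so a server
# can vouch for the resulting key pair (toy 64-bit prime, demo only).
import secrets

p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a small prime used here for illustration
g = 5

a = secrets.randbelow(p - 2) + 1        # client's secret exponent
b = secrets.randbelow(p - 2) + 1        # server's secret exponent
A = pow(g, a, p)                        # public values exchanged in the clear
B = pow(g, b, p)
assert pow(B, a, p) == pow(A, b, p)     # both sides derive the same shared key
```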
Lei Pan; Bangay, S., "Generating Repudiable, Memorizable, and Privacy Preserving Security Questions Using the Propp Theory of Narrative," Cyber-Enabled Distributed Computing and Knowledge Discovery (CyberC), 2014 International Conference on, pp. 66, 72, 13-15 Oct. 2014. doi: 10.1109/CyberC.2014.20
Abstract: Security questions are often based on personal information that is limited in variety, available in the public record, and very difficult to change if compromised. A personalized folktale shared only by the communicating parties provides a memorizable basis for individualized security questions that can be readily replaced in the event of a security breach. We utilize the Propp theory of narrative to provide a basis of abstraction for story generation systems. We develop a proof-of-concept system based on placeholder replacement to demonstrate the generation of repudiable and memorizable questions and answers suitable for online security questions. A 3-component protocol is presented that demonstrates the use of this process to derive a shared secret key through privacy amplification. This combination of story generation and communication security provides the basis for improvements in current security question practice.
Keywords: data privacy; protocols; 3-component protocol; Propp theory of narrative; communication security; online security questions; personal information; personalized folktale; placeholder replacement; privacy amplification; privacy preserving security questions; public record; security breach; story generation systems; Authentication; Context; Prediction algorithms; Privacy; Protocols; Servers; Propp theory of narrative; authentication; automated text generation; privacy; security; security question; story synthesis (ID#: 15-5630)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984283&isnumber=6984259
Ma, J.; Weining Yang; Min Luo; Ninghui Li, "A Study of Probabilistic Password Models," Security and Privacy (SP), 2014 IEEE Symposium on, pp. 689, 704, 18-21 May 2014. doi: 10.1109/SP.2014.50
Abstract: A probabilistic password model assigns a probability value to each string. Such models are useful for research into understanding what makes users choose more (or less) secure passwords, and for constructing password strength meters and password cracking utilities. Guess number graphs generated from password models are a widely used method in password research. In this paper, we show that probability-threshold graphs have important advantages over guess-number graphs. They are much faster to compute, and at the same time provide information beyond what is feasible in guess-number graphs. We also observe that research in password modeling can benefit from the extensive literature in statistical language modeling. We conduct a systematic evaluation of a large number of probabilistic password models, including Markov models using different normalization and smoothing methods, and found that, among other things, Markov models, when done correctly, perform significantly better than the Probabilistic Context-Free Grammar model proposed in Weir et al., which has been used as the state-of-the-art password model in recent research.
Keywords: Markov processes; graph theory; probability; security of data; Markov models; guess number graphs; password cracking utilities; password strength meters; probabilistic password models; probability-threshold graphs; secure passwords; statistical language modeling; Computational modeling; Dictionaries; Educational institutions; Markov processes; Probabilistic logic; Testing; Training (ID#: 15-5631)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956595&isnumber=6956545
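A character-level Markov model with additive smoothing, one of the model families the paper evaluates, fits in a few lines. The training corpus, alphabet, and start/end markers below are illustrative assumptions; a real evaluation would train on millions of leaked passwords.

```python
# Character-level Markov password model with additive (Laplace) smoothing,
# a minimal sketch of one model family the paper evaluates.
import math
from collections import defaultdict

ALPHABET = [chr(c) for c in range(32, 127)] + ['\x03']   # printable + end mark

def train(passwords):
    counts = defaultdict(lambda: defaultdict(int))
    for pw in passwords:
        prev = '\x02'                       # start marker
        for ch in pw + '\x03':
            counts[prev][ch] += 1
            prev = ch
    return counts

def log_prob(pw, counts, alpha=1.0):
    """Smoothed log-probability; lower values suggest a stronger password."""
    lp, prev = 0.0, '\x02'
    for ch in pw + '\x03':
        total = sum(counts[prev].values())
        lp += math.log((counts[prev][ch] + alpha) / (total + alpha * len(ALPHABET)))
        prev = ch
    return lp

counts = train(["password", "letmein", "pass1234", "dragon"])
print(log_prob("password1", counts), log_prob("zXq#9!Lk", counts))
```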
Sardana, N.; Cohen, R., "Validating Trust Models Against Realworld Data Sets," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp. 355, 362, 23-24 July 2014. doi: 10.1109/PST.2014.6890960
Abstract: In order to validate a particular approach to trust modeling, researchers have typically designed simulations in which various multiagent conditions are modeled and tested. Graphs have tracked different measures to demonstrate the success of the proposed trust model, including satisfaction of buying agents, profit of selling agents (in e-marketplaces), or the extent to which the simulation matched some ground truth for the user. In this paper we report on an effort to locate and employ existing datasets with information about real users, in order to validate a trust model. We describe how Reddit and Epinions datasets can be put to good use towards this end. In addition to describing what we did for the validation of our own trust model, we reflect on how other trust modeling researchers may perform a similar process, of benefit for their own empirical studies.
Keywords: graph theory; multi-agent systems; trusted computing; Epinions datasets; Reddit datasets; buying agents; e-marketplaces; graphs; multiagent conditions; selling agents; trust models; Blades; Computational modeling; Data models; Decision making; Educational institutions; Measurement; Testing (ID#: 15-5632)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890960&isnumber=6890911
Mashayekhy, L.; Nejad, M.M.; Grosu, D., "A Framework for Data Protection in Cloud Federations," Parallel Processing (ICPP), 2014 43rd International Conference on, pp. 283, 290, 9-12 Sept. 2014. doi: 10.1109/ICPP.2014.37
Abstract: One of the benefits of cloud computing is that a cloud provider can dynamically scale up its resource capabilities by forming a cloud federation with other cloud providers. Forming cloud federations requires taking data privacy and security concerns into account, which is critical in satisfying the Service Level Agreements (SLAs). The nature of privacy and security challenges in clouds requires that cloud providers design data protection mechanisms that work together with their resource management systems. In this paper, we consider the privacy requirements when outsourcing data and computation within a federation of clouds, and propose a framework for minimizing the cost of outsourcing while considering two key data protection restrictions, the trust and disclosure restrictions. We model these restrictions as conflict graphs, and formulate the problem as an integer program. In the absence of computationally tractable optimal algorithms for solving this problem, we design a fast heuristic algorithm. We analyze the performance of our proposed algorithm through extensive experiments.
Keywords: cloud computing; data privacy; graph theory; integer programming; SLA; cloud computing; cloud federation; conflict graph; data privacy; data protection; data security; disclosure restriction; integer programming; resource management system; service level agreement; trust restriction; Algorithm design and analysis; Cloud computing; Data privacy; Measurement; Outsourcing; Partitioning algorithms; Security; cloud computing; data protection; federation formation; virtual machine placement (ID#: 15-5633)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957237&isnumber=6957198
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
DNA Cryptography, 2014 |
DNA-based cryptography is a developing interdisciplinary area combining cryptography, mathematical modeling, biochemistry and molecular biology as the basis for encryption. Research includes authentication, steganography, and masking. This research was presented in 2014.
Wei Kang; Daming Cao; Nan Liu, "Authentication With Side Information," Information Theory (ISIT), 2014 IEEE International Symposium on, pp. 1722, 1726, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6875128
Abstract: In this paper, we study the probability of successful deception of an uncompressed biometric authentication system with side information at the adversary. It represents the scenario where the adversary may have correlated side information, e.g., a partial finger print or a DNA sequence of a relative of the legitimate user. We find the optimal exponent of the deception probability by proving both the achievability and the converse. Our proofs are based on the connection between the problem of deception with side information and the rate distortion problem with side information at both the encoder and decoder.
Keywords: biometrics (access control); cryptography; decoding; message authentication; probability; DNA sequence; correlated side information; deception probability; decoder; encoder; legitimate user; partial finger print; uncompressed biometric authentication system; Authentication; Decoding; Distortion measurement; Educational institutions; Encoding; Rate-distortion (ID#: 15-5639)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875128&isnumber=6874773
Dobrescu, L., "Electronic Recording Of Radiation Effective Doses In Medical Imaging Investigations," Electrical and Power Engineering (EPE), 2014 International Conference and Exposition on, pp. 793, 796, 16-18 Oct. 2014. doi: 10.1109/ICEPE.2014.6970019
Abstract: The continuously increasing number of medical investigations using radiological methods imposes the necessity of recording radiation effective doses for all patients so investigated. In Romania, an applied national research project has developed a pilot study that analyzes and records such data using a patient database, electronic cards for patients and doctors, and a secured infrastructure based on public keys. The effective doses received by patients in many types of medical investigations are calculated, transformed, stored, and accumulated.
Keywords: data analysis; data recording; dosimetry; electronic health records; public key cryptography; radiology; data analyses; data recording; doctors; electronic cards; electronic recording; medical imaging; medical investigations; patient database; public key infrastructure; radiation effective doses; radiological methods; Biomedical imaging; Cancer; Computed tomography; DNA; Measurement units; X-rays; CTDI; DAP; DLP; LNT model; radiation effective dose (ID#: 15-5640)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970019&isnumber=6969853
Jain, S.; Bhatnagar, V., "A Novel DNA Sequence Dictionary Method For Securing Data In DNA Using Spiral Approach And Framework Of DNA Cryptography," Advances in Engineering and Technology Research (ICAETR), 2014 International Conference on, pp. 1, 5, 1-2 Aug. 2014. doi: 10.1109/ICAETR.2014.7012924
Abstract: DNA cryptography is a new branch of information security. It encrypts information in the form of DNA nucleotides (A, T, G and C), making use of the vast storage capacity of DNA and its biological properties: a highly stable molecule that is durable, cost effective, and easily available. DNA cryptography is a combination of the biological and computer science domains, so a researcher in this field must have knowledge of DNA, computer science, and information security methods. This paper provides a framework for DNA cryptography and summarizes the algorithms used for securing data stored in DNA. The authors also propose a new method for securing data in the form of a DNA sequence. The proposed method provides security at two levels, using spiral transposition and a DNA sequence dictionary table.
Keywords: DNA; biocomputing; cryptography; DNA cryptography; DNA nucleotides; DNA sequence dictionary method; computer science; information security; spiral approach; Assembly; DNA; Dictionaries; Encryption; Image coding; Uniform resource locators; DNA; DNA Cryptography; DNA sequence Dictionary; Spiral Transposition (ID#: 15-5641)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7012924&isnumber=7012782
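The two levels the abstract describes, base encoding and spiral transposition, can be sketched as follows. The 2-bits-per-base mapping and the clockwise spiral below are common conventions assumed for illustration; the paper's dictionary table and exact transposition rule may differ.

```python
# A common 2-bits-per-nucleotide encoding plus a simple spiral transposition,
# sketching the two levels the paper describes (the exact dictionary table
# and spiral rule in the paper may differ; this mapping is an assumption).
BASES = "ATGC"   # 00->A, 01->T, 10->G, 11->C (one common convention)

def to_dna(data: bytes) -> str:
    return "".join(BASES[(b >> s) & 3] for b in data for s in (6, 4, 2, 0))

def spiral_read(grid):
    """Read a 2-D list of characters in a clockwise inward spiral."""
    out = []
    while grid:
        out += grid.pop(0)                          # peel off the top row
        grid = [list(r) for r in zip(*grid)][::-1]  # rotate the rest CCW
    return "".join(out)

dna = to_dna(b"SECRET!!")                             # 8 bytes -> 32 bases
grid = [list(dna[i:i + 8]) for i in range(0, 32, 8)]  # 4 x 8 matrix
print(spiral_read(grid))
```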
Sreeja, C.S.; Misbahuddin, M.; Mohammed Hashim, N.P., "DNA For Information Security: A Survey On DNA Computing And A Pseudo DNA Method Based On Central Dogma Of Molecular Biology," Computer and Communications Technologies (ICCCT), 2014 International Conference on, pp. 1, 6, 11-13 Dec. 2014. doi: 10.1109/ICCCT2.2014.7066757
Abstract: Biology is a life science with high significance for the quality of life, and information security is an aspect of social edification that human beings will never compromise. Both are subjects of high relevance and inevitable for mankind, so an amalgamation of the two naturally yields useful technology, for either security or data storage, known as bio computing. The secure transfer of information has been a major concern since ancient civilizations. Various techniques have been proposed to maintain the security of data so that only the intended recipient, and no one besides the sender, is able to read the message; these practices became more significant with the introduction of the Internet. Information varies from big data to a single word, but every piece of information requires proper storage and protection, which is a major concern. Cryptography is the art or science of secrecy that protects information from unauthorized access. Various techniques have evolved through the years for information protection, including ciphers, cryptography, steganography, biometrics, and recently DNA for security. DNA cryptography was a major breakthrough in the field of security: it uses bio-molecular concepts and gives new hope of unbreakable algorithms. This paper discusses the various DNA-based cryptographic methods proposed to date. It also proposes a DNA symmetric algorithm based on pseudo-DNA cryptography and the central dogma of molecular biology. The suggested algorithm uses splicing and padding techniques along with complementary rules, which make the algorithm more secure by adding a layer of security beyond conventional cryptographic techniques.
Keywords: DNA; cryptography; data protection; medical computing; molecular biophysics; DNA based cryptographic method; DNA computing; DNA cryptography; DNA symmetric algorithm; bio computing; bio-molecular concept; central dogma; complementary rules; data storage; information protection; information security; information storage; life quality; life science; molecular biology; padding technique; pseudoDNA cryptography; secure information transfer; security layer; splicing technique; unauthorized access; utility technology; Ciphers; DNA; DNA computing; Encryption; Authentication; Central dogma of molecular biology; DNA; DNA Cryptography; DNA Steganography; Information security (ID#: 15-5642)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7066757&isnumber=7066690
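The central-dogma steps the proposed pseudo-DNA algorithm builds on, transcription (DNA to mRNA) and translation (mRNA codons to amino acids), are shown below with a small excerpt of the standard genetic code; the paper's splicing, padding, and complementary rules are not reproduced.

```python
# Sketch of the central-dogma steps the pseudo-DNA scheme builds on:
# DNA -> mRNA (transcription) -> amino acids (translation). The codon
# table here is a small excerpt of the standard genetic code.
COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}   # DNA template -> mRNA
CODONS = {  # mRNA codon -> amino acid (excerpt)
    "AUG": "M", "UUU": "F", "GGC": "G", "UAC": "Y",
    "CAU": "H", "GCA": "A", "UAA": "*",                 # '*' = stop
}

def transcribe(dna: str) -> str:
    return "".join(COMPLEMENT[b] for b in dna)

def translate(mrna: str) -> str:
    return "".join(CODONS.get(mrna[i:i + 3], "?") for i in range(0, len(mrna) - 2, 3))

dna = "TACAAACCGATG"                 # template strand (illustrative)
mrna = transcribe(dna)               # -> "AUGUUUGGCUAC"
print(mrna, translate(mrna))         # -> AUGUUUGGCUAC MFGY
```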
Jain, S.; Bhatnagar, V., "Analogy Of Various DNA Based Security Algorithms Using Cryptography And Steganography," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp. 285, 291, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781294
Abstract: Information technology, and with it the rate of information storage and transmission, is growing rapidly, so information security is becoming ever more important. Everyone wants to protect their information from attackers and hackers. Traditional cryptography and steganography provide various algorithms for securing information, and the new field of DNA cryptography has emerged to secure data stored in DNA; it uses the bio-molecular computational abilities of DNA. In this paper the authors compare various DNA cryptographic algorithms on key parameters. These parameters should also help future researchers design or improve DNA storage techniques for secure data storage in a more efficient and reliable manner. The authors also explain the different biological and arithmetic operators used in DNA cryptographic algorithms.
Keywords: DNA; cryptography; information storage; steganography; DNA based security algorithms; DNA cryptography; DNA storage techniques; arithmetic operators; biological operators; biomolecular computational abilities; information security; information storage; information technology; information transformation; steganography; Biological information theory; DNA; Encryption; Facsimile; Arithmetic; Biological; Cryptography; DNA; Steganography (ID#: 15-5643)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781294&isnumber=6781240
Fasila, K.A.; Antony, D., "A Multiphase Cryptosystem with Secure Key Encapsulation Scheme Based on Principles of DNA Computing," Advances in Computing and Communications (ICACC), 2014 Fourth International Conference on, pp. 1, 4, 27-29 Aug. 2014. doi: 10.1109/ICACC.2014.7
Abstract: DNA cryptography is an upcoming field in the cryptography area. We introduce a hybrid cryptography method based on RGB colors, where the security of the data is improved by a data encryption algorithm and that of the key is enhanced by an algorithm based on DNA steganography. Information can be encoded using any scheme such as ASCII or Unicode. Plain text is converted to matrix form, which is then passed through a number of manipulation steps. Security is further enhanced by using a strong key, which is encapsulated using the DNA steganography method. As the next layer of security we propose encryption using DNA bases and amino acids, for which some DNA techniques such as coding are used. The fundamental idea is to provide a multiphase cryptosystem by combining DNA and amino acid concepts with other security techniques. Finally, the cipher form is converted to colors to improve the level of security.
Keywords: DNA; biocomputing; cryptography; steganography; DNA computing; DNA cryptography; DNA steganography method; RGB colors; amino acids; amino acids concepts; data encryption algorithm; hybrid cryptography method; multiphase cryptosystem; secure key encapsulation scheme; Ciphers; DNA; Encoding; Encryption; Image color analysis; Matrix converters; DNA coding; LS Base; RGB color conversion; Secure key generation; UNICODE encoding; matrix manipulation cycle (ID#: 15-5644)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6905975&isnumber=6905967
Saranya, M.R.; Mohan, A.K.; Anusudha, K., "A Composite Image Cipher Using DNA Sequence And Genetic Algorithm," Contemporary Computing and Informatics (IC3I), 2014 International Conference on, pp.1022, 1026, 27-29 Nov. 2014. doi: 10.1109/IC3I.2014.7019805
Abstract: A composite algorithm for improved image security is proposed, taking advantage of DNA-based image encryption and evolutionary algorithms (EA). A number of deoxyribonucleic acid (DNA) masks are created using a logistic map function and DNA conversion rules. Encryption is then performed on the plain image to generate a number of cipher images. Finally, a genetic algorithm (GA) is applied to find the best DNA mask. Simulation results show that the proposed scheme improves the level of security.
Keywords: DNA; biocomputing; cryptography; genetic algorithms; image coding; DNA based image encryption; DNA conversion rules; DNA masks; DNA sequence; composite algorithm; composite image cipher; deoxyribonucleic acid masks; evolutionary algorithms; genetic algorithm; image security; logistic map function; Algorithm design and analysis; Ciphers; DNA; Encryption; Genetic algorithms; Histograms; Deoxyribonucleic acid (DNA); Evolutionary algorithm (EA); Image encryption; Logistic map function (ID#: 15-5645)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019805&isnumber=7019573
Chouhan, D.S.; Mahajan, R.P., "An Architectural Framework For Encryption & Generation Of Digital Signature Using DNA Cryptography," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp.743,748, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828061
Abstract: With many modern encryption algorithms fully or partially broken, the world of information security looks in new directions to protect the data it transmits. The concept of using DNA computing in cryptography has been identified as a possible technology that may bring forward a new hope for hybrid and unbreakable algorithms. Several DNA computing algorithms have been proposed for cryptography, cryptanalysis, and steganography problems, and they have proven to be very powerful in these areas. This paper gives an architectural framework for encryption and generation of digital signatures using DNA cryptography. To analyze performance, the original plaintext size and the key size, together with the encryption and decryption times, are examined; experiments on plaintexts with different contents are also performed to test the robustness of the program.
Keywords: biocomputing; digital signatures; DNA computing; DNA cryptography; architectural framework; cryptanalysis; decryption time; digital signature encryption; digital signature generation; encryption algorithms; encryption time; information security; key size; plaintext size; steganography; Ciphers; DNA; DNA computing; Digital signatures; Encoding; Encryption; DNA; DNA computing DNA cryptography; DNA digital coding (ID#: 15-5646)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828061&isnumber=6827395
Majumder, A.; Majumdar, A.; Podder, T.; Kar, N.; Sharma, M., "Secure Data Communication And Cryptography Based On DNA Based Message Encoding," Advanced Communication Control and Computing Technologies (ICACCCT), 2014 International Conference on, pp. 360, 363, 8-10 May 2014. doi: 10.1109/ICACCCT.2014.7019464
Abstract: Secure data communication is the most important and essential issue in message transmission over networks, and cryptography provides the means of making messages secure for confidential transfer. Cryptography is the process of transforming the sender's message into a secret format, called cipher text, whose meaning only the intended receiver can understand. Various cryptographic and DNA-based encoding algorithms have been proposed for producing secret messages, but these DNA-based encryption algorithms are not secure enough when compared with today's security requirements. In this paper, we propose an encryption technique that enhances message security. The proposed algorithm uses a new method of DNA-based encryption with a strong 256-bit key. Along with this large key, various other encoding tools are used as keys in the message encoding process, such as a random series of DNA bases and modified DNA base coding. Moreover, a new method of round key selection is given to provide better security. The cipher text carries extra bits of information, as DNA strands do, providing enhanced security against intruder attacks.
Keywords: cryptography; DNA based encryption algorithm; DNA based message encoding; cipher text; confidential message transfer; cryptography; data communication security; Cryptography; DNA; Digital audio players; Ciphertext; Coded message; DNA sequence; Encoding tools; Final Cipher (ID#: 15-5647)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019464&isnumber=7019129
Jain, S.; Bhatnagar, V., "Bit Based Symmetric Encryption Method Using DNA Sequence," Confluence, The Next Generation Information Technology Summit (Confluence), 2014 5th International Conference, pp. 495, 498, 25-26 Sept. 2014. doi: 10.1109/CONFLUENCE.2014.6949360
Abstract: In the present era of technological development, threats to data security grow exponentially, and it is very difficult to secure data using traditional cryptographic and steganographic approaches alone. DNA cryptography is a new field in the area of information security, combining the biological and computer domains. It provides security by using the properties of DNA along with other arithmetic operations. In this paper, the authors present a new method for securing data using a DNA sequence as a key together with complementary rule pairs. This method can encrypt any data, whether an image, audio, video, or text file. The proposed method's strength depends upon the complexity of the key and the complementary rules, and it improves the level of security since the actual data is never transmitted.
Keywords: biocomputing; cryptography; DNA cryptography; DNA sequence; bit based symmetric encryption; cryptographic approach; data security; information security; steganographic approach; technological development; Biological cells; DNA; Encryption; Receivers; DNA; DNA Chromosome; DNA Cryptography; DNA Sequence (ID#: 15-5648)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6949360&isnumber=6949036
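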
Sasikumar, S.; Karthigaikumar, P., "VLSI Implementation Of DNA Cryptography Using Quantum Key Exchange," Electronics and Communication Systems (ICECS), 2014 International Conference on, pp. 1, 5, 13-14 Feb. 2014. doi: 10.1109/ECS.2014.6892822
Abstract: In today's world, security is a fundamental and significant issue in data transmission, and technology advances daily in the search for new cryptographic algorithms. Recent advancements in cryptography have led to new techniques called DNA-based cryptography and quantum cryptography. Here, ideas from both quantum physics and molecular biology are applied, and an efficient scheme is proposed. FPGA implementation provides better and faster results compared to other environments, hence data security is high.
Keywords: DNA; VLSI; data communication; field programmable gate arrays; molecular biophysics; quantum cryptography; DNA cryptography; FPGA implementation; VLSI implementation; data security; data transmission; deoxyribonucleic acid cryptography; molecular biology; quantum cryptography; quantum key exchange; quantum physics; Biological information theory; Ciphers; DNA; Encryption; Photonics; AES; Cryptography; DNA; Decryption; Encryption; FPGA; Quantum Cryptography (ID#: 15-5649)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6892822&isnumber=6892507
Yongnan Li; Limin Xiao, "Arithmetic Computation Using Self-Assembly Of DNA Tiles: Integer Power Over Finite Field GF(2^n)," Bioinformatics and Biomedicine (BIBM), 2014 IEEE International Conference on, pp. 471, 475, 2-5 Nov. 2014. doi: 10.1109/BIBM.2014.6999202
Abstract: DNA-based cryptography is a new developing interdisciplinary area which combines cryptography, mathematical modeling, biochemistry, and molecular biology. How to implement the arithmetic operations used in integer-power cryptosystems with DNA computing is still an open question. This paper proposes a DNA computing model to compute integer powers over the finite field GF(2^n). Computation tiles performing five different functions assemble onto the seed configuration with the inputs to produce the result. The paper describes how the computation tiles are coded in bits and how the assembly rules work. The assembly time complexity is 2n^2 + n - 1 and the space complexity is n^4 + n^3. This model requires 6436 types of computation tiles and 12 types of boundary tiles.
Keywords: DNA; biochemistry; biology computing; computational complexity; cryptography; molecular biophysics; molecular configurations; self-assembly; DNA computing model; DNA-based cryptography; arithmetic computation; biochemistry; cryptosystem; finite field GF(2^n); integer power; interdisciplinary area; mathematical modeling; molecular biology; seed configuration; self-assembly; space complexity; Assembly; Computational modeling; Conferences; DNA; DNA computing; Mathematical model; Self-assembly; DNA computing; Finite field GF(2^n); Integer power; Tile assembly model (ID#: 15-5650)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6999202&isnumber=6999111
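The underlying field arithmetic is easy to state in software, which helps clarify what the tile assembly computes. The sketch below implements carry-less multiplication modulo an irreducible polynomial and integer power by square-and-multiply, using n = 8 and the AES polynomial as an assumed example (the paper targets general n).

```python
# Carry-less arithmetic over GF(2^n): multiplication modulo an irreducible
# polynomial, then integer power by square-and-multiply, i.e. the operation
# the tile-assembly model computes in DNA (software reference sketch with
# n = 8 and the AES polynomial x^8 + x^4 + x^3 + x + 1 as an example).
N = 8
IRRED = 0b100011011          # AES field polynomial

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> N:           # reduce when the degree reaches n
            a ^= IRRED
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

# Sanity check: every nonzero element satisfies a^(2^n - 1) = 1.
assert all(gf_pow(a, 2**N - 1) == 1 for a in range(1, 256))
```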
Das, P.; Kar, N., "A Highly Secure DNA Based Image Steganography," Green Computing Communication and Electrical Engineering (ICGCCEE), 2014 International Conference on, pp. 1, 5, 6-8 March 2014. doi: 10.1109/ICGCCEE.2014.6921419
Abstract: DNA steganography is one of the emerging technologies in the field of covert data transmission. In this paper we are proposing a novel DNA based steganography which uses images as primary cover media. A theoretical single stranded DNA (ssDNA) or oligonucleotide is extracted from the image which is used as the secondary cover providing a huge amount of background noise for the secret data. This is why we call it a dual cover steganography. The pixel sequence contributing the ssDNA codon series construction is determined by a two dimensional chaotic map. Performance of the algorithm is tested against several visual and statistical attacks and parameterized in terms of both security and capacity.
Keywords: DNA; biology computing; image coding; steganography; DNA based image steganography; covert data transmission; dual cover steganography; oligonucleotide; single stranded DNA; ssDNA codon series construction; statistical attack; two dimensional chaotic map; visual attack; Cryptography; DNA; Data mining; Histograms; Image color analysis; Logistics; Payloads; DNA; DNA algebra; LSB; PSNR; histogram; image; logistic map; neighbourhood; primer; steganography (ID#: 15-5651)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921419&isnumber=6920919
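A minimal LSB-embedding sketch conveys the flavor of chaotic-order steganography: secret bits land in pixel least-significant bits in a key-dependent order. Here a seeded shuffle stands in for the paper's two-dimensional chaotic map, and the ssDNA secondary cover is omitted entirely, so this is a simplification in both respects.

```python
# Minimal LSB image-steganography sketch: secret bits are embedded into
# pixel least-significant bits in a key-dependent order (a seeded shuffle
# stands in for the paper's 2-D chaotic map; the ssDNA secondary cover
# is omitted).
import random

def embed(pixels, secret: bytes, key=1234):
    order = list(range(len(pixels)))
    random.Random(key).shuffle(order)            # key-dependent pixel sequence
    bits = [(b >> i) & 1 for b in secret for i in range(7, -1, -1)]
    out = list(pixels)
    for pos, bit in zip(order, bits):
        out[pos] = (out[pos] & ~1) | bit         # overwrite the LSB
    return out

def extract(pixels, nbytes, key=1234):
    order = list(range(len(pixels)))
    random.Random(key).shuffle(order)
    bits = [pixels[pos] & 1 for pos in order[:nbytes * 8]]
    return bytes(sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
                 for k in range(0, len(bits), 8))

cover = [random.randrange(256) for _ in range(512)]   # stand-in for image pixels
stego = embed(cover, b"hidden")
assert extract(stego, 6) == b"hidden"
```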
Chaudhary, H.; Bhatnagar, V., "Hybrid Approach For Secure Communication Of Data Using Chemical DNA," Confluence The Next Generation Information Technology Summit (Confluence), 2014 5th International Conference, pp. 967, 971, 25-26 Sept. 2014. doi: 10.1109/CONFLUENCE.2014.6949327
Abstract: Secret communication has traditionally relied on ciphers whose security depends on the computational effort required to attack them. Security is therefore limited by computation power: as attackers' computation power increases, which will happen in the near future, the level of security decreases considerably. To avoid this scenario, researchers propose either entirely different modes of cryptography that depend little on computation power, or hybrid approaches that combine computer-based techniques with elements independent of them to hide or encrypt data. Our proposed scheme uses a hybrid approach that divides the data into two parts: one part is encrypted using AES-128, a traditionally strong cipher, while the other part is hidden inside a synthetically manufactured DNA strand.
Keywords: DNA; biocommunications; biocomputing; cryptography; steganography; AES-128; DNA cryptography; DNA steganography; chemical DNA; ciphers; computation power; computer based approaches; data encryption; hybrid approach; information hiding; secret communications; secure data communication; security level; Ciphers; Computers; DNA; Receivers; US Department of Transportation; DNA cryptography; DNA steganography; Data embedding; Hybrid cryptosystems; Information hiding (ID#: 15-5652)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6949327&isnumber=6949036
Mokhtar, M.A.; Gobran, S.N.; El-Badawy, E.-S.A.-M., "Colored Image Encryption Algorithm Using DNA Code and Chaos Theory," Computer and Communication Engineering (ICCCE), 2014 International Conference on, pp. 12, 15, 23-25 Sept. 2014. doi: 10.1109/ICCCE.2014.17
Abstract: DNA computing and chaos theory are promising research areas in the field of cryptography. In this paper, a stream cipher algorithm for image encryption is introduced. The chaotic logistic map is used for confusing and diffusing the image pixels, and then a DNA sequence is used as a one-time pad (OTP) to change pixel values. Thanks to the OTP, the algorithm offers perfect secrecy, along with a good ability to resist statistical and differential attacks.
Keywords: biocomputing; cryptography; image colour analysis; DNA code; DNA computing; DNA sequence; OTP; chaos theory; chaotic logistic map; colored image encryption algorithm; cryptography; differential attacks; image pixels; one-time-pad; stream cipher algorithm; Abstracts; Ciphers; Computers; DNA; Encryption; Logistics; PSNR; Chaos theory; DNA cryptography; Image Encryption; Logistic map; one time pad OTP; stream Cipher; symmetrical encryption (ID#: 15-5653)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7031588&isnumber=7031550
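The core of the scheme, a chaotic keystream XORed over pixel values, can be sketched directly. The logistic-map parameters below are illustrative, and the DNA encoding of the pad plus the confusion/diffusion passes described in the abstract are omitted.

```python
# Sketch of a logistic-map keystream used as a one-time pad over pixel
# values, the core of the cited scheme (DNA encoding of the pad and the
# confusion/diffusion passes are omitted; parameters are illustrative).
def logistic_keystream(x0=0.3456, r=3.99, n=16):
    x, out = x0, []
    for _ in range(n):
        x = r * x * (1 - x)                 # chaotic iteration
        out.append(int(x * 256) % 256)      # quantize to a byte
    return out

def xor_pad(pixels, key_params):
    pad = logistic_keystream(*key_params, n=len(pixels))
    return [p ^ k for p, k in zip(pixels, pad)]

pixels = [12, 200, 45, 99, 180, 33, 77, 250]
cipher = xor_pad(pixels, (0.3456, 3.99))
assert xor_pad(cipher, (0.3456, 3.99)) == pixels   # XOR pad is its own inverse
```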
Manna, S.; Roy, S.; Roy, P.; Bandyopadhyay, S.K., "Modified Technique Of Insertion Methods For Data Hiding Using DNA Sequences," Automation, Control, Energy and Systems (ACES), 2014 First International Conference on, pp. 1, 5, 1-2 Feb. 2014. doi: 10.1109/ACES.2014.6807990
Abstract: Biological techniques have recently become popular in cryptographic applications. In one of the most interesting techniques, data is hidden in deoxyribonucleic acid (DNA). In this paper we propose a data-hiding insertion method based on DNA sequences, in which information is hidden at random positions within a DNA sequence. The method uses several procedures: random key generation, selection of the prime number succeeding the key value, cumulative XOR of the key value, and selection of a look-up table index mapping.
Keywords: DNA; cryptography; data encapsulation; table lookup; DNA sequences; biological techniques; cryptographic applications; cumulative XOR operation; data hiding insertion method; deoxyribo nuclic acid; information data hiding; look up table index mapping; random key generation; DNA; Encoding; Encryption; Indexes; Java; DNA; cryptography; cumulative XOR; look up table (ID#: 15-5654)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6807990&isnumber=6807973
Jangid, R.K.; Mohmmad, N.; Didel, A.; Taterh, S., "Hybrid Approach Of Image Encryption Using DNA Cryptography And TF Hill Cipher Algorithm," Communications and Signal Processing (ICCSP), 2014 International Conference on, pp. 934, 938, 3-5 April 2014. doi: 10.1109/ICCSP.2014.6949981
Abstract: The Hill cipher algorithm is a symmetric key algorithm that has many variations but is still not suited to all-zero plaintext blocks. A chosen-plaintext attack is possible on the Toorani and Falahati Hill Cipher algorithm using two closely related variants of the Hill cipher. In this paper we present a new approach to the Hill cipher (RDHill cipher) using DNA cryptography and the TFHill cipher to overcome the drawbacks of TFHill. We implement this algorithm for image encryption: first the image is converted into binary values, the nibbles of the binary values are rotated, and the result is converted into DNA and then into amino acids; second, the TFHill cipher is applied to the amino acids. The output's security level is measured using correlation, histogram, and entropy. The experimental results show that the combined technique yields a higher entropy value, lower correlation, and a more uniform histogram compared to the Hill cipher, affine Hill, TFHill, and SVK Hill ciphers. This implies good quality of the retrieved image compared to the original.
Keywords: biocomputing; cryptography; entropy; image coding; DNA cryptography; RDHill cipher; SVK Hill cipher; TF Hill cipher algorithm; TFHill; Toorani-Falahati Hill cipher algorithm; affine Hill; amino acids; binary value; chosen-plaintext attack; correlation; entropy value; histogram; hybrid approach; image conversion; image encryption; image quality; plaintext block; security level; symmetric key algorithm; Ciphers; DNA; Encryption; Hafnium compounds; Manganese; PSNR; Advanced Hill Cipher; Affine Hill Cipher; DNA Cryptography; Decryption; Hill Cipher; Image Encryption; SVK Hill Cipher; TF Hill Cipher (ID#: 15-5655)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6949981&isnumber=6949766
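For orientation, the textbook Hill cipher over Z26 that all these variants extend is shown below with the classic invertible 2x2 key; the TFHill and RDHill modifications (DNA and amino-acid encoding, nibble rotation) are not reproduced here.

```python
# Textbook Hill cipher over Z_26 for reference; the paper's RDHill scheme
# wraps a variant (TFHill) with DNA and amino-acid encoding steps that are
# not shown here. The 2x2 key below is the classic invertible example.
import numpy as np

KEY = np.array([[3, 3], [2, 5]])            # det = 9, invertible mod 26
KEY_INV = np.array([[15, 17], [20, 9]])     # KEY @ KEY_INV = I (mod 26)

def hill(text, key):
    nums = [ord(c) - ord('A') for c in text]
    nums += [0] * (len(nums) % 2)           # pad to the 2x1 block size
    out = []
    for i in range(0, len(nums), 2):
        block = key @ np.array(nums[i:i + 2]) % 26
        out += [chr(v + ord('A')) for v in block]
    return "".join(out)

ct = hill("HELP", KEY)
assert hill(ct, KEY_INV) == "HELP"          # decrypting with the inverse key
print(ct)
```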
Menaka, K., "Message Encryption Using DNA Sequences," Computing and Communication Technologies (WCCCT), 2014 World Congress on, pp. 182, 184, Feb. 27 2014-March 1 2014. doi: 10.1109/WCCCT.2014.35
Abstract: Data hiding is the skill of hiding messages in such a way that only the sender and the receiver of the message know that a message has been hidden. In the context of secure information transmission and reception, efficient techniques for data encryption and decryption are essential. Though many algorithms have been developed for hiding data, DNA sequence-based data encryption seems to be a promising strategy for fulfilling current information security needs. In this paper, an algorithm using DNA sequences for data hiding is proposed and discussed for secure data transmission and reception.
Keywords: DNA; cryptography; data encapsulation; medical computing; DNA sequence-based data encryption; data decryption; data encryption; data hiding; information security; message encryption; message hiding; message receiver; message sender; secure data reception; secure data transmission; secured information reception; secured information transmission; Algorithm design and analysis; Chemicals; DNA; Encoding; Encryption; Indexes; Receivers; DNA Sequences; Data Hiding; Secure Transmission and reception (ID#: 15-5656)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755134&isnumber=6755083
Chng Chern Wei, "DNA Approach For Password Conversion Generator," Biometrics and Security Technologies (ISBAST), 2014 International Symposium on, pp. 161, 165, 26-27 Aug. 2014. doi: 10.1109/ISBAST.2014.7013114
Abstract: Information technology has expanded and grown rapidly over the last decade, especially in the era of internet web technologies such as e-commerce, e-business, e-payment, and e-shopping. This evolution has made the transmission of data over the web pervasive, and thus data are easy for unauthorized persons to hack, crack, or spy on over the network. This paper proposes a cryptographic technique to make data more secure during transmission over the internet, based on DNA steganography combined with finite state machine (Mealy machine) theory. The proposed algorithm is able to secure data with at least three levels of combination for the password conversion.
Keywords: Internet; authorisation; cryptography; electronic commerce; finite state machines; steganography; DNA stenography; Internet; Mealy machine theory; Web technology; cryptography; e-business; e-commerce; e-payment; e-shopping; finite state machine; information technology; password conversion generator; Algorithm design and analysis; Automata; Computer science; Cryptography; DNA; Internet; DNA; automata; cryptographic; e-commence; security (ID#: 15-5657)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013114&isnumber=7013076
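A Mealy machine has exactly the property the paper exploits: the output symbol depends on both the current state and the input, so repeated input characters need not map to repeated outputs. The tiny machine below is invented for illustration; the paper's states, alphabet, and tables differ.

```python
# A tiny Mealy machine for password conversion: the output character depends
# on both the current state and the input, so identical inputs can map to
# different outputs at different positions (states and tables are invented
# for illustration; the paper's machine differs).
TRANSITION = {("S0", 0): "S1", ("S0", 1): "S0",
              ("S1", 0): "S0", ("S1", 1): "S1"}
OUTPUT = {("S0", 0): "G", ("S0", 1): "A",
          ("S1", 0): "T", ("S1", 1): "C"}

def convert(password: str) -> str:
    state, out = "S0", []
    for ch in password:
        bit = ord(ch) & 1                    # reduce each char to a 1-bit input
        out.append(OUTPUT[(state, bit)])     # output depends on state AND input
        state = TRANSITION[(state, bit)]
    return "".join(out)

print(convert("hunter2"))   # the same letter can yield different DNA symbols
```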
Murugan, A.; Jacob, G., "On the Secure Storage and Transmission of Health Care Records Based on DNA Sequences," Intelligent Computing Applications (ICICA), 2014 International Conference on, pp.118,121, 6-7 March 2014. doi: 10.1109/ICICA.2014.33
Abstract: With the rapidly changing technological realm, modern healthcare management systems need to change to accommodate new advances. There is an urgent need to protect the confidentiality of health care records when they are stored in common databases and transmitted over public, insecure channels. This paper outlines DNA sequence-based cryptography, which is easy to implement and is robust against any type of cryptographic attack, as there is insignificant correlation between the original record and the encrypted image.
Keywords: cryptography; electronic health records; health care; DNA sequence based cryptography; crypt attack; encrypted image; health care records; record storage; record transmission; records confidentiality; Correlation; DNA; Encryption; Medical diagnostic imaging; Medical services; DNA sequence based cryptography; Health records; Magic Square; healthcare management system (ID#: 15-5658)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6965023&isnumber=6964987
Lunawat, S.; Patankar, A., "Efficient Architecture For Secure Outsourcing Of Data And Computation In Hybrid Cloud," Optimization, Reliabilty, and Information Technology (ICROIT), 2014 International Conference on, pp. 380, 383, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798358
Abstract: Cloud computing provides a way for people to share private resources and services. Since data are shared between distributed resources via the network in an open environment, there is always a security threat; the lack of proper security control policies and weaknesses in protecting private data lead to many vulnerabilities in cloud computing. Cloud computing is a buzzword because of its performance, high availability, low cost, scalability, and more. The concept of cloud computing is to reduce the processing burden on the client by improving the cloud's ability to handle computation, using the client as a simple input/output device and performing the bulk of the computation in the cloud. Existing solutions, based on hardware and pure cryptographic techniques to solve these security and access control problems, suffer from heavy computational overhead. In this paper, we propose a protocol that uses a hybrid cloud architecture with highly secure DNA (deoxyribonucleic acid) matching. There is currently a tremendous need for secure DNA matching, as researchers are working on storing large DNA strings that require storage space that is not cost effective. In the proposed system, a private cloud (Eucalyptus) and a public cloud (Amazon) communicate with each other for DNA matching using garbled circuits. We aim for a system that is efficient in terms of cost, communication, memory, and matching results compared with other existing DNA matching systems.
Keywords: DNA; authorisation; biology computing; cloud computing; cryptography; outsourcing; software architecture; Amazon; DNA string; Eucalyptus; access control problems; cloud computing; computation outsourcing security; computational overhead; data outsourcing security; deoxyribonucleic acid; distributed resources; garbled circuits; hybrid cloud architecture; private cloud; private resource sharing; private service sharing; proper security control policy; public cloud; pure cryptographic techniques; secure DNA matching; Bioinformatics; Genomics; Security; Web services; Cloud Computing; Cloud Security; DNA Matching; Garbled Circuits (ID#: 15-5659)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798358&isnumber=6798279
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Data Deletion, 2014 |
The problem of "forgetting," that is, eliminating links and references used on the Internet to focus on a specific topic or reference, is an important issue related to privacy. "Forgetting," essentially a problem in data deletion, has many implications for security and for data structures, including distributed file structures. Of particular interest is the problem data deletion in the cloud. Articles published in 2014 are cited here.
Reardon, J.; Basin, D.; Capkun, S., "On Secure Data Deletion," Security & Privacy, IEEE, vol.12, no.3, pp.37,44, May-June 2014. doi: 10.1109/MSP.2013.159
Abstract: Secure data deletion is the task of deleting data from a physical medium, such as a hard drive, phone, or blackboard, so that the data is irrecoverable. This irrecoverability distinguishes secure deletion from regular file deletion, which deletes unneeded data only to reclaim resources. Users securely delete data to prevent adversaries from gaining access to it. In this article, we explore approaches to securely delete digital data, describe different adversaries' capabilities, and show how secure deletion approaches can be integrated into systems at different interface levels to protect against specific adversaries.
Keywords: data protection; security of data; adversary access prevention; data protection; regular file deletion; secure data deletion; Computer security; Data processing; File systems; Flash memories; Forensics; Hardware; Media; Privacy; Security (ID#: 15-5660)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6678339&isnumber=6824513
Zhen Mo; Qingjun Xiao; Yian Zhou; Shigang Chen, "On Deletion of Outsourced Data in Cloud Computing," Cloud Computing (CLOUD), 2014 IEEE 7th International Conference on, pp. 344, 351, June 27 2014-July 2 2014. doi: 10.1109/CLOUD.2014.54
Abstract: Data security is a major concern in cloud computing. After clients outsource their data to the cloud, will they lose control of the data? Prior research has proposed various schemes for clients to confirm the existence of their data on the cloud servers, and the goal is to ensure data integrity. This paper investigates a complementary problem: When clients delete data, how can they be sure that the deleted data will never resurface in the future if the clients do not perform the actual data removal themselves? How to confirm the non-existence of their data when the data is not in their possession? One obvious solution is to encrypt the outsourced data, but this solution has a significant technical challenge because a huge amount of key materials may have to be maintained if we allow fine-grained deletion. In this paper, we explore the feasibility of relieving clients from such a burden by outsourcing keys (after encryption) to the cloud. We propose a novel multi-layered key structure, called Recursively Encrypted Red-black Key tree (RERK), that ensures no key materials will be leaked, yet the client is able to manipulate keys by performing tree operations in collaboration with the servers. We implement our solution on the Amazon EC2. The experimental results show that our solution can efficiently support the deletion of outsourced data in cloud computing.
Keywords: cloud computing; cryptography; data integrity; trees (mathematics); Amazon EC2; RERK; cloud computing; data integrity; data security; encryption; fine-grained deletion; multilayered key structure; outsourced data deletion; recursively encrypted red-black key tree; Data privacy; Encryption; Materials; Polynomials; Servers (ID#: 15-5661)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6973760&isnumber=6973706
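To see why outsourcing wrapped keys can still support assured deletion, consider the following much-simplified sketch. It is our illustration, not the RERK construction: it wraps every block key under a client-held master key and re-wraps all keys on deletion, whereas RERK organizes keys in a red-black tree so that a deletion only requires tree operations along a path. The hash-based toy stream stands in for a real authenticated cipher such as AES-GCM.

```python
import os, hashlib

def keystream(key, n):
    # Derive n pseudorandom bytes from key (sketch only; use a real
    # AEAD cipher in practice).
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

class AssuredDeletionStore:
    """Flat stand-in for the paper's key tree: every block has its own
    key; block keys are stored wrapped under a client-held master key.
    Deleting a block discards its key and rotates the master, so old
    wrapped keys (even if the cloud retains them) become useless."""
    def __init__(self):
        self.master = os.urandom(32)
        self.wrapped_keys = {}   # block_id -> wrapped block key
        self.ciphertexts = {}    # block_id -> encrypted block

    def put(self, block_id, data):
        k = os.urandom(32)
        self.ciphertexts[block_id] = xor(data, keystream(k, len(data)))
        self.wrapped_keys[block_id] = xor(
            k, keystream(self.master + block_id.encode(), 32))

    def get(self, block_id):
        k = xor(self.wrapped_keys[block_id],
                keystream(self.master + block_id.encode(), 32))
        c = self.ciphertexts[block_id]
        return xor(c, keystream(k, len(c)))

    def assured_delete(self, block_id):
        del self.wrapped_keys[block_id]
        del self.ciphertexts[block_id]
        old, self.master = self.master, os.urandom(32)
        for bid in self.wrapped_keys:   # re-wrap the surviving keys
            k = xor(self.wrapped_keys[bid], keystream(old + bid.encode(), 32))
            self.wrapped_keys[bid] = xor(
                k, keystream(self.master + bid.encode(), 32))

store = AssuredDeletionStore()
store.put("block1", b"sensitive record")
store.put("block2", b"ordinary record")
store.assured_delete("block1")
print(store.get("block2"))   # still readable; block1 is gone for good
```

The cost of the flat re-wrap above is linear in the number of keys; reducing that cost is precisely what the paper's tree structure is for.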
Li Chaoling; Chen Yue; Zhou Yanzhou, "A Data Assured Deletion Scheme In Cloud Storage," Communications, China, vol. 11, no. 4, pp. 98-110, April 2014. doi: 10.1109/CC.2014.6827572
Abstract: In order to provide a practicable solution to data confidentiality in cloud storage service, a data assured deletion scheme is proposed which achieves fine-grained access control, resistance to hopping and sniffing attacks, data dynamics and deduplication. In our scheme, data blocks are encrypted by a two-level encryption approach, in which the control keys are generated from a key derivation tree, encrypted by an All-Or-Nothing algorithm and then distributed into a DHT network after being partitioned by secret sharing. This guarantees that only authorized users can recover the control keys and then decrypt the outsourced data within an owner-specified data lifetime. Besides confidentiality, data dynamics and deduplication are also achieved, separately, by adjustment of the key derivation tree and by convergent encryption. The analysis and experimental results show that our scheme can satisfy its security goal and perform assured deletion at low cost.
Keywords: authorisation; cloud computing; cryptography; storage management; DHT network; all-or-nothing algorithm; cloud storage; convergent encryption; data assured deletion scheme; data confidentiality; data deduplication; data dynamics; fine grained access control; key derivation tree; owner-specified data lifetime; sniffing attack resistance; two-level encryption approach; Artificial neural networks; Encryption; cloud storage; data confidentiality; data dynamics; secure data assured deletion (ID#: 15-5662)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827572&isnumber=6827540
Zhen Mo; Yan Qiao; Shigang Chen, "Two-Party Fine-Grained Assured Deletion of Outsourced Data in Cloud Systems," Distributed Computing Systems (ICDCS), 2014 IEEE 34th International Conference on, pp. 308-317, June 30 2014-July 3 2014. doi: 10.1109/ICDCS.2014.39
Abstract: With clients losing direct control of their data, this paper investigates an important problem of cloud systems: When clients delete data, how can they be sure that the deleted data will never resurface in the future if the clients do not perform the actual data removal themselves? How to guarantee inaccessibility of deleted data when the data is not in their possession? Using a novel key modulation function, we design a solution for two-party fine-grained assured deletion. The solution does not rely on any third-party server. Each client only keeps one or a small number of keys, regardless of how big its file system is. The client is able to delete any individual data item in any file without causing significant overhead, and the deletion is permanent - no one can recover already-deleted data, not even after gaining control of both the client device and the cloud server. We validate our design through experimental evaluation.
Keywords: cloud computing; file servers; outsourcing; storage management; already-deleted data; client device; cloud server; cloud systems; data removal; modulation function; outsourced data; third-party server; two-party fine-grained assured deletion; Cryptography; Distributed databases; Modulation; Outsourcing; Radio frequency; Servers (ID#: 15-5663)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6888907&isnumber=6888866
Zhangjie Fu; Xinyue Cao; Jin Wang; Xingming Sun, "Secure Storage of Data in Cloud Computing," Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), 2014 Tenth International Conference on, pp. 783-786, 27-29 Aug. 2014. doi: 10.1109/IIH-MSP.2014.199
Abstract: Cloud storage brings convenient storage of data, but it also brings hidden security issues. Data storage security includes legal access to the data stored in the cloud, namely access authorization, authentication and secure data sharing; encryption of stored data to ensure confidentiality, so that valid cryptographic data remains accessible while deleted cryptographic data becomes inaccessible; tamper-proof technology to ensure the integrity of the data; and tracking technology to ensure data traceability. This paper focuses on file systems with secure data deletion. We design a file system which supports secure deletion of data. It uses CP-ABE, which supports fine-grained access policies, to encrypt files.
Keywords: cloud computing; cryptography; data integrity; storage management; CP-ABE; access authorization; authentication security data sharing; cloud computing; cloud storage; cryptographic data; data confidentiality; data integrity; data storage security; data traceability; file encryption; file systems; fine-grained access policy; legal access; secure data deletion; stored data encryption; tamper-proof technology; Access control; Cloud computing; Encryption; File systems; Secure storage; access control; data integrity; key manage; secure storage of data; tracking technology (ID#: 15-5664)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6998444&isnumber=6998244
Luo Yuchuan; Fu Shaojing; Xu Ming; Wang Dongsheng, "Enable Data Dynamics For Algebraic Signatures Based Remote Data Possession Checking In The Cloud Storage," Communications, China, vol. 11, no. 11, pp. 114-124, Nov. 2014. doi: 10.1109/CC.2014.7004529
Abstract: Cloud storage is one of the main applications of cloud computing. With data services in the cloud, users are able to outsource their data to the cloud, and to access and share the outsourced data from the cloud server anywhere and anytime. However, this new paradigm of data outsourcing services also introduces new security challenges, among which is how to ensure the integrity of the outsourced data. Although cloud storage providers commit to a reliable and secure environment, the integrity of data can still be damaged by human carelessness, hardware/software failures, or attacks from external adversaries. Therefore, it is of great importance for users to audit the integrity of the data they outsource to the cloud. In this paper, we first design an auditing framework for cloud storage and propose an algebraic signature based remote data possession checking protocol, which allows a third party to audit the integrity of the outsourced data on behalf of the users and supports an unlimited number of verifications. We then extend our auditing protocol to support dynamic data operations, including data update, data insertion and data deletion. The analysis and experiment results demonstrate that our proposed schemes are secure and efficient.
Keywords: cloud computing; data integrity; outsourcing; protocols; storage management; algebraic signature based remote data possession checking protocol; auditing framework; auditing protocol; cloud computing; cloud server; cloud storage providers; data deletion; data dynamic operations; data insertion; data outsourcing services; outsourced data integrity; Cloud computing; Data models; Data storage; Galois fields; Protocols; Security; Servers; algebraic signatures; cloud computing; cloud storage; data dynamics; data integrity (ID#: 15-5665)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004529&isnumber=7004513
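The appeal of algebraic signatures for possession checking is their linearity: the signature of a linear combination of blocks equals the same combination of the blocks' signatures, so a verifier who keeps only small signatures can challenge the server over random combinations. The sketch below is a prime-field analogue of the Galois-field signatures used in the paper; the modulus, base and block sizes are arbitrary toy choices of ours.

```python
import random

P = (1 << 61) - 1   # toy prime modulus (our choice, not the paper's field)
G = 3               # base for the signature weights

def sig(values, p=P, g=G):
    # Algebraic-style signature: sum of v_i * g^i (mod p). It is linear:
    # sig(a + c*b) = sig(a) + c*sig(b) (mod p), taken element-wise.
    s, power = 0, 1
    for v in values:
        s = (s + v * power) % p
        power = (power * g) % p
    return s

def combine(blocks, coeffs, p=P):
    # Element-wise random linear combination of equal-length blocks.
    return [sum(c * blk[i] for c, blk in zip(coeffs, blocks)) % p
            for i in range(len(blocks[0]))]

# Client: keep only the tiny signatures, outsource the blocks themselves.
random.seed(7)
blocks = [bytes(random.randrange(256) for _ in range(64)) for _ in range(8)]
stored_sigs = [sig(b) for b in blocks]

# Audit: the verifier sends random coefficients; the server must touch
# the actual data to answer, but the verifier checks with sigs alone.
coeffs = [random.randrange(1 << 32) for _ in blocks]
server_answer = sig(combine(blocks, coeffs))
expected = sum(c * s for c, s in zip(coeffs, stored_sigs)) % P
assert server_answer == expected   # data is (probabilistically) intact
```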
Vanitha, M.; Kavitha, C., "Secured Data Destruction In Cloud Based Multi-Tenant Database Architecture," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-6, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921774
Abstract: Cloud computing falls into two general categories: applications delivered as services, and the hardware and data centers that provide those services [1]. Cloud storage has evolved from a simple storage model to a new service model in which data is managed, maintained, and stored on multiple remote servers for back-up reasons. Cloud platform server clusters run in a network environment, may contain multiple users' data, and the data may be scattered across different virtual data centers. In a multi-user shared cloud computing platform, users are only logically isolated; data of different users may be stored on the same physical equipment. This equipment can be rapidly provisioned, implemented, scaled up or down and decommissioned. Current cloud providers do not give their customers control over, or even knowledge of, the provided resources. The data in the cloud is encrypted at rest, in transit and in back-up in multi-tenant storage, with encryption keys managed per customer. The data lifecycle has several stages: Create, Store, Use, Share, Archive and Destruct. The final stage is often overlooked [2], yet it is the most complex stage of data in the cloud. Data retention assurance may be easy for the cloud provider to demonstrate, while data destruction is extremely difficult. When the SLA between the customer and the cloud provider ends, there is today no assurance that the particular customer's data is completely destroyed in the cloud provider's storage. The proposed method identifies a way to track individual customers' data and their encryption keys and provides a solution to completely delete the data from the cloud provider's multi-tenant storage architecture. It also ensures deletion of data copies, as more than one copy of the data may be maintained for back-up purposes. Proof of data destruction is also provided to the customer, confirming that the owner's data has been completely removed.
Keywords: cloud computing; contracts; database management systems; file organisation; private key cryptography; public key cryptography; SLA; cloud computing; data copy deletion; encryption keys; multitenant database architecture; multitenant storage architecture; secured data destruction; Cloud computing; Computer architecture; Computers; Encryption; Informatics; Public key; attribute based encryption; data retention; encryption; file policy (ID#: 15-5666)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921774&isnumber=6921705
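The tracking-and-deletion idea described here is close in spirit to what practitioners often call crypto-erasure: if each tenant's objects are encrypted under a per-tenant key held in a separate key store, destroying that one key renders every copy of the ciphertexts, including backups, unrecoverable. A minimal sketch follows (our illustration; the class and method names are hypothetical, and the hash-based keystream stands in for a real cipher).

```python
import os, hashlib

def stream_xor(key, data):
    # Toy keystream from SHA-256 counters (stand-in for AES-GCM etc.).
    out, ctr = b"", 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return bytes(d ^ k for d, k in zip(data, out))

class TenantStore:
    def __init__(self):
        self.keys = {}     # tenant -> master key (key management service)
        self.objects = {}  # (tenant, name) -> ciphertext (shared storage)

    def put(self, tenant, name, data):
        master = self.keys.setdefault(tenant, os.urandom(32))
        obj_key = hashlib.sha256(master + name.encode()).digest()
        self.objects[(tenant, name)] = stream_xor(obj_key, data)

    def get(self, tenant, name):
        obj_key = hashlib.sha256(self.keys[tenant] + name.encode()).digest()
        return stream_xor(obj_key, self.objects[(tenant, name)])

    def destruct_tenant(self, tenant):
        # End of SLA: destroying one key "destroys" every object of the
        # tenant and every backup copy, wherever it is physically stored.
        del self.keys[tenant]
```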
Alnemr, R.; Pearson, S.; Leenes, R.; Mhungu, R., "COAT: Cloud Offerings Advisory Tool," Cloud Computing Technology and Science (CloudCom), 2014 IEEE 6th International Conference on, pp. 95-100, 15-18 Dec. 2014. doi: 10.1109/CloudCom.2014.100
Abstract: There is a pressing need to make the differences between cloud offerings more transparent to cloud customers. Examples of properties that vary across cloud service providers (and that are reflected in cloud contracts) include subcontracting, location of data centres, use restriction, applicable law, data backup, encryption, remedies, storage period, monitoring/audits, breach notification, demonstration of compliance, dispute resolution, data portability, law enforcement access and data deletion from servers. In this paper we present our Cloud Offerings Advisory Tool (COAT), which matches user requirements to cloud offers and performs a comparison of these cloud offerings. It makes the non-functional requirements listed above more transparent to cloud customers, offering advice and guidance about the implications and thereby helping the cloud customers choose what is most appropriate.
Keywords: cloud computing; COAT; cloud customers; cloud offerings advisory tool; cloud service providers; nonfunctional requirements; Cloud computing; Contracts; Data privacy; Encryption; Law; Privacy; Accountability; Cloud Computing; Contracts; Legal; Non-functional Requirements; Privacy; Security; Transparency (ID#: 15-5667)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7037653&isnumber=7036227
Corena, J.C.; Basu, A.; Nakano, Y.; Kiyomoto, S.; Miyake, Y., "Data Storage on the Cloud under User Control," Cloud Computing Technology and Science (CloudCom), 2014 IEEE 6th International Conference on, pp. 739-742, 15-18 Dec. 2014. doi: 10.1109/CloudCom.2014.113
Abstract: Cloud services provide advantages in terms of service scalability and availability of users' data, but increase concerns about the control that a user has over her own data. These concerns include not just issues related to access to the information itself, but issues about the effective deletion of the information by the cloud in compliance with the user's right to deletion. In this on-going work, we present a mechanism that allows users to control access to and deletion of their information stored on the cloud. Our construction separates the user's content into several encoded pieces most of which are stored by a cloud provider. The remaining encoded pieces are stored by the user and are served directly from the user's infrastructure to the persons interested in viewing the content. The encoding must satisfy the property that without the pieces stored in the user's infrastructure none of the data is revealed. This property is found in several constructions related to secret sharing. We evaluate the practical feasibility of our proposal by developing an image sharing mechanism and simulating the user infrastructure using a single-board computer connected to the home Internet connection of one of the authors.
Keywords: authorisation; cloud computing; data privacy; storage management; cloud services; data storage; image sharing mechanism; secret sharing; user access control; user data privacy; user infrastructure simulation; Cloud computing; Cryptography; Facebook; Manganese; Proposals; Transforms; cloud; privacy; security; storage (ID#: 15-5668)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7037752&isnumber=7036227
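The property that the cloud-held pieces alone reveal nothing is exactly what the simplest secret-sharing construction, a 2-of-2 XOR split, provides; the paper generalizes this to several encoded pieces with most of the bulk stored at the provider. A minimal sketch of the special case (ours):

```python
import os

def split_for_cloud(data):
    # 2-of-2 XOR secret sharing: the cloud piece alone is uniformly
    # random and reveals nothing; both pieces are needed to reconstruct.
    user_piece = os.urandom(len(data))
    cloud_piece = bytes(d ^ u for d, u in zip(data, user_piece))
    return user_piece, cloud_piece

def reconstruct(user_piece, cloud_piece):
    return bytes(u ^ c for u, c in zip(user_piece, cloud_piece))

photo = b"...image bytes..."
user_piece, cloud_piece = split_for_cloud(photo)
assert reconstruct(user_piece, cloud_piece) == photo
```

Discarding the user-held piece is then an effective exercise of the right to deletion: whatever the provider retains is an unstructured one-time-pad ciphertext.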
Jiansheng Wei; Hong Jiang; Ke Zhou; Dan Feng, "Efficiently Representing Membership for Variable Large Data Sets," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 4, pp. 960-970, April 2014. doi: 10.1109/TPDS.2013.66
Abstract: Cloud computing has raised new challenges for the membership representation scheme of storage systems that manage very large data sets. This paper proposes DBA, a dynamic Bloom filter array aimed at representing membership for variable large data sets in storage systems in a scalable way. DBA consists of dynamically created groups of space-efficient Bloom filters (BFs) to accommodate changes in set sizes. Within a group, BFs are homogeneous and the data layout is optimized at the bit level to enable parallel access and thus achieve high query performance. DBA can effectively control its query accuracy by partially adjusting the error rate of the constructing BFs, where each BF only represents an independent subset to help locate elements and confirm membership. Further, DBA supports element deletion by introducing a lazy update policy. We prototype and evaluate our DBA scheme as a scalable fast index in the MAD2 deduplication storage system. Experimental results reveal that DBA (with 64 BFs per group) shows significantly higher query performance than the state-of-the-art approach while scaling up to 160 BFs. DBA is also shown to excel in scalability, query accuracy, and space efficiency by theoretical analysis and experimental evaluation.
Keywords: cloud computing; data handling; data structures; query processing; BF; MAD2 deduplication storage system; cloud computing; data layout; dynamic Bloom filter; membership representation scheme; query accuracy; query performance; storage systems; variable large data sets; Arrays; Distributed databases; Error analysis; Indexes; Peer-to-peer computing; Random access memory; Servers; Bloom filter; Data management; fast index; membership representation (ID#: 15-5669)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6471979&isnumber=6750096
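For readers unfamiliar with the underlying structure, the sketch below shows a basic Bloom filter and the growth idea behind a dynamic Bloom filter array: since a single Bloom filter cannot be resized without losing its error guarantee, new fixed-capacity filters are appended as the set grows, each representing an independent subset. The bit-level parallel layout, grouping and lazy-deletion policy of DBA are not reproduced, and all parameter values here are arbitrary.

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits=1 << 16, k=4):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)
        self.count = 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)
        self.count += 1

    def __contains__(self, item):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

class DynamicBloomArray:
    """Grow by appending fixed-capacity Bloom filters instead of
    resizing, so each BF's false-positive rate stays bounded."""
    def __init__(self, capacity_per_bf=4000):
        self.cap = capacity_per_bf
        self.bfs = [BloomFilter()]

    def add(self, item):
        if self.bfs[-1].count >= self.cap:
            self.bfs.append(BloomFilter())  # new subset, new BF
        self.bfs[-1].add(item)

    def __contains__(self, item):
        return any(item in bf for bf in self.bfs)

dba = DynamicBloomArray()
for i in range(10000):
    dba.add(f"chunk-{i}")
print("chunk-42" in dba)     # True
print("chunk-99999" in dba)  # almost surely False
```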
Zhou Lei; Zhaoxin Li; Yu Lei; Yanling Bi; Luokai Hu; Wenfeng Shen, "An Improved Image File Storage Method Using Data Deduplication," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 638-643, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.82
Abstract: Recent years have seen a rapid growth in the number of virtual machines and virtual machine images that are managed to support infrastructure as a service (IaaS). For example, Amazon Elastic Compute Cloud (EC2) has 6,521 public virtual machine images. This creates several challenges in the management of image files in a cloud computing environment. In particular, a large amount of duplicate data exists in image files and consumes significant storage space. To address this problem, we propose an effective image file storage technique using data deduplication with a modified fixed-size block scheme. When a user requests to store an image file, this technique first calculates the fingerprint for the image file, and then compares the fingerprint with the fingerprints in a fingerprint library. If the fingerprint of the image is already in the library, a pointer to the existing fingerprint is used to store this image. Otherwise the image is processed using the fixed-size block image segmentation method. We design a metadata format for image files to organize image file blocks and a new MD5 index table of image files to reduce their retrieval time. The experiments show that our technique can significantly reduce the transmission time of image files that already exist in storage. Also, the deletion rate for image groups which have the same operating system version but different versions of software applications is about 58%.
Keywords: cloud computing; image segmentation; meta data; visual databases; data deduplication; fingerprint library; fixed-size block image segmentation method; image file blocks; image file fingerprint; image file storage method; metadata format; modified fixed-size block scheme; transmission time reduction; Educational institutions; Fingerprint recognition; Image storage; Libraries; Operating systems; Servers; Virtual machining; cloud computing; data deduplication; image files (ID#: 15-5670)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011306&isnumber=7011202
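The storage path the authors describe, whole-file fingerprint first and fixed-size blocks only for unseen files, can be sketched in a few lines. This is our simplification (the names are hypothetical); the paper additionally defines a metadata format for organizing blocks and an MD5 index table to cut retrieval time. MD5 appears below only because it is the paper's index choice; a collision-resistant hash such as SHA-256 would be preferable in practice.

```python
import hashlib

BLOCK = 4096  # fixed-size blocks, as in the paper's modified scheme

class DedupImageStore:
    def __init__(self):
        self.file_index = {}    # md5(file) -> list of block fingerprints
        self.block_store = {}   # md5(block) -> block bytes

    def put(self, image_bytes):
        fp = hashlib.md5(image_bytes).hexdigest()
        if fp in self.file_index:       # duplicate image: store a pointer
            return fp
        block_fps = []
        for off in range(0, len(image_bytes), BLOCK):
            blk = image_bytes[off:off + BLOCK]
            bfp = hashlib.md5(blk).hexdigest()
            self.block_store.setdefault(bfp, blk)  # store each block once
            block_fps.append(bfp)
        self.file_index[fp] = block_fps
        return fp

    def get(self, fp):
        return b"".join(self.block_store[b] for b in self.file_index[fp])

store = DedupImageStore()
img = b"OS" * 4096 + b"app-v1" * 100
fp1 = store.put(img)
fp2 = store.put(img)                 # second upload stores nothing new
assert fp1 == fp2 and store.get(fp1) == img
```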
Zhangjie Fu; Lin Xin; Jin Wang; Xingming Sun, "Data Access Control for Multi-authority Cloud Storage Systems," Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), 2014 Tenth International Conference on, pp. 714-717, 27-29 Aug. 2014. doi: 10.1109/IIH-MSP.2014.184
Abstract: Ciphertext-Policy Attribute-based Encryption (CP-ABE) is one of the most suitable technologies for data access control in cloud storage systems. This paper first presents Attribute-based Encryption (ABE), secure deletion and secret-sharing schemes. Then we construct a CP-ABE model using secret-sharing methods to ensure its security. Finally, we propose an improved scheme on Data Access Control for Multi-Authority Cloud Storage Systems (DAC-MACS) to ensure the security of the central authority (CA).
Keywords: authorisation; cloud computing; cryptography; storage management; CP-ABE model; DAC-MACS; central authority; ciphertext-policy attribute-based encryption; data access control for multiauthority cloud storage systems; secret-sharing methods; secret-sharing schemes; secure deletion schemes; Access control; Cloud computing; Computers; Encryption; Sun; Access control; CP-ABE; Secret-sharing; Secure deletion (ID#: 15-5671)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6998429&isnumber=6998244
Jinbo Xiong; Ximeng Liu; Zhiqiang Yao; Jianfeng Ma; Qi Li; Kui Geng; Chen, P.S., "A Secure Data Self-Destructing Scheme in Cloud Computing," Cloud Computing, IEEE Transactions on, vol. 2, no. 4, pp. 448-458, Oct.-Dec. 2014. doi: 10.1109/TCC.2014.2372758
Abstract: With the rapid development of versatile cloud services, it has become increasingly common to use cloud services to share data within a friend circle in the cloud computing environment. Since it is not feasible to implement full lifecycle privacy security, access control becomes a challenging task, especially when we share sensitive data on cloud servers. In order to tackle this problem, we propose a key-policy attribute-based encryption with time-specified attributes (KP-TSABE), a novel secure data self-destructing scheme in cloud computing. In the KP-TSABE scheme, every ciphertext is labeled with a time interval while the private key is associated with a time instant. The ciphertext can only be decrypted if both the time instant is in the allowed time interval and the attributes associated with the ciphertext satisfy the key's access structure. KP-TSABE is able to solve some important security problems by supporting user-defined authorization periods and by providing fine-grained access control during the period. The sensitive data will be securely self-destructed after a user-specified expiration time. The KP-TSABE scheme is proved to be secure under the decision l-bilinear Diffie-Hellman inversion (l-Expanded BDHI) assumption. Comprehensive comparisons of the security properties indicate that the KP-TSABE scheme proposed by us satisfies the security requirements and is superior to other existing schemes.
Keywords: authorisation; cloud computing; data privacy; inverse problems; public key cryptography; access control; cloud computing environment; data self-destructing scheme security; decision l-bilinear Diffie-Hellman inversion; key-policy attribute-based encryption with time-specified attribute KP-TSABE; l-expanded BDHI assumption; lifecycle privacy security; user-defined authorization period; Authorization; Cloud computing; Computer security; Data privacy; Encryption; Sensitive data; assured deletion; cloud computing; fine-grained access control; privacy-preserving; secure self-destructing (ID#: 15-5672)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6963363&isnumber=7024233
Rui Wang; Qimin Peng; Xiaohui Hu, "A Hypergraph-Based Service Dependency Model For Cloud Services," Multisensor Fusion and Information Integration for Intelligent Systems (MFI), 2014 International Conference on, pp. 1-6, 28-29 Sept. 2014. doi: 10.1109/MFI.2014.6997658
Abstract: Cloud computing is known as a new computing paradigm that utilizes existing cloud services as fundamental elements for developing distributed applications in the so-called "use, not own" manner. A dependency is a relation between services wherein a change to one of the services implies a potential change to the others. In this paper, services are classified into three layers in accordance with different business requirements. Services exist in different areas of the static domain, while user applications exist in the dynamic domain. User applications are implemented by choosing services in the business layer and application layer. A hypergraph-based service model is used to represent the architecture of multi-tenancy applications. Using the properties of hypergraphs, we can address service addition, deletion, replacement, migration and other problems. This model can implement extensible software architecture and assure the adaptive evolution of large-scale complex software systems.
Keywords: business data processing; cloud computing; software architecture; adaptive evolution; application layer; business layer; business requirements; cloud computing paradigm; cloud services; dynamic domain; extensible software architecture; hypergraph-based service dependency model; large scale complex software systems; multitenancy applications; static domain; user applications; Adaptation models; Business; Computational modeling; Computer architecture; Software architecture; Software as a service; Cloud Computing; hypergraph-based service model; software architecture (ID#: 15-5673)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6997658&isnumber=6997624
Higai, A.; Takefusa, A.; Nakada, H.; Oguchi, M., "A Study of Effective Replica Reconstruction Schemes at Node Deletion for HDFS," Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on, pp. 512-521, 26-29 May 2014. doi: 10.1109/CCGrid.2014.31
Abstract: Distributed file systems, which manage large amounts of data over multiple commercially available machines, have attracted attention as a management and processing system for big data applications. A distributed file system consists of multiple data nodes and provides reliability and availability by holding multiple replicas of data. Due to system failure or maintenance, a data node may be removed from the system and the data blocks the removed data node held are lost. If data blocks are missing, the access load of the other data nodes that hold the lost data blocks increases, and as a result the performance of data processing over the distributed file system decreases. Therefore, replica reconstruction is an important issue to reallocate the missing data blocks in order to prevent such performance degradation. The Hadoop Distributed File System (HDFS) is a widely used distributed file system. In the HDFS replica reconstruction process, source and destination data nodes for replication are selected randomly. We found that this replica reconstruction scheme is inefficient because data transfer is biased. Therefore, we propose two more effective replica reconstruction schemes that aim to balance the workloads of replication processes. Our proposed replication scheduling strategy assumes that nodes are arranged in a ring and data blocks are transferred based on this one-directional ring structure to minimize the difference of the amount of transfer data of each node. Based on this strategy, we propose two replica reconstruction schemes, an optimization scheme and a heuristic scheme. We have implemented the proposed schemes in HDFS and evaluated them on an actual HDFS cluster. From the experiments, we confirm that the replica reconstruction throughput of the proposed schemes shows a 45% improvement compared to that of the default scheme. We also verify that the heuristic scheme is effective because it shows performance comparable to the optimization scheme and can be more scalable than the optimization scheme.
Keywords: Big Data; file organisation; optimisation; HDFS; Hadoop distributed file system; access load; data transfer; heuristic scheme; node deletion; optimization scheme; replica reconstruction scheme; replication scheduling strategy; Availability; Big data; Data transfer; Distributed databases; Optimization; Structural rings; Throughput; HDFS; distributed file system; heuristic; optimization; reconstruction; replica (ID#: 15-5674)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846487&isnumber=6846423
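The heart of the proposal is the one-directional ring: each reconstruction sends a block from a surviving holder to that holder's successor on the ring, and sources are chosen so that per-node transfer amounts stay balanced. The following is a rough heuristic sketch of that assignment idea only (ours; the paper's optimization and heuristic schemes, and their HDFS integration, are considerably more involved, and this toy ignores the constraint that a destination should not already hold the block).

```python
def ring_assign(lost_blocks, nodes):
    # lost_blocks: block id -> list of surviving nodes holding a replica
    # nodes: surviving data nodes in ring order
    succ = {n: nodes[(i + 1) % len(nodes)] for i, n in enumerate(nodes)}
    sent = dict.fromkeys(nodes, 0)   # blocks sent per node
    recv = dict.fromkeys(nodes, 0)   # blocks received per node
    plan = {}
    for blk, holders in lost_blocks.items():
        # pick the holder whose send load plus its successor's receive
        # load is currently lightest, keeping transfers balanced
        src = min(holders, key=lambda s: sent[s] + recv[succ[s]])
        plan[blk] = (src, succ[src])
        sent[src] += 1
        recv[succ[src]] += 1
    return plan

nodes = ["dn1", "dn2", "dn3", "dn4"]
lost = {"blk_a": ["dn1", "dn3"], "blk_b": ["dn2"], "blk_c": ["dn1", "dn4"]}
print(ring_assign(lost, nodes))
```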
Yusoh, Z.I.M.; Maolin Tang, "Composite SaaS Scaling In Cloud Computing Using A Hybrid Genetic Algorithm," Evolutionary Computation (CEC), 2014 IEEE Congress on, pp. 1609-1616, 6-11 July 2014. doi: 10.1109/CEC.2014.6900614
Abstract: A Software-as-a-Service or SaaS can be delivered in a composite form, consisting of a set of application and data components that work together to deliver higher-level functional software. Components in a composite SaaS may need to be scaled, that is, replicated or deleted, to accommodate the user's load. It may not be necessary to replicate all components of the SaaS, as some components can be shared by other instances. On the other hand, when the load is low, some of the instances may need to be deleted to avoid resource underutilisation. Thus, it is important to determine which components are to be scaled such that the performance of the SaaS is still maintained. Extensive research on SaaS resource management in the Cloud has not yet addressed the challenges of the scaling process for composite SaaS. Therefore, a hybrid genetic algorithm is proposed which utilises knowledge of the problem and explores the best combination of scaling plans for the components. Experimental results demonstrate that the proposed algorithm outperforms existing heuristic-based solutions.
Keywords: cloud computing; genetic algorithms; resource allocation; SaaS resource management; application component; cloud computing; composite SaaS component deletion; composite SaaS component replication; composite SaaS component scaling; data component; higher-level functional software; hybrid genetic algorithm; resource underutilisation avoidance; software-as-a-service; user load; Biological cells; Scalability; Servers; Sociology; Software as a service; Statistics; Time factors; Cloud Computing; Clustering; Composite SaaS; Grouping Genetic Algorithm (ID#: 15-5675)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900614&isnumber=6900223
Abdulsalam, S.; Lakomski, D.; Qijun Gu; Tongdan Jin; Ziliang Zong, "Program Energy Efficiency: The Impact Of Language, Compiler And Implementation Choices," Green Computing Conference (IGCC), 2014 International, pp. 1-6, 3-5 Nov. 2014. doi: 10.1109/IGCC.2014.7039169
Abstract: Today, reducing the energy usage of computing systems has become a paramount task, whether for lightweight mobile devices, complex cloud computing platforms or large-scale supercomputers. Many existing studies in green computing focus on making the hardware more energy efficient. This is understandable, because software running on low-power hardware will automatically consume less energy. Little work has been done to explore how software developers can play a more proactive role in saving energy by writing greener code. In fact, very few programmers consider energy efficiency when writing code, and even fewer know how to evaluate and improve the energy efficiency of their code. In this paper, we quantitatively study the impact of languages (C/C++/Java/Python), compiler optimization (GNU C/C++ compiler with O1, O2, and O3 flags) and implementation choices (e.g. using malloc instead of new to create dynamic arrays and using vector vs. array for Quicksort) on the energy efficiency of three well-known programs: Fast Fourier Transform, Linked List Insertion/Deletion and Quicksort. Our experiments show that by carefully selecting an appropriate language, optimization flag and data structure, significant energy can be conserved for solving the same problem with identical input size.
Keywords: data structures; fast Fourier transforms; green computing; power aware computing; program compilers; programming languages; sorting; Quicksort; code energy-efficiency; compiler choices; compiler optimization; complex cloud computing platforms; computing system energy usage reduction; data structure; dynamic arrays; fast Fourier transform; green computing; greener code writing; implementation choices; language choices; large-scale supercomputers; light-weight mobile devices; linked list insertion-deletion; optimization flag; program energy efficiency; software developers; Arrays; Java; Libraries; Optimization; Resource management; Software; Vectors; energy-efficient programming; green computing; software optimization (ID#: 15-5676)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7039169&isnumber=7039139
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Dynamical Systems, 2014 |
Research into dynamical systems cited here focuses on non-linear and chaotic dynamical systems and on proving abstractions of dynamical systems through numerical simulations. Many of the applications studied are cyber-physical systems. These works were presented in 2014 and are selected because of their specific relevance to security issues.
Pradhan, P.; Venkitasubramaniam, P., "Under The Radar Attacks In Dynamical Systems: Adversarial Privacy Utility Tradeoffs," Information Theory Workshop (ITW), 2014 IEEE, pp. 242-246, 2-5 Nov. 2014. doi: 10.1109/ITW.2014.6970829
Abstract: Cyber physical systems which integrate physical system dynamics with digital cyber infrastructure are envisioned to transform our core infrastructural frameworks such as the smart electricity grid, transportation networks and advanced manufacturing. This integration, however, exposes the physical system functioning to the security vulnerabilities of cyber communication. Both scientific studies and real world examples have demonstrated the impact of data injection attacks on state estimation mechanisms in the smart electricity grid. In this work, an abstract theoretical framework is proposed to study data injection/modification attacks on Markov modeled dynamical systems from the perspective of an adversary. Typical studies of data injection focus on one-shot attacks by the adversary and the non-detectability of such attacks under static assumptions. In this work we study dynamic data injection attacks where the adversary is capable of modifying a temporal sequence of data and the physical controller is equipped with prior statistical knowledge about the data arrival process to detect the presence of an adversary. The goal of the adversary is to modify the arrivals to minimize a utility function of the controller while minimizing the detectability of his presence as measured by the KL divergence between the prior and posterior distribution of the arriving data. Adversarial policies and tradeoffs between utility and detectability are characterized analytically using linearly solvable control optimization.
Keywords: Markov processes; radar; telecommunication security; Markov modeled dynamical systems; advanced manufacturing; adversarial privacy utility tradeoffs; core infrastructural frameworks; cyber communication; cyber physical systems; data arrival process; data injection attacks; digital cyber infrastructure; dynamic data injection attacks; dynamical systems; physical system dynamics; radar attacks; security vulnerabilities; smart electricity grid; state estimation mechanisms; temporal sequence; transportation networks; Markov processes; Mathematical model; Power system dynamics; Privacy; Process control; Smart grids; State estimation (ID#: 15-5178)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970829&isnumber=6970773
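The detectability side of this tradeoff is measured by the Kullback-Leibler divergence between the distributions of the arriving data before and after the adversary's modification. A small worked computation, with made-up distributions, shows why gentle modifications stay "under the radar":

```python
import math

def kl_divergence(p, q):
    # D(P || Q) = sum_x P(x) * log(P(x) / Q(x)), in nats.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical arrival distributions over 3 symbols: the true process P
# and two adversarially modified processes. A small perturbation gives
# a small divergence (hard to detect); a large one is easily detected.
p             = [0.50, 0.30, 0.20]
mild_attack   = [0.48, 0.32, 0.20]
brazen_attack = [0.10, 0.10, 0.80]
print(kl_divergence(p, mild_attack))    # ~0.001 nats
print(kl_divergence(p, brazen_attack))  # ~0.86 nats
```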
Senouci, A.; Busawon, K.; Bouridane, A.; Binns, R.; Ouslimani, A., "A Robust Chaotic Communication Scheme For A Class Of Dynamical Systems," Control Applications (CCA), 2014 IEEE Conference on, pp. 1178-1183, 8-10 Oct. 2014. doi: 10.1109/CCA.2014.6981488
Abstract: This paper proposes an observer-based approach for robust chaotic communication with high power plaintext signals. The convergence rate of synchronization can be assigned by appropriately selecting the observer gains. The proposed scheme is carefully designed so that the encrypted signal does not deteriorate the synchronization. The proposed method significantly improves the frequency-domain characteristics of the transmitted secret message. This has the effect of preventing the extraction of the secret message using filtering techniques; hence improved security. Computer simulations show that the synchronization between the transmitter and the receiver is robust for different amplitude values of the information signal even in the presence of external disturbances. The synchronization between the transmitter and the receiver is maintained and the message signal is exactly recovered even for the various types of waveforms (square, trapezoidal, sinusoidal) of plaintext message and while varying the duty cycle and the rising and falling times of the signal.
Keywords: chaotic communication; filtering theory; synchronisation; telecommunication security; chaotic communication scheme; filtering techniques; high power plaintext signals; message signal; observer-based approach; plaintext message; secret message; Chaotic communication; Cryptography; Generators; Receivers; Synchronization; Transmitters (ID#: 15-5179)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6981488&isnumber=6981319
Ping Zhen; Geng Zhao; Lequan Min; Xiaodong Li, "A Survey of Chaos-Based Cryptography," P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), 2014 Ninth International Conference on, pp. 237-244, 8-10 Nov. 2014. doi: 10.1109/3PGCIC.2014.69
Abstract: As an offshoot of dynamical systems, chaos is highly sensitive to initial conditions and exhibits seemingly random behavior. From the perspective of cryptography and information security, randomness generated from entirely deterministic systems is a very appealing property. Chaotic cryptography has experienced significant developments since its birth in the early 90's. During these years, numerous research achievements have been obtained. This paper will present an overview about chaos-based cryptography and relevant progress that covers the main techniques used in this field.
Keywords: chaos; cryptography; chaos-based cryptography; deterministic systems; information security; Chaotic communication; Ciphers; Encryption; Public key cryptography; Chaos; Cryptography; Dynamical System; Information security (ID#: 15-5180)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024589&isnumber=7024297
Khan, M.S.; Ferens, K.; Kinsner, W., "A Chaotic Measure For Cognitive Machine Classification Of Distributed Denial Of Service Attacks," Cognitive Informatics & Cognitive Computing (ICCI*CC), 2014 IEEE 13th International Conference on, pp. 100-108, 18-20 Aug. 2014. doi: 10.1109/ICCI-CC.2014.6921448
Abstract: Today's evolving cyber security threats demand new, modern, and cognitive computing approaches to network security systems. In the early years of the Internet, a simple packet inspection firewall was adequate to stop the then-contemporary attacks, such as Denial of Service (DoS), port scans, and phishing. Since then, DoS has evolved to include Distributed Denial of Service (DDoS) attacks, especially against the Domain Name Service (DNS). DNS based DDoS amplification attacks cannot be stopped easily by traditional signature based detection mechanisms because the attack packets contain authentic data, and signature based detection systems look for specific attack-byte patterns. This paper proposes a chaos based complexity measure and a cognitive machine classification algorithm to detect DNS DDoS amplification attacks. In particular, this paper computes the Lyapunov exponent to measure the complexity of a flow of packets, and classifies the traffic as either normal or anomalous, based on the magnitude of the computed exponent. Preliminary results show the proposed chaotic measure achieved a detection (classification) accuracy of about 66%, which is greater than that reported in the literature. This approach is capable of not only detecting offline threats, but has the potential of being applied over live traffic flows using DNS filters.
Keywords: Internet; firewalls; pattern classification; DNS DDoS amplification attacks; DNS filters; Internet; attack-byte patterns; chaos based complexity measure; classification accuracy; cognitive computing approach; cognitive machine classification algorithm; cyber security threats; distributed denial-of-service attacks; domain name service; network security systems; signature based detection mechanisms; simple packet inspection firewall; Chaos; Classification algorithms; Computer crime; Internet; Mathematical model; Nonlinear dynamical systems; Time series analysis; Anomaly Detection; Chaos; Cognitive Machine Learning; Cyber threats; DDoS Amplification; DNS; Data traffic; Fractal; Internet; Lyapunov exponent (ID#: 15-5181)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921448&isnumber=6921429
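The classification signal here is the largest Lyapunov exponent: positive values indicate chaotic, sensitive dynamics. The sketch below estimates it for the logistic map, where the true value at r = 4 is ln 2 (about 0.693), as the orbit average of log|f'(x)|. Applying such an estimator to windows of packet-flow features and thresholding on its magnitude is, in outline, what the paper does; its traffic pipeline is not reproduced here.

```python
import math

def lyapunov_logistic(r, x0=0.4, n=100_000, burn=1000):
    # Largest Lyapunov exponent of the logistic map x -> r*x*(1-x),
    # estimated as the long-run average of log|f'(x)| along the orbit,
    # where f'(x) = r*(1 - 2x).
    x = x0
    for _ in range(burn):          # discard the transient
        x = r * x * (1 - x)
    acc = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        acc += math.log(abs(r * (1 - 2 * x)))
    return acc / n

print(lyapunov_logistic(3.2))  # negative: periodic orbit ("normal")
print(lyapunov_logistic(4.0))  # ~0.693 = ln 2 > 0: chaotic ("anomalous")
```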
Chuan Luo; Xiaolong Zheng; Zeng, D., "Causal Inference in Social Media Using Convergent Cross Mapping," Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint, pp. 260-263, 24-26 Sept. 2014. doi: 10.1109/JISIC.2014.50
Abstract: Revealing underlying causal structure in social media is critical to understanding how users interact, on which a lot of security intelligence applications can be built. Existing causal inference methods for social media usually rely on limited explicit causal context, pre-assume certain user interaction models, or neglect the nonlinear nature of social interaction, which could lead to biased estimations of causality. Inspired by recent advances in causality detection in complex ecosystems, we propose to take advantage of a novel nonlinear state space reconstruction based approach, namely Convergent Cross Mapping, to perform causal inference in social media. Experimental results on real world social media datasets show the effectiveness of the proposed method in causal inference and user behavior prediction in social media.
Keywords: causality; inference mechanisms; social networking (online); state-space methods; causal inference methods; causality detection; convergent cross mapping; nonlinear state space reconstruction; social media; user behavior prediction; Manifolds; Media; Nonlinear dynamical systems; Security; Time series analysis; Twitter; causal inference; nonlinear dynamic system; social media; user influence (ID#: 15-5182)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975587&isnumber=6975536
Zamfir, M.D., "Discrete Time Feedback Linearization Method For Dynamical Adaptive Control Of Electronic Throttle," Electrical and Power Engineering (EPE), 2014 International Conference and Exposition on, pp. 105-109, 16-18 Oct. 2014. doi: 10.1109/ICEPE.2014.6969877
Abstract: The throttle system is an electromechanical system represented by a nonlinear dynamical model. Some nonlinearities exist in the throttle's construction; the most significant is a dead-zone, which has its origin in a security position of the throttle. There are other nonlinearities, e.g. Coulomb friction and nonlinearities related to wear. The work is focused on the control of the position of the throttle. Discrete-time designs of electronic throttle control systems are required for implementation on on-board computers for electric traction. This paper presents a nonlinear, discrete-time control system design method: the discrete-time feedback linearization method. First, the system dynamics are transformed into a linear, time-invariant form. Next, pole placement control techniques are applied to the transformed control problem. In the presence of parametric variations and perturbations, the linearized system is modified. We then propose to use adaptive dynamical control to minimize the consequences of these parametric variations. The resulting control scheme is applied to the original system and its effectiveness is evaluated by simulation tests.
Keywords: adaptive control; automobiles; control nonlinearities; control system synthesis; discrete time systems; feedback; linearisation techniques; nonlinear dynamical systems; dead-zone nonlinearity; discrete time feedback linearization method; discrete-time design; dynamical adaptive control; electromechanical system; electronic throttle control system; nonlinear dynamical model; parametric variation; pole placement control techniques; Adaptation models; Adaptive control; Atmospheric modeling; Control systems; Linear systems; Mathematical model; Valves; dynamic control; discrete-time feedback linearization; electronic throttle control; nonlinear system (ID#: 15-5183)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6969877&isnumber=6969853
Fei Sun; Ozay, N.; Wolff, E.M.; Liu, J.; Murray, R.M., "Efficient Control Synthesis For Augmented Finite Transition Systems With An Application To Switching Protocols," American Control Conference (ACC), 2014, pp. 3273-3280, 4-6 June 2014. doi: 10.1109/ACC.2014.6859428
Abstract: Augmented finite transition systems generalize nondeterministic transition systems with additional liveness conditions. We propose efficient algorithms for synthesizing control protocols for augmented finite transition systems to satisfy high-level specifications expressed in a fragment of linear temporal logic (LTL). We then use these algorithms within a framework for switching protocol synthesis for discrete-time dynamical systems, where augmented finite transition systems are used for abstracting the underlying dynamics. We introduce a notion of minimality for abstractions of certain fidelity and show that such minimal abstractions can be exactly computed for switched affine systems. Additionally, based on this framework, we present a procedure for computing digitally implementable switching protocols for continuous-time systems. The effectiveness of the proposed framework is illustrated through two examples of temperature control for buildings.
Keywords: continuous time systems; control system synthesis; discrete time systems; temporal logic; time-varying systems; augmented finite transition system; continuous-time system; control synthesis; discrete-time dynamical system; discrete-time switched system; high-level specifications; linear temporal logic; nondeterministic transition system; switched affine system; switching protocol synthesis; switching protocols; temperature control; Complexity theory; Heuristic algorithms; Protocols; Switched systems; Switches; Transient analysis; Automata; Hierarchical control; Switched systems (ID#: 15-5184)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859428&isnumber=6858556
Do, V.L.; Fillatre, L.; Nikiforov, I., "A Statistical Method For Detecting Cyber/Physical Attacks On SCADA Systems," Control Applications (CCA), 2014 IEEE Conference on, pp. 364-369, 8-10 Oct. 2014. doi: 10.1109/CCA.2014.6981373
Abstract: This paper addresses the problem of detecting cyber/physical attacks on Supervisory Control And Data Acquisition (SCADA) systems. The detection of cyber/physical attacks is formulated as the problem of detecting transient changes in stochastic-dynamical systems in the presence of unknown system states (often regarded as the nuisance parameter). The Variable Threshold Window Limited CUmulative SUM (VTWL CUSUM) test is adapted to the detection of transient changes of known profiles in the presence of the nuisance parameter. Taking into account the performance criterion of the transient change detection problem, which minimizes the worst-case probability of missed detection for a given value of the worst-case probability of false alarm, the thresholds are tuned for optimizing the VTWL CUSUM algorithm. The optimal choice of thresholds leads to the simple Finite Moving Average (FMA) algorithm. The proposed algorithms are utilized for detecting a covert attack on a simple water distribution system, aimed at stealing water from the reservoir without being detected.
Keywords: SCADA systems; fault diagnosis; moving average processes; probability; security of data; statistical analysis; stochastic systems; transient response; FMA algorithm; SCADA systems; VTWL CUSUM algorithm; VTWL CUSUM test; cyber-physical attack detection; finite moving average algorithm; nuisance parameter; reservoir water stealing; statistical method; stochastic-dynamical systems; supervisory control and data acquisition systems; transient change detection problem; variable threshold window limited cumulative sum test; water distribution system; worst-case probability; Pressure measurement; Reservoirs; SCADA systems; Time measurement; Transient analysis; Vectors; SCADA systems; cyber attacks; fault diagnosis; transient change detection (ID#: 15-5185)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6981373&isnumber=6981319
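The two detectors at the core of the paper are easy to state: a one-sided CUSUM accumulates evidence of an upward mean shift, and the Finite Moving Average test alarms when the mean of a recent window crosses a threshold (the paper shows the optimally tuned VTWL CUSUM reduces to the latter). A toy sketch with Gaussian residuals standing in for SCADA measurements, and parameter values of our choosing:

```python
import random

def cusum(samples, target_mean, drift, threshold):
    # One-sided CUSUM: g_t = max(0, g_{t-1} + (x_t - mean - drift));
    # alarm when g_t crosses the threshold.
    g = 0.0
    for t, x in enumerate(samples):
        g = max(0.0, g + (x - target_mean - drift))
        if g > threshold:
            return t
    return None

def fma(samples, window, threshold):
    # Finite Moving Average: alarm when the mean of the last `window`
    # residuals exceeds the threshold.
    for t in range(window, len(samples) + 1):
        if sum(samples[t - window:t]) / window > threshold:
            return t - 1
    return None

random.seed(1)
trace = [random.gauss(0.0, 1.0) for _ in range(200)]   # normal operation
trace += [random.gauss(1.5, 1.0) for _ in range(50)]   # covert offtake from t=200
print("CUSUM alarm at t =", cusum(trace, 0.0, 0.5, 8.0))
print("FMA alarm at t =", fma(trace, 20, 1.0))
```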
Jae-Seung Yeom; Bhatele, A.; Bisset, K.; Bohm, E.; Gupta, A.; Kale, L.V.; Marathe, M.; Nikolopoulos, D.S.; Schulz, M.; Wesolowski, L., "Overcoming the Scalability Challenges of Epidemic Simulations on Blue Waters," Parallel and Distributed Processing Symposium, 2014 IEEE 28th International, pp. 755-764, 19-23 May 2014. doi: 10.1109/IPDPS.2014.83
Abstract: Modeling dynamical systems represents an important application class covering a wide range of disciplines including but not limited to biology, chemistry, finance, national security, and health care. Such applications typically involve large-scale, irregular graph processing, which makes them difficult to scale due to the evolutionary nature of their workload, irregular communication and load imbalance. EpiSimdemics is such an application simulating epidemic diffusion in extremely large and realistic social contact networks. It implements a graph-based system that captures dynamics among co-evolving entities. This paper presents an implementation of EpiSimdemics in Charm++ that enables future research by social, biological and computational scientists at unprecedented data and system scales. We present new methods for application-specific processing of graph data and demonstrate the effectiveness of these methods on a Cray XE6, specifically NCSA's Blue Waters system.
Keywords: discrete event simulation; diseases; graph theory; medical computing; multiprocessing systems; Charm++ programming model; Cray XE6; EpiSimdemics; NCSA blue water system; biology; chemistry; discrete-event simulation approach; dynamical system modelling; epidemic diffusion simulation; epidemic simulations; finance; graph data application-specific processing; graph-based system; health care; irregular communication; irregular graph processing; load imbalance; national security; social contact networks; Computational modeling; Educational institutions; Load management; Load modeling; Scalability; Sociology; Statistics; contagion simulations; graph processing; performance; scalability; social contact networks (ID#: 15-5186)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6877307&isnumber=6877223
Babaie, T.; Chawla, S.; Ardon, S.; Yue Yu, "A Unified Approach to Network Anomaly Detection," Big Data (Big Data), 2014 IEEE International Conference on, pp. 650-655, 27-30 Oct. 2014. doi: 10.1109/BigData.2014.7004288
Abstract: This paper presents a unified approach for the detection of network anomalies. Current state of the art methods are often able to detect one class of anomalies at the cost of others. Our approach is based on using a Linear Dynamical System (LDS) to model network traffic. An LDS is equivalent to Hidden Markov Model (HMM) for continuous-valued data and can be computed using incremental methods to manage high-throughput (volume) and velocity that characterizes Big Data. Detailed experiments on synthetic and real network traces shows a significant improvement in detection capability over competing approaches. In the process we also address the issue of robustness of network anomaly detection systems in a principled fashion.
Keywords: Big Data; computer network security; hidden Markov models; Big Data; HMM; LDS; continuous-valued data; hidden Markov model; linear dynamical system; network anomaly detection; network traffic; Computer crime; Correlation; Hidden Markov models; IP networks; Kalman filters; Ports (Computers); Robustness (ID#: 15-5187)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004288&isnumber=7004197
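The LDS approach can be pictured through its one-dimensional special case: track the series with a Kalman filter and score each observation by its normalized innovation. The paper's contribution, incrementally learning a full multivariate LDS over high-volume traffic, is beyond this sketch; every parameter below is invented.

```python
import random

def kalman_anomalies(obs, q=1e-3, r=1.0, nis_threshold=9.0):
    # Scalar random-walk state tracked by a Kalman filter; each point
    # is scored by its normalized innovation squared (NIS) and flagged
    # when the NIS exceeds the threshold.
    x, p = obs[0], 1.0           # state estimate and its variance
    flags = []
    for z in obs[1:]:
        p = p + q                # predict (state transition = identity)
        s = p + r                # innovation variance
        nu = z - x               # innovation (residual)
        flags.append(nu * nu / s > nis_threshold)
        k = p / s                # Kalman gain
        x = x + k * nu           # update
        p = (1 - k) * p
    return flags

random.seed(0)
traffic = [50 + random.gauss(0, 1) for _ in range(300)]
traffic[150] += 12               # injected volume spike
flags = kalman_anomalies(traffic)
print([i + 1 for i, f in enumerate(flags) if f])   # ~[150]
```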
Zamani, S.; Javanmard, M.; Jafarzadeh, N.; Zamani, M., "A Novel Image Encryption Scheme Based On Hyper Chaotic Systems And Fuzzy Cellular Automata," Electrical Engineering (ICEE), 2014 22nd Iranian Conference on, pp. 1136-1141, 20-22 May 2014. doi: 10.1109/IranianCEE.2014.6999706
Abstract: A new image encryption scheme based on a hyper chaotic system and Fuzzy Cellular Automata is proposed in this paper. Hyper chaotic systems have more complex dynamical characteristics than chaotic systems, which makes them a better choice for secure image encryption schemes. Four hyper chaotic systems are used to improve the security and speed of the algorithm in this approach. First, the image is divided into four sub-images. Each of these sub-images has its own hyper chaotic system. In the shuffling phase, pixels in two adjacent sub-images are selected to exchange positions based upon the numbers generated by their hyper chaotic systems. Five 1D non-uniform Fuzzy Cellular Automata are used in the encryption phase. The rule used to encrypt a cell is selected based upon the cell's right neighbor. By utilizing two different encryption methods for odd and even cells, the problem of being limited to recursive rules in the rule-selection process in these FCAs is solved. The results of implementing this scheme on images from the USC-SIPI database show that our method has high security and advantages such as confusion and diffusion, and is sensitive to small changes in the key.
Keywords: cellular automata; cryptography; fuzzy set theory; image coding; 1D nonuniform fuzzy cellular automata; FCA; dynamical characteristic; hyperchaotic system; image encryption; rule selecting process; shuffling phase; Automata; Chaos; Correlation; Encryption; Entropy; FCA; Hyper Chaotic System; Image encryption; Lorenz System; Non-uniform Cellular Automata (ID#: 15-5188)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6999706&isnumber=6999486
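The shuffling phase amounts to a key-dependent permutation of pixel positions driven by a chaotic orbit. The toy below uses a 1-D logistic map where the paper uses four hyperchaotic systems (one per sub-image), and it omits the FCA encryption phase entirely; sorting positions by their chaotic value is one common way of turning an orbit into a permutation.

```python
def logistic_orbit(x0, r, n):
    # Stand-in chaotic sequence (the paper uses 4-D hyperchaotic systems).
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

def chaotic_permutation(n, key_x0, r=3.99):
    # Sort positions by their chaotic value: a key-dependent shuffle.
    orbit = logistic_orbit(key_x0, r, n)
    return sorted(range(n), key=lambda i: orbit[i])

def shuffle(pixels, perm):
    return [pixels[p] for p in perm]

def unshuffle(pixels, perm):
    out = [0] * len(pixels)
    for dst, src in enumerate(perm):
        out[src] = pixels[dst]
    return out

pixels = list(range(16))                 # toy 4x4 "image"
perm = chaotic_permutation(len(pixels), key_x0=0.123456)
assert unshuffle(shuffle(pixels, perm), perm) == pixels
```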
Lin Pan; Voos, H.; N'Doye, I.; Darouach, M., "Group Synchronization And Control Of A New Class Of Adaptive Complex Network With Brownian Motion And Time-Varying Delay," Control Conference (ECC), 2014 European, pp. 1771-1776, 24-27 June 2014. doi: 10.1109/ECC.2014.6862264
Abstract: In this paper, the Group Mean Square Synchronization (GMSS) of a new class of adaptive Complex Dynamical Network (CDN) with Limited Stochastic Perturbation (LSP) and interval Time-Varying Delays (TVD) is investigated. The zero-mean real scalar Brownian Motion (BM) with LSP is also discussed. In this CDN, the weight configuration matrices are considered in two cases: TVD and Non-Time-Varying Delay (NTVD). The outer-coupling matrices are also considered in two cases: symmetric and dissymmetric. Based on control theory and stochastic analysis, such as the adaptive control method, the Itô formula, Lyapunov Stability Theory (LST) and Kronecker product rules, the controllers and adaptive schemes are constructed which let all nodes reach GMSS asymptotically in the CDN. Finally, some examples of several complex chaotic systems are presented to demonstrate the proposed theoretical analysis.
Keywords: Brownian motion; adaptive control; chaos; complex networks; delays; matrix algebra; nonlinear systems; perturbation techniques; stochastic systems; synchronisation; BM; CDN; GMSS; Itô formula; Kronecker product rules; LSP; LST; Lyapunov stability theory; NTVD; TVD; adaptive complex dynamical network; adaptive control method; chaotic systems; dissymmetric case; group mean square synchronization; interval time-varying delays; limited stochastic perturbation; nontime-varying delay; outer-coupling matrices; symmetric case; weight configuration matrices; zero-mean real scalar Brownian motion; Adaptation models; Asymptotic stability; Communities; Complex networks; Couplings; Delays; Synchronization (ID#: 15-5189)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6862264&isnumber=6862131
Fong-In, S.; Kiattisin, S.; Leelasantitham, A.; San-Um, W., "A Header Encryption Of Ultrasound Image Using Absolute-Value Chaotic Map," Biomedical Engineering International Conference (BMEiCON), 2014 7th, pp. 1-5, 26-28 Nov. 2014. doi: 10.1109/BMEiCON.2014.7017414
Abstract: This paper presents a partial encryption scheme using an absolute-value chaotic map for secure electronic health records (EHR). The EHR system has been an emerging technology that allows medical personnel to create, manage, and control medical data electronically through specific databases or even web browsers. The proposed encryption scheme realizes XOR operations between separated planes of a binary gray-scale image and a binary image generated by an absolute-value chaotic map. The proposed map is relatively simple, containing a single absolute-value function with two constants, and offers complex and robust dynamical behaviors in terms of random output values. Experiments have been performed in MATLAB using a magnetic resonance image which is divided into 64 sub-blocks, and 13 iterations were performed for encryption. Encryption qualitative performances are evaluated through pixel density histograms, 2-dimensional power spectral density, and vertical, horizontal, and diagonal correlation plots. For the encryption quantitative measures, correlation coefficients, entropy, NPCR and UACI are realized. Demonstrations of a wrong-key decrypted image are also included. The proposed encryption scheme offers a potential alternative for securing medical data records and web browsing through cloud computing systems.
Keywords: biomedical MRI; biomedical ultrasonics; chaos; cloud computing; cryptography; electronic health records; iterative methods; mathematics computing; medical diagnostic computing; medical image processing; ultrasonic imaging; 13th iterations; 2-dimensional power spectral density; MATLAB; NPCR; UACI; XOR operations; absolute-value chaotic map; binary gray-scale image; cloud computing systems; complex dynamical behaviors; database; diagonal correlation plots; electronic health record security; electronic medical data control; electronic medical data creation; electronic medical data management; encryption qualitative performances; entropy; header encryption; horizontal correlation plots; magnetic resonance image; medical personals; partial encryption scheme; pixel density histograms; random output values; robust dynamical behaviors; single absolute-value function; ultrasound image; vertical correlation plots; web browsers; web browsing; wrong-key decrypted image; Biomedical imaging; Cryptography; Image segmentation; Absolute-Value Chaotic Map; Partial Encryption Scheme; electronic health records (ID#: 15-5190)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7017414&isnumber=7017361
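A minimal Python sketch of the XOR scheme described above, assuming a tent-like absolute-value map of the form x_{n+1} = a - b|x_n|; the constants, threshold and image handling here are illustrative guesses, since the abstract does not give the authors' exact map or parameters.

import numpy as np

def chaotic_keystream(shape, x0=0.3, a=1.9998, b=2.0, burn_in=1000):
    """Iterate an absolute-value map and threshold it into a binary mask."""
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = a - b * abs(x)
    n = int(np.prod(shape))
    bits = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = a - b * abs(x)
        bits[i] = 1 if x > 0 else 0       # threshold into a binary image
    return bits.reshape(shape)

def encrypt_plane(plane, mask):
    """XOR one bit plane of a gray-scale image with the chaotic binary mask."""
    return np.bitwise_xor(plane, mask)

plane = (np.random.rand(64, 64) > 0.5).astype(np.uint8)    # stand-in bit plane
mask = chaotic_keystream(plane.shape)
cipher = encrypt_plane(plane, mask)
assert np.array_equal(encrypt_plane(cipher, mask), plane)  # XOR is its own inverse

Because XOR is an involution, decryption reuses the same mask, which is why a wrong key (a different x0 or constants) yields the noise-like decrypted images the paper demonstrates.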
Lin Pan; Voos, H.; N'Doye, I.; Darouach, M., "Uncertainty Quantification Of Group Synchronization And Control Of A New Class Of Adaptive Complex Dynamical Network With Brownian Motion And Time-Varying Delay," Control Conference (CCC), 2014 33rd Chinese, pp. 1881, 1886, 28-30 July 2014. doi: 10.1109/ChiCC.2014.6896916
Abstract: This paper investigates the Uncertainty Quantification (UQ) of Group Mean Square Synchronization (GMSS) for a new class of Complex Dynamical Network (CDN) with interval Time-Varying Delays (TVD) and Limited Stochastic Perturbation (LSP). Based on control theory and the Stochastic Collocation (SC) method, we study the robustness of the control algorithm with respect to the value of the final time. To that end, we assume a normal distribution for time and use the SC method [1] with different numbers of nodes n and collocation points Ti to quantify the sensitivity. The results show that the synchronization errors are close to zero with high probability, and this is confirmed for different numbers of nodes. Finally, some examples with chaotic systems of different structure, together with their numerical simulation results, are presented to demonstrate the theoretical analysis. The conclusion of this study is therefore that the accuracy of the synchronization and control algorithm is robust to variations of time.
Keywords: adaptive control; complex networks; delays; network theory (graphs); normal distribution; synchronisation; Brownian motion; CDN; LSP; SC method; TVD; UQ; adaptive complex dynamical network; collocation points; control algorithm; control theory; group mean square synchronization; group synchronization; limited stochastic perturbation; normal distribution; robustness; stochastic collocation method; synchronization error; time-varying delay; uncertainty quantification; Artificial neural networks; Chaos; Polynomials; Robustness; Synchronization; Uncertainty; Chaotic Systems; Complex Dynamical Networks (CDN); Cumulative Distribution Function (CDF); Group Mean Square Synchronization (GMSS); Limited Stochastic Perturbation (LSP); Uncertainty Quantification (UQ) (ID#: 15-5191)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6896916&isnumber=6895198
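The stochastic-collocation step can be pictured with a short Python sketch using Gauss-Hermite quadrature for a normally distributed final time T; the decaying synchronization-error function below is a hypothetical stand-in for the CDN simulation that the paper evaluates at each collocation point.

import numpy as np

def collocate_normal(g, mu, sigma, n_pts=7):
    """Mean and variance of g(T) for T ~ N(mu, sigma^2) via Gauss-Hermite nodes."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_pts)
    T = mu + np.sqrt(2.0) * sigma * nodes      # change of variables to N(mu, sigma^2)
    w = weights / np.sqrt(np.pi)               # normalized quadrature weights
    vals = np.array([g(t) for t in T])         # one model run per collocation point
    mean = np.sum(w * vals)
    var = np.sum(w * (vals - mean) ** 2)
    return mean, var

sync_error = lambda T: np.exp(-0.8 * T)        # toy stand-in for the simulated error
print(collocate_normal(sync_error, mu=10.0, sigma=0.5))

With a handful of collocation points, the quadrature recovers the mean and spread of the output far more cheaply than plain Monte Carlo sampling, which is the appeal of the SC method here.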
San-Um, W.; Chuayphan, N., "A Lossless Physical-Layer Encryption Scheme In Medical Picture Archiving And Communication Systems Using Highly-Robust Chaotic Signals," Biomedical Engineering International Conference (BMEiCON), 2014 7th, pp. 1, 5, 26-28 Nov. 2014. doi: 10.1109/BMEiCON.2014.7017404
Abstract: This paper reviews some major techniques related to security issues in Picture Archiving and Communication Systems (PACS) for medical images. Three conventional techniques, watermarking, digital signatures and encryption, are studied. An encryption scheme using highly-robust chaotic signals is also proposed as a new lossless physical-layer scheme that improves the security of medical images in PACS. A dynamical system utilizing the signum function is employed to generate chaotic signals with smooth bifurcation, i.e. no appearance of periodic windows. The nonlinear dynamics of the chaotic maps were initially investigated in terms of the cobweb map, chaotic attractor, Lyapunov exponent spectrum, bifurcation diagram, and 2-dimensional parameter spaces. Encryption qualitative performance is evaluated through pixel density histograms, 2-dimensional power spectral density, key space analysis, key sensitivity, and vertical, horizontal, and diagonal correlation plots. Quantitative performance is evaluated through correlation coefficients, NPCR and UACI. Demonstrations of a wrong-key decrypted image are also included.
Keywords: PACS; bifurcation; biomedical ultrasonics; chaos; cryptography; digital signatures; image watermarking; medical image processing; 2-dimensional parameter spaces; 2-dimensional power spectral density; PACS system; Cobweb map; Lyapunov exponent spectrum; NPCR; UACI; bifurcation diagram; chaotic attractor; correlation coefficients; diagonal correlation; digital signature; highly-robust chaotic signals; horizontal correlation; key sensitivity; key space analysis; lossless physical-layer encryption scheme; medical images; medical picture archiving-and-communication systems; nonlinear dynamics; pixel density histograms; signum function; vertical correlation; watermarking; wrong-key decrypted image; Chaotic communication; Correlation; Cryptography; Digital images; Encryption Scheme; Highly-Robust Chaotic Signals; Picture Archiving and Communication Systems (ID#: 15-5192)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7017404&isnumber=7017361
Pan, L.; Voos, H.; N'Doye, I.; Darouach, M., "Exponential Synchronization For A New Class Of Complex Dynamical Network With PIPC And Hybrid TVD," Intelligent Control (ISIC), 2014 IEEE International Symposium on, pp. 270, 275, 8-10 Oct. 2014. doi: 10.1109/ISIC.2014.6967598
Abstract: In this paper, Exponential Synchronization (ES) for a new class of Complex Dynamical Networks (CDN) with hybrid Time-Varying Delay (TVD) and Non-Time-Varying Delay (NTVD) nodes is investigated by using coupling Periodically Intermittent Pinning Control (PIPC). Based on Lyapunov Stability Theory (LST), Kronecker product rules and the PIPC method, sufficient conditions for ES and PIPC criteria of such CDNs are derived. The obtained results are effective and less conservative. Furthermore, to verify the effectiveness of the proposed theoretical results, a Barabasi-Albert Network (BAN) and a Nearest-Neighbor Network (NNN) consisting of coupled non-delayed and delayed Lee oscillators are finally given as examples.
Keywords: Lyapunov methods; complex networks; delays; large-scale systems; oscillators; stability; synchronization; time-varying systems; BAN; Barabasi-Albert network; Kronecker product rules; LST; Lyapunov stability theory; NNN; PIPC method; complex dynamical network; coupled non-delayed Lee oscillators; delayed Lee oscillators; exponential synchronization; hybrid TVD; hybrid time-varying delay; nearest-neighbor network; non-time-varying delay; periodically intermittent pinning control; sufficient conditions; Artificial neural networks; Control systems; Couplings; Delays; Equations; Synchronization; Vectors (ID#: 15-5193)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6967598&isnumber=6967594
Antulov-Fantulin, N.; Lancic, A.; Stefancic, H.; Sikic, M.; Smuc, T., "Statistical Inference Framework for Source Detection of Contagion Processes on Arbitrary Network Structures," Self-Adaptive and Self-Organizing Systems Workshops (SASOW), 2014 IEEE Eighth International Conference on, pp. 78, 83, 8-12 Sept. 2014. doi: 10.1109/SASOW.2014.35
Abstract: We introduce a statistical inference framework for maximum likelihood estimation of the contagion source from a partially observed contagion spreading process on an arbitrary network structure. The framework is based on simulations of a contagion spreading process from a set of potential sources which were infected in the observed realization. We present a number of different likelihood estimators for determining the conditional probabilities of potential initial sources producing the observed epidemic realization, which are computed in a scalable and parallel way. This statistical inference framework is applicable to arbitrary networks with different dynamical spreading processes.
Keywords: inference mechanisms; maximum likelihood estimation; probability; security of data; conditional probabilities; contagion processes; contagion spreading process; dynamical spreading process; epidemic realization; maximum likelihood estimation; source detection; statistical inference framework; Adaptation models; Airports; Atmospheric modeling; Estimation; Noise; Position measurement; Random variables; complex networks; contagion spreading; source detection (ID#: 15-5194)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056358&isnumber=7056334
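The simulation-based estimation can be illustrated with a short Python sketch: for each candidate source among the observed infected nodes, repeatedly simulate a spread and score how often the simulation reproduces the observation. The SI spreading model and its parameters are illustrative assumptions, not the authors' exact estimators.

import random

def simulate_si(adj, source, p=0.3, steps=5):
    """One realization of susceptible-infected spreading from `source`."""
    infected = {source}
    for _ in range(steps):
        new = {v for u in infected for v in adj[u]
               if v not in infected and random.random() < p}
        infected |= new
    return infected

def source_likelihoods(adj, observed, n_sims=2000):
    """Estimate, for each candidate source in the observed set, the probability
    that a simulated realization reproduces the observed infected set."""
    scores = {}
    for s in observed:                     # only observed nodes can be sources
        hits = sum(simulate_si(adj, s) == observed for _ in range(n_sims))
        scores[s] = hits / n_sims
    return scores

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
print(source_likelihoods(adj, observed={0, 1, 2, 3}))

The per-candidate simulations are independent, which is what makes the framework scalable and parallel, as the abstract notes.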
Hua Yang; Xiaojin Yang, "An Improved Multi-Channel MAC Protocol In Vehicle-To-Vehicle Communication System," Computing, Communication and Networking Technologies (ICCCNT), 2014 International Conference on, pp. 1, 5, 11-13 July 2014. doi: 10.1109/ICCCNT.2014.6963153
Abstract: For self-organizing networks in large-scale automotive scenarios, an improved multi-channel MAC protocol is proposed, building on the McMAC multichannel protocol with added clustering considerations. In this protocol, vehicles are organized into different clusters, and the traditional single medium is divided into a plurality of control channels and one data channel. Data communication within and between clusters uses the data channel with TDMA/CDMA technology, while the control channels use the CSMA/CA protocol. Simulation results show that, compared to Dynamical Control Assignment and McMAC, the improved protocol significantly enhances performance indicators such as probability of successful packet reception, channel access time, and congestion control.
Keywords: carrier sense multiple access; code division multiple access; data communication; mobile radio; pattern clustering; time division multiple access; CSMA/CA protocol; McMAC multichannel protocol; TDMA-CDMA technology; automotive large-scale network scenario; control channel plurality; data channel; data communication channel; dynamical control assignment; improved multichannel MAC protocol; performance indicators; self-organizing networks; single medium; vehicle-to-vehicle communication system; Ad hoc networks; Media Access Protocol; Throughput; Time division multiple access; Vehicles; Medium Access Control; Multi-Channel; Vehicle-to-Vehicle Communication; Vehicular Ad Hoc Network (ID#: 15-5195)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6963153&isnumber=6962988
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
End to End Security and IPv6, 2014 |
Protocols are an important element in developing end to end security. In 2014, much research was done on protocols in general and on IPv6 in particular, as they relate to security.
Sahraoui, S.; Bilami, A., "Compressed And Distributed Host Identity Protocol For End-To-End Security In The IoT," Next Generation Networks and Services (NGNS), 2014 Fifth International Conference on, pp. 295, 301, 28-30 May 2014. doi: 10.1109/NGNS.2014.6990267
Abstract: Wireless Sensor Networks (WSNs), as a key part of the Internet of Things (IoT), allow the representation of the dynamic characteristics of the physical world in the Internet's virtual world. Thus, sensor nodes are henceforth considered as Internet hosts and may act freely as web clients or servers. Undoubtedly, security and end-user privacy issues arise and become more severe in the IoT, due to the asymmetric nature of the communications between sensor nodes and ordinary Internet hosts. Many solutions propose to use the classical IP-based security protocols for the IoT, after adapting them to the WSN's constraints by either message compression or computational-load distribution techniques. In this paper we propose a 6LoWPAN (IPv6 over Low power Wireless Personal Area Networks) compression model for the HIP (Host Identity Protocol) header, as well as an adapted distribution scheme for the computational load in HIP's key agreement process. For extremely lightweight end-to-end security, we propose to combine both the compression and distribution models for HIP on the WSN side of the IoT. The partial evaluation results show that the proposed protocol, named compressed and distributed HIP (CD-HIP), is better adapted than standard HIP, while introducing only minor header communication overhead.
Keywords: IP networks; Internet; Internet of Things; file servers; personal area networks; protocols; telecommunication security; wireless sensor networks; 6LoWPAN; CD-HIP; IP-based security protocols; IPv6 over low power wireless personal area networks; Internet of Things; IoT; WSN; Web clients; Web servers; communication overhead; compressed and distributed HIP; computational-load distribution techniques; distribution scheme; end-to-end security; host identity protocol; messages compression; wireless sensor networks; Hip; IP networks; Internet; Peer-to-peer computing; Protocols; Security; Wireless sensor networks; 6LoWPAN compression; Host Identity Protocol; Internet of Things; IoT; Wireless Sensor Networks; distributed HIP Base Exchange; end-to-end security (ID#: 15-5215)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6990267&isnumber=6990210
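To make the header-compression idea concrete, here is a small Python sketch of elision-based compression in the 6LoWPAN-NHC spirit: fields equal to a shared default are dropped, and a one-byte bitmap records which fields are carried inline. The field names, defaults and widths are hypothetical, not the CD-HIP wire format.

import struct

FIELDS = ["version", "packet_type", "controls", "checksum"]
DEFAULTS = {"version": 2, "packet_type": 1, "controls": 0, "checksum": 0}

def compress(header):
    bitmap, payload = 0, b""
    for i, field in enumerate(FIELDS):
        if header[field] != DEFAULTS[field]:
            bitmap |= 1 << i                    # bit set => field carried inline
            payload += struct.pack("!H", header[field])
    return bytes([bitmap]) + payload            # elided fields cost nothing

def decompress(data):
    bitmap, off, header = data[0], 1, dict(DEFAULTS)
    for i, field in enumerate(FIELDS):
        if bitmap & (1 << i):
            (header[field],) = struct.unpack_from("!H", data, off)
            off += 2
    return header

h = {"version": 2, "packet_type": 3, "controls": 0, "checksum": 0xBEEF}
assert decompress(compress(h)) == h             # round-trips: 5 bytes instead of 8

The saving grows with the number of fields that usually hold well-known values, which is exactly the property protocol headers exhibit on constrained links.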
Kasraoui, M.; Cabani, A.; Chafouk, H., "IKEv2 Authentication Exchange Model in NS-2," Computer, Consumer and Control (IS3C), 2014 International Symposium on, pp. 1074, 1077, 10-12 June 2014. doi: 10.1109/IS3C.2014.280
Abstract: Wireless Sensor Network (WSN) communications have become one of the fastest-emerging fields in the area of wireless communication technologies. The integration of WSNs with Internet technologies has been enhanced by the use of the 6LoWPAN standard. This raises the challenge and significance of end-to-end security in 6LoWPAN communication between IPv6-enabled sensor networks and Internet hosts. Many researchers have proposed applying IPsec/IKE to WSNs to reinforce end-to-end security in communication. Until now, however, an IKE module had not been implemented or added to a network simulator. In this paper we implement an IKE module in the Network Simulator-2 (NS2). This new module will help in studying end-to-end security in wireless communication in detail. We also discuss and compare the pros and cons of network simulators such as OMNET++, TOSSIM and COOJA with NS2.
Keywords: IP networks; computer network security; cryptographic protocols; personal area networks; wireless sensor networks; 6LoWPAN communication standard; COOJA; IKE module; IKEv2 authentication exchange model; IPsec-IKE; IPv6 enabled sensor networks; Internet hosts; NS2 simulator; Network Simulator-2; OMNET++; TOSSIM; WSN communications; end-to-end security; wireless communication technologies; wireless sensor networks; Authentication; Delays; Energy consumption; Internet; Protocols; Wireless sensor networks; 6LoWPAN; IKEv2; IPSec; NS2 (ID#: 15-5216)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846072&isnumber=6845429
Naito, K.; Mori, K.; Kobayashi, H.; Kamienoo, K.; Suzuki, H.; Watanabe, A., "End-To-End IP Mobility Platform In Application Layer For iOS And Android OS," Consumer Communications and Networking Conference (CCNC), 2014 IEEE 11th, pp. 92, 97, 10-13 Jan. 2014. doi: 10.1109/CCNC.2014.6866554
Abstract: Smartphones are a new type of mobile device on which users can easily install additional mobile software. In almost all smartphone applications, the client-server model is used because end-to-end communication is prevented by NAT routers. Recently, some smartphone applications provide real-time services such as voice and video communication, online games, etc. In these applications, end-to-end communication is suitable for reducing transmission delay and achieving efficient network usage. IP mobility and security are also important matters. However, the conventional IP mobility mechanisms are not suitable for these applications because most mechanisms are assumed to be installed in the OS kernel. We have developed a novel IP mobility mechanism called NTMobile (Network Traversal with Mobility). NTMobile supports end-to-end IP mobility in IPv4 and IPv6 networks; however, it is assumed to be installed in the Linux kernel, as with other technologies. In this paper, we propose a new type of end-to-end mobility platform that provides end-to-end communication, mobility, and secure data exchange functions in the application layer for smartphone applications. In the platform, we use NTMobile, ported as an application program. We then extend NTMobile to suit smartphone devices and to provide secure data exchange. Client applications can achieve secure end-to-end communication and secure data exchange by sharing an encryption key between clients. Users also enjoy IP mobility, the main function of NTMobile, in each application. Finally, we confirmed that the developed module works on the Android and iOS systems.
Keywords: Android (operating system); IP networks; client-server systems; cryptography; electronic data interchange; iOS (operating system); real-time systems; smart phones; Android OS; IPv4 networks; IPv6 networks; Linux kernel; NAT routers; NTMobile; OS kernel; application layer; client-server model; encryption key; end-to-end IP mobility platform; end-to-end communication; iOS system; network traversal with mobility; network usage; real time services; secure data exchange; smartphones; transmission delay; Authentication; Encryption; IP networks; Manganese; Relays; Servers (ID#: 15-5217)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866554&isnumber=6866537
Varadarajan, P.; Crosby, G., "Implementing IPsec in Wireless Sensor Networks," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp. 1, 5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814024
Abstract: There is an increasing need for wireless sensor networks (WSNs) to be more tightly integrated with the Internet. Several real-world deployments of stand-alone wireless sensor networks exist, and a number of solutions have been proposed to address the security threats in these WSNs. However, integrating WSNs with the Internet in such a way as to ensure a secure End-to-End (E2E) communication path between IPv6-enabled sensor networks and the Internet remains an open research issue. In this paper, the 6LoWPAN adaptation layer was extended to support both IPsec's Authentication Header (AH) and Encapsulating Security Payload (ESP). Thus, the communication endpoints in WSNs are able to communicate securely using encryption and authentication. The performance of the proposed AH and ESP compressed headers is evaluated via a test-bed implementation of 6LoWPAN for IPv6 communications on IEEE 802.15.4 networks. The results confirm the possibility of implementing E2E security in IPv6-enabled WSNs to create a smooth transition between WSNs and the Internet. This can potentially play a big role in the emerging "Internet of Things" paradigm.
Keywords: IP networks; Internet; Zigbee; computer network security; cryptography; wireless sensor networks; 6LoWPAN adaptation layer; AH; E2E security; ESP compressed header performance; IEEE 802.15.4 networks; IPsec authentication header; IPv6 enabled sensor networks; Internet; Internet of Things paradigm; WSNs; communication endpoints; encapsulation security payload; encryption; end-to-end communication path; security threats; stand-alone wireless sensor networks; Authentication; IEEE 802.15 Standards; IP networks; Internet; Payloads; Wireless sensor networks (ID#: 15-5218)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814024&isnumber=6813963
Ahmed, A.S.; Hassan, R.; Othman, N.E., "Security Threats For IPv6 Transition Strategies: A Review," Engineering Technology and Technopreneuship (ICE2T), 2014 4th International Conference on, pp.83,88, 27-29 Aug. 2014. doi: 10.1109/ICE2T.2014.7006224
Abstract: There is a growing perception among communications experts that IPv6 and its associated protocols are set to soon replace the current IP version. This is somewhat interesting given that general adoption of IPv6 has been slow, which can perhaps be explained by short-term fixes to IPv4 address exhaustion, including classless addressing and NAT. These short-term solutions aside, IPv4 cannot manage the growth of information systems, particularly the growth of Internet technologies and services including cloud computing, mobile IP, IP telephony, and IP-capable mobile telephony, all of which necessitate the use of IPv6. There is, however, a realization that the transformation must be gradual and properly guided and managed. To this end, the Internet Engineering Task Force (IETF) has defined mechanisms to assist in the transition from IPv4 to IPv6: Dual Stack, Header Translation and Tunneling. The mechanisms employed in this transition consist of changes to protocol mechanisms affecting hosts and routers, addressing and deployment, designed to avoid mishap and facilitate a smooth transition from IPv4 to IPv6. Given the inevitability of adopting IPv6, this paper focuses on a detailed examination of the transition techniques and their associated benefits and possible shortcomings. Furthermore, the security threats for each transition technique are reviewed.
Keywords: Internet; information systems; security of data; transport protocols; IETF; IP-capable mobile telephony; IPv4; IPv6 transition strategy; Internet engineering task force; NAT; classless addressing; cloud computing; dual stack; header translation; information system; internet technology; mobile IP; protocol mechanism; security threat; tunneling; Encapsulation; Firewalls (computing); IP networks; Internet; Protocols; Tunneling; Dual Stack; IPv4; IPv6; Translation; Tunneling (ID#: 15-5219)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7006224&isnumber=7006204
Goswami, S.; Misra, S.; Taneja, C.; Mukherjee, A., "Securing Intra-Communication in 6LoWPAN: A PKI Integrated Scheme," Advanced Networks and Telecommunications Systems (ANTS), 2014 IEEE International Conference on, pp. 1, 5, 14-17 Dec. 2014. doi: 10.1109/ANTS.2014.7057265
Abstract: The 6LoWPAN standard enables efficient integration of low-power wireless networks with IPv6. However, the security requirements of 6LoWPANs are high due to undefined deployment scenarios and the constrained capabilities of sensor nodes. A number of schemes have been devised for secure communication over the Internet, PKI being the most widely used of them. It provides authentication, non-repudiation, confidentiality and integrity. PKI does not qualify for use in 6LoWPAN as it is not streamlined for these networks and creates a communication and processing overhead which cannot be borne by a simple wireless sensor node. We provide a scheme to integrate PKI and 6LoWPAN by essentially delegating a major portion of the key management activity to the edge routers (gateways) of the LoWPAN and limiting the involvement of the end nodes to minimal communication with the edge router. The edge router maintains a Local Key Database (LKDB) by remaining in constant contact with the certification authority (CA) server and oversees all related keying functions in the LoWPAN. A request packet format and an algorithm to acquire keys of the destination from the edge router are proposed. Performance evaluation of the proposed scheme using a protocol analyzer indicated a tradeoff between time and increased packet count for the enhanced level of security. An increase in packet payload during evaluation led to a significant increase in transmitted message count. The proposed scheme did not alter the nature of the packets transmitted and performed well at scalable loads.
Keywords: IP networks; performance evaluation; personal area networks; public key cryptography; telecommunication security; 6LoWPAN standard; IPv6; LKDB; PKI integrated scheme; certification authority server; edge routers; local key database; low power wireless networks; security requirements; wireless sensor node; Erbium; Payloads; Protocols; Public key; Servers; Wireless sensor networks (ID#: 15-5220)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7057265&isnumber=7057217
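A toy Python model of the delegation idea: a constrained node asks its edge router for a destination's public key, and the router serves it from a local cache (the LKDB) that it fills from the CA on a miss. The class and message shapes are assumptions for illustration, not the paper's request packet format.

class CA:
    """Stand-in certification authority holding a registry of public keys."""
    def __init__(self, registry):
        self.registry = registry
    def lookup(self, dest_id):
        return self.registry[dest_id]

class EdgeRouter:
    """Delegates key management: end nodes only ever talk to this router."""
    def __init__(self, ca):
        self.ca = ca
        self.lkdb = {}                       # Local Key Database (cache)
    def get_key(self, dest_id):
        if dest_id not in self.lkdb:         # cache miss -> one fetch from the CA
            self.lkdb[dest_id] = self.ca.lookup(dest_id)
        return self.lkdb[dest_id]

ca = CA({"node-42": b"\x04...public-key-bytes..."})
router = EdgeRouter(ca)
print(router.get_key("node-42"))             # fetched from the CA, cached in LKDB
print(router.get_key("node-42"))             # served locally, no CA traffic

Keeping the CA dialogue at the edge is what spares the sensor nodes the PKI communication and processing overhead the abstract describes.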
Chia-Wei Tseng; Sheue-Ji Chen; Yao-Tsung Yang; Li-Der Chou; Ce-Kuen Shieh; Sheng-Wei Huang, "IPv6 Operations And Deployment Scenarios Over SDN," Network Operations and Management Symposium (APNOMS), 2014 16th Asia-Pacific, pp. 1, 6, 17-19 Sept. 2014. doi: 10.1109/APNOMS.2014.6996530
Abstract: IPv6 is a technology that provides an enormous address space and end-to-end communication, features that are required for device automation and integration in future networks. The transition to IPv6 holds the future of the Internet infrastructure. Software-defined networking (SDN) defines a new concept for computer networks that can separate and provide abstract elements of network devices. IPv6 SDN has the potential to revolutionize how networks are designed, constructed and operated, achieving more efficient business network agility. In this paper, we discuss the main architectures of SDN and illustrate how IPv6 can be deployed and integrated in SDN technologies using OpenFlow mechanisms. We also discuss the impact of IPv6 on link performance and deployment scenarios.
Keywords: IP networks; Internet; next generation networks; software defined networking; IPv6; Internet infrastructure; OpenFlow mechanisms; SDN; device automation integration; end-to-end communication; software-defined networking; Broadband communication; Computer architecture; IP networks; Internet; Performance evaluation; Security; Switches; IPv6; Network deployment; OpenFlow; SDN (ID#: 15-5221)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996530&isnumber=6996102
Bhatti, S.N.; Phoomikiattisak, D.; Atkinson, R.J., "Fast, Secure Failover for IP," Military Communications Conference (MILCOM), 2014 IEEE, pp. 274, 281, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.50
Abstract: We describe a mechanism for fast, secure failover for IP. The mechanism is invisible to end-systems: sessions are maintained during failover. Our novel approach is to model the failover as a mobility problem, and to use a mobility solution in order to implement the change in connectivity. Our system is based on the Identifier-Locator Network Protocol (ILNP), an Experimental IRTF protocol which is realised as a superset of IPv6. Our empirical results from a test-bed emulation show that there is almost zero gratuitous loss during failover.
Keywords: IP networks; transport protocols; ILNP; IP network; IPv6; experimental IRTF protocol; identity locator network protocol; mobility problem; secure failover; test bed emulation; IP networks; Middleboxes; Mobile communication; Mobile computing; Protocols; Routing; Security (ID#: 15-5222)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956771&isnumber=6956719
Baddi, Y.; Ech-Chrif El Kettani, M.D., "A Fast Dynamic Multicast Tree Adjustment Protocol For Mobile IPv6," Next Generation Networks and Services (NGNS), 2014 Fifth International Conference on, pp. 106, 113, 28-30 May 2014. doi: 10.1109/NGNS.2014.6990237
Abstract: The Internet research community has proposed many different multicast routing protocols to support efficient multimedia applications such as IPTV, videoconferencing and group games. Nevertheless, these protocols were not designed for mobile roaming members and sources, and have not been tested in wireless and mobile environments, since they were developed for multicast parties whose members and sources are topologically stationary. Recently, as the performance of mobile hosts rapidly improves and the bandwidth of wireless access networks grows, mobile multimedia communication services, including many-to-many communications such as videoconferencing, have become a pressing need. Multicast issues in the stationary multicast infrastructure have been studied extensively in the literature. However, less effort has been spent on the specific problems of mobile members and sources caused by frequent changes of membership and point of attachment. This paper addresses the issue of mobile multicast routing by presenting a Fast Dynamic Multicast Tree Adjustment Protocol for Mobile IPv6 (FDMTA-MIPv6): an optimized multicast tree protocol is proposed to transform the multicast tree into an optimal shared multicast tree rooted at a selected RP. To evaluate our scheme, we ran simulations over many metrics; the simulation results show that good performance is achieved in terms of handoff latency, end-to-end delay, tree construction delay and other metrics.
Keywords: IP networks; IPTV; Internet; computer games; mobile computing; mobility management (mobile radio); multicast protocols; multimedia communication; radio access networks; routing protocols; telecommunication network topology; teleconferencing; video communication; FDMTA-MIPv6; IPTV; Internet research community; end-to-end delay; fast dynamic multicast tree adjustment protocol; frequent membership change; group games; handoff latency; many-to-many communications; mobile IPv6; mobile hosts; mobile members; mobile multimedia communication services; mobile sources; multicast routing protocols; multimedia application; point-of-attachment; tree construction delay; videoconferencing; wireless access networks; Delays; IP networks; Mobile communication; Mobile computing; Receivers; Routing protocols; CBT; Mobile IPv6; Multicast Routing; PIM-SM; RP (ID#: 15-5223)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6990237&isnumber=6990210
Baddi, Y.; El Kettani, M.D.E.-C., "Multiple Active Cores-Based Shared Multicast Tree For Mobile IPv6 Environment," Information Science and Technology (CIST), 2014 Third IEEE International Colloquium in, pp. 378, 383, 20-22 Oct. 2014. doi: 10.1109/CIST.2014.7016650
Abstract: Due to the progress of network multimedia technology, the Internet research community has proposed many different multicast routing protocols to support efficient real-time multimedia applications such as IPTV, videoconferencing and group games. These applications require a multicast routing protocol in which packets arrive at multicast receivers with minimum delay and delay variation. Such applications become even more important with the arrival of the mobile IPv6 protocol, with mobile receivers and sources having continuous access. Nevertheless, the design of multicast protocols does not take into account that group members may be mobile. Dynamic group members and sources can rapidly affect the quality of both the routing protocol scheme and the multicast tree used. The key idea of this work is to make the handover of multicast members transparent, with a quick recovery mechanism to maintain an optimal multicast tree, by using a MACT-MIPv6 architecture based on a multicast routing protocol with a Shared Multiple Active Cores Multicast Tree to hide the mobility of mobile multicast members from the main multicast delivery tree. Simulation results show that good performance is achieved in terms of handoff latency, end-to-end delay, tree construction delay and other metrics.
Keywords: IP networks; Internet; mobility management (mobile radio); multicast protocols; radio receivers; routing protocols; telecommunication network topology; Internet research community; MACT-MIPv6 architecture; delay variation; end-to-end delay; handoff latency; mobile IPv6 protocol; mobile receiver; multicast delivery tree; multicast member handover; multicast receiver; multicast routing protocol; multiple active core-based shared multicast tree; quick recovery mechanism; tree construction delay; IP networks; Mobile communication; Mobile computing; Receivers; Routing protocols; Subscriptions; CBT; MACT-MIPv6; MIPv6; Multicast tree; PIM-SM (ID#: 15-5224)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7016650&isnumber=7016576
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
End to End Security and the Internet of Things, 2014 |
End to end security focuses on the concept of uninterrupted protection of data traveling between two communicating partners. Generally, encryption is the method of choice. For the Internet of Things (IoT), “baked in” security is a major challenge. The research cited here was presented during 2014.
Sahraoui, S.; Bilami, A., "Compressed And Distributed Host Identity Protocol For End-To-End Security In The IoT," Next Generation Networks and Services (NGNS), 2014 Fifth International Conference on, vol., no., pp. 295, 301, 28-30 May 2014. doi: 10.1109/NGNS.2014.6990267
Abstract: Wireless Sensor Networks (WSNs), as a key part of the Internet of Things (IoT), allow the representation of the dynamic characteristics of the physical world in the Internet's virtual world. Thus, sensor nodes are henceforth considered as Internet hosts and may act freely as web clients or servers. Undoubtedly, security and end-user privacy issues arise and become more severe in the IoT, due to the asymmetric nature of the communications between sensor nodes and ordinary Internet hosts. Many solutions propose to use the classical IP-based security protocols for the IoT, after adapting them to the WSN's constraints by either message compression or computational-load distribution techniques. In this paper we propose a 6LoWPAN (IPv6 over Low power Wireless Personal Area Networks) compression model for the HIP (Host Identity Protocol) header, as well as an adapted distribution scheme for the computational load in HIP's key agreement process. For extremely lightweight end-to-end security, we propose to combine both the compression and distribution models for HIP on the WSN side of the IoT. The partial evaluation results show that the proposed protocol, named compressed and distributed HIP (CD-HIP), is better adapted than standard HIP, while introducing only minor header communication overhead.
Keywords: IP networks; Internet; Internet of Things; file servers; personal area networks; protocols; telecommunication security; wireless sensor networks; 6LoWPAN; CD-HIP; IP-based security protocols; IPv6 over low power wireless personal area networks; Internet of Things; IoT; WSN; Web clients; Web servers; communication overhead; compressed and distributed HIP; computational-load distribution techniques; distribution scheme; end-to-end security; host identity protocol; messages compression; wireless sensor networks; Hip; IP networks; Internet; Peer-to-peer computing; Protocols; Security; Wireless sensor networks; 6LoWPAN compression; Host Identity Protocol; Internet of Things; IoT; Wireless Sensor Networks; distributed HIP Base Exchange; end-to-end security (ID#: 15-5196)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6990267&isnumber=6990210
Jiye Park; Namhi Kang, "Lightweight Secure Communication For CoAP-Enabled Internet Of Things Using Delegated DTLS Handshake," Information and Communication Technology Convergence (ICTC), 2014 International Conference on, pp. 28, 33, 22-24 Oct. 2014. doi: 10.1109/ICTC.2014.6983078
Abstract: The IETF CoRE working group has proposed using DTLS to support secure IoT services. In this paper, we examine problems that can arise when the DTLS protocol is applied directly to IoT networks. To solve these problems, we separate the DTLS protocol into two phases: the handshake phase and the encryption phase. Our approach enhances performance in both the device and the network by delegating the DTLS handshake phase. We also present two scenarios (inbound and outbound) based on the properties of Constrained Application Protocol (CoAP) enabled sensors. The proposed scheme supports secure end-to-end communication despite using delegation.
Keywords: Internet of Things; cryptography; telecommunication security; CoAP enabled sensors; CoAP-enabled Internet of Things; DTLS protocol; IETF CoRE working group; IoT networks; constrained application protocol; delegated DTLS handshake; delegation; encryption phase; handshake phase; lightweight secure communication; secure end-to-end communication; Encryption; Internet; Protocols; Sensors; Servers; CoAP Security; DTLS; Delegation; End-to-end Security; Internet of Things (ID#: 15-5197)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6983078&isnumber=6983064
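A highly simplified Python sketch of the delegation pattern: a resource-rich helper completes the expensive key agreement and hands the session secret to the constrained CoAP node, which then performs only symmetric cryptography. Real DTLS record and handshake formats are omitted, and all names here are illustrative.

import os, hmac, hashlib

class HandshakeDelegate:
    """Stands in for the entity that runs the full DTLS handshake flights."""
    def handshake_with(self, server_identity):
        return os.urandom(32)                 # pretend this is the derived secret

class ConstrainedNode:
    def install(self, session_key):
        self.key = session_key                # delegated session state
    def protect(self, payload):
        # symmetric-only work remains on the device: here, a simple MAC
        tag = hmac.new(self.key, payload, hashlib.sha256).digest()
        return payload + tag

delegate, node = HandshakeDelegate(), ConstrainedNode()
node.install(delegate.handshake_with(server_identity=b"coaps://example"))
print(node.protect(b"GET /sensors/temp").hex()[:32])

The split mirrors the paper's two phases: the asymmetric handshake runs off-device, while the encryption phase stays on the sensor end to end.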
Curtis, Bill, "Delivering Security By Design In The Internet Of Things," Test Conference (ITC), 2014 IEEE International, pp. 1, 1, 20-23 Oct. 2014. doi: 10.1109/TEST.2014.7035283
Abstract: End to end security is becoming a prerequisite of the Internet of Things. Data must be managed securely at generation, in flight and at rest to avoid critical enterprise or personal data being intercepted. Privacy becomes paramount as our lives and health become increasingly digital, and devices must evolve to deliver security and robustness while pricing continues to be constrained. This talk will highlight the security requirements of the IoT as outlined by the Dept. of Homeland Security and the UK Centre for Protection of National Infrastructure to counter the emergence of threats ranging from advanced persistent software threats to physical tampering and side channel attacks. Following the definition of the attack threats we will then establish the definition of advanced device security features, system implementation requirements and testability criteria to develop Security by Design within the Internet of Things.
Keywords: (not provided) (ID#: 15-5198)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7035283&isnumber=7035243
Shafagh, H.; Hithnawi, A., "Poster Abstract: Security Comes First, a Public-key Cryptography Framework for the Internet of Things," Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, pp. 135, 136, 26-28 May 2014. doi: 10.1109/DCOSS.2014.62
Abstract: Novel Internet services are emerging around an increasing number of sensors and actuators in our surroundings, commonly referred to as smart devices. Smart devices, which form the backbone of the Internet of Things (IoT), enable alternative forms of user experience by means of automation, convenience, and efficiency. At the same time, new security and safety issues arise, given the Internet connectivity and the possibility of smart devices interacting with humans' proximate living space. Hence, security is a fundamental requirement of the IoT design. In order to remain interoperable with the existing infrastructure, we postulate a security framework compatible with standard IP-based security solutions, yet optimized to meet the constraints of the IoT ecosystem. In this ongoing work, we first identify the necessary components of interoperable secure end-to-end communication incorporating Public-key Cryptography (PKC). To this end, we tackle the computational and communication overheads involved. The required components on the hardware side are affordable hardware acceleration engines for cryptographic operations, and on the software side, header compression and long-lasting secure sessions. In future work, we will focus on integrating these components into a framework and evaluating an early prototype of it.
Keywords: IP networks; Internet; Internet of Things; open systems; public key cryptography; IP-based security solutions; Internet of Things; Internet services; Internet-connectivity; IoT; end-to-end communication; interoperability; public-key cryptography; safety issues; security issues; smart devices; Acceleration; Cryptography; Engines; Hardware; Internet of Things; Protocols (ID#: 15-5199)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846155&isnumber=6846129
Ahrary, A.; Ludena, R.D.A., "Big Data Application To The Vegetable Production And Distribution System," Signal Processing & its Applications (CSPA), 2014 IEEE 10th International Colloquium on, pp. 20, 24, 7-9 March 2014. doi: 10.1109/CSPA.2014.6805713
Abstract: The new paradigm of Big Data and its multiple benefits have been used in the novel nutrition-based vegetable production and distribution system in order to generate healthy food recommendations for the end user and to provide different analytics to improve the system's efficiency. As the next step in this study, the new Internet of Things (IoT) paradigm is included in the system's Big Data approach to exploit its benefits, particularly automation, adding much more precise data to the system and providing the user a much richer experience. The IoT paradigm bridges the real world with its virtual image, where devices located in different areas can exchange information with each other without any type of supervision or control. But in spite of all the benefits that the IoT could bring to society, the security of information as well as privacy must be strongly enforced and managed in this new environment with unique characteristics. In our project we make a particular approach and security assessment of the use of IoT to provide automatic data to the system.
Keywords: Big Data; Internet of Things; agriculture; goods distribution; security of data; Big Data; Internet of Things; IoT paradigm; healthy food recommendation; nutrition-based vegetable; security assessment; vegetable distribution system; vegetable production system; virtual image; Authentication; Data handling; Data storage systems; Information management; Internet; Radiofrequency identification; Big Data; Computer Science; Data Analysis; Data systems; IoT (ID#: 15-5200)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805713&isnumber=6805703
Poulymenopoulou, M.; Malamateniou, F.; Vassilacopoulos, G., "A Virtual PHR Authorization System," Biomedical and Health Informatics (BHI), 2014 IEEE-EMBS International Conference on, pp. 73, 76, 1-4 June 2014. doi: 10.1109/BHI.2014.6864307
Abstract: Cloud computing and Internet of Things (IoT) technologies can support a new generation of PHR systems which are provided as cloud services that contain patient data (health and social) from various sources, including automatically transmitted data from Internet-connected devices in the patient's living space (e.g. medical devices connected to patients at home care). In this paper, the virtual PHR concept is introduced as an entity on the network consisting of (a) a non-healthcare component containing health and social information collected by either the patient or non-healthcare providers, (b) a medical device component containing health information transmitted from Internet-connected medical devices and (c) a healthcare professional component containing information stored in various healthcare information systems. The PHR concept is based on the patient-centered model dictating that patients are the owners of their information. Hence, patients are empowered to authorize other subjects to access it, which introduces specific security challenges that are further accentuated by the fact that diverse local security policies may need to be reconciled. The PHR authorization system proposed here is based on a combination of role-based and attribute-based access control (RABAC) and supports patient-specified authorization policies of various granularity levels, subject to constraints imposed by the security policies of the various health and social care providers involved. To this end, an ontology of granular security concepts is built to aid in semantically matching diverse authorization requests and to enable semantic rule reasoning on whether a requested access should be permitted or denied.
Keywords: authorisation; electronic health records; granular computing; ontologies (artificial intelligence); IOT; Internet of things; RABAC; attribute-based access control; cloud computing; data access; granular security concepts; granularity levels; health care providers; health information collection; health information transmission; healthcare information systems; healthcare professional component; information storage; local security policies; medical device component; nonhealthcare component; nonhealthcare providers; ontology; patient data; patient-centered model; patient-specified authorization policies; personal health record; role-based access control; semantic matching; semantic rule reasoning; social care providers; social information collection; virtual PHR authorization system; Authorization; Cloud computing; Filtering; Medical services; Ontologies; Semantics (ID#: 15-5201)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6864307&isnumber=6864286
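A minimal Python sketch of a combined role- and attribute-based (RABAC) decision for a virtual PHR: a request is permitted only when a role rule allows the action on the record type and the attribute constraints (ownership, or patient consent plus a care relationship) also hold. The policy fields and rules are hypothetical, not the paper's ontology-driven policies.

ROLE_PERMISSIONS = {
    ("physician", "medical_device_data"): {"read"},
    ("social_worker", "social_data"): {"read"},
    ("patient", "*"): {"read", "write", "delegate"},
}

def authorize(subject, action, record, consent):
    # role check: does any matching (role, record-type) rule allow the action?
    roles_ok = any(
        action in actions
        for (role, rtype), actions in ROLE_PERMISSIONS.items()
        if role == subject["role"] and rtype in (record["type"], "*")
    )
    # attribute check: owners always pass; others need consent and a care relation
    attrs_ok = (subject["id"] == record["owner"]
                or (subject["id"] in consent and subject.get("treating", False)))
    return roles_ok and attrs_ok

doctor = {"id": "dr-lee", "role": "physician", "treating": True}
record = {"type": "medical_device_data", "owner": "patient-9"}
print(authorize(doctor, "read", record, consent={"dr-lee"}))    # True
print(authorize(doctor, "write", record, consent={"dr-lee"}))   # False: role denies

In the paper, the attribute side is driven by a security ontology and semantic rules rather than the hard-coded predicate used here.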
Ludena R, D.A.; Ahrary, A.; Horibe, N.; Won Seok Yang, "IoT-security Approach Analysis for the Novel Nutrition-Based Vegetable Production and Distribution System," Advanced Applied Informatics (IIAIAAI), 2014 IIAI 3rd International Conference on, pp. 185, 189, Aug. 31 2014-Sept. 4 2014. doi: 10.1109/IIAI-AAI.2014.47
Abstract: The new Internet of Things (IoT) paradigm is giving the scientific community the possibility to create integrated environments where information can be exchanged among heterogeneous networks in an automated way, in order to provide a richer experience to users and to give specific, relevant information about the particular environment the user is interacting with. These characteristics are highly valuable for the novel nutrition-based vegetable production and distribution system, in which the multiple benefits of Big Data were used to generate healthy food recommendations for the end user and to feed different analytics into the system to improve its efficiency. Moreover, various IoT capabilities, specifically automation and heterogeneous network communication, are valuable for improving the information matrix of our project. This paper discusses the different available IoT technologies, their security capabilities and assessment, and how they could be useful for our project.
Keywords: Big Data; Internet of Things; agricultural products; computer network security; production engineering computing; recommender systems; Internet of Things paradigm; IoT-security approach analysis; automation network communication; big data; healthy food recommendation; heterogeneous characteristic networks; heterogeneous network communication; nutrition-based vegetable production and distribution system; security capabilities; Authentication; Big data; Educational institutions; Internet; Radiofrequency identification; Big Data; Computer Science; Data Analysis; Data systems; IoT (ID#: 15-5202)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913291&isnumber=6913244
Porambage, P.; Schmitt, C.; Kumar, P.; Gurtov, A.; Ylianttila, M., "Two-Phase Authentication Protocol For Wireless Sensor Networks In Distributed IoT Applications," Wireless Communications and Networking Conference (WCNC), 2014 IEEE, pp. 2728, 2733, 6-9 April 2014. doi: 10.1109/WCNC.2014.6952860
Abstract: In the centralized Wireless Sensor Network (WSN) architecture there exists a central entity which acquires, processes and provides information from sensor nodes. Conversely, in WSN applications in the distributed Internet of Things (IoT) architecture, sensor nodes sense data, process and exchange information, and perform collaboratively with other sensor nodes and end-users. In order to maintain the trustworthy connectivity and accessibility of the distributed IoT, it is important to establish secure links for end-to-end communication with proper authentication. The authors propose an implicit certificate-based authentication mechanism for WSNs in distributed IoT applications. The developed two-phase authentication protocol allows the sensor nodes and the end-users to authenticate each other and initiate secure connections. The proposed protocol supports the resource scarcity of the sensor nodes, and the heterogeneity and scalability of the network. The performance and security analysis justify that the proposed scheme is viable to deploy in resource-constrained WSNs.
Keywords: Internet of Things; cryptographic protocols; wireless sensor networks; centralized WSN architecture; centralized wireless sensor network architecture; certificate-based authentication mechanism; distributed Internet of Things architecture; distributed IoT architecture; end-to-end communication; end-users; heterogeneity; resource constrained WSN; resource scarcity; security analysis; sensor nodes; trustworthy connectivity; two-phase authentication protocol; Authentication; Ciphers; Protocols; Public key; Servers; Wireless sensor networks; Distributed Internet of Things; Wireless Sensor Networks; authentication; implicit certificate; security (ID#: 15-5203)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6952860&isnumber=6951847
Addo, I.D.; Ahamed, S.I.; Yau, S.S.; Buduru, A., "A Reference Architecture for Improving Security and Privacy in Internet of Things Applications," Mobile Services (MS), 2014 IEEE International Conference on, pp. 108, 115, June 27 2014-July 2 2014. doi: 10.1109/MobServ.2014.24
Abstract: As the promise of the Internet of Things (IoT) materializes in our everyday lives, we are often challenged with a number of concerns regarding the efficacy of the current data privacy solutions that support the pervasive components at play in the IoT. The privacy and security concerns surrounding the IoT often manifest themselves as a threat to end-user adoption and negatively impact trust among end-users in these solutions. In this paper, we present a reference software architecture for building cloud-enabled IoT applications in support of collaborative pervasive systems aimed at achieving trustworthiness among end-users in IoT scenarios. We present a case study that leverages this reference architecture to protect sensitive user data in an IoT application implementation, and evaluate the responses of an end-user study conducted through a survey.
Keywords: Internet; Internet of Things; cloud computing; computer network security; data privacy; ubiquitous computing; Internet of Things application; cloud-enabled IoT application; collaborative pervasive system; data privacy solution; reference software architecture; security; Cloud computing; Computer architecture; Data privacy; Mobile communication; Motion pictures; Privacy; Security; Cloud- Enabled Service Privacy and Security; Collective Intelligence; Internet of Things; Software Reference Architecture (ID#: 15-5204)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6924301&isnumber=6924277
Pokrić, B.; Krc̆o, S.; Pokrić, M., "Augmented Reality Based Smart City Services Using Secure IoT Infrastructure," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp. 803, 808, 13-16 May 2014. doi: 10.1109/WAINA.2014.127
Abstract: This paper presents an application of Augmented Reality (AR) within a smart city service to be deployed in the domain of public transport in the city of Novi Sad in Serbia. The described solution focuses on providing citizens a simple and efficient method for accessing important information such as bus arrival times, bus routes and tourist landmarks using smart phones and AR technology. The AR information is triggered by image and geo-location markers, and the data is provided via a secure IoT infrastructure based on bus-mounted IoT devices which use the secure CoAP software protocol to transmit the data to the associated cloud servers. A description of the complete end-to-end solution is presented, covering the overall system set-up, user experience aspects and the security of the overall system, with a focus on the lightweight encryption used within the low-powered IoT devices.
Keywords: Internet of Things; augmented reality; cloud computing; cryptography; low-power electronics; public administration; public information systems; smart phones; transportation; AR technology; Novi Sad; Serbia; associated cloud servers; augmented reality based smart city services; bus arrival times; bus routes; bus-mounted IoT devices; geo-location markers; lightweight encryption; low-powered IoT devices; public transport; secure CoAP software protocol; secure IoT infrastructure; smart phones; tourist landmarks; Augmented reality; Cities and towns; Companies; Cryptography; Smart phones; Transportation; AR; Augmented Reality; Smart City; Smart Transport; secure CoAP; secure IoT (ID#: 15-5205)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844738&isnumber=6844560
Ray, B.; Chowdhury, M.; Abawaiy, J., "PUF-based Secure Checker Protocol for Networked RFID Systems," Open Systems (ICOS), 2014 IEEE Conference on, pp.78,83, 26-28 Oct. 2014. doi: 10.1109/ICOS.2014.7042633
Abstract: Radio Frequency Identification (RFID) is an emerging technology for automating object identification. The Networked RFID System (NRS) is a component of a distributed object identification network which facilitates automated supply chain management. It also makes the Internet of Things (IoT) concept a reality. To increase the business feasibility of NRS implementation, the system should be able to ensure the visibility and traceability of the object throughout the chain using a checker protocol. In doing so, the protocol checks the genuineness of the object and of the object's previous travel path on-site, while ensuring the security requirements of the system. To this end, we propose a secure checker protocol for NRS which uses a PUF (Physically Unclonable Function) and simple cryptographic primitives. The protocol provides security (protecting the privacy of the partners, preventing injection of fake objects, and ensuring non-repudiation and unclonability), visibility and traceability for NRS. It is also suitable for passive tags.
Keywords: Internet of Things; cryptographic protocols; radiofrequency identification; supply chain management; telecommunication security; Internet of Things; IoT; NRS; PUF; checker protocol; cryptographic primitives; distributed object identification network; networked RFID system; passive tags; physically unclonable function; radio frequency identification system; supply chain management; Equations; Privacy; Protocols; Radiofrequency identification; Security; Supply chains; NRS; PUF; RFID; checker; injection of fake objects; non-repudiation; privacy; protocol; unclonable (ID#: 15-5206)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7042633&isnumber=7042395
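A toy Python sketch of the challenge-response checking idea: the checker enrolls challenge-response pairs (CRPs) for a tag and later verifies a fresh response. The tag's "PUF" is simulated with a secret keyed function, since a real PUF's responses come from device physics and cannot be expressed in software; the message flow is illustrative, not the paper's full protocol.

import hmac, hashlib, os

class SimulatedPUF:
    def __init__(self):
        self._secret = os.urandom(16)         # stands in for silicon variation
    def response(self, challenge):
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

class Checker:
    def __init__(self):
        self.crp_db = {}                      # enrolled challenge-response pairs
    def enroll(self, tag_id, puf, n=4):
        self.crp_db[tag_id] = [(c, puf.response(c))
                               for c in (os.urandom(8) for _ in range(n))]
    def verify(self, tag_id, puf):
        challenge, expected = self.crp_db[tag_id].pop()   # use each CRP only once
        return hmac.compare_digest(puf.response(challenge), expected)

tag = SimulatedPUF()
checker = Checker()
checker.enroll("pallet-7", tag)
print(checker.verify("pallet-7", tag))        # True only for the genuine tag

Because a cloned tag lacks the physical structure behind the responses, it cannot answer unused challenges, which is what gives the protocol its unclonability property.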
Flood, P.; Schukat, M., "Peer To Peer Authentication for Small Embedded Systems: A Zero-Knowledge-Based Approach to Security For The Internet Of Things," Digital Technologies (DT), 2014 10th International Conference on, pp. 68, 72, 9-11 July 2014. doi: 10.1109/DT.2014.6868693
Abstract: With an estimated 50 billion Internet-enabled devices deployed by 2020, the arrival of the Internet of Things (IoT) or Internet of Everything (IoE) raises many questions regarding the suitability and adaptability of current computer security standards to provide privacy, data integrity and end-entity authentication between communicating peers. In this paper we present a new protocol which combines zero-knowledge proofs and key exchange mechanisms to provide secure and authenticated communication in static machine-to-machine (M2M) networks. This approach addresses all of the aforementioned issues while also being suitable for devices with limited computational resources, and can be deployed in wireless sensor networks. While the protocol requires a priori knowledge of the network setup and structure, it guarantees perfect forward secrecy.
Keywords: Internet of Things; cryptographic protocols; data integrity; data privacy; embedded systems; peer-to-peer computing; wireless sensor networks; Internet of Everything; Internet of Things security; Internet-enabled devices; IoE; IoT; M2M network; computer security standards; data integrity; embedded systems; end entity authentication; key exchange mechanisms; peer to peer authentication; perfect forward secrecy; privacy; static machine-to-machine network; wireless sensor networks; zero-knowledge proofs; zero-knowledge-based approach; Authentication; Elliptic curve cryptography; Embedded systems; Protocols; Diffie-Hellman key exchange; GMW protocol; Zero knowledge proof (ID#: 15-5207)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868693&isnumber=6868673
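The zero-knowledge half of the approach can be sketched as a Schnorr-style identification round in Python; the paper combines such proofs with Diffie-Hellman-style key exchange, and the toy parameters below are far too small and unvetted for real use.

import random

p = 2**127 - 1                         # toy Mersenne prime modulus, NOT production-sized
g = 3                                  # assumed generator for illustration
x = random.randrange(2, p - 1)         # prover's long-term secret
y = pow(g, x, p)                       # public key known to the verifier

def prove_and_verify():
    r = random.randrange(2, p - 1)
    t = pow(g, r, p)                   # prover's commitment
    c = random.randrange(2, 2**64)     # verifier's random challenge
    s = (r + c * x) % (p - 1)          # response: reveals nothing about x on its own
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # check g^s = t * y^c (mod p)

print(all(prove_and_verify() for _ in range(5)))    # accepts the honest prover

Each round convinces the verifier that the prover knows x without ever transmitting it, which is the property that lets constrained devices authenticate without exposing long-term keys.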
Isa, M.A.M.; Hashim, H.; Ab Manan, J.-L.; Adnan, S.F.S.; Mahmod, R., "RF Simulator for Cryptographic Protocol," Control System, Computing and Engineering (ICCSCE), 2014 IEEE International Conference on, pp. 518, 523, 28-30 Nov. 2014. doi: 10.1109/ICCSCE.2014.7072773
Abstract: Advances in embedded RF devices and sensor nodes have witnessed a major expansion of end-user services such as the Internet of Things (IoT) and cloud computing. These prospective smart embedded and sensor devices normally interconnect to the Internet using wireless technology (e.g. radio frequency, Wi-Fi) and run on top of the CoAP and TFTP protocols. In this paper, we present RF Simulator v1.1, which simulates lightweight security protocols for RF device communications using the Stop-and-Wait Automatic Repeat Request (SW-ARQ) protocol. The RF Simulator can be used for quick trials and debugging of any new cryptographic protocol in the simulator before actual implementation or experimentation with the protocol in physical embedded devices. We believe that the RF Simulator may provide an alternative way for a computer scientist, cryptographer or engineer to conduct rapid product research and development of any cryptographic protocol for smart devices. The major advantage of the RF Simulator is that its source code can be used directly in the physical implementation of embedded RF device communications. We also present simulation results for DHKE and AES encryption schemes using the SW-ARQ protocol as a use case of the RF Simulator. The simulation was executed on an ARM Raspberry Pi board and an HP DC7800 PC as the hardware platforms for the simulator setup.
Keywords: IP networks; automatic repeat request; computer network security; cryptographic protocols; embedded systems; microcontrollers; AES encryption scheme; ARM Raspberry Pi board; CoAP protocol; DHKE encryption scheme; HP DC7800 PC; Internet of Things; IoT; RF Simulator v1.1;SW-ARQ protocol; TFTP protocol; Wi-Fi; cloud computing; cryptographic protocol; embedded RF device communication; embedded RF devices; end user services; hardware platforms; lightweight security protocols; physical embedded devices; physical implementation; radio frequency; sensor devices; sensor nodes; smart embedded devices; source codes; stop-and-wait automatic repeat request protocol; wireless technology; Computational modeling; Cryptographic protocols; Cryptography; Radio frequency; Servers; Simulation; AES; AP; Access Point; Asymmetric; BS; Base Station;Cryptography;DHKE;Diffie-Hellman;IOT;Lightweight;Privacy; RF; Radio Frequency; Raspberry Pi; Security; Simulation; Simulator; Stop and Wait ARQ; Symmetric; TFTP; Trivial File Transfer Protocol; Trust; UBOOT; UDP; WIFI; Wi-Fi AP (ID#: 15-5208)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7072773&isnumber=7072673
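Since the simulator above drives its cryptographic exchanges over Stop-and-Wait ARQ, a toy model of that link layer may help readers unfamiliar with it. The sketch below is a lossy-channel model invented here, not the RF Simulator's source code; it shows only the alternating-bit, retransmit-until-acknowledged logic.

    import random

    def send_stop_and_wait(frames, loss_prob=0.3, max_retries=10):
        """Toy Stop-and-Wait ARQ: send one frame, wait for its ACK, retransmit on loss."""
        seq = 0  # alternating-bit sequence number
        for payload in frames:
            for attempt in range(1, max_retries + 1):
                # Model an unreliable channel: the frame or its ACK may be lost.
                if random.random() >= loss_prob and random.random() >= loss_prob:
                    print(f"seq={seq} {payload!r} delivered on attempt {attempt}")
                    break
                print(f"seq={seq} frame or ACK lost; retransmitting")
            else:
                raise TimeoutError(f"seq={seq} undeliverable after {max_retries} tries")
            seq ^= 1  # flip the sequence bit only after a successful ACK

    send_stop_and_wait([b"DHKE public value", b"AES ciphertext block"])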
Dores, C.; Reis, L.P.; Vasco Lopes, N., "Internet Of Things And Cloud Computing," Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on, pp. 1, 4, 18-21 June 2014. doi: 10.1109/CISTI.2014.6877071
Abstract: With advances in communication technology, the future internet presents numerous opportunities to develop new systems designed to make day-to-day life easier and to enhance and prolong the life of people with disabilities. This motivation propels the development of new services that integrate the mobility of cloud systems and the diversity of the IoT (Internet of Things). It will enable us to create new and more independent care systems for people with disabilities, granting them a certain degree of independence. This can have a psychological and social impact due to the better quality of life it enables. Another motivation is the versatility and mobility of the services it can provide, making those services widely available. This paper explores and explains the different kinds of technologies that can be integrated to enable the creation of future internet platforms. An IoT cloud platform is also analyzed and tested, ending with some conclusions and lessons learned from this work.
Keywords: Internet of Things; assisted living; body sensor networks; cloud computing; human factors; mobile computing; BSN; Internet of Things; IoT cloud platform; WSN; body sensor networks; cloud computing; cloud systems; communication technology; disabled people; future Internet platforms; independent care systems; motivation; psychological impact; social impact; wireless sensor networks; Cloud computing; Delays; IP networks; Multimedia communication; Security; Wireless sensor networks; BSN; Cloud computing; Disabled People; IoT; NGN's (ID#: 15-5209)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6877071&isnumber=6876860
Priller, P.; Aldrian, A.; Ebner, T., "Case Study: From Legacy To Connectivity Migrating Industrial Devices Into The World Of Smart Services," Emerging Technology and Factory Automation (ETFA), 2014 IEEE, pp. 1, 8, 16-19 Sept. 2014. doi: 10.1109/ETFA.2014.7005136
Abstract: Europe has launched multiple initiatives and research projects to remain competitive in a globalized world and keep industry and manufacturing on-shore. Funded by the EU and member countries, project ARROWHEAD[1] focuses on research and innovation for collaborative automation using interoperable services for smart production, to improve quality, efficiency, flexibility and cost competitiveness. This includes an important new aspect called “Smart Services”, which aims to apply SOA (service oriented architecture) to the maintenance and service of production systems and their parts, which still carry huge potential for further gains in cost and energy savings. However, there will be no “big bang”. How can we turn the present-day variety of diverse, specialized, and legacy-loaded embedded systems into connected, SOA-based, cooperating participants of the Internet of Things (IoT)? This case study portrays the solution followed in ARROWHEAD WP1.1 for devices used in end-of-line (EoL) test systems in automotive powertrain production.
Keywords: Internet of Things; embedded systems; production engineering computing; production equipment; service-oriented architecture; EoL test systems; Europa; Internet of Things; IoT; SOA; automotive powertrain production; collaborative automation; connectivity migrating industrial devices; end-of-line test systems; interoperable services; legacy loaded embedded systems; production systems maintenance; production systems service; project ARROWHEAD; service oriented architecture; smart production; smart services; Automation; Maintenance engineering; Production; Protocols; Security; Service-oriented architecture; Testing (ID#: 15-5210)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7005136&isnumber=7005023
Schukat, M.; Flood, P., "Zero-knowledge Proofs in M2M Communication," Irish Signals & Systems Conference 2014 and 2014 China-Ireland International Conference on Information and Communications Technologies (ISSC 2014/CIICT 2014), 25th IET, pp. 269, 273, 26-27 June 2014. doi: 10.1049/cp.2014.0697
Abstract: The advent of the IoT, with an estimated 50 billion internet-enabled devices by the year 2020, raises questions about the suitability and scalability of existing mechanisms to provide privacy, data integrity and end-entity authentication between communicating peers. In this paper we present a new protocol that combines zero-knowledge proofs and key exchange mechanisms to provide secure and authenticated communication in static M2M networks, thereby addressing all of the above problems. The protocol is suitable for devices with limited computational resources and can be deployed in wireless sensor networks. While the protocol requires a priori knowledge of the network setup and structure, it guarantees perfect forward secrecy.
Keywords: Internet of Things; computer network security; cryptographic protocols; wireless sensor networks; Internet enabled devices; Internet of Things; IoT; M2M communication; data integrity; data privacy; end-entity authentication; key exchange mechanisms; machine-to-machine communication; perfect forward secrecy; static M2M networks; wireless sensor networks; zero-knowledge proofs; Diffie Hellman key exchange; GMW protocol; Zero knowledge proof (ID#: 15-5211)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912768&isnumber=6912720
Dsouza, C.; Ahn, G.-J.; Taguinod, M., "Policy-Driven Security Management For Fog Computing: Preliminary Framework And A Case Study," Information Reuse and Integration (IRI), 2014 IEEE 15th International Conference on, pp. 16, 23, 13-15 Aug. 2014. doi: 10.1109/IRI.2014.7051866
Abstract: With the increasing user demand for elastic provisioning of resources coupled with ubiquitous and on-demand access to data, cloud computing has been recognized as an emerging technology to meet such dynamic user demands. In addition, with the introduction and rising use of mobile devices, the Internet of Things (IoT) has recently received considerable attention since the IoT has brought physical devices and connected them to the Internet, enabling each device to share data with surrounding devices and virtualized technologies in real-time. Consequently, the exploding data usage requires a new, innovative computing platform that can provide robust real-time data analytics and resource provisioning to clients. As a result, fog computing has recently been introduced to provide computation, storage and networking services between the end-users and traditional cloud computing data centers. This paper proposes a policy-based management of resources in fog computing, expanding the current fog computing platform to support secure collaboration and interoperability between different user-requested resources in fog computing.
Keywords: Internet of Things; cloud computing; computer centres; open systems; resource allocation; security of data; Internet of things; IoT; cloud computing data centers; dynamic user demands; elastic resources provisioning; exploding data usage; fog computing; interoperability; networking services; on-demand data access; policy-driven security management; real-time data analytics; secure collaboration; storage services; ubiquitous data access; user-requested resources; virtualized technologies; Cloud computing; Collaboration; Computer architecture; Educational institutions; Global Positioning System; Security; Vehicles (ID#: 15-5212)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7051866&isnumber=7051718
Van den Abeele, F.; Hoebeke, J.; Moerman, I.; Demeester, P., "Fine-Grained Management Of COAP Interactions With Constrained IoT Devices," Network Operations and Management Symposium (NOMS), 2014 IEEE, pp. 1, 5, 5-9 May 2014. doi: 10.1109/NOMS.2014.6838368
Abstract: As open standards for the Internet of Things gain traction, the current Intranet of Things will evolve to a truly open Internet of Things, where constrained devices are first class citizens of the public Internet. However, the large amount of control over constrained networks offered by today's vertically integrated platforms, becomes even more important in an open IoT considering its promise of direct end-to-end interactions with constrained devices. In this paper a set of challenges is identified for controlling interactions with constrained networks that arise due to their constrained nature and their integration with the public Internet. Furthermore, a number of solutions are presented for overcoming these challenges by means of an intercepting intermediary at the edge of the constrained network.
Keywords: Internet; Internet of Things; open systems; protocols; telecommunication network management; CoAP interactions; Internet of Things; Intranet of Things; constrained devices; constrained networks; direct end-to-end interactions; fine-grained management; intercepting intermediary; open IoT; open standards; public Internet; vertically integrated platforms; Internet of Things; Logic gates; Protocols; Routing; Security; Standards (ID#: 15-5213)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838368&isnumber=6838210
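For readers unfamiliar with CoAP, the request/response pattern that the paper's intercepting intermediary would sit in front of looks as follows. This minimal client sketch uses the open-source aiocoap library; the sensor URI is a placeholder, not an endpoint from the paper.

    import asyncio
    from aiocoap import Context, Message, GET

    async def main():
        # Create a CoAP client context and issue a GET to a constrained device.
        protocol = await Context.create_client_context()
        request = Message(code=GET, uri="coap://sensor.example.org/temperature")
        response = await protocol.request(request).response
        print(f"{response.code}: {response.payload.decode()}")

    asyncio.run(main())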
Chaoliang Li; Qin Li; Guojun Wang, "Survey of Integrity Detection Methods in Internet of Things," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 906, 913, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.120
Abstract: Internet of Things (IoT) has received more and more attention in academia and industry since it was first proposed. With the decrease of tag prices and the development of electronic equipment, IoT is widely used to manage the commodities in modern logistic warehouses. As the commodities move in and out of the warehouse frequently every day, there is a need to devise an efficient solution to detect the integrity of a batch of commodities in such an environment. Many detection methods are analyzed and compared in the paper. In addition, some promising and potential research directions on integrity detection are listed at the end of the paper.
Keywords: Internet of Things; data integrity; radiofrequency identification; warehouse automation; Internet of Things; IoT; electronic equipments; integrity detection methods; modern logistic warehouse; tag price; Detection algorithms; Educational institutions; Internet; Logistics; Privacy; Protocols; Radiofrequency identification; Internet of Things (IoT);integrity detection; survey (ID#: 15-5214)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011345&isnumber=7011202
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Fog Computing Security |
Fog computing is a concept that extends the Cloud concept to the end user. As with most new technologies, a survey of the scope and types of security problems is necessary. Much of the research presented relates to the Internet of Things.
Aazam, Mohammad; Huh, Eui-Nam, "Fog Computing Micro Datacenter Based Dynamic Resource Estimation and Pricing Model for IoT," Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, pp. 687, 694, 24-27 March 2015. doi: 10.1109/AINA.2015.254
Abstract: Pervasive and ubiquitous computing services have recently been a focus of not only the research community, but developers as well. Prevailing wireless sensor networks (WSNs), the Internet of Things (IoT), and healthcare-related services have made it difficult to handle all the data in an efficient and effective way and to create more useful services. Different devices generate different types of data with different frequencies. Therefore, the amalgamation of cloud computing with IoT, termed the Cloud of Things (CoT), has recently been under discussion in the research arena. CoT provides ease of management for the growing media content and other data. Besides this, features like ubiquitous access, service creation, service discovery, and resource provisioning, which come with CoT, play a significant role. Emergency, healthcare, and latency-sensitive services require real-time response. Also, it is necessary to decide what type of data is to be uploaded to the cloud, without burdening the core network and the cloud. For this purpose, Fog computing plays an important role. Fog resides between the underlying IoT and the cloud. Its purpose is to manage resources and perform data filtration, preprocessing, and security measures. To this end, Fog requires an effective and efficient resource management framework for IoT, which we provide in this paper. Our model covers the issues of resource prediction, customer-type-based resource estimation and reservation, advance reservation, and pricing for new and existing IoT customers, on the basis of their characteristics. The implementation was done in Java, while the model was evaluated using the CloudSim toolkit. The results and discussion show the validity and performance of our system.
Keywords: Cloud computing; Logic gates; Mobile handsets; Performance evaluation; Pricing; Resource management; Wireless sensor networks; Cloud of Things; Edge computing; Fog computing; IoT; Micro Data Center; resource management (ID#: 15-5318)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098039&isnumber=7097928
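The flavor of customer-type-based estimation described above can be conveyed with a deliberately simplified sketch. The weighting below is invented for illustration and is not the model Aazam and Huh evaluate in CloudSim; every name and number is hypothetical.

    def estimate_reservation(history_ghz, relinquish_prob, customer_type):
        """Hypothetical resource estimate: average past demand, discounted by
        how often this customer gives up resources, boosted for loyal customers."""
        loyalty = {"new": 0.6, "returning": 0.9}[customer_type]  # invented weights
        mean_demand = sum(history_ghz) / len(history_ghz)
        return mean_demand * loyalty * (1.0 - relinquish_prob)

    # A returning customer who historically uses ~2 GHz and rarely relinquishes:
    print(estimate_reservation([2.0, 2.4, 1.8], relinquish_prob=0.2,
                               customer_type="returning"))  # ~1.49 GHz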
Stojmenovic, I.; Sheng Wen, "The Fog Computing Paradigm: Scenarios And Security Issues," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 1, 8, 7-10 Sept. 2014. doi: 10.15439/2014F503
Abstract: Fog computing is a paradigm that extends Cloud computing and services to the edge of the network. Similar to the Cloud, Fog provides data, compute, storage, and application services to end-users. In this article, we elaborate on the motivation and advantages of Fog computing and analyse its applications in a series of real scenarios, such as Smart Grid, smart traffic lights in vehicular networks and software defined networks. We discuss the state of the art of Fog computing and similar work under the same umbrella. Security and privacy issues are further disclosed according to the current Fog computing paradigm. As an example, we study a typical attack, the man-in-the-middle attack, in the discussion of security in Fog computing. We investigate the stealthy features of this attack by examining its CPU and memory consumption on a Fog device.
Keywords: cloud computing; data privacy; trusted computing; CPU consumption; Fog device; cloud computing; cloud services; fog computing paradigm; man-in-the-middle attack; memory consumption; privacy issue; security issue; smart grid; smart traffic lights; software defined networks; vehicular networks; Cloud computing; Companies; Intelligent sensors; Logic gates; Security; Wireless sensor networks; Cloud Computing; Fog Computing; Internet of Things; Software Defined Networks (ID#: 15-5319)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6932989&isnumber=6932982
Dsouza, C.; Ahn, G.-J.; Taguinod, M., "Policy-Driven Security Management for Fog Computing: Preliminary Framework and A Case Study," Information Reuse and Integration (IRI), 2014 IEEE 15th International Conference on, pp. 16, 23, 13-15 Aug. 2014. doi: 10.1109/IRI.2014.7051866
Abstract: With the increasing user demand for elastic provisioning of resources coupled with ubiquitous and on-demand access to data, cloud computing has been recognized as an emerging technology to meet such dynamic user demands. In addition, with the introduction and rising use of mobile devices, the Internet of Things (IoT) has recently received considerable attention since the IoT has brought physical devices and connected them to the Internet, enabling each device to share data with surrounding devices and virtualized technologies in real-time. Consequently, the exploding data usage requires a new, innovative computing platform that can provide robust real-time data analytics and resource provisioning to clients. As a result, fog computing has recently been introduced to provide computation, storage and networking services between the end-users and traditional cloud computing data centers. This paper proposes a policy-based management of resources in fog computing, expanding the current fog computing platform to support secure collaboration and interoperability between different user-requested resources in fog computing.
Keywords: Internet of Things; cloud computing; computer centres; open systems; resource allocation; security of data; Internet of things; IoT; cloud computing data centers; dynamic user demands; elastic resources provisioning; exploding data usage; fog computing; interoperability; networking services; on-demand data access; policy-driven security management; real-time data analytics; secure collaboration; storage services; ubiquitous data access; user-requested resources; virtualized technologies; Cloud computing; Collaboration; Computer architecture; Educational institutions; Global Positioning System; Security; Vehicles (ID#: 15-5320)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7051866&isnumber=7051718
Stojmenovic, I., "Fog Computing: A Cloud To The Ground Support For Smart Things And Machine-To-Machine Networks," Telecommunication Networks and Applications Conference (ATNAC), 2014 Australasian , vol., no., pp.117,122, 26-28 Nov. 2014. doi: 10.1109/ATNAC.2014.7020884
Abstract: Cloud services to smart things face latency and intermittent connectivity issues. Fog devices are positioned between the cloud and smart devices. Their high-speed Internet connection to the cloud, and physical proximity to users, enable real-time applications, location-based services, and mobility support. Cisco promoted the fog computing concept in the areas of smart grid, connected vehicles and wireless sensor and actuator networks. This survey article expands this concept to decentralized smart building control, recognizes cloudlets as a special case of fog computing, and relates it to software defined network (SDN) scenarios. Our literature review identifies a handful of articles. Cooperative data scheduling and adaptive traffic light problems in SDN-based vehicular networks, and demand response management in macro-station and micro-grid based smart grids, are discussed. Security, privacy and trust issues, control information overhead and network control policies do not appear to have been studied so far within the fog computing concept.
Keywords: cloud computing; computer network security; data privacy; software defined networking; trusted computing; Cisco; SDN; adaptive traffic light problems; cloud devices; cloud services; cloudlets; connected vehicles; control information overhead; cooperative data scheduling; decentralized smart building control; demand response management; fog computing; high speed Internet connection; location based services; machine-to-machine networks; macro station; microgrid based smart grids; mobility support; network control policy; privacy issue; security issue; smart devices; smart grid; smart things; software defined networks; trust issue; wireless sensor and actuator networks; Actuators; Cloud computing; Mobile communication; Optimal scheduling; Smart grids; Vehicles; Wireless communication; Fog computing; Machine-to-machine networks (ID#: 15-5321)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7020884&isnumber=7020854
Yannuzzi, M.; Milito, R.; Serral-Gracia, R.; Montero, D.; Nemirovsky, M., "Key ingredients in an IoT recipe: Fog Computing, Cloud computing, and more Fog Computing," Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), 2014 IEEE 19th International Workshop on, pp. 325, 329, 1-3 Dec. 2014. doi: 10.1109/CAMAD.2014.7033259
Abstract: This paper examines some of the most promising and challenging scenarios in IoT, and shows why current compute and storage models confined to data centers will not be able to meet the requirements of many of the applications foreseen for those scenarios. Our analysis is particularly centered on three interrelated requirements: 1) mobility; 2) reliable control and actuation; and 3) scalability, especially, in IoT scenarios that span large geographical areas and require real-time decisions based on data analytics. Based on our analysis, we expose the reasons why Fog Computing is the natural platform for IoT, and discuss the unavoidable interplay of the Fog and the Cloud in the coming years. In the process, we review some of the technologies that will require considerable advances in order to support the applications that the IoT market will demand.
Keywords: Internet of Things; cloud computing; computer centres; data analysis; mobile computing; storage management ;IoT recipe; actuation reliability; cloud computing; control reliability; data analytics; data centers; fog computing; mobility requirement; storage models; Cloud computing; Handover; Mobile nodes; Reliability; Cloud Computing; Fog Computing; IoT; actuation; data analytics; mobility; real-time control; security (ID#: 15-5322)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033259&isnumber=7033190
Popov, S.; Kurochkin, M.; Kurochkin, L.; Glazunov, V., "Network Synchronization Of Vehicle Multiprotocol Unit System Clock," Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2014 6th International Congress on, pp. 105, 110, 6-8 Oct. 2014. doi: 10.1109/ICUMT.2014.7002087
Abstract: Recent achievements of automotive telematics in the area of communication channel integration, providing a persistent bidirectional link between vehicle and cloud infrastructure, have intensified research in the field of mobile multiprotocol networks of intelligent vehicles oriented toward cloud and fog environmental services. Synchronization of the multiprotocol unit system clock is crucial for intelligent vehicle networks in terms of security system functioning, navigation, and driver and passenger services. Although clock accuracy requirements are comparable with those for stationary systems, the limited lifetime of a route to a server in the cloud or fog and substantial restrictions on wireless network traffic complicate the achievement of this objective. A method of mobile multiprotocol unit synchronization in dynamic wireless networks of different technologies with virtual cloud servers is presented, drawing on the Network Time Protocol (NTP). This method provides the required multiprotocol unit system clock accuracy while minimizing network traffic. A synchronization path selection algorithm is described, based on a probabilistic approach and synchronization quality retrospectives in the chosen technology network. A way of reducing local network traffic while maintaining the required accuracy of the multiprotocol unit system clock is shown. The method can be used for multiprotocol unit synchronization in intelligent transportation networks.
Keywords: cloud computing; mobile radio; probability; protocols; synchronisation; telecommunication traffic; wireless channels; NTP; automotive telematics; bidirectional link; cloud infrastructure; communication channel integration; driver services; environmental services; intelligent transportation networks; intelligent vehicle networks; local network traffic reduction; mobile multiprotocol networks; mobile multiprotocol unit synchronization; navigation; network synchronization; network time protocol; network traffic minimization; passenger services; probabilistic approach; security system functioning; synchronization path selection algorithm; synchronization quality; vehicle multiprotocol unit system clock; virtual cloud servers; wireless network traffic; Accuracy; Mobile communication; Routing protocols; Servers; Synchronization; Vehicles (ID#: 15-5323)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7002087&isnumber=7002065
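The foundation of any NTP-derived scheme like the one above is the standard offset/delay computation over four timestamps. The worked example below applies RFC 5905's formulas with made-up numbers.

    # Standard NTP offset/delay computation (RFC 5905). Timestamps:
    # t0 = client transmit, t1 = server receive, t2 = server transmit,
    # t3 = client receive (t0, t3 on the client clock; t1, t2 on the server clock).
    def ntp_offset_delay(t0, t1, t2, t3):
        offset = ((t1 - t0) + (t2 - t3)) / 2.0  # estimated clock offset
        delay = (t3 - t0) - (t2 - t1)           # round-trip network delay
        return offset, delay

    # Client clock 0.5 s behind the server, over an 80 ms round trip:
    print(ntp_offset_delay(10.000, 10.540, 10.541, 10.081))  # (0.5, 0.080)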
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Measurement of Security Weaknesses, 2014 |
Attackers need only find one or a few exploitable vulnerabilities to mount a successful attack, while defenders must shore up as many weaknesses as practicable. The research presented here covers a range of weaknesses and approaches for identifying and securing against attacks. Many articles focus on key systems, both public and private. The common thread is the measurement of those weaknesses. The work was presented in 2014.
Hemanidhi, A.; Chimmanee, S.; Sanguansat, P., "Network Risk Evaluation From Security Metric Of Vulnerability Detection Tools," TENCON 2014 - 2014 IEEE Region 10 Conference, pp. 1, 6, 22-25 Oct. 2014. doi: 10.1109/TENCON.2014.7022358
Abstract: Network security is always a major concern in any organization. To ensure that the organization's network is well protected from attackers, vulnerability assessment and penetration testing are implemented regularly. However, auditing and analyzing these testing results is a highly time-consuming procedure that depends on the administrator's expertise. Thus, security professionals prefer proactive, automatic vulnerability detection tools that identify vulnerabilities before they are exploited by an adversary. Although these vulnerability detection tools have shown that they are very useful for security professionals, allowing much faster and more accurate audit and analysis, they have some important weaknesses as well. They only identify surface vulnerabilities and are unable to address the overall risk level of the scanned network. Also, they often use different standards for network risk level classification, habitually tied to particular organizations or vendors. Thus, these vulnerability detection tools are likely to, more or less, produce biased risk evaluations. This article presents a generic idea of a “Network Risk Metric” as an unbiased risk evaluation drawing on several vulnerability detection tools. In this paper, NetClarity (hardware-based), Nessus (software-based), and Retina (software-based) are implemented on two networks from an IT department of the Royal Thai Army (RTA). The proposed metric is applied to evaluate overall network risk from these three vulnerability detection tools. The result is a more accurate risk evaluation for each network.
Keywords: business data processing; computer crime; computer network performance evaluation; computer network security; IT department; Nessus; NetClarity; RTA; Retina; Royal Thai Army; attackers; hardware-based; network risk evaluation; network risk level classification; network risk metric; network security; organization network; proactive-automatic vulnerability detection tools; security metric; security professionals; software-based; unbiased risk evaluation; vulnerabilities identification; vulnerability assessment; vulnerability penetration testing; Equations; Measurement; Retina; Security; Servers; Software; Standards organizations; Network Security; Risk Evaluation; Security Metrics; Vulnerability Detection (ID#: 15-5381)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7022358&isnumber=7021863
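The core difficulty the authors address, that each scanner scores risk on its own vendor-specific scale, can be illustrated with a simple normalization sketch. The equal-weight average below is an invented stand-in, not the paper's Network Risk Metric, and the readings are hypothetical.

    def normalized_risk(tool_scores):
        """tool_scores maps tool name -> (raw score, that tool's maximum scale).
        Each score is rescaled to 0..10 before averaging with equal weights."""
        per_tool = [10.0 * raw / scale for raw, scale in tool_scores.values()]
        return sum(per_tool) / len(per_tool)

    # Hypothetical readings from the three tools used in the paper:
    scans = {"NetClarity": (7.2, 10), "Nessus": (58, 100), "Retina": (3.1, 5)}
    print(f"combined network risk: {normalized_risk(scans):.1f} / 10")  # 6.4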
Kotenko, I.; Doynikova, E., "Security Evaluation for Cyber Situational Awareness," High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, pp. 1197, 1204, 20-22 Aug. 2014. doi: 10.1109/HPCC.2014.196
Abstract: The paper considers techniques for the measurement and calculation of security metrics taking into account attack graphs and service dependencies. The techniques are based on several assessment levels (topological, attack graph level, attacker level, events level and system level) and important aspects (zero-day attacks, cost-efficiency characteristics). They allow an understanding of the current security situation, including the vulnerable characteristics and weaknesses of the system under protection, dangerous events, current and possible cyber attack parameters, attacker intentions, integral cyber situation metrics and necessary countermeasures.
Keywords: firewalls; attack countermeasures; attack graph level; attack graphs; attacker intentions; attacker level; cost-efficiency characteristics; cyber attack parameters; cyber situational awareness; dangerous events; event level; integral cyber situation metrics; security evaluation; security metric calculation; security metric measurement; service dependencies; system level; system weaknesses; topological assessment level; vulnerable characteristics; zero-day attacks; Business; Conferences; High performance computing; Integrated circuits; Measurement; Probabilistic logic; Security; attack graphs; cyber situational awareness; network security; risk assessment; security metrics; service dependencies (ID#: 15-5382)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056895&isnumber=7056577
Axelrod, C.W., "Reducing Software Assurance Risks For Security-Critical And Safety-Critical Systems," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, pp. 1, 6, 2-2 May 2014. doi: 10.1109/LISAT.2014.6845212
Abstract: According to the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)), the US Department of Defense (DoD) recognizes that there is a “persistent lack of a consistent approach ... for the certification of software assurance tools, testing and methodologies” [1]. As a result, the ASD(R&E) is seeking “to address vulnerabilities and weaknesses to cyber threats of the software that operates ... routine applications and critical kinetic systems ...” The mitigation of these risks has been recognized as a significant issue to be addressed in both the public and private sectors. In this paper we examine deficiencies in various software-assurance approaches and suggest ways in which they can be improved. We take a broad look at current approaches, identify their inherent weaknesses and propose approaches that serve to reduce risks. Some technical, economic and governance issues are: (1) Development of software-assurance technical standards (2) Management of software-assurance standards (3) Evaluation of tools, techniques, and metrics (4) Determination of update frequency for tools and techniques (5) Focus on the most pressing threats to software systems (6) Suggestions as to risk-reducing research areas (7) Establishment of models of the economics of software-assurance solutions, and of testing and certifying software. We show that, in order to improve current software assurance policy and practices, particularly with respect to security, there has to be a major overhaul in how software is developed, especially with respect to the requirements and testing phases of the SDLC (Software Development Lifecycle). We also suggest that the current preventative approaches are inadequate and that greater reliance should be placed upon avoidance and deterrence. We also recommend that those developing and operating security-critical and safety-critical systems exchange best-of-breed software assurance methods to prevent the vulnerability of components leading to compromise of entire systems of systems. The recent catastrophic loss of a Malaysia Airlines airplane is then presented as an example of possible compromises of the physical and logical security of on-board communications and management and control systems.
Keywords: program testing; safety-critical software; software development management; software metrics; ASD(R&E);Assistant Secretary of Defense for Research and Engineering; Malaysia Airlines airplane; SDLC; US Department of Defense; US DoD; component vulnerability prevention; control systems; critical kinetic systems; cyber threats; economic issues; governance issues; logical security; management systems; on-board communications; physical security; private sectors; public sectors; risk mitigation; safety-critical systems; security-critical systems; software assurance risk reduction; software assurance tool certification; software development; software development lifecycle; software methodologies; software metric evaluation; software requirements; software system threats; software technique evaluation; software testing; software tool evaluation; software-assurance standard management; software-assurance technical standard development; technical issues; update frequency determination; Measurement; Organizations; Security; Software systems; Standards; Testing; cyber threats; cyber-physical systems; governance; risk; safety-critical systems; security-critical systems; software assurance; technical standards; vulnerabilities; weaknesses (ID#: 15-5383)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845212&isnumber=6845183
Guri, M.; Kedma, G.; Zadov, B.; Elovici, Y., "Trusted Detection of Sensitive Activities on Mobile Phones Using Power Consumption Measurements," Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint, pp. 145, 151, 24-26 Sept. 2014. doi: 10.1109/JISIC.2014.30
Abstract: The unprecedented popularity of modern mobile phones has made them a lucrative target for skillful and motivated offenders. A typical mobile phone is packed with sensors, which can be turned on silently by a malicious program, providing invaluable information to the attacker. Detecting such hidden activities through software monitors can be blindfolded and bypassed by rootkits and by anti-forensic methods applied by the malicious program. Moreover, detecting power consumption by software running on the mobile phone is susceptible to similar evasive techniques. Consequently, software based detection of hidden malicious activities, particularly the silent activation of sensors, cannot be considered as trusted. In this paper we present a method which detects hidden activities using external measurement of power consumption. The classification model is acquired using machine-learning multi-label classification algorithms. Our method overcomes the inherent weaknesses of software-based monitors, and provides a trusted solution. We describe the measurement setup, and provide detailed evaluation results of the algorithms used. The results obtained so far support the feasibility of our method.
Keywords: learning (artificial intelligence);smart phones; telecommunication security; trusted computing; machine learning multilabel classification algorithms; malicious program; mobile phones; power consumption measurements; sensitive activities; software monitors; trusted detection; Battery charge measurement; Global Positioning System; IEEE 802.11 Standards; Mobile handsets; Monitoring; Power demand; Power measurement; Machine learning; Mobile phone security; Multi-label classification; Trusted measurement (ID#: 15-5384)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975566&isnumber=6975536
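The classification step above, learning which sensors are active from externally measured power draw, is a standard multi-label problem. The sketch below uses scikit-learn on synthetic data; the features, labels and dimensions are invented here and are not the authors' measurement setup.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic stand-in for power-trace features (e.g. mean, variance, peak
    # current per window) and three binary labels (GPS, Wi-Fi, camera active).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = (X @ rng.normal(size=(3, 3)) > 0)     # 200 x 3 multi-label targets

    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X[:150], y[:150])                 # train on 150 traces
    print("subset accuracy:", clf.score(X[150:], y[150:]))  # all-labels-correct rate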
Makhdoom, I.; Afzal, M.; Rashid, I., "A Novel Code Attestation Scheme Against Sybil Attack In Wireless Sensor Networks," Software Engineering Conference (NSEC), 2014 National, pp. 1, 6, 11-12 Nov. 2014. doi: 10.1109/NSEC.2014.6998232
Abstract: Wireless Sensor Networks (WSNs), due to their distributed nature, are vulnerable to various external and insider attacks. Classic cryptographic measures do protect against external attacks to some extent, but they fail to defend against insider attacks involving node compromise. A compromised node can be used to launch various attacks, of which the Sybil Attack is the most prominent. In this paper we carry out a detailed review and analysis of various defenses proposed against the Sybil Attack. We identify their strengths and weaknesses, and also propose a novel One Way Code Attestation Protocol (OWCAP) for wireless sensor networks, an economical and secure code attestation scheme that protects not only against the Sybil Attack but also against the majority of insider attacks.
Keywords: cryptographic protocols; telecommunication security; wireless sensor networks; OWCAP; Sybil attack; WSN; cryptographic measurement; external attacks; insider attacks; novel code attestation scheme; one way code attestation protocol; wireless sensor networks; Cryptography; Heating; Wireless sensor networks; Sybil Attack; code attestation scheme; embedded systems security; insider attacks; trust and security issues in sensor networks; wireless sensor networks (ID#: 15-5385)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6998232&isnumber=6998226
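Code attestation schemes of the kind proposed above rest on a challenge-response pattern, sketched generically below. This is the general idea only, not OWCAP's one-way protocol: a fresh nonce forces the node to hash its current firmware, so a stale, pre-computed answer from a compromised node fails.

    import hashlib
    import secrets

    def attest(firmware_image: bytes, nonce: bytes) -> bytes:
        # The node must hash its *current* code together with the challenge.
        return hashlib.sha256(nonce + firmware_image).digest()

    golden = b"\x90" * 4096                 # verifier's known-good firmware image
    nonce = secrets.token_bytes(16)         # fresh challenge per attestation round
    node_response = attest(golden, nonce)   # an uncompromised node's reply
    print(node_response == attest(golden, nonce))                    # True: passes
    print(attest(golden + b"\xEA", nonce) == attest(golden, nonce))  # False: tampered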
Garvey, P.R.; Patel, S.H., "Analytical Frameworks to Assess the Effectiveness and Economic-Returns of Cybersecurity Investments," Military Communications Conference (MILCOM), 2014 IEEE, pp. 136, 145, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.29
Abstract: Critical considerations in engineering today's systems are securing the collection, access, and dissemination of the information they contain. Advanced computing technologies, ubiquitous environments, and sophisticated networks enable globally distributed information access to an uncountable number of consumers - and adversaries. Assuring the integrity of today's missions, and the highly networked systems they depend on, requires economic decisions in rapidly changing technology and cyber threat environments. Knowing that countermeasures effective against today's threats can be ineffective tomorrow, decision-makers need agile ways to assess the efficacies of investments in cyber security on assuring mission outcomes. Analytical methods in cyber security economics need to be flexible in their information demands. Some investment decisions may necessitate methods that use in-depth knowledge about a mission's information systems and networks, vulnerabilities, and adversary abilities to exploit weaknesses. Other investment decisions may necessitate methods that use only a high-level understanding of these dimensions. The sophistication of methods to conduct economic-benefit tradeoffs of mission assuring investments must calibrate to the range of knowledge environments present within an organization. This paper presents a family of analytical frameworks to assess and measure the effectiveness of cyber security and the economic-benefit tradeoffs of competing cyber security investments. These frameworks demonstrate ways to think through and shape an analysis of the economic-benefit returns on cyber security investments - rather than being viewed as rigid model structures.
Keywords: authorisation; socio-economic effects; cyber security economics; cyber threat environment; cybersecurity investment; economic-benefit returns; economic-benefit tradeoff; economic-returns; globally distributed information; Accuracy; Computer security; Economics; Investment; Measurement; Organizations; Portfolios; cyber mission assurance; cybersecurity; cybersecurity economics; cybersecurity risk; economic-benefit tradeoffs; mission effectiveness (ID#: 15-5386)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956750&isnumber=6956719
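As a single-number illustration of the economic-benefit tradeoffs these frameworks generalize, consider the widely used Return on Security Investment (ROSI) figure. The formula is a textbook one, not taken from the paper, and the figures below are invented.

    # ROSI = (ALE * mitigation_ratio - cost) / cost, where ALE is the annual
    # loss expectancy without the countermeasure. All figures are invented.
    ale = 500_000        # expected annual loss without the investment ($)
    mitigation = 0.70    # fraction of that loss the investment prevents
    cost = 120_000       # annual cost of the investment ($)
    rosi = (ale * mitigation - cost) / cost
    print(f"ROSI = {rosi:.2f}")   # 1.92: each dollar spent avoids ~$2.92 of loss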
Eldib, H.; Chao Wang; Taha, M.; Schaumont, P., "QMS: Evaluating The Side-Channel Resistance Of Masked Software From Source Code," Design Automation Conference (DAC), 2014 51st ACM/EDAC/IEEE, pp. 1, 6, 1-5 June 2014. Doi: (not provided)
Abstract: Many commercial systems in the embedded space have shown weakness against power analysis based side-channel attacks in recent years. Designing countermeasures to defend against such attacks is both labor intensive and error prone. Furthermore, there is a lack of formal methods for quantifying the actual strength of a counter-measure implementation. Security design errors may therefore go undetected until the side-channel leakage is physically measured and evaluated. We show a better solution based on static analysis of C source code. We introduce the new notion of Quantitative Masking Strength (QMS) to estimate the amount of information leakage from software through side channels. The QMS can be automatically computed from the source code of a countermeasure implementation. Our experiments, based on side-channel measurement on real devices, show that the QMS accurately quantifies the side-channel resistance of the software implementation.
Keywords: object-oriented methods; program diagnostics; security of data; C source code; QMS; counter-measure implementation; information leakage; masked software; power analysis based side-channel attacks; quantitative masking strength; security design; side-channel resistance; static analysis; Benchmark testing; Cryptography; Random variables; Resistance; Software; Software measurement; SMT solver; Side channel attack; countermeasure; differential power analysis; quantitative masking strength (ID#: 15-5387)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881536&isnumber=6881325
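QMS scores countermeasures of the kind sketched below: first-order Boolean masking splits a sensitive byte into shares, each of which is, on its own, statistically independent of the secret, so first-order power leakage of either share alone reveals nothing. The fragment is a generic illustration, not code from the paper's benchmarks.

    import secrets

    def mask_byte(secret: int):
        """Split a secret byte into two Boolean shares (first-order masking)."""
        m = secrets.randbelow(256)     # fresh uniform mask per use
        return m, secret ^ m           # each share alone is uniformly distributed

    share0, share1 = mask_byte(0x2B)
    assert share0 ^ share1 == 0x2B     # XOR of the shares recovers the secret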
Sanger, J.; Pernul, G., "Visualizing Transaction Context in Trust and Reputation Systems," Availability, Reliability and Security (ARES), 2014 Ninth International Conference on, pp. 94, 103, 8-12 Sept. 2014. doi: 10.1109/ARES.2014.19
Abstract: Transaction context is an important aspect that should be taken into account for reputation-based trust assessment, because referrals are bound to the situation-specific context in which they were created. Failing to consider transaction context may open several threats, such as the value imbalance problem. Exploiting this weakness, a seller can build high reputation by selling cheap products while cheating on the expensive ones. In recent years, multiple approaches have been introduced that address this challenge. All of them chose metrics leading to numerical reputation values. These values, however, are non-transparent and quite hard for the end-user to understand. In this work, in contrast, we combine reputation assessment and visual analytics to provide an interactive visualization of multivariate reputation data. We thereby allow users to analyze the data sets and draw conclusions by themselves. In this way, we enhance transparency, involve users in the evaluation process and, as a consequence, increase users' trust in the reputation system.
Keywords: data analysis; data visualisation; trusted computing; data set analysis; interactive visualization; multivariate reputation data; numerical reputation values; reputation system; reputation-based trust assessment; situation-specific context; transaction context visualization; trust system; value imbalance problem; visual analytics; Biological system modeling; Context; Context modeling; Data visualization; Electronic commerce; Measurement; Visual analytics; context; context-awareness; parallel coordinates; reputation; transaction context; trust; visual analytics; visualization (ID#: 15-5388)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980268&isnumber=6980232
Bissessar, D.; Adams, C.; Dong Liu, "Using Biometric Key Commitments To Prevent Unauthorized Lending Of Cryptographic Credentials," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.75,83, 23-24 July 2014. doi: 10.1109/PST.2014.6890926
Abstract: We present a technique that uses privacy enhancing technologies and biometrics to prevent the unauthorized lending of credentials. Current credential schemes suffer the weakness that issued credentials can be transferred between users. Our technique ensures the biometric identity of the individual executing the Issue and Show protocols of an existing credential system in a manner analogous to the enrollment and verification steps in traditional biometric systems. During Issue we create Pedersen commitments on biometrically derived keys obtained from fuzzy extractors. This issue-time commitment is sealed into the issued credential. During Show a verification-time commitment is generated. Correspondence of keys is verified using a zero-knowledge proof of knowledge. The proposed approach preserves the security of the underlying credential system, protects the privacy of the biometric, and generalizes to multiple biometric modalities. We illustrate the usage of our technique by showing how it can be incorporated into digital credentials and anonymous credentials.
Keywords: cryptography; data privacy; Pedersen commitments; anonymous credentials; biometric identity; biometric key commitments; biometric modalities; credential schemes; credential system; cryptographic credentials; digital credentials; fuzzy extractors; issue protocol; issue-time commitment; privacy enhancing technologies; show protocol; Data mining ;Encryption; Measurement; Privacy; Protocols; anonymous credentials; biometrics; digital credentials; fuzzy extractors; non-transferability; privacy enhancing technologies (ID#: 15-5389)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890926&isnumber=6890911
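The commitment primitive the scheme seals into credentials can be sketched as follows. The parameters are toy values (a real deployment needs a large prime-order group and an h whose discrete log relative to g is unknown), and the biometric-derived key is a stand-in, so this shows only the Pedersen mechanics, not the paper's full construction with fuzzy extractors.

    import secrets

    # Toy Pedersen commitment: C = g^m * h^r mod p. Hiding comes from the
    # random r; binding from the hardness of finding log_g(h).
    p = 2**127 - 1                     # a Mersenne prime, for illustration only
    g, h = 3, 7                        # toy generators, assumed independent

    def commit(m: int, r: int) -> int:
        return (pow(g, m, p) * pow(h, r, p)) % p

    key_m = 123456789                  # stand-in for a fuzzy-extractor key
    blind_r = secrets.randbelow(p - 1) # blinding factor held by the user
    C = commit(key_m, blind_r)         # sealed into the credential at Issue
    assert C == commit(key_m, blind_r) # reproduced and proven in ZK at Show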
Sevcik, L.; Uhrin, D.; Frnda, J.; Uhrina, M.; Chmelikova, Z.; Voznak, M., "The Impact of Encryption on Video Transmission in IP Network," Telecommunications Forum Telfor (TELFOR), 2014 22nd, pp. 123, 126, 25-27 Nov. 2014. doi: 10.1109/TELFOR.2014.7034372
Abstract: One of the weaknesses of the original Internet Protocol is that it lacks any sort of general-purpose mechanism for ensuring the authenticity and privacy of data as it is passed over an IP network. With the increased use of the Internet for critical applications, security enhancements were needed for IP. The aim of this paper is to investigate the impact of encryption on video transmission in an IP network. In the paper, we describe an IPsec tunnel using ESP and AH headers, providing confidentiality in terms of safety, integrity and non-repudiation (using HMAC-SHA1 and 3DES encryption for confidentiality, and AES in CBC mode). The other goal was to assess how OpenVPN affects the transmitted video. We compare the results of both measurements in order to express the impact of packet loss on the transmitted video.
Keywords: IP networks; Internet; computer network security; cryptographic protocols; video streaming;3DES encryption; AES; CBC mode;HMAC-SHA1 encryption; IP network; IPsec tunnel; Internet Protocol; OpenVPN; packet loss; security enhancements; video transmission; Encryption; IP networks; Packet loss; Streaming media; Transform coding; Video recording;3DES;AES256;IPsec;SSIM;video quality (ID#: 15-5390)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7034372&isnumber=7034339
Sochor, T., "Overview Of E-Mail SPAM Elimination And Its Efficiency," Research Challenges in Information Science (RCIS), 2014 IEEE Eighth International Conference on, pp. 1, 11, 28-30 May 2014. doi: 10.1109/RCIS.2014.6861043
Abstract: The permanently changing nature of SPAM (hereinafter only in the sense of unsolicited e-mail messages, thus intentionally neglecting other SPAM types) means that it is almost impossible to find any single mechanism that protects against SPAM over the long term. Multi-layer protection is therefore usually applied to SPAM elimination. Blacklisting and greylisting in particular have proven to be extremely useful because they can eliminate a significant part of SPAM messages even before their delivery. The article gives an overview of existing SPAM elimination methods, motivates and describes multilayer anti-SPAM mechanisms, and analyses the behavior and efficiency of the key components of a multilayer system. Weaknesses of the analyzed methods (especially blacklisting and greylisting) are mentioned, and recommendations are formulated, both based on the author's own measurements.
Keywords: collaborative filtering; security of data; unsolicited e-mail; SPAM messages; blacklisting; e-mail SPAM elimination efficiency; greylisting; multilayer antiSPAM mechanisms; multilayer protection; unsolicited e-mail messages; Bayes methods; Educational institutions; IP networks; Market research; Servers; Unsolicited electronic mail; SMTP; SPAM; content search for SPAM; electronic mail; greylisting; lacklisting; unsolicited e-mail messages; unsolicited e-mail message blacklisting (ID#: 15-5391)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861043&isnumber=6860531
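Greylisting, one of the pre-delivery defenses the overview evaluates, fits in a dozen lines. The sketch below (with an illustrative delay value) temporarily rejects the first attempt from an unknown sender triplet, exploiting the fact that legitimate mail servers retry while most spam software does not.

    import time

    GREYLIST_DELAY = 300   # seconds a new triplet must wait; illustrative value
    seen = {}              # (ip, mail_from, rcpt_to) -> time of first attempt

    def smtp_decision(ip, mail_from, rcpt_to, now=None):
        now = time.time() if now is None else now
        first = seen.setdefault((ip, mail_from, rcpt_to), now)
        if now - first < GREYLIST_DELAY:
            return "451 4.7.1 Greylisted, please retry later"   # temporary failure
        return "250 OK"                                          # retry accepted

    print(smtp_decision("203.0.113.9", "a@example.org", "b@example.net", now=1000.0))
    print(smtp_decision("203.0.113.9", "a@example.org", "b@example.net", now=1400.0))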
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Multicore Computing and Security, 2014 |
As high performance computing has evolved into larger and faster computing solutions, new approaches to security have been identified. The articles cited here focus on security issues related to multicore environments, including a new secure processor that obfuscates its memory access trace, proactive dynamic load balancing on multicore systems, and an experimental OS tailored to multicore processors of interest in signal processing. These materials were published in 2014.
Krishnan, S.P.T.; Veeravalli, B., "Performance Characterization and Evaluation of HPC Algorithms on Dissimilar Multicore Architectures," High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, pp. 1288, 1295, 20-22 Aug. 2014. doi: 10.1109/HPCC.2014.219
Abstract: In this paper, we share our experiences in using two important yet different High Performance Computing (HPC) architectures for evaluating two HPC algorithms. The first architecture is an Intel x64 ISA based homogeneous multicore with Uniform Memory Access (UMA) type shared-memory based Symmetric Multi-Processing system. The second architecture is an IBM Power ISA based heterogeneous multicore with Non-Uniform Memory Access (NUMA) based distributed-memory Asymmetric Multi-Processing system. The two HPC algorithms are for predicting biological molecular structures, specifically RNA secondary structures. The first algorithm that we created is a parallelized version of a popular serial RNA secondary structure prediction algorithm called PKNOTS. The second algorithm is a new parallel-by-design algorithm that we have developed called MARSs. Using real Ribo-Nucleic Acid (RNA) sequences, we conducted large-scale experiments involving hundreds of sequences using the above two algorithms. Based on thousands of data points that we collected as an outcome of our experiments, we report on the observed performance metrics for both algorithms on the two architectures. Through our experiments, we infer that architectures with specialized coprocessors for number-crunching, along with a high-speed memory bus and dedicated bus controllers, generally perform better than general-purpose multi-processor architectures. In addition, we observed that algorithms that are intrinsically parallelized by design are able to scale & perform better by taking advantage of the underlying parallel architecture. We further share best practices on handling scalability aspects with regard to workload size. We believe our results are applicable to other HPC applications on similar HPC architectures.
Keywords: parallel architectures; shared memory systems; HPC algorithms; HPC architectures; IBM Power ISA; Intel x64 ISA; MARSs; NUMA; PKNOTS; RNA secondary structures; RNA sequences; UMA type shared-memory; biological molecular structure prediction; dedicated bus controllers; dissimilar multicore architectures; distributed-memory asymmetric multiprocessing system; heterogeneous multicore; high performance computing architectures; high-speed memory bus; homogeneous multicore; nonuniform memory access; number-crunching; parallel architecture; parallel-by-design algorithm; parallelized version; performance characterization; ribo-nucleic acid sequences; serial RNA secondary structure prediction algorithm; specialized coprocessors; uniform memory access type shared-memory; Algorithm design and analysis; Measurement; Multicore processing; Prediction algorithms; Program processors; RNA (ID#: 15-5299)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056909&isnumber=7056577
Venkatesan, V.; Wei Qingsong; Tay, Y.C., "Ex-Tmem: Extending Transcendent Memory with Non-volatile Memory for Virtual Machines," High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, pp. 966, 973, 20-22 Aug. 2014. doi: 10.1109/HPCC.2014.160
Abstract: Virtualization and multicore technology now make it possible to consolidate heterogeneous workloads on one physical machine. Such consolidation helps reduce the amount of idle resources. In particular, transcendent memory (Tmem) is a recent idea to gather idle memory into a pool that is shared by virtual machines (VMs): underutilized RAM from each guest VM, and RAM unassigned to any guest (fallow memory), are collected into a central pool at the hypervisor (VMM) and shared among VMs, forming a new level in the memory hierarchy for VMs between main memory and disks. However, the size of transcendent memory is unstable and frequently fluctuates with changing workloads, and contention among VMs over transcendent memory can cause increased cache misses. In this paper, we propose a mechanism to extend transcendent memory (called Ex-Tmem) by using emerging non-volatile memory. Ex-Tmem stores clean pages in a two-level buffering hierarchy with locality-aware data placement and replacement. In addition, Ex-Tmem enables memory-to-memory swapping by using non-volatile memory and eliminates expensive I/O caused by swapping. Extensive experiments on the implemented prototype indicate that Ex-Tmem improves performance by up to 50% and reduces disk I/O by up to 37%, compared to existing Tmem.
Keywords: random-access storage; virtual machines; Ex-Tmem; RAM unassigned resources; VMM; cache misses; central pool gather idle memory; consolidate heterogeneous workloads; extending transcendent memory; guest VM; hyper visor; locality aware data placement; memory hierarchy; memory-to-memory swapping; multicore technology; nonvolatile memory; physical optimize RAM utilization; replacement; two-level buffering hierarchy; underutilized RAM; virtual environment; virtual machines; virtualization; Kernel; Nonvolatile memory; Phase change materials; Random access memory; Servers; Virtual machine monitors; Virtual machining; Non-volatile Memory; Transcendent Memory; Virtual Machines (VMs) (ID#: 15-5300)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056862&isnumber=7056577
Shafaei, M.; Yunsi Fei, "HiTS: A High Throughput Memory Scheduling Scheme to Mitigate Denial-of-Service Attacks in Multi-core Systems," Computer Architecture and High Performance Computing (SBAC-PAD), 2014 IEEE 26th International Symposium on, pp. 206, 213, 22-24 Oct. 2014. doi: 10.1109/SBAC-PAD.2014.36
Abstract: Sharing DRAM memory among multiple cores in a computer system potentially exposes the threads running on those cores to denial-of-service (DoS) attacks. This issue is usually addressed by memory scheduling schemes that rotate the memory service among threads according to a certain ranking mechanism. These ranking-based schemes, however, often incur many row-buffer conflicts in the memory banks, which reduce the throughput of the DRAM and of the entire system. This paper proposes a new ranking-based memory scheduling scheme, called HiTS, to mitigate DoS attacks in multicore systems with the lowest performance degradation. HiTS achieves this by ranking threads according to each thread's memory usage/requirement. HiTS then enforces the ranking in a way that incurs minimal performance overhead while also balancing fairness. The effectiveness of HiTS is evaluated by simulations with 18 different workloads running on 8- and 16-core machines. The simulation results show up to 15.8% improvement in unfairness reduction and 24.1% in system throughput compared with the best existing scheduling scheme.
Keywords: DRAM chips; computer network security; multiprocessing systems; DRAM memory; DoS attacks; HiTS; denial-of-service attacks; high throughput memory scheduling scheme; multicore systems; ranking-based memory scheduling scheme; Benchmark testing; Computer crime; Instruction sets; Message systems; Random access memory; Switches; Throughput; DRAM memory; denial-of-service attack; memory scheduling scheme; multi-core systems (ID#: 15-5301)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970666&isnumber=6970630
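The core of the HiTS idea, ranking threads by memory intensity so that a memory-hogging (or deliberately hostile) thread cannot starve lighter ones, can be sketched in a few lines of C++. This is an illustrative sketch only; the metric (outstanding requests) and all names are assumptions, not the authors' implementation.

// Illustrative sketch (not the authors' code): rank threads by observed
// memory intensity so that light threads are served first, bounding the
// damage a memory-hogging (or DoS) thread can do to the others.
#include <algorithm>
#include <cstdint>
#include <vector>

struct ThreadStats {
    int id;
    uint64_t outstandingRequests;  // hypothetical memory-usage metric
};

// Lower observed memory usage -> higher priority rank.
std::vector<int> rankThreads(std::vector<ThreadStats> stats) {
    std::sort(stats.begin(), stats.end(),
              [](const ThreadStats& a, const ThreadStats& b) {
                  return a.outstandingRequests < b.outstandingRequests;
              });
    std::vector<int> ranking;
    for (const auto& s : stats) ranking.push_back(s.id);
    return ranking;  // the scheduler serves requests in this order
}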
Moreira, J.; Teixeira, L.; Borin, E.; Rigo, S., "Leveraging Optimization Methods for Dynamically Assisted Control-Flow Integrity Mechanisms," Computer Architecture and High Performance Computing (SBAC-PAD), 2014 IEEE 26th International Symposium on, pp. 49, 56, 22-24 Oct. 2014. doi: 10.1109/SBAC-PAD.2014.35
Abstract: Dynamic Binary Modification (DBM) tools are useful for cross-platform execution of binaries and are powerful run-time environments that allow execution optimizations, instrumentation and profiling. These tools have also been used as enablers for control-flow integrity verification, a process that consists of the observation and analysis of a program's execution path, focusing on the detection of anomalies such as those arising from flow-corruption-based software attacks. Even though this class of tools helps us identify a myriad of attacks, it is typically expensive at run time and introduces significant overhead to the program execution. Considering their inherent high cost, further expanding the capabilities of such tools for detection of program flow anomalies can slow down the analysis to the point that it is unfeasible to run it in real-world workflows. In this paper we present a mechanism for including program flow verification in DBMs that uses asynchronous analysis and applies different parallel-programming techniques that leverage current multi-core systems to control the overhead of our analysis. Our mechanism was tested against synthetic program flow corruption use cases and correctly detected all detours. With our new optimizations, we show that our system achieves a slowdown of only 1.46x, while a naively implemented verification system faces an overhead of 4.22x.
Keywords: multiprocessing systems; parallel programming; program control structures; program diagnostics; program verification; security of data; software tools; DBM tools; asynchronous analysis; control-flow integrity verification; cross-platform execution; dynamic binary modification tools; dynamically assisted control-flow integrity mechanisms; flow corruption based software attacks; multicore systems; optimization methods; parallel-programming techniques; program execution path; program flow anomaly detection; program flow verification; run time environments; synthetic program flow corruption; verification system; Benchmark testing; Computer architecture; Instruments; Monitoring; Optimization; Security; Software (ID#: 15-5302)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970646&isnumber=6970630
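The mechanism's key move, decoupling flow verification from execution so the instrumented program is not stalled by checks, amounts to a producer/consumer pattern across cores. The sketch below is one plausible reading under stated assumptions (a queue of observed branch targets, a precomputed set of legal targets); it is not the paper's code.

// Hedged sketch of asynchronous control-flow verification.
#include <condition_variable>
#include <cstdint>
#include <mutex>
#include <queue>
#include <unordered_set>

std::queue<std::uint64_t> observed;   // branch targets seen by the DBM
std::mutex m;
std::condition_variable cv;
bool done = false;
std::uint64_t anomalies = 0;

// Called from the instrumented code on every indirect branch (producer).
void onIndirectBranch(std::uint64_t target) {
    { std::lock_guard<std::mutex> lk(m); observed.push(target); }
    cv.notify_one();
}

// Runs on a spare core (consumer); validates targets asynchronously so the
// instrumented program never waits for a check to finish.
void checker(const std::unordered_set<std::uint64_t>& legalTargets) {
    std::unique_lock<std::mutex> lk(m);
    while (!done || !observed.empty()) {
        cv.wait(lk, [] { return done || !observed.empty(); });
        while (!observed.empty()) {
            if (!legalTargets.count(observed.front())) ++anomalies;
            observed.pop();
        }
    }
}

void finish() {                        // signal end of execution
    { std::lock_guard<std::mutex> lk(m); done = true; }
    cv.notify_one();
}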
March, J.L.; Petit, S.; Sahuquillo, J.; Hassan, H.; Duato, J., "Dynamic WCET Estimation for Real-Time Multicore Embedded Systems Supporting DVFS," High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, pp. 27, 33, 20-22 Aug. 2014. doi: 10.1109/HPCC.2014.11
Abstract: A key issue in reducing the number of deadline misses and improving energy savings in embedded real-time systems is to accurately estimate the execution time of tasks as a function of the processor frequency. Existing execution time models, however, tend to rely on off-line analysis or on the assumption that the memory access time (quantified in processor cycles) is constant, ignoring that memory system components are not affected by the processor clock. In this paper, we propose the Processor-Memory (Proc-Mem) model, which dynamically predicts the execution time of the applications running on a multicore processor when varying the processor frequency. The Proc-Mem approach is compared with a typical Constant Memory Access Time model, namely CMAT. Results show that the deviation of Proc-Mem is always lower than 6% with respect to the measured execution time, while the deviation of the CMAT model always exceeds 30%. These results translate into important energy savings for a similar number of deadline misses. Energy savings are on average 22.9%, and up to 47.8% in the studied mixes.
Keywords: embedded systems; energy conservation; multiprocessing systems; power aware computing; CMAT model; DVFS; constant memory access time model; deadline misses; dynamic WCET estimation; energy savings; execution time estimation; execution time models; memory system components; multicore processor; off-line analysis; proc-mem model; processor clock; processor cycles; processor frequency; processor-memory model; real-time multicore embedded systems; worst case execution time; Benchmark testing; Estimation; Frequency estimation; Mathematical model; Multicore processing; Real-time systems; Time-frequency analysis (ID#: 15-5303)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056713&isnumber=7056577
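The intuition behind Proc-Mem versus CMAT can be captured in two lines: CPU-bound cycles scale with the clock, while the memory component, governed by DRAM and bus timing, does not. The functions below illustrate only that intuition; they are not the paper's fitted model.

// Sketch of the Proc-Mem intuition: only the CPU-bound portion of a task
// scales with frequency; the memory portion is set by DRAM timing.
double procMemTime(double cpuCycles, double memSeconds, double freqHz) {
    return cpuCycles / freqHz + memSeconds;   // frequency-aware estimate
}

// The constant-memory-access-time (CMAT) view folds memory into cycles,
// so it wrongly scales the memory term with frequency too.
double cmatTime(double totalCycles, double freqHz) {
    return totalCycles / freqHz;
}

Under this sketch, halving the frequency doubles only the CPU term, which is why a constant-cycles model increasingly overestimates the slowdown of memory-bound tasks as frequency drops.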
Park, S.; Park, Y.B., "A Multi-Core Architectural Pattern Selection Method for the Transition from Single-Core to Multi-Core Architecture," IT Convergence and Security (ICITCS), 2014 International Conference on, pp. 1, 5, 28-30 Oct. 2014. doi: 10.1109/ICITCS.2014.7021712
Abstract: Along with rapid advancements in convergent devices, increased software complexity paired with a contrastingly shortened software product lifecycle has introduced new challenges, from which the need to transform legacy single-core based systems into multi-core systems has emerged. Unfortunately, existing software development processes are late in providing adequate support for multi-core parallelization, failing to keep up with the speed of advancements in multi-core based hardware systems. To address this gap, in our previous work we proposed a software development process to support the transition of existing single-core based software to a multi-core equivalent. We also introduced a tool, the Architectural Decision Supporter (ADS), to assist in the selection of appropriate multi-core architectural patterns and in the search for proper construction components. In this paper, we introduce a selection method for choosing the most desirable candidate among various multi-core architectural patterns implemented using ADS. The proposed method provides the means to combine the contextual knowledge of domain applications and the technical knowledge of individual architectural patterns for multi-core processing.
Keywords: multiprocessing systems; software architecture; software maintenance; ADS; Architectural Decision Supporter; domain applications contextual knowledge; individual architectural pattern technical knowledge; multicore architectural pattern selection method; multicore processing; single-core architecture; Concurrent computing; Decoding; Educational institutions; Hardware; Multicore processing; Software (ID#: 15-5304)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7021712&isnumber=7021698
Joonho Kong; Koushanfar, F., "Processor-Based Strong Physical Unclonable Functions With Aging-Based Response Tuning," Emerging Topics in Computing, IEEE Transactions on, vol. 2, no. 1, pp. 16, 29, March 2014. doi: 10.1109/TETC.2013.2289385
Abstract: A strong physically unclonable function (PUF) is a circuit structure that extracts an exponential number of unique chip signatures from a bounded number of circuit components. The strong PUF unique signatures can enable a variety of low-overhead security and intellectual property protection protocols applicable to several computing platforms. This paper proposes a novel lightweight (low overhead) strong PUF based on the timings of a classic processor architecture. A small amount of circuitry is added to the processor for on-the-fly extraction of the unique timing signatures. To achieve desirable strong PUF properties, we develop an algorithm that leverages intentional post-silicon aging to tune the inter- and intra-chip signatures variation. Our evaluation results show that the new PUF meets the desirable inter- and intra-chip strong PUF characteristics, whereas its overhead is much lower than the existing strong PUFs. For the processors implemented in 45 nm technology, the average inter-chip Hamming distance for 32-bit responses is increased by 16.1% after applying our post-silicon tuning method; the aging algorithm also decreases the average intra-chip Hamming distance by 98.1% (for 32-bit responses).
Keywords: computer architecture; cryptographic protocols; digital signatures; microprocessor chips; Hamming distance; PUF; aging based response tuning; circuit components; circuit structure; computing platforms; exponential number; intellectual property protection protocols; processor architecture; processor based strong physical unclonable functions; unique chip signatures; Aging; Circuit optimization; Delays; Logic gates; Microprocessors; Multicore processing; Network security; Silicon; Temperature measurement; Circuit aging; Multi-core processor; Negative bias temperature instability; Physically unclonable function; Post-silicon tuning; Secure computing platform; circuit aging; multi-core processor; negative bias temperature instability; postsilicon tuning; secure computing platform (ID#: 15-5305)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6656920&isnumber=6824880
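The two quality metrics quoted above, inter-chip Hamming distance (uniqueness, ideally near 50% of response bits) and intra-chip Hamming distance (reproducibility, ideally near 0%), reduce to a popcount of XORed 32-bit responses. A minimal sketch:

// Hamming distance between two 32-bit PUF responses.
#include <bitset>
#include <cstdint>

int hamming32(uint32_t a, uint32_t b) {
    return static_cast<int>(std::bitset<32>(a ^ b).count());
}
// Inter-chip: hamming32(responseChipA, responseChipB) for the same challenge.
// Intra-chip: hamming32(response, responseRemeasured) on a single chip.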
Yuheng Yuan; Zhenzhong He; Zheng Gong; Weidong Qiu, "Acceleration of AES Encryption with OpenCL," Information Security (ASIA JCIS), 2014 Ninth Asia Joint Conference on, pp. 64, 70, 3-5 Sept. 2014. doi: 10.1109/AsiaJCIS.2014.19
Abstract: The advent of multi-core processors has made parallel techniques popular. OpenCL, which enables access to the computing power of multiple platforms and takes advantage of the parallel features of computing devices, is gradually gaining researchers' favor. However, when using parallel techniques, the choice of computation granularity and memory allocation strategy bothers developers the most. To solve this problem, many researchers have run experiments on Nvidia GPUs and found the best solutions for using CUDA. When it comes to using OpenCL on AMD GPUs, to the best of our knowledge, fewer solutions have been proposed in the literature. Therefore, we conduct several experiments to demonstrate the relation between computation granularity and memory allocation methods for the input data when using OpenCL for AES encoding. At a granularity of 16 bytes/thread, the encryption throughput of our experiment reaches 5 Gbps. Compared with previous works, the price-performance ratio of the GPU in our experiment is promising.
Keywords: cryptography; graphics processing units; multiprocessing systems; parallel processing; storage allocation; AES encoding; AES encryption; AMD GPU; CUDA; Nvidia GPU; OpenCL; computation granularity; computing device; encryption throughput; memory allocation method; memory allocation strategy; multicore processor; parallel technique; Computational modeling; Encryption; Graphics processing units; Instruction sets; Parallel processing; Resource management; Throughput; AES; Cryptography algorithm; Fast parallel implementation; GPU; OpenCL (ID#: 15-5306)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023241&isnumber=7022266
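The reported sweet spot of 16 bytes/thread corresponds to one AES block per OpenCL work-item. The kernel skeleton below, with the round function elided and all names invented, illustrates only that granularity; it is not the authors' kernel.

// Illustrative OpenCL kernel skeleton: one 16-byte AES block per work-item.
const char* kAesKernel = R"CLC(
__kernel void aes_encrypt(__global const uchar* in,
                          __global uchar* out,
                          __constant uchar* roundKeys) {
    size_t block = get_global_id(0);      // one 16-byte block per work-item
    uchar state[16];
    for (int i = 0; i < 16; ++i)
        state[i] = in[block * 16 + i];
    /* ... AddRoundKey / SubBytes / ShiftRows / MixColumns rounds ... */
    for (int i = 0; i < 16; ++i)
        out[block * 16 + i] = state[i];
}
)CLC";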
Martinsen, J.K.; Grahn, H.; Isberg, A.; Sundstrom, H., "Reducing Memory in Software-Based Thread-Level Speculation for JavaScript Virtual Machine Execution of Web Applications," High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, pp. 181, 184, 20-22 Aug. 2014. doi: 10.1109/HPCC.2014.34
Abstract: Thread-Level Speculation has been used to take advantage of multicore processors in virtual execution environments for the sequential JavaScript scripting language. While the results are promising, the memory overhead is high. Here we propose to reduce the memory usage by limiting the checkpoint depth, based on an in-depth study of the memory and execution time effects. We also propose an adaptive heuristic to dynamically adjust the checkpoints. We evaluate this using 15 web applications on an 8-core computer. The results show that the memory overhead for Thread-Level Speculation is reduced by over 90% as compared to storing all checkpoints. Further, the performance is often better than when storing all the checkpoints, and at worst 4% slower.
Keywords: Internet; Java; authoring languages; checkpointing; multiprocessing systems; virtual machines; JavaScript scripting language; Javascript virtual machine execution; Web applications; checkpoint depth; memory overhead; memory usage; multicore processor; software-based thread-level speculation; virtual execution environment; Electronic publishing; Encyclopedias; Instruction sets; Internet; Limiting; Memory management; multicore; thread-level speculation; web applications (ID#: 15-5307)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056737&isnumber=7056577
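The depth-limiting idea can be sketched as a bounded deque of checkpoints plus a simple adaptive rule. The thresholds below are invented for illustration and are not the paper's heuristic.

// Sketch: keep only the most recent checkpoints; adapt the limit when
// memory pressure rises or rollbacks become frequent (thresholds invented).
#include <deque>

struct Checkpoint { /* saved interpreter state */ };

std::deque<Checkpoint> checkpoints;
size_t maxDepth = 8;

void addCheckpoint(const Checkpoint& c) {
    checkpoints.push_back(c);
    while (checkpoints.size() > maxDepth)
        checkpoints.pop_front();   // discard oldest instead of keeping all
}

void adapt(double memoryPressure, double rollbackRate) {
    if (memoryPressure > 0.9 && maxDepth > 1) --maxDepth;
    else if (rollbackRate > 0.1) ++maxDepth;
}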
Gray, A.; Stratford, K., "targetDP: an Abstraction of Lattice Based Parallelism with Portable Performance," High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, pp. 312, 315, 20-22 Aug. 2014. doi: 10.1109/HPCC.2014.212
Abstract: To achieve high performance on modern computers, it is vital to map algorithmic parallelism to that inherent in the hardware. From an application developer's perspective, it is also important that code can be maintained in a portable manner across a range of hardware. Here we present targetDP (target Data Parallel), a lightweight programming layer that allows the abstraction of data parallelism for applications that employ structured grids. A single source code may be used to target both thread level parallelism (TLP) and instruction level parallelism (ILP) on either SIMD multi-core CPUs or GPU-accelerated platforms. targetDP is implemented via standard C preprocessor macros and library functions, can be added to existing applications incrementally, and can be combined with higher-level paradigms such as MPI. We present CPU and GPU performance results for a benchmark taken from the lattice Boltzmann application that motivated this work. These demonstrate not only performance portability, but also the optimization resulting from the intelligent exposure of ILP.
Keywords: C language; application program interfaces; graphics processing units; lattice Boltzmann methods; macros; message passing; multi-threading; multiprocessing systems; programming environments; software libraries; source code (software); ILP; MPI; SIMD multicore CPU-accelerated platform; SIMD multicore GPU-accelerated platform; TLP; algorithmic parallelism; data parallelism abstraction; higher-level paradigms; instruction level parallelism; lattice Boltzmann application; lattice based parallelism abstraction; library functions; lightweight programming layer; performance portability; portable performance; source code; standard C preprocessor macros; structured grids; target data parallel; targetDP; thread level parallelism; Computer architecture; Graphics processing units; Hardware; Lattices; Libraries; Parallel processing; Vectors (ID#: 15-5308)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056758&isnumber=7056577
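targetDP's approach of hiding the parallel target behind C preprocessor macros can be illustrated with a toy macro (not the real targetDP API) that maps a lattice loop either to OpenMP threads or to a plain sequential loop from one source:

// Toy macro in the spirit of targetDP (illustrative only; the real library
// also exposes ILP/vectorization controls and a CUDA back end).
#ifdef USE_OPENMP
#define TARGET_LOOP(i, n) \
    _Pragma("omp parallel for") \
    for (int i = 0; i < (n); ++i)
#else
#define TARGET_LOOP(i, n) for (int i = 0; i < (n); ++i)
#endif

void scale(double* field, int nSites, double a) {
    TARGET_LOOP(i, nSites) { field[i] *= a; }  // same source, either target
}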
Lindsay, A.; Ravindran, B., "On Cache-Aware Task Partitioning for Multicore Embedded Real-Time Systems," High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, pp. 677, 684, 20-22 Aug. 2014. doi: 10.1109/HPCC.2014.105
Abstract: One approach for real-time scheduling on multicore platforms involves task partitioning, which statically assigns tasks to cores, enabling subsequent core-local scheduling. No past partitioning schemes explicitly consider cache effects. We present a partitioning scheme called LWFG, which minimizes cache misses by partitioning tasks that share memory onto the same core and by evenly distributing the total working set size across cores. Our implementation reveals that LWFG improves execution efficiency and reduces mean maximum tardiness over past works by as much as 15% and 60%, respectively.
Keywords: cache storage; multiprocessing systems; real-time systems; scheduling; shared memory systems; LWFG; cache aware task partitioning; core local scheduling; multicore embedded real-time systems; multicore platforms; real-time scheduling; share memory; Job shop scheduling; Linux; Multicore processing; Partitioning algorithms; Processor scheduling; Real-time systems; Schedules; WSS; cache; multicore; real-time; scheduling (ID#: 15-5309)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056816&isnumber=7056577
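A greedy reading of the two LWFG goals, co-locating tasks that share memory and balancing total working-set size (WSS) across cores, might look like the following sketch; the greedy pass is an invented stand-in, not the paper's algorithm.

// Sketch: tasks sharing memory are pre-grouped; each group then goes to
// the core with the smallest accumulated WSS (largest groups first).
#include <algorithm>
#include <vector>

struct TaskGroup { double wss; /* tasks sharing memory */ };

std::vector<double> partition(std::vector<TaskGroup> groups, int nCores) {
    std::vector<double> coreLoad(nCores, 0.0);
    std::sort(groups.begin(), groups.end(),
              [](const TaskGroup& a, const TaskGroup& b) { return a.wss > b.wss; });
    for (const auto& g : groups) {
        auto lightest = std::min_element(coreLoad.begin(), coreLoad.end());
        *lightest += g.wss;                    // assign group to lightest core
    }
    return coreLoad;                           // per-core WSS after placement
}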
Berger, M.; Erlacher, F.; Sommer, C.; Dressler, F., "Adaptive Load Allocation For Combining Anomaly Detectors Using Controlled Skips," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp. 792, 796, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785438
Abstract: Traditional Intrusion Detection Systems (IDS) can be complemented by an Anomaly Detection Algorithm (ADA) to also identify unknown attacks. We argue that, as each ADA has its own strengths and weaknesses, it might be beneficial to rely on multiple ADAs to obtain deeper insights. ADAs are very resource intensive; thus, real-time detection with multiple algorithms is even more challenging in high-speed networks. To handle such high data rates, we developed a controlled load allocation scheme that adaptively allocates multiple ADAs on a multi-core system. The key idea of this concept is to utilize as many algorithms as possible without causing random packet drops, which is the typical system behavior in overload situations. We developed a proof of concept anomaly detection framework with a sample set of ADAs. Our experiments confirm that the detection performance can substantially benefit from using multiple algorithms and that the developed framework is also able to cope with high packet rates.
Keywords: multiprocessing systems; real-time systems; resource allocation; security of data; ADA; IDS; adaptive load allocation; anomaly detection algorithm; controlled load allocation; controlled skips; high-speed networks; intrusion detection systems; multicore system; multiple algorithms; real-time detection; resource intensive; unknown attacks; High-speed networks; Intrusion detection; Probabilistic logic; Reliability; Uplink; World Wide Web (ID#: 15-5310)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785438&isnumber=6785290
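The controlled-skip idea, deliberately skipping a saturated detector rather than letting the kernel drop packets at random, can be sketched as a per-ADA queue check; the structure and capacities below are assumptions, not the framework's code.

// Sketch: offer each packet to every detector with queue room; when a
// queue is full, skip that detector deliberately (a controlled skip)
// instead of suffering a random drop upstream.
#include <cstddef>
#include <vector>

struct Detector {
    size_t queued = 0, capacity = 1024;  // illustrative per-ADA queue
    size_t skipped = 0;                  // controlled skips, for accounting
};

void dispatchPacket(std::vector<Detector>& detectors) {
    for (auto& d : detectors) {
        if (d.queued < d.capacity) ++d.queued;  // analyze later
        else ++d.skipped;                       // controlled skip, not a drop
    }
}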
Ye, J.; Songyuan Li; Tianzhou Chen; Minghui Wu; Li Liu, "Core Affinity Code Block Schedule to Reduce Inter-core Data Synchronization of SpMT," High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, pp. 1002, 1007, 20-22 Aug. 2014. doi: 10.1109/HPCC.2014.175
Abstract: Extracting parallelism from programs is growing in importance as the number of processor cores increases. Parallelization usually involves splitting a sequential thread and scheduling the split code to run on multiple cores. For example, some previous Speculative Multi-Threading (SpMT) research used code block reordering to automatically parallelize a sequential thread on multi-core processors. Although the parallelized code blocks can run on different cores, there may still be some data dependences among them. Therefore such parallelization introduces data dependences among the cores where the code blocks run, which must be resolved alongside the execution by cross-core data sync. Cross-core data sync is usually expensive. This paper proposes to minimize cross-core data sync with core-affinity-aware code block scheduling. Our work is based on a Speculative Multi-Threading approach with code block reordering. We improve it by implementing an affinity-aware block scheduling algorithm. We built a simulator to model the SpMT architecture and conducted experiments with SPEC2006 benchmarks. The data shows that much of the cross-core data sync can be eliminated (e.g., up to 28.7% for gromacs) by the affinity-aware block scheduling. For an inter-core register sync delay of 5 cycles, this may translate to a 3.73% increase in performance.
Keywords: multi-threading; multiprocessing systems; scheduling; synchronisation; SpMT; affinity aware block scheduling algorithm; code parallelization; core affinity code block schedule; intercore data synchronization; speculative multithreading; Educational institutions; Instruction sets; Multicore processing; Parallel processing; Registers; Schedules; Synchronization; data sync; multi-core; parallelization; speculative multithreading (ID#: 15-5311)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056867&isnumber=7056577
Kishore, N.; Kapoor, B., "An Efficient Parallel Algorithm For Hash Computation In Security And Forensics Applications," Advance Computing Conference (IACC), 2014 IEEE International, pp. 873, 877, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779437
Abstract: Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm for faster parallel implementation in applications such as digital signatures and data preservation in digital forensics. The algorithm implements a recursive hash to break the chain dependencies of the standard hash function. We discuss the theoretical foundation for the work, including the collision probability and the performance implications. The algorithm is implemented using the OpenMP API, and experiments were performed using machines with multicore processors. The results show a performance gain of more than a factor of 3 when running on the 8-core configuration of the machine.
Keywords: application program interfaces; cryptography; digital forensics; digital signatures; file organisation; parallel algorithms; probability; OpenMP API; SHA-1 algorithm; collision probability; data preservation; digital forensics; digital signature; hash computation; hashing algorithms; information security; parallel algorithm; standard hash function; Algorithm design and analysis; Conferences; Cryptography; Multicore processing; Program processors; Standards; Cryptographic Hash Function; Digital Forensics; Digital Signature;MD5;Multicore Processors; OpenMP; SHA-1 (ID#: 15-5312)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779437&isnumber=6779283
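The recursive-hash construction described above, hashing fixed-size blocks independently and then hashing the concatenated digests, can be sketched with OpenMP. For brevity the sketch calls OpenSSL's standard SHA-1 rather than the paper's modified SHA-1, so it yields a different digest than either the paper's scheme or plain SHA-1.

// Tree-style parallel hash: per-block SHA-1 in parallel, then one combining
// SHA-1 over the concatenated digests. Compile with -fopenmp -lcrypto.
#include <openssl/sha.h>
#include <vector>

std::vector<unsigned char> parallelHash(const unsigned char* data, size_t len,
                                        size_t blockSize) {
    size_t nBlocks = (len + blockSize - 1) / blockSize;
    std::vector<unsigned char> digests(nBlocks * SHA_DIGEST_LENGTH);
    #pragma omp parallel for
    for (long i = 0; i < (long)nBlocks; ++i) {   // blocks are independent
        size_t off = i * blockSize;
        size_t n = (off + blockSize <= len) ? blockSize : len - off;
        SHA1(data + off, n, &digests[i * SHA_DIGEST_LENGTH]);
    }
    std::vector<unsigned char> root(SHA_DIGEST_LENGTH);
    SHA1(digests.data(), digests.size(), root.data());  // combine digests
    return root;
}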
Tian Xu; Cockshott, P.; Oehler, S., "Acceleration of Stereo-Matching on Multi-core CPU and GPU," High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, pp. 108, 115, 20-22 Aug. 2014. doi: 10.1109/HPCC.2014.22
Abstract: This paper presents an accelerated version of a dense stereo-correspondence algorithm for two different parallelism-enabled architectures, multi-core CPU and GPU. The algorithm is part of the vision system developed for a binocular robot head in the context of the CloPeMa research project. This research project focuses on the conception of a new clothes-folding robot with real-time and high-resolution requirements for the vision system. The performance analysis shows that the parallelised stereo-matching algorithm has been significantly accelerated, achieving 12× and 176× speed-ups for multi-core CPU and GPU respectively, compared with SISD (Single Instruction, Single Data) single-thread CPU. To analyse the origin of the speed-up and gain a deeper understanding of the choice of the optimal hardware, the algorithm was broken into key sub-tasks and the performance was tested for four different hardware architectures.
Keywords: graphics processing units; image matching; image resolution; multiprocessing systems; parallel architectures; robot vision; service robots; stereo image processing; CloPeMa research project; GPU; SISD single-thread CPU; binocular robot-head; clothes folding robot; dense stereo-correspondence algorithm; hardware architectures; high resolution requirements; multicore CPU; parallelised stereo-matching algorithm; single instruction single data; stereo-matching acceleration; vision system; Acceleration; Algorithm design and analysis; Graphics processing units; Image resolution; Instruction sets; Robots; Acceleration; Dense-correspondences; Multi-core CPU; Robotic vision; Stereo matching (ID#: 15-5313)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056725&isnumber=7056577
Hongwei Zhou; Rangyu Deng; Zefu Dai; Xiaobo Yan; Ying Zhang; Caixia Sun, "The Virtual Open Page Buffer for Multi-core and Multi-thread Processors," High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, pp. 290, 297, 20-22 Aug. 2014. doi: 10.1109/HPCC.2014.52
Abstract: The performance of off-chip DDRx SDRAM has been greatly restricted by the single physical page that can be activated for each DRAM bank at any time. To alleviate this problem, an on-chip virtual open page buffer (VOPB) for multi-core multi-thread processor is proposed. The VOPB maintains a number of virtual active pages for each bank of off-chip memory, which effectively increases the maximum number of active pages and reduces page conflicts in the off-chip memory. Adopted by the FT-1500 processor, the VOPB along with optimized address mapping techniques greatly enhances the bandwidth, latency and energy efficiency of off-chip memory, especially for stream applications. Experimental results show that the VOPB improves off-chip memory bandwidth by 16.87% for Stream OpenMP and 6% for NPB-MPI on average.
Keywords: DRAM chips; message passing; multi-threading; multiprocessing systems; paged storage; DRAM bank; FT-1500 processor; NPB-MPI; Stream OpenMP; VOPB; active pages; address mapping technique optimization; bandwidth enhancement; energy efficiency enhancement; latency enhancement; multicore-multithread processor; off-chip DDRx SDRAM; off-chip memory; off-chip memory bandwidth improvement; on-chip virtual open page buffer; page conflict reduction; physical page; stream applications; virtual active pages; Arrays; Bandwidth; Prefetching; Random access memory; System-on-chip; memory bandwidth; multi-thread; virtual open page (ID#: 15-5314)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056755&isnumber=7056577
Fengwei Zhang; Jiang Wang; Kun Sun; Stavrou, A., "HyperCheck: A Hardware-Assisted Integrity Monitor," Dependable and Secure Computing, IEEE Transactions on, vol. 11, no. 4, pp. 332, 344, July-Aug. 2014. doi: 10.1109/TDSC.2013.53
Abstract: The advent of cloud computing and inexpensive multi-core desktop architectures has led to the widespread adoption of virtualization technologies. Furthermore, security researchers embraced virtual machine monitors (VMMs) as a new mechanism to guarantee deep isolation of untrusted software components, which, coupled with their popularity, promoted VMMs as a prime target for exploitation. In this paper, we present HyperCheck, a hardware-assisted tampering detection framework designed to protect the integrity of hypervisors and operating systems. Our approach leverages System Management Mode (SMM), a CPU mode in the x86 architecture, to transparently and securely acquire and transmit the full state of a protected machine to a remote server. We have implemented two prototypes based on our framework design, HyperCheck-I and HyperCheck-II, which vary in their security assumptions and OS code dependence. In our experiments, we are able to identify rootkits that target the integrity of both hypervisors and operating systems. We show that HyperCheck can defend against attacks that attempt to evade our system. In terms of performance, we measured that HyperCheck can communicate the entire static code of the Xen hypervisor and the CPU register states in less than 90 million CPU cycles, or 90 ms on a 1 GHz CPU.
Keywords: security of data; virtual machines; virtualisation; CPU register; HyperCheck-I; HyperCheck-II; OS code dependence; SMM; VMM; Xen hypervisor; cloud computing; hardware-assisted integrity monitor; hardware-assisted tampering detection framework; multicore desktop architectures; operating systems; security assumptions; system management mode; untrusted software components; virtual machine monitors; Biomedical monitoring; Hardware; Kernel; Monitoring; Registers; Security; Virtual machine monitors; Coreboot; Hypervisor; kernel; system management mode (ID#: 15-5315)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6682894&isnumber=6851971
Chi Liu; Ping Song; Yi Liu; Qinfen Hao, "Efficient Work-Stealing with Blocking Deques," High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, pp. 149, 152, 20-22 Aug. 2014. doi: 10.1109/HPCC.2014.28
Abstract: Work stealing is a popular and effective approach to implement load balancing in modern multi-/many-core systems, where each parallel thread has a local deque to maintain its own work-set of tasks and performs load balancing by stealing tasks from other deques. Unfortunately, the existing concurrent deques have two limitations. Firstly, these algorithms require memory fences in the owner's critical-path operations to ensure correctness, which is expensive in modern weak-memory architectures. Secondly, the concurrent deques are difficult to extend to support various flexible forms of task distribution strategies, which can be more effective for optimizing computation in some special applications, such as the steal-half strategy in solving large, irregular graph problems. This paper proposes a blocking work-stealing deque. We optimize work-stealing task deques through effective ways of accessing the deques to decrease the synchronization overhead. These ways can reduce the frequency of races when different threads need to operate on the same deque, especially when using massive numbers of threads. We present an implementation of the algorithm as a C++ library, and the experimental results show that it compares well with Cilk Plus on a series of benchmarks. Since our approach relies on blocking deques, it is easy to extend to support flexible task creation and distribution strategies, and it also reduces the impact of memory fences on performance.
Keywords: C++ language; multi-threading; resource allocation; software libraries; C++ library; Cilk plus; blocking work-stealing deque; load balancing; many-core systems; memory fences impact reduction; multicore systems; parallel thread; synchronization overhead; Algorithm design and analysis; Benchmark testing; Containers; Instruction sets; Processor scheduling; Synchronization; deque; load balancing; scheduling strategies; work stealing (ID#: 15-5316)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056731&isnumber=7056577
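A minimal blocking deque in the spirit the abstract describes, one mutex serializing owner and thieves so the owner's fast path needs no explicit memory fences beyond the lock, and with steal-half trivially expressible, might look like the following sketch (illustrative, not the paper's C++ library):

// Blocking work-stealing deque: owner pops from the back, thieves take
// half from the front; flexible policies are easy because everything is
// under one lock.
#include <deque>
#include <mutex>
#include <vector>

template <typename Task>
class BlockingDeque {
    std::deque<Task> q;
    std::mutex m;
public:
    void push(Task t) { std::lock_guard<std::mutex> lk(m); q.push_back(t); }
    bool pop(Task& t) {                      // owner end
        std::lock_guard<std::mutex> lk(m);
        if (q.empty()) return false;
        t = q.back(); q.pop_back(); return true;
    }
    std::vector<Task> stealHalf() {          // thief end (steal-half policy)
        std::lock_guard<std::mutex> lk(m);
        std::vector<Task> got(q.begin(), q.begin() + q.size() / 2);
        q.erase(q.begin(), q.begin() + q.size() / 2);
        return got;
    }
};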
Castellanos, A.; Moreno, A.; Sorribes, J.; Margalef, T., "Predicting Performance of Hybrid Master/Worker Applications Using Model-Based Regression Trees," High Performance Computing and Communications, 2014 IEEE 6th Intl Symp on Cyberspace Safety and Security, 2014 IEEE 11th Intl Conf on Embedded Software and Syst (HPCC,CSS,ICESS), 2014 IEEE Intl Conf on, pp. 355, 362, 20-22 Aug. 2014. doi: 10.1109/HPCC.2014.61
Abstract: Nowadays, there are several features related to node architecture, network topology and programming model that significantly affect the performance of applications. Therefore, the task of adjusting the values of parameters of hybrid parallel applications to achieve the best performance requires a high degree of expertise and a huge effort. Determining a performance model that considers all the system and application features is a very complex task that in most cases produces poor results. In order to simplify this goal and improve the results, we introduce a model-based regression tree technique to improve the accuracy of performance prediction for parallel Master/Worker applications on homogeneous multicore systems. The technique has been used to model the iteration time of the general expression for performance prediction. This approach significantly reduces the effort in getting an accurate prediction model, although it requires a relatively large training data set. The proposed model determines the configuration of the appropriate number of workers and threads of the hybrid application to achieve the best possible performance.
Keywords: iterative methods; multiprocessing systems; parallel processing; performance evaluation; regression analysis; trees (mathematics);homogeneous multicore systems; hybrid master-worker applications; hybrid parallel applications; iteration time; large training data set; model-based regression tree technique; network topology; node architecture; performance prediction; programming model; Computational modeling; Message systems; Multicore processing; Predictive models; Regression tree analysis; Training; Training data; Hybrid applications; Master/Worker; Multicore; Performance model; Regression tree (ID#: 15-5317)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7056765&isnumber=7056577
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Open Systems and Security, 2014 |
Open systems historically seemed "immune" to cyber-attacks because hackers used the same software. But increasingly, open systems vulnerabilities are being exploited. The articles cited here explore various aspects of open systems security, including resource sharing, software specifications, attack vectors and dependability. Nearly five hundred research articles on the subject of open systems and security were published in 2014. The ones cited here appear to have the most direct relevance to the Science of Security and cyber-physical systems.
Azhar, I.; Ahmed, N.; Abbasi, A.G.; Kiani, A.; Shibli, A., "Keeping Secret Keys Secret In Open Systems," Open Source Systems and Technologies (ICOSST), 2014 International Conference on, pp. 100, 104, 18-20 Dec. 2014. doi: 10.1109/ICOSST.2014.7029328
Abstract: Security of cryptographic keys stored on an untrusted host is a challenging task. Casual storage of keys could lead to unauthorized access using physical means. If an adversary can access the binary code, the key material can be easily extracted using well-known key-finding techniques. This paper proposes a new technique for securing keys within software. In our proposed technique, we transform keys (randomly generated bit-strings) into a set of randomized functions, which are then compiled and obfuscated together to form a secure application. When the keys are required at run time, an inverse transform is computed by the application dynamically to yield the original bit-strings. We demonstrate that our technique resists attacks by many entropy-based key-finding algorithms that scan the host's RAM at run time.
Keywords: computer network security; cryptography; inverse transforms; open systems; RAM; binary code; cryptographic key security; entropy-based key finding algorithm; inverse transform; key material; key-finding technique; open systems; randomized functions; randomly-generated bit-strings; secret keys; Availability; Cryptography; Heuristic algorithms; Lead; Open systems; Software; Key Hiding; Open System Security; White-Box Model (ID#: 15-5253)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7029328&isnumber=7029304
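A much-simplified stand-in for the paper's idea is shown below. The authors compile keys into obfuscated randomized functions; here the key merely lives as two XOR shares, so the raw bit-string never sits in RAM until the moment of use. This defeats naive string scanning but, on its own, would not resist the entropy-based scanners the paper targets, since random shares are themselves high-entropy.

// Simplified illustration only: the key exists in memory as two shares
// and is recombined on demand by an "inverse transform".
#include <cstdint>
#include <vector>

static const std::vector<uint8_t> shareA = {0x3c, 0xa1, 0x5e, 0x77};
static const std::vector<uint8_t> shareB = {0x9f, 0x02, 0xe3, 0x10};

std::vector<uint8_t> deriveKey() {            // inverse transform at run time
    std::vector<uint8_t> key(shareA.size());
    for (size_t i = 0; i < key.size(); ++i)
        key[i] = shareA[i] ^ shareB[i];       // recombine shares on demand
    return key;                               // caller should wipe after use
}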
Shand, C.; McMorran, A.; Taylor, G., "Integration And Adoption Of Open Data Standards For Online And Offline Power System Analysis," Power Engineering Conference (UPEC), 2014 49th International Universities, pp. 1, 6, 2-5 Sept. 2014. doi: 10.1109/UPEC.2014.6934667
Abstract: The scalable communication, processing and storage of data within a power network is becoming more and more necessary to ensure the reliability of the grid and maintain the security of supply to consumers. Not all communications are performed in the same timeframe, at the same frequency, or at the same time of day; this results in problems when trying to coordinate a power network and the necessary data exchange. Different open or proprietary standards are often incompatible with each other, both in terms of their communication protocols and their data models. This causes electricity companies and standards groups to develop their own methods of data exchange, resulting in problems for exchanging and integrating data, both internally and externally. Overcoming the challenges of incompatible data structures, serialisation formats and communication protocols will make it easier to integrate systems and realise the potential of integrating data across domains. Examples include the ability to integrate real-time data into offline analysis tools, or to utilise smart-meter data to enable true real-time pricing for electricity markets.
Keywords: data communication; power distribution economics; power grids; power markets; power supplies to apparatus; power system security; power transmission economics; protocols; communication protocol;data exchange; data storage model; electricity company; electricity consumer; electricity market; offline power system analysis; online power system analysis; open data standard integration; power network grid reliability; power supply security; scalable communication; smart meter; Computer integrated manufacturing; Data models; IEC standards; Phasor measurement units; Protocols; Real-time systems; Communication; Data Exchange; Open Standards; Power System Analysis (ID#: 15-5254)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934667&isnumber=6934581
Subin Shen; Carugi, M., "Standardizing The Internet Of Things In An Evolutionary Way," ITU Kaleidoscope Academic Conference: Living In A Converged World - Impossible Without Standards?, Proceedings of the 2014, pp. 249, 254, 3-5 June 2014. doi: 10.1109/Kaleidoscope.2014.6858472
Abstract: The current situation of technology separation among the different application domains of the Internet of Things (IoT) results in a market separation per application domain. This issue hinders technical innovation and investment in the IoT business. In order to solve the issue, it is necessary to standardize common technologies of the IoT across the different application domains. This paper argues that the key direction of future standardization of the IoT is not standardizing specific technologies, but building on a standardized new architecture reference model for the IoT. Based on an analysis of existing key activities concerning the standardization of OSI, NGN and the IoT from a functional architecture perspective, it suggests that IoT standardization work proceed in an evolutionary way in order to enable the integration of existing technologies, and focus on the interactions among the functional entities of the IoT to impose minimum constraints on future technical innovations.
Keywords: Internet of Things; social aspects of automation; International Telecommunication Union; Internet of Things; IoT; evolutionary way; next generation network; open system interconnection; Computer architecture; Next generation networking; Open systems; Privacy; Security; Telecommunication standards; Internet of Things; Next Generation Network; Open System Interconnection; architecture reference model; functional entity; interaction; standardization (ID#: 15-5255)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6858472&isnumber=6858455
Alqahtani, S.M.; Al Balushi, M.; John, R., "An Intelligent Intrusion Prevention System for Cloud Computing (SIPSCC)," Computational Science and Computational Intelligence (CSCI), 2014 International Conference on, vol. 2, pp. 152, 158, 10-13 March 2014. doi: 10.1109/CSCI.2014.161
Abstract: Cloud computing is a fast-growing IT model for the exchange and delivery of different services through the Internet. However, there is a plethora of security concerns in cloud computing which still need to be tackled (e.g., confidentiality, auditability and privileged user access). To detect and prevent such issues, the Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) are effective mechanisms against attacks such as SQL injection. This study proposes a new IPS service that prevents SQL injections against cloud computing websites (CCW) using a signature-based approach. A model has been implemented on three virtual machines. Through this implementation, a service-based intrusion prevention system in cloud computing (SIPSCC) is proposed, investigated and evaluated from three perspectives: vulnerability detection, average time, and false positives.
Keywords: SQL; Web sites; cloud computing; digital signatures; security of data; virtual machines; CCW; IDS; IPS; Internet; SIPSCC; SQL injections; cloud computing Web site; intelligent intrusion prevention system; intrusion detection system; service-based intrusion prevention system in cloud computing; signature-based device approach; virtual machines; vulnerability detection; Cloud computing; Databases; Educational institutions; Intrusion detection; Servers; SIPSCC; CCW; IDS; IPS; Open Source Host-based Intrusion Detection System (OSSEC) (ID#: 15-5256)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822321&isnumber=6822285
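A signature-based check of the kind SIPSCC layers in front of the cloud-hosted database can be sketched with a few regular expressions. The patterns below are illustrative stand-ins, not the system's actual signature set.

// Match incoming query strings against known SQL-injection signatures
// before they reach the database (patterns invented for illustration).
#include <regex>
#include <string>
#include <vector>

bool looksLikeSqlInjection(const std::string& input) {
    static const std::vector<std::regex> signatures = {
        std::regex(R"('\s*or\s+'?1'?\s*=\s*'?1)", std::regex::icase),
        std::regex(R"(union\s+select)", std::regex::icase),
        std::regex(R"(;\s*drop\s+table)", std::regex::icase),
    };
    for (const auto& sig : signatures)
        if (std::regex_search(input, sig)) return true;  // signature hit
    return false;
}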
Qingshui Xue; Fengying Li; Zhenfu Cao, "Proxy Multi-Signature Binding Positioning Protocol," Communications in China (ICCC), 2014 IEEE/CIC International Conference on, pp. 166, 170, 13-15 Oct. 2014. doi: 10.1109/ICCChina.2014.7008265
Abstract: Position-based cryptography has attracted considerable attention from researchers. In the mobile Internet, there are many position-based security applications. For the first time, a new concept, proxy multi-signature binding positioning protocols, is proposed. Based on a secure positioning protocol, a model of proxy multi-signature binding positioning protocols is proposed. In the model, positioning protocols are bound to the proxy multi-signature tightly, not loosely. Further, we propose a scheme for proxy multi-signature binding positioning protocols. As far as we know, it is the first such scheme.
Keywords: cryptographic protocols; mobile Internet; position-based cryptography; position-based security application; proxy multisignature binding positioning protocol; Cryptography; Internet; Mobile communication; Open systems; Privacy; Protocols; Positioning protocol; UC security; model; proxy multi-signature; proxy signature; scheme (ID#: 15-5257)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7008265&isnumber=7008220
Trenwith, P.M.; Venter, H.S., "A Digital Forensic Model For Providing Better Data Provenance In The Cloud," Information Security for South Africa (ISSA), 2014, pp. 1 ,6, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950489
Abstract: The cloud has made digital forensic investigations exceedingly difficult due to the fact that data may be spread over an ever-changing set of hosts and data centres. The normal search and seizure approach that digital forensic investigators tend to follow does not scale well in the cloud because it is difficult to identify the physical devices that data resides on. In addition, the location of these devices is often unknown or unreachable. A solution to identifying the physical device can be found in data provenance. Similar to the tags included in an email header, indicating where the email originated, a tag added to data, as it is passed on by nodes in the cloud, identifies where the data came from. If such a trace can be provided for data in the cloud it may ease the investigating process by indicating where the data can be found. In this research the authors propose a model that aims to identify the physical location of data, both where it originated and where it has been as it passes through the cloud. This is done through the use of data provenance. The data provenance records will provide digital investigators with a clear record of where the data has been and where it can be found in the cloud.
Keywords: cloud computing; digital forensics; cloud computing; data provenance; digital forensic model; email header; search and seizure approach; Cloud computing; Computational modeling; Computers; Digital forensics; Open systems; Protocols; Servers; Cloud Computing; Digital Forensic Investigation; Digital Forensics; annotations; bilinear pairing technique; chain of custody; data provenance (ID#: 15-5258)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950489&isnumber=6950479
Sitnikova, E.; Asgarkhani, M., "A Strategic Framework For Managing Internet Security," Fuzzy Systems and Knowledge Discovery (FSKD), 2014 11th International Conference on, pp. 947, 955, 19-21 Aug. 2014. doi: 10.1109/FSKD.2014.6980967
Abstract: The internet, which was originally developed as an open distributed system, has since evolved to become a key platform for connectivity of businesses and communities. Today, the internet is used for transferring critical information amongst sophisticated systems. These systems extend beyond one business organization to communities of customers and suppliers. Consequently, vulnerabilities and risks to the Internet are equally relevant to systems that are integrated within corporate networks. Cloud computing solutions, Supervisory Control and Data Acquisition (SCADA) systems and the Bring Your Own Device (BYOD) approach adopted by some organizations are examples of the complexity of managing Internet security today. These systems are not only vulnerable to their own system-specific issues but are also threatened by other Internet-related vulnerabilities. Whilst numerous previous studies have identified the need for managing Internet security, there remains a need for a strategic approach to security management of the Internet and of sensitive, Internet-integrated Industrial Control Systems (ICS). This paper examines research on Internet security using a risk management approach. It presents an overview of key issues and recommends a management framework for secure Internet access.
Keywords: Bring Your Own Device; SCADA systems; business data processing; cloud computing; industrial control; open systems; risk management; security of data; BYOD approach; ICS integrated systems; Internet security management; SCADA systems; bring your own device approach; business organization; cloud computing solutions; industrial control system integrated systems; open distributed system; risk management approach; supervisory control and data acquisition systems; Cloud computing; Computer crime; Computer hacking; Organizations; SCADA systems; Cloud Computing; Cyber Security; Internet Security; Risk Management; SCADA Systems; Strategic Security Management (ID#: 15-5259)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980967&isnumber=6980796
Bakshi, K., "Secure Hybrid Cloud Computing: Approaches And Use Cases," Aerospace Conference, 2014 IEEE, pp. 1, 8, 1-8 March 2014. doi: 10.1109/AERO.2014.6836198
Abstract: Hybrid cloud is defined as a cloud infrastructure composed of two or more cloud infrastructures (private, public, and community clouds) that remain unique entities, but are bound together via technologies and approaches for the purposes of application and data portability. This paper will review a novel approach for implementing a secure hybrid cloud. Specifically, public and private cloud entities will be discussed for a hybrid cloud approach. The approach is based on the extension of virtual Open Systems Interconnection (OSI) Layer 2 switching functions from a private cloud to public clouds, tunneled over an OSI Layer 3 connection. As a result of this hybrid cloud approach, virtual workloads can be migrated from the private cloud to the public cloud and continue to be part of the same Layer 2 domain as in the private cloud, thereby maintaining consistent operational paradigms in both the public and private clouds. This paper will introduce and discuss the virtual switching technologies which are fundamental underpinnings of the secure hybrid approach. This paper will discuss not only the virtual Layer 2 technical architecture of this approach, but also related security components. Specifically, data-in-motion security between the public and private clouds and inter-workload secure communication in the public cloud will be reviewed. As part of the hybrid cloud approach, security aspects like encrypted communication tunnels, key management, and security management will be discussed. Moreover, management consoles, control points, and integration with cloud orchestration systems will also be discussed. Additionally, hybrid cloud considerations for network services like network firewalls, server load balancers, application accelerators, and network routing functions will be examined. Finally, several practical use cases applicable to the aerospace industry, like workload bursting, application development environments, and Disaster Recovery as a Service, will be explored.
Keywords: cloud computing; open systems; security of data; OSI; aerospace industry; application accelerators; cloud infrastructure; cloud orchestration systems; community clouds; data portability; disaster recovery; encrypted communication tunnels; key management; motion security; network firewall; network routing functions; open systems interconnection; private clouds; public clouds; secure hybrid cloud computing; security aspects; security components; security management; server load balancers; switching functions; virtual switching technologies; Cloud computing; Computer architecture; Switches; Virtual machine monitors; Virtual machining (ID#: 15-5260)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6836198&isnumber=6836156
Xiao Chun Yin; Zeng Guang Liu; Hoon Jae Lee, "An Efficient And Secured Data Storage Scheme In Cloud Computing Using ECC-Based PKI," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 523, 527, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779015
Abstract: Cloud computing is a set of resources and services offered through the Internet. Cloud services are delivered from data centres located throughout the world. Cloud computing facilitates its consumers by providing virtual resources via the internet. The rapid growth of the field of cloud computing also raises severe security concerns. Security has remained a constant issue for open systems and the internet, and cloud computing suffers particularly. Lack of security is the only hurdle to wide adoption of cloud computing. Cloud computing is surrounded by many security issues, such as securing data and examining the utilization of the cloud by cloud computing vendors. This paper proposes a scheme to securely store and access data via the internet. We have used ECC-based PKI for the certificate procedure because the use of ECC significantly reduces the computation cost, message size and transmission overhead over RSA-based PKI, as a 160-bit key size in ECC provides security comparable to a 1024-bit key in RSA. We have designed a Secured Cloud Storage Framework (SCSF). In this framework, users can not only securely store and access data in the cloud but can also share data with multiple users through the insecure internet in a secured way. This scheme can ensure the security and privacy of the data in the cloud.
Keywords: cloud computing; computer centres; data privacy; open systems; public key cryptography; security of data; storage management; ECC-based PKI; RSA based PKI; SCSF; certificate procedure; cloud computing; cloud services; computation cost; data centres; data privacy; data security; message size; open systems; secured cloud storage framework; secured data storage scheme; security concern; transmission overhead; unsecure Internet; virtual resources; Cloud computing; Educational institutions; Elliptic curve cryptography; Elliptic curves; Certificate; Cloud computing; Cloud storage; ECC; PKI (ID#: 15-5261)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779015&isnumber=6778899
Paudel, S.; Tauber, M.; Wagner, C.; Hudic, A.; Wee-Kong Ng, "Categorization of Standards, Guidelines and Tools for Secure System Design for Critical Infrastructure IT in the Cloud," Cloud Computing Technology and Science (CloudCom), 2014 IEEE 6th International Conference on, pp. 956, 963, 15-18 Dec. 2014. doi: 10.1109/CloudCom.2014.172
Abstract: With the increasing popularity of cloud computing, security in cloud-based applications is gaining awareness and is regarded as one of the most crucial factors for the long-term success of such applications. Despite all the benefits of cloud computing, its fate lies in its success in gaining trust from its users, achieved by ensuring that cloud services are built in a safe and secure manner. This work evaluates existing security standards and tools for creating Critical Infrastructure (CI) services in cloud environments -- often implemented as cyber-physical systems (CPS). We also identify security issues from a literature review and from a showcase analysis. Furthermore, we analyse and evaluate how mitigation options for the identified open security issues for CI in the cloud point to individual aspects of standards and guidelines, to support the creation of secure CPS/CI in the cloud. Additionally, we present the results in a multidimensional taxonomy based on the mapping of the issues and the standards and tools. We show which areas require attention, as they are currently not completely covered by existing standards, guidelines and tools.
Keywords: cloud computing; critical infrastructures; open systems; security of data; standards; trusted computing; CPS; cloud computing; cloud environments; cloud services; cloud-based applications; critical infrastructure IT; critical infrastructure services; cyberphysical systems; guideline categorization; multidimensional taxonomy; open security issues; secure system design; standard categorization; Cloud computing; Context; Guidelines; Security; Standards; Taxonomy; CPS; critical infrastructure; secure software development; security-engineering (ID#: 15-5262)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7037790&isnumber=7036227
Diogo, P.; Reis, L.P.; Vasco Lopes, N., "Internet Of Things: A System's Architecture Proposal," Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on, pp. 1, 6, 18-21 June 2014. doi: 10.1109/CISTI.2014.6877072
Abstract: The Internet of Things (IoT) is seen as the future of the Internet. We will step from the typical current communication paradigm to a much wider spectrum, in which ordinary "things" talk to each other independently of human interaction. Its importance is emphasized in the health industry, where it can save lives and improve the quality of living of the ageing and disabled population. IoT is not just things connected to the Internet: it is the intelligent systems we will be able to build on top of it that will introduce us to a better quality of life. However, IoT faces a major problem: fragmentation and interoperability. If we want things to communicate with each other intelligently and autonomously, then the new future Internet must be structured to allow it: the industry must adopt current standards and provide interoperability with other systems, and developers must be aware of this issue too. Every new device should be IoT-ready for future integration into the IoT. This article focuses on health-related use cases, detailing how IoT could be deployed to aid in specific situations. The second part of the article takes on the current IoT problems, presents a communication paradigm, and proposes a new IoT system architecture.
Keywords: Internet of Things; health care; medical information systems; open systems; Internet of Things; IoT system architecture; ageing population quality of living improvement; communication paradigm; disabled population quality of living improvement; fragmentation problem; health industry; intelligent systems; interoperability problem; quality of life; Internet of Things; Logic gates; Security; Telecommunication standards; Web services; Internet of Things; M2M; architecture; communication; e-health; fragmentation; interoperability (ID#: 15-5263)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6877072&isnumber=6876860
Saravanakumar, C.; Arun, C., "Survey On Interoperability, Security, Trust, Privacy Standardization Of Cloud Computing," Contemporary Computing and Informatics (IC3I), 2014 International Conference on, pp. 977, 982, 27-29 Nov. 2014. doi: 10.1109/IC3I.2014.7019735
Abstract: Cloud computing is a service-oriented concept that offers everything as a service. These services are deployed at the server with the necessary credentials in order to provide reliable services to the customer. The customer always wants to process and store data in the cloud with efficient access from different locations. Security is the key parameter for securing the customer's data. Cloud computing security issues are addressed in various standards and techniques, but these lack a complete solution. Privacy issues in cloud access are handled and assessed using privacy protocols and assessment techniques, which are also addressed. Trust issues in cloud computing have been addressed with different models. Inter-cloud and intra-cloud standards for cloud interoperability are identified in order to highlight the challenges that exist during cloud interaction. Deploying cloud resources over a cloud environment under different models also presents problems. This paper surveys recent work on cloud interoperability, security, privacy, and trust, analyzed on the basis of standards and guidelines. The overall focus of this paper is to establish interoperability among different cloud service providers for effective interaction by maximizing the QoS of cloud computing.
Keywords: cloud computing; data privacy; open systems; security of data; trusted computing; QoS; assessment techniques; cloud access; cloud computing security issues; cloud environment; cloud interaction; cloud interoperability; cloud privacy; intercloud standard; intracloud standard; privacy issues; privacy protocols; service oriented concept; Cloud computing; Computational modeling; Interoperability; Privacy; Security; Standards; Cloud Interoperability; Privacy; Security; Standardization; Trust Management (ID#: 15-5264)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019735&isnumber=7019573
Aouadi, M.H.E.; Toumi, K.; Cavalli, A., "On Modeling and Testing Security Properties of Vehicular Networks," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, pp. 42, 50, March 31 2014-April 4 2014. doi: 10.1109/ICSTW.2014.56
Abstract: In this paper a new model to formally represent some units of a vehicular network system is presented. We show how this formalism, based on finite state machines augmented with variables, allows us to represent this kind of system. We focus on a vehicle-to-infrastructure (V2I) communication scenario with the Dynamic Route Planning (DRP) service as a case study, and enrich this model with a new negotiation scenario. Next, we present the notion of a test in our framework and discuss some testing scenarios that capture security and interoperability properties. To support the theoretical framework, we translate the system specification into IF code, which we use to generate test cases with the TestGen-IF tool. These test cases allow us to perform experiments to verify the security and interoperability properties.
Keywords: finite state machines; intelligent transportation systems; open systems; security of data; DRP service; IF code; TestGen-IF tool; V2I communication; dynamic route planning; finite state machines; interoperability properties; negotiation scenario; security property testing; vehicle to infrastructure; vehicular network system; Interoperability; Navigation; Roads; Security; Software; Testing; Vehicles (ID#: 15-5265)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825637&isnumber=6825623
Genge, B.; Beres, A.; Haller, P., "A Survey On Cloud-Based Software Platforms To Implement Secure Smart Grids," Power Engineering Conference (UPEC), 2014 49th International Universities, pp. 1, 6, 2-5 Sept. 2014. doi: 10.1109/UPEC.2014.6934607
Abstract: The Smart Grid has been characterized as the next-generation power grid, in which modern Information and Communication Technologies (ICT) will improve control, reliability, and safety. Although the adoption of generic off-the-shelf ICT in the Smart Grid provides indisputable advantages and benefits, it raises several issues concerning the reliability and security of communications -- the core infrastructure of the Smart Grid. Cloud computing has developed and evolved over the past years, becoming a real choice for Smart Grid infrastructure because of the availability, scalability, performance, and interoperability that it offers. In this paper we present a survey of the existing cloud-based software platforms for implementing secure Smart Grids. Security issues such as authentication and authorization of users, data encryption, availability, attacker impact, detection, and trust management have received significant attention in previous work. Nevertheless, as shown in this paper, their integration and adaptation to emerging fields such as the Smart Grid is still in an embryonic state. As such, we report recent advancements and software platforms specifically for the Smart Grid, and we outline several issues as well as suggestions for designing security-aware platforms for the Smart Grid.
Keywords: cloud computing; open systems; power engineering computing; power system security; smart power grids; Information and Communication Technologies; cloud based software platform; cloud computing; generic off-the-shelf ICT; interoperability; next generation power grid safety control; secure smart grid infrastructure reliability; Availability; Cloud computing; Educational institutions; Encryption; Smart grids; Smart Grid; cloud computing; privacy; security (ID#: 15-5266)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6934607&isnumber=6934581
Brahma, S.; Kwiat, K.; Varshney, P.K.; Kamhoua, C., "Diversity and System Security: A Game Theoretic Perspective," Military Communications Conference (MILCOM), 2014 IEEE, pp. 146, 151, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.30
Abstract: It has been argued that systems composed of similar components (i.e., a monoculture) are more prone to attacks than systems that exhibit diversity. It is not currently clear, however, how much diversity is needed and how to leverage the underlying diversity in the design space. Here we study these issues using a game-theoretic model comprising multiple systems and an attacker. The model illustrates how the concept of the Nash equilibrium provides a theoretical framework for designing strategic security solutions, and how the mixed-strategy solution space provides a conceptual basis for defining optimal randomization techniques that can exploit the underlying diversity. The paper also studies how strategic behavior influences the diversity and vulnerability of an overall system. Simulation results provide further insights into the effectiveness of our solution approach and the dynamics of strategic interaction in the context of system security.
Keywords: game theory; open systems; telecommunication security; Nash equilibrium; diversity; game theory; mixed strategy solution space; optimal randomization; system security; Circuit faults; Fault tolerant systems; Games; Redundancy; Security; Switches (ID#: 15-5267)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956751&isnumber=6956719
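To make the mixed-strategy idea concrete, consider a toy two-platform deployment game (the payoff numbers below are hypothetical, and this is not the authors' model). The defender chooses which platform to deploy, the attacker chooses which platform to target, and the attacker's payoff is highest when the exploit matches the deployment. The Java sketch computes the defender's equilibrium randomization, which makes the attacker indifferent between targets and caps the attacker's payoff below what either pure deployment would allow:

    public class DiversityGame {
        // Attacker's payoff when the defender deploys platform i and the
        // attacker crafts an exploit for platform j (hypothetical numbers):
        // the payoff is high when the exploit matches the deployed platform.
        static final double[][] A = {
                {0.9, 0.1},   // defender deploys platform 0
                {0.2, 0.8}};  // defender deploys platform 1

        public static void main(String[] args) {
            // In the mixed-strategy Nash equilibrium of this 2x2 zero-sum game,
            // the defender randomizes so the attacker is indifferent between
            // targets: p*A[0][0] + (1-p)*A[1][0] = p*A[0][1] + (1-p)*A[1][1].
            double p = (A[1][1] - A[1][0]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1]);
            System.out.printf("Deploy platform 0 with prob %.3f, platform 1 with prob %.3f%n",
                    p, 1 - p);
            // Attacker's expected payoff at equilibrium (same for either target),
            // lower than the 0.9 or 0.8 a predictable monoculture would concede.
            double v = p * A[0][0] + (1 - p) * A[1][0];
            System.out.printf("Attacker's equilibrium payoff: %.3f%n", v);
        }
    }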
Shafagh, H.; Hithnawi, A., "Poster Abstract: Security Comes First, a Public-key Cryptography Framework for the Internet of Things," Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, pp. 135, 136, 26-28 May 2014. doi: 10.1109/DCOSS.2014.62
Abstract: Novel Internet services are emerging around an increasing number of sensors and actuators in our surroundings, commonly referred to as smart devices. Smart devices, which form the backbone of the Internet of Things (IoT), enable alternative forms of user experience by means of automation, convenience, and efficiency. At the same time, new security and safety issues arise, given the Internet connectivity of smart devices and their interaction with humans' proximate living space. Hence, security is a fundamental requirement of IoT design. In order to remain interoperable with the existing infrastructure, we postulate a security framework compatible with standard IP-based security solutions, yet optimized to meet the constraints of the IoT ecosystem. In this ongoing work, we first identify the necessary components of interoperable, secure end-to-end communication incorporating public-key cryptography (PKC), and tackle the computational and communication overheads involved. The required components are, on the hardware side, affordable hardware acceleration engines for cryptographic operations and, on the software side, header compression and long-lasting secure sessions. In future work, we will focus on integrating these components into a framework and evaluating an early prototype of it.
Keywords: IP networks; Internet; Internet of Things; open systems; public key cryptography; IP-based security solutions; Internet of Things; Internet services; Internet-connectivity; IoT; end-to-end communication; interoperability; public-key cryptography; safety issues; security issues; smart devices; Acceleration; Cryptography; Engines; Hardware; Internet of Things; Protocols (ID#: 15-5268)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846155&isnumber=6846129
Neisse, R.; Fovino, I.N.; Baldini, G.; Stavroulaki, V.; Vlacheas, P.; Giaffreda, R., "A Model-Based Security Toolkit for the Internet of Things," Availability, Reliability and Security (ARES), 2014 Ninth International Conference on, pp. 78, 87, 8-12 Sept. 2014. doi: 10.1109/ARES.2014.17
Abstract: The control and protection of user data is a very important aspect in the design and deployment of the Internet of Things (IoT). The heterogeneity of the IoT technologies, the number of the participating devices and systems, and the different types of users and roles create important challenges in the IoT context. In particular, requirements of scalability, interoperability and privacy are difficult to address even with the considerable amount of existing work both in the research and standardization community. In this paper we propose a Model-based Security Toolkit, which is integrated in a management framework for IoT devices, and supports specification and efficient evaluation of security policies to enable the protection of user data. Our framework is applied to a Smart City scenario in order to demonstrate its feasibility and performance.
Keywords: Internet of Things; data privacy; formal specification; open systems; security of data; Internet of Things; IoT devices; Smart City scenario; interoperability requirement; model-based security toolkit; privacy requirement; scalability requirement; security policy evaluation; security policy specification; user data control; user data protection; Context; Context modeling; Data models; Data privacy; Graphical user interfaces; Security; Standardization; Internet of Things; Management; Security (ID#: 15-5269)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980266&isnumber=6980232
Rajagopal, N.; Prasad, K.V.; Shah, M.; Rukstales, C., "A New Data Classification Methodology To Enhance Utility Data Security," Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES, pp. 1, 5, 19-22 Feb. 2014. doi: 10.1109/ISGT.2014.6816451
Abstract: Classification of data is an important step in strengthening control of data, defining how it is distributed, and determining who has access to it. There are established practices in other industries, such as finance and banking. This paper describes a security framework for the classification of data in electric utilities. Presently, data classification is viewed mostly from an Information Security (IS) perspective, with limited involvement of business functions, and the present approach in utilities does not cover much of the data from Operational Technology (OT) systems. Implementation of the Smart Grid increases the complexity of data classification, with possibilities for dynamic data aggregation through enterprise-level system integration. NIST Special Publication 800-60 provides guidelines for arriving at a security classification based on a small set of broadly classified types of utility data. The new approach presented here overcomes this limitation by mapping data types to appropriate interface categories based on the guidelines of the Smart Grid Interoperability Panel (SGIP) in NISTIR 7628. A case study of a data classification exercise carried out for a North American utility is presented, lessons learned and recommendations for enhancing the approach are discussed, and a registry tool developed for data classification using the new approach is explained.
Keywords: data handling; electricity supply industry; open systems; pattern classification; power engineering computing; power system protection; power system security; security of data; smart power grids; NIST Special Publication 800-60; North American utility; SGIP; Smart Grid Interoperability Panel - NISTIR 7628; business functions; data classification complexity; data classification methodology; data control; data type mapping; dynamic data aggregation; electric utilities; enterprise level system integration; information security perspective; interface categories; security classification; security framework; smart grid; utility data security enhancement; NIST; Security; Smart grids; Critical Infrastructure Protection; Data Classification; Security; Security framework; Smart Grid (ID#: 15-5270)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816451&isnumber=6816367
Bovet, G.; Hennebert, J., "Distributed Semantic Discovery for Web-of-Things Enabled Smart Buildings," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp. 1, 5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814015
Abstract: Nowadays, our surrounding environment is more and more scattered with various types of sensors. Due to their intrinsic properties and representation formats, they form small islands isolated from each other. In order to increase interoperability and release their full capabilities, we propose to represent device descriptions, including data and service invocation, with a common model that allows composing mashups of heterogeneous sensors. Pushing this paradigm further, we also propose to augment service descriptions with a discovery protocol that eases the automatic assimilation of knowledge. In this work, we describe the architecture supporting what can be called a Semantic Sensor Web-of-Things. As a proof of concept, we apply our proposal to the domain of smart buildings, composing a novel ontology covering heterogeneous sensing, actuation, and service invocation. Our architecture also emphasizes energy consumption and is optimized for constrained environments.
Keywords: Internet of Things; Web services; home automation; ontologies (artificial intelligence); open systems; software architecture; wireless sensor networks; actuator; data invocation; distributed semantic discovery protocols; interoperability; intrinsic properties; knowledge automatic assimilation; ontology covering heterogeneous sensor; semantic sensor Web of Things; service invocation; smart building; Ontologies; Resource description framework; Semantics; Sensors; Smart buildings; Web services (ID#: 15-5271)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814015&isnumber=6813963
Amoah, R.; Suriadi, S.; Camtepe, S.; Foo, E., "Security Analysis Of The Non-Aggressive Challenge Response Of The DNP3 Protocol Using A CPN Model," Communications (ICC), 2014 IEEE International Conference on, pp. 827, 833, 10-14 June 2014. doi: 10.1109/ICC.2014.6883422
Abstract: Distributed Network Protocol Version 3 (DNP3) is the de-facto communication protocol for power grids. Standard-based interoperability among devices has made the protocol useful to other infrastructures such as water, sewage, oil and gas. DNP3 is designed to facilitate interaction between master stations and outstations. In this paper, we apply a formal modelling methodology called Coloured Petri Nets (CPN) to create an executable model representation of DNP3 protocol. The model facilitates the analysis of the protocol to ensure that the protocol will behave as expected. Also, we illustrate how to verify and validate the behaviour of the protocol, using the CPN model and the corresponding state space tool to determine if there are insecure states. With this approach, we were able to identify a Denial of Service (DoS) attack against the DNP3 protocol.
Keywords: Petri nets; SCADA systems; computer network security; graph colouring; open systems; power grids; protocols; CPN model; DNP3 protocol; coloured Petri nets; de-facto communication protocol; denial of service attack; distributed network protocol version 3; executable model representation; formal modelling methodology; insecure states; master stations; nonaggressive challenge response; outstations; power grids; security analysis; standard-based interoperability; state space tool; Analytical models; Authentication; Data models; Image color analysis; Protocols; Standards; Coloured Petri Nets (CPN); Distributed Network Protocol Version 3 (DNP3); NonAggressive Challenge Response (NACR); Supervisory Control and Data Acquisition Systems (SCADA) (ID#: 15-5272)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883422&isnumber=6883277
Hitefield, S.; Nguyen, V.; Carlson, C.; O'Shea, T.; Clancy, T., "Demonstrated LLC-Layer Attack And Defense Strategies For Wireless Communication Systems," Communications and Network Security (CNS), 2014 IEEE Conference on, pp. 60, 66, 29-31 Oct. 2014. doi: 10.1109/CNS.2014.6997466
Abstract: In this work we demonstrate an over-the-air capability to exploit software weaknesses in the signal processing code implementing the physical and link layers of the OSI stack. Our test bed includes multiple nodes leveraging both GNU Radio and the Universal Software Radio Peripheral to demonstrate these attacks and corresponding defensive strategies. More specifically, we examine two duplex modem implementations, continuous wave and OFDM, and a link layer framing protocol vulnerable to buffer overflow attacks. We also discuss possible attacks against the network layer and above by exploiting a waveform utilizing the GNU Radio tunnel/tap block, which allows the waveform to directly interact with the Linux kernel's network stack. Lastly, we consider several different defensive countermeasures, both active and passive, for detecting vulnerabilities in the waveform implementation and also detecting malicious activity in the system. These mitigation strategies should be used to protect communications systems from succumbing to similar classes of attacks.
Keywords: Linux; OFDM modulation; modems; open systems; operating system kernels; protocols; radio networks; signal processing; software radio; telecommunication security; GNU radio tunnel; LLC-layer attack; Linux kernel network stack; OFDM; OSI stack; buffer overflow attack; communication system protection; continuous wave; duplex modem implementation; link layer framing protocol; malicious activity detection; physical layer; signal processing code implementation; universal software radio peripheral; wireless communication system; OFDM; Payloads; Protocols; Receivers; Security; Software; Wireless communication (ID#: 15-5273)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6997466&isnumber=6997445
Ficco, M.; Tasquier, L.; Aversa, R., "Agent-Based Intrusion Detection for Federated Clouds," Intelligent Networking and Collaborative Systems (INCoS), 2014 International Conference on, pp. 586, 591, 10-12 Sept. 2014. doi: 10.1109/INCoS.2014.93
Abstract: In recent years, the cloud services market has experienced extremely rapid growth, as reported in several market research reports, which may lead to severe scalability problems. Federating multiple clouds is therefore enjoying a lot of attention from both the academic and commercial points of view. In this context, publish-subscribe is a widely used paradigm to support the interoperability of federated clouds. In this paper, we describe some potential vulnerabilities of a publish-subscribe based federated cloud system. In particular, we propose an agent-based system that aims at monitoring security vulnerabilities affecting such inter-cloud cooperation solutions.
Keywords: cloud computing; message passing; middleware; open systems; security of data; agent-based intrusion detection; federated cloud interoperability; inter-cloud cooperation solutions; publish-subscribe based federated cloud system; security vulnerabilities; Force; Measurement; Middleware; Monitoring; Probes; Security; Subscriptions; Cloud federation; agent-based approach; denial of service; intrusion detection; publish-subscribe (ID#: 15-5274)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7057154&isnumber=7057036
Draper-Gil, G.; Ferrer-Gomila, J.L.; Hinarejos, M.F.; Tauber, A., "An Optimistic Certified E-Mail Protocol For The Current Internet E-Mail Architecture," Communications and Network Security (CNS), 2014 IEEE Conference on, pp. 382, 390, 29-31 Oct. 2014. doi: 10.1109/CNS.2014.6997507
Abstract: Certified mail is a service in which an item is delivered to the recipient in exchange for evidence proving that the recipient has received it. Certified e-mail (ce-mail) should therefore be a service in which an e-mail is delivered to its recipient in exchange for such evidence. Even though there are several scientific proposals for ce-mail, the only real applications offering ce-mail services come from private companies or public administrations. All of them are designed independently of traditional e-mail infrastructures, tailored to custom needs, and do not address interoperability between different systems. This explains why scientific proposals have not reached the implementation phase and why existing proposals are not widespread. In this paper we present a ce-mail solution designed to take into account the existing Internet e-mail infrastructure, providing all the required evidence about the submission and receipt of certified e-mails.
Keywords: Internet; electronic mail; open systems; protocols; public administration; Internet e-mail architecture; ce-mail; certified mail service; e-mail infrastructures; interoperability; optimistic certified e-mail protocol; private companies; public administration; Conferences; Electronic mail; Internet; Postal services; Proposals; Protocols; Security; certified delivery; certified e-mail; e-mail security; fair-exchange (ID#: 15-5275)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6997507&isnumber=6997445
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Pattern Locks, 2014 |
Pattern locks, familiar primarily to Android users, are best known as access codes formed by connecting a series of dots with lines. Research into pattern locks, and into locking patterns more broadly, shows promise for many more uses. The research cited here was presented in 2014.
Ishizaki, K.; Daijavad, S.; Nakatani, T., "Transforming Java Programs For Concurrency Using Double-Checked Locking Pattern," Performance Analysis of Systems and Software (ISPASS), 2014 IEEE International Symposium on, pp.128,129, 23-25 March 2014. doi: 10.1109/ISPASS.2014.6844469
Abstract: Java provides a synchronized construct for multi-core programming with many workloads. However, naïve use of the synchronized construct causes performance scalability problems due to lock contention. One source of lock contention is a synchronized collection class. There are known concurrency code patterns that alleviate lock contention, such as Concurrent Collections (CC), Read-Write Locks (RWL), and Double-Checked Locking (DCL). To date, however, there has been no algorithm to transform a program to use DCL. This paper describes steps for rewriting synchronized blocks using DCL.
Keywords: Java; concurrency control; parallel programming; CC; DCL; Java programs; RWL; concurrency code patterns; concurrent collection; double-checked locking pattern; lock contention; multicore programming; performance scalability problems; read-write lock; synchronized block rewriting; synchronized construct; Concurrent computing; Java; Libraries; Programming; Scalability; Synchronization; Transforms (ID#: 15-5634)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844469&isnumber=6844447
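For readers unfamiliar with the idiom, the following sketch shows what a synchronized block looks like after it has been rewritten in the double-checked locking style. This is the standard Java 5+ form of the pattern, not the paper's transformation algorithm; ResourceCache and ExpensiveResource are illustrative names:

    public class ResourceCache {
        // 'volatile' is essential: it prevents the reference from being
        // published before the object is fully constructed under the
        // Java 5+ memory model.
        private volatile ExpensiveResource resource;

        public ExpensiveResource get() {
            ExpensiveResource r = resource;
            if (r == null) {                      // first check, no lock taken
                synchronized (this) {
                    r = resource;
                    if (r == null) {              // second check, under the lock
                        r = new ExpensiveResource();
                        resource = r;
                    }
                }
            }
            return r;                             // fast path never contends
        }

        static class ExpensiveResource { /* costly to build, safe to share */ }
    }

Once initialized, readers take the lock-free fast path, which is what relieves the contention the abstract describes.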
Zongwei Zhu; Xi Li; Hengchang Liu; Cheng Ji; Yuan Xu; Xuehai Zhou; Beilei Sun, "A Thread Behavior-Based Memory Management Framework on Multi-core Smartphone," Engineering of Complex Computer Systems (ICECCS), 2014 19th International Conference on, pp. 91, 97, 4-7 Aug. 2014. doi: 10.1109/ICECCS.2014.21
Abstract: Memory management systems significantly affect the overall performance of modern multi-core smartphone systems. Android, one of the most popular smartphone operating systems, adopts a global buddy system with the FCFS (first come, first served) principle for memory allocation and release requests, to manage external fragmentation and maintain memory allocation efficiency. However, an extensive experimental study of thread behaviors indicates that external memory fragmentation is no longer the crucial bottleneck in most Android applications. Specifically, a thread usually allocates or releases memory in bursts, resulting in serious memory locks and inefficient memory allocation, and the pattern of such bursting behavior varies throughout the life cycle of a thread. The conventional FCFS policy of the Android buddy system fails to adapt to such variations and thus suffers performance degradation. In this paper, we propose a novel memory management framework for multi-core smartphone systems, called Memory Management Based on Thread Behaviors (MMBTB). It adapts to various thread behaviors through targeted optimizations to provide efficient memory allocation. The efficiency and effectiveness of this new memory management scheme on multi-core architectures are demonstrated by a theoretical emulation model. Our experimental studies on a real Android system show that MMBTB can improve the efficiency of memory allocation by 12%-20%, confirming the theoretical analysis.
Keywords: Android (operating system); multiprocessing systems; smart phones; storage management; Android applications; Android buddy system; FCFS; MMBTB; first come first served principle; global buddy system; memory allocation; memory external fragmentation; memory management based on thread behaviors; memory management systems; multicore architecture; multicore smartphone systems; smartphone operating systems; thread behavior-based memory management framework; Androids; Humanoid robots; Instruction sets; Libraries; Memory management; Multicore processing; Resource management; Android; behaviors; memory management; smartphone; thread (ID#: 15-5635)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6923123&isnumber=6923102
Thatmann, D., "Distributed Authorization In Complex Multi Entity-Driven API Ecosystems," Signal Processing and Communication Systems (ICSPCS), 2014 8th International Conference on, pp. 1, 9, 15-17 Dec. 2014. doi: 10.1109/ICSPCS.2014.7021072
Abstract: In certain business sectors, adapting to modern, cost-reducing technologies and service models can still be a challenge. This especially applies to health-care-related SMEs, such as hospitals, where cost reduction runs counter to the need to comply with legal regulations, and where access control must cope with a diverse landscape of health-care equipment accompanied by dynamic and complex role models. Outsourcing data storage and data processing does not seem to reduce this complexity; rather, it bears the risks of reduced data availability and loss or abuse of data, and can increase legal compliance risks and concerns. Since this applies to many SMEs, a common platform such as an ecosystem can help lower the entry barrier by providing helpful management functionality and standardized basic services, and can therefore push the adoption of modern, cost-reducing service consumption scenarios. In this paper, a generic design pattern for realizing distributed authorization in an API ecosystem is presented. The pattern is applied within a research project that aims to develop an ecosystem for trading and consuming services within demanding business sectors and to reduce lock-in effects for both service providers and consumers. The concept of distributed authorization is applied in a new, complex multi-entity use case, where access policies for RESTful APIs can be designed flexibly, taking into account the requirements of service providers and consumers, and are enforced by a central trusted third-party provider.
Keywords: application program interfaces; authorisation; API ecosystem; SME; access control; complex multientity-driven API ecosystems; data processing; distributed authorization; generic design pattern; health care equipment; hospitals; outsourcing data storage; trading; Authorization; Contracts; Ecosystems; Logic gates; Monitoring; Servers (ID#: 15-5636)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7021072&isnumber=7021039
Holey, A.; Zhai, A., "Lightweight Software Transactions on GPUs," Parallel Processing (ICPP), 2014 43rd International Conference on, pp. 461, 470, 9-12 Sept. 2014. doi: 10.1109/ICPP.2014.55
Abstract: Graphics Processing Units (GPUs) provide an attractive option for extracting data-level parallelism from diverse applications. However, some applications, although they possess abundant data-level parallelism, exhibit irregular memory access patterns to shared data structures. Porting such applications to GPUs requires synchronization mechanisms such as locks, which significantly increase programming complexity. Coarse-grained locking, where a single lock controls all the shared resources, reduces programming effort but can substantially serialize GPU threads. On the other hand, fine-grained locking, where each data element is protected by an independent lock, facilitates maximum parallelism but requires significant programming effort. To overcome these challenges, we propose to support software transactional memory (STM) on GPUs, achieving performance comparable to fine-grained locking while requiring minimal programming effort. Software-based transactional execution can incur significant runtime overheads due to activities such as detecting conflicts across thousands of GPU threads and managing a consistent memory state. In this paper we therefore illustrate three lightweight STM designs that are capable of scaling to a large number of GPU threads. In our system, programmers simply mark the critical sections in their applications, and the underlying STM support achieves performance comparable to fine-grained locking.
Keywords: data structures; graphics processing units; multi-threading; parallel processing; storage management; transaction processing; GPU threads; GPUs; STM; application porting; coarse-grained locking; consistent memory state management; data-level parallelism; graphics processing units; irregular memory access patterns; programming complexity; shared data structures; software transactional memory; software transactions; software-based transactional execution; synchronization mechanisms; Graphics processing units; Instruction sets; Parallel processing; Programming; Reactive power; System recovery; GPUs; parallel programming; software transactional memory (ID#: 15-5637)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957255&isnumber=6957198
Rybnicek, M.; Lang-Muhr, C.; Haslinger, D., "A Roadmap To Continuous Biometric Authentication On Mobile Devices," Wireless Communications and Mobile Computing Conference (IWCMC), 2014 International, pp. 122, 127, 4-8 Aug. 2014. doi: 10.1109/IWCMC.2014.6906343
Abstract: Mobile devices nowadays contain a variety of personal and even business-related information that is worth protecting from unauthorized access. Owners of such devices should use a passcode or unlock pattern to secure these important assets, but since such techniques are perceived as annoying barriers, locked devices are not the norm. Even when such authentication mechanisms are used, they are very easy to circumvent. Biometric methods are promising means of securing mobile devices in a user-friendly, discreet way. Based on embedded sensors such as gyroscopes, accelerometers, and microphones, which are standard in today's mobile devices, behavioral biometric approaches appear even more attractive for user verification than physiological methods like fingerprint or face recognition. Many biometric approaches have been presented so far. After a short overview of relevant representatives, we discuss these methods in terms of their applicability and limitations. Our findings are summarized and presented as a roadmap to provide a foundation for future research.
Keywords: biometrics (access control); mobile handsets; security of data; authentication mechanisms; continuous biometric authentication; mobile devices; Accelerometers; Authentication; Electronic mail; Feature extraction; Mobile handsets; Tactile sensors; Bring your own Device; Continuous Biometrics; Mobile Security (ID#: 15-5638)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906343&isnumber=6906315
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Peer to Peer Security, 2014 |
In a peer-to-peer (P2P) network, tasks such as searching for files or streaming audio or video are shared among multiple interconnected nodes -- peers -- which share resources with other network participants without the need for centralized coordination by servers. Peer-to-peer systems pose considerable challenges for computer security. Like other forms of software, P2P applications can contain vulnerabilities; what makes security particularly difficult for P2P software is that peer-to-peer applications act as servers as well as clients, making them more vulnerable to remote exploits. The research articles in this bibliography address such topics as large-scale overlay networks, unstructured networks, mobile streaming, BitTorrent, and traffic identification. The work cited here was presented in 2014.
Samuvelraj, G.; Nalini, N., "A Survey Of Self Organizing Trust Method To Avoid Malicious Peers From Peer To Peer Network," Green Computing Communication and Electrical Engineering (ICGCCEE), 2014 International Conference on, pp. 1, 4, 6-8 March 2014. doi: 10.1109/ICGCCEE.2014.6921379
Abstract: Networks are subject to attacks from malicious sources, and sending data securely over a network is one of the most challenging tasks. A peer-to-peer (P2P) network is a type of decentralized, distributed network architecture in which individual nodes act as both servers and clients of resources. Peer-to-peer systems are incredibly flexible and can be used for a wide range of functions, but a P2P system is also prone to malicious attacks. To provide security over a peer-to-peer system, the self-organizing trust model has been proposed. Here, the trustworthiness of peers is calculated based on past interactions and recommendations, which are evaluated using importance, recentness, and satisfaction parameters. In this way, good peers are able to form trust relationships in their proximity and avoid malicious peers.
Keywords: client-server systems; computer network security; fault tolerant computing; peer-to-peer computing; recommender systems; trusted computing; P2P network; client-server resources; decentralized network architecture; distributed network architecture; malicious attacks; malicious peers; malicious sources; peer to peer network; peer to peer systems; peer trustworthiness; satisfaction parameters; self organizing trust method; self-organizing trust model; Computer science; History; Measurement; Organizing; Peer-to-peer computing; Security; Servers; Metrics; Network Security; Peer to Peer; SORT (ID#: 15-5276)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921379&isnumber=6920919
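A minimal Java sketch of the kind of trust aggregation the abstract describes -- satisfaction weighted by importance and discounted by recentness -- might look as follows. The record fields, the exponential decay, and all constants are illustrative assumptions, not the SORT authors' exact formulas:

    import java.util.List;

    public class TrustScore {
        // One past interaction with a peer: how satisfied we were, how
        // important the transaction was, and how many rounds ago it happened.
        record Interaction(double satisfaction, double importance, int age) {}

        // Aggregate trust: satisfaction weighted by importance, with older
        // interactions discounted exponentially (decay is a hypothetical
        // fading constant in (0, 1)).
        static double trust(List<Interaction> history, double decay) {
            double num = 0, den = 0;
            for (Interaction i : history) {
                double w = i.importance() * Math.pow(decay, i.age());
                num += w * i.satisfaction();
                den += w;
            }
            return den == 0 ? 0 : num / den;  // 0 = unknown peer, 1 = fully trusted
        }

        public static void main(String[] args) {
            List<Interaction> h = List.of(
                    new Interaction(1.0, 0.5, 10),  // old, good, minor
                    new Interaction(0.2, 1.0, 1));  // recent, bad, major
            System.out.printf("trust = %.3f%n", trust(h, 0.9));
        }
    }

The recent, important, unsatisfactory interaction dominates the score, which is exactly the effect the recentness and importance parameters are meant to produce.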
Jagadale, N.N.; Parvat, T.J., "A Secured Key Issuing Protocol For Peer-To-Peer Network," Wireless Computing and Networking (GCWCN), 2014 IEEE Global Conference on, pp. 213, 218, 22-24 Dec. 2014. doi: 10.1109/GCWCN.2014.7030881
Abstract: Identity-based cryptography (IBC) was introduced into peer-to-peer (P2P) networks for identity verification and authentication purposes. However, current IBC-based solutions are unable to solve the problem of secure private key issuing. In this paper, we propose a secure key issuing system using IBC for P2P networks. We present an IBC infrastructure setup phase, a peer registration solution using Shamir's (k, n) secret sharing, and a secure key issuing scheme that employs key privacy authorities (KPAs) and a key generation centre (KGC) to securely issue private keys to peers, in order to make IBC systems applicable and more acceptable in real-world P2P networks. Moreover, to maintain the security of the KPAs, we develop a system that authenticates the KPAs using a Byzantine fault tolerance protocol. Theoretical analysis and experimental results show that SKIP performs efficiently and effectively and can support large-scale systems.
Keywords: cryptographic protocols; peer-to-peer computing; Byzantine fault tolerance protocol; IBC infrastructure setup phase; IBC-based solutions; KGC; KPA; P2P networks; identity-based cryptography; key generation centre; key privacy authorities; peer registration solution; peer-to-peer network; secured key issuing protocol; Accuracy; Algorithm design and analysis; Encryption; Peer-to-peer computing; Protocols; Encryption; Key Generation; Peer to Peer Network (ID#: 15-5277)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7030881&isnumber=7030833
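The peer registration step relies on Shamir's (k, n) secret sharing, which the self-contained Java sketch below implements in its textbook form (a demo-sized prime and hard-coded parameters are used for readability; this is not the paper's SKIP code). A secret becomes the constant term of a random degree-(k-1) polynomial over a prime field; any k shares recover it by Lagrange interpolation at x = 0:

    import java.math.BigInteger;
    import java.security.SecureRandom;

    public class Shamir {
        // Small demo prime; a real deployment would use a prime larger than
        // any possible secret (e.g., 256 bits).
        static final BigInteger P = BigInteger.valueOf(2089);

        // Split 'secret' into n shares, any k of which reconstruct it.
        static BigInteger[][] split(BigInteger secret, int k, int n) {
            SecureRandom rnd = new SecureRandom();
            BigInteger[] coeff = new BigInteger[k];
            coeff[0] = secret;  // constant term holds the secret
            for (int i = 1; i < k; i++)
                coeff[i] = new BigInteger(P.bitLength() - 1, rnd);
            BigInteger[][] shares = new BigInteger[n][];
            for (int x = 1; x <= n; x++) {
                BigInteger y = BigInteger.ZERO;
                for (int i = k - 1; i >= 0; i--)  // Horner evaluation mod P
                    y = y.multiply(BigInteger.valueOf(x)).add(coeff[i]).mod(P);
                shares[x - 1] = new BigInteger[]{BigInteger.valueOf(x), y};
            }
            return shares;
        }

        // Lagrange interpolation at x = 0 over any k shares.
        static BigInteger combine(BigInteger[][] s) {
            BigInteger secret = BigInteger.ZERO;
            for (int i = 0; i < s.length; i++) {
                BigInteger num = BigInteger.ONE, den = BigInteger.ONE;
                for (int j = 0; j < s.length; j++) {
                    if (i == j) continue;
                    num = num.multiply(s[j][0].negate()).mod(P);
                    den = den.multiply(s[i][0].subtract(s[j][0])).mod(P);
                }
                secret = secret.add(s[i][1].multiply(num)
                        .multiply(den.modInverse(P))).mod(P);
            }
            return secret;
        }

        public static void main(String[] args) {
            BigInteger[][] shares = split(BigInteger.valueOf(1234), 3, 5);
            // Reconstruct from shares 1, 3, and 5 only: prints 1234.
            System.out.println(combine(
                    new BigInteger[][]{shares[0], shares[2], shares[4]}));
        }
    }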
Xianglin Wei; Ming Chen; Jianhua Fan; Guomin Zhang, "A General Framework for Detecting Malicious Peers in Reputation-Based Peer-to-Peer Systems," P2P, Parallel, Grid, Cloud and Internet Computing (3PGCIC), 2014 Ninth International Conference on, pp. 463, 468, 8-10 Nov. 2014. doi: 10.1109/3PGCIC.2014.95
Abstract: Constructing an efficient and trustworthy content delivery community at low cost is the general goal of the designers of Peer-to-Peer (P2P) systems. To achieve this goal, many reputation mechanisms have been introduced in recent years to alleviate the blindness of peer selection in distributed P2P environments where malicious peers coexist with honest ones. They indeed provide incentives for peers to contribute more resources to the system and thus promote whole-system performance. However, little attention has been paid to how to identify the malicious peers in this situation. In this paper, a general framework is presented for detecting malicious peers in reputation-based P2P systems. Firstly, malicious peers are divided into various categories and the problem is formulated. Secondly, the general framework is put forward; it mainly contains four steps: data collection, data processing, malicious peer detection, and malicious peer clustering. Thirdly, an algorithmic implementation of this general framework is presented. Finally, the framework's application and its performance evaluation are shown.
Keywords: peer-to-peer computing; security of data; P2P systems; distributed P2P environment; malicious peer detection; malicious peers clustering; peer selection; reputation-based peer-to-peer systems; Algorithm design and analysis; Clustering algorithms; Communities; Entropy; Peer-to-peer computing; Security; Topology; Framework; Malicious Peers; Peer-to-Peer; Reputation (ID#: 15-5278)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024629&isnumber=7024297
Sancho, R.; Lopes Pereira, R., "Hybrid Peer-to-Peer DNS," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp. 977, 981, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785470
Abstract: Domain censorship has escalated quickly over time, as have Distributed Denial of Service attacks on the Internet. The Domain Name System (DNS) currently in use has a small number of root servers, which have full control of the domains; by controlling these servers, or access to them, one can censor or impersonate parts of the Internet. We propose an open DNS that uses a Peer-to-Peer (P2P) network to store and distribute the records. Anyone can join the network and both use and provide Distributed Zone Files (DZFs). DZFs are signed with private keys, allowing multiple zone files for each domain and giving the end user the choice of which keys, if any, to trust. Building a DNS purely on a P2P network, however, incurs some overhead: response times for queries are on the order of 10 to 20 times greater than with the current DNS. The system therefore gives users a way to route around censored domains while still using the current DNS for domains that are not censored, keeping response times low for uncensored domains and acceptable for censored ones.
Keywords: Internet; computer network security; peer-to-peer computing; DZF; Internet; P2P network; distributed denial of service attacks; distributed zone files; domain censorship; domain name system; hybrid peer-to-peer DNS; peer-to-peer networks; root servers; Computers; Domain Name System; Internet; Peer-to-peer computing; Public key; Servers; Time factors (ID#: 15-5279)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785470&isnumber=6785290
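The core mechanism -- zone records signed with private keys, with the end user deciding which keys to trust -- can be sketched with the stock java.security API. This is an illustrative fragment, not the authors' implementation, and the record string is made up:

    import java.nio.charset.StandardCharsets;
    import java.security.*;

    public class SignedZoneRecord {
        public static void main(String[] args) throws Exception {
            // The zone owner signs a record with a private key; any peer
            // holding the matching public key (and choosing to trust it)
            // can verify a record fetched from the P2P overlay.
            KeyPairGenerator gen = KeyPairGenerator.getInstance("EC");
            gen.initialize(256);
            KeyPair owner = gen.generateKeyPair();

            byte[] record = "example.org. 3600 IN A 192.0.2.7"
                    .getBytes(StandardCharsets.UTF_8);

            Signature signer = Signature.getInstance("SHA256withECDSA");
            signer.initSign(owner.getPrivate());
            signer.update(record);
            byte[] sig = signer.sign();

            // A retrieving peer re-checks the signature before using the record.
            Signature verifier = Signature.getInstance("SHA256withECDSA");
            verifier.initVerify(owner.getPublic());
            verifier.update(record);
            System.out.println("record authentic: " + verifier.verify(sig));
        }
    }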
Arora, D.; Verigin, A.; Godkin, T.; Neville, S.W., "Statistical Assessment of Sybil-Placement Strategies within DHT-Structured Peer-to-Peer Botnets," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp. 821, 828, 13-16 May 2014. doi: 10.1109/AINA.2014.100
Abstract: Botnets are a well-recognized global cyber-security threat, as they enable attack communities to command large collections of compromised computers (bots) on demand. Peer-to-peer (P2P) distributed hash tables (DHTs) have become particularly attractive botnet command and control (C&C) solutions due to the high resiliency gained via the diffuse random graph overlays they produce. The injection of Sybils, computers pretending to be valid bots, remains a key defensive strategy against DHT-structured P2P botnets. This research uses packet-level network simulations to explore the relative merits of random, informed, and partially informed Sybil placement strategies. It is shown that random placements perform nearly as effectively as the tested more informed strategies, which require higher levels of inter-defender co-ordination. Moreover, it is shown that aspects of DHT-structured P2P botnets behave as statistically non-ergodic processes when viewed from the perspective of stochastic processes. This suggests that although optimal Sybil placement strategies appear to exist, they would need careful tuning to each specific P2P botnet instance.
Keywords: command and control systems; computer network security; invasive software; peer-to-peer computing; statistical analysis; stochastic processes; C&C solutions; DHT-structured P2P botnets; DHT-structured peer-to-peer botnets; Sybil placement strategy statistical assessment; botnet command and control solution; compromised computer on-demand collections; cyber security threat; diffused random graph; interdefender coordination; packet level network simulation; peer-to-peer distributed hash tables; stochastic process; Computational modeling; Computers; Internet; Network topology; Peer-to-peer computing; Routing; Topology (ID#: 15-5280)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838749&isnumber=6838626
Ekanayake, S.; Tennekoon, R.; Atukorale, A., "Decentralized Reputation Based Trust Model For Peer-To-Peer Content Distribution Networks," Information and Automation for Sustainability (ICIAfS), 2014 7th International Conference on, pp.1, 6, 22-24 Dec. 2014. doi: 10.1109/ICIAFS.2014.7069556
Abstract: Content distribution systems have improved broadly with the rapid growth of novel and innovative technologies. Peer-to-peer (P2P) content distribution network (CDN) technology is one such improvement: it promises low-cost, efficient distribution of high-demand data, and it is gradually evolving into the next generation of CDNs. Communication among the nodes of such open network infrastructures is commonly perceived as an environment offering both opportunities and threats, which are rooted in trust issues. Thus, a trust mechanism is required to establish secure communication between content delivery nodes. This paper introduces a novel, decentralized, cooperative, and self-organized reputation-based trust algorithm to mitigate the security complications of CDNs. To illustrate the trust model, a novel P2P hybrid network infrastructure is introduced. The key notion of the research is to validate the trustworthiness of the target before sending or accepting traffic. Furthermore, the performance of the proposed trust algorithm is evaluated using the ns-2 simulator, analyzing the trusted and untrusted behavior of the P2P content delivery nodes.
Keywords: peer-to-peer computing; trusted computing; P2P CDN technologies; novel P2P hybrid network infrastructure; novel decentralized cooperative self-organized reputation based trust algorithm; peer-to-peer content distribution network technology; secure communication; Indexes; Object oriented modeling; Peer-to-peer computing; Protocols; Security; Servers; Topology; Content Delivery Networks; Peer-to-Peer Content delivery; Reputation based trust; ns-2 (ID#: 15-5281)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069556&isnumber=7069512
Narang, P.; Ray, S.; Hota, C.; Venkatakrishnan, V., "PeerShark: Detecting Peer-to-Peer Botnets by Tracking Conversations," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 108, 115, 17-18 May 2014. doi: 10.1109/SPW.2014.25
Abstract: The decentralized nature of Peer-to-Peer (P2P) botnets makes them difficult to detect. Their distributed nature also exhibits resilience against take-down attempts. Moreover, smarter bots are stealthy in their communication patterns, and elude the standard discovery techniques which look for anomalous network or communication behavior. In this paper, we propose PeerShark, a novel methodology to detect P2P botnet traffic and differentiate it from benign P2P traffic in a network. Instead of the traditional 5-tuple 'flow-based' detection approach, we use a 2-tuple 'conversation-based' approach which is port-oblivious, protocol-oblivious and does not require Deep Packet Inspection. PeerShark could also classify different P2P applications with an accuracy of more than 95%.
Keywords: computer network security; invasive software; peer-to-peer computing; telecommunication traffic; 2-tuple conversation-based approach; P2P applications; P2P botnet traffic; PeerShark; anomalous network; communication behavior; communication patterns; conversations tracking; flow-based detection; peer-to-peer botnets detection; port-oblivious; protocol-oblivious; standard discovery techniques; Electronic mail; Feature extraction; Firewalls (computing); IP networks; Internet; Peer-to-peer computing; Ports (Computers); botnet; machine learning; peer-to-peer (ID#: 15-5282)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957293&isnumber=6957265
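The port-oblivious 2-tuple idea is simple to sketch: packets are keyed by the unordered pair of host addresses, so the same two hosts map to one conversation regardless of direction or port churn. The Java fragment below is an illustrative reduction of that step (the trace and the byte-count feature are made up; PeerShark's actual feature set is richer):

    import java.util.*;

    public class ConversationAggregator {
        record Packet(String srcIp, String dstIp, int srcPort, int dstPort, int bytes) {}

        // Port-oblivious 2-tuple key: the same pair of hosts maps to one
        // conversation whichever direction the packet flows.
        static String key(Packet p) {
            return p.srcIp().compareTo(p.dstIp()) < 0
                    ? p.srcIp() + "|" + p.dstIp()
                    : p.dstIp() + "|" + p.srcIp();
        }

        public static void main(String[] args) {
            List<Packet> trace = List.of(
                    new Packet("10.0.0.5", "10.0.0.9", 50123, 443, 1200),
                    new Packet("10.0.0.9", "10.0.0.5", 443, 50123, 900),
                    new Packet("10.0.0.5", "10.0.0.9", 50999, 8333, 400), // new ports, same conversation
                    new Packet("10.0.0.5", "10.0.0.7", 50124, 80, 700));

            // Per-conversation aggregates like these would feed the classifier.
            Map<String, Integer> volume = new HashMap<>();
            for (Packet p : trace)
                volume.merge(key(p), p.bytes(), Integer::sum);
            volume.forEach((k, v) -> System.out.println(k + " -> " + v + " bytes"));
        }
    }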
Lin Cai; Rojas-Cessa, R., "Containing Sybil Attacks On Trust Management Schemes For Peer-To-Peer Networks," Communications (ICC), 2014 IEEE International Conference on, pp. 841,846, 10-14 June 2014. doi: 10.1109/ICC.2014.6883424
Abstract: In this paper, we introduce a framework to detect possible Sybil attacks against trust management schemes in peer-to-peer (P2P) networks used to limit the proliferation of malware. Sybil attacks may undermine the effectiveness of such schemes, as malicious peers can use bogus identities to artificially manipulate the reputations, and therefore the levels of trust, of several legitimate and honest peers. The framework includes a k-means clustering scheme, a method to verify the transactions reported by peers, and identification of possible collaborations between peers. We prove that as the amount of public information on peers increases, the effectiveness of Sybil attacks may decrease. We study the performance of each of these mechanisms, in terms of the number of infected peers in a P2P network, using computer simulation. We show the effect of each mechanism and of their combinations, and we show that the combination of these schemes is effective and efficient.
Keywords: computer network performance evaluation; computer network security; invasive software; pattern clustering; peer-to-peer computing; trusted computing; P2P network; artificial reputation manipulation; bogus identities; collaboration identification; computer simulation; honest peers; k-means clustering scheme; legitimate peers; malicious peers; malware proliferation; peer-to-peer networks; performance analysis; public information; sybil attacks; transaction verification; trust levels; trust management schemes; Clustering algorithms; Databases; Estimation; Information systems; Malware; Peer-to-peer computing; Distributed system; k-means clustering; malware proliferation; P2P network; sybil attack; transaction verification (ID#: 15-5283)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883424&isnumber=6883277
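As a rough illustration of the clustering step, the sketch below runs a plain two-centroid k-means over hypothetical per-peer features; the features, values, and seeding are invented for the example, and the paper's actual feature extraction is not reproduced here:

    import java.util.*;

    public class PeerClustering {
        // Each peer is described by two hypothetical features: the fraction
        // of its reported transactions that could be verified, and its
        // rating bias against other peers.
        public static void main(String[] args) {
            double[][] peers = {
                    {0.95, 0.02}, {0.90, 0.05}, {0.97, 0.01},   // likely honest
                    {0.20, 0.80}, {0.15, 0.90}, {0.30, 0.70}};  // likely colluding sybils
            double[][] centroids = {peers[0].clone(), peers[3].clone()};
            int[] label = new int[peers.length];

            for (int iter = 0; iter < 20; iter++) {
                // Assignment step: each peer joins its nearest centroid.
                for (int i = 0; i < peers.length; i++)
                    label[i] = dist(peers[i], centroids[0])
                            < dist(peers[i], centroids[1]) ? 0 : 1;
                // Update step: recompute each centroid as the mean of its members.
                for (int c = 0; c < 2; c++) {
                    double[] sum = new double[2];
                    int n = 0;
                    for (int i = 0; i < peers.length; i++)
                        if (label[i] == c) { sum[0] += peers[i][0]; sum[1] += peers[i][1]; n++; }
                    if (n > 0) centroids[c] = new double[]{sum[0] / n, sum[1] / n};
                }
            }
            System.out.println(Arrays.toString(label));  // e.g., [0, 0, 0, 1, 1, 1]
        }

        static double dist(double[] a, double[] b) {
            return Math.hypot(a[0] - b[0], a[1] - b[1]);
        }
    }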
Qiyi Han; Hong Wen; Ting Ma; Bin Wu, "Self-Nominating Trust Model Based On Hierarchical Fuzzy Systems For Peer-To-Peer Networks," Communications in China (ICCC), 2014 IEEE/CIC International Conference on, pp.199,203, 13-15 Oct. 2014. doi: 10.1109/ICCChina.2014.7008271
Abstract: Security is one of the most critical constraints on the expansion of P2P networks, whose autonomous, dynamic, and distributed nature benefits both valid and malicious users. Exploiting a reputation-based trust model is a feasible way to build trust relationships among peers in such an open environment. While most existing trust models focus on decreasing abuse, the intentions and sharing capabilities of peers are mostly ignored. In this paper, we present a self-nominating trust model based on hierarchical fuzzy systems to quantify the activities of peers. We integrate reputation based on eight factors: three promising factors provided by the resource holder to demonstrate its desires, four capability factors recorded by the requester to identify the provider's service capability, and a further security factor adopted to evaluate the peers' trust with respect to security. Experiments illustrate that our trust model improves the efficiency and security of P2P systems.
Keywords: computer network security; fuzzy set theory; peer-to-peer computing; trusted computing; P2P networks; hierarchical fuzzy system; peer trust relationship; peer-to-peer networks; reputation-based trust model; resource holder; security constraint; security factor; self-nominating trust model; Computational modeling; Fuzzy systems; Measurement; Peer-to-peer computing; Privacy; Quality of service; Security; Hierarchical fuzzy system; Promise; Reputation; Trust (ID#: 15-5284)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7008271&isnumber=7008220
Basu, S.; Roy, S., "A Group-Based Multilayer Encryption Scheme For Secure Dissemination Of Post-Disaster Situational Data Using Peer-To-Peer Delay Tolerant Network," Advances in Computing, Communications and Informatics (ICACCI, 2014 International Conference on, pp. 1566, 1572, 24-27 Sept. 2014. doi: 10.1109/ICACCI.2014.6968358
Abstract: In the event of a disaster, the communication infrastructure can be partially or totally destroyed, or rendered unavailable due to high congestion. Today's smartphones, which can communicate directly via Bluetooth or WiFi without using any network infrastructure, can be used to create an opportunistic post-disaster communication network in which situational data can spread quickly, even in the harshest conditions. However, the presence of malicious and unscrupulous entities that forward sensitive situational data in such a network may pose serious threats to the accuracy and timeliness of the data. Providing basic security features such as authentication, confidentiality, and integrity to all communications in this network therefore becomes inevitable. But in such an opportunistic network, which uses short-range and sporadic wireless connections, no trusted third party can be used, as it would not be accessible locally at runtime. As a result, traditional security services like cryptographic signatures, certificates, authentication protocols, and end-to-end encryption become inapplicable. Moreover, since disaster management is generally a group-based activity, a forwarding entity may be better authenticated based on verification of its group membership. In this paper, we propose a Group-based Distributed Authentication Mechanism that enables nodes to mutually authenticate each other as members of valid groups, and we also suggest a Multilayer Hashed Encryption Scheme in which rescue groups collaboratively contribute towards preserving the confidentiality and integrity of sensitive situational information. The schemes provide authentication, confidentiality, and integrity in a fully decentralized manner to suit the requirements of an opportunistic post-disaster communication network. We emulate a post-disaster scenario in the ONE simulator to show the effectiveness of our schemes in terms of delivery ratio, average delay, and overhead ratio.
Keywords: computer network security; cryptography; data integrity; delay tolerant networks; disasters; emergency management; peer-to-peer computing; ONE simulator; average delay; communication infrastructure; data accuracy; data authentication; data confidentiality; data integrity; data timeliness; delivery ratio; disaster management; group membership verification; group-based distributed authentication mechanism; group-based multilayer hashed encryption scheme; malicious entities; mutual authentication; opportunistic postdisaster communication network; overhead ratio; peer-to-peer delay tolerant network; rescue-groups; secure postdisaster situational data dissemination; security features; sensitive situational data forwarding; short-range connections; sporadic wireless connections; unscrupulous entities; Authentication; Communication networks; Encryption; Nonhomogeneous media; Peer-to-peer computing; Delay Tolerant Network; Group-based Authentication; Group-pin; Hashing; Multilayer Encryption; Post Disaster Communication Network; Situational Analysis (ID#: 15-5285)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6968358&isnumber=6968191
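The "Group-pin" keyword suggests authentication bootstrapped from a pre-shared group credential. Below is a minimal, hypothetical sketch of HMAC challenge-response over such a shared secret; it is not the paper's actual protocol, which additionally covers multilayer encryption of the forwarded data.

```python
# Rough sketch of group-membership authentication from a pre-shared group
# secret (the "Group-pin" keyword). All protocol details here are illustrative
# assumptions, not the paper's design. Mutual authentication is obtained by
# running the challenge-response exchange in both directions.
import hmac, hashlib, os

class Member:
    def __init__(self, pin: bytes):
        self._pin = pin
    def respond(self, challenge: bytes) -> bytes:
        # Prove knowledge of the pin without revealing it on the air.
        return hmac.new(self._pin, challenge, hashlib.sha256).digest()
    def verify(self, challenge: bytes, response: bytes) -> bool:
        return hmac.compare_digest(response, self.respond(challenge))

GROUP_PIN = b"rescue-team-7-shared-secret"   # distributed before deployment
alice, bob = Member(GROUP_PIN), Member(GROUP_PIN)
mallory = Member(b"wrong-pin")               # not a group member

chal = os.urandom(16)
print("bob accepted:    ", alice.verify(chal, bob.respond(chal)))      # True
print("mallory accepted:", alice.verify(chal, mallory.respond(chal)))  # False
```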
Chaumette, S.; Ouoba, J., "A Multilevel Platform For Secure Communications In A Fleet Of Mobile Phones," Mobile Computing, Applications and Services (MobiCASE), 2014 6th International Conference on, pp. 173, 174, 6-7 Nov. 2014. doi: 10.4108/icst.mobicase.2014.258028
Abstract: The work presented in this paper targets MANETs composed of mobile phones, possibly equipped with different wireless technologies. These nodes operate in a totally decentralized and unplanned manner, communicating with each other via peer-to-peer wireless technologies. In this context, the multi-technology capabilities of the mobile phones should be used efficiently to increase and diversify their peer-to-peer capacities. We have therefore defined a dedicated multilevel platform that allows a set of mobile nodes to communicate securely in peer-to-peer mode, using the most appropriate approach depending on the context (costs and/or preferences of the entities). This paper is organized as follows. We first present the characteristics that we consider significant for building a proper model of the system. We then give an overview of the solutions that we have proposed for the main operations within our multilevel platform. Finally, we describe a mobile application that we have developed and present the performance analysis that we have conducted.
Keywords: mobile ad hoc networks; peer-to-peer computing; telecommunication security; MANET; mobile phones; multilevel platform; multitechnology capability; peer-to-peer wireless technology; secure communications; Ad hoc networks; Context; Mobile communication; Mobile computing; Mobile handsets; Peer-to-peer computing; Security; communication; efficiency; mobility; peer-to-peer; security; wireless (ID#: 15-5286)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7026296&isnumber=7026266
Bioglio, V.; Gaeta, R.; Grangetto, M.; Sereno, M., "Rateless Codes and Random Walks for P2P Resource Discovery in Grids," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 4, pp. 1014, 1023, April 2014. doi: 10.1109/TPDS.2013.141
Abstract: Peer-to-peer (P2P) resource location techniques in grid systems have recently been investigated to obtain scalability, reliability, efficiency, fault tolerance, security, and robustness. Resolving queries to locate resources, and updating information on resource status, can be abstracted in these systems as the problem of allowing one peer to obtain a local view of global information defined across all peers of an unstructured P2P network. In this paper, the system is represented as a set of nodes connected to form a P2P network, where each node holds a piece of information that must be communicated to all participants. Moreover, we assume that the information can change dynamically and that each peer periodically needs to access the data values of all other peers. A novel approach based on a continuous flow of control packets exchanged among the nodes, using the random walk principle and rateless coding, is proposed. An innovative rateless decoding mechanism able to cope with asynchronous information updates is also proposed. The performance of the proposed system is evaluated both analytically and experimentally by simulation. The analytical results show that the proposed strategy guarantees quick diffusion of the information and scales well to large networks. Simulations show that the technique is also effective in the presence of network and information dynamics.
Keywords: codes; decoding; grid computing; peer-to-peer computing; resource allocation; security of data; software fault tolerance; P2P resource discovery; P2P unstructured network; asynchronous information updates; continuous flow; control packets; fault tolerance; grid systems; information dynamics; peer-to-peer resource location techniques; query resolution; random walks; rateless codes; rateless decoding mechanism; resource locating; security; Decoding; Encoding; Equations; Mathematical model; Peer-to-peer computing; Robustness; Vectors; Resource discovery; peer to peer; random walks; rateless codes (ID#: 15-5287)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6519231&isnumber=6750096
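A toy illustration of the core mechanism may help: control packets random-walk a small overlay, XOR-combining the data of visited peers, and a collector decodes with a peeling decoder once enough packets arrive. The topology, walk lengths, and packet counts below are arbitrary choices, not the paper's parameters.

```python
# Sketch: random-walk control packets carry an XOR of visited nodes' data;
# a peer that collects enough packets recovers everyone's data by "peeling"
# equations that involve a single unknown. Illustrative, not the paper's code.
import random

random.seed(1)
N = 8
graph = {i: [(i - 1) % N, (i + 1) % N, (i + 3) % N] for i in range(N)}  # toy overlay
data = {i: bytes([i * 17 % 256]) for i in range(N)}                     # 1-byte payloads

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def random_walk_packet(start, steps):
    """Walk the overlay, accumulating an XOR of visited nodes' data."""
    node, visited, payload = start, set(), bytes(1)   # payload starts at 0x00
    for _ in range(steps):
        if node not in visited:
            visited.add(node)
            payload = xor(payload, data[node])
        node = random.choice(graph[node])
    return visited, payload

packets = [random_walk_packet(random.randrange(N), random.randrange(1, 6))
           for _ in range(60)]

known, changed = {}, True
while changed:                         # peeling decoder
    changed = False
    for visited, payload in packets:
        unknown = [v for v in visited if v not in known]
        if len(unknown) == 1:
            reduced = payload
            for v in visited:
                if v in known:
                    reduced = xor(reduced, known[v])
            known[unknown[0]] = reduced
            changed = True

print(f"decoded {len(known)}/{N} peers; decoded values correct:",
      all(known[i] == data[i] for i in known))
```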
Jin Zhou; Chen, C.L.P.; Long Chen; Han-Xiong Li, "A Collaborative Fuzzy Clustering Algorithm in Distributed Network Environments," Fuzzy Systems, IEEE Transactions on, vol. 22, no. 6, pp. 1443, 1456, Dec. 2014. doi: 10.1109/TFUZZ.2013.2294205
Abstract: Due to privacy and security requirements or technical constraints, traditional centralized approaches to data clustering are difficult to perform in a large, dynamic, distributed peer-to-peer network. In this paper, a novel collaborative fuzzy clustering algorithm is proposed, in which the centralized clustering solution is approximated by performing distributed clustering at each peer with the collaboration of other peers. The required communication links are established at the level of cluster prototypes and attribute weights, and information exchange occurs only between topologically neighboring peers. The attribute-weight-entropy regularization technique is applied in the distributed clustering method to achieve an ideal distribution of attribute weights, which ensures good clustering results, and the important features are successfully extracted for high-dimensional data clustering. A kernelized version of the proposed algorithm is also realized as a practical tool for clustering data with “nonspherical”-shaped clusters. Experiments on synthetic and real-world datasets demonstrate the efficiency and superiority of the proposed algorithms.
Keywords: computer network security; data privacy; pattern clustering; peer-to-peer computing; attribute weights; attribute-weight-entropy regularization technique; centralized clustering solution; collaborative fuzzy clustering algorithm; distributed network environment; distributed peer-to-peer network; nonspherical-shaped clusters; peer collaboration; privacy requirement; security requirement; Clustering algorithms; Clustering methods; Collaboration; Distributed databases; Niobium; Peer-to-peer computing; Prototypes; Collaborative clustering; distributed peer-to-peer network; kernel-based clustering; subspace clustering (ID#: 15-5288)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6678768&isnumber=6963534
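As background, the sketch below shows the standard fuzzy c-means updates that collaborative variants of this kind build on; the paper's attribute weighting, entropy regularization, and peer-collaboration terms are omitted for brevity.

```python
# Standard fuzzy c-means (FCM) alternating updates; a background sketch, not
# the paper's collaborative, attribute-weighted algorithm.
import numpy as np

def fcm_memberships(X, centers, m=2.0, eps=1e-9):
    """u[k, i] = membership of point k in cluster i (each row sums to 1)."""
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + eps
    inv = d ** (-2.0 / (m - 1.0))
    return inv / inv.sum(axis=1, keepdims=True)

def fcm_centers(X, u, m=2.0):
    """Centers are membership-weighted means of the data."""
    w = u ** m
    return (w.T @ X) / w.sum(axis=0)[:, None]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
centers = X[rng.choice(len(X), 2, replace=False)]
for _ in range(25):                     # alternate the two update rules
    u = fcm_memberships(X, centers)
    centers = fcm_centers(X, u)
print(np.round(centers, 2))             # should approach the two cluster means
```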
Safa, H.; El-Hajj, W.; Moutaweh, M., "Trust Aware System for P2P Routing Protocols," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp. 829, 836, 13-16 May 2014. doi: 10.1109/AINA.2014.101
Abstract: A peer-to-peer (P2P) system is known for its scalability and dynamic nature, where nodes can join and leave the system easily and at any time. These networks are susceptible to malicious behaviors, such as nodes dropping messages or misleading requesting nodes, and P2P routing protocols are not immune to these misbehaviors. Detecting and dealing with malicious nodes will therefore lead to a more reliable and secure system. In this paper, we propose a trust-aware system for P2P routing protocols. The proposed system constantly analyzes the behaviors of all nodes to determine their trustworthiness, then classifies them accordingly, isolating the ones deemed malicious. It tracks nodes' reputations based on evaluation reports from the nodes themselves. The credibility of nodes that inaccurately evaluate other nodes is also monitored, so that malicious evaluations do not affect other nodes' reputations. We have integrated the proposed approach with several P2P routing protocols and evaluated their performance through simulations, measuring parameters such as request delivery ratio, malicious detection, and false negatives. Results show that the proposed approach significantly improves the performance of P2P routing protocols.
Keywords: peer-to-peer computing; routing protocols; telecommunication security; P2P routing protocol; false negatives; malicious detection; peer-to-peer system; request delivery ratio; trust aware system; Fingers; Peer-to-peer computing; Public key; Routing; Routing protocols; Vectors; Peer-to-peer networks; reputation; routing; trust-awareness (ID#: 15-5289)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838750&isnumber=6838626
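The following sketch illustrates the general idea of credibility-weighted reputation: each report moves a peer's reputation in proportion to the rater's credibility, and raters who consistently disagree with the consensus lose credibility. The update constants are hypothetical, not taken from the paper.

```python
# Illustrative credibility-weighted reputation updates; constants are invented.
reputation = {}   # peer  -> score in [0, 1]
credibility = {}  # rater -> weight in [0, 1]

def report(rater, target, score, lr=0.2):
    rep = reputation.setdefault(target, 0.5)
    cred = credibility.setdefault(rater, 0.5)
    # Reputation moves toward the reported score, scaled by rater credibility.
    reputation[target] = rep + lr * cred * (score - rep)
    # Raters far from the running consensus are gradually distrusted.
    error = abs(score - rep)
    credibility[rater] = max(0.0, min(1.0, cred + lr * (0.5 - error)))

for _ in range(30):
    report("honest_rater", "peer_x", 0.9)   # consistent, accurate reports
    report("liar", "peer_x", 0.0)           # malicious down-voting
print({k: round(v, 2) for k, v in reputation.items()},
      {k: round(v, 2) for k, v in credibility.items()})
```

On this toy run the liar's credibility decays toward zero, so its reports stop dragging peer_x's reputation down; that is the qualitative effect the paper's monitoring of evaluator credibility is designed to produce.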
Xiaolei Wang; Yuexiang Yang; Jie He, "Identifying P2P Network Activities on Encrypted Traffic," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 893, 899, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.117
Abstract: Peer-to-Peer (P2P) traffic has long been a dominant portion of Internet traffic and has become more and more difficult for Internet Service Providers (ISPs) and network administrators to manage. Although many methods have been proposed to classify different types of P2P applications, with satisfactory performance, research on identifying the network activities of a particular P2P application is, to the best of our knowledge, still lacking; such identification is urgently required in the context of forensic investigation of illegal P2P applications. In this paper, a novel approach based on Hidden Markov Models is proposed to identify network activities in encrypted traffic, based on analysis of the time-series characteristics and statistical properties of network traffic. After presenting a general model of network activities, TeamViewer is selected as a case study to verify the effectiveness of the approach in identifying different activities. In experiments using real network traces, our approach proves effective in identifying different activities of a P2P application, with a high true positive rate of 99.1% and a low false positive rate of 3.6%.
Keywords: Internet; computer network security; cryptography; hidden Markov models; peer-to-peer computing; telecommunication traffic; time series; ISP; Internet service providers; Internet traffic; P2P network activities identification; P2P traffic; TeamViewer; encrypted traffic; forensic investigation context; hidden Markov model; illegal P2P applications; peer-to-peer traffic; statistical properties; time series characteristics; Analytical models; Computational modeling; Cryptography; Hidden Markov models; Probability; Time series analysis; Training; Baum-Welch algorithm; Hidden Markov Model (HMM); Peer-to-Peer; TeamViewer; Viterbi algorithm; statistical properties; time series characteristics (ID#: 15-5290)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011343&isnumber=7011202
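To make the HMM machinery concrete, here is a minimal Viterbi decoder over two invented "activities" with discretized packet-size observations. All parameters below are toy values; the paper trains its models on real traffic features.

```python
# Minimal Viterbi decoding of hidden activities from observed packet sizes.
# States, symbols, and probabilities are invented for illustration only.
import numpy as np

states = ["file_transfer", "remote_control"]        # hypothetical activities
start = np.log([0.5, 0.5])
trans = np.log([[0.9, 0.1],                          # activities are "sticky"
                [0.1, 0.9]])
emit = np.log([[0.2, 0.8],                           # transfers emit large packets
               [0.8, 0.2]])                          # control traffic emits small ones

def viterbi(obs):
    T, S = len(obs), len(states)
    v = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    v[0] = start + emit[:, obs[0]]
    for t in range(1, T):
        for s in range(S):
            scores = v[t - 1] + trans[:, s]
            back[t, s] = int(np.argmax(scores))
            v[t, s] = scores[back[t, s]] + emit[s, obs[t]]
    path = [int(np.argmax(v[-1]))]                   # backtrack the best path
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [states[s] for s in reversed(path)]

sequence = [1, 1, 1, 0, 0, 0, 0, 1, 1]               # 0 = small_pkt, 1 = large_pkt
print(viterbi(sequence))
```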
Chandra, S.; Bhattacharyya, S.; Paira, S.; Alam, S.S., "A Study and Analysis on Symmetric Cryptography," Science Engineering and Management Research (ICSEMR), 2014 International Conference on, pp. 1, 8, 27-29 Nov. 2014. doi: 10.1109/ICSEMR.2014.7043664
Abstract: Technology is advancing day by day, and better, faster technology demands information security, which in turn requires data authentication at the execution level. Cryptography is a useful tool through which secure, independent data communication can be established, using two basic operations, encryption and decryption. A large number of cryptographic techniques have been proposed and implemented so far. In this paper, we survey some of the proposed mechanisms based on Symmetric Key Cryptography and make a basic comparative study among them. The basic features, advantages, drawbacks, and applications of various Symmetric Key Cryptography algorithms are described.
Keywords: cryptography; data communication; data authentication; data communication security; data independency security; decryption; encryption; execution levels; information security; symmetric key cryptography technique analysis; Algorithm design and analysis; Authentication; Encryption; Protocols; Public key; Asymmetric key cryptography; Blowfish; Cryptography; Peer-to-Peer; Public key certificate; Reed-Solomon codes; Symmetric key cryptography (ID#: 15-5291)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043664&isnumber=7043537
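As a concrete example of the symmetric class of algorithms such surveys compare, the snippet below encrypts and decrypts with AES-GCM, using the same secret key for both operations. It assumes the third-party `cryptography` package (pip install cryptography); the survey itself does not prescribe this library.

```python
# Minimal symmetric (secret-key) encryption example with AES-GCM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)   # the same key encrypts and decrypts
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # must be unique per message

ciphertext = aesgcm.encrypt(nonce, b"secret message", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
print(plaintext)                            # b'secret message'
```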
Soryal, J.; Perera, I.M.; Darwish, I.; Fazio, N.; Gennaro, R.; Saadawi, T., "Combating Insider Attacks in IEEE 802.11 Wireless Networks with Broadcast Encryption," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp. 472, 479, 13-16 May 2014. doi: 10.1109/AINA.2014.58
Abstract: The IEEE 802.11 protocols are used by millions of smartphone and tablet devices to access the Internet via Wi-Fi wireless networks or to communicate with one another directly in a peer-to-peer mode. Insider attacks are those originating from a trusted node that initially passed all the authentication steps to access the network and then became compromised. A trusted node that has turned rogue can easily perform Denial-of-Service (DoS) attacks on the Media Access Control (MAC) layer by illegally capturing the channel and preventing other legitimate nodes from communicating with one another. Insider attackers can alter the implementation of the IEEE 802.11 Distributed Coordination Function (DCF) protocol residing in the Network Interface Card (NIC) to illegally increase the probability of successful packet transmission into the channel at the expense of nodes that follow the protocol standards. The attacker fools the NIC into upgrading its firmware, forcing in a version that contains the malicious code. In this paper, we present a distributed solution to detect and isolate the attacker in order to minimize the impact of the DoS attacks on the network. Our detection algorithm enhances the DCF firmware to enable honest nodes to monitor each other's traffic and compare their observations against honest communication patterns derived from a two-dimensional Markov chain. A channel hopping scheme is then used on the physical layer (PHY) to evade the attacker. To facilitate communication among the honest member stations and minimize network downtime, we introduce two isolation algorithms, one based on identity-based encryption and another based on broadcast encryption. Our simulation results show that the latter enjoys quicker recovery time and faster network convergence.
Keywords: Internet; Markov processes; access protocols; authorisation; computer network security; cryptographic protocols; firmware; network interfaces; notebook computers; peer-to-peer computing; smart phones; wireless LAN; 2D Markov chain; DCF; DoS attack impact minimization; IEEE 802.11 distributed coordination function protocol; IEEE 802.11 wireless networks; Internet; MAC; NIC; PHY; Wi-Fi wireless networks; attacker detection; attacker isolation; authentication steps; broadcast encryption; channel hopping scheme; denial-of-service attacks; firmware; honest member stations; identity-based encryption; insider attacks; legitimate node prevention; malicious code; media access control layer; network convergence; network downtime minimization; network interface card; peer-to-peer mode; physical layer; recovery time; smartphone; successful packet transmission probability; tablet devices; trusted node; Cryptography; Detection algorithms; IEEE 802.11 Standards; OFDM; Peer-to-peer computing; Spread spectrum communication; Throughput; Broadcast encryption; Byzantine attack; DoS attack; IEEE 802.11; Markov chain; identity-based encryption (ID#: 15-5292)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838702&isnumber=6838626
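The monitoring step can be illustrated with a simple statistical test: neighbors count a station's transmissions over observed contention slots and flag counts that are implausible under the honest per-slot attempt probability. In the paper that probability comes from a two-dimensional Markov chain of the DCF backoff process; in this sketch it is simply assumed.

```python
# Sketch of greedy-MAC detection via a one-sided binomial test. The honest
# attempt probability p is assumed here; the paper derives it from a 2-D
# Markov-chain model of DCF backoff. Thresholds are illustrative.
import math

def is_greedy(observed_tx, slots, p_honest, z=3.0):
    """Flag if the observed count exceeds mean + z standard deviations."""
    mean = slots * p_honest
    std = math.sqrt(slots * p_honest * (1 - p_honest))
    return observed_tx > mean + z * std

p = 0.04                        # honest per-slot attempt probability (assumed)
print(is_greedy(48, 1000, p))   # ~40 expected; 48 is within noise  -> False
print(is_greedy(90, 1000, p))   # far above the honest model        -> True
```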
Saini, N.K.; Sihag, V.K.; Yadav, R.C., "A Reactive Approach for Detection of Collusion Attacks in P2P Trust and Reputation Systems," Advance Computing Conference (IACC), 2014 IEEE International, pp. 312, 317, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779340
Abstract: The Internet today is a medium for sharing immense amounts of information across widespread Peer-to-Peer (P2P) environments. Various application domains, such as file sharing, distributed computing, and e-community-based applications, have adopted P2P technology as the underlying network structure. The fairly open structure of P2P network applications also leaves peers exposed: interaction with an unfamiliar peer in the absence of a trusted third party makes them vulnerable to potential attacks. To enable reliable communication among peers, trust and reputation mechanisms came into existence. However, malicious behavior by peers within the network makes reputation systems themselves vulnerable to attack, and malicious peers often collude to pursue a collective objective. This paper reviews existing collusion attacks and proposes a reactive defense mechanism against them. The proposed mechanism detects collusion based on underlying trust and reputation knowledge, and provides a reduction mechanism to penalize colluding peers.
Keywords: Internet; peer-to-peer computing; security of data; trusted computing; Internet; P2P reputation systems; P2P trust systems; colluded peers; collusion attack detection; malicious peers; peer to peer environments; reactive defense mechanism; Computational modeling; Computer architecture; Conferences; Electronic mail; Peer-to-peer computing; Quality of service; Servers; Collusion; Identity; P2P; Peer; Reputation; Trust (ID#: 15-5293)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779340&isnumber=6779283
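One common collusion signal, sketched below on toy data, is a pair of peers who rate each other far above the consensus of outside raters. The thresholds and data are illustrative; the paper's detection builds on richer trust and reputation knowledge.

```python
# Toy sketch: flag peer pairs whose mutual ratings greatly exceed what
# outsiders report about them. The 0.5 bias threshold is an arbitrary choice.
from collections import defaultdict
from itertools import combinations

ratings = [  # (rater, target, score in [0, 1])
    ("a", "b", 1.0), ("b", "a", 1.0),          # mutual praise inside the clique
    ("c", "a", 0.2), ("d", "a", 0.1),          # outsiders disagree
    ("c", "b", 0.2), ("d", "b", 0.2),
    ("a", "c", 0.6), ("d", "c", 0.7),
]

given = defaultdict(dict)
for rater, target, score in ratings:
    given[rater][target] = score

def outside_consensus(target, exclude):
    scores = [given[r][target] for r in given
              if target in given[r] and r not in exclude]
    return sum(scores) / len(scores) if scores else None

suspects = []
for x, y in combinations(list(given), 2):
    mutual = given[x].get(y), given[y].get(x)
    if None in mutual:
        continue
    bias_x = mutual[1] - (outside_consensus(x, {x, y}) or 0.5)
    bias_y = mutual[0] - (outside_consensus(y, {x, y}) or 0.5)
    if bias_x > 0.5 and bias_y > 0.5:          # both inflate each other
        suspects.append((x, y))
print("suspected colluding pairs:", suspects)  # [('a', 'b')] on this toy data
```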
Wei Zhang; Yue-Ji Wang; Xiao-Lei Wang, "A Survey of Defense against P2P Botnets," Dependable, Autonomic and Secure Computing (DASC), 2014 IEEE 12th International Conference on, pp. 97, 102, 24-27 Aug. 2014. doi: 10.1109/DASC.2014.26
Abstract: A botnet, a network of computers compromised and controlled by an attacker, is one of the most significant and serious threats to the Internet, and researchers have made significant progress against them. Owing to the extensive use and unique advantages of peer-to-peer (P2P) technology, a new, more advanced form of botnet with a P2P architecture has emerged and has become more resilient to defense methods and countermeasures than traditional centralized botnets. Due to the underlying security limitations of current systems and the Internet architecture, and the complexity of P2P botnets themselves, effectively countering the global threat of P2P botnets remains a very challenging issue. In this paper, we present an overall overview and analysis of current defense methods against P2P botnets. We also analyze in detail the challenges in botnet detection, measurement, and mitigation introduced by this new form of botnet, and propose suggestions for addressing them.
Keywords: Internet; invasive software; peer-to-peer computing; Internet architecture; P2P architecture; P2P botnet complexity; P2P botnet threat; P2P technology; botnet detection; botnet measurement; botnet mitigation; countermeasures; defense method; peer-to-peer technology; security limitation; serious threat; Crawlers; Current measurement; Feature extraction; Monitoring; Peer-to-peer computing; Protocols; Topology; Botnets detection; Botnets measurement; Botnets mitigation; P2P botnet (ID#: 15-5294)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6945311&isnumber=6945641
Karuppayah, S.; Fischer, M.; Rossow, C.; Muhlhauser, M., "On Advanced Monitoring in Resilient and Unstructured P2P Botnets," Communications (ICC), 2014 IEEE International Conference on, pp. 871, 877, 10-14 June 2014. doi: 10.1109/ICC.2014.6883429
Abstract: Botnets are a serious threat to Internet-based services and end users. The recent paradigm shift from centralized to more sophisticated Peer-to-Peer (P2P)-based botnets introduces new challenges for security researchers. Centralized botnets can be easily monitored and, once their command and control server is identified, easily taken down. P2P-based botnets, however, are much more resilient against such attempts. Worse still, botnets like P2P Zeus include additional countermeasures to make monitoring and crawling more difficult for defenders. In this paper, we discuss in detail the problems of P2P botnet monitoring. As our main contribution, we introduce the Less Invasive Crawling Algorithm (LICA) for efficiently crawling unstructured P2P botnets using only local information. We compare the performance of LICA with other known crawling methods, such as depth-first and breadth-first search, by simulating these methods not only on a real-world botnet dataset but also on an unstructured P2P file-sharing network dataset. Our analysis results indicate that LICA significantly outperforms the other known crawling methods.
Keywords: Internet; invasive software; peer-to-peer computing; Internet-based services; LICA; P2P Zeus; P2P botnet monitoring; P2P-based botnets; centralized botnets; command and control server; less invasive crawling algorithm; peer-to-peer-based botnets; unstructured P2P botnet crawling; unstructured P2P botnets; unstructured P2P file sharing network dataset; Approximation algorithms; Approximation methods; Crawlers; Information systems; Monitoring; Peer-to-peer computing; Security (ID#: 15-5295)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883429&isnumber=6883277
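The contrast between crawling strategies can be sketched as follows: breadth-first crawling versus a greedy strategy that next queries the peer mentioned most often in neighbor lists seen so far, a rough stand-in for LICA's local-information ranking (the actual algorithm is more refined). On the uniform random toy graph below the two may perform similarly; the advantage of local ranking shows on the skewed topologies of real networks.

```python
# Crawling-budget comparison: BFS vs. a greedy "most-mentioned-first" crawler
# (a crude stand-in for LICA's local-information ranking, not LICA itself).
from collections import Counter, deque
import random

random.seed(3)
peers = range(50)
graph = {p: random.sample([q for q in peers if q != p], 5) for p in peers}

def bfs_crawl(start, budget):
    seen, queue, queried = {start}, deque([start]), 0
    while queue and queried < budget:
        node = queue.popleft(); queried += 1       # one query reveals a neighbor list
        for n in graph[node]:
            if n not in seen:
                seen.add(n); queue.append(n)
    return len(seen)

def greedy_crawl(start, budget):
    seen, mentions, queried = {start}, Counter({start: 1}), set()
    for _ in range(budget):
        candidates = list(seen - queried)
        if not candidates:
            break
        node = max(candidates, key=lambda p: mentions[p])  # most-mentioned first
        queried.add(node)
        for n in graph[node]:
            mentions[n] += 1
            seen.add(n)
    return len(seen)

print("BFS found:   ", bfs_crawl(0, 15))
print("greedy found:", greedy_crawl(0, 15))
```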
Leontiadis, I.; Molva, R.; Onen, M., "A P2P-Based Usage Control Enforcement Scheme Resilient to Re-Injection Attacks," A World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2014 IEEE 15th International Symposium on, pp. 1, 8, 19 June 2014. doi: 10.1109/WoWMoM.2014.6918974
Abstract: Existing privacy controls based on access control techniques do not prevent massive dissemination of private data by unauthorized users. We suggest a usage control enforcement scheme that allows users to gain control over their data during its entire lifetime. The scheme is based on a peer-to-peer architecture whereby a different set of peers is randomly selected for data assignment. Usage control is achieved based on the assumption that at least t out of any set of n peers will not behave maliciously. Such a system would still suffer from re-injection attacks whereby attackers can gain ownership of data and the usage policy thereof by simply re-storing data after slight modification of the content. In order to cope with re-injection attacks the scheme relies on a similarity detection mechanism. The robustness of the scheme has been evaluated in an experimental setting using a variety of re-injection attacks.
Keywords: authorisation; data privacy; peer-to-peer computing; P2P based usage control enforcement scheme; access control techniques; data assignment; peer-to-peer architecture; privacy control; re-injection attacks; similarity detection mechanism; Access control; Cryptography; Distributed databases; Peer-to-peer computing; Protocols; Resistance (ID#: 15-5296)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6918974&isnumber=6918912
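A minimal version of similarity-based re-injection detection compares shingle sets of a new upload against stored content and rejects near-duplicates. The shingle size and threshold below are illustrative choices, and the paper's actual detector may work quite differently.

```python
# Toy near-duplicate detection via word 3-gram "shingles" and Jaccard
# similarity; parameters are illustrative, not the paper's mechanism.
def shingles(text, k=3):
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

stored = "the quick brown fox jumps over the lazy dog near the river bank"
reinjected = "the quick brown fox jumps over the lazy dog near the river"

sim = jaccard(shingles(stored), shingles(reinjected))
print(f"similarity = {sim:.2f}; reject as re-injection: {sim > 0.8}")
```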
Trifa, Z.; Khemakhem, M., "Analysis of Malicious Peers in Structured P2P Overlay Networks," Computer Applications and Information Systems (WCCAIS), 2014 World Congress on, pp. 1, 6, 17-19 Jan. 2014. doi: 10.1109/WCCAIS.2014.6916552
Abstract: Malicious peer behavior has a crucial impact on the efficiency and integrity of structured P2P systems. The increasing complexity found in such systems helps, in part, to explain the scale of the challenge faced in dealing with this problem. In such systems, node trust is often essential; however, the destructive and malicious intent of misbehaving peers is often overlooked, despite being one of the most difficult problems these systems face. In this paper we propose a study of these malicious peers. We use a monitoring process that involves the placement of a few instrumented peers within the network. The goal of the monitoring process is to gather a wide set of metrics on peer behavior. To achieve this, we analyze a large trace of messages and operations in the overlay, which gives insight into the properties and behaviors of peers. We focus on the communication protocol between peers. Our measurement infrastructure consists of a set of fake peers, called Sybils, equipped with specific controllers at different levels of the system. These Sybils are connected to different zones in the network and controlled by a coordinator.
Keywords: computer network security; overlay networks; peer-to-peer computing; protocols; Sybils; communication protocol; instrumented peers; malicious peers analysis; monitoring process; node trust; structured P2P overlay networks; Atmospheric measurements; Lead; Monitoring; Particle measurements; Peer-to-peer computing; Pollution measurement; Routing; mitigation; monitoring; security; structured P2P systems (ID#: 15-5297)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916552&isnumber=6916540
Chunzhi Wang; Dongyang Yu; Hui Xu; Hongwei Chen, "A Bayesian Trust Sampling Method for P2P Traffic Inspection," Security, Pattern Analysis, and Cybernetics (SPAC), 2014 International Conference on, pp. 454, 457, 18-19 Oct. 2014. doi: 10.1109/SPAC.2014.6982732
Abstract: A Peer-to-Peer (P2P) traffic identification method based on Bayesian trust sampling is presented in this paper. The method predicts the degree of fluctuation of the P2P traffic ratio for the next cycle and optimizes the number of historical proportion estimates used. Simulation results show that, under the premise of using a fixed number of historical P2P ratio estimates, this trust method better forecasts the fluctuation of the P2P traffic ratio and reduces the number of redundant samples.
Keywords: Bayes methods; peer-to-peer computing; sampling methods; telecommunication traffic; trusted computing; Bayesian trust sampling method; P2P traffic inspection; P2P traffic ratio; historical proportion estimation; peer-to-peer traffic identification method; Accuracy; Bayes methods; Data models; Fluctuations; Peer-to-peer computing; Predictive models; Bayesian Trust; P2P Traffic Identification; Two-stage Sampling (ID#: 15-5298)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6982732&isnumber=6982642
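One way to read "Bayesian trust sampling" is as a Bayesian update over the P2P traffic ratio. The sketch below keeps a Beta posterior over that ratio and discounts older cycles; the discount factor and sample counts are hypothetical, and this is an interpretation rather than the paper's exact estimator.

```python
# Illustrative Beta-Bernoulli update over the P2P traffic ratio: each inspected
# packet is a Bernoulli draw (P2P or not), and older cycles are faded out.
# The discount factor and sample counts are invented for this sketch.

def update(alpha, beta, p2p_hits, total, discount=0.9):
    # Fade historical estimates before folding in the new cycle's samples.
    alpha = discount * alpha + p2p_hits
    beta = discount * beta + (total - p2p_hits)
    return alpha, beta

alpha, beta = 1.0, 1.0                               # uniform prior on the ratio
for hits, n in [(32, 100), (41, 100), (55, 100)]:    # three sampling cycles
    alpha, beta = update(alpha, beta, hits, n)
    print(f"estimated P2P ratio after cycle: {alpha / (alpha + beta):.3f}")
```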
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Privacy Models, 2014 |
Privacy issues have emerged as a major area of interest and research. As with so much in the Science of Security, efforts to chart the scope and to develop models for visualizing privacy are a topic of prime interest. The articles cited here appeared in 2014.
Hermans, J.; Peeters, R.; Preneel, B., "Proper RFID Privacy: Model and Protocols," Mobile Computing, IEEE Transactions on, vol. 13, no. 12, pp. 2888, 2902, Dec. 1 2014. doi: 10.1109/TMC.2014.2314127
Abstract: We approach RFID privacy from both a modelling and a protocol point of view. Our privacy model avoids the drawbacks of several proposed RFID privacy models that either suffer from insufficient generality or put forward unrealistic assumptions regarding the adversary's ability to corrupt tags. Furthermore, our model can handle multiple readers and introduces two new privacy notions to capture recently discovered insider attackers. We analyse multiple existing RFID protocols, demonstrating the easy applicability of our model, and propose a new wide-forward-insider private RFID authentication protocol. This protocol provides sufficient privacy guarantees for most practical applications and is the most efficient of its kind, requiring only two elliptic-curve scalar point multiplications.
Keywords: protocols; radiofrequency identification; telecommunication security; RFID authentication protocol; RFID privacy models; insider attackers; privacy notions; proper RFID privacy; Authentication; Computational modeling; Privacy; Protocols; Radiofrequency identification; Computer security; RFID tags; authentication; cryptography; privacy (ID#: 15-5225)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779604&isnumber=6939756
Al-Jaberi, M.F.; Zainal, A., "Data Integrity and Privacy Model in Cloud Computing," Biometrics and Security Technologies (ISBAST), 2014 International Symposium on, pp. 280, 284, 26-27 Aug. 2014. doi: 10.1109/ISBAST.2014.7013135
Abstract: Cloud computing is the future of the computing industry and is believed to be the next generation of computing technology. Among the major concerns in cloud computing are data integrity and privacy: clients require their data to be safe from tampering and private from unauthorized access. Various algorithms and protocols (MD5, AES, and RSA-based PHE) are implemented by the components of this model to provide the maximum levels of integrity management and privacy preservation for data stored in a public cloud such as Amazon S3. The impact of the algorithms and protocols used to ensure data integrity and privacy is studied to test the performance of the proposed model. The prototype system showed that data integrity and privacy are ensured against unauthorized parties. The model reduces the burden of checking the integrity of data stored in cloud storage by utilizing a third-party integrity checking service, and applies security mechanisms that ensure the privacy and confidentiality of data stored in the cloud. This paper proposes an architecture-based model that provides data integrity verification and privacy preservation in cloud computing.
Keywords: authorisation; cloud computing; data integrity; data privacy; cloud computing; data integrity; data privacy; unauthorized access; Cloud computing; Computational modeling; Data models; Data privacy; Encryption; Amazon S3; Cloud computing; data integrity; data privacy (ID#: 15-5226)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013135&isnumber=7013076
Tbahriti, S.-E.; Ghedira, C.; Medjahed, B.; Mrissa, M., "Privacy-Enhanced Web Service Composition," Services Computing, IEEE Transactions on, vol. 7, no. 2, pp. 210, 222, April-June 2014. doi: 10.1109/TSC.2013.18
Abstract: Data as a Service (DaaS) builds on service-oriented technologies to enable fast access to data resources on the Web. However, this paradigm raises several new privacy concerns that traditional privacy models do not handle. In addition, DaaS composition may reveal privacy-sensitive information. In this paper, we propose a formal privacy model that extends DaaS descriptions with privacy capabilities. The privacy model allows a service to define a privacy policy and a set of privacy requirements. We also propose a privacy-preserving DaaS composition approach that verifies the compatibility between privacy requirements and policies in a composition, along with a negotiation mechanism that dynamically reconciles the privacy capabilities of services when incompatibilities arise. We validate the applicability of our proposal through a prototype implementation and a set of experiments.
Keywords: Web services; cloud computing; data privacy; Data as a Service; negotiation mechanism; privacy model; privacy policy; privacy requirements; privacy-enhanced Web service composition; privacy-preserving DaaS composition; privacy-sensitive information; DNA; Data models; Data privacy; Government; Phase change materials; Privacy; Web services; DaaS services; Service composition; negotiation; privacy (ID#: 15-5227)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6475932&isnumber=6828820
Xichen Wu; Guangzhong Sun, "A Novel Dummy-Based Mechanism to Protect Privacy on Trajectories," Data Mining Workshop (ICDMW), 2014 IEEE International Conference on, pp. 1120, 1125, 14 Dec. 2014. doi: 10.1109/ICDMW.2014.122
Abstract: In recent years, wireless communication technologies and accurate positioning devices have enabled us to enjoy various types of location-based services (LBS). However, revealing users' location information to potentially untrusted LBS providers is one of the most significant privacy threats in such services. The dummy-based privacy-preserving approach is a popular technique that protects real trajectories from exposure to attackers; moreover, it needs no trusted third party while guaranteeing quality of service. When a user requests a service, dummy trajectories anonymize the real trajectory to satisfy privacy-preserving requirements. In this paper, we propose a new privacy model that includes three reasonable privacy metrics. We also design a new algorithm, the adaptive dummy trajectories generation algorithm (ADTGA), to derive uniformly distributed dummy trajectories. Dummy trajectories generated by our algorithm can achieve stricter privacy-preserving requirements under our privacy model. Experimental results show that our algorithm uses fewer dummy trajectories than existing algorithms to satisfy the same privacy-preserving requirement, and that the distribution of dummy trajectories is more uniform.
Keywords: data privacy; ADTGA; adaptive dummy trajectories generation algorithm; distributed dummy trajectories; dummy-based mechanism; dummy-based privacy-preserving approach; location-based services; privacy metrics; privacy model; privacy protection; privacy-preserving requirements; quality of service; untrusted LBS providers; user requests; users location information; wireless communication technologies; Adaptation models; Algorithm design and analysis; Educational institutions; Measurement; Privacy; Trajectory; Dummy-based anonymization; Location-based services; Trajectory privacy (ID#: 15-5228)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7022721&isnumber=7022545
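A toy version of dummy generation is shown below: each dummy is a shifted, jittered copy of the real trajectory, so an observer sees several plausible paths. ADTGA chooses offsets so that the dummies are uniformly distributed; the simple random offsets here do not reproduce that property.

```python
# Toy dummy-trajectory generation: shifted, jittered copies of the real path.
# Offsets and jitter magnitudes are arbitrary; ADTGA's uniform-distribution
# guarantee is not reproduced by this sketch.
import random

random.seed(7)

def make_dummies(real, n_dummies=3, spread=0.01, jitter=0.002):
    dummies = []
    for _ in range(n_dummies):
        dx = random.uniform(-spread, spread)   # constant offset per dummy
        dy = random.uniform(-spread, spread)
        dummies.append([(x + dx + random.uniform(-jitter, jitter),
                         y + dy + random.uniform(-jitter, jitter))
                        for x, y in real])
    return dummies

real_path = [(31.8400 + 0.001 * t, 117.2700 + 0.0008 * t) for t in range(5)]
for d in make_dummies(real_path):
    print([(round(x, 4), round(y, 4)) for x, y in d])
```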
Abdolmaleki, B.; Baghery, K.; Akhbari, B.; Aref, M.R., "Attacks and Improvements on Two New-Found RFID Authentication Protocols," Telecommunications (IST), 2014 7th International Symposium on, pp. 895, 900, 9-11 Sept. 2014. doi: 10.1109/ISTEL.2014.7000830
Abstract: In recent years, various RFID authentication protocols have been proposed to provide secure communication between Radio Frequency Identification (RFID) users. In this paper, we investigate weaknesses of two new-found RFID authentication protocols proposed by Shi et al. and Liu et al. in 2014. The Ouafi-Phan privacy model is used for the privacy analysis. We show that these two protocols have weaknesses and cannot ensure the security and privacy of RFID users. Furthermore, two improved protocols are proposed that eliminate the existing weaknesses in Shi et al.'s and Liu et al.'s protocols.
Keywords: cryptographic protocols; data privacy; radiofrequency identification; Ouafi-Phan privacy model; RFID authentication protocols; privacy analysis; radiofrequency identification users; secure communication; Authentication; Games; Privacy; Protocols; Radiofrequency identification; Servers; CRC; Hash function; NTRU; RFID authentication protocols; public-key; security and privacy (ID#: 15-5229)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7000830&isnumber=7000650
Sohrabi-Bonab, Z.; Alagheband, M.R.; Aref, M.R., "Formal Cryptanalysis of a CRC-Based RFID Authentication Protocol," Electrical Engineering (ICEE), 2014 22nd Iranian Conference on, pp. 1642, 1647, 20-22 May 2014. doi: 10.1109/IranianCEE.2014.6999801
Abstract: Recently, Pang et al. proposed a secure and efficient lightweight mutual authentication protocol [1]. Their scheme is EPC Class 1 Generation 2 compatible and based on both cyclic redundancy codes (CRC) and a pseudo-random number generator (PRNG). Although the authors claimed that the proposed protocol is secure against all attacks, in this paper we use Vaudenay's privacy model to prove that the scheme supports only the lowest privacy level and is traceable. Furthermore, an improved scheme with higher privacy is proposed, and the privacy of our proposed protocol is proved in the formal model.
Keywords: cryptographic protocols; cyclic redundancy check codes; radiofrequency identification; random number generation; telecommunication security; CRC-based RFID lightweight mutual authentication protocol formal cryptanalysis; EPC Class 1 Generation 2; PRNG; Vaudenay privacy model; cyclic redundancy code; pseudorandom number generator; radiofrequency identification; Authentication; Cryptography; Polynomials; Privacy; Protocols; Radiofrequency identification; Standards; CRC function; Privacy; RFID authentication (ID#: 15-5230)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6999801&isnumber=6999486
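The structural weakness of CRC as an authentication primitive is easy to demonstrate: CRC is affine over GF(2), so the checksum of the XOR-difference of two equal-length messages is predictable without any secret. The snippet verifies this with CRC-32; attacks on CRC-based protocols typically exploit this kind of structure, though the specific attack in the paper may differ.

```python
# CRC-32 is affine over GF(2): crc(a ^ b) == crc(a) ^ crc(b) ^ crc(zeros)
# for equal-length messages, so CRC values can be adjusted to match tampered
# messages without knowing any key.
import binascii

a = b"transfer $100 to Alice"
b = b"transfer $900 to Alice"
zeros = bytes(len(a))

lhs = binascii.crc32(bytes(x ^ y for x, y in zip(a, b)))
rhs = binascii.crc32(a) ^ binascii.crc32(b) ^ binascii.crc32(zeros)
print(lhs == rhs)   # True
```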
Nagendrakumar, S.; Aparna, R.; Ramesh, S., "A Non-Grouping Anonymity Model for Preserving Privacy in Health Data Publishing," Science Engineering and Management Research (ICSEMR), 2014 International Conference on, pp. 1, 6, 27-29 Nov. 2014. doi: 10.1109/ICSEMR.2014.7043554
Abstract: Publishing health data may lead to privacy breaches, since such data contain sensitive information about individuals. Privacy-preserving data publishing (PPDP) addresses the problem of revealing sensitive data when extracting useful data. The existing privacy models are group-based anonymity models; hence, they consider the privacy of individuals only in a group-based manner, and those groups become the hunting ground for adversaries: all data re-identification attacks are based on groups of records. The insight behind our approach is that the k-anonymity problem can be viewed as a clustering problem. Though k-anonymity does not prescribe the number of clusters, it requires that each group contain at least k records. We propose a Non-Grouping Anonymity model, which gives a basic level of anonymization that prevents individuals from being re-identified from their published data.
Keywords: data privacy; electronic publishing; medical information systems; pattern clustering; security of data; PPDP; anonymization; clustering approach; data re-identification attacks; group based anonymity model; health data publishing privacy; k-anonymity problem; nongrouping anonymity model; privacy breaches; privacy model; privacy preserving data publishing; sensitive data; sensitive information; Data models; Data privacy; Loss measurement; Privacy; Publishing; Taxonomy; Vegetation; Anonymity; Privacy in Data Publishing; data Privacy; data Utility (ID#: 15-5231)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7043554&isnumber=7043537
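For contrast with the paper's non-grouping approach, the sketch below shows the group-based baseline it argues against: records are clustered into groups of at least k and each group is generalized, which is exactly where both the attack surface and the utility loss arise. The greedy grouping here is deliberately naive and purely illustrative.

```python
# Naive group-based k-anonymity baseline: cluster records into groups of
# size >= k, then generalize each group to a range. Illustrative only.
def k_member_groups(records, k):
    """records: list of numeric tuples; returns groups of size >= k."""
    remaining = sorted(records)            # crude proximity ordering
    groups = [remaining[i:i + k] for i in range(0, len(remaining), k)]
    if len(groups) > 1 and len(groups[-1]) < k:
        groups[-2].extend(groups.pop())    # merge an undersized tail group
    return groups

ages = [(23,), (25,), (27,), (31,), (33,), (35,), (62,)]
for g in k_member_groups(ages, 3):
    lo, hi = g[0][0], g[-1][0]
    print(f"generalized age range [{lo}-{hi}] covers {len(g)} records")
```

Note how the outlier age 62 forces one group into the wide range [31-62], illustrating the utility loss of group-based generalization that non-grouping models aim to avoid.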
Zhang, X.; Dou, W.; Pei, J.; Nepal, S.; Yang, C.; Liu, C.; Chen, J., "Proximity-Aware Local-Recoding Anonymization with MapReduce for Scalable Big Data Privacy Preservation in Cloud," Computers, IEEE Transactions on, vol. PP, no. 99, pp. 1, 1, 26 September 2014. doi: 10.1109/TC.2014.2360516
Abstract: Cloud computing provides a promising, scalable IT infrastructure to support the processing of a variety of big data applications in sectors such as healthcare and business. Data sets like electronic health records in such applications often contain privacy-sensitive information, which potentially raises privacy concerns if the information is released or shared with third parties in the cloud. A practical and widely adopted technique for data privacy preservation is to anonymize data via generalization to satisfy a given privacy model. However, most existing privacy-preserving approaches tailored to small-scale data sets often fall short when encountering big data, due to their insufficiency or poor scalability. In this paper, we investigate the local-recoding problem for big data anonymization against proximity privacy breaches and attempt to identify a scalable solution to this problem. Specifically, we present a proximity privacy model allowing semantic proximity of sensitive values and multiple sensitive attributes, and model the problem of local recoding as a proximity-aware clustering problem. A scalable two-phase clustering approach, consisting of a t-ancestors clustering algorithm (similar to k-means) and a proximity-aware agglomerative clustering algorithm, is proposed to address the problem. We design the algorithms with MapReduce to gain high scalability by performing data-parallel computation in the cloud. Extensive experiments on real-life data sets demonstrate that our approach significantly improves the capability of defending against proximity privacy breaches, as well as the scalability and time-efficiency of local-recoding anonymization, over existing approaches.
Keywords: Big data; Couplings; Data models; Data privacy; Numerical models; Privacy; Scalability; Big Data; Cloud Computing; Data Anonymization; MapReduce; Proximity Privacy (ID#: 15-5232)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911981&isnumber=4358213
Zhou, J.; Lin, X.; Dong, X.; Cao, Z., "PSMPA: Patient Self-controllable and Multi-level Privacy-preserving Cooperative Authentication in Distributed m-Healthcare Cloud Computing System," Parallel and Distributed Systems, IEEE Transactions on, vol. PP, no. 99, pp. 1, 1, 27 March 2014. doi: 10.1109/TPDS.2014.2314119
Abstract: A distributed m-healthcare cloud computing system significantly facilitates efficient patient treatment for medical consultation by sharing personal health information among healthcare providers. However, it brings the challenge of simultaneously keeping both data confidentiality and patients' identity privacy. Many existing access control and anonymous authentication schemes cannot be straightforwardly exploited. To solve the problem, in this paper, a novel authorized accessible privacy model (AAPM) is established. Patients can authorize physicians by setting an access tree supporting flexible threshold predicates. Then, based on it, by devising a new technique of attribute-based designated verifier signatures, a patient self-controllable multi-level privacy-preserving cooperative authentication scheme (PSMPA) realizing three levels of security and privacy requirements in the distributed m-healthcare cloud computing system is proposed. The directly authorized physicians, the indirectly authorized physicians, and unauthorized persons in medical consultation can respectively decipher the personal health information and/or verify patients' identities by satisfying the access tree with their own attribute sets. Finally, the formal security proof and simulation results illustrate that our scheme can resist various kinds of attacks and far outperforms previous ones in terms of computational, communication, and storage overhead.
Keywords: Authentication; Cloud computing; Computational modeling; Medical services; Privacy; Public key (ID#: 15-5233)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779640&isnumber=4359390
Yilin Shen; Hongxia Jin, "Privacy-Preserving Personalized Recommendation: An Instance-Based Approach via Differential Privacy," Data Mining (ICDM), 2014 IEEE International Conference on, pp. 540, 549, 14-17 Dec. 2014. doi: 10.1109/ICDM.2014.140
Abstract: Recommender systems have become increasingly popular and widely applied. The release of users' private data is required to provide users with accurate recommendations, yet this has been shown to put users at risk. Unfortunately, existing privacy-preserving methods are either developed under trusted-server settings with impractical private recommender systems or lack strong privacy guarantees. In this paper, we develop the first lightweight and provably private solution for personalized recommendation under untrusted-server settings. In this novel setting, users' private data is obfuscated before leaving their private devices, giving users greater control over their data and service providers less responsibility for privacy protection. More importantly, our approach enables existing recommender systems (with no changes needed) to directly use perturbed data, rendering our solution very desirable in practice. We build our data perturbation approach on differential privacy, the state-of-the-art privacy model with lightweight computation and strong, provable privacy guarantees. In order to achieve useful and feasible perturbations, we first design a novel relaxed admissible mechanism enabling the injection of flexible, instance-based noise. Using this mechanism, our data perturbation approach, incorporating noise calibration and learning techniques, obtains perturbed user data with both theoretical privacy and utility guarantees. Our empirical evaluation on large-scale real-world datasets not only shows high recommendation accuracy but also illustrates negligible computational overhead on both personal computers and smartphones. As such, we are able to meet two contradictory goals: privacy preservation and recommendation accuracy. This practical technology helps gain user adoption with strong privacy protection and benefits companies with high-quality personalized services on perturbed user data.
Keywords: calibration; data privacy; personal computing; recommender systems; trusted computing; computational overhead; data perturbation; differential privacy; high quality personalized services; noise calibration; perturbed user data; privacy preservation; privacy protections; privacy-preserving methods; privacy-preserving personalized recommendation; private recommender systems; provable privacy guarantees; recommendation accuracy; smart phones; strong privacy protection; theoretical privacy; untrusted server settings; user adoption; user private data; utility guarantees; Aggregates; Data privacy; Noise; Privacy; Sensitivity; Servers; Vectors; Data Perturbation; Differential Privacy; Learning and Optimization; Probabilistic Analysis; Recommender System (ID#: 15-5234)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023371&isnumber=7023305
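The basic differential-privacy building block behind such perturbation is the Laplace mechanism, sketched below. The paper's relaxed, instance-based mechanism calibrates noise more carefully than this; the rating scale and sensitivity here are illustrative assumptions.

```python
# Minimal Laplace mechanism for epsilon-differential privacy; the parameters
# (rating scale, sensitivity) are illustrative, not the paper's calibration.
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value + Lap(sensitivity / epsilon) noise."""
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

rng = np.random.default_rng(42)
rating = 4.0           # a user's private item rating on a 1-5 scale
sensitivity = 4.0      # ratings span 4 units, so one value shifts output by <= 4
for eps in (0.5, 1.0, 5.0):
    noisy = laplace_mechanism(rating, sensitivity, eps, rng)
    print(f"eps={eps}: perturbed rating = {noisy:.2f}")
```

Smaller epsilon means stronger privacy and noisier output, which is exactly the privacy-utility trade-off the paper's calibration and learning techniques are designed to manage.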
Abbasi, Khurrum Mustafa; ul Haq, Irfan; Malik, Ahmad Kamran; Khalid, Shehzad; Fazil, Saba; Durad, Hanif, "On Access Control of Cloud Service Chains," Multi-Topic Conference (INMIC), 2014 IEEE 17th International, pp. 293, 298, 8-10 Dec. 2014. doi: 10.1109/INMIC.2014.7097354
Abstract: Service-oriented architecture may be regarded as an incubator in which small resource entrepreneurs can bid for and work on bigger projects. It also helps large enterprises trade their resources at various levels, opening new gateways for renting out resources. Sometimes a single service is sold at different levels, making the Cloud service a supply chain of added value. This supply chain, built on the same resources but with varying claims of ownership, poses novel challenges related to the security, trust, and privacy of data. There is still no widely adopted governing body that can glue together the participating stakeholders through mutual trust and organizational policies, nor a governing mechanism that can address stakeholders' privacy concerns and resolve their conflicts throughout the emerging service chains. In this paper we introduce a mechanism of access control for such Cloud service chains. Building on our previous work on an SLA-based privacy model, we discuss the realization of Role-Based Access Control (RBAC) for services of a federated cloud. The main advantage of RBAC is that it provides efficient control over resources and data access. We also provide a preliminary analysis of this ongoing research.
Keywords: Access control; Automation; Engines; Mathematical model; Privacy; Service-oriented architecture; Supply chains (ID#: 15-5235)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7097354&isnumber=7096896
Lo, N.-W.; Yeh, K.-H.; Fan, C.-Y., "Leakage Detection and Risk Assessment on Privacy for Android Applications: LRPdroid," Systems Journal, IEEE, vol. PP, no.99, pp. 1, 9, 18 December 2014. doi: 10.1109/JSYST.2014.2364202
Abstract: How to identify and manage leakage of users' private information is a crucial and sensitive topic for handheld mobile device manufacturers, telecommunication companies, and mobile device users. As the success of a financial fraud usually requires possession of a victim's private information, new types of personal identity theft and private-information acquisition attacks are developed and deployed through various Apps in order to steal personal private information from mobile device users. With more than 50% of the smartphone market share, Android-based mobile phone vendors and Internet service providers have to face this new challenge in user privacy management. In this paper, we present a user privacy analysis framework for the Android platform called LRPdroid. The goals of LRPdroid are information leakage detection, user privacy disclosure evaluation, and privacy risk assessment for Apps installed on Android-based mobile devices. With a formally defined user privacy model, LRPdroid can effectively support mobile users in managing their own privacy risks for targeted Apps. In addition, new privacy analysis viewpoints, such as user perception and leakage awareness, are introduced in LRPdroid. Two general App usage scenarios are evaluated with our system prototype to show the feasibility and practicality of the LRPdroid framework for user privacy management.
Keywords: Androids; Data privacy; Humanoid robots; Mobile communication; Privacy; Smart phones; Android; information leakage; privacy disclosure; risk assessment; security (ID#: 15-5236)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6985559&isnumber=4357939
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Privacy Models |
Privacy issues have emerged as a major area of interest and research. As with so much in the Science of Security, efforts to chart the scope and to develop models for visualizing privacy are a topic of interest. The articles cited here appeared in 2015.
Ravichandran, K.; Gavrilovska, A.; Pande, S., "PiMiCo: Privacy Preservation via Migration in Collaborative Mobile Clouds," System Sciences (HICSS), 2015 48th Hawaii International Conference on, pp. 5341, 5351, 5-8 Jan. 2015. doi: 10.1109/HICSS.2015.628
Abstract: The proliferation of mobile devices and mobile clouds, coupled with a multitude of sensing abilities, is creating interesting possibilities: these sensing capabilities generate data of different types and fidelities in a geographically distributed manner that can be used to build new kinds of peer-to-peer applications. However, the data generated by these mobile devices can be personal and highly confidential. While very interesting possibilities exist for collaborating on this diverse, shared data in real time, privacy policies on data sharing, transport, and usage must be clearly specified and respected. The goal of this work is to introduce a privacy-preserving, data-centric programming model for building collaborative applications in large-scale mobile clouds and to discuss its design. Our work introduces several concepts and leverages privacy annotations and a transparent execution migration framework to achieve these goals. We also present an evaluation using several applications, demonstrating that overheads are minimal and that the system can be used in a real-time setting.
Keywords: cloud computing; data privacy; groupware; mobile computing; PiMiCo; collaborative mobile clouds; data sharing; mobile devices; privacy annotation; privacy policy; privacy preservation via migration; privacy preserving data centric programming model; sensing capability; transparent execution migration framework; Clouds; Data privacy; Mobile communication; Mobile handsets; Privacy; Sensors; Servers; mobile cloud; privacy (ID#: 15-5237)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070457&isnumber=7069647
Peng Jia; Xiang He; Liang Liu; Binjie Gu; Yong Fang, "A Framework for Privacy Information Protection on Android," Computing, Networking and Communications (ICNC), 2015 International Conference on, pp. 1127, 1131, 16-19 Feb. 2015. doi: 10.1109/ICCNC.2015.7069508
Abstract: The permissions-based security model of Android increasingly shows its vulnerability in protecting users' privacy information. Under this model, an application must hold the appropriate permissions before gaining access to various resources (both data and hardware) in the phone. The model can only restrict an application from accessing system resources without the appropriate permissions; it cannot prevent malicious access to private data after the application has obtained those permissions. During the installation of an application, the system prompts the user with the permissions the application is requesting, and users have no choice but to allow all of them if they want to use the application. Once an application is successfully installed, the system is unable to control its behavior dynamically, and at that point the application can obtain privacy information and send it out without the user's knowledge. There is therefore a great security risk in the permissions-based security model. This paper investigates different ways of accessing users' privacy information and proposes a framework named PriGuard for dynamically protecting users' privacy information, based on Binder communication interception technology and a feature selection algorithm. Applications customarily call system services remotely using the Binder mechanism, then access the equipment and obtain information through those services. By redirecting the Binder interface function of the Native layer, PriGuard intercepts Binder messages, thereby intercepting the application's Remote Procedure Calls (RPC) to system services; it can then dynamically monitor the application's behaviors that access privacy information. In this paper, we collect many different types of benign Application Package File (APK) samples and record the Application Programming Interface (API) calls of each sample while it runs. Afterwards, we transform the API calls of each sample into feature vectors. A feature selection algorithm is used to generate the optimal feature subset. PriGuard automatically completes the privacy policy configuration for newly installed software according to the optimal feature subset, then controls the software's calls to system services using Binder message interception, achieving the goal of protecting users' privacy information.
Keywords: Android (operating system); application program interfaces; authorisation; data protection; remote procedure calls; API; APK; Android; Binder communication interception technology; Binder interface function; Binder message interception technology; PriGuard framework; RPC; application installation; application package file; application programming interface; application remote procedure call; dynamic application behavior monitoring; dynamic user privacy information protection; feature selection algorithm; native layer; optimal feature subset generation; permission-based security model; privacy policy configuration; security risk; system resource access; system services; user privacy information access; user privacy information protection; Conferences; Monitoring; Privacy; Security; Smart phones; Software; Vectors; RPC intercept; android; binder; feature selection algorithm; privacy protection (ID#: 15-5238)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069508&isnumber=7069279
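To make the feature-vector step concrete, here is a minimal Python sketch of how observed API calls might be turned into binary vectors and ranked to pick a feature subset. The API names, toy labels, and the simple discriminativeness score are illustrative assumptions, not PriGuard's actual implementation.

```python
# Hypothetical vocabulary of privacy-relevant Android API calls.
API_VOCAB = ["getDeviceId", "getLastKnownLocation", "query_contacts",
             "sendTextMessage", "openCamera", "getAccounts"]

def to_feature_vector(api_trace):
    """Map a list of observed API calls to a 0/1 vector over API_VOCAB."""
    seen = set(api_trace)
    return [1 if api in seen else 0 for api in API_VOCAB]

def select_features(vectors, labels, k=3):
    """Rank features by how unevenly they split privacy-sensitive (1)
    vs benign (0) samples; keep the top-k feature indices."""
    scores = []
    for j in range(len(API_VOCAB)):
        pos = sum(v[j] for v, y in zip(vectors, labels) if y == 1)
        neg = sum(v[j] for v, y in zip(vectors, labels) if y == 0)
        scores.append((abs(pos - neg), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]

traces = [["getDeviceId", "sendTextMessage"], ["openCamera"],
          ["getLastKnownLocation", "getDeviceId"], ["getAccounts"]]
labels = [1, 0, 1, 0]  # toy ground truth: 1 = accessed privacy data
vecs = [to_feature_vector(t) for t in traces]
print([API_VOCAB[j] for j in select_features(vecs, labels)])
```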
Choi, B.C.F.; Zhenhui Jiang; Ramesh, B.; Yizhou Dong, "Privacy Tradeoff and Social Application Usage," System Sciences (HICSS), 2015 48th Hawaii International Conference on, pp. 304, 313, 5-8 Jan. 2015. doi: 10.1109/HICSS.2015.44
Abstract: The privacy tradeoff is important to individuals' usage of social applications. Although previous studies have enriched understanding of the impact of the privacy tradeoff, researchers have rarely examined it beyond online commercial contexts. This study aims to fill this gap in the literature by examining the effects of privacy risk and image enhancement on social application usage. To develop the research model, we drew on the Stimulus-Organism-Response framework to integrate the privacy literature and multidimensional development theory, explaining how aspects of social applications influence usage intention through privacy risk and image enhancement. The research model was tested on survey data gathered from 217 social application users. We found that exposure sensitivity, network scope, and transparency of self affect privacy risk and image enhancement. Additionally, privacy risk and image enhancement were found to be important in shaping usage of social applications.
Keywords: data privacy; image enhancement; risk analysis; social networking (online); image enhancement; online commercial contexts; privacy risk; privacy tradeoff; social application usage; stimulus-organism-response framework; Calculus; Context; Image enhancement; Information management; Privacy; Sensitivity; Social network services (ID#: 15-5239)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069693&isnumber=7069647
Paul, Mithun; Collberg, Christian; Bambauer, Derek, "A Possible Solution for Privacy Preserving Cloud Data Storage," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 397, 403, 9-13 March 2015. doi: 10.1109/IC2E.2015.103
Abstract: Despite the economic advantages of cloud data storage, many corporations have not yet migrated to this technology. While corporations in the financial sector cite data security as a reason, corporations in other sectors cite privacy concerns for this reluctance. In this paper, we propose a possible solution for this problem inspired by the HIPAA safe harbor methodology for data anonymization. The proposed technique involves using a hash function that uniquely identifies the data and then splitting data across multiple cloud providers. We propose that such a "Good Enough" approach to privacy-preserving cloud data storage is both technologically feasible and financially advantageous. Following this approach addresses concerns about privacy harms resulting from accidental or deliberate data spills from cloud providers. The "Good Enough" method will enable firms to move their data into the cloud without incurring privacy risks, enabling them to realize the economic advantages provided by the pay per-use model of cloud data storage.
Keywords: Cloud computing; Data privacy; Indexes; Memory; Privacy; Security; Data Privacy; Cloud; Obfuscation (ID#: 15-5240)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092951&isnumber=7092808
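As a rough illustration of the "Good Enough" idea described above — identify a record by its hash, then split it so that no single provider holds the whole record — here is a toy Python sketch. The round-robin byte split and the dictionary stand-ins for providers are our own simplifications, not the authors' design.

```python
import hashlib

def store(record: bytes, providers: list) -> str:
    """Split `record` round-robin across provider stores, keyed by its hash."""
    digest = hashlib.sha256(record).hexdigest()  # unique identifier
    chunks = [record[i::len(providers)] for i in range(len(providers))]
    for provider, chunk in zip(providers, chunks):
        provider[digest] = chunk
    return digest

def retrieve(digest: str, providers: list) -> bytes:
    """Re-interleave the chunks and verify integrity against the hash."""
    chunks = [p[digest] for p in providers]
    n = len(chunks)
    out = bytearray(sum(len(c) for c in chunks))
    for i, chunk in enumerate(chunks):
        out[i::n] = chunk
    assert hashlib.sha256(bytes(out)).hexdigest() == digest
    return bytes(out)

clouds = [{}, {}, {}]          # stand-ins for three independent providers
d = store(b"patient:123,dx:rare-disease", clouds)
print(retrieve(d, clouds))     # no single provider saw the whole record
```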
Buettner, R., "Analyzing the Problem of Employee Internal Social Network Site Avoidance: Are Users Resistant due to Their Privacy Concerns?," System Sciences (HICSS), 2015 48th Hawaii International Conference on, pp. 1819, 1828, 5-8 Jan. 2015. doi: 10.1109/HICSS.2015.220
Abstract: I investigate the phenomenon of user resistance behavior concerning internal social networking sites through an empirical analysis of the behavioral attitudes of 253 working professionals from various sectors and all company sizes. Results from linear regression analysis indicate the important role privacy concerns play in explaining the user resistance behavior phenomenon. In addition, I found considerable negative interrelations between privacy concerns and perceived usefulness (rPC-PU = -0.421), as well as between privacy concerns and perceived ease of use (rPC-PE = -0.459). Results from structural equation modeling using privacy concerns, usefulness, and ease of use reveal an impressive predictive power (R2 = 0.731).
Keywords: behavioural sciences computing; regression analysis; social networking (online); behavioral attitude; empirical analysis; employee internal social network site avoidance; linear regression analysis; perceived ease-of-use; perceived usefulness; predictive power; privacy concern; structural equation modeling; user resistance behavior; Atmospheric measurements; Companies; Immune system; Particle measurements; Privacy; Resistance; Social network services; avoidance problem; business-to-employee-portals; employee portals; enterprise social networks; internal social network sites; technology acceptance; user resistance (ID#: 15-5241)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070031&isnumber=7069647
Kikuchi, Hiroaki; Hashimoto, Hideki; Yasunaga, Hideo; Saito, Takamichi, "Scalability of Privacy-Preserving Linear Regression in Epidemiological Studies," Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, pp. 510, 514, 24-27 March 2015. doi: 10.1109/AINA.2015.229
Abstract: In many hospitals, data related to patients are observed and collected into a central database for medical research. For instance, the DPC dataset, which stands for Disease, Procedure and Combination, covers medical records for more than 7 million patients in more than 1000 hospitals. Using the distributed DPC dataset, a number of epidemiological studies become feasible, revealing useful knowledge about medical treatments; cryptography can help preserve the privacy of the personal data involved. The field known as Privacy-Preserving Data Mining (PPDM) aims to perform data mining algorithms while preserving the confidentiality of datasets. This paper studies the scalability of privacy-preserving data mining in epidemiological studies. As the data mining algorithm, we focus on linear regression, since it is used in many applications and is simple to evaluate. We identify a linear model that estimates length of hospital stay from distributed datasets of patient and disease information. The contributions of this paper are (1) privacy-preserving protocols for linear regression with horizontally or vertically partitioned datasets, and (2) a clarification of the limits on the problem sizes that can be handled. This information is useful for determining the dominant element in PPDM and for directing further improvement.
Keywords: DPC; Epidemiologic; data-mining; privacy (ID#: 15-5242)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098014&isnumber=7097928
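For the horizontally partitioned case, one standard construction consistent with the abstract is to sum each party's local normal-equation terms under pairwise additive masks that cancel in aggregate, so the analyst sees only the global regression. The Python sketch below is a hedged illustration of that generic technique; the paper's own protocol may differ in its cryptographic details.

```python
import numpy as np

rng = np.random.default_rng(0)
beta_true = np.array([2.0, -1.0, 0.5])
parties = []
for _ in range(3):  # three hospitals, each holding its own rows
    X = rng.normal(size=(50, 3))
    y = X @ beta_true + rng.normal(scale=0.1, size=50)
    parties.append((X, y))

n = len(parties)
# Pairwise masks: party i adds M[i][j], party j subtracts it.
masks = {(i, j): (rng.normal(size=(3, 3)), rng.normal(size=3))
         for i in range(n) for j in range(i + 1, n)}

def masked_share(i, X, y):
    """Local Gram matrix/vector plus pairwise masks that cancel in the sum."""
    A, b = X.T @ X, X.T @ y
    for (p, q), (MA, Mb) in masks.items():
        if p == i:
            A, b = A + MA, b + Mb
        elif q == i:
            A, b = A - MA, b - Mb
    return A, b

shares = [masked_share(i, X, y) for i, (X, y) in enumerate(parties)]
A_tot = sum(A for A, _ in shares)
b_tot = sum(b for _, b in shares)
print(np.linalg.solve(A_tot, b_tot))  # recovers approximately [2, -1, 0.5]
```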
Ran Yang; Yu Jie Ng; Vishwanath, A., "Do Social Media Privacy Policies Matter? Evaluating the Effects of Familiarity and Privacy Seals on Cognitive Processing," System Sciences (HICSS), 2015 48th Hawaii International Conference on, pp. 3463, 3472, 5-8 Jan. 2015. doi: 10.1109/HICSS.2015.417
Abstract: News stories of security breaches and government surveillance have made Internet users more concerned about their privacy, translating perhaps to greater scrutiny of privacy policies of social media platforms and online application providers. The purpose of the research was to examine whether individuals unquestioningly accept the privacy policies of social media platforms and the extent to which individual information processing influences users' agreement. The Heuristic-Systematic Model (HSM) provided the theoretical framework for an experimental study that compared privacy policies from familiar and unfamiliar social media platforms that also varied in the presence of TRUSTe authentication signals. The results implicate heuristic processing where individuals, rather than examine the content of a policy, blindly comply in agreement. The heuristic effect was most pronounced when individuals were familiar with the social media platform. Surprisingly, the presence of a TRUSTe seal reduced decision confidence, and rather than stimulate heuristic processing, caused a more detailed assessment of the policy content.
Keywords: Internet; data privacy; government surveillance; message authentication; social networking (online); trusted computing; HSM; Internet user; TRUSTe authentication signal; TRUSTe seal reduced decision confidence; cognitive processing; familiarity seal; heuristic processing; heuristic-systematic model; information processing; online application provider; policy content; privacy seal; security breaches; social media platform; social media privacy policy; Data privacy; Facebook; Internet; Media; Privacy; Seals; Systematics; cognitive effort; heuristic processing; heuristic-systematic model; information processing; online deception; privacy policy; social media; systematic processing; trust cues; TRUSTe symbol (ID#: 15-5243)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7070232&isnumber=7069647
Gambhir, M.; Doja, M.N.; Moinuddin, "Novel Trust Computation Architecture for Users Accountability in Online Social Networks," Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, pp. 725, 731, 13-14 Feb. 2015. doi: 10.1109/CICT.2015.104
Abstract: Online Social Networks (OSNs) are a growing platform that enables people to keep up with news, communicate with family and old friends with whom they had lost contact, promote a business, invite friends to events, and get people to collaborate to create something magical. With the increasing popularity of OSNs, researchers have been seeking ways to stop negative activity on social media through the privacy settings of the leading OSNs. Privacy settings let the user control who can access what information in his or her profile, but none of these mechanisms has given the entity of trust enough thought: very few trust management models have been implemented in OSNs for use by ordinary users. This paper proposes a new three-layer secured architecture with a novel mechanism for ensuring a safer online world. It provides a unique global id for each user, and evaluates and computes a Trust Factor for each user, thereby measuring the credibility of that user in the OSN space.
Keywords: authorisation; data privacy; social networking (online); trusted computing; OSN; access control; layer secured architecture; online social networks; privacy settings; social media; trust computation architecture; trust factor; trust management models; users accountability; Authentication; Business; Computer architecture; Databases; Servers; Social network services; Global id; Online Social Networks; OpenID; Trust Factor; Trust management (ID#: 15-5244)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078798&isnumber=7078645
Miguel, Jorge; Caballe, Santi; Xhafa, Fatos; Snasel, Vaclav, "A Data Visualization Approach for Trustworthiness in Social Networks for On-line Learning," Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, pp. 490, 497, 24-27 March 2015. doi: 10.1109/AINA.2015.226
Abstract: Up to now, the problem of securing collaborative activities in e-Learning against dishonest student behaviour has been tackled mainly with technological security solutions. Over the last years, technological security solutions have evolved from isolated approaches based on specific properties, such as privacy, to holistic models based on comprehensive technological solutions, such as public key infrastructures, biometric models, and multidisciplinary approaches from different research areas. Current technological security solutions are feasible in many e-Learning scenarios, but on-line assessment involves requirements that bear specific security challenges related to e-Learning design. In this context, even the most advanced and comprehensive technological security solutions cannot cope with the whole scope of e-Learning vulnerabilities. To overcome these deficiencies, our previous research aimed at incorporating information security properties and services into on-line collaborative e-Learning through a functional approach based on trustworthiness assessment and prediction. In this paper, we present a peer-to-peer on-line assessment approach carried out in a real on-line course developed in our e-Learning context at the Open University of Catalonia. The design presented in this paper is guided by our trustworthiness security methodology, with the aim of building peer-to-peer collaborative activities that enhance e-Learning security requirements. Finally, peer-to-peer visualization methods are proposed to manage e-Learning security events, as well as on-line visualization through peer-to-peer tools intended to analyse collaborative relationships.
Keywords: Information security; computer-supported collaborative learning; on-line assessment; peer-to-peer analysis; trustworthiness (ID#: 15-5245)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098011&isnumber=7097928
Hadj Ahmed, B.; Amine, A.; Reda Mohamed, H., "New Private Information Retrieval Protocol Using Social Bees Lifestyle over Cloud Computing," Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, pp. 161, 165, 13-14 Feb. 2015. doi: 10.1109/CICT.2015.163
Abstract: Recently, a novel form of web service has emerged under the name of Cloud Computing, representing the dematerialisation of software, systems, and infrastructures. However, in a world where digital information is everywhere, finding the desired information has become a crucial problem. At the same time, users of cloud services have started asking about their privacy protection, particularly as they lose control of their data during processing, and some even count the service providers themselves as honest-but-curious attackers. Accordingly, new approaches have been published along every axis of the privacy-preserving domain; one such axis consists of special retrieval models that both find and hide sensitive information at the same time. The substance of our work is a new private information retrieval (PIR) protocol composed of four steps: authentication, which ensures the identification of authorised users; encryption of stored documents by the server, using a boosting algorithm based on the life of bees and multi-filter cryptosystems; information retrieval, using a combination of distances computed by social bees, where a document must pass through three dams controlled by three types of worker bees, the queen bee represents the query, and the hive represents the class of relevant documents; and visualization, which presents the results in a graphical format understandable by humans, as a 3D cube. Our objective is to improve the response to users' demands.
Keywords: Web services; cloud computing; cryptography; data protection; data visualisation; information retrieval; 3D cube; PIR; authentication; authorised user identification; bee hive; bee queen; boosting algorithm; cloud computing; cloud services; digital information; graphical format; multifilter cryptosystems; privacy preserving domain; privacy protection; private information retrieval protocol; sensitive desired information hiding; service providers; social bee lifestyle; software dematerialisation; stored documents encryption; user demands; visualization step; web services; worker bees; Boosting; Cloud computing; Encryption; Information retrieval; Protocols; Boosting Cryptosystem; Cloud Computing; Private Information Retrieval; Social bees; Visualisation (ID#: 15-5246)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078687&isnumber=7078645
Bruce, Ndibanje; Kim, Hyunho; Kang, Youngjin; Lee, Youngsil; Lee, Hoonjae, "On Modeling Protocol-Based Clustering Tag in RFID Systems with Formal Security Analysis," Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, pp. 498, 505, 24-27 March 2015. doi: 10.1109/AINA.2015.227
Abstract: This paper presents an efficient and adaptive cryptographic protocol to ensure users' privacy and data integrity in RFID systems. Radio Frequency Identification technology offers more intelligent systems and applications, but privacy and security issues have to be addressed before and after its adoption. The design of the proposed model is based on a clustering configuration of the involved tags, which interchange data with the reader whenever it sends a request. This scheme provides a strong mutual authentication framework suited to real heterogeneous RFID applications such as supply-chain management systems, healthcare monitoring, and industrial environments. In addition, we contribute a mathematical analysis of the delay and its optimization in a tag-based clustering topology. Finally, a formal security and proof analysis demonstrates the effectiveness of the proposed protocol and shows that it achieves security and privacy.
Keywords: RFID; authentication; cryptography protocol; privacy; security (ID#: 15-5247)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098012&isnumber=7097928
Saripalle, R.K.; De La Rosa Algarin, A.; Ziminski, T.B., "Towards Knowledge Level Privacy And Security Using RDF/RDFS and RBAC," Semantic Computing (ICSC), 2015 IEEE International Conference on, pp. 264, 267, 7-9 Feb. 2015. doi: 10.1109/ICOSC.2015.7050817
Abstract: Information privacy and security play a major role in domains where sensitive information is handled, such as case studies of rare diseases. Currently, security for accessing sensitive information is provided by various mechanisms at the user/system level that employ access control models such as Role Based Access Control. However, these approaches leave security at the knowledge level unattended, which can be inadequate. For example, in healthcare, ontology-based information extraction is employed for extracting medical knowledge from sensitive structured/unstructured data sources. These information extraction systems act on sensitive data sources that are protected against unauthorized access at the system level based on the user, context, and permissions, but the knowledge that can be extracted from these sources is not. In this paper we tackle security and access control at the knowledge level by presenting a model that enforces knowledge security by coupling knowledge sources (currently focused on RDF) with the RBAC model. The developed model filters out knowledge by means of binary permissions on the knowledge source, providing each user with a different view of it.
Keywords: authorisation; data privacy; knowledge acquisition; Information privacy; Information security; RBAC model; RDFS; access control; binary permissions; healthcare; knowledge level privacy; knowledge level security; medical knowledge extraction; ontology-based information extraction system; role-based access control; sensitive data source; sensitive information; unauthorized access; unstructured data source; Computers; Cryptography; Heart; Medical services; Ontologies; Resource description framework; Semantics; CRP model; OBIE; RBAC; RDF-RBAC; knowledge security; semantic knowledge security (ID#: 15-5248)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7050817&isnumber=7050753
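A minimal sketch of the filtering idea, under our own hypothetical data model: knowledge is a set of RDF-style triples, each role carries binary permissions on predicates, and a user's view keeps only the permitted triples. The roles, predicates, and data are invented for illustration, not taken from the paper.

```python
# RDF-style (subject, predicate, object) triples in a knowledge source.
TRIPLES = [
    ("patient42", "hasDiagnosis", "rare-disease-X"),
    ("patient42", "hasAge", "63"),
    ("rare-disease-X", "treatedWith", "drug-Y"),
]

ROLE_PERMISSIONS = {          # predicate -> allowed? (binary permission)
    "researcher": {"hasDiagnosis": True, "hasAge": False, "treatedWith": True},
    "clinician":  {"hasDiagnosis": True, "hasAge": True,  "treatedWith": True},
}

def view_for(role):
    """Return only the triples whose predicate the role may see."""
    allowed = ROLE_PERMISSIONS.get(role, {})
    return [t for t in TRIPLES if allowed.get(t[1], False)]

print(view_for("researcher"))   # the age triple is filtered out
```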
Konstantinou, Charalambos; Keliris, Anastasis; Maniatakos, Michail, "Privacy-preserving Functional IP Verification Utilizing Fully Homomorphic Encryption," Design, Automation & Test in Europe Conference & Exhibition (DATE), 2015, pp. 333, 338, 9-13 March 2015 doi: (not provided)
Abstract: Intellectual Property (IP) verification is a crucial component of System-on-Chip (SoC) design in the modern IC design business model. Given a globalized supply chain and an increasing demand for IP reuse, IP theft has become a major concern for the IC industry. In this paper, we address the trust issues that arise between IP owners and IP users during the functional verification of an IP core. Our proposed scheme ensures the privacy of IP owners and users, by a) generating a privacy-preserving version of the IP, which is functionally equivalent to the original design, and b) employing homomorphically encrypted input vectors. This allows the functional verification to be securely outsourced to a third-party, or to be executed by either parties, while revealing the least possible information regarding the test vectors and the IP core. Experiments on both combinational and sequential benchmark circuits demonstrate up to three orders of magnitude IP verification slowdown, due to the computationally intensive fully homomorphic operations, for different security parameter sizes.
Keywords: Encryption; IP networks; Libraries; Logic gates; Noise (ID#: 15-5249)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092410&isnumber=7092347
Kodali, Ravi Kishore; Gundabathula, Satya Kesav; Boppana, Lakshmi, "Implementation of Toeplitz Hash based RC-4 in WSN," Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, pp. 1, 5, 19-21 Feb. 2015. doi: 10.1109/SPICES.2015.7091535
Abstract: Certain wireless sensor network (WSN) applications, such as military and e-healthcare, require inter-node communication to be secure. The tiny WSN nodes have limited computational power, limited memory, and a finite energy source. These constraints restrict the implementation of highly secure models on the devices, as such models demand more memory and involve compute-intensive operations. Several protocols have been designed to provide different security levels of varying strength at the expense of hardware and processor computational power in the WSN node. In the wired equivalent privacy (WEP) model, static keys are generated for the XOR operation with the plaintext in the encryption process. This work proposes a new security model that provides dynamic keys to the encryption/decryption stages. A model of the proposed scheme has been developed using nesC and implemented on an IRIS WSN node. The WSN implementation of the proposed security model has been compared with those of WEP and WiFi Protected Access (WPA) in terms of memory usage and execution time.
Keywords: Ciphers; Computational modeling; Encryption; Heuristic algorithms; Random access memory; Wireless sensor networks; IRIS mote; Security; Toeplitz Hash; WEP; WPA; WSN (ID#: 15-5250)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7091535&isnumber=7091354
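The core idea — derive a fresh per-message key from a Toeplitz hash, then feed it to RC4 — can be sketched as below. This is a hedged toy, not the paper's nesC code: the counter-as-input key derivation and all parameter sizes are our assumptions, and RC4 itself is long deprecated for real deployments.

```python
import secrets

def bits_of(data: bytes):
    """Yield the bits of `data`, most significant first."""
    for byte in data:
        for k in range(8):
            yield (byte >> (7 - k)) & 1

def toeplitz_hash(key_bits: int, data: bytes, out_bits: int = 128) -> bytes:
    """XOR together out_bits-wide shifted windows of the Toeplitz key,
    one per set input bit; key_bits must span len(data)*8 + out_bits - 1 bits."""
    mask, h = (1 << out_bits) - 1, 0
    for i, bit in enumerate(bits_of(data)):
        if bit:
            h ^= (key_bits >> i) & mask
    return h.to_bytes(out_bits // 8, "big")

def rc4(key: bytes, data: bytes) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:                         # keystream generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

MSG = b"sensor reading: 21.7C"
counter = (42).to_bytes(8, "big")                 # per-message nonce
T = secrets.randbits(len(counter) * 8 + 127)      # shared Toeplitz secret
session_key = toeplitz_hash(T, counter)           # dynamic 128-bit key
ciphertext = rc4(session_key, MSG)
assert rc4(session_key, ciphertext) == MSG        # RC4 is its own inverse
```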
Rahmani, A.; Amine, A.; Hamou, M.R., "De-identification of Textual Data Using Immune System for Privacy Preserving in Big Data," Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, pp. 112, 116, 13-14 Feb. 2015. doi: 10.1109/CICT.2015.146
Abstract: With the growing success of big data use, many challenges have appeared. Timeliness, scalability, and privacy are the main problems that researchers attempt to figure out. Privacy preserving is now a highly active domain of research, and many works and concepts have appeared within this theme. One of these concepts is de-identification: a specific area that consists of finding and removing sensitive information, either by replacing it, encrypting it, or adding noise to it, using several techniques such as cryptography and data mining. In this report, we present a new model for the de-identification of textual data using a specific immune system algorithm known as CLONALG.
Keywords: Big Data; data privacy; text analysis; CLONALG; big data; cryptography; data mining; privacy preserving; specific immune system algorithm; textual data de-identification; Big data; Data models; Data privacy; Immune system; Informatics; Privacy; Security; CLONALG; big data; de-identification; immune systems; privacy preserving (ID#: 15-5251)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078678&isnumber=7078645
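As a rough, hedged illustration of the clonal-selection idea (CLONALG proper has additional machinery, such as affinity-proportional cloning and hypermutation rates, which the paper follows and this toy does not), the Python sketch below evolves short substring detectors against example sensitive tokens, then masks what the best detector matches. All data and parameters are invented.

```python
import random, re

random.seed(7)
SENSITIVE = ["john.doe@mail.com", "jane.roe@mail.com", "bob.lee@mail.com"]
ALPHABET = "abcdehijlmnor.@"

def affinity(p):
    """Number of sensitive training strings containing the candidate."""
    return sum(p in s for s in SENSITIVE)

def mutate(p):
    i = random.randrange(len(p))
    return p[:i] + random.choice(ALPHABET) + p[i + 1:]

# Initial random antibody population of fixed-length substrings.
population = ["".join(random.choice(ALPHABET) for _ in range(3))
              for _ in range(40)]
for _ in range(300):                        # clonal selection loop
    population.sort(key=affinity, reverse=True)
    clones = [mutate(p) for p in population[:6] for _ in range(5)]
    population = population[:6] + clones    # keep elites, add mutated clones

best = max(population, key=affinity)
text = "Write to john.doe@mail.com or bob.lee@mail.com for the dataset."
print(re.sub(r"\S*" + re.escape(best) + r"\S*", "[REDACTED]", text))
```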
Pournaras, Evangelos; Moise, Izabela; Helbing, Dirk, "Privacy-Preserving Ubiquitous Social Mining via Modular and Compositional Virtual Sensors," Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, pp. 332, 338, 24-27 March 2015. doi: 10.1109/AINA.2015.203
Abstract: The introduction of ubiquitous systems, wearable computing, and 'Internet of Things' technologies in our digital society results in large-scale data generation. Environmental, home, and mobile sensors are only a few examples of the significant capabilities to collect massive data in real time from a plethora of heterogeneous social environments. These capabilities provide a unique opportunity to understand and tackle complex problems with novel approaches based on reasoning about data. However, existing 'Big Data' approaches often turn this opportunity into a threat to citizens' privacy and open participation by surveilling, profiling, and discriminating against people via closed proprietary data mining services. This paper illustrates how to design and build an open participatory platform for privacy-preserving social mining: the Planetary Nervous System. Building such a complex platform, in which data sharing and collection are self-determined by the user and performed in a decentralized fashion within different ubiquitous environments, is a challenge. This paper tackles that challenge by introducing a modular and compositional design approach based on a model of virtual sensors. Virtual sensors provide a holistic approach to building the core functionality of the Planetary Nervous System, as well as the social mining applications that extend it. The holistic modeling approach with virtual sensors has the potential to simplify the engagement of citizens in innovative crowd-sourcing activities and to increase adoption by building communities. Performance evaluations of virtual sensors in the Planetary Nervous System confirm the feasibility of the model for building real-time ubiquitous social mining services.
Keywords: data mining; distributed system; mobile platform; privacy; sensor; ubiquitous computing (ID#: 15-5252)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7097988&isnumber=7097928
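To give a feel for the modular, compositional virtual-sensor model (class and field names here are our own, not the platform's API), a composite sensor can wrap other sensors and emit only a privacy-preserving summary:

```python
class VirtualSensor:
    """A sensor is a read function, optionally composed from child sensors."""
    def __init__(self, read_fn, children=()):
        self.read_fn, self.children = read_fn, children

    def read(self):
        return self.read_fn([c.read() for c in self.children])

gps = VirtualSensor(lambda _: (47.3769, 8.5417))   # raw location source
noise = VirtualSensor(lambda _: 62.0)              # raw dB meter

def summarize(vals):
    """Composite: share only a coarse cell and a boolean, not raw data."""
    (lat, lon), db = vals
    return {"cell": (round(lat, 1), round(lon, 1)), "noisy": db > 60}

city_noise = VirtualSensor(summarize, children=(gps, noise))
print(city_noise.read())   # {'cell': (47.4, 8.5), 'noisy': True}
```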
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Resiliency and Security, 2014 |
Resiliency is one of the five hard problems in cybersecurity science. The work presented here was produced in 2014.
Bodeau, D.; Brtis, J.; Graubart, R.; Salwen, J., "Resiliency Techniques For Systems-Of-Systems Extending And Applying The Cyber Resiliency Engineering Framework To The Space Domain," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1, 6, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900099
Abstract: This paper describes how resiliency techniques apply to an acknowledged system-of-systems. The Cyber Resiliency Engineering Framework is extended to apply to resilience in general, with a focus on resilience of space systems. Resiliency techniques can improve system-of-systems operations. Both opportunities and challenges are identified for resilience as an emergent property in an acknowledged system-of-systems.
Keywords: aerospace computing; security of data; cyber resiliency engineering framework; resiliency technique; space domain; system-of-systems operations; Collaboration; Dynamic scheduling;Interoperability;Monitoring;Redundancy;Resilience;Space vehicles; cyber security; resilience; system-of-systems (ID#: 15-5324)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900099&isnumber=6900080
Bin Hu; Gharavi, H., "Smart Grid Mesh Network Security Using Dynamic Key Distribution with Merkle Tree 4-Way Handshaking," Smart Grid, IEEE Transactions on, vol. 5, no. 2, pp. 550, 558, March 2014. doi: 10.1109/TSG.2013.2277963
Abstract: Distributed mesh sensor networks provide cost-effective communications for deployment in various smart grid domains, such as home area networks (HAN), neighborhood area networks (NAN), and substation/plant-generation local area networks. This paper introduces a dynamically updating key distribution strategy to enhance mesh network security against cyber attack. The scheme has been applied to two security protocols known as simultaneous authentication of equals (SAE) and efficient mesh security association (EMSA). Since both protocols utilize 4-way handshaking, we propose a Merkle-tree based handshaking scheme, which is capable of improving the resiliency of the network in a situation where an intruder carries a denial of service attack. Finally, by developing a denial of service attack model, we can then evaluate the security of the proposed schemes against cyber attack, as well as network performance in terms of delay and overhead.
Keywords: computer network performance evaluation; computer network security; cryptographic protocols; home networks; smart power grids; substations; trees (mathematics); wireless LAN; wireless mesh networks; wireless sensor networks; EMSA; HAN; IEEE 802.11s; Merkle tree 4-way handshaking scheme; NAN; SAE; WLAN; cost-effective communications; cyber attack; denial-of-service attack model; distributed mesh sensor networks; dynamic key distribution strategy updating; efficient mesh security association; home area networks; neighborhood area networks; network performance; network resiliency improvement; plant-generation local area networks; security protocols; simultaneous authentication-of-equals; smart grid mesh network security enhancement; substation local area networks; wireless local area networks; Authentication; Computer crime; Logic gates; Mesh networks; Protocols; Smart grids; EMSA; IEEE 802.11s;SAE; security attacks; security protocols; smart grid; wireless mesh networks (ID#: 15-5325)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6599007&isnumber=6740878
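The Merkle-tree primitive underlying the proposed 4-way handshake can be sketched compactly; the handshake messages themselves and the SAE/EMSA integration are omitted, so treat this only as the building block:

```python
import hashlib

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def build(leaves):
    """Return all tree levels bottom-up; duplicates the last node if odd."""
    levels = [[H(l) for l in leaves]]
    while len(levels[-1]) > 1:
        cur = levels[-1]
        if len(cur) % 2:
            cur = cur + [cur[-1]]
        levels.append([H(cur[i] + cur[i + 1]) for i in range(0, len(cur), 2)])
    return levels

def prove(levels, idx):
    """Collect sibling hashes from leaf to root for a membership proof."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sib = idx ^ 1
        path.append((level[sib], sib < idx))   # (hash, sibling-is-left?)
        idx //= 2
    return path

def verify(leaf, path, root):
    h = H(leaf)
    for sib, sib_is_left in path:
        h = H(sib + h) if sib_is_left else H(h + sib)
    return h == root

frames = [b"frame0", b"frame1", b"frame2", b"frame3"]
levels = build(frames)
root = levels[-1][0]
print(verify(b"frame2", prove(levels, 2), root))   # True
```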
Teng Xu; Potkonjak, M., "A Lightweight Security Primitive Using Laser-Based Fault Injection," SENSORS, 2014 IEEE, pp. 1248, 1251, 2-5 Nov. 2014. doi: 10.1109/ICSENS.2014.6985236
Abstract: Security and low power are essential requirements for sensor networks. In order to meet these requirements we have proposed a new type of lightweight security primitive using laser-based fault injection. The essential idea is to use lasers to cut wires in the circuit layout, thus intentionally introducing faults into circuits. We have the following key observations: (1) large VLSI ICs with partial faults can produce highly unpredictable outputs; (2) faults in different positions in a circuit can cause huge differences in output alternation. We therefore take advantage of the excellent output randomness of the circuit after fault injection and use it directly as a security primitive. Compared to traditional security primitives, e.g., PUFs, our proposed laser-based security primitive is robust and resilient against varying operating conditions. More importantly, it has very low power consumption, providing an ideal platform for sensor networks. We compare fault injection on standard modules, such as adders, multipliers, and XOR networks, and propose the best architecture. Our statistical tests indicate that laser-based fault injection can create lightweight security primitives for sensor networks with a small footprint and low energy.
Keywords: VLSI; fault diagnosis; low-power electronics; wireless sensor networks; XOR networks; adders; large VLSI ICs; laser-based fault injection; lightweight security primitive; low power consumption; multipliers; wireless sensor networks; Adders; Circuit faults; Hardware; Laser theory; Logic gates; Security (ID#: 15-5326)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6985236&isnumber=6984913
Alexiou, N.; Basagiannis, S.; Petridou, S., "Security Analysis Of NFC Relay Attacks Using Probabilistic Model Checking," Wireless Communications and Mobile Computing Conference (IWCMC), 2014 International, pp. 524, 529, 4-8 Aug. 2014. doi: 10.1109/IWCMC.2014.6906411
Abstract: Near Field Communication (NFC) is a short-range wireless communication technology envisioned to support a large gamut of smart-device applications, such as payment and ticketing. Two NFC-enabled devices need to be in close proximity, typically less than 10 cm apart, in order to communicate. However, adversaries can use a secret and fast communication channel to relay data between two distant victim NFC-enabled devices and thus force an NFC link between them. Relay attacks may have tremendous consequences for security, as they bypass the NFC requirement for short-range communication and, even worse, are cheap and easy to launch. It is therefore important to evaluate the security of NFC applications and countermeasures to support the emergence of this new technology. In this work we present a probabilistic model checking approach to verify the resiliency of the NFC protocol against relay attacks, based on protocol-, channel- and application-specific parameters that affect the success of the attack. We perform our formal analysis within the probabilistic model checking environment PRISM to support automated security analysis of NFC applications. Finally, we demonstrate how the attack can be thwarted and discuss the effectiveness of potential countermeasures.
Keywords: access protocols; formal verification; near-field communication; telecommunication security; wireless channels; NFC protocol; NFC relay attacks; automated security analysis; fast communication channel; formal analysis; near field communication; probabilistic model checking environment PRISM; secret communication channel; short range communications; short-ranged wireless communication technology; smart device applications; Delays; Model checking; Probabilistic logic; Relays; Security; Transport protocols; Near Field Communication; probabilistic model checking; relay attack; security analysis (ID#: 15-5327)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906411&isnumber=6906315
Zonouz, S.; Davis, C.M.; Davis, K.R.; Berthier, R.; Bobba, R.B.; Sanders, W.H., "SOCCA: A Security-Oriented Cyber-Physical Contingency Analysis in Power Infrastructures," Smart Grid, IEEE Transactions on, vol.5, no. 1, pp. 3, 13, Jan. 2014. doi: 10.1109/TSG.2013.2280399
Abstract: Contingency analysis is a critical activity in the context of the power infrastructure because it provides a guide for resiliency and enables the grid to continue operating even in the case of failure. In this paper, we augment this concept by introducing SOCCA, a cyber-physical security evaluation technique to plan not only for accidental contingencies but also for malicious compromises. SOCCA presents a new unified formalism to model the cyber-physical system including interconnections among cyber and physical components. The cyber-physical contingency ranking technique employed by SOCCA assesses the potential impacts of events. Contingencies are ranked according to their impact as well as attack complexity. The results are valuable in both cyber and physical domains. From a physical perspective, SOCCA scores power system contingencies based on cyber network configuration, whereas from a cyber perspective, control network vulnerabilities are ranked according to the underlying power system topology.
Keywords: power grids; power system planning; power system security; SOCCA; accidental contingency; control network; cyber components; cyber network configuration; cyber perspective; cyber-physical security evaluation; grid operation; malicious compromises; physical components; power infrastructures; power system contingency; power system topology; security-oriented cyber-physical contingency analysis; Algorithm design and analysis; Indexes; Mathematical model; Network topology; Power grids; Security; Contingency analysis; cyber-physical systems; security; situational awareness; state estimation (ID#: 15-5328)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6687271&isnumber=6693741
Hussain, A.; Faber, T.; Braden, R.; Benzel, T.; Yardley, T.; Jones, J.; Nicol, D.M.; Sanders, W.H.; Edgar, T.W.; Carroll, T.E.; Manz, D.O.; Tinnel, L., "Enabling Collaborative Research for Security and Resiliency of Energy Cyber Physical Systems," Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, pp. 358, 360, 26-28 May 2014. doi: 10.1109/DCOSS.2014.36
Abstract: The University of Illinois at Urbana Champaign (Illinois), Pacific Northwest National Labs (PNNL), and the University of Southern California Information Sciences Institute (USC-ISI) consortium is working toward providing tools and expertise to enable collaborative research to improve security and resiliency of cyber physical systems. In this extended abstract we discuss the challenges and the solution space. We demonstrate the feasibility of some of the proposed components through a wide-area situational awareness experiment for the power grid across the three sites.
Keywords: fault tolerant computing; power engineering computing; power grids; security of data; collaborative research; cyber physical system resiliency; cyber physical system security; energy cyber physical systems; power grid; wide-area situational awareness experiment; Collaboration; Communities; Computer security; Data models; Phasor measurement units; Power systems; cyber physical systems; energy; experimentation (ID#: 15-5329)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846190&isnumber=6846129
Shila, D.M.; Venugopal, V., "Design, Implementation and Security Analysis Of Hardware Trojan Threats In FPGA," Communications (ICC), 2014 IEEE International Conference on, pp. 719, 724, 10-14 June 2014. doi: 10.1109/ICC.2014.6883404
Abstract: Hardware Trojan Threats (HTTs) are stealthy components embedded inside integrated circuits (ICs) with the intention of attacking and crippling the IC, much as viruses infect the human body. Previous efforts have focused essentially on systems being compromised using HTTs and on the effectiveness of physical parameters, including power consumption, timing variation, and utilization, for detecting HTTs. We propose a novel metric for hardware Trojan detection, coined the HTT detectability metric (HDM), that uses a weighted combination of normalized physical parameters. HTTs are identified by comparing the HDM with an optimal detection threshold; if the monitored HDM exceeds the estimated optimal detection threshold, the IC is tagged as malicious. As opposed to existing efforts, this work investigates both a system model from a designer perspective, for increasing the security of the device, and an adversary model from an attacker perspective, exposing and exploiting the device's vulnerabilities. Using existing Trojan implementations and Trojan taxonomy as a baseline, seven HTTs were designed and implemented on an FPGA testbed; these Trojans perform a variety of attacks, ranging from sensitive information leakage and denial of service to beating the Root of Trust (RoT). Security analysis of the implemented Trojans showed that existing detection techniques based on physical characteristics such as power consumption, timing variation, or utilization alone do not necessarily capture the existence of HTTs, detecting a maximum of 57% of the designed HTTs. On the other hand, 86% of the implemented Trojans were detected with HDM. We further carry out analytical studies to determine the optimal detection threshold that minimizes the sum of false alarm and missed detection probabilities.
Keywords: field programmable gate arrays; integrated logic circuits; invasive software; FPGA testbed; HDM;HTT detectability metric; HTT detection; ICs; RoT; Trojan taxonomy; denial of service; hardware Trojan detection technique; hardware Trojan threats; integrated circuits; missed detection probability; normalized physical parameters; optimal detection threshold; power consumption; root of trust; security analysis; sensitive information leak; summation of false alarm; timing variation; Encryption; Field programmable gate arrays; Hardware; Power demand; Timing; Trojan horses; Design; Hardware Trojans; Resiliency; Security (ID#: 15-5330)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883404&isnumber=6883277
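The detection rule itself is simple to state; the sketch below encodes our reading of it in Python, with invented weights, golden-model values, and threshold (the paper derives its threshold analytically):

```python
def hdm(measured, golden, weights):
    """Normalize each physical parameter against its golden-model value,
    then combine: larger deviations push the HDM score up."""
    score = 0.0
    for name, w in weights.items():
        deviation = abs(measured[name] - golden[name]) / golden[name]
        score += w * deviation
    return score

golden   = {"power_mW": 120.0, "delay_ns": 4.0, "util_pct": 61.0}
measured = {"power_mW": 131.0, "delay_ns": 4.3, "util_pct": 64.5}
weights  = {"power_mW": 0.5, "delay_ns": 0.3, "util_pct": 0.2}

THRESHOLD = 0.05   # would be tuned to trade false alarms against misses
score = hdm(measured, golden, weights)
print(f"HDM={score:.3f}",
      "-> flag as malicious" if score > THRESHOLD else "-> clean")
```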
Msolli, A.; Helali, A.; Maaref, H., "Enhance Resiliency To Pool Based Key Pre-Distribution Scheme," Computer & Information Technology (GSCIT), 2014 Global Summit on, pp. 1, 4, 14-16 June 2014. doi: 10.1109/GSCIT.2014.6970107
Abstract: Securing the information stored in or transmitted through a wireless sensor network against attacks is a primary objective. Key management provides many security services, such as confidentiality and authentication. Under the constraints of a WSN, designing a key management scheme is a major challenge. In this paper, we improve the resilience of the pool-based symmetric key pre-distribution scheme against node capture while securing connectivity coverage.
Keywords: cryptography; telecommunication security; wireless sensor networks; WSN; connectivity coverage security; information security; key management; pool based key predistribution scheme; resiliency enhancement; symmetric key predistribution scheme; wireless sensor network; Cryptography; Simulation; cryptography; key management scheme; resiliency against nodes capture; wireless sensors network (ID#: 15-5331)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6970107&isnumber=6970090
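For readers unfamiliar with the baseline being improved, the classic random key-pool scheme (in the style of Eschenauer and Gligor) can be sketched as follows; the paper's resiliency enhancement itself is not reproduced here, and all sizes are arbitrary:

```python
import random

random.seed(3)
POOL_SIZE, RING_SIZE = 1000, 50
pool = {i: f"key-{i}" for i in range(POOL_SIZE)}   # global key pool

def make_node():
    """Each node is pre-loaded with a random ring of key IDs."""
    return set(random.sample(range(POOL_SIZE), RING_SIZE))

def shared_key(a, b):
    """Shared-key discovery: any common ID yields a link key."""
    common = a & b
    return pool[min(common)] if common else None

nodes = [make_node() for _ in range(20)]
links = sum(shared_key(a, b) is not None
            for i, a in enumerate(nodes) for b in nodes[i + 1:])
print(f"{links} of {20 * 19 // 2} node pairs can establish a secure link")
```

With these parameters, the probability that two random rings intersect is high, so most pairs can set up a link key; capturing one node, however, exposes its whole ring, which is the weakness resiliency-oriented schemes target.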
Mihai-Gabriel, I.; Victor-Valeriu, P., "Achieving DDoS Resiliency In A Software Defined Network By Intelligent Risk Assessment Based On Neural Networks And Danger Theory," Computational Intelligence and Informatics (CINTI), 2014 IEEE 15th International Symposium on, pp. 319, 324, 19-21 Nov. 2014. doi: 10.1109/CINTI.2014.7028696
Abstract: Distributed Denial of Service (DDoS) attacks are becoming a very versatile weapon. Unfortunately, they are becoming very popular amongst cyber criminals, and they are also getting cheaper. As interest in such weapons grows on the black market, their scale reaches unimaginable proportions, as in the case of the Spamhaus attack, which was mitigated by CloudFlare through null-routing techniques. This paper presents a way of mitigating DDoS attacks in a Software Defined Network (SDN) environment by assessing risk through a cyber-defense system based on neural networks and the biological danger theory. In addition to mitigating attacks, the demo platform can also perform full packet capture in the SDN, if the central command component deems it necessary. These packet captures can be used later for forensic analysis and identification of the attacker.
Keywords: computer network security; digital forensics; neural nets; risk management; software defined networking; CloudFlare; DDoS attack mitigation; DDoS resiliency; SDN environment; Spamhaus attack; attacker identification; biological danger theory; cyber criminals; cyber-defense system; distributed denial of service attacks; forensic analysis; full packet capture; intelligent risk assessment; neural networks; null-routing technique; software defined network; Computer crime; Control systems; Delays; Monitoring; Neural networks; Servers (ID#: 15-5332)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7028696&isnumber=7028644
Barreto, C.; Giraldo, J.; Cardenas, A.A.; Mojica-Nava, E.; Quijano, N., "Control Systems for the Power Grid and Their Resiliency to Attacks," Security & Privacy, IEEE, vol. 12, no. 6, pp. 15, 23, Nov.-Dec. 2014. doi: 10.1109/MSP.2014.111
Abstract: Most government, industry, and academic efforts to protect the power grid have focused on information security mechanisms for preventing and detecting attacks. In addition to these mechanisms, control engineering can help improve power grid security.
Keywords: power grids; power system control; power system security; attack detection; attack prevention; control engineering; control systems; information security mechanisms; power grid security; Computer security; Control systems; Energy management; Power grids; Resilience; Smart grids; control systems; cyber-physical systems; power grid; resiliency; security; smart grid (ID#: 15-5333)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7006441&isnumber=7006395
Ali, S.S.; Sinanoglu, O.; Karri, R., "AES Design Space Exploration New Line For Scan Attack Resiliency," Very Large Scale Integration (VLSI-SoC), 2014 22nd International Conference on, pp. 1, 6, 6-8 Oct. 2014. doi: 10.1109/VLSI-SoC.2014.7004193
Abstract: Crypto-chips are vulnerable to side-channel attacks. Scan attack is one such side-channel attack which uses the scan-based DFT test infrastructure to leak the secret information of the crypto-chip. In the presence of scan, an attacker can run the chip in normal mode, and then by switching to the test mode, retrieve the intermediate results of the crypto-chip. Using only a few input-output pairs one can retrieve the entire secret key. Almost all the scan attacks on AES crypto-chip use the same iterative 128-bit AES design where the round register is placed exactly after the round operation. However, the attack potency may vary depending on the design of AES. In this work, we consider various designs of AES. We shed light on the impact of design style on the scan attack. We also consider response compaction in our analysis. We show that certain design decisions deliver inherent resistance to scan attack.
Keywords: cryptography; design for testability; AES design space exploration; DFT test infrastructure; advanced encryption standard; cryptochips; design style; input-output pairs; normal mode; response compaction; round operation; round register; scan attack resiliency; secret key; side-channel attacks; test mode; word length 128 bit; Ciphers; Clocks; Computer architecture; Encryption; Hamming distance; Microprocessors; Registers; AES Scan Chain; Scan Attack; Scan-based DFT; Security; Testability (ID#: 15-5334)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004193&isnumber=7004150
Vadari, Mani, "Dynamic Microgrids - A Potential Solution For Enhanced Resiliency In Distribution Systems," Test Conference (ITC), 2014 IEEE International, pp. 1, 1, 20-23 Oct. 2014. doi: 10.1109/TEST.2014.7035285
Abstract: Of late, microgrids are getting a lot of attention, not just to support national security at military bases, but also to provide more resilient power supplies at other types of facilities, to allow for increased penetration of renewables, and other reasons. College campuses, military bases, and even corporate campuses are exploring microgrid options. This has spurred creation of new technologies and control mechanisms that allow these systems to operate in a grid-connected mode and also independently for extended periods of time. In this presentation, we propose a radical new concept: a top-down breakup of the distribution grid into an interconnected set of microgrids. Such an architecture would dramatically change how utilities address storm response while also delivering utilities' other mandates. We call this the “dynamic microgrid”, a new concept that will move the microgrid from its present niche to a mainstream position. Dynamic microgrids have the potential to be a key element of the ultimate self-healing grid - the Holy Grail of the smart grid. They'd allow the grid to divide itself into smaller self-sustaining grids, which can then be stitched back to form the regular distribution grid.
Keywords: (not provided) (ID#: 15-5335)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7035285&isnumber=7035243
Atighetchi, M.; Adler, A., "A Framework For Resilient Remote Monitoring," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1, 8, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900090
Abstract: Today's activities in cyber space are more connected than ever before, driven by the ability to dynamically interact and share information with a changing set of partners over a wide variety of networks. To support dynamic sharing, computer systems and network are stood up on a continuous basis to support changing mission critical functionality. However, configuration of these systems remains a manual activity, with misconfigurations staying undetected for extended periods, unneeded systems remaining in place long after they are needed, and systems not getting updated to include the latest protections against vulnerabilities. This environment provides a rich environment for targeted cyber attacks that remain undetected for weeks to months and pose a serious national security threat. To counter this threat, technologies have started to emerge to provide continuous monitoring across any network-attached device for the purpose of increasing resiliency by virtue of identifying and then mitigating targeted attacks. For these technologies to be effective, it is of utmost importance to avoid any inadvertent increase in the attack surface of the monitored system. This paper describes the security architecture of Gestalt, a next-generation cyber information management platform that aims to increase resiliency by providing ready and secure access to granular cyber event data available across a network. Gestalt's federated monitoring architecture is based on the principles of strong isolation, least-privilege policies, defense-in-depth, crypto-strong authentication and encryption, and self-regeneration. Remote monitoring functionality is achieved through an orchestrated workflow across a distributed set of components, linked via a specialized secure communication protocol, that together enable unified access to cyber observables in a secure and resilient way.
Keywords: Web services; information management; security of data; Gestalt platform; attack identification; attack mitigation; communication protocol; computer networks; computer systems; cyber attacks; cyber observables; cyber space; granular cyber event data; mission critical functionality; national security threat; network-attached device; next-generation cyber information management platform; remote monitoring functionality ;resilient remote monitoring; Bridges; Firewalls (computing); Monitoring; Protocols; Servers; XML; cyber security; federated access; middleware; semantic web (ID#: 15-5336)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900090&isnumber=6900080
Tunc, C.; Fargo, F.; Al-Nashif, Y.; Hariri, S.; Hughes, J., "Autonomic Resilient Cloud Management (ARCM) Design and Evaluation," Cloud and Autonomic Computing (ICCAC), 2014 International Conference on, pp. 44, 49, 8-12 Sept. 2014. doi: 10.1109/ICCAC.2014.35
Abstract: Cloud computing is emerging as a new paradigm that aims at delivering computing as a utility. For the cloud computing paradigm to be fully adopted and effectively used, it is critical that the security mechanisms are robust and resilient to faults and attacks. Securing cloud systems is extremely complex due to the many interdependent tasks such as application layer firewalls, alert monitoring and analysis, source code analysis, and user identity management. It is strongly believed that we cannot build cloud services that are immune to attacks, so resiliency to attacks is becoming an important approach to addressing cyber-attacks and mitigating their impacts; for mission-critical systems, the demand for resiliency is even higher. In this paper, we present a methodology for Autonomic Resilient Cloud Management (ARCM) based on moving target defense, cloud service Behavior Obfuscation (BO), and autonomic computing. By continuously and randomly changing the cloud execution environments and platform types, it becomes difficult, especially for insider attackers, to determine the current execution environment and its existing vulnerabilities, thus allowing the system to evade attacks. We show how to apply ARCM to one class of applications, Map/Reduce, and evaluate its performance and overhead.
Keywords: cloud computing; security of data; software fault tolerance; ARCM; BO; autonomic resilient cloud management; cloud computing; cloud service behavior obfuscation; cloud system security; moving target defense; Cloud computing; Conferences; Autonomic Resilient Cloud Management; behavior obfuscation; resiliency (ID#: 15-5337)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024043&isnumber=7024029
Xin Chen; Jin-Hee Cho; Sencun Zhu, "Globaltrust: An Attack-Resilient Reputation System for Tactical Networks," Sensing, Communication, and Networking (SECON), 2014 Eleventh Annual IEEE International Conference on, pp. 275, 283, June 30 2014-July 3 2014. doi: 10.1109/SAHCN.2014.6990363
Abstract: In a military tactical network where a trust authority (e.g., a commander) makes a decision during a mission, assessing the trustworthiness of participating entities accurately is critical to mission success. In this work, we propose a trust-based reputation management scheme, called GlobalTrust, for minimizing false decisions on the reputation of nodes in the network. In the proposed scheme, nodes may be compromised and provide incorrect opinions to the trust authority, who conducts reputation evaluation towards all nodes based on the provided opinions. GlobalTrust achieves three goals: (1) maintaining a consistent global view towards each node; (2) obtaining high resiliency against various attack patterns; and (3) attaining highly accurate reputation values of nodes. Through extensive simulations comparing GlobalTrust with other existing schemes, we show that GlobalTrust minimizes false decisions while maintaining high resilience against various attack behaviors. Specifically, under various attacks, GlobalTrust can achieve a highly accurate consistent view on nodes' reputations even when the number of malicious nodes is up to 40% of all participating nodes.
Keywords: military communication; telecommunication security; GlobalTrust; attack-resilient reputation system; false decisions; malicious nodes; military tactical network; tactical networks; trust authority; trust-based reputation management; Conferences; Peer-to-peer computing; Protocols; Security; Sensors; Sparse matrices; Vectors; Reputation; Security; Tactical networks; Trust (ID#: 15-5338)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6990363&isnumber=6990316
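As a stand-in illustration of why robust aggregation keeps a consistent global view under lying minorities (this is a generic median aggregator, not GlobalTrust's actual algorithm), consider:

```python
from statistics import median

# opinions[reporter][subject] in [0, 1]; reporters 3 and 4 are compromised.
opinions = {
    0: {"n1": 0.9, "n2": 0.2},
    1: {"n1": 0.8, "n2": 0.3},
    2: {"n1": 0.9, "n2": 0.1},
    3: {"n1": 0.1, "n2": 0.9},   # malicious: inverts the truth
    4: {"n1": 0.0, "n2": 1.0},   # malicious
}

def reputation(subject):
    """The median stays accurate while fewer than half the reporters lie."""
    return median(o[subject] for o in opinions.values())

for n in ("n1", "n2"):
    print(n, reputation(n))      # n1 stays high (0.8), n2 stays low (0.3)
```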
Jie He; Yuexiang Yang; Xiaolei Wang; Chuan Tang; Yingzhi Zeng, "PeerDigger: Digging Stealthy P2P Hosts through Traffic Analysis in Real-Time," Computational Science and Engineering (CSE), 2014 IEEE 17th International Conference on, pp. 1528, 1535, 19-21 Dec. 2014. doi: 10.1109/CSE.2014.283
Abstract: P2P technology has been widely applied in many areas due to its excellent properties. Some botnets have also shifted towards decentralized architectures, since these provide better resiliency against detection and takedown efforts. Moreover, modern P2P bots tend to run on compromised hosts in a stealthy way, which renders most existing approaches ineffective, and few approaches address the problem of real-time detection. Yet it is important to detect bots as soon as possible in order to minimize their harm. In this paper, we propose PeerDigger, a novel real-time system capable of detecting stealthy P2P bots. PeerDigger first detects all P2P hosts based on several basic properties of flow records, and then distinguishes P2P bots from benign P2P hosts by analyzing their network behavior patterns. The experimental results demonstrate that our system is able to identify P2P bots with an average TPR of 98.07% and an average FPR of 1.5% within 4 minutes.
Keywords: computer network security; invasive software; peer-to-peer computing; real-time systems; telecommunication traffic;FPR;P2P host detection; P2P technology; PeerDigger; TPR; decentralized architectures; network behavior pattern analysis; real-time detection; stealthy P2P bot detection; Feature extraction; IP networks; Monitoring; Peer-to-peer computing; Real-time systems; Storms; Vectors; P2P network; bot detection; real-time; traffic analysis (ID#: 15-5339)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023794&isnumber=7023510
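The two-stage structure described in the abstract, first isolating P2P hosts from coarse flow-record properties and then separating bots from benign peers by behavioral regularity, can be sketched as follows. The field names, features, and thresholds are invented for illustration and are not the ones PeerDigger uses.

    # Illustrative two-stage pipeline: flag likely P2P hosts from coarse
    # flow-record properties, then separate bots by behavioral regularity.
    # Field names and thresholds are invented, not PeerDigger's.
    from collections import defaultdict

    def find_p2p_hosts(flows, min_peers=20, max_dns_ratio=0.2):
        """flows: iterable of dicts with keys src, dst, used_dns (bool)."""
        peers, dns = defaultdict(set), defaultdict(list)
        for f in flows:
            peers[f["src"]].add(f["dst"])
            dns[f["src"]].append(f["used_dns"])
        # P2P hosts contact many distinct peers, mostly without DNS lookups.
        return {h for h in peers
                if len(peers[h]) >= min_peers
                and sum(dns[h]) / len(dns[h]) <= max_dns_ratio}

    def looks_like_bot(host_flows, max_cv=0.1):
        """Bots often keep unusually regular inter-flow timing (each flow
        dict carries a 'ts' timestamp); benign P2P traffic is burstier."""
        times = sorted(f["ts"] for f in host_flows)
        gaps = [b - a for a, b in zip(times, times[1:])]
        if len(gaps) < 2:
            return False
        mean = sum(gaps) / len(gaps)
        var = sum((g - mean) ** 2 for g in gaps) / len(gaps)
        return mean > 0 and (var ** 0.5) / mean < max_cv

    flows = [{"src": "10.0.0.5", "dst": f"198.51.100.{i}", "used_dns": False}
             for i in range(25)]
    print(find_p2p_hosts(flows))  # -> {'10.0.0.5'}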
Wijayasekara, D.; Linda, O.; Manic, M.; Rieger, C., "FN-DFE: Fuzzy-Neural Data Fusion Engine for Enhanced Resilient State-Awareness of Hybrid Energy Systems," Cybernetics, IEEE Transactions on, vol. 44, no. 11, pp. 2065, 2075, Nov. 2014. doi: 10.1109/TCYB.2014.2323891
Abstract: Resiliency and improved state-awareness of modern critical infrastructures, such as energy production and industrial systems, is becoming increasingly important. As control systems become increasingly complex, the number of inputs and outputs increases. Therefore, in order to maintain sufficient levels of state-awareness, a robust system state monitoring must be implemented that correctly identifies system behavior even when one or more sensors are faulty. Furthermore, as intelligent cyber adversaries become more capable, incorrect values may be fed to the operators. To address these needs, this paper proposes a fuzzy-neural data fusion engine (FN-DFE) for resilient state-awareness of control systems. The designed FN-DFE is composed of a three-layered system consisting of: 1) traditional threshold-based alarms; 2) an anomalous behavior detector using a self-organizing fuzzy logic system; and 3) artificial neural network-based system modeling and prediction. The improved control system state-awareness is achieved via fusing input data from multiple sources and combining them into robust anomaly indicators. In addition, the neural network-based signal predictions are used to augment the resiliency of the system and provide coherent state-awareness despite temporary unavailability of sensory data. The proposed system was integrated and tested with a model of the Idaho National Laboratory's hybrid energy system facility known as HYTEST. Experiment results demonstrate that the proposed FN-DFE provides timely plant performance monitoring and anomaly detection capabilities. It was shown that the system is capable of identifying intrusive behavior significantly earlier than conventional threshold-based alarm systems.
Keywords: control engineering computing; fuzzy neural nets; power engineering computing; power system control; security of data; sensor fusion; FN-DFE engine; HYTEST facility; Idaho National Laboratory; anomalous behavior detector; artificial neural network-based system; control system state-awareness; control systems; critical infrastructure; energy production system; enhanced resilient state-awareness; fuzzy-neural data fusion engine; hybrid energy systems; industrial system; intelligent cyber adversaries; neural network-based signal predictions; self-organizing fuzzy logic system; system modeling; system prediction; system state monitoring; threshold based alarms; threshold-based alarm systems; Artificial neural networks; Control systems; Monitoring; Robustness; Sensor systems; Vectors; Artificial neural networks; data fusion; fuzzy logic systems; resilient control systems; state-awareness (ID#: 15-5340)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823672&isnumber=6922172
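As a rough illustration of the layered fusion idea, the sketch below combines a hard threshold alarm with a prediction residual into one anomaly indicator. The weights are placeholders, layer 2 (the self-organizing fuzzy detector) is omitted, and a naive persistence "model" stands in for the paper's neural predictor.

    # Toy fusion of FN-DFE-style indicator layers into one anomaly score.
    # Layer 2 (the self-organizing fuzzy detector) is omitted, and a naive
    # persistence "model" stands in for the paper's neural predictor.
    def fused_anomaly_score(reading, prev_reading, lo, hi,
                            w_threshold=0.4, w_residual=0.6):
        # Layer 1: classic threshold alarm (hard 0/1 indicator).
        threshold_alarm = 0.0 if lo <= reading <= hi else 1.0
        # Layer 3 stand-in: residual against a naive persistence prediction.
        residual = min(abs(reading - prev_reading) / (hi - lo), 1.0)
        return w_threshold * threshold_alarm + w_residual * residual

    # A reading still inside the alarm band but far from its predicted value
    # already raises a nonzero score, before any hard threshold trips.
    print(fused_anomaly_score(reading=9.7, prev_reading=5.1, lo=0.0, hi=10.0))

This illustrates why such fusion can flag intrusive behavior earlier than a threshold-only alarm: the residual term reacts while readings are still nominally in bounds.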
Junjie Zhang; Perdisci, R.; Wenke Lee; Xiapu Luo; Sarfraz, U., "Building a Scalable System for Stealthy P2P-Botnet Detection," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 1, pp. 27, 38, Jan. 2014. doi: 10.1109/TIFS.2013.2290197
Abstract: Peer-to-peer (P2P) botnets have recently been adopted by botmasters for their resiliency against take-down efforts. Besides being harder to take down, modern botnets tend to be stealthier in the way they perform malicious activities, making current detection approaches ineffective. In addition, the rapidly growing volume of network traffic calls for high scalability of detection systems. In this paper, we propose a novel scalable botnet detection system capable of detecting stealthy P2P botnets. Our system first identifies all hosts that are likely engaged in P2P communications. It then derives statistical fingerprints to profile P2P traffic and further distinguish between P2P botnet traffic and legitimate P2P traffic. The parallelized computation with bounded complexity makes scalability a built-in feature of our system. Extensive evaluation has demonstrated both high detection accuracy and great scalability of the proposed system.
Keywords: computer network security; peer-to-peer computing; telecommunication traffic; P2P botnet traffic; P2P communications; detection systems; malicious activities; network traffic; peer-to-peer botnets; scalable system; statistical fingerprints; stealthy P2P botnet detection; Educational institutions; Electronic mail; Feature extraction; Monitoring; Overlay networks; Peer-to-peer computing; Scalability; Botnet; P2P; intrusion detection; network security (ID#: 15-5341)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6661360&isnumber=6684617
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Threat Vector Metrics and Privacy, 2014 |
As systems become larger and more complex, the surface that hackers can attack also grows. In this set of recent research articles, topics are explored that include smartphone malware, zero-day polymorphic worm detection, source identification, drive-by download attacks, two-factor face authentication, semantic security, and code structures. Of particular interest to the Science of Security community are the articles focused on measurement and on privacy. All were presented in 2014.
Karabat, C.; Topcu, B., "How To Assess Privacy Preservation Capability Of Biohashing Methods?: Privacy Metrics," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp. 2217, 2220, 23-25 April 2014. doi: 10.1109/SIU.2014.6830705
Abstract: In this paper, we evaluate the privacy preservation capability of biometric hashing methods. Although there is some work in the literature on privacy evaluation of biometric template protection methods, it fails to cover all such methods. To the best of our knowledge, there is no work on privacy metrics and assessment for biometric hashing methods. In this work, we use several metrics under different threat scenarios to assess the privacy protection level of biometric hashing methods. The simulation results demonstrate that biometric hash vectors may leak private information, especially under advanced threat scenarios.
Keywords: authorisation; biometrics (access control); data protection; biometric hash vectors; biometric hashing methods; biometric template protection methods; privacy metrics; privacy preservation capability assessment; privacy preservation capability evaluation; privacy protection level assessment; private information leakage; threat scenarios; Conferences; Internet; Measurement; Privacy; Security; Signal processing; Simulation; biometric; biometric hash; metrics; privacy (ID#: 15-5392)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830705&isnumber=6830164
Leitold, F.; Arrott, A.; Colon Osorio, F.C., "Combining Commercial Consensus And Community Crowd-Sourced Categorization Of Web Sites For Integrity Against Phishing And Other Web Fraud," Malicious and Unwanted Software: The Americas (MALWARE), 2014 9th International Conference on, pp. 40, 49, 28-30 Oct. 2014. doi: 10.1109/MALWARE.2014.6999407
Abstract: Traditionally, the protection provided by 3rd party anti-Malware endpoint security products is measured using a sample set that is representative of the prevalent universe of attacks at that point in time (malicious URLs and/or malicious files in the wild). The methodology used for such a selection of Malware attack samples, the so-called Stimulus Workload (SW), has been a matter of controversy for a number of years. The reason is simple: given a carefully crafted selection of such files or URLs, the results of the measurements can vary drastically, favoring one vendor over another. In [1], Colon Osorio et al. argued that the selection process must be strictly regulated, and further, that such a selection must take into account the fact that amongst the samples selected, some pose a greater threat to users than others, as they are more widespread and hence more likely to affect a given user. Further, some Malware attack samples may only be found on specific websites, affect specific countries/regions, or only be relevant to a particular operating system version or interface language (English, German, Chinese, and so forth). In [1], [2], the idea of Customizable Stimulus Workloads (CSWs) was first suggested, in which the collection of samples selected as the Stimulus Workload is required to take into account all the elements described above. Within this context, CSWs are created by filtering attack samples based on prevalence, geographic regions, customer application environments, and other factors. In this manuscript we pay special attention to one such application environment: Social Networks. With this target environment in mind, a CSW was created and used to evaluate the performance of endpoint security products; that is, we examine the protection provided against Malware that uses internet Social Networks as part of the attack vector. When Social Network CSWs are used together with differential metrics of effectiveness, we found that amongst the Social Networks studied (Facebook, Google+, and Twitter) the amount of inherent protection provided ranged from negligible to a level that we will call modest self-protection (0% to 18% prevention rate). Further, results of our evaluation showed that the supplemental protection provided by 3rd party anti-Malware products was erratic, ranging from a low of 0% to a high of 93% depending on the product and/or Social Network combination.
Keywords: computer crime; fraud; invasive software; social networking (online); Facebook; Google; Twitter; Web fraud; Web sites; antimalware endpoint security product; commercial consensus; community crowd-sourced categorization; customizable stimulus workload; end-point security product; malicious URL; malicious files; phishing; social network; Electronic mail; Facebook; Internet; Malware; Media; Uniform resource locators (ID#: 15-5393)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6999407&isnumber=6999400
Yadav, M.; Gupta, S.K.; Saket, R.K., "Experimental Security Analysis for SAODV vs SZRP in Ad-Hoc Networks," Computational Intelligence and Communication Networks (CICN), 2014 International Conference on, pp. 819, 823, 14-16 Nov. 2014. doi: 10.1109/CICN.2014.175
Abstract: An ad-hoc network, due to fundamental characteristics such as open-environment operation, random topology, and limited capability, is especially at risk from a security point of view. Malicious threats during data transmission from one user to another can degrade system performance and make data transmission insecure. Many routing protocols treat these security issues as a major point of consideration and try to overcome the security threats in ad-hoc networks. In this article, a scenario is set up in simulation to evaluate the performance and security of two secure routing protocols: secure ad-hoc on-demand distance vector (SAODV) and secure zone routing protocol (SZRP). The simulation has been run a number of times with different values of pause time ranging from 0 to 800 seconds for both protocols. Finally, the simulation has been run in a malicious environment with the number of malicious nodes ranging from 2 to 18 for both protocols. Our analysis uses two performance metrics: packet delivery ratio and end-to-end delay. Experimental results have been obtained mainly using NS-2 (version 2.34), and Excel graphs have been prepared from the .tr (trace) files. Based on the experimental outcomes, the paper concludes that SZRP outperforms SAODV for real-time applications.
Keywords: data communication; mobile ad hoc networks; routing protocols; security of data; MANET; NS-2 version 2.34; SAODV experimental security analysis; SZRP; data transmission insecurity; excel graphs; malicious threats; mobile ad hoc network; secure ad hoc on demand vector; secure zone routing protocol; Ad hoc networks; Delays; Routing; Routing protocols; Security; Ad-hoc network; MANET; Malicious node; PDF; SAODV; SZRP; ns-2 (ID#: 15-5394)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7065595&isnumber=7065338
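The two metrics the paper reports, packet delivery ratio and end-to-end delay, reduce to simple arithmetic over send/receive events recovered from trace files. The sketch below assumes a simplified event list; real NS-2 .tr lines carry many more fields, so the parsing here is schematic only.

    # Schematic computation of the paper's two metrics, packet delivery ratio
    # (PDR) and average end-to-end delay, from simplified trace events.
    # Real NS-2 .tr lines carry many more fields; this is only the arithmetic.
    def pdr_and_delay(events):
        """events: (action, packet_id, time); action is 's' for sent at the
        source, 'r' for received at the destination."""
        sent, recv = {}, {}
        for action, pid, t in events:
            (sent if action == "s" else recv)[pid] = t
        delivered = [pid for pid in sent if pid in recv]
        pdr = len(delivered) / len(sent)
        avg_delay = sum(recv[p] - sent[p] for p in delivered) / len(delivered)
        return pdr, avg_delay

    events = [("s", 1, 0.10), ("r", 1, 0.35), ("s", 2, 0.20),
              ("s", 3, 0.30), ("r", 3, 0.95)]
    print(pdr_and_delay(events))  # -> (0.666..., 0.45): packet 2 was dropped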
Sang-Ho Na; Eui-Nam Huh, "A Methodology Of Assessing Security Risk Of Cloud Computing In User Perspective For Security-Service-Level Agreements," Innovative Computing Technology (INTECH), 2014 Fourth International Conference on, pp. 87, 92, 13-15 Aug. 2014. doi: 10.1109/INTECH.2014.6927759
Abstract: The underlying cloud computing feature of resource outsourcing makes the Service Level Agreement (SLA) a critical factor for Quality of Service (QoS), and many researchers have addressed the question of how an SLA can be evaluated. Lately, security-SLAs have also received much attention with the Security-as-a-Service model in cloud computing. The quantitative measurement of security metrics is a considerably difficult problem and must consider the multi-dimensional aspects of security threats and user requirements. To address these issues, we provide a novel methodology of security risk assessment for security-service-level agreements in cloud services, based on a multi-dimensional approach that depends on service type, probabilities of threats, and network environments to reach a security-SLA evaluation.
Keywords: cloud computing; probability; risk management; security of data; QoS; cloud computing; multidimensional approach; quality of service; quantitative measurement; resource outsourcing; security metrics; security risk assessment; security threats; security-SLA evaluation; security-SLAs; security-as-a-service mode; security-service-level agreements; service level agreement; threat probability; user perspective; user requirements; Availability; Cloud computing; Measurement; Quality of service; Risk management; Security; Vectors; Personal Cloud Service; Security Risk Assessment in User Perspective; Security-SLA (ID#: 15-5395)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6927759&isnumber=6927737
Xiaokuan Zhang; Haizhong Zheng; Xiaolong Li; Suguo Du; Haojin Zhu, "You Are Where You Have Been: Sybil Detection Via Geo-Location Analysis In OSNs," Global Communications Conference (GLOBECOM), 2014 IEEE, pp. 698, 703, 8-12 Dec. 2014. doi: 10.1109/GLOCOM.2014.7036889
Abstract: Online Social Networks (OSNs) are facing an increasing threat of sybil attacks. Sybil detection is regarded as one of the major challenges for OSN security. The existing sybil detection proposals that leverage graph theory or exploit unique clickstream patterns are either based on unrealistic assumptions or limited to service providers. In this study, we introduce a novel sybil detection approach by exploiting the fundamental mobility patterns that separate real users from sybil ones. The proposed approach is motivated as follows. On the one hand, OSNs including Yelp and Dianping allow us to obtain users' mobility trajectories based on their online reviews and the locations of their visited shops/restaurants. On the other hand, a real user's mobility is generally predictable and confined to a limited neighborhood, while a sybil's mobility is forged based on paid review missions. To exploit the mobility differences between real and sybil users, we introduce an entropy-based definition to capture users' mobility patterns. Then we design a new sybil detection model by incorporating the newly defined location entropy based metrics into other traditional feature sets. The proposed sybil detection model can significantly improve the performance of sybil detection, which is well demonstrated by extensive evaluations based on a data set from Dianping.
Keywords: entropy; mobile computing; mobility management (mobile radio); security of data; social networking (online); Dianping; OSN security; Yelp; geolocation analysis; graph theory; location entropy based metrics; online social network; sybil attack detection; sybil mobility forgery; user mobility trajectory; Databases; Entropy; Feature extraction; Information systems; Measurement; Security; Support vector machines; Entropy; Location-Based Feature; Minimum Covering Circle; Sybil Detection (ID#: 15-5396)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036889&isnumber=7036769
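The paper's central feature is an entropy measure over a user's visited locations. One plausible reading of that idea: a genuine user's check-ins cluster in a small neighborhood (low entropy), while a sybil completing paid review missions "visits" scattered venues (high entropy). The sketch below computes plain Shannon entropy over venue identifiers; the paper's exact definition and feature set may differ.

    # Minimal location-entropy feature for sybil detection: plain Shannon
    # entropy over the venues a user has reviewed. The decision boundary
    # between "real" and "sybil" is left to a downstream classifier.
    import math
    from collections import Counter

    def location_entropy(checkins):
        """checkins: list of venue identifiers (or grid cells) the user reviewed."""
        counts = Counter(checkins)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    real_user = ["cafe_A"] * 8 + ["gym_B"] * 4 + ["shop_C"] * 2  # confined mobility
    sybil = [f"venue_{i}" for i in range(14)]  # one paid review everywhere
    print(location_entropy(real_user), location_entropy(sybil))  # ~1.38 vs ~3.81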
Goudar, V.; Potkonjak, M., "On Admitting Sensor Fault Tolerance While Achieving Secure Biosignal Data Sharing," Healthcare Informatics (ICHI), 2014 IEEE International Conference on, pp. 266, 275, 15-17 Sept. 2014. doi: 10.1109/ICHI.2014.44
Abstract: Remote health monitoring BASNs promise substantive improvements in the quality of healthcare by providing access to diagnostically rich patient data in real-time. However, adoption is hindered by the threat of compromise of the diagnostic quality of the data by faults. Simultaneously, unresolved issues exist with the secure sharing of the sensitive medical data measured by automated BASNs, stemming from the need to provide the data owner (BASN user / patient) and the data consumers (healthcare providers, insurance companies, medical research facilities) secure control over the medical data as it is shared. We address these issues with a robust watermarking approach constrained to leave primary data semantic metrics unaffected and secondary metrics affected minimally. Further, the approach is coordinated with a fault-tolerant sensor partitioning technique to afford high semantic accuracy together with recovery of biosignal semantics in the presence of sensor faults, while preserving the robustness of the watermark so that it is not easily corrupted, recovered or spoofed by malicious data consumers. Based on experimentally collected datasets from a gait-stability monitoring BASN, we show that our watermarking technique can robustly and effectively embed up to 1000-bit watermarks under these constraints.
Keywords: body area networks; body sensor networks; health care; medical administrative data processing; security of data; watermarking; biosignal semantic recovery; body area sensor networks; data semantic metrics; fault tolerant sensor partitioning technique; gait-stability monitoring BASN; health care; malicious data consumers; patient data; remote health monitoring BASNs; robust watermarking approach; secure biosignal data sharing; sensitive medical data secure sharing; sensor fault tolerance; Encoding; Measurement; Robustness; Semantics; Vectors; Watermarking; Body Area Networks; Fault Tolerance; Medical Data Security; Medical Data Sharing; Watermarking (ID#: 15-5397)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7052499&isnumber=7052453
Narayanan, A.; Lihui Chen; Chee Keong Chan, "AdDetect: Automated Detection Of Android Ad Libraries Using Semantic Analysis," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2014 IEEE Ninth International Conference on, pp. 1, 6, 21-24 April 2014. doi: 10.1109/ISSNIP.2014.6827639
Abstract: Applications that run on mobile operating systems such as Android use in-app advertisement libraries for monetization. Recent research reveals that many ad libraries, including popular ones, pose threats to user privacy. Some aggressive ad libraries engage in active privacy leaks with the intention of providing targeted ads. A few intrusive ad libraries are classified as adware by commercial mobile anti-virus apps. Despite such issues, semantic detection of ad libraries in Android apps remains an unsolved problem. To this end, we have proposed and developed the AdDetect framework to perform automatic semantic detection of in-app ad libraries using semantic analysis and machine learning. A module decoupling technique based on hierarchical clustering is used to identify and recover the primary and non-primary modules of apps. Each of these modules is then represented as a vector of semantic features. An SVM classifier trained with these feature vectors is used to detect ad libraries. We have conducted an experimental study on 300 apps spread across 15 categories obtained from the official market to verify the effectiveness of AdDetect. The simulation results are promising: AdDetect achieves 95.34% accurate detection of ad libraries with very few false positives. Further analysis reveals that the proposed detection mechanism is robust against common obfuscation techniques. Detailed analysis of the detection results and the semantic characteristics of different families of ad libraries is also presented.
Keywords: Android (operating system); data privacy; learning (artificial intelligence); pattern classification; pattern clustering; semantic networks; software libraries; support vector machines; AdDetect framework; Android ad libraries; Android apps; SVM classifier; active privacy leaks; adware; automatic semantic detection; commercial mobile antivirus apps; feature vectors; hierarchical clustering; in-app ad libraries; in-app advertisement libraries; intrusive ad libraries; machine learning; mobile operating systems; module decoupling technique; monetization; nonprimary modules; obfuscation techniques; semantic analysis; semantic characteristics; semantic features; user privacy; Androids; Feature extraction; Humanoid robots; Java; Libraries; Semantics; Vectors (ID#: 15-5398)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827639&isnumber=6827478
Bin Liang; Wei You; Liangkun Liu; Wenchang Shi; Heiderich, M., "Scriptless Timing Attacks on Web Browser Privacy," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp. 112, 123, 23-26 June 2014. doi: 10.1109/DSN.2014.93
Abstract: Existing Web timing attack methods are heavily dependent on executing client-side scripts to measure the time. However, many techniques have recently been proposed to block the execution of suspicious scripts. This paper presents a novel timing attack method to sniff users' browsing histories without executing any scripts. Our method is based on the fact that when a resource is loaded from the local cache, its rendering process begins earlier than when it is loaded from a remote website. We leverage some Cascading Style Sheets (CSS) features to indirectly monitor the rendering of the target resource. Three practical attack vectors are developed for different attack scenarios and applied to six popular desktop and mobile browsers. The evaluation shows that our method can effectively sniff users' browsing histories with very high precision. We believe that modern browsers protected by script-blocking techniques are still likely to suffer serious privacy leakage threats.
Keywords: data privacy; online front-ends; CSS features; Web browser privacy; Web timing attack methods; cascading style sheets; client-side scripts; desktop browser; mobile browser; privacy leakage threats; rendering process; script-blocking techniques; scriptless timing attacks; user browsing history; Animation; Browsers; Cascading style sheets; History; Rendering (computer graphics); Timing; Web privacy; browsing history; scriptless attack; timing attack (ID#: 15-5399)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903572&isnumber=6903544
Wenhai Sun; Bing Wang; Ning Cao; Ming Li; Wenjing Lou; Hou, Y.T.; Hui Li, "Verifiable Privacy-Preserving Multi-Keyword Text Search in the Cloud Supporting Similarity-Based Ranking," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 11, pp. 3025, 3035, Nov. 2014. doi: 10.1109/TPDS.2013.282
Abstract: With the growing popularity of cloud computing, huge amounts of documents are outsourced to the cloud for reduced management cost and ease of access. Although encryption helps protect user data confidentiality, it leaves well-functioning yet practically efficient secure search over encrypted data a challenging problem. In this paper, we present a verifiable privacy-preserving multi-keyword text search (MTS) scheme with similarity-based ranking to address this problem. To support multi-keyword search and search result ranking, we propose to build the search index based on term frequency and the vector space model with cosine similarity measure to achieve higher search result accuracy. To improve the search efficiency, we propose a tree-based index structure and various adaptive methods for the multi-dimensional (MD) algorithm so that the practical search efficiency is much better than that of linear search. To further enhance the search privacy, we propose two secure index schemes to meet the stringent privacy requirements under strong threat models, i.e., the known ciphertext model and the known background model. In addition, we devise a scheme upon the proposed index tree structure to enable authenticity checks over the returned search results. Finally, we demonstrate the effectiveness and efficiency of the proposed schemes through extensive experimental evaluation.
Keywords: cloud computing; cryptography; data privacy; database indexing; information retrieval; text analysis; tree data structures; ciphertext model; cloud computing; cloud supporting similarity-based ranking; cosine similarity measure; data encryption; management cost reduction; multidimensional algorithm; search privacy; secure index schemes; similarity-based ranking; term frequency; tree-based index structure; user data confidentiality; vector space model; verifiable privacy-preserving multikeyword text search; Encryption; Frequency measurement; Indexes; Privacy; Servers; Vectors; Cloud computing; multi-keyword search; privacy-preserving search; similarity-based ranking; verifiable search (ID#: 15-5400)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6656804&isnumber=6919360
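Setting aside the encryption, tree index, and verifiability layers, the underlying ranking step is the classic vector space model: TF-IDF document vectors scored against a query by cosine similarity. The plaintext sketch below shows only that step, with toy documents invented for illustration.

    # Plaintext sketch of the ranking core: TF-IDF document vectors scored
    # against a query by cosine similarity. Encryption, the tree index, and
    # result verification are omitted entirely.
    import math
    from collections import Counter

    def tfidf_vectors(docs):
        n = len(docs)
        df = Counter(t for d in docs for t in set(d.split()))
        return [{t: tf * math.log(n / df[t]) for t, tf in Counter(d.split()).items()}
                for d in docs]

    def cosine(q, d):
        dot = sum(q[t] * d.get(t, 0.0) for t in q)
        nq = math.sqrt(sum(v * v for v in q.values()))
        nd = math.sqrt(sum(v * v for v in d.values()))
        return dot / (nq * nd) if nq and nd else 0.0

    docs = ["secure cloud search", "cloud storage billing", "keyword search index"]
    # Embed the query in the same vector space as the documents.
    *doc_vecs, query_vec = tfidf_vectors(docs + ["secure keyword search"])
    ranking = sorted(range(len(docs)),
                     key=lambda i: cosine(query_vec, doc_vecs[i]), reverse=True)
    print(ranking)  # indices of the most similar documents first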
Ambusaidi, M.A.; Xiangjian He; Zhiyuan Tan; Nanda, P.; Liang Fu Lu; Nagar, U.T., "A Novel Feature Selection Approach for Intrusion Detection Data Classification," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 82, 89, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.15
Abstract: Intrusion Detection Systems (IDSs) play a significant role in monitoring and analyzing daily activities occurring in computer systems to detect occurrences of security threats. However, the analytical data routinely produced from computer networks is usually very large in size. This creates a major challenge for IDSs, which need to examine all features in the data to identify intrusive patterns. The objective of this study is to analyze and select the more discriminative input features for building computationally efficient and effective schemes for an IDS. For this, a hybrid feature selection algorithm combining wrapper and filter selection processes is designed in this paper. Two main phases are involved in this algorithm. The upper phase conducts a preliminary search for an optimal subset of features, in which the mutual information between the input features and the output class serves as a determinant criterion. The selected set of features from the previous phase is further refined in the lower phase in a wrapper manner, in which the Least Square Support Vector Machine (LSSVM) is used to guide the selection process and retain an optimized set of features. The efficiency and effectiveness of our approach is demonstrated through building an IDS and a fair comparison with other state-of-the-art detection approaches. The experimental results show that our hybrid model is promising in detection compared to the previously reported results.
Keywords: feature selection; filtering theory; least squares approximations; pattern classification; security of data; support vector machines; IDS; LSSVM; feature selection approach; filter selection process; intrusion detection data classification; least square support vector machine; wrapper selection process; Accuracy; Feature extraction; Intrusion detection; Mutual information; Redundancy; Support vector machines; Training; Feature selection; Floating search; Intrusion detection; Least square support vector machines; Mutual information (ID#: 15-5401)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011237&isnumber=7011202
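The filter (upper) phase can be pictured as scoring each feature by its mutual information with the class label and keeping the top k, as in the sketch below. The LSSVM-guided wrapper refinement is not shown, and the toy data and k are invented for illustration.

    # Sketch of the filter phase: score each discrete feature by its mutual
    # information with the class label and keep the top k. The LSSVM-guided
    # wrapper refinement is not shown; the toy data is invented.
    import math
    from collections import Counter

    def mutual_information(xs, ys):
        n = len(xs)
        px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
        return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
                   for (x, y), c in pxy.items())

    def select_top_k(samples, labels, k):
        """samples: equal-length feature tuples; labels: class per sample."""
        scores = [(mutual_information([s[j] for s in samples], labels), j)
                  for j in range(len(samples[0]))]
        return [j for _, j in sorted(scores, reverse=True)[:k]]

    samples = [(0, 1, 0), (1, 1, 0), (0, 0, 1), (1, 0, 1)]
    labels = ["normal", "normal", "attack", "attack"]
    print(select_top_k(samples, labels, k=2))  # features 1 and 2 track the label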
Shatilov, K.; Boiko, V.; Krendelev, S.; Anisutina, D.; Sumaneev, A., "Solution For Secure Private Data Storage In A Cloud," Computer Science and Information Systems (FedCSIS), 2014 Federated Conference on, pp. 885, 889, 7-10 Sept. 2014. doi: 10.15439/2014F43
Abstract: Cloud computing and, more particularly, cloud databases are a great technology for remote centralized data management. However, there are some drawbacks, including privacy issues, insider threats and potential database thefts. Full encryption of a remote database does solve the problem, but disables many operations that can be performed on the DBMS side; therefore, the problem requires a much more complex solution and specific encryptions. In this paper, we propose a solution for secure private data storage that protects the confidentiality of user data stored in the cloud. The solution uses proprietary order-preserving and homomorphic encryptions. The proposed approach includes analysis of the user's SQL queries, encryption of vulnerable data, and decryption of the data selection returned from the DBMS. We have validated our approach through the implementation of a SQL query and DBMS reply processor, which is discussed in this paper, along with the secure cloud database architecture and the encryptions used.
Keywords: cloud computing; cryptography; data privacy; distributed databases; DBMS replies processor; SQL queries; cloud computing; cloud databases; data selection; database thefts; encryption; privacy issues; remote centralized data managing; remote database; secure cloud database architecture; secure private data storage; user data; vulnerable data; Encoding; Encryption; Query processing; Vectors (ID#: 15-5402)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933109&isnumber=6932982
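The authors' encryptions are proprietary, but the order-preserving idea they rely on can be conveyed with a deliberately insecure toy: a keyed, strictly monotone mapping that lets an untrusted DBMS answer range queries over ciphertexts. This is for intuition only; it is not a real OPE scheme and not the paper's construction.

    # Toy illustration of the order-preserving idea. Intuition only: NOT a
    # secure OPE scheme and NOT the paper's proprietary encryption.
    import hashlib

    def ope_encrypt(value, key):
        """Cumulative sum of keyed pseudorandom positive gaps; strictly
        increasing in value, so plaintext order survives."""
        total = 0
        for i in range(value + 1):
            digest = hashlib.sha256(f"{key}:{i}".encode()).digest()
            total += 1 + digest[0]  # gap in [1, 256]
        return total

    a, b, c = (ope_encrypt(v, key="k1") for v in (3, 7, 500))
    assert a < b < c  # order survives encryption
    # The server can now evaluate "WHERE enc_col BETWEEN a AND c" blindly.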
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Trust and Trustworthiness, 2014 |
Trust is created in information security through cryptography to assure the identity of external parties. The works cited here look at methods to measure trustworthiness. All were presented in 2014.
Lifeng Wang; Zhengping Wu, "A Trustworthiness Evaluation Framework in Cloud Computing for Service Selection," Cloud Computing Technology and Science (CloudCom), 2014 IEEE 6th International Conference on, pp. 101, 106, 15-18 Dec. 2014. doi: 10.1109/CloudCom.2014.107
Abstract: Cloud computing provides many benefits for individuals and enterprises by offering a range of computing services. The service dynamism, elasticity, economy and choices are too attractive to ignore. At the same time, cloud computing has opened up a new frontier of challenges by introducing trust scenarios; the trustworthiness evaluation of cloud services is a paramount concern. In this paper, we present a framework to quantitatively measure and rank the trustworthiness of cloud services. In particular, we address the fundamental understanding of trustworthiness, quantitative trustworthiness metrics, a unified scale of trust factors, trust factor categorization, trust coordinates and multi-criteria analysis for trustworthiness decision making. Our comprehensive framework of trustworthiness evaluation contains five basic building blocks. The preprocessing block queries and calculates any existing trustworthiness record; if no matching record is found, the trust factors are collected. The trust factor management block categorizes the trust factors and converts them using a unified scale. The trust factor processing block handles weighting and positioning of trust factors. The trustworthiness decision making block provides the calculation of cloud service trustworthiness, and the results are recorded in our trustworthiness record block. The proposed trustworthiness measurement framework is employed in several experiments using an existing trust dataset. The analysis based on the experiment results indicates that our trustworthiness evaluation is accurate and flexible.
Keywords: cloud computing; trusted computing; block query preprocessing; cloud computing; computing service selection; multicriteria analysis; quantitative trustworthiness metrics; service choice; service dynamism; service economy; service elasticity; trust coordinate; trust factors; trust factors categorization; trust factors scale; trustworthiness decision making; trustworthiness evaluation framework; trustworthiness measure; trustworthiness ranking; trustworthiness record; Accuracy; Cloud computing; Decision making; Measurement; Ontologies; Peer-to-peer computing; Radio frequency; cloud service selection; cloud service trustworthiness; multi-criteria analysis; trust coordinate; trust metrics; trustworthiness evaluation (ID#: 15-5358)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7037654&isnumber=7036227
Lifeng Wang; Zhengping Wu, "Evaluation Of E-Commerce System Trustworthiness Using Multi-Criteria Analysis," Computational Intelligence in Multi-Criteria Decision-Making (MCDM), 2014 IEEE Symposium on, pp. 86, 93, 9-12 Dec. 2014. doi: 10.1109/MCDM.2014.7007192
Abstract: Trustworthiness is a very critical element and should be treated as an important reference when customers try to select proper e-commerce systems. Trustworthiness evaluation requires the management of a wide variety of information types, parameters and uncertainties. Multi-criteria decision analysis (MCDA) has been regarded as a suitable set of methods to perform trustworthiness evaluations as a result of its flexibility and the possibilities it offers. To make trustworthiness measurement simple and standardized, this paper proposes a novel trustworthiness measurement model based on multi-criteria decision analysis. Recently, great efforts have been made to develop decision making for the evaluation of trustworthiness and reputation; however, these research works remain at the stage of theoretical research. This paper proposes and implements a trustworthiness measurement model using a multi-criteria decision making approach for e-commerce systems. First, this paper recognizes trust factors of e-commerce systems and distributes the factors in our designed multi-dimensional trust space and trustworthiness measurement model. All relevant factors are filtered, categorized and quantified. Then, our designed multi-criteria analysis mechanism deals with the trust factors and analyzes their trust features from different perspectives. Finally, the evaluated trustworthiness result is provided. Meanwhile, we also have a knowledge-learning-based approach to improve the accuracy of the result. At the end of this paper, we describe several experiments conducted to validate our designed trustworthiness measurement against real-world data; the evaluated trustworthiness results and the real-world data match very well.
Keywords: decision theory; electronic commerce; learning (artificial intelligence);trusted computing; MCDA; e-commerce system; knowledge learning based approach; multicriteria decision analysis; multidimensional trust space; trust factors; trust features; trustworthiness evaluation; trustworthiness measurement model; Analytical models; Decision making; Extraterrestrial measurements; History; Peer-to-peer computing; Social network services; Vectors; measurement; multi-criteria analysis; trust model; trust space; trustworthiness (ID#: 15-5359)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7007192&isnumber=7007176
Lifeng Wang; Zhengping Wu, "A Novel Trustworthiness Measurement Model for Cloud Service," Utility and Cloud Computing (UCC), 2014 IEEE/ACM 7th International Conference on, pp. 928, 933, 8-11 Dec. 2014. doi: 10.1109/UCC.2014.151
Abstract: Recent surveys show an enormous increase in organizations intending to adopt the cloud, but one of their major obstructions is the trustworthiness evaluation of cloud service candidates. Performing evaluations of cloud service candidates is expensive and time consuming, especially with the breadth of services available today. In this situation, this paper proposes a novel trustworthiness measurement model to evaluate cloud service trustworthiness. Using the proposed trustworthiness measurement model, we first recognize and categorize trust factors into groups of shared factors and unique factors. After special treatment of the unique factors, all the trust factors in the two groups are located in our designed trust dimension by different weighting and positioning approaches. All the trust factors are then converted to trust vectors, and these various trust vectors of the services are considered by the designed multi-criteria analysis mechanism, which can help us analyze trust features from different perspectives and provide a comprehensive trustworthiness evaluation. In case the measurement result is inconsistent with user preference, we also provide an adjustment approach based on knowledge learning to enhance the accuracy of the measurement result. At the end of this paper, our designed trustworthiness measurement model is validated by several experiments. The experiments are designed upon a real-world dataset, and the results indicate that the accuracy of our measurement can be guaranteed.
Keywords: cloud computing; trusted computing; cloud service; knowledge learning; multicriteria analysis mechanism; positioning approach; trust dimension; trust factors; trust features; trust vectors; trustworthiness evaluation; trustworthiness measurement model; weighting approach; Accuracy; Cloud computing; Data models; Educational institutions; Ontologies; Peer-to-peer computing; Vectors; cloud service; multi-criteria analysis; trust dimension; trust vector; trustworthiness measurement (ID#: 15-5360)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7027618&isnumber=7027326
Shabut, A.M.; Dahal, K.; Awan, I., "Friendship Based Trust Model to Secure Routing Protocols in Mobile Ad Hoc Networks," Future Internet of Things and Cloud (FiCloud), 2014 International Conference on, pp. 280, 287, 27-29 Aug. 2014. doi: 10.1109/FiCloud.2014.51
Abstract: Trust management in mobile ad hoc networks (MANETs) has become a significant issue in securing routing protocols to choose reliable and trusted paths. Trust is used to cope with defection problems of nodes and stimulate them to cooperate. However, trust is a highly complex concept because of the subjective nature of trustworthiness, and has several social properties, due to its social origins. In this paper, a friendship-based trust model is proposed for MANETs to secure routing protocol from source to destination, in which multiple social degrees of friendships are introduced to represent the degree of nodes' trustworthiness. The model considers the behaviour of nodes as a human pattern to reflect the complexity of trust subjectivity and different views. More importantly, the model considers the dynamic differentiation of friendship degree over time, and utilises both direct and indirect friendship-based trust information. The model overcomes the limitation of neglecting the social behaviours of nodes when evaluating trustworthiness. The empirical analysis shows the greater robustness and accuracy of the trust model in a dynamic MANET environment.
Keywords: mobile ad hoc networks; routing protocols; telecommunication network management; dynamic MANET environment; dynamic differentiation; friendship based trust model; human pattern; indirect friendship-based trust information; mobile ad hoc networks; node trustworthiness; secure routing protocol; social behaviours; trust management; trust subjectivity; trusted paths; Ad hoc networks; Analytical models; Computational modeling; Measurement; Mobile computing; Routing protocols; Mobile ad hoc networks; friendship degrees; social analysis; trust; trust management (ID#: 15-5361)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984207&isnumber=6984143
Mingdong Tang; Yu Xu; Jianxun Liu; Zibin Zheng; Xiaoqing Liu, "Combining Global and Local Trust for Service Recommendation," Web Services (ICWS), 2014 IEEE International Conference on, pp. 305, 312, June 27 2014-July 2 2014. doi: 10.1109/ICWS.2014.52
Abstract: Recommending trusted services to users is of paramount value in service-oriented environments. Reputation has been widely used to measure the trustworthiness of services, and various reputation models for service recommendation have been proposed. Reputation is basically a global trust score obtained by aggregating trust from a community of users, which could conflict with an individual's personal opinion of the service. Evaluating a service's trustworthiness locally, based on the evaluating user's own or his/her friends' experiences, is sometimes more accurate. However, local trust assessment may fail to work when no trust path from an evaluating user to a target service exists. This paper proposes a hybrid trust-aware service recommendation method for service-oriented environments with social networks via combining global trust and local trust evaluation. A global trust metric and a local trust metric are firstly presented, and then a strategy for combining them to predict the final trust of a service is proposed. To evaluate the proposed method's performance, we conducted several simulations based on a synthesized dataset. The simulation results show that our proposed method outperforms the other methods in service recommendation.
Keywords: Web services; service-oriented architecture; social networking (online); trusted computing; global trust; hybrid trust-aware service recommendation method; local trust assessment; service trustworthiness; service-oriented environments; social networks; trusted services; Communities; Computer science; Educational institutions; Measurement; Reliability; Social network services; Vectors; reputation; service recommendation; service-oriented environment; social networks; trust (ID#: 15-5362)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6928912&isnumber=6928859
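The combination strategy can be pictured as a weighted blend of the global and local metrics, with a fallback to reputation alone when no trust path exists. The blend weight below is a made-up parameter for illustration, not the paper's actual strategy.

    # Sketch of the hybrid idea: blend global reputation with local,
    # social-path trust, falling back to reputation alone when no trust
    # path exists. The blend weight is a made-up parameter.
    def hybrid_trust(global_rep, local_trust, alpha=0.5):
        """local_trust is None when no trust path could be found."""
        if local_trust is None:
            return global_rep  # local assessment unavailable
        return alpha * local_trust + (1 - alpha) * global_rep

    print(hybrid_trust(global_rep=0.8, local_trust=0.4))   # friends disagree with crowd
    print(hybrid_trust(global_rep=0.8, local_trust=None))  # no path: reputation only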
Pranata, I.; Skinner, G., "A Security Extension For Securing The Feedback & Rating Values In TIDE Framework," Information, Communication Technology and System (ICTS), 2014 International Conference on, pp. 227, 232, 24-24 Sept. 2014. doi: 10.1109/ICTS.2014.7010588
Abstract: In today's online environment, ratings and trust are paramount to the validity of transactions. Many consider the trustworthiness of an online entity prior to engaging in a transaction or collaboration activity. To derive an entity's trustworthiness, feedback and ratings about this entity must first be collected electronically from other entities (i.e. raters) in the environment. As with any electronic transmission, security becomes a crucial issue: tampered feedback and rating values would result in an invalid measurement of an entity's trustworthiness. Thus, this issue needs to be addressed to ensure the accuracy of the trustworthiness computation. In this paper, we propose a security extension to our TIDE (Trust In Digital Environment) framework. This security extension upholds the integrity of feedback and rating values during their electronic transmission, and in doing so maintains the accuracy of the TIDE trustworthiness computation. In addition, this security extension can be universally applied in other trust and reputation systems.
Keywords: trusted computing; TIDE trustworthiness computation; electronic transmission; online entity; reputation systems; security extension; trust in digital environment framework; Authentication; Computational modeling; Educational institutions; Public key; Servers; Tides; Digital Environments; Security; Trust Model; Web of Trust (ID#: 15-5363)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7010588&isnumber=7010460
Samuvelraj, G.; Nalini, N., "A Survey Of Self Organizing Trust Method To Avoid Malicious Peers From Peer To Peer Network," Green Computing Communication and Electrical Engineering (ICGCCEE), 2014 International Conference on, pp. 1, 4, 6-8 March 2014. doi: 10.1109/ICGCCEE.2014.6921379
Abstract: Networks are subject to attacks from malicious sources. Sending data securely over the network is one of the most tedious processes. A peer-to-peer (P2P) network is a type of decentralized and distributed network architecture in which individual nodes in the network act as both servers and clients of resources. Peer-to-peer systems are incredibly flexible and can be used for a wide range of functions, but a P2P system is also prone to malicious attacks. To provide security over a peer-to-peer system, the self-organizing trust model has been proposed. Here the trustworthiness of the peers is calculated based on past interactions and recommendations, with the interactions and recommendations evaluated based on importance, recentness, and satisfaction parameters. In this way, good peers are able to form trust relationships in their proximity and avoid malicious peers.
Keywords: client-server systems; computer network security; fault tolerant computing; peer-to-peer computing; recommender systems; trusted computing; P2P network; client-server resources; decentralized network architecture; distributed network architecture; malicious attacks; malicious peers; malicious sources; peer to peer network; peer to peer systems; peer trustworthiness; satisfaction parameters; self organizing trust method; self-organizing trust model; Computer science; History; Measurement; Organizing; Peer-to-peer computing; Security; Servers; Metrics; Network Security; Peer to Peer; SORT (ID#: 15-5364)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921379&isnumber=6920919
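In the spirit of the surveyed model, a peer's trustworthiness can be computed from past interactions scored by satisfaction and weighted by importance and recency, as in the sketch below. The half-life decay and the neutral prior are illustrative choices, not SORT's actual parameters.

    # Sketch of scoring a peer from past interactions weighted by importance
    # and recency. The half-life and the neutral prior are illustrative.
    import math

    def interaction_trust(interactions, now, half_life=100.0):
        """interactions: list of (time, satisfaction in [0,1], importance > 0)."""
        decay = math.log(2) / half_life
        num = den = 0.0
        for t, satisfaction, importance in interactions:
            w = importance * math.exp(-decay * (now - t))  # recent + important count more
            num += w * satisfaction
            den += w
        return num / den if den else 0.5  # no history: neutral prior

    history = [(10, 1.0, 1.0), (50, 0.9, 2.0), (95, 0.1, 1.0)]  # recent bad interaction
    print(interaction_trust(history, now=100))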
Garakani, M.R.; Jalali, M., "A Trust Prediction Approach By Using Collaborative Filtering And Computing Similarity In Social Networks," Technology, Communication and Knowledge (ICTCK), 2014 International Congress on, pp. 1, 4, 26-27 Nov. 2014. doi: 10.1109/ICTCK.2014.7033535
Abstract: Along with the increasing popularity of social web sites, users rely more on trustworthiness information for many online activities. However, such social network data often suffers from severe data sparsity and is unable to provide users with enough information. Therefore, trust prediction has emerged as an important topic in social network research. Nowadays, trust prediction is not calculated with high accuracy. The collaborative filtering approach has become more applicable and is widely used in recommendation systems. In this approach, users' ratings in certain areas are gathered, the similarity of users or items is measured, and the most suitable item nearest to the user's preference is identified and recommended. Using this concept, the most innovative available approach to measuring similarity is applied to the target user, and the trusted user is then found. The results demonstrate that the recommended approach significantly improves the accuracy of trust prediction in social networks.
Keywords: collaborative filtering; recommender systems; social networking (online); trusted computing; collaborative filtering; computing similarity; data sparsity; item similarity; online activities; recommendation systems; social Web sites; social networks; trust prediction approach; trustworthiness information; user preference; user similarity; Accuracy; Collaboration; Computational modeling; Educational institutions; Filtering; Measurement; Social network services; collaborative filtering; similarity; social networks; trust; trust prediction (ID#: 15-5365)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033535&isnumber=7033487
Guibing Guo; Jie Zhang; Thalmann, D.; Yorke-Smith, N., "ETAF: An Extended Trust Antecedents Framework For Trust Prediction," Advances in Social Networks Analysis and Mining (ASONAM), 2014 IEEE/ACM International Conference on, pp. 540, 547, 17-20 Aug. 2014. doi: 10.1109/ASONAM.2014.6921639
Abstract: Trust is one source of information that has been widely adopted to personalize online services for users, such as in product recommendations. However, trust information is usually very sparse or unavailable for most online systems. To narrow this gap, we propose a principled approach that predicts implicit trust from users' interactions, by extending a well-known trust antecedents framework. Specifically, we consider both local and global trustworthiness of target users, and form a personalized trust metric by further taking into account the active user's propensity to trust. Experimental results on two real-world datasets show that our approach works better than contemporary counterparts in terms of trust ranking performance when direct user interactions are limited.
Keywords: security of data; user interfaces; ETAF; active user propensity; direct user interactions; extended trust antecedents framework; global trustworthiness; local trustworthiness; personalized trust metric; product recommendations; real-world datasets; trust prediction; trust ranking performance; Computational modeling; Conferences; Educational institutions; Equations; Measurement; Social network services; Support vector machines; Trust prediction; trust antecedents framework; user interactions; user ratings (ID#: 15-5366)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921639&isnumber=6921526
Pandit, C.M.; Ladhe, S.A., "Secure Routing Protocol in MANET using TAC," Networks & Soft Computing (ICNSC), 2014 First International Conference on, pp. 107, 112, 19-20 Aug. 2014. doi: 10.1109/CNSC.2014.6906693
Abstract: A MANET is a self-organized and distributed system with no central administration and requires no infrastructure; because of this, MANETs are used in emergency services and during natural calamities. Nodes have to co-operate with each other for routing packets. Security is the major challenge for these networks: a compromised node can adversely affect the quality and reliability of data. To improve the security of a MANET, it is essential to evaluate the trustworthiness of nodes. In this paper, we use a scheme that evaluates the trusted communication path with the help of a Trust Allocation Certificate (TAC). The TAC declares the degree of trustworthiness of a particular node and can be used to detect spoofed IDs, falsified trust, and packet-dropping behavior of nodes.
Keywords: mobile ad hoc networks; routing protocols; telecommunication network reliability; telecommunication security; MANET; TAC; distributed system; malicious node; mobile ad hoc networks; packet dropping behavior; secure routing protocol; self-organized system; trust allocation certificate; trusted communication path; Measurement; Mobile ad hoc networks; Reliability; Routing; Routing protocols; Security; Adhoc Routing; MANET; Malicious Node; NS2 (ID#: 15-5367)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906693&isnumber=6906636
Paverd, A.; Martin, A.; Brown, I., "Privacy-Enhanced Bi-Directional Communication In The Smart Grid Using Trusted Computing," Smart Grid Communications (SmartGridComm), 2014 IEEE International Conference on, pp. 872, 877, 3-6 Nov. 2014. doi: 10.1109/SmartGridComm.2014.7007758
Abstract: Although privacy concerns in smart metering have been widely studied, relatively little attention has been given to privacy in bi-directional communication between consumers and service providers. Full bi-directional communication is necessary for incentive-based demand response (DR) protocols, such as demand bidding, in which consumers bid to reduce their energy consumption. However, this can reveal private information about consumers. Existing proposals for privacy-enhancing protocols do not support bi-directional communication. To address this challenge, we present a privacy-enhancing communication architecture that incorporates all three major information flows (network monitoring, billing and bi-directional DR) using a combination of spatial and temporal aggregation and differential privacy. The key element of our architecture is the Trustworthy Remote Entity (TRE), a node that is singularly trusted by mutually distrusting entities. The TRE differs from a trusted third party in that it uses Trusted Computing approaches and techniques to provide a technical foundation for its trustworthiness. An automated formal analysis of our communication architecture shows that it achieves its security and privacy objectives with respect to a previously-defined adversary model. This is therefore the first application of privacy-enhancing techniques to bi-directional smart grid communication between mutually distrusting agents.
Keywords: data privacy; energy consumption; incentive schemes; invoicing; power engineering computing; power system measurement; protocols; smart meters; smart power grids; trusted computing; TRE; automated formal analysis; bidirectional DR information flow; billing information flow; differential privacy; energy consumption reduction; incentive-based demand response protocol; network monitoring information flow; privacy-enhanced bidirectional smart grid communication architecture; privacy-enhancing protocol; smart metering; spatial aggregation; temporal aggregation; trusted computing; trustworthy remote entity; Bidirectional control; Computer architecture; Monitoring; Privacy; Protocols; Security; Smart grids (ID#: 15-5368)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7007758&isnumber=7007609
Asmare, E.; McCann, J.A., "Lightweight Sensing Uncertainty Metric—Incorporating Accuracy and Trust," Sensors Journal, IEEE, vol. 14, no. 12, pp. 4264, 4272, Dec. 2014. doi: 10.1109/JSEN.2014.2354594
Abstract: The future will involve millions of networked sensors whose sole purpose is to gather data about various phenomena so that it can be used in making informed decisions. However, each measurement performed by a sensor has an associated uncertainty in its value, which if not accounted for properly, could potentially derail the decision process. Computing and embedding the associated uncertainties with data are, therefore, crucial to providing reliable information for sensor-based applications. In this paper, we present a novel unified framework for computing uncertainty based on accuracy and trust. We present algorithms for computing accuracy and trustworthiness and also propose an approach for propagating uncertainties. We evaluate our approach functionally by applying it to data sets collected from past deployments and demonstrate its benefits for in-network processing as well as fault detection.
Keywords: lightweight structures; measurement uncertainty; sensors; accuracy; data sets; lightweight sensing uncertainty metric; trust; unified framework; Accuracy; Measurement uncertainty; Sensors; Standards; Systematics; Temperature measurement; Uncertainty; Accuracy; sensing uncertainty; trust (ID#: 15-5369)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6891108&isnumber=6933962
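As a rough picture of the framework's goal, the sketch below folds a sensor's accuracy and trust scores into one uncertainty value per reading and propagates it through a mean. Both the combination rule and the propagation rule are invented for illustration; the paper defines its own metric and propagation approach.

    # Generic sketch of folding a sensor's accuracy and trust into one
    # uncertainty value per reading, plus a simple propagation through a
    # mean. Both rules are invented; the paper defines its own metric.
    def reading_uncertainty(accuracy, trust):
        """accuracy, trust in [0, 1]; returns uncertainty in [0, 1]."""
        return 1.0 - accuracy * trust  # both must be high for low uncertainty

    def propagate_mean(uncertainties):
        # Averaging n readings: a simple (optimistic) propagation rule.
        return sum(uncertainties) / len(uncertainties)

    sensors = [(0.95, 0.9), (0.99, 0.2)]  # second sensor accurate but distrusted
    us = [reading_uncertainty(a, t) for a, t in sensors]
    print(us, propagate_mean(us))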
Vaidyanathan, K.; Das, B.P.; Sumbul, E.; Renzhi Liu; Pileggi, L., "Building Trusted ICs Using Split Fabrication," Hardware-Oriented Security and Trust (HOST), 2014 IEEE International Symposium on, pp. 1, 6, 6-7 May 2014. doi: 10.1109/HST.2014.6855559
Abstract: Due to escalating manufacturing costs, the latest and most advanced semiconductor technologies are often available only at off-shore foundries. Utilizing these facilities significantly limits the trustworthiness of the corresponding integrated circuits for mission critical applications. We address this challenge of cost-effective and trustworthy CMOS manufacturing for advanced technologies using split fabrication. Split fabrication, the process of splitting an IC into an untrusted and a trusted component, enables the designer to exploit the most advanced semiconductor manufacturing capabilities available offshore without disclosing critical IP or system design intent. We show that split fabrication after the Metal1 layer is secure and has negligible performance and area overhead compared to complete IC manufacturing in the off-shore foundry. Measurements from split-fabricated 130 nm test chips demonstrate the feasibility and efficacy of the proposed approach.
Keywords: CMOS integrated circuits; design for testability; foundries; integrated circuit manufacture; Metal1 layer; area overhead; integrated circuit manufacturing; mission critical integrated circuits; offshore foundries; size 130 nm; split fabrication; test chips; trustworthy CMOS manufacturing; Decision support systems; Hardware design languages; IP networks; Random access memory; Security; System-on-chip; Circuit obfuscation; Design for trust; Hardware security; Split fabrication (ID#: 15-5370)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855559&isnumber=6855557
Yier Jin; Sullivan, D., "Real-Time Trust Evaluation In Integrated Circuits," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1, 6, 24-28 March 2014. doi: 10.7873/DATE.2014.104
Abstract: The use of side-channel measurements and fingerprinting, in conjunction with statistical analysis, has proven to be the most effective method for accurately detecting hardware Trojans in fabricated integrated circuits. However, these post-fabrication trust evaluation methods overlook the advanced design skills that attackers can use to create sophisticated Trojans. To this end, we have designed a Trojan using power-gating techniques and demonstrate that it can be masked from advanced side-channel fingerprinting detection while dormant. We then propose a real-time trust evaluation framework that continuously monitors the on-board global power consumption to assess chip trustworthiness. The measurements obtained corroborate our framework's effectiveness for detecting Trojans. Finally, the results presented are experimentally verified by performing measurements on fabricated Trojan-free and Trojan-infected variants of a reconfigurable linear feedback shift register (LFSR) array.
Keywords: integrated circuits; invasive software; shift registers; statistical analysis; LFSR array; Trojan-free variants; Trojan-infected variants; advanced design skills; chip trustworthiness; hardware Trojan detection; integrated circuits; on-board global power consumption; post-fabrication trust evaluation methods; power-gating techniques; real-time trust evaluation framework; reconfigurable linear feedback shift register array; side-channel fingerprinting detection; side-channel measurements; Erbium; Hardware; Power demand; Power measurement; Semiconductor device measurement; Testing; Trojan horses (ID#: 15-5371)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800305&isnumber=6800201
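The monitoring idea lends itself to a simple statistical baseline check. The Python sketch below flags power samples that drift beyond k standard deviations of a golden (known-good) trace; the threshold and the traces are invented for illustration and do not reproduce the authors' framework.

# Illustrative sketch (not the authors' implementation): flag a chip as
# suspicious when its global power draw drifts beyond k standard deviations
# of a baseline learned from a known-good ("golden") run.
from statistics import mean, stdev

def power_anomalies(golden: list[float], observed: list[float], k: float = 3.0):
    base_mu, base_sigma = mean(golden), stdev(golden)
    return [i for i, p in enumerate(observed)
            if abs(p - base_mu) > k * base_sigma]

golden_trace   = [10.1, 10.3, 9.9, 10.0, 10.2, 10.1]
observed_trace = [10.0, 10.2, 12.9, 10.1]   # sample 2 could indicate a woken Trojan
print(power_anomalies(golden_trace, observed_trace))  # -> [2]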
Sharifi, M.; Manaf, A.A.; Memariani, A.; Movahednejad, H.; Dastjerdi, A.V., "Consensus-Based Service Selection Using Crowdsourcing Under Fuzzy Preferences of Users," Services Computing (SCC), 2014 IEEE International Conference on, pp. 17, 26, June 27 2014-July 2 2014. doi: 10.1109/SCC.2014.12
Abstract: Different evaluator entities, either human agents (e.g., experts) or software agents (e.g., monitoring services), are involved in the assessment of QoS parameters of candidate services, which leads to diversity in service assessments. This diversity makes service selection a challenging task, especially when numerous quality-of-service criteria and a range of providers are considered. To address this problem, this study first presents a consensus-based service assessment methodology that utilizes consensus theory to evaluate service behavior for a single QoS criterion using the power of crowdsourcing. To this end, trust level metrics are introduced to measure the strength of a consensus based on the trustworthiness levels of crowd members; the peers converge to the most trustworthy evaluation. Next, a fuzzy inference engine is used to aggregate each assessed QoS value based on user preferences, since real-life scenarios involve multiple QoS criteria. The proposed approach was tested and illustrated via two case studies that demonstrate its applicability.
Keywords: Web services; behavioural sciences computing; fuzzy reasoning; fuzzy set theory; trusted computing; QoS criteria; QoS parameters; QoS value; Web service; candidate services; consensus theory; consensus-based service assessment methodology; consensus-based service selection; crowd members; crowdsourcing; evaluator entities; fuzzy inference engine; fuzzy preferences; human agents; service assessments; service behavior; service criteria; trust level metrics; trustworthiness levels; trustworthy evaluation; user preferences; Convergence; Engines; Fuzzy logic; Measurement; Monitoring; Peer-to-peer computing; Quality of service; Consensus; Fuzzy aggregation; Service selection; Trust; Web service (ID#: 15-5372)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6930512&isnumber=6930500
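A minimal sketch of trust-weighted consensus over crowd assessments follows, assuming each evaluator's score is pulled toward the trust-weighted group opinion until convergence. The update rule and tolerance are illustrative, not the paper's exact consensus mechanism.

# Minimal sketch of trust-weighted consensus over crowd QoS assessments.
# Evaluators repeatedly move toward the trust-weighted group opinion until
# the assessments converge; weights and tolerance are illustrative.
def weighted_consensus(scores: list[float], trust: list[float],
                       tol: float = 1e-3, max_rounds: int = 100) -> float:
    total_trust = sum(trust)
    for _ in range(max_rounds):
        group = sum(s * t for s, t in zip(scores, trust)) / total_trust
        scores = [(s + group) / 2 for s in scores]  # each peer shifts toward consensus
        if max(abs(s - group) for s in scores) < tol:
            return group
    return group

# Three evaluators rate a service's response time; the most trusted dominates.
print(round(weighted_consensus([0.9, 0.7, 0.2], [0.8, 0.6, 0.1]), 3))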
Yi Ying Ng; Hucheng Zhou; Zhiyuan Ji; Huan Luo; Yuan Dong, "Which Android App Store Can Be Trusted in China?," Computer Software and Applications Conference (COMPSAC), 2014 IEEE 38th Annual, pp. 509, 518, 21-25 July 2014. doi: 10.1109/COMPSAC.2014.95
Abstract: China has the world's largest Android population, with 270 million active users. However, Google Play is accessible to only about 30% of them; the other 70% rely on third-party app stores for daily discovery of Android apps (applications). The trustworthiness of Android app stores in China is still an open question. In this paper, we present a comprehensive study of the trustworthiness of the most popular Android app stores in China, by discovering the identicalness and content differences between the APK files hosted in the app stores and the corresponding official APK files. First, we selected 25 top apps that have the highest installations in China, downloaded the corresponding official versions from their official websites as an oracle, and collected a total of 506 APK files across 21 top popular app stores (20 top third-party stores as well as Google Play). Afterwards, APK identicality checking and APK difference analysis were conducted against the corresponding official versions. Next, an assessment was applied to rank the severity of the APK files. All the apps were classified into three severity levels, ranging from safe (identical or a higher version), through warning (a lower version or modifications to resource-related files), to critical (modifications to the permission file and/or application code). Finally, the severity levels contribute to the final trustworthiness ranking score of the 21 stores. The study indicates that only about 26.09% of the APK files are safe, 37.74% are at the warning level, and a surprising 36.17% are at the critical level. We also found that 10 APK files (about 2%) were modified and re-signed by unknown third parties. In addition, the average trustworthiness ranking score (47.37 out of 100) highlights that the trustworthiness of Android app stores in China is relatively low. In conclusion, we suggest that Android users download APK files from the corresponding official websites or use the highest-ranked third-party app stores, and we appeal to app stores to ensure that all hosted APK files are trustworthy enough to provide a "safe-to-download" environment.
Keywords: Android (operating system); security of data; APK files; Android App Store; Android app stores; China; Google Play; safe-to-download environment; third-party app stores; Androids; Distance measurement; Google; Humanoid robots; Libraries; Mobile communication; Smart phones; APK; Android; app store; severity ranking; trustworthy (ID#: 15-5373)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6899255&isnumber=6899181
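The first step of such a study, checking whether a hosted APK is byte-identical to the official release, can be approximated with a digest comparison, as in the Python sketch below; the file paths are hypothetical.

# Rough illustration of the study's first step: decide whether an APK hosted
# by a third-party store is byte-identical to the official release by
# comparing cryptographic digests. File paths here are hypothetical.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

official = sha256_of("official/wechat.apk")     # oracle from the official site
hosted   = sha256_of("store_x/wechat.apk")      # copy from a third-party store
print("identical" if official == hosted else "needs difference analysis")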
Almanea, M.I.M., "Cloud Advisor - A Framework towards Assessing the Trustworthiness and Transparency of Cloud Providers," Utility and Cloud Computing (UCC), 2014 IEEE/ACM 7th International Conference on, pp. 1018, 1019, 8-11 Dec. 2014. doi: 10.1109/UCC.2014.168
Abstract: We propose a Cloud Advisor framework that couples two salient features: trustworthiness and transparency measurement. It provides a mechanism to measure trustworthiness based on the history of the cloud provider, taking evidence support into account, and to measure transparency based on the Cloud Controls Matrix (CCM) framework. The selection process is based on a set of assurance requirements that, if met by the cloud provider or addressed by a tool, can bring assurance and confidence to cloud customers.
Keywords: cloud computing; matrix algebra; trusted computing; CCM framework; assurance requirement; cloud advisor; cloud controls matrix framework; cloud customer; cloud provider; selection process; transparency measurement; trustworthiness measurement; Cloud computing; Conferences; Educational institutions; History; Monitoring; Privacy; Security; assurance requirements; cloud computing; cloud providers; framework; measurement; transparency; trustworthiness (ID#: 15-5374)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7027635&isnumber=7027326
Yu Bai; Gang Yin; Huaimin Wang, "Multi-dimensions of Developer Trustworthiness Assessment in OSS Community," Trust, Security and Privacy in Computing and Communications (TrustCom), 2014 IEEE 13th International Conference on, pp. 75, 81, 24-26 Sept. 2014. doi: 10.1109/TrustCom.2014.14
Abstract: With the prosperity of Open Source Software, various software communities have formed and attract huge numbers of developers to participate in distributed software development. For such a software development paradigm, how to evaluate developers' skills comprehensively and automatically is critical. However, most existing research assesses developers based on implementation aspects, such as the artifacts they created or edited, and ignores developers' contributions in social collaboration aspects, such as answering questions, giving advice, making comments, or creating social connections. In this paper, we propose a novel model that evaluates individuals' skills from both the implementation and the social collaboration aspects. Our model defines four metrics across multiple dimensions: collaboration index, technical skill, community influence, and development contribution. We carry out experiments on a real-world online software community. The results show that our approach provides a more comprehensive measurement than previous work.
Keywords: groupware; public domain software; security of data; social aspects of automation; software metrics; trusted computing; OSS community; collaboration index; community influence; developer trustworthiness assessment; development contribution; distributed software development; open source software; question answering; real-world online software community; social collaboration aspects; technical skill; Collaboration; Communities; Educational institutions; Equations; Indexes; Mathematical model; Software; Developer assessment; OSS community; multi-Dimensions contribution; trustworthiness (ID#: 15-5375)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011236&isnumber=7011202
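A hedged sketch of how the four dimensions might be fused into one score follows, assuming each metric has already been normalized to [0, 1]; the equal weights are an illustrative choice rather than the authors' calibration.

# Hedged sketch: combine the paper's four dimensions (collaboration index,
# technical skill, community influence, development contribution) into one
# score. Each metric is assumed pre-normalized to [0, 1]; the equal weights
# are an illustrative assumption.
WEIGHTS = {"collaboration": 0.25, "technical": 0.25,
           "influence": 0.25, "contribution": 0.25}

def developer_trustworthiness(metrics: dict[str, float]) -> float:
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

dev = {"collaboration": 0.7, "technical": 0.9, "influence": 0.4, "contribution": 0.8}
print(developer_trustworthiness(dev))  # 0.7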
Singal, H.; Kohli, S., "Conceptual Model For Obfuscated TRUST Induced From Web Analytics Data For Content-Driven Websites," Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, pp. 2781, 2785, 24-27 Sept. 2014. doi: 10.1109/ICACCI.2014.6968622
Abstract: Besides e-commerce, the infobahn has become an important medium for providing significant content on services such as academics, medicine, law, relationships, meteorology, general knowledge, etc. to users in a judicious manner. To attract additional users to the various content providers on the Web, it is essential to build a relationship of trust with them by effectively achieving online content optimization, estimating content items' attractiveness and relevance to users' interests. But beyond building trust, its long-term sustenance is necessary to bind users to the website. The purpose of the present study is to contribute to the effective measurement of TRUST as it evolves and is maintained for Web-mediated Information Exchange (W-MIE), or content websites, over the long run.
Keywords: Internet; Web sites; data analysis; data integrity; TRUST; W-MIE; Web analytics data; Web mediated information exchange; content-driven Web sites; online content optimization; Analytical models; Computational modeling; Data models; Time measurement; Usability; Web sites; Content-driven websites; TRUST; Trustworthiness; Web mediated Information Exchange (WMIE) (ID#: 15-5376)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6968622&isnumber=6968191
Almanea, M.I.M., "A Survey and Evaluation of the Existing Tools that Support Adoption of Cloud Computing and Selection of Trustworthy and Transparent Cloud Providers," Intelligent Networking and Collaborative Systems (INCoS), 2014 International Conference on, pp. 628, 634, 10-12 Sept. 2014. doi: 10.1109/INCoS.2014.42
Abstract: In spite of the benefits that could flow from its adoption, cloud computing brings new challenges associated with a potential lack of transparency, trust, and loss of control. With a growing number of cloud service providers, potential customers will require methods for selecting trustworthy and appropriate providers. We discuss existing tools, methods and frameworks that promote the adoption of cloud computing models and the selection of trustworthy cloud service providers. We propose a set of customer assurance requirements as a basis for comparative evaluation, which is applied to several popular tools (CSA STAR, CloudTrust Protocol, C.A.RE and Cloud Provider Transparency Scorecard). We describe a questionnaire-based survey in which respondents evaluate the extent to which these tools have been used and assess their usefulness. The majority of respondents agreed on the importance of using the tools to assist migration to the cloud and, although most respondents had not used the tools, those who had reported them helpful. We observed a possible relationship between a tool's compliance with the proposed requirements and the popularity of the tool; these results should encourage cloud providers to address customers' assurance requirements.
Keywords: cloud computing; trusted computing; cloud computing; cloud migration; cloud service providers; customers assurance requirements; questionnaire-based survey; transparent cloud providers; trustworthy cloud providers; Certification; Cloud computing; Measurement; Monitoring; Protocols; Security; Standards; cloud computing; cloud service provider; measurement; selection; transparency; trustworthiness (ID#: 15-5377)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7057161&isnumber=7057036
Wenhe Li; Tie Bao; Lu Han; Shufen Liu; Chen Qu, "Evidence-Driven Quality Evaluation Model Of Collaboration Software Trustworthiness," Computer Supported Cooperative Work in Design (CSCWD), Proceedings of the 2014 IEEE 18th International Conference on, pp. 65, 70, 21-23 May 2014. doi: 10.1109/CSCWD.2014.6846818
Abstract: Establishing a quality evaluation model of trustworthiness plays an important role in quality analysis for collaboration software. This paper therefore investigates the quality evaluation model and proposes a method of establishing the quality level model based on practical evidence. The method proceeds as follows: the trustworthiness evidence model is established by collecting practical evidence across the systems development life cycle; the measurement method and the value range are then analyzed; and the trustworthiness level model is established to provide the evaluation criterion for assessing software trustworthiness. A quality level model based on practical evidence ensures the practical operability of trustworthiness evaluation and reduces the work complexity.
Keywords: groupware; software quality; trusted computing; collaboration software trustworthiness; evidence-driven quality evaluation model; measurement method; quality analysis; quality level model; systems development life cycle; trustworthiness evidence model; trustworthiness level model; value range; Analytical models; Collaboration; Data models; Libraries; Software; Software algorithms; Software measurement; collaboration software; practical evidence; quality evaluation; trustworthiness level model (ID#: 15-5378)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846818&isnumber=6846800
Di Cerbo, F.; Kaluvuri, S.P.; Motte, F.; Nasser, B.; Chen, W.X.; Short, S., "Towards a Linked Data Vocabulary for the Certification of Software Properties," Signal-Image Technology and Internet-Based Systems (SITIS), 2014 Tenth International Conference on, pp. 721, 727, 23-27 Nov. 2014. doi: 10.1109/SITIS.2014.29
Abstract: In order to cater to a growing user base that requires varied functionalities and owns multiple devices, software providers are using cloud solutions as the preferred technical means. In fact, all major operating systems come with tight integration with cloud services. Software solutions that have such integration with cloud services should disclose this to the consumer (transparency). Furthermore, with mounting concerns over the security of software, consumers are demanding assurance about the software being used. Software certification can address both issues, security and transparency of software, thereby providing comprehensive assurance to consumers. However, current software certifications are tailored for human consumption and represented in natural language, a major issue that hinders automated reasoning over them. Focused research efforts in the past few years have resulted in a Digital Certification concept, a machine-processable representation of certifications that can cater to different software provisioning models. We extend the notion of a Digital Certification by using the Linked Data vocabulary to express general characteristics of software systems, benefiting from existing and future knowledge from the Linked Data community. This greatly increases the usability of such Digital Certifications and has a wider impact on the software certification landscape.
Keywords: certification; cloud computing; natural language processing; operating systems (computers); security of data; software engineering; certification representation; cloud solutions; digital certification concept; digital certifications; linked data community; linked data vocabulary; natural language; operating systems; software property certification; software provisioning models; software security; software transparency; Context; Measurement; Security; Software systems; Time factors; Vocabulary; assurance; certification; digital certificate; linked data; security; trust; trustworthiness (ID#: 15-5379)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081622&isnumber=7081475
Valente, J.; Barreto, C.; Cardenas, A.A., "Cyber-Physical Systems Attestation," Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, pp. 354, 357, 26-28 May 2014. doi: 10.1109/DCOSS.2014.61
Abstract: Cyber-Physical Systems (CPS) are monitored and controlled by a wide variety of sensors and controllers. However, it has been repeatedly demonstrated that most of the devices interacting with the physical world (sensors and controllers) are extremely fragile to security incidents. One particular technology that can help us improve the trustworthiness of these devices is software attestation. While software attestation can help a verifier check the integrity of devices, it still has several drawbacks that have limited its application in the field, such as the need to establish an authenticated channel, the inability to provide continuous attestation, and the need to modify devices to implement the attestation procedure. To overcome these limitations, we propose CPS-attestation as an attestation technique for control systems to attest their state to an external verifier. CPS-attestation enables a verifier to continuously monitor the dynamics of the control system over time and detect whether a component is not behaving as expected or is driving the system to an unsafe state. Our goal in this position paper is to initiate the discussion on the suitability of applying attestation techniques to control systems and the associated research challenges.
Keywords: control engineering computing; formal verification; trusted computing; CPS-attestation technique; control system dynamics; controllers; cyber-physical systems; device trustworthiness; security incidents; sensors; software attestation procedure; Control systems; Current measurement; Hardware; Monitoring; Security; Software; Software measurement; Software attestation; critical infrastructure protection; cyber-physical systems (ID#: 15-5380)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846189&isnumber=6846129
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
User Privacy in the Cloud, 2014 |
Privacy is a major problem for distributed file systems, that is, for the Cloud, and considerable research is being conducted in this area. The works cited here were selected by the editors as being of interest to the Science of Security community. The work was presented in 2014.
Bertino, E.; Samanthula, B.K., "Security With Privacy - A Research Agenda," Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom), 2014 International Conference on, pp. 144, 153, 22-25 Oct. 2014. doi: (not provided)
Abstract: Data is one of the most valuable assets for organizations. It can help users and organizations meet their diverse goals, ranging from scientific advances to business intelligence. Due to the tremendous growth of data, the notion of big data has certainly gained momentum in recent years. Cloud computing is a key technology for storing, managing and analyzing big data. However, such large, complex, and growing data, typically collected from various data sources such as sensors and social media, can often contain personally identifiable information (PII), and thus the organizations collecting big data may want to protect their outsourced data from the cloud. In this paper, we survey our research towards the development of efficient and effective privacy-enhancing (PE) techniques for the management and analysis of big data in cloud computing. We propose our initial approaches to address two important PE applications: (i) privacy-preserving data management and (ii) privacy-preserving data analysis in the cloud environment. Additionally, we point out research issues that still need to be addressed to develop comprehensive solutions to the problem of effective and efficient privacy-preserving use of data.
Keywords: Big Data; cloud computing; data privacy; security of data; PE applications; PE techniques; PII; big data analysis; business intelligence; cloud computing; cloud environment; data sources; outsourced data; personally identifiable information; privacy-enhancing techniques; privacy-preserving data analysis; privacy-preserving data management; research agenda; security; social media; Big data; Cancer; Electronic mail; Encryption; Media; Privacy (ID#: 15-5677)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7014559&isnumber=7011734
Henze, M.; Hermerschmidt, L.; Kerpen, D.; Haussling, R.; Rumpe, B.; Wehrle, K., "User-Driven Privacy Enforcement for Cloud-Based Services in the Internet of Things," Future Internet of Things and Cloud (FiCloud), 2014 International Conference on, pp. 191, 196, 27-29 Aug. 2014. doi: 10.1109/FiCloud.2014.38
Abstract: Internet of Things devices are envisioned to penetrate essentially all aspects of life, including homes and urban spaces, in use cases such as health care, assisted living, and smart cities. One often proposed solution for dealing with the massive amount of data collected by these devices and offering services on top of them is the federation of the Internet of Things and cloud computing. However, user acceptance of such systems is a critical factor that hinders the adoption of this promising approach due to severe privacy concerns. We present UPECSI, an approach for user-driven privacy enforcement for cloud-based services in the Internet of Things to address this critical factor. UPECSI enables enforcement of all privacy requirements of the user once her sensitive data leaves the border of her network, provides a novel approach for the integration of privacy functionality into the development process of cloud-based services, and offers the user an adaptable and transparent configuration of her privacy requirements. Hence, UPECSI demonstrates an approach for realizing user-accepted cloud services in the Internet of Things.
Keywords: Internet of Things; cloud computing; data privacy; Internet of Things; UPECSI; cloud computing; cloud-based services; privacy functionality; user-driven privacy enforcement; Access control; Cloud computing; Data privacy; Medical services; Monitoring; Privacy; Cloud Computing; Development; Internet of Things; Model-driven; Privacy; Services; User-acceptance (ID#: 15-5678)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6984194&isnumber=6984143
Zheming Dong; Lei Zhang; Jiangtao Li, "Security Enhanced Anonymous Remote User Authentication and Key Agreement for Cloud Computing," Computational Science and Engineering (CSE), 2014 IEEE 17th International Conference on, pp. 1746, 1751, 19-21 Dec. 2014. doi: 10.1109/CSE.2014.320
Abstract: Cloud computing is a new computing paradigm that enables users to transfer their work to the cloud. The tremendous storage and computing resources provided by the cloud liberate users from the shortage of local resources. However, as the adoption of cloud computing grows rapidly, security and privacy issues remain significant challenges. In a cloud environment, a user accesses the cloud server through open networks, so a variety of attacks can be launched if a secure channel is not established. Furthermore, a user's sensitive personal information may be revealed if the user's identity is exposed to an attacker. Therefore, user anonymity is also an important concern in the cloud environment. In this paper, we first show several weaknesses of a recent anonymous remote user authentication and key agreement protocol for cloud computing, and then we propose a new one. Our new protocol enables a user and a cloud server to authenticate each other anonymously and establish a secure channel between them. Thus, only the user and the cloud server may learn the messages exchanged, and no other entity can learn the real identities of the message senders.
Keywords: cloud computing; cryptographic protocols; data privacy; message authentication; cloud computing; key agreement protocol; privacy issue; security enhanced anonymous remote user authentication; security issue; user anonymity; Authentication; Cloud computing; Generators; Protocols; Public key; Servers; Anonymity; Authentication; Certificateless Cryptography; Cloud Computing; Key Agreement (ID#: 15-5679)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023831&isnumber=7023510
Elmehdwi, Y.; Samanthula, B.K.; Wei Jiang, "Secure K-Nearest Neighbor Query Over Encrypted Data In Outsourced Environments," Data Engineering (ICDE), 2014 IEEE 30th International Conference on, pp. 664, 675, March 31 2014-April 4 2014. doi: 10.1109/ICDE.2014.6816690
Abstract: For the past decade, query processing on relational data has been studied extensively, and many theoretical and practical solutions to query processing have been proposed under various scenarios. With the recent popularity of cloud computing, users now have the opportunity to outsource their data as well as the data management tasks to the cloud. However, due to the rise of various privacy issues, sensitive data (e.g., medical records) need to be encrypted before outsourcing to the cloud. In addition, query processing tasks should be handled by the cloud; otherwise, there would be no point in outsourcing the data in the first place. Processing queries over encrypted data without the cloud ever decrypting the data is a very challenging task. In this paper, we focus on solving the k-nearest neighbor (kNN) query problem over an encrypted database outsourced to a cloud: a user issues an encrypted query record to the cloud, and the cloud returns the k closest records to the user. We first present a basic scheme and demonstrate that such a naive solution is not secure. To provide better security, we propose a secure kNN protocol that protects the confidentiality of the data, the user's input query, and the data access patterns. We also empirically analyze the efficiency of our protocols through various experiments. These results indicate that our secure protocol is very efficient on the user end; this lightweight scheme allows a user to perform the kNN query from any mobile device.
Keywords: cloud computing; cryptography; data privacy; query processing; relational databases; cloud computing; data access patterns; data confidentiality; data management tasks; encrypted data; kNN protocol; kNN query problem; mobile device; outsourced environments; privacy issues; query processing; relational data; secure k-nearest neighbor query; sensitive data; user input query; Distributed databases; Encryption; Protocols; Query processing (ID#: 15-5680)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816690&isnumber=6816620
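For orientation, the plaintext computation that the secure protocol emulates over ciphertexts is ordinary k-nearest-neighbor search, as in the Python sketch below; the cryptographic sub-protocols that keep the records, the query, and the access patterns hidden are not reproduced here.

# Plaintext kNN baseline only: the secure protocol computes this result over
# encrypted records without the cloud ever seeing data or query.
import heapq
import math

def knn(records: list[tuple[float, ...]], query: tuple[float, ...], k: int):
    dist = lambda r: math.dist(r, query)  # Euclidean distance (Python 3.8+)
    return heapq.nsmallest(k, records, key=dist)

medical_records = [(63.0, 140.0), (45.0, 120.0), (70.0, 150.0), (50.0, 118.0)]
print(knn(medical_records, query=(48.0, 121.0), k=2))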
Omar, M.N.; Salleh, M.; Bakhtiari, M., "Biometric Encryption To Enhance Confidentiality In Cloud Computing," Biometrics and Security Technologies (ISBAST), 2014 International Symposium on, pp. 45, 50, 26-27 Aug. 2014. doi: 10.1109/ISBAST.2014.7013092
Abstract: Virtualization is the base technology used in cloud computing; it enables cloud computing to provide hardware and software services to users on demand. Many companies migrate to cloud computing for reasons such as processor capability, bus speed, storage size, and memory, and to reduce the cost of dedicated servers. However, virtualization and cloud computing contain many security weaknesses that affect the confidentiality of biometric data in the cloud, including VMware escape, VM hopping, mobility, diversity monitoring, and others. Furthermore, the privacy of particular users is an issue for biometric data, e.g., the face recognition data of famous and important people. This paper therefore proposes biometric encryption to improve the confidentiality of biometric data in cloud computing. It also discusses virtualization for cloud computing as well as biometric encryption, and gives an overview of the security weaknesses of cloud computing and of how biometric encryption can improve confidentiality in a cloud computing environment. The novelty of the approach lies in using biometric encryption to enhance the confidentiality of biometric data in cloud computing.
Keywords: biometrics (access control); cloud computing; cryptography; virtualisation; VMware; biometric data confidentiality; biometric encryption; cloud computing; face recognition data; hardware services; software services; virtualization technology; Bioinformatics; Biometrics (access control); Cloud computing; Encryption; Hardware; Virtualization; Biometric Encryption; Cloud computing; Virtualization (ID#: 15-5681)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7013092&isnumber=7013076
Yanzhi Ren; Yingying Chen; Jie Yang; Bin Xie, "Privacy-Preserving Ranked Multi-Keyword Search Leveraging Polynomial Function In Cloud Computing," Global Communications Conference (GLOBECOM), 2014 IEEE, pp. 594, 600, 8-12 Dec. 2014. doi: 10.1109/GLOCOM.2014.7036872
Abstract: The rapid deployment of cloud computing provides users with the ability to outsource their data to the public cloud for economic savings and flexibility. To protect data privacy, users have to encrypt the data before outsourcing it to the cloud, which makes data utilization, such as data retrieval, a challenging task. It is thus desirable to enable search over encrypted cloud data to support effective and efficient data retrieval over a large number of data users and documents in the cloud. Existing approaches to encrypted cloud data search either focus on single-keyword search or become inefficient when a large number of documents is present, and thus offer little support for efficient multi-keyword search. In this paper, we propose a lightweight search approach that supports efficient multi-keyword ranked search in a cloud computing system. Specifically, we first propose a basic scheme that uses a polynomial function to hide the encrypted keywords and search patterns for efficient multi-keyword ranked search. To enhance search privacy, we propose a privacy-preserving scheme that utilizes the secure inner product method to protect the privacy of the searched multi-keywords. We analyze the privacy guarantee of our proposed scheme and conduct extensive experiments based on a real-world dataset. The experimental results demonstrate that our scheme enables the encrypted multi-keyword ranked search service with high efficiency in cloud computing.
Keywords: cloud computing; cryptography; data protection; information retrieval; outsourcing; cloud computing deployment; data outsourcing; data privacy protection; data retrieval; data utilization; encrypted cloud data; encrypted keyword hiding; encrypted multikeyword ranked search service; light-weight search approach; pattern search; privacy guarantee analysis; privacy-preserving ranked multikeyword search leveraging polynomial function; public cloud; real-world dataset; search privacy enhancement; search service; searched multikeyword privacy protection; secure inner product method; Cloud computing; Cryptography; Indexes; Keyword search; Polynomials; Privacy; Servers (ID#: 15-5682)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036872&isnumber=7036769
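The polynomial idea behind the basic scheme can be illustrated in the clear: encode a document's keyword set as the roots of a polynomial, so that evaluating the polynomial at a queried keyword yields zero exactly on a match. The Python sketch below omits the scheme's encryption and ranking layers; the hash function and prime modulus are our assumptions.

# Toy illustration of keyword hiding via a polynomial whose roots are the
# hashed keywords. The real scheme adds encryption and ranking on top.
import hashlib

P = (1 << 61) - 1  # a Mersenne prime as the working field

def h(keyword: str) -> int:
    return int.from_bytes(hashlib.sha256(keyword.encode()).digest(), "big") % P

def keyword_polynomial(keywords: list[str]) -> list[int]:
    """Coefficients (low to high degree) of prod(x - h(w)) mod P."""
    coeffs = [1]
    for w in keywords:
        root = h(w)
        coeffs = [(c1 - root * c0) % P
                  for c0, c1 in zip(coeffs + [0], [0] + coeffs)]
    return coeffs

def matches(coeffs: list[int], query: str) -> bool:
    x, acc = h(query), 0
    for c in reversed(coeffs):      # Horner evaluation mod P
        acc = (acc * x + c) % P
    return acc == 0

poly = keyword_polynomial(["privacy", "cloud", "ranking"])
print(matches(poly, "cloud"), matches(poly, "malware"))  # True False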
Shabalala, M.V.; Tarwireyi, P.; Adigun, M.O., "Privacy Monitoring Framework For Enhancing Transparency In Cloud Computing," Adaptive Science & Technology (ICAST), 2014 IEEE 6th International Conference on, pp. 1, 7, 29-31 Oct. 2014. doi: 10.1109/ICASTECH.2014.7068093
Abstract: The lack of proper privacy and security mechanisms to monitor the sensitive information entrusted to cloud service providers by consumers is a barrier to broader adoption of cloud computing. Despite the many benefits that cloud computing offers, many businesses remain skeptical about how privacy is handled in the cloud. This owes to the fact that, with cloud computing, the storage and processing of private information are done on remote machines that are not owned or even managed by the customers. All that the customer can see is a virtual infrastructure built on top of possibly non-trusted physical hardware or operating environments. A technical mechanism is needed to address users' privacy concerns and allow for broader adoption of the cloud. In this paper, we present a Privacy Monitoring Framework to help cloud customers comprehend what happens to their data while stored in the cloud. The framework provides a mechanism that enables cloud customers to trace in detail what happens to their data, where it is stored, and who accesses it.
Keywords: cloud computing; data privacy; system monitoring; cloud computing; privacy monitoring framework; transparency enhancement; Business; Cloud computing; Data privacy; Monitoring; Privacy; Security; accountability; availability; cloud computing; confidentiality; integrity; privacy; security; trust (ID#: 15-5683)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7068093&isnumber=7068059
Mercy, S.S.; Srikanth, G.U., "An Efficient Data Security System For Group Data Sharing In Cloud System Environment," Information Communication and Embedded Systems (ICICES), 2014 International Conference on, pp. 1, 4, 27-28 Feb. 2014. doi: 10.1109/ICICES.2014.7033956
Abstract: Cloud computing delivers services to users over a reliable Internet connection. In the secure cloud, services are stored and shared by multiple users because of low cost and easy data maintenance. Sharing data is the central purpose of cloud data centres; on the other hand, storing sensitive information raises privacy concerns in the cloud. The cloud service provider has to protect stored client documents and applications by encrypting the data to provide data integrity. Designing proficient document sharing among group members in the cloud is a difficult task because of changes in group membership and the need to preserve the confidentiality of documents and of group users' identities. We propose a fortified data sharing scheme that operates in a secret manner and uses the Advanced Encryption Standard (AES) to provide efficient group revocation. The proposed system contributes efficient group authorization, authentication, confidentiality, access control, and document security. The AES algorithm is used to encrypt the documents, so that they can be securely shared among multiple cloud users while asserting security and confidentiality.
Keywords: authorisation; cloud computing; cryptography; data privacy; document handling; software maintenance; software reliability; Internet connection reliability; access control; authentication; authorization; cloud computing; cloud data centres; cloud system environment; confidentiality; data encryption; data security advanced encryption standard algorithm; document conservation; document security; efficient data security system; group data sharing; group revocation advanced encryption standard scheme; group user identity confidentiality; group user membership change; privacy concern; proficient document sharing; sensitive information storage; Authorization; Cloud computing; Encryption; Servers; Cloud Computing; Document Sharing; Dynamic Group; Group Authorization (ID#: 15-5684)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033956&isnumber=7033740
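A minimal sketch of AES-based document protection of the sort the scheme relies on follows, using the third-party Python `cryptography` package (pip install cryptography); the group key distribution and revocation machinery that constitute the paper's actual contribution are not shown.

# Minimal sketch, assuming a shared group key: AES-GCM gives confidentiality
# plus integrity for a shared document. Not the paper's full scheme.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

group_key = AESGCM.generate_key(bit_length=256)   # shared by current members
aead = AESGCM(group_key)

nonce = os.urandom(12)                            # unique per message
ciphertext = aead.encrypt(nonce, b"shared group document", b"doc-42")
plaintext = aead.decrypt(nonce, ciphertext, b"doc-42")
assert plaintext == b"shared group document"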
Kuzhalvaimozhi, S.; Rao, G.R., "Privacy Protection In Cloud Using Identity Based Group Signature," Applications of Digital Information and Web Technologies (ICADIWT), 2014 Fifth International Conference on the, pp. 75, 80, 17-19 Feb. 2014. doi: 10.1109/ICADIWT.2014.6814670
Abstract: Cloud computing is one of the emerging computing technologies in which costs are directly proportional to usage and demand. The advantages of this technology are also the source of its security and privacy problems: users' data are stored on cloud servers that are not under their own control, so cloud services are required to authenticate the user. In general, most cloud authentication algorithms do not provide anonymity for users, and the cloud provider can track users easily. Privacy and authenticity are two critical issues of cloud security. In this paper, we propose a secure anonymous authentication method for cloud services using an identity-based group signature, which allows cloud users to prove that they have the privilege to access the data without revealing their identities.
Keywords: authorisation; cloud computing; cryptography; data privacy; digital signatures; cloud computing; cloud security; cloud services; identity based cryptosystem; identity based group signature; privacy problems; privacy protection; secure anonymous authentication method; security problems; user authentication; Authentication; Cloud computing; Elliptic curve cryptography; Privacy; Cloud; Group Signature; Identity based cryptosystem; Privacy Protection (ID#: 15-5685)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814670&isnumber=6814661
Balasaraswathi, V.R.; Manikandan, S., "Enhanced Security For Multi-Cloud Storage Using Cryptographic Data Splitting With Dynamic Approach," Advanced Communication Control and Computing Technologies (ICACCCT), 2014 International Conference on, pp. 1190, 1194, 8-10 May 2014. doi: 10.1109/ICACCCT.2014.7019286
Abstract: The use of cloud computing has increased rapidly in many organizations. Security is considered to be the most critical aspect of a cloud computing environment due to the sensitive information stored in the cloud for its users. The goal of cloud security is mainly focused on issues related to data security and privacy in cloud computing. We consider a multi-cloud model based on partitioning the application system across distinct clouds instead of using a single cloud service such as Amazon's. We discuss and present cryptographic data splitting with a dynamic approach for securing information, where the metadata is stored in a private cloud. This approach prevents unauthorized data retrieval by hackers and intruders. The results and implementation of the newly proposed model are analyzed with respect to how it addresses the security factors in cloud computing.
Keywords: cloud computing; cryptography; data privacy; storage management; Amazon cloud service; application system partitioning; cloud computing environment; cloud security; data privacy; data security; hackers; intruders; metadata information; multicloud storage model; private cloud; sensitive information; single cloud service; unauthorized data retrieval; Analytical models; Cloud computing; Computational modeling; Cryptography; Data models; Ecosystems; Cryptographic Data Splitting; Multi-cloud storage; private cloud; public cloud (ID#: 15-5686)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7019286&isnumber=7019129
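One simple way to realize data splitting across clouds is XOR-based secret sharing, in which no single cloud's share reveals anything about the data. The Python sketch below illustrates the idea; the paper's exact splitting scheme may differ.

# Illustrative XOR secret sharing: n-1 random shares plus one share that is
# the XOR of the data with all random shares. All n shares are needed to
# reconstruct; any single share is statistically random.
import os
from functools import reduce

def split(data: bytes, n_clouds: int) -> list[bytes]:
    shares = [os.urandom(len(data)) for _ in range(n_clouds - 1)]
    last = bytes(reduce(lambda a, b: a ^ b, parts)
                 for parts in zip(data, *shares))
    return shares + [last]

def combine(shares: list[bytes]) -> bytes:
    return bytes(reduce(lambda a, b: a ^ b, parts) for parts in zip(*shares))

shares = split(b"patient record #1723", n_clouds=3)  # one share per cloud
assert combine(shares) == b"patient record #1723"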
Jianwei Chen; Huadong Ma, "Privacy-Preserving Decentralized Access Control for Cloud Storage Systems," Cloud Computing (CLOUD), 2014 IEEE 7th International Conference on, pp. 506, 513, June 27 2014-July 2 2014. doi: 10.1109/CLOUD.2014.74
Abstract: Along with the large amounts of data being outsourced to the cloud, it is imperative to enforce a secure, efficient and privacy-aware access control scheme on the cloud. Decentralized Attribute-based Encryption (ABE) is a variant of the multi-authority ABE scheme that is regarded as better suited to access control in a large-scale cloud. Constructing a decentralized ABE scheme should not require a central Attribute Authority (AA) or any cooperative computing, yet most existing schemes are not efficient enough. Moreover, such schemes introduced a Global Identifier (GID) to resist collusion attacks by users, but corrupt AAs can trace a user by his GID, resulting in the leakage of the user's identity privacy. In this paper, we design a privacy-preserving decentralized access control framework for cloud storage systems, and propose a decentralized CP-ABE access control scheme with privacy-preserving secret key extraction. Our scheme does not require any central AA or coordination among multiple authorities. We adopt the Pedersen commitment scheme and oblivious commitment based envelope protocols as the main cryptographic primitives to address the privacy problem; thus users receive secret keys only for valid identity attributes, while the AAs learn nothing about the attributes. Our theoretical analysis and extensive experiments demonstrate the presented scheme's security strength and effectiveness in terms of scalability, computation and storage.
Keywords: authorisation; cloud computing; cryptography; data privacy; decentralised control; GID; Pedersen commitment scheme; central attribute authority; cloud storage systems; collusion attack; cooperative computing; corrupt AA; cryptographic primitives; decentralized ABE scheme; decentralized CP-ABE access control; decentralized attribute-based encryption; global identifier; large scale cloud; multi-authority ABE scheme; privacy preserving secret key extraction; privacy-aware access control scheme; privacy-preserving decentralized access control framework; user identity privacy; Access control; Cloud computing; Encryption; Privacy; Registers (ID#: 15-5687)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6973780&isnumber=6973706
Patel, K.; Sendhil Kumar, K.S.; Singh, N.; Parikh, K.; Jaisankar, N., "Data Security And Privacy Using Data Partition And Centric Key Management In Cloud," Information Communication and Embedded Systems (ICICES), 2014 International Conference on, pp. 1, 5, 27-28 Feb. 2014. doi: 10.1109/ICICES.2014.7033769
Abstract: Cloud computing is a next-generation platform that provides virtualization with a resource pool. There are three types of cloud service models: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS). Most scientific research focuses on the IaaS model, which manages virtualization and storage. IaaS allows customers to scale based on user demand, and users pay only for the resources they use. Data security plays a crucial role in the cloud environment, and user trust is the most challenging problem for cloud services. This paper proposes a new methodology that secures data and provides privacy to customers in the cloud. Our technique provides security by using a data-partition approach, in which the partitioned data are then processed in parallel by the encryption mechanism; privacy is provided by a centric key management scheme.
Keywords: Web services; cloud computing; data privacy; private key cryptography; virtualisation; IaaS model; PaaS; SaaS model; centric key management scheme; cloud computing; cloud environment; cloud service models; data partition approach; data privacy; data security; encryption mechanism; infrastructure-as-a-service; next generation platform; platform-as-a-service; resource pool; resource usage; scientific research; software-as-a-service; storage management; user demand; user trust; virtualization management; Algorithm design and analysis; Cloud computing; Data privacy; Encryption; Partitioning algorithms; Algorithm; Cloud Computing; Encryption; Key Management; Service Models (ID#: 15-5688)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7033769&isnumber=7033740
Shashidhara, M.S.; Jaini, C.P., "Privacy Preserving Third Party Auditing in Multi Cloud Storage Environment," Cloud Computing in Emerging Markets (CCEM), 2014 IEEE International Conference on, pp. 1, 6, 15-17 Oct. 2014. doi: 10.1109/CCEM.2014.7015495
Abstract: The on-demand, pay-per-use, and scalable services provided by the cloud model promise to reduce capital as well as running expenditures for both hardware and software. In a cloud environment, users can remotely store their data and access it from a shared pool of configurable computing resources, without the burden of local data storage. We discuss various methods related to the security and privacy capabilities of the cloud paradigm, especially data storage in a multi-cloud environment. We provide three models in the form of multi-cloud architectures, which allow the schemes to be categorized and analyzed according to their security benefits. The methods include resource replication, splitting the application system into tiers based on PIR methods, and splitting both application logic and data into segments. In addition, since the integrity protection of data is a formidable task in cloud computing for users with limited computing resources, vulnerabilities in user data privacy are also possible in third-party auditing. We therefore propose a safe cloud storage methodology that supports privacy-preserving third-party auditing, and we study how to perform audits concurrently for multiple users in an efficient manner. Experimental results show that the third-party auditing computation time improves on the existing approach.
Keywords: cloud computing; data integrity; data privacy; resource allocation; security of data; storage management; PIR methods; application logic; cloud computing; cloud paradigm; computing resources; configurable computing resources; data integrity protection; data privacy; data storage; multicloud architectures; multicloud storage environment; on-demand scalable services; pay-per-use services; privacy capabilities; privacy preserving third party auditing; resource replication; scalable services; security capabilities; split application system; Cloud computing; Computer architecture; Databases; Flowcharts; Memory; Security; Web servers (ID#: 15-5689)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7015495&isnumber=7015466
Wenyi Liu; Uluagac, A.S.; Beyah, R., "MACA: A Privacy-Preserving Multi-Factor Cloud Authentication System Utilizing Big Data," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, pp. 518, 523, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849285
Abstract: Multi-factor authentication (MFA) is an approach to user validation that requires the presentation of two or more authentication factors. Given the popularity of cloud systems, MFA systems become vital in authenticating users. However, MFA approaches are highly intrusive and expose users' sensitive information to untrusted cloud servers that can keep physically identifying elements of users, long after the user ends the relationship with the cloud. To address these concerns in this work, we present a privacy-preserving multi-factor authentication system utilizing the features of big data called MACA. In MACA, the first factor is a password while the second factor is a hybrid profile of user behavior. The hybrid profile is based on users' integrated behavior, which includes both host-based characteristics and network flow-based features. MACA is the first MFA that considers both user privacy and usability combining big data features (26 total configurable features). Furthermore, we adopt fuzzy hashing and fully homomorphic encryption (FHE) to protect users' sensitive profiles and to handle the varying nature of the user profiles. We evaluate the performance of our proposed approach through experiments with several public datasets. Our results show that our proposed system can successfully validate legitimate users while detecting impostors.
Keywords: Big Data; cloud computing; cryptography; data privacy; file servers; message authentication; Big Data; FHE; MACA; MFA; fully homomorphic encryption; fuzzy hashing; host-based characteristics; network flow-based features; password; privacy-preserving multifactor cloud authentication system; untrusted cloud servers; usability; user behavior hybrid profile; user integrated behavior; user privacy; user sensitive profile protection; Authentication; Big data; Conferences; Cryptography; Mice; Servers; Authentication in Cloud; Fully Homomorphic Encryption; Fuzzy Hashing; Privacy-Preserving Authentication (ID#: 15-5690)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849285&isnumber=6849127
Ruihui Zhao; Hongwei Li; Yi Yang; Yu Liang, "Privacy-Preserving Personalized Search Over Encrypted Cloud Data Supporting Multi-Keyword Ranking," Wireless Communications and Signal Processing (WCSP), 2014 Sixth International Conference on, pp. 1, 6, 23-25 Oct. 2014. doi: 10.1109/WCSP.2014.6992161
Abstract: Cloud computing is emerging as a revolutionary computing paradigm that provides a flexible and economic strategy for data management and resource sharing. Security and privacy become major concerns in the cloud scenario, for which Searchable Encryption (SE) technology has been proposed to support efficient keyword-based queries and retrieval of encrypted data. However, the absence of personalized search is still a typical shortcoming of existing SE schemes. In this paper, we focus on addressing personalized search over encrypted cloud data and propose a Privacy-preserving Personalized Search over Encrypted Cloud Data Supporting Multi-keyword Ranking (PPSE) scheme that supports top-k retrieval under stringent privacy requirements. For the first time, we formulate the privacy issue and design goals for personalized search in SE. We introduce the Open Directory Project to construct a formal model for integrating preferential ranking with keyword search reasonably and automatically, which can help eliminate the ambiguity between any two search requests. In PPSE, we employ the vector space model and the secure kNN scheme to guarantee sufficient search accuracy and privacy protection. The tf-idf weight and the preference weight help ensure that the search result faithfully respects the user's interests. Thorough security analysis and performance evaluation through experiments performed on a real-world dataset demonstrate that the PPSE scheme indeed meets our proposed design goals.
Keywords: cloud computing; cryptography; data privacy; query processing; Open Directory Project; PPSE scheme; SE technology; encrypted cloud data supporting multikeyword ranking; flexible-economic data management strategy; flexible-economic resource sharing strategy; formal model; keyword search; keyword-based encrypted data query; keyword-based encrypted data retrieval; performance evaluation; preference weight; preferential ranking integration; privacy protection; privacy-preserving personalized search; real-world dataset; search accuracy; search request ambiguity elimination; secure kNN scheme; security analysis; tf-idf weight; top-k retrieval; user interest; vector space model; Cryptography; Data privacy; Dictionaries; Indexes; Servers; Vectors; Multi-keyword ranking; Personalized search; Searchable encryption (ID#: 15-5691)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6992161&isnumber=6992003
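The ranking ingredients PPSE protects, a tf-idf score scaled by a per-user preference weight, can be shown in plaintext, as in the Python sketch below; the scheme itself evaluates this over encrypted vectors, and the corpus and weights here are invented.

# Plaintext sketch of tf-idf ranking with per-user preference weights; the
# actual scheme computes the equivalent over encrypted index vectors.
import math

docs = {
    "d1": "cloud privacy cloud encryption",
    "d2": "keyword search ranking",
    "d3": "privacy preserving keyword search cloud",
}

def tf_idf(term: str, doc: str) -> float:
    words = docs[doc].split()
    tf = words.count(term) / len(words)
    df = sum(term in d.split() for d in docs.values())
    return tf * math.log(len(docs) / df) if df else 0.0

def rank(query: dict[str, float]):  # keyword -> user preference weight
    scores = {d: sum(w * tf_idf(t, d) for t, w in query.items()) for d in docs}
    return sorted(scores, key=scores.get, reverse=True)

print(rank({"privacy": 2.0, "cloud": 1.0}))  # preference boosts "privacy" docs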
Khanezaei, N.; Hanapi, Z.M., "A Framework Based On RSA And AES Encryption Algorithms For Cloud Computing Services," Systems, Process and Control (ICSPC), 2014 IEEE Conference on, pp. 58, 62, 12-14 Dec. 2014. doi: 10.1109/SPC.2014.7086230
Abstract: Cloud computing is an emerging computing model in which computing and communication resources are provided as services over the Internet. Privacy and security of cloud storage services are very important and have become a challenge in cloud computing due to the loss of control over data and the dependence on the cloud computing provider. As huge amounts of data are transferred within cloud systems, the risk of data being accessed by attackers rises. Considering the problem of building a secure cloud storage service, the current scheme is proposed based on a combination of the RSA and AES encryption methods to share data among users in a secure cloud system. The proposed method makes attacks more difficult while reducing the time of information transmission between the user and cloud data storage.
Keywords: cloud computing; data privacy; public key cryptography; AES encryption algorithm; Internet; RSA encryption algorithm; cloud computing services; cloud storage service; data privacy; data security; Cloud computing; Computational modeling; Encryption; Secure storage; Servers; AES; Cloud Computing; Cryptography; Data Security; RSA (ID#: 15-5692)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7086230&isnumber=7086214
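The underlying RSA+AES hybrid pattern is standard: AES encrypts the bulk data and RSA wraps the AES key for the recipient. A sketch using the third-party Python `cryptography` package follows; the parameter choices are ours, not the paper's.

# Hedged sketch of the classic RSA+AES hybrid pattern the paper builds on.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_pub = recipient_priv.public_key()

# Sender: encrypt data with a fresh AES key, then wrap the key with RSA-OAEP.
aes_key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"file destined for cloud storage", None)
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = recipient_pub.encrypt(aes_key, oaep)

# Recipient: unwrap the AES key, then decrypt the data.
recovered = AESGCM(recipient_priv.decrypt(wrapped_key, oaep)).decrypt(nonce, ciphertext, None)
assert recovered == b"file destined for cloud storage"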
Sen, S.; Guha, S.; Datta, A.; Rajamani, S.K.; Tsai, J.; Wing, J.M., "Bootstrapping Privacy Compliance in Big Data Systems," Security and Privacy (SP), 2014 IEEE Symposium on, pp. 327, 342, 18-21 May 2014. doi: 10.1109/SP.2014.28
Abstract: With the rapid increase in cloud services collecting and using user data to offer personalized experiences, ensuring that these services comply with their privacy policies has become a business imperative for building user trust. However, most compliance efforts in industry today rely on manual review processes and audits designed to safeguard user data, and therefore are resource intensive and lack coverage. In this paper, we present our experience building and operating a system to automate privacy policy compliance checking in Bing. Central to the design of the system are (a) Legalease, a language that allows specification of privacy policies that impose restrictions on how user data is handled, and (b) Grok, a data inventory for Map-Reduce-like big data systems that tracks how user data flows among programs. Grok maps code-level schema elements to data types in Legalease, in essence annotating existing programs with information flow types with minimal human input. Compliance checking is thus reduced to information flow analysis of big data systems. The system, bootstrapped by a small team, checks compliance daily of millions of lines of ever-changing source code written by several thousand developers.
Keywords: Big Data; Web services; cloud computing; computer bootstrapping; conformance testing; data privacy; parallel programming; search engines; source code (software); Bing; Grok data inventory; Legalease language; Map-Reduce-like Big Data systems; automatic privacy policy compliance checking; business imperative privacy policies; cloud services; code-level schema element mapping; data types; information flow types; minimal human input; personalized user experiences; privacy compliance bootstrapping; privacy policy specification; program annotation; source code; user data handling; user trust; Advertising; Big data; Data privacy; IP networks; Lattices; Privacy; Semantics; big data; bing; compliance; information flow; policy; privacy; program analysis (ID#: 15-5693)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956573&isnumber=6956545
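The reduction the authors describe, from policy compliance to information flow checking, can be illustrated with a toy model: policy clauses deny certain data types from flowing to certain purposes, and a Grok-like inventory records which typed data each job consumes. The clauses, job records, and field names below are invented for illustration and are not Bing's actual schema or the Legalease syntax.

```python
# Toy illustration of compliance-as-information-flow, in the spirit of
# the Legalease/Grok split described above. All names are invented.

# Policy: DENY clauses say a data type may not flow to a given purpose.
policy_denies = {
    ("IPAddress", "Advertising"),
    ("SearchQuery", "Sharing"),
}

# Grok-like inventory: which typed data each job reads, and its purpose.
jobs = [
    {"name": "AdsModel", "reads": {"SearchQuery", "IPAddress"}, "purpose": "Advertising"},
    {"name": "Abuse",    "reads": {"IPAddress"},                "purpose": "AbuseDetect"},
]

def violations(jobs, denies):
    """Flag every (job, data type, purpose) triple that a DENY clause forbids."""
    for job in jobs:
        for dtype in job["reads"]:
            if (dtype, job["purpose"]) in denies:
                yield (job["name"], dtype, job["purpose"])

for v in violations(jobs, policy_denies):
    print("violation:", v)   # ('AdsModel', 'IPAddress', 'Advertising')
```

Once jobs are labeled with data types, daily compliance checking amounts to re-running a scan like this over the current dataflow graph.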
Ragini; Mehrotra, P.; Venkatesan, S., "An Efficient Model For Privacy And Security In Mobile Cloud Computing," Recent Trends in Information Technology (ICRTIT), 2014 International Conference on, pp. 1-6, 10-12 April 2014. doi: 10.1109/ICRTIT.2014.6996177
Abstract: Mobile cloud computing has emerged as a promising technology, and its applications are expected to expand into storing personal health information, e-governance, and other areas. However, data security and privacy remain major concerns for users. These issues originate from the fact that the cloud is a semi-trusted environment, and sensitive information stored in the cloud can be accessed by unauthorized persons. Thus, new methods and models are needed to solve the problem of privacy and security for the data owner. In this paper, we attempt to address these concerns. We first present a Mobility Node Model (MNM) in which a mobile client comes to the organization from the external environment. Here the data owner provides the mobile client access to cloud data via a proxy server without revealing the client's identity. Second, we propose a Centralized Owner Model (COM), a centralized control mechanism which generates keys, group member details, and mobile client accessibility for the external and internal environments. Here the mobile client's request is propagated via a Trusted Leader to achieve optimality in terms of minimizing computation and communication overheads. The analysis of our proposed models demonstrates their efficiency in achieving privacy and security in mobile cloud computing.
Keywords: authorisation; cloud computing; cryptography; data privacy; mobile computing; trusted computing; COM; IBE; MNM; centralized control mechanism; centralized owner model; cloud data access; communication overhead; computation overhead; data owner privacy; data security; external environment; group member details; identity based proxy encryption; internal environment; key generation; mobile client accessibility; mobile cloud computing; mobility node model; proxy server; semitrusted environment; sensitive information; trusted leader; unauthorized person; Ciphers; Cloud computing; Computational modeling; Encryption; Mobile communication; Servers; Identity Based Encryption (IBE); Mobile Cloud Computing; Privacy & Security (ID#: 15-5694)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996177&isnumber=6996087
Wenhai Sun; Shucheng Yu; Wenjing Lou; Hou, Y.T.; Hui Li, "Protecting Your Right: Attribute-Based Keyword Search With Fine-Grained Owner-Enforced Search Authorization In The Cloud," INFOCOM, 2014 Proceedings IEEE, pp. 226-234, April 27-May 2, 2014. doi: 10.1109/INFOCOM.2014.6847943
Abstract: Search over encrypted data is a critically important enabling technique in cloud computing, where encryption-before-outsourcing is a fundamental solution to protecting user data privacy in the untrusted cloud server environment. Many secure search schemes have focused on the single-contributor scenario, where the outsourced dataset or the secure searchable index of the dataset is encrypted and managed by a single owner, typically based on symmetric cryptography. In this paper, we focus on a different yet more challenging scenario where the outsourced dataset can be contributed by multiple owners and searched by multiple users, i.e. the multi-user multi-contributor case. Inspired by attribute-based encryption (ABE), we present the first attribute-based keyword search scheme with efficient user revocation (ABKS-UR) that enables scalable fine-grained (i.e. file-level) search authorization. Our scheme allows multiple owners to encrypt and outsource their data to the cloud server independently. Users can generate their own search capabilities without relying on an always-online trusted authority. Fine-grained search authorization is also implemented by the owner-enforced access policy on the index of each file. Further, by incorporating proxy re-encryption and lazy re-encryption techniques, we are able to delegate the heavy system update workload during user revocation to the resourceful semi-trusted cloud server. We formalize the security definition and prove the proposed ABKS-UR scheme selectively secure against chosen-keyword attack. Finally, performance evaluation shows the efficiency of our scheme.
Keywords: authorisation; cloud computing; cryptography; data privacy; information retrieval; trusted computing; ABE; ABKS-UR scheme; always online trusted authority; attribute-based encryption; attribute-based keyword search; chosen-keyword attack; cloud computing; cloud server environment; data privacy; encryption; encryption-before-outsourcing; fine-grained owner-enforced search authorization; lazy re-encryption technique; owner-enforced access policy; proxy re-encryption technique; resourceful semi-trusted cloud server; searchable index; security definition; single-contributor search scenario; symmetric cryptography; user revocation; Authorization; Data privacy; Encryption; Indexes; Keyword search; Servers (ID#: 15-5695)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847943&isnumber=6847911
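The real ABKS-UR construction relies on pairing-based attribute-based encryption; the sketch below models only the owner-enforced authorization logic (which user attribute sets satisfy which per-file policy), not the cryptography that enforces it. The policy tree and attribute names are invented for illustration.

```python
# Authorization-logic sketch only: a monotone access policy attached to
# a file index, evaluated over a user's attribute set. In ABKS-UR this
# check is enforced cryptographically via ABE, not as plaintext logic.

def satisfies(policy, attrs: set) -> bool:
    """Evaluate a monotone AND/OR policy tree over a user attribute set."""
    if isinstance(policy, str):            # leaf: a single attribute
        return policy in attrs
    op, *children = policy                 # ("AND"|"OR", child, child, ...)
    results = (satisfies(c, attrs) for c in children)
    return all(results) if op == "AND" else any(results)

file_index_policy = ("AND", "Doctor", ("OR", "Cardiology", "Oncology"))

print(satisfies(file_index_policy, {"Doctor", "Cardiology"}))  # True: may search
print(satisfies(file_index_policy, {"Nurse", "Cardiology"}))   # False: search denied
```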
Xue, Li; Wuling, Ren; Guoxin, Jiang; Jie, Yang, "A Solution Which Can Support Privacy Protection And Fuzzy Search Quickly Under Cloud Computing Environment," Information Technology and Electronic Commerce (ICITEC), 2014 2nd International Conference on, pp. 43-46, 20-21 Dec. 2014. doi: 10.1109/ICITEC.2014.7105568
Abstract: With the rapid development and widespread use of cloud computing, more and more users now store data in cloud storage. Some users, especially enterprise users with stricter privacy requirements, urgently need a solution in which cloud storage can be encrypted, retrieved rapidly, and opened to internal staff only. However, to protect its privacy, the data must be encrypted when it is uploaded, which greatly reduces the efficiency of retrieval. On this basis, the authors propose a solution that provides privacy protection and rapid fuzzy search in a cloud computing environment, offering more reasonable and efficient data storage and retrieval services for the user's data.
Keywords: Approximation methods; Cloud computing; Encryption; Privacy; Servers; Cloud computing; fuzzy; privacy; search (ID#: 15-5696)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7105568&isnumber=7105555
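The abstract does not spell out the construction, but a common building block for fuzzy search over encrypted indexes (not necessarily the one these authors use) is a wildcard-based fuzzy keyword set: every edit-distance-1 variant of a keyword is keyed-hashed into the index, so a query with one typo still intersects it. A hedged sketch, with an invented shared key:

```python
# Wildcard-based fuzzy searchable index sketch. The key, keywords, and
# edit-distance bound are illustrative assumptions, not the paper's design.
import hmac, hashlib

KEY = b"shared index key"  # hypothetical key held by authorized internal staff

def trapdoor(word: str) -> bytes:
    """Keyed hash so the server never sees keywords in the clear."""
    return hmac.new(KEY, word.encode(), hashlib.sha256).digest()

def wildcard_set(word: str) -> set:
    """Edit-distance-1 wildcard variants: 'cat' -> {'cat', '*at', 'c*t', 'ca*t', ...}."""
    variants = {word}
    for i in range(len(word) + 1):
        variants.add(word[:i] + "*" + word[i:])        # insertion slot
        if i < len(word):
            variants.add(word[:i] + "*" + word[i+1:])  # substitution/deletion
    return variants

# Index: every wildcard variant of each indexed keyword, HMAC'd.
index = {trapdoor(v) for v in wildcard_set("secret")}

# A query with one typo still matches: the wildcard sets intersect.
query_hits = {trapdoor(v) for v in wildcard_set("sedret")} & index
print(bool(query_hits))  # True
```

The trade-off is index size (each keyword expands to O(length) variants) in exchange for constant-time fuzzy lookups on the server.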
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Virtual Machines, 2015 |
Arguably, virtual machines are more secure than actual machines. This idea is based on the notion that an attacker cannot jump the gap between the virtual and the actual. The growth of interest in cloud computing suggests it is time for a fresh look at the vulnerabilities in virtual machines. The articles cited below address security concerns in some interesting ways: they show how competition between I/O workloads could be exploited, describe a "gathering storm" of VM security issues, and discuss digital forensics issues in the cloud.
Jin, S.; Ahn, J.; Seol, J.; Cha, S.; Huh, J.; Maeng, S., "H-SVM: Hardware-assisted Secure Virtual Machines under a Vulnerable Hypervisor," Computers, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 09 January 2015. doi: 10.1109/TC.2015.2389792
Abstract: With increasing demands on cloud computing, protecting guest virtual machines (VMs) from malicious attackers has become critical to providing secure services. The current cloud security model with software-based virtualization relies on the invulnerability of the software hypervisor and a trustworthy administrator with root permission. However, compromising the hypervisor through remote attacks or root permission grants the attackers full access to the memory and context of a guest VM. This paper proposes a hardware-based approach to protect guest VMs even under an untrusted hypervisor. With the proposed mechanism, memory isolation is provided by the secure hardware, which is much less vulnerable than the software hypervisor. The proposed mechanism extends the current hardware support for memory virtualization based on nested paging with a small extra hardware cost. The hypervisor can still flexibly allocate physical memory pages to virtual machines for efficient resource management. In addition to the system design for secure virtualization, this paper presents a prototype implementation using system management mode. Although the current system management mode is not intended for security functions and thus limits the performance and complete protection, the prototype implementation proves the feasibility of the proposed design.
Keywords: Context; Hardware; Memory management; Registers; Virtual machine monitors; Virtual machining; Virtualization; Cloud Computing; Security; Virtualization (ID#: 15-5342)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7005439&isnumber=4358213
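The core H-SVM idea is that the hypervisor may propose page allocations, but a component outside its control validates a page-ownership table before any mapping takes effect. The toy model below captures that separation of duties in Python; the data structures are ours, whereas the paper implements the check in hardware and system management mode.

```python
# Conceptual toy model of ownership-checked page mapping. In H-SVM the
# trusted check lives in hardware/SMM, not in software as sketched here.

page_owner = {}   # physical page -> owning VM id (absent = free)

def hypervisor_allocate(page: int, vm: str) -> None:
    """Untrusted hypervisor proposes an allocation; ownership is recorded once."""
    if page_owner.get(page) is None:
        page_owner[page] = vm
    else:
        raise PermissionError(f"page {page} already owned by {page_owner[page]}")

def secure_map(page: int, vm: str) -> str:
    """Trusted check: a page may only be mapped into the VM that owns it."""
    if page_owner.get(page) != vm:
        raise PermissionError(f"VM {vm} does not own page {page}")
    return f"mapped page {page} into {vm}"

hypervisor_allocate(7, "vmA")
print(secure_map(7, "vmA"))   # ok
# secure_map(7, "vmB") would raise: isolation holds even if the
# hypervisor is compromised, because the check is outside its control.
```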
Su, Kui; Xu, Lei; Chen, Cong; Chen, Wenzhi; Wang, Zonghui, "Affinity and Conflict-Aware Placement of Virtual Machines in Heterogeneous Data Centers," Autonomous Decentralized Systems (ISADS), 2015 IEEE Twelfth International Symposium on, pp. 289-294, 25-27 March 2015. doi: 10.1109/ISADS.2015.42
Abstract: The virtual machine placement (VMP) problem has been a key issue in IaaS/PaaS cloud infrastructures. Many recent works on VMP show that inter-VM relations such as memory sharing, traffic dependency, and resource competition should be seriously considered to save energy, increase infrastructure performance, reduce service level agreement violation rates, and provide better administrative capabilities to the cloud provider. However, most existing works consider the inter-VM relations without taking the heterogeneity of cloud data centers into account. In practice, heterogeneous physical machines (PMs) in a heterogeneous data center are often partitioned into logical groups for load balancing and specific services, and cloud users often assign their VMs specific PM requirements, which makes the inter-VM relations far more complex. In this paper, we propose an efficient solution for VMP with inter-VM relation constraints in a heterogeneous data center. The experimental results show that our solution can efficiently solve this complex problem with an acceptable runtime.
Keywords: Bandwidth; Delays; Distributed databases; Greedy algorithms; Runtime; Security; Virtual machining; Affinity; Cloud data centers; Conflict; Heterogeneity; Virtual machine placement (ID#: 15-5343)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098274&isnumber=7098213
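The authors' algorithm is not reproduced in the abstract, but the two relation types it names are easy to illustrate with a minimal greedy placement sketch: 'affinity' pairs should share a physical machine when possible, and 'conflict' pairs must never be co-located. Capacities, VM sizes, and the greedy order below are invented assumptions.

```python
# Greedy VM placement under affinity and conflict constraints (illustrative
# sketch only; the paper's actual heuristic may differ substantially).

pms = {"pm1": 8, "pm2": 8}                       # PM -> free capacity units
vms = {"web": 2, "db": 4, "cache": 2, "batch": 4}
affinity = {("web", "cache")}                    # co-locate if possible
conflict = {("db", "batch")}                     # never co-locate

placement = {}

def ok(vm, pm):
    """Capacity fits and no conflicting peer already lives on this PM."""
    if vms[vm] > pms[pm]:
        return False
    peers = {v for v, p in placement.items() if p == pm}
    return not any((vm, v) in conflict or (v, vm) in conflict for v in peers)

for vm in sorted(vms, key=vms.get, reverse=True):    # place biggest VMs first
    # Prefer a PM already hosting an affinity partner of this VM.
    prefer = [placement[v] for a, b in affinity for v in (a, b)
              if vm in (a, b) and v != vm and v in placement]
    for pm in prefer + list(pms):
        if ok(vm, pm):
            placement[vm] = pm
            pms[pm] -= vms[vm]
            break

print(placement)  # e.g. {'db': 'pm1', 'batch': 'pm2', 'web': 'pm1', 'cache': 'pm1'}
```

Heterogeneity would add per-VM PM requirements as an extra predicate inside `ok`, which is exactly where the paper argues the problem becomes far more complex.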
Sethi, Shuchi; Shakil, Kashish Ara; Alam, Mansaf, "Seeking Black Lining in Cloud," Computing for Sustainable Global Development (INDIACom), 2015 2nd International Conference on, pp. 1251-1256, 11-13 March 2015. doi: (not provided)
Abstract: This work focuses on attacks on confidentiality that require time synchronization. The manuscript proposes a detection framework for covert channels in cloud security. The problem is interpreted as a binary classification problem, and the proposed algorithm is based on features that emerged from analysis of the Google cluster trace, which forms the basis for analyzing attack-free data. This approach can be generalized to study the flow of other systems and to fault detection. The proposed framework makes no assumptions about the data distribution as a whole, making it well suited to the dynamism of the cloud.
Keywords: Conferences; Bus contention; Cloud security; Covert channel; Virtual machines (ID#: 15-5344)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7100450&isnumber=7100186
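Since the abstract frames covert-channel detection as binary classification but does not list its features, the sketch below uses invented timing statistics (a covert timing channel tends to show low jitter and high burstiness) and a stock classifier from scikit-learn, purely to make the classification framing concrete.

```python
# Covert-channel detection framed as binary classification. Features,
# distributions, and classifier choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical per-VM window features: [mean inter-access gap, gap variance, burstiness]
benign = rng.normal([10.0, 2.0, 0.3], 1.0, size=(500, 3))
covert = rng.normal([10.0, 0.2, 0.9], 1.0, size=(500, 3))  # low jitter, bursty

X = np.vstack([benign, covert])
y = np.array([0] * 500 + [1] * 500)          # 0 = benign, 1 = covert channel

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```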
Bekeneva, Ya.; Shipilov, N.; Borisenko, K.; Shorov, A., "Simulation of DDoS-attacks and Protection Mechanisms Against Them," Young Researchers in Electrical and Electronic Engineering Conference (EIConRusNW), 2015 IEEE NW Russia, pp. 49-55, 2-4 Feb. 2015. doi: 10.1109/EIConRusNW.2015.7102230
Abstract: Distributed Denial of Service (DDoS) attacks have become a major threat to current networks. This article provides an overview of existing DDoS attack generation tools and defense methods against them. The main difficulty in exploring DDoS attack features with such tools is the need to stand up a huge real network and make extensive preparations to run tests with these tools. We provide a novel system for studying different DDoS attacks and counterattack technologies in a virtual network. The system architecture and interface are shown, attack simulation scenarios are described, and test results are collected, analyzed, and presented.
Keywords: Filtering; DDoS; Egress Filtering; INET; NTP attack; OMNeT++; ReaSE; SYN-flooding; multi-level topology; network security; Ingress Filtering; simulation; virtual machine; virtual network (ID#: 15-5345)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7102230&isnumber=7102217
Marnerides, A.K.; Spachos, P.; Chatzimisios, P.; Mauthe, A.U., "Malware Detection in the Cloud under Ensemble Empirical Mode Decomposition," Computing, Networking and Communications (ICNC), 2015 International Conference on, pp. 82-88, 16-19 Feb. 2015. doi: 10.1109/ICCNC.2015.7069320
Abstract: Cloud networks underpin most of today's socio-economic Information and Communication Technology (ICT) environments due to their intrinsic capabilities such as elasticity and service transparency. Undoubtedly, this increased dependence of numerous always-on services on the cloud is also subject to a number of security threats. An emerging critical aspect is related to the adequate identification and detection of malware. In the majority of cases, malware is the first building block for larger security threats such as distributed denial of service (DDoS) attacks; thus its immediate detection is of crucial importance. In this paper we introduce a malware detection technique based on Ensemble Empirical Mode Decomposition (E-EMD), which is performed at the hypervisor level and jointly considers system and network information from every virtual machine (VM). Under two pragmatic cloud-specific scenarios instrumented in our controlled experimental testbed, we show that our proposed technique can reach detection accuracy rates over 90% for a range of malware samples. In parallel we demonstrate the superiority of the introduced approach in comparison with a covariance-based anomaly detection technique that has been broadly used in previous studies. Consequently, we argue that our presented scheme provides a promising foundation for the efficient detection of malware in modern virtualized cloud environments.
Keywords: cloud computing; computer network security; invasive software; virtual machines; DDoS; E-EMD; cloud networks; covariance-based anomaly detection technique; distributed denial of service attacks; elasticity; ensemble empirical mode decomposition; malware detection; pragmatic cloud-specific scenarios; security threats; service transparency; socio-economical information communication technology environments; virtual machine; Accuracy; Empirical mode decomposition; Information security; Malware; Measurement; Virtual machine monitors; Anomaly Detection; Cloud computing; Empirical Mode Decomposition; Malware Detection (ID#: 15-5346)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069320&isnumber=7069279
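To make the E-EMD idea concrete: decompose a per-VM measurement series into intrinsic mode functions (IMFs), summarize each mode's energy, and flag VMs whose energy profile drifts far from a clean baseline. The sketch below uses the third-party PyEMD package (pip install EMD-signal); the signals, drift metric, and threshold are invented, and this is emphatically not the authors' pipeline.

```python
# Hedged sketch of ensemble-EMD-based anomaly scoring; not the paper's method.
import numpy as np
from PyEMD import EEMD  # from the EMD-signal package

def energy_profile(signal: np.ndarray) -> np.ndarray:
    """Decompose into IMFs and return normalized energy per mode."""
    imfs = EEMD().eemd(signal)
    e = np.array([np.sum(imf ** 2) for imf in imfs])
    return e / e.sum()

t = np.linspace(0, 1, 512)
baseline = energy_profile(np.sin(8 * np.pi * t) + 0.1 * np.random.randn(512))
suspect  = energy_profile(np.sin(8 * np.pi * t) + 0.8 * np.random.randn(512))

# IMF counts can differ between decompositions; compare the shared prefix.
k = min(len(baseline), len(suspect))
drift = np.abs(baseline[:k] - suspect[:k]).sum()
print("anomalous" if drift > 0.2 else "normal")  # 0.2 is an illustrative threshold
```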
Kanstren, Teemu; Lehtonen, Sami; Savola, Reijo; Kukkohovi, Hilkka; Hatonen, Kimmo, "Architecture for High Confidence Cloud Security Monitoring," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 195-200, 9-13 March 2015. doi: 10.1109/IC2E.2015.21
Abstract: Operational security assurance of a networked system requires providing constant and up-to-date evidence of its operational state. In a cloud-based environment we deploy our services as virtual guests running on external hosts. As this environment is not under our full control, we have to find ways to provide assurance that the security information provided from this environment is accurate, and that our software is running in the expected environment. In this paper, we present an architecture for providing increased confidence in measurements of such cloud-based deployments. The architecture is based on a set of deployed measurement probes and trusted platform modules (TPMs) across both the host infrastructure and guest virtual machines. The TPMs are used to verify the integrity of the probes and the measurements they provide. This allows us to ensure that the system is running in the expected environment, the monitoring probes have not been tampered with, and the integrity of the measurement data provided is maintained. Overall this gives us a basis for increased confidence in the security of running parts of our system in an external cloud-based environment.
Keywords: Computer architecture; Cryptography; Monitoring; Probes; Servers; Virtual machining; TPM; cloud; monitoring; secure element; security assurance (ID#: 15-5347)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092917&isnumber=7092808
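The probe-integrity check rests on standard TPM measurement semantics: each measured component extends a platform configuration register (PCR) as PCR = H(PCR || H(component)), and a verifier recomputes the chain against known-good values. The following is a from-scratch illustration of that extend-and-verify logic only; real deployments rely on signed TPM quotes, which this sketch does not model, and the component names are invented.

```python
# Illustration of TPM-style PCR extend-and-verify (not a TPM interface).
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """PCR extend: new PCR = SHA-256(old PCR || SHA-256(component))."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

def measure_chain(components) -> bytes:
    pcr = b"\x00" * 32                      # PCR starts zeroed at boot
    for c in components:
        pcr = extend(pcr, c)
    return pcr

# Verifier's known-good ("golden") value for the probe stack.
golden = measure_chain([b"probe-binary-v1", b"probe-config-v1"])

# Host reports its measured chain; any tampered component changes the PCR.
reported = measure_chain([b"probe-binary-v1", b"probe-config-TAMPERED"])
print("probe trusted:", reported == golden)  # False -> reject its measurements
```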
Kashif, U.A.; Memon, Z.A.; Balouch, A.R.; Chandio, J.A., "Distributed Trust Protocol for IaaS Cloud Computing," Applied Sciences and Technology (IBCAST), 2015 12th International Bhurban Conference on, pp. 275-279, 13-17 Jan. 2015. doi: 10.1109/IBCAST.2015.7058516
Abstract: Due to the economic benefits of cloud computing, consumers have rushed to adopt it, but security concerns have been raised as well. These concerns create a trust issue in adopting cloud computing: enterprises adopting the cloud no longer have control over the data, applications, and other computing resources outsourced to the cloud provider. In this paper we propose a novel technique that does not leave the consumer alone in the cloud environment. First, we present a theoretical analysis of selected state-of-the-art techniques and identify issues in IaaS cloud computing. Second, we propose a Distributed Trust Protocol for IaaS cloud computing to mitigate the trust issue between cloud consumer and provider. Our protocol is distributed in nature, letting the consumer check the integrity of the cloud computing platform residing on the provider's premises. We follow the rule of separating security duties between the consumer's and provider's premises and let the consumer be the actual owner of the platform. In our protocol, the user VM hosted in the IaaS cloud uses a trusted boot process following the Trusted Computing Group (TCG) specification and utilizing the consumer's Trusted Platform Module (TPM) chip. The protocol targets Infrastructure as a Service (IaaS), the lowest service delivery model of cloud computing.
Keywords: cloud computing; formal specification; security of data; trusted computing; virtual machines; IaaS cloud computing; Infrastructure as a Service; TCG specification; TPM chip; Trusted Computing Group; cloud computing platform integrity checking; cloud consumer; cloud environment; cloud provider; computing resources; distributed trust protocol; economic benefit; security concern; security duty separation; service delivery model; trust issue mitigation; trusted boot process; trusted platform module chip; user VM; Hardware; Information systems; Security; Virtual machine monitors; Trusted cloud computing; cloud computing; cloud security and trust; trusted computing; virtualization (ID#: 15-5348)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7058516&isnumber=7058466
Meera, G.; Geethakumari, G., "A Provenance Auditing Framework For Cloud Computing Systems," Signal Processing, Informatics, Communication and Energy Systems (SPICES), 2015 IEEE International Conference on, pp. 1-5, 19-21 Feb. 2015. doi: 10.1109/SPICES.2015.7091427
Abstract: Cloud computing is a service-oriented paradigm that aims at sharing resources among a massive number of tenants and users. The sharing facility it provides, coupled with the sheer number of users, makes cloud environments susceptible to major security risks. Hence, security and auditing of cloud systems is of great relevance. Provenance is a metadata history of objects which aids in verifiability, accountability, and lineage tracking. Incorporating provenance into cloud systems can help in fault detection. This paper proposes a framework which aims at performing secure provenance audits of clouds across applications and multiple guest operating systems. For integrity preservation and verification, we use established cryptographic techniques. We look at it from the cloud service providers' perspective, as improving cloud security can result in better trust relations with customers.
Keywords: auditing; cloud computing; cryptography; data integrity; fault diagnosis; meta data; resource allocation; service-oriented architecture; trusted computing; accountability; cloud computing systems; cloud environments; cloud security; cloud service providers; cryptographic techniques; fault detection; integrity preservation; integrity verification; lineage tracking; metadata history; operating systems; provenance auditing framework; resource sharing; security risks; service oriented paradigm; sharing facility; trust relations; verifiability; Cloud computing; Cryptography; Digital forensics; Monitoring; Virtual machining; Auditing; Cloud computing; Provenance (ID#: 15-5349)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7091427&isnumber=7091354
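One established way to make a provenance log tamper-evident with the kind of standard cryptographic techniques the paper mentions is to chain each record to its predecessor's authentication tag and MAC the result, so any edit or deletion breaks the chain. The field names, key handling, and record layout below are our own illustrative choices, not the paper's format.

```python
# Hash-chained, MAC-authenticated provenance log sketch (illustrative only).
import hashlib, hmac, json

AUDIT_KEY = b"auditor-held key"   # hypothetical key kept off the audited host

def append(log: list, record: dict) -> None:
    """Each entry embeds the previous entry's MAC, forming a chain."""
    prev = log[-1]["mac"] if log else "0" * 64
    body = json.dumps({"prev": prev, **record}, sort_keys=True)
    mac = hmac.new(AUDIT_KEY, body.encode(), hashlib.sha256).hexdigest()
    log.append({"body": body, "mac": mac})

def verify(log: list) -> bool:
    prev = "0" * 64
    for entry in log:
        if hmac.new(AUDIT_KEY, entry["body"].encode(),
                    hashlib.sha256).hexdigest() != entry["mac"]:
            return False                      # entry altered
        if json.loads(entry["body"])["prev"] != prev:
            return False                      # chain broken (reorder/deletion)
        prev = entry["mac"]
    return True

log = []
append(log, {"vm": "guest-3", "op": "read",  "obj": "/data/x"})
append(log, {"vm": "guest-3", "op": "write", "obj": "/data/y"})
print(verify(log))                            # True
log[0]["body"] = log[0]["body"].replace("read", "write")
print(verify(log))                            # False -> tampering detected
```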
Rawat, S.; Dhruv, B.; Kumar, P.; Mittal, P., "Dissection and Proposal of Multitudinal Security Threats and Menace in Cloud Computing," Computational Intelligence & Communication Technology (CICT), 2015 IEEE International Conference on, pp. 123-128, 13-14 Feb. 2015. doi: 10.1109/CICT.2015.130
Abstract: Cloud computing has emerged as a remarkable field in today's IT world. It removes the impediments of conventional computing technology and allows work and storage of data over the Internet itself. It has allowed IT workers to expand their business over the Internet, boosting capabilities and potential in the business field. But the question of security remains unanswered, as IT firms have not yet accepted the cloud completely. Business firms still fear deploying their enterprises solely on the cloud due to security issues. In this paper, we study issues in the cloud service delivery models and the various security issues faced in cloud computing. Based on this detailed study, we further provide recommendations that could be followed to overcome the security concerns in the cloud.
Keywords: cloud computing; security of data; IT firms; IT workers; IT world; Internet; business field; cloud computing; cloud service delivery models; multitudinal security menace; multitudinal security threat dissection; multitudinal security threat proposal; Business; Cloud computing; Computational modeling; Security; Software as a service; Virtual machine monitors; Cloud Computing; Cloud Delivery Models; Data Security Threats and Risks (ID#: 15-5350)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7078680&isnumber=7078645
Xianqing Yu; Ning, Peng; Vouk, Mladen A., "Enhancing Security Of Hadoop In A Public Cloud," Information and Communication Systems (ICICS), 2015 6th International Conference on, pp. 38-43, 7-9 April 2015. doi: 10.1109/IACS.2015.7103198
Abstract: Hadoop has become increasingly popular as it rapidly processes data in parallel, and cloud computing gives cloud users reliability, flexibility, scalability, elasticity, and cost savings, so deploying Hadoop in the cloud can benefit Hadoop users. Our evaluation shows that various internal cloud attacks can bypass current Hadoop security mechanisms, and that compromised Hadoop components can be used to threaten Hadoop as a whole. It is urgent to improve compromise resilience so that Hadoop can maintain a relatively high security level even when parts of it are compromised. Hadoop has two vulnerabilities that dramatically impact its compromise resilience: the overloaded authentication key and the lack of fine-grained access control at the data access level. We developed a security enhancement for public cloud-based Hadoop, named SEHadoop, to improve compromise resilience by enhancing isolation among Hadoop components and enforcing least privilege for Hadoop processes. We have implemented the SEHadoop model and demonstrated that SEHadoop fixes the above vulnerabilities with minimal or no run-time overhead, and effectively resists related attacks.
Keywords: Access control; Authentication; Cloud computing; Containers; Resilience; Virtual machine monitors; Public cloud; compromise resilience; lack of fine-grained access control; least access privilege; overloaded authentication key; security (ID#: 15-5351)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7103198&isnumber=7103173
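The SEHadoop fixes are not given in code form here; the sketch below illustrates only the general principle behind replacing one overloaded key, namely issuing narrowly scoped, expiring per-block tokens so that a stolen credential exposes a single block for a short time. The token format, master key, and field names are invented.

```python
# Scoped, expiring access-token sketch (illustrative; not SEHadoop's design).
import hashlib, hmac, time

MASTER_KEY = b"namenode master key"   # hypothetical per-cluster secret

def issue_token(user: str, block_id: str, ttl: int = 300) -> str:
    """Token grants one user read access to one block, for ttl seconds."""
    exp = int(time.time()) + ttl
    msg = f"{user}|{block_id}|read|{exp}"
    tag = hmac.new(MASTER_KEY, msg.encode(), hashlib.sha256).hexdigest()
    return f"{msg}|{tag}"

def check_token(token: str, user: str, block_id: str) -> bool:
    msg, _, tag = token.rpartition("|")
    u, b, _, exp = msg.split("|")
    if hmac.new(MASTER_KEY, msg.encode(), hashlib.sha256).hexdigest() != tag:
        return False                       # forged or altered token
    return u == user and b == block_id and int(exp) > time.time()

t = issue_token("alice", "blk_001")
print(check_token(t, "alice", "blk_001"))  # True
print(check_token(t, "alice", "blk_002"))  # False: token is block-scoped
```

Scoping the credential limits the blast radius of a compromised component, which is exactly the compromise-resilience property the paper argues Hadoop's overloaded key lacks.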
Pasquier, Thomas F.J.-M.; Singh, Jatinder; Bacon, Jean, "Information Flow Control for Strong Protection with Flexible Sharing in PaaS," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 279-282, 9-13 March 2015. doi: 10.1109/IC2E.2015.64
Abstract: The need to share data across applications is becoming increasingly evident. Current cloud isolation mechanisms focus solely on protection, such as containers that isolate at the OS level and virtual machines that isolate through the hypervisor. However, by focusing rigidly on protection, these approaches do not provide for controlled sharing. This paper shows how Information Flow Control (IFC) offers a flexible alternative. As a data-centric mechanism, it enables strong isolation when required while providing continuous, fine-grained control of the data being shared. An IFC-enabled cloud platform would ensure that policies are enforced as data flows across all applications, without requiring any special sharing mechanisms.
Keywords: Cloud computing; Computers; Containers; Context; Kernel; Security (ID#: 15-5352)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092930&isnumber=7092808
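The IFC guarantee both of these IC2E papers build on can be stated in a few lines: a flow from source to target is permitted only if the target's secrecy labels contain the source's (so secrets cannot leak downward) and the source's integrity labels contain the target's (so low-integrity data cannot contaminate trusted components). A minimal sketch, with label names of our own invention:

```python
# Minimal IFC label check; labels and entities are illustrative, not the
# paper's model, which enforces this continuously at the kernel/platform level.

def flow_allowed(source: dict, target: dict) -> bool:
    """Secrecy may only grow along a flow; integrity may only shrink."""
    return (source["secrecy"] <= target["secrecy"] and
            target["integrity"] <= source["integrity"])

medical_app = {"secrecy": {"medical"},          "integrity": {"vetted"}}
analytics   = {"secrecy": {"medical", "stats"}, "integrity": {"vetted"}}
ad_service  = {"secrecy": set(),                "integrity": set()}

print(flow_allowed(medical_app, analytics))   # True: labels are preserved
print(flow_allowed(medical_app, ad_service))  # False: would leak 'medical'
```

The companion paper below applies the same check inside messaging middleware, so the policy travels with the data rather than living in each application's logic.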
Singh, Jatinder; Pasquier, Thomas F.J.-M.; Bacon, Jean; Eyers, David, "Integrating Messaging Middleware and Information Flow Control," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 54-59, 9-13 March 2015. doi: 10.1109/IC2E.2015.13
Abstract: Security is an ongoing challenge in cloud computing. Currently, cloud consumers have few mechanisms for managing their data within the cloud provider's infrastructure. Information Flow Control (IFC) involves attaching labels to data to govern its flow throughout a system. We have worked on kernel-level IFC enforcement to protect data flows within a virtual machine (VM). This paper makes the case for, and demonstrates the feasibility of, an IFC-enabled messaging middleware that enforces IFC within and across applications, containers, VMs, and hosts. We detail how such middleware can integrate with local (kernel) enforcement mechanisms, and highlight the benefits of separating data management policy from application/service logic.
Keywords: Cloud computing; Context; Kernel; Runtime; Security; Servers; Information Flow Control; cloud computing; distributed systems; middleware; policy; security (ID#: 15-5353)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092899&isnumber=7092808
Yang, Chao-Tung; Lien, Wei-Hsiang; Shen, Yu-Chuan; Leu, Fang-Yi, "Implementation of a Software-Defined Storage Service with Heterogeneous Storage Technologies," Advanced Information Networking and Applications Workshops (WAINA), 2015 IEEE 29th International Conference on, pp. 102-107, 24-27 March 2015. doi: 10.1109/WAINA.2015.50
Abstract: Software-defined storage (SDS) is becoming more and more popular, and several companies have announced products, but a generic standard has still not appeared: most products are appropriate only for their own devices and can integrate just a few storage systems. In this thesis, we use OpenStack to build and manage the cloud service, and use software to integrate storage resources including Hadoop HDFS, Ceph, and Swift on OpenStack to realize the concept of SDS. The software can integrate different storage devices to provide an integrated storage array and build a virtual storage pool, so that users do not feel restrained by the storage devices. Our software platform also provides a web interface for managers to arrange the storage space and administer users and security settings. For allocation of the storage resources, we define a policy and assign the specific storage array to the machine that requests the resource according to that policy.
Keywords: Arrays; Companies; Electromagnetic compatibility; Servers; Software; Virtualization; Ceph; Cloud service; HDFS; Hadoop; Software-Defined Storage; Virtualization (ID#: 15-5354)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7096155&isnumber=7096097
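The policy-driven allocation step the abstract describes, assigning a backend from a heterogeneous pool (HDFS, Ceph, Swift) based on the request, reduces to a rule-table lookup. The rule table and request attributes below are an invented example of that dispatch step only, not the thesis's actual policy language.

```python
# Illustrative policy-based backend dispatch for a virtual storage pool.
# Rules and attributes are invented; real SDS policies are far richer.
POLICY = [
    {"min_size_gb": 100, "access": "archive",   "backend": "swift"},
    {"min_size_gb": 0,   "access": "analytics", "backend": "hdfs"},
    {"min_size_gb": 0,   "access": "block",     "backend": "ceph"},
]

def assign_backend(size_gb: int, access: str) -> str:
    """Return the first matching backend; fall back to a default pool."""
    for rule in POLICY:
        if size_gb >= rule["min_size_gb"] and access == rule["access"]:
            return rule["backend"]
    return "ceph"

print(assign_backend(500, "archive"))   # swift
print(assign_backend(10, "analytics"))  # hdfs
```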
Yamaguchi, Hiroshi; Gotaishi, Masahito; Sheu, Phillip C-Y; Tsujii, Shigeo, "Privacy Preserving Data Processing," Advanced Information Networking and Applications (AINA), 2015 IEEE 29th International Conference on, pp. 714-719, 24-27 March 2015. doi: 10.1109/AINA.2015.258
Abstract: Data processing functions are expected to be a key issue for knowledge-intensive services in the cloud computing environment. Cloud computing evolved from virtual machine and distributed computing technologies. However, these technologies bring unique privacy and security concerns for customers and service providers, because expertise (knowledge, experience, ideas, etc.) is embedded in the data to be processed. We propose cryptographic protocols that preserve the privacy of users and the confidentiality of the problem-solving servers.
Keywords: Data processing; Indexes; Information retrieval; Security; Servers; Web services; Cloud Computing; Cryptographic Protocol; Privacy; Security (ID#: 15-5355)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7098043&isnumber=7097928
Gimenez Ocano, S.; Ramamurthy, B.; Yong Wang, "Remote Mobile Screen (RMS): An Approach For Secure BYOD Environments," Computing, Networking and Communications (ICNC), 2015 International Conference on, pp. 52-56, 16-19 Feb. 2015. doi: 10.1109/ICCNC.2015.7069314
Abstract: The introduction of bring your own device (BYOD) policy in the corporate world creates benefits for companies as well as job satisfaction for the employee. However, it also creates challenges in terms of security as new vulnerabilities arise. In particular, these challenges include space isolation, data confidentiality, and policy compliance as well as handling the resource constraints of mobile devices and the intrusiveness created by installed applications seeking to perform BYOD functions. We present Remote Mobile Screen (RMS), an approach for secure BYOD environments that addresses all these challenges. In order to achieve this, the enterprise provides the employee with a trusted virtual machine running a mobile operating system, which is located in the enterprise network and to which the employee connects using the mobile BYOD device. We describe our proposed solution and discuss our experimental results. Finally, we discuss advantages and disadvantages of RMS and possible future work.
Keywords: mobile computing; operating systems (computers); security of data; RMS; bring your own device policy; data confidentiality; mobile operating system; policy compliance; remote mobile screen; secure BYOD environments; space isolation; Companies; Computer architecture; Mobile communication; Mobile handsets; Random access memory; Security; Servers; Bring your own device (BYOD); data confidentiality; policy enforcement; security; space isolation; virtualization (ID#: 15-5356)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7069314&isnumber=7069279
Wester, Craig; Engelman, Noel; Smith, Terrence; Odetunde, Kehinde; Anderson, Bob; Reilly, Joe, "The Role Of The SCADA RTU In Today's Substation," Protective Relay Engineers, 2015 68th Annual Conference for, pp. 622-628, March 30-April 2, 2015. doi: 10.1109/CPRE.2015.7102199
Abstract: The interface between Supervisory Control and Data Acquisition (SCADA) functions and Protection and Control (P&C) functions has been blurred since the acceptance and full utilization of microprocessor-based relays. The control, data acquisition, and protection functions have been incorporated into a single Intelligent Electronic Device (IED). In many cases this is a clean, economically sound solution. In some cases, merging the SCADA functions into a protective IED has created operational gaps that need to be addressed. The merger must be balanced so that reliability and redundancy are considered. In addition, it is important to consider how the substation can be operated if a protective relay output is not operational. The merger of SCADA with protection and control has also created jurisdictional challenges, since the SCADA group is a separate organization from the protection and control group. A Human Machine Interface (HMI) is being installed in substations by many utilities for monitoring and control purposes, so it is important to incorporate local HMI functionality in this discussion. This paper will review several distribution and transmission substation designs that merge SCADA and Protection & Control. Each design will be discussed with its advantages and disadvantages. The paper will propose designs that balance SCADA and Protection & Control and include local HMI functionality, IED access, and security.
Keywords: Microprocessors; Protective relaying; Protocols; Reliability; Security; Substations; AAA (Authentication, Authorization, Accounting); ASCII (American Standard Code for Information Interchange); CIP (Critical Infrastructure Protection); Current Transformer (CT); DNP3 (Distributed Network Protocol); HMI (Human Machine Interface); IED (Intelligent Electronic Device); IP (Internet Protocol); Input/Output (I/O); LAN (Local Area Network); NERC (North American Electric Reliability Corporation); NIST (National Institute of Standards and Technology); P&C (Protection & Control); Potential Transformer (PT); RADIUS (Remote Authentication Dial-In User Service); RBAC (Role Based Access Control); RTOS (Real Time Operating System); RTU (Remote Terminal Unit); SCADA (Supervisory Control & Data Acquisition); SEM (Security Event Management); VPN (Virtual Private Network); WAN (Wide Area Network) (ID#: 15-5357)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7102199&isnumber=7102153
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Upcoming Events |
Mark your calendars!
This section features a wide variety of upcoming security-related conferences, workshops, symposiums, competitions, and events happening in the United States and the world. This list also includes several past events with links to proceedings or summaries of the actual activities.
Note: The events may also be found on the SoS Calendar, located by clicking the 'Calendar' tab on the left-hand navigation bar.
International Conference on Security of Smart cities, Industrial Control System and Communications (SSIC 2015)
International Conference on Security of Smart cities, Industrial Control System and Communications (SSIC 2015) is the first annual conference in the area of cyber security focusing on industrial control systems, cloud platforms, and smart cities. City and industrial control infrastructures are changing with new interconnected systems for monitoring, control, and automation. The goal of SSIC is to attract cyber security researchers, industry practitioners, policy makers, and users to exchange ideas, techniques, and tools, and to share experience related to all practical and theoretical aspects of communications and network security.
Date: August 5 - 7
Location: Shanghai, China
URL: http://www.ssic-conf.org/2015/quickstart/
SAC Summer School (S3)
In 2015, for the first time, SAC will be preceded by the SAC Summer School (S3). The purpose of S3 is to provide participants with an opportunity to gain in-depth knowledge of specific areas of cryptography related to the current SAC topics by bringing together world-class researchers who will give extended talks in their areas of specialty. S3 is designed to create a focused learning environment that is also relaxed and collaborative. The SAC Summer School is open to all attendees, and may be of particular interest to students, postdocs, and other early researchers.
Date: August 10 - 12
Location: New Brunswick, Canada
URL: http://www.mta.ca/sac2015/s3.html
24th USENIX Security Symposium
The USENIX Security Symposium brings together researchers, practitioners, system administrators, system programmers, and others interested in the latest advances in the security and privacy of computer systems and networks.
Date: August 12 - 14
Location: Washington, D.C.
URL: https://www.usenix.org/conference/usenixsecurity15
5th Annual Cyber Security Training & Technology Forum (CSTTF)
CSTTF is designed to further educate Cyber, Information Assurance, Information Management, Information Technology, and Communications professionals. This will be accomplished through a number of in-depth cyber and technology sessions, as well as hands-on government/industry exhibits and demos. Don't miss this local, educational, and cost-effective cyber and technology event.
Date: August 19 - 20
Location: Colorado Springs, CO
URL: http://www.fbcinc.com/e/csttf/
HTCIA Conference 2015
HTCIA Conference 2015 brings together experts from all over the world to share their latest research and techniques related to cybersecurity, incident response, and computer forensics.
Date: August 30 - September 2
Location: Orlando, FL
URL: http://www.htciaconference.org/
44CON London
44CON London is the UK's largest combined annual Security Conference and Training event. We will have a fully dedicated conference facility, including secure wi-fi with high-bandwidth Internet access, catering, a private bar, and a daily Gin O'Clock break. 44CON London will comprise two main presentation tracks, two workshop tracks, and a mixed presentation/workshop track over the two full days, covering technical topics. The Hidden track is run under the Chatham House Rule and we're not going to tell you about that.
Date: September 9 - 11
Location: London, United Kingdom
URL: http://44con.com/
New York Cyber Security Summit 2015
The Cyber Security Summit, an exclusive C-Suite conference series, connects senior level executives responsible for protecting their companies' critical infrastructures with innovative solution providers and renowned information security experts. This educational and informational forum will focus on educating attendees on how to best protect highly vulnerable business applications and critical infrastructure. Attendees will have the opportunity to meet the nation's leading solution providers and discover the latest products and services for enterprise cyber defense.
Date: September 18
Location: New York, NY
URL: http://cybersummitusa.com/2015-new-york-city/
Global Identity Summit
The Global Identity Summit focuses on identity management solutions for the corporate, defense and homeland security communities.
Date: September 21 - 24
Location: Tampa, FL
URL: http://events.jspargo.com/id15/Public/Enter.aspx
DerbyCon 5.0
Welcome to DerbyCon 5.0 - "Unity". This is the place where security professionals, hobbyists, and anyone interested in security come to hang out. DerbyCon 5.0 will be held September 23-27, 2015 at the Hyatt Regency in downtown Louisville, Kentucky. Training is held on Wednesday and Thursday (September 23rd and 24th) and the conference on Friday, Saturday, and Sunday (September 25th-27th). DerbyCon 4.0 pulled in over 2,000 people with an amazing speaker lineup and a family-like feel. We continue to make the conference better each year and have a ton of new and exciting things planned for this year. Please excuse the website as it is currently under construction and planning for DerbyCon 5.0!
Date: September 23 - 27
Location: Louisville, KY
URL: http://derbycon.com/
World Congress on Internet Security (WorldCIS-2015)
The World Congress on Internet Security (WorldCIS-2015) is technically co-sponsored by the IEEE UK/RI Computer Chapter. WorldCIS is an international refereed conference dedicated to the advancement of the theory and practical implementation of security on the Internet and computer networks. Properly securing the Internet and computer networks, protecting them against emerging threats and vulnerabilities, and sustaining privacy and trust have been a key focus of research. WorldCIS aims to provide a highly professional and comparative academic research forum that promotes collaborative excellence between academia and industry.
Date: October 19 - 21
Location: Dublin, Ireland
URL: http://www.worldcis.org/
(ID#:15-5932)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.