Science of Security (SoS) Newsletter (2014 - Issue 4)



Each issue of the SoS Newsletter highlights achievements in current research, as conducted by various global members of the Science of Security (SoS) community. All presented materials are publicly available, and may link to the original work or web page for the respective program. The SoS Newsletter aims to showcase the great deal of exciting work going on in the security community, and hopes to serve as a portal between colleagues, research projects, and opportunities.

Please feel free to click on any section of the Newsletter, which will bring you to its corresponding subsection:

General Topics of Interest

General Topics of Interest reflects today's most popularly discussed challenges and issues in the Cybersecurity space. GToI includes news items related to Cybersecurity, updated information regarding academic SoS research, interdisciplinary SoS research, profiles on leading researchers in the field of SoS, and global research being conducted on related topics.

Publications

The Publications of Interest provides available abstracts and links for suggested academic and industry literature discussing specific topics and research problems in the field of SoS. Please check back regularly for new information, or sign up for the CPSVO-SoS Mailing List.

Table of Contents

(ID#:14-2284)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.



In the News



This section features topical, current news items of interest to the international security community. These articles and highlights are selected from various popular science and security magazines, newspapers, and online sources.


  • "Day of commercially available quantum encryption nears", Homeland Security News Wire, 11 September 2014. Los Alamos National Laboratory and Whitewood Encryption Systems, Inc. have teamed up in LANL's biggest IT agreement to date to try to bring quantum encryption to the public. Quantum technologies are able to produce truly random cryptographic keys at an unprecedented rate, which is one of many properties of quantum computing that can make encryption even stronger. (ID: 14-50099) See http://www.homelandsecuritynewswire.com/dr20140911-day-of-commercially-available-quantum-encryption-nears
  • "Internet's security bug tracker faces its 'Y2K' moment", CNET, 16 September 2014. Common Vulnerabilities and Exposures, or CVE for short, is a list of security bugs used to keep track of vulnerabilities like Heartbleed. Ever since its creation in 1999, CVE has given each bug its own 4-digit identifier, but for the first time in its 15-year history, the number of vulnerabilities is set to exceed 9,999. Suddenly changing the standard to five digits could potentially create a Y2K-type scenario. (ID: 14-50100) See http://www.cnet.com/news/internets-security-bug-tracker-faces-its-y2k-moment/
  • "DOD communications: Bringing it all together", FCW, 15 September 2014. The Department of Defense is awaiting NSA approval for a move to bring "unified capabilities" (UC), a collection of "voice, video and instant messaging", to the cloud. In doing so, the DoD hopes to take advantage of the private cloud, with help from industry leaders, to improve internal communications in the Department. (ID: 14-50101) See http://fcw.com/articles/2014/09/15/rfp-for-unified-capabilities.aspx
  • "SSL remains security weakness despite latest reinforcements", GCN, 12 September 2014. SSL is used to ensure server-to-client encryption, but a wide range of security weaknesses have become a growing cause for concern. The OpenSSL Project and security researchers have been working to develop tools and methods to find and patch weaknesses. (ID: 14-50102) See http://gcn.com/blogs/cybereye/2014/09/ssl-weakness.aspx?admgarea=TC_SecCybersSec
  • "DOE, Google back quantum computing research", GCN, 10 September 2014. The Department of Energy is backing several private and academic research groups working to develop quantum encryption technology to protect America's critical infrastructure, while a Google Quantum Artificial Intelligence team has partnered with a team at UC Santa Barbara to develop quantum computing and quantum cryptography technologies. (ID: 14-50103) See http://gcn.com/blogs/pulse/2014/09/doe-google-quantum.aspx?admgarea=TC_SecCybersSec
  • "Confidence Wanes in Enterprise Ability to Detect a Network Attack", Infosecurity Magazine, 17 September 2014. According to surveys by Lieberman Software, many IT departments and security professionals are losing faith in their ability to detect and prevent intrusions into their networks, perhaps due to the increase in cyber-attack frequency. Fear of state-sponsored attacks is also high among IT and security professionals. (ID: 14-50104) See http://www.infosecurity-magazine.com/news/confidence-ability-to-detect-a/
  • "WikiLeaks Releases FinFisher Surveillance Spyware to the Masses", Infosecurity Magazine, 16 September 2014. Wikileaks has released FinFisher, an intrusion system that enables interception of communications from popular operating systems, to the public. According to Julian Assange, "FinFisher continues to operate brazenly from Germany selling weaponized surveillance malware to some of the most abusive regimes in the world." (ID: 14-50105) See http://www.infosecurity-magazine.com/news/wikileaks-releases-finfisher/
  • "'Massively Distributed' Citadel Trojan Targets Middle East Petrochemical Giants", Infosecurity Magazine, 16 September 2014. The banking malware Citadel, which was discovered in 2012 as a tool for theft of banking credentials, has been re-purposed to attack petrochemical companies in the Middle East. APTs like Citadel often use tactics such as HTML injection, remote control, and keylogging. (ID: 14-50106) See http://www.infosecurity-magazine.com/news/citadel-trojan-targets-middle-east/
  • "High-Risk flaws affect the NOAA Satellite System JPSS", Cyber Defense Magazine, 15 September 2014. A Department of Commerce Office of Inspector General (OIG) audit found that the National Oceanic and Atmospheric Administration's (NOAA) Joint Polar Satellite System (JPSS) ground system has over 23,000 high-risk vulnerabilities, including missing security patches and vulnerabilities discovered through penetration testing. Though a slight decrease from the numbers found in recent quarters, this figure is still dramatically higher than that of 2012 Q1, which was about 14,500. (ID: 14-50107) See http://www.cyberdefensemagazine.com/high-risk-flaws-affect-the-noaa-satellite-system-jpss/
  • "'Spike' toolkit scales multi-vector DDoS with Windows, Linux hosts", SC Magazine, 24 September 2014. A newly-discovered toolkit named "Spike", which targets devices running Windows, Linux, and the ARM instruction set architecture, is capable of communicating and executing commands to perform DDoS attacks. The use of several DDoS payloads is a notable characteristic of Spike, as well as Spike's ability to target multiple platforms. (ID: 14-50110) See http://www.scmagazine.com/spike-ddos-toolkit-discovered/article/373501/
  • "Mozilla plans to phase out support of SHA-1 hash algorithm", SC Magazine, 24 September 2014. Over the next two years, Mozilla will phase out trust in SHA-1-based security certificates. Firefox, Mozilla's popular web browser, will display warnings when SHA-1 certificates are encountered. Though SHA-1 has been in service for many years, advances in methods of attack and in computing power (such as quantum computing) could make SHA-1 obsolete in a few years. (ID: 14-50111) See http://www.scmagazine.com/mozilla-plans-to-phase-out-support-of-sha-1-hash-algorithm/article/373487/
  • "IT giants Google and Apple enable encryption by default", SC Magazine, 24 September 2014. Apple and Google both announced moves to increase security for their customers through default encryption, as part of the recent trend towards UCE (User Controlled Encryption). UCE leaves the encryption keys in the user's hands, which means that service providers and companies like Google and Apple would be unable to help law enforcement access encrypted information. (ID: 14-50112) See http://www.cyberdefensemagazine.com/it-giants-google-and-apple-enable-encryption-by-default/
  • "Jimmy John's has confirmed breach of POS systems at 216 stores", Cyber Defense Magazine, 26 September 2014. Sandwich chain Jimmy John's announced that POS systems at 216 of its locations suffered a data breach. According to the company, an intruder was able to steal payment card data over the course of about three months. Vulnerabilities in the software of the POS systems are suspected as the cause of the breach. (ID: 14-50113) See http://www.cyberdefensemagazine.com/jimmy-johns-has-confirmed-breach-of-pos-systems-at-216-stores/
  • "Chinese hackers hit several US contractors", Cyber Defense Magazine, 19 September 2014. The Senate Armed Services Committee determined that an undisclosed number of companies working as contractors for the US Transportation Command (TRANSCOM) suffered attacks from state-led Chinese APTs (Advanced Persistent Threats). The report in which the attacks were disclosed points to the 2015 defense spending bill, as well as better information sharing in government, as an important part of combating APTs. (ID: 14-50114) See http://www.cyberdefensemagazine.com/chinese-hackers-hit-several-us-contractors/
  • "GM Appoints First Cybersecurity Chief", Infosecurity Magazine, 26 September 2014. With sights set on potentially becoming a part of the small but growing market for driver-less cars, General Motors appointed its first cybersecurity chief, Jeffrey Massimilla. GM has plans to produce a car that can communicate with other cars and pieces of highway infrastructure, like traffic lights, within a few years. Putting lives in the hands of electronic systems like driver-less cars creates serious security implications and concerns. (ID: 14-50115) See http://www.infosecurity-magazine.com/news/gm-appoints-first-cybersecurity/
  • "Apple's New iPhone 6 TouchID Hacked, as Usual", Infosecurity Magazine, 26 September 2014. Security researcher Marc Rogers reported that he was able to fool the iPhone 6's fingerprint scanner. The process Rogers describes would require a skilled criminal and a good copy of the fingerprint, but many would still consider this unacceptable, considering the iPhone 5 was plagued with the exact same security flaw. (ID: 14-50116) See http://www.infosecurity-magazine.com/news/apples-new-iphone-6-touchid-hacked/
  • "Shellshock: Internet in Peril Again as 'Heartbleed 2.0' Bash Flaw Strikes", Infosecurity Magazine, 25 September 2014. A security flaw found in the Bourne Again Shell (Bash) received a severity rating of 10 out of 10 from NIST after it was discovered that the flaw could allow hackers to remotely execute code on a server and thereby steal information and disrupt networks. The flaw affects Linux and UNIX systems, which account for a large portion of the world's web servers. (ID: 14-50117) See http://www.infosecurity-magazine.com/news/internet-peril-heartbleed-20-bash/
  • "Feds Issue Red-Flag Advisory on Escalating Insider Threats", Infosecurity Magazine, 24 September 2014. The Department of Homeland Security and the FBI concluded that insider threats are posing an increasingly grave threat to government and businesses. "Disgruntled" employees and former employees can cause serious harm by stealing data and software, or even sabotaging systems, even if they have been fired or otherwise left the organization. (ID: 14-50118) See http://www.infosecurity-magazine.com/news/feds-issue-redflag-advisory-on/
  • "FBI director worries about encryption on smartphones", Computerworld, 25 September 2014. FBI Director James Comey expressed his concerns over the move by tech giants to implement encryption by default, and allow user-based encryption on mobile devices. Though these measures would allow even better data security for users, it also means that law enforcement would have a much harder time obtaining evidence, even with a warrant. (ID: 14-50119) See http://www.computerworld.com/article/2688095/fbi-director-worries-about-encryption-on-smartphones.html
  • "AT&T offers secure links to IBM SoftLayer cloud", GCN, 26 September 2014. IBM and AT&T announced a new cooperative service that will allow customers to utilize IBM's SoftLayer cloud through the use of AT&T's NetBond secure VPN. In doing so, both corporations hope to make it easier for customers to securely use the cloud for their IT needs. (ID: 14-50120) See http://gcn.com/articles/2014/09/26/ibm-att-cloud.aspx?admgarea=TC_SecCybersSec
  • "Passwords vs. biometrics", GCN, 19 September 2014. With the rate of data breaches rising, being able to properly identify personnel and users is becoming an increasingly important factor in the world of cybersecurity. With traditional methods of identification (passwords, namely) being susceptible to theft and other weaknesses, biometric identification is becoming increasingly appealing as a better security alternative. Biometrics relies on the "close enough" principle and can be susceptible to spoofing, however, so it is far from perfect. (ID: 14-50122) See http://gcn.com/blogs/cybereye/2014/09/passwords-vs-biometrics.aspx?admgarea=TC_SecCybersSec
  • "5 key IT bills still pending in Congress", FCW, 26 September 2014. A few significant IT bills are awaiting approval by the U.S. Congress, including The Federal IT Acquisition Reform Act, The Reforming Federal Procurement of IT Act, The Electronic Communications Privacy Act Amendments Act, The Cybersecurity Information Sharing Act of 2014, and The Federal Spectrum Incentive Act. (ID: 14-50123) See http://fcw.com/articles/2014/09/26/5-key-it-bills-still-pending-in-congress.aspx
  • "New approach to computer security: Wrist-bracelet", Homeland Security Newswire, 23 September 2014. A new solution for user identification, called Zero-Effort Bilateral Recurring Authentication (ZEBRA), requires users to wear a bracelet that monitors the wearer with an accelerometer and gyroscope and logs them out when they leave a terminal. Continuous monitoring of users could help improve security in areas such as healthcare by replacing current systems that log users out only after periods of inactivity. (ID: 14-50124) See http://www.homelandsecuritynewswire.com/dr20140923-new-approach-to-computer-security-wristbracelet
  • "In Cyberspace, Anonymity and Privacy are Not the Same", Security Week, 26 September 2014 (Opinion). When it comes to cybersecurity, it is important to recognize the relationship between anonymity and privacy. Making this distinction, along with promoting information sharing, is among the goals embodied in bills currently under consideration in Congress, namely The Cybersecurity Information Sharing Act of 2014 and The National Cybersecurity and Critical Infrastructure Protection Act of 2014. (ID: 14-50125) See http://www.securityweek.com/cyberspace-anonymity-and-privacy-are-not-same
  • "ISIS Cyber Ops: Empty Threat or Reality?", Security Week, 25 September 2014 (Opinion). Social media has always been an important tool for extremist and terrorist groups. ISIS, like others before it, uses platforms like Facebook to attract people to its cause, raise funds, and spread its message. However, more sophisticated cyber tactics, like hacking, could be used to seriously harm U.S. critical infrastructure. If ISIS follows the example of groups like the Syrian Electronic Army, terrorism-based cyber attacks from ISIS could become a reality. (ID: 14-50126) See http://www.securityweek.com/isis-cyber-ops-empty-threat-or-reality
  • "The Security Revolution Will Be Automated", Security Week, 22 September 2014 (Opinion). As computer systems develop and evolve to allow increased functionality, lower costs, and increased productivity, the vectors through which cyberattacks can occur increase as well. Cybercrime and the software it employs evolve to continually test these new systems, with automated attacks becoming an increasingly significant part of this. It is the job of security professionals to combat this threat. (ID: 14-50127) See http://www.securityweek.com/security-insights-defending-against-automated-threats
  • "Taiwan probes Xiaomi on cyber security", Reuters, 24 September 2014. Upon learning of reports that smartphones made by Chinese smartphone company Xiaomi Inc. automatically send user data back to servers in China, the Taiwanese Government began independent tests on the phones to determine whether they are a security threat or not. In the recent past, China has been accused of state-sponsored cybersecurity threats and espionage. (ID: 14-50130) See http://www.reuters.com/article/2014/09/24/us-taiwan-xiaomi-cybersecurity-idUSKCN0HJ08Z20140924
  • "Bug Bounty Programs - The Good and the Bad", Information Security Buzz, 23 September 2014. Some might argue that bug bounty programs not only help reduce the occurrence of successful cyber attacks, but can also be used in favor of the company in legal disputes after a breach. Poorly implemented bug bounty programs, however, have the potential to cause more harm than good, argues High-Tech Bridge CEO Ilia Kolochenko. (ID: 14-50132) See http://www.informationsecuritybuzz.com/bug-bounty-programs-good-bad/
  • "Wear the Danger: Security Risks Facing Wearable Connected Devices", Information Security Buzz, 19 September 2014. Wearable devices in the Internet of Things (IoT) are very convenient for the user, but can also pose grave security risks. Researchers from Kaspersky Lab were able to find several vulnerabilities in devices like Google Glass and Galaxy Gear 2, which could be exploited for MiTM attacks and remote spying. (ID: 14-50133) See http://www.informationsecuritybuzz.com/wear-danger-security-risks-facing-wearable-connected-devices/
  • "Professor says Google search, not hacking, yielded medical info", SC Magazine, 29 August 2014. Upon being accused of hacking into a medical center's server and exposing sensitive information to a class of students, professor Sam Bowne of City College San Francisco (CCSF) clarified in an online post that the medical records were found via a simple Google search. According to Bowne, this was not done in front of a class and that the issue was reported to the E.A. Conway Medical Center upon discovery. (ID: 14-50044) See http://www.scmagazine.com/professor-says-google-search-not-hacking-yielded-medical-info/article/368909/
  • "DDoS attacks rally Linux servers", SC Magazine, 04 September 2014. Malware known as IptabLes and IptabLex has been posing a significant threat in mid-2014 by using vulnerabilities on neglected Linux servers to propagate DDoS attacks with "significant size and reach." The malware is unusual in that it appears to originate from Asia, and that it targets Linux systems for such an application. (ID: 14-50045) See http://www.scmagazine.com/ddos-attacks-rally-linux-servers/article/369854/
  • "FBI, Apple investigate celebrity photo hacking incident", SC Magazine, 02 September 2014. The FBI and Apple have both confirmed that they are investigating a hacking incident that led to the release of many "personal photos" from potentially over one hundred celebrities. Though the exact method by which the hacker obtained the photographs is unknown, they are known to have come from Apple's iCloud service. (ID: 14-50047) See http://www.scmagazine.com/fbi-apple-investigate-celebrity-photo-hacking-incident/article/369340/
  • "Hackers Breached HealthCare.Gov Website", Security Magazine, 04 September 2014. In July, a hacker was able to upload malicious code to a Healthcare.gov website. The hacker was not able to obtain sensitive information, however, as s/he was only able to access a server used for testing code for the website, as opposed to "more sensitive parts of the website that had better security protections." (ID: 14-50048) See http://www.securitymagazine.com/articles/85795-hackers-breached-healthcaregov-website
  • "Home Depot Reports Credit Card Security Breach", Security Magazine, 02 September 2014. Upon discovering stolen credit and debit card credentials on the underground market, several banks have contacted Home Depot to report evidence that the hardware retailer might be the source of a new round of stolen payment cards. The thieves appear to be the same group of Russian/Ukrainian hackers who were responsible for other recent breaches, such as that of Target and P.F. Chang's. (ID: 14-50049) See http://www.securitymagazine.com/articles/85770-home-depot-reports-credit-card-security-breach
  • "Security Implications of the Electric Smart Grid", Security Magazine, 04 September 2014. A long-term plan to upgrade America's worn electrical energy system to a "smart grid" of smart, collaborative systems is underway. The "implicit trust" between devices on this network, however, raises some security concerns; interconnected systems could create more potential for security weaknesses. (ID: 14-50050) See http://www.securitymagazine.com/articles/85785-security-implications-of-the-electric-smart-grid
  • "900,000 Android Phones Hit by Ransomware in 30 Days", Cyber Defense Magazine, 26 August 2014. In August alone, almost a million Android devices are reported to have been infected with ransomware, which locks down phones and uses scare tactics to coerce victims into paying a ransom. This particular strain of ransomware, known as "ScarePackage," was reverse-engineered by mobile security firm Lookout, which reports that the authors of the ransomware appear to be Eastern European. (ID: 14-50056) See http://www.cyberdefensemagazine.com/900000-android-phones-hit-by-ransomware-in-30-days/
  • "Russian Gang's Billions of Stolen Credentials Resurface in New Attack", Infosecurity Magazine, 02 September 2014. By using stolen passwords from the August hacking incident that resulted in a massive compromise of credentials, hackers have been employing brute-force tactics to gain access to people's Namecheap accounts, the domain name registrar claims. (ID: 14-50057) See http://www.infosecurity-magazine.com/news/russian-gangs-billions-of-stolen/
  • "HP Warns of Growing North Korean Cyber Menace", Infosecurity Magazine, 02 September 2014. Despite a lack of sufficient critical infrastructure, North Korea has had some success in positioning itself as "a serious cyber threat", according to a report by Hewlett-Packard. By using "quick-and-dirty" tactics, the hermit state has been able to launch numerous cyber attacks, including the Dark Seoul campaign in 2013. (ID: 14-50058) See http://www.infosecurity-magazine.com/news/hp-warns-growing-north-korean/
  • "Apple CEO: iCloud Nude Photo Hack Wasn't Our Fault", Infosecurity Magazine, 05 September 2014. Following the fallout from the leak of celebrities' personal photos, Apple CEO Tim Cook defended the security protocols of Apple's iCloud service. While blame for the incident is still somewhat up for debate, additional security features, such as notifications when a specific device tries to log into an iCloud account for the first time, are expected to make the cloud storage service safer. (ID: 14-50059) See http://www.infosecurity-magazine.com/news/apple-ceo-icloud-nude-photo-hack/
  • "Barclays Unveils Vein Scanner to Authenticate Customers", Infosecurity Magazine, 05 September 2014. Financial services company Barclays announced that it will be using vein identification technology to identify customers and reduce the risk of fraud. Vein identification technology, which is already in use elsewhere, looks for unique vein patterns in fingers and is more accurate than conventional fingerprinting. (ID: 14-50060) See http://www.infosecurity-magazine.com/news/barclays-vein-scanner/
  • "McAfee: Phishing Awareness Remains Abysmal", Infosecurity Magazine, 04 September 2014. A phishing quiz run by McAfee reveals that the ability to distinguish genuine emails from phishing emails is, overall, underwhelming. Phishing remains one of the largest, most predominant threats to cyber security. (ID: 14-50061) See http://www.infosecurity-magazine.com/news/phishing-awareness-remains-abysmal
  • "Mozilla Combats MiTM Attacks, Rogue Certificates in Firefox 32", Infosecurity Magazine, 03 September 2014. Mozilla's newest browser update, Firefox 32, features enhanced security features, including rogue certificate prevention and MiTM-attack prevention through public-key pinning. Pinning creates an enhanced level of verification and trust for certificates, and helps to prevent "imposter" sites from hijacking a network connection. (ID: 14-50062) See http://www.infosecurity-magazine.com/news/mozilla-combats-mitm-attacks-in/
  • "Hackers Use Large Numbers of Transient Domains to Hide Attacks", Infosecurity Magazine, 03 September 2014. According to an analysis of over half a billion hostnames by Blue Coat Systems, a not insignificant portion of "One-Day Wonders" (hostnames that exist for a day or less) are used maliciously for launching DDoS attacks, spam, and botnets. Because of their short lifetimes, such domains are hard to detect and defend against before it's too late. (ID: 14-50063) See http://www.infosecurity-magazine.com/news/hackers-use-transient-domains-to/
  • "NATO Set to Ratify Cyber as Key Military Threat", Infosecurity Magazine, 03 September 2014. This week, NATO plans to adopt a new policy in the cyber realm: an online attack against one NATO member will be considered an attack on all twenty-eight NATO members. The international alliance also plans to make improvements on information sharing and "mutual assistance" to help combat cyber threats. (ID: 14-50064) See http://www.infosecurity-magazine.com/news/nato-set-to-ratify-cyber-as-key/
  • "AT&T Launches Security Resource Center", Infosecurity Magazine, 04 September 2014. AT&T has announced that it will be starting a threat intelligence portal for security and IT professionals. AT&T Security Resource Center, as it is called, will allow security experts to research, discuss, share ideas and work together on cybersecurity issues. (ID: 14-50065) See http://www.infosecurity-magazine.com/news/att-launches-security-resource/
  • "SAIC debuts tiered cybersecurity solution", GCN, 02 September 2014. SAIC, along with the help of other cybersecurity groups, has created CyberSecurity Edge, a new solution that works with a customer's pre-existing infrastructure to fix vulnerabilities and optimize security measures. CyberSecurity Edge's tiered approach "provides maximum data security readiness and responds to advanced persistent cyber threats," according to sector president Doug Wagoner. (ID: 14-50067) See http://gcn.com/articles/2014/09/02/saic-cybersecurity.aspx?admgarea=TC_SecCybersSec
  • "Researchers work to harden cyber infrastructure from WMD", GCN, 27 August 2014. A University of New Mexico team is being funded to research and develop solutions for recovery of cyber-infrastructure that is under threat from attack, including attack by weapons of mass destruction. The project, which is funded by the Defense Threat Reduction Agency (DTRA), aims to create a solution that accurately reflects the "multiple technology domains/layers and support scalable connectivity across large distances" that modern cyber-infrastructure is comprised of. (ID: 14-50070) See http://gcn.com/articles/2014/08/27/unm-dtra.aspx?admgarea=TC_SecCybersSec
  • "Retailers spend less on cybersecurity than other industries, and it shows", Homeland Security News Wire, 05 September 2014. Home Depot is suspected to have been the latest retailer to have suffered a breach, which is likely responsible for a wave of stolen payment card credentials being sold on the underground market. Data breaches of this sort are becoming all too common, and some analysts say the lack of security funding on the part of large retailers is to blame. (ID: 14-50072) See http://www.homelandsecuritynewswire.com/dr20140905-retailers-spend-less-on-cybersecurity-than-other-industries-and-it-shows
  • "Security Researchers Lay Bare TSA Body Scanner Flaws", TechNewsWorld, 22 August 2014. A group of researchers reported at the San Diego USENIX security conference that the Rapiscan Secure 1000 full-body scanner, which was employed by the Transportation Security Administration until recently, is vulnerable to cyber attacks. Additionally, the researchers found that someone with an understanding of how the device works would be able to fool it. (ID: 14-50074) See http://www.technewsworld.com/story/80935.html
  • "Breaking the Cyber Kill Chain", Security Week, 04 September 2014. Lockheed Martin has created a "cyber kill chain" framework to describe the step-by-step process that hackers take when attacking a system. Depending on the capabilities of the entity defending against the attack, and on the specific type of attack itself, security experts will choose a specific point within the kill chain to attempt to disrupt the hacking process. If any part of the kill chain is interrupted, the entire hacking operation can be severely incapacitated. (ID: 14-50075) See http://www.securityweek.com/breaking-cyber-kill-chain
  • "China Launches MitM Attack on Google Users", Security Week, 05 September 2014. Despite being blocked in China, access to google.com is still allowed by the government through CERNET; however, warnings about invalid SSL certificates while accessing Google through CERNET have led some to believe that the Chinese government is most likely attempting a MitM-style attack to monitor usage of the Google search engine by its citizens. (ID: 14-50076) See http://www.securityweek.com/china-launches-mitm-attack-google-users
  • "Goodwill Blames Credit Card Breach on Third-Party Vendor", Security Week, 03 September 2014. After launching an investigation into a recent payment card data breach, Goodwill Industries concluded that attackers used a piece of malware to access Goodwill's systems over a one and a half year period. Names, payment card numbers, and expiration dates were stolen, but more sensitive information like PINs is believed to be safe. (ID: 14-50077) See http://www.securityweek.com/goodwill-blames-credit-card-breach-third-party-vendor
  • "The Irish Are Being Emailed A Trojan Downloader", Information Security Buzz, 04 September 2014. A malicious email identified by ESET Ireland masquerades as a purchase confirmation email. Alarmed by the unknown purchase, the victim is baited into following a link provided in the email that downloads the Elenoocka trojan, which then attempts to download several other malicious files from the internet. The campaign appears to be targeted at the Irish. (ID: 14-50078) See http://www.informationsecuritybuzz.com/irish-emailed-trojan-downloader/
  • "Data Breaches: Why the Costs Matter", Information Security Buzz, 03 September 2014. Though protecting against data breaches can be costly, cutting corners can lead to drastic consequences, and anyone who watches the news knows this too well. The legal costs, fines, and loss of reputation can easily be more costly to a large business than defensive measures. (ID: 14-50080) See http://www.informationsecuritybuzz.com/data-breaches-costs-matter/
  • "Malware Still Generated at a Rate of 160,000 New Samples a Day in Q2 2014, Reports PandaLabs", Information Security Buzz, 02 September 2014. The rate at which new malware is being produced reached 160,000 samples per day in the second quarter of 2014. Noteworthy malware trends include a significant rise in the occurrence of PUPs (Potentially Unwanted Programs), while trojans now account for a decreasing portion of malware, despite remaining the most common type at roughly fifty-eight percent. (ID: 14-50081) See http://www.informationsecuritybuzz.com/malware-still-generated-rate-160000-new-samples-day-q2-2014-reports-pandalabs/
  • "Can Cloud Vendors Be Trusted to Obey Data Protection Laws?", Information Security Buzz, 18 September 2014. A study on trust in the security of cloud storage found that European IT professionals generally distrust the ability of cloud storage providers to properly follow laws protecting their data and their users' privacy, and that many see cloud storage as a factor that increases the likelihood of a data breach. The study also noted that data breaches involving the cloud tended to have a much higher economic cost, a phenomenon known as the "cloud multiplier effect". (ID: 14-50082) See http://www.informationsecuritybuzz.com/can-cloud-vendors-trusted-obey-data-protection-laws/
  • "Businesses and IT Security Companies, Unite!", Information Security Buzz, 17 September 2014. Driverless vehicles are one of many futuristic, computer-controlled concepts coming to life. Like the computers that preceded them, they will be subject to the same security risks that classical computer systems are vulnerable to. Kaspersky Labs CEO and Chairman Eugene Kaspersky believes that securing these systems must be done preemptively to ensure the safety of those who put their lives in the hands of this new technology. Kaspersky Labs has been researching the security and risk factors of connected vehicles. (ID: 14-50083) See http://www.informationsecuritybuzz.com/businesses-security-companies-unite/
  • "Cyber Security Initiatives Are Key to Public Sector Security, Says Databarracks", Information Security Buzz, 17 September 2014. Secure cloud services provider Databarracks concluded that in the UK, the public sector is often the hardest hit by cyber threats. Public organizations, which may have fewer resources and a perceived lower risk, often lag behind government. Cyber initiatives and programs aimed at helping public businesses and organizations are, therefore, crucial. (ID: 14-50084) See http://www.informationsecuritybuzz.com/cyber-security-initiatives-key-public-sector-security-says-databarracks/
  • "Preventing the Next Mega-Breach with Identity Relationship Management (IRM)", Information Security Buzz, 16 September 2014. With large-scale "mega data breaches" becoming all too common, speedy disclosure of and response to breaches is crucial to the reputation and financial situation of a company. The surge in data breaches will hopefully bring about a "new awareness" of data security, along with interest in solutions like Identity Relationship Management. (ID: 14-50085) See http://www.informationsecuritybuzz.com/preventing-next-mega-breach-identity-relationship-management-irm/
  • "IoT Security Must Be Fixed for the Long Term, Says Beecham Report", Information Security Buzz, 16 September 2014. According to Beecham Research, the security of the rapidly approaching Internet of Things (IoT) is crucial to the safety and well-being of those who will rely on it. As it stands, current IoT security technologies are not up to the task. Beecham believes that industry collaboration, semiconductor-level security measures, and general awareness of the issues that interconnected "smart" devices bring must all be stressed. (ID: 14-50086) See http://www.informationsecuritybuzz.com/iot-security-must-fixed-long-term-says-beecham-report/
  • "Context Hacks Into Canon IoT Printer to Run Doom", Information Security Buzz, 15 September 2014. Researchers at Context Information Security, who have gained attention in the recent past for hacking a smart light bulb and other IoT devices, were able to remotely access a networked Canon printer and modify its firmware to run the popular 1990s video game "Doom". Canon was notified and has since fixed the issue. (ID: 14-50087) See http://www.informationsecuritybuzz.com/context-hacks-canon-iot-printer-run-doom/
  • "Firms Must Have A BYOD Policy or Risk Major Security Breaches", Information Security Buzz, 09 September 2014. According to recent independent research by Samsung and McAfee, many companies report lost or stolen company-issued mobile devices, which poses a serious security risk. This, along with the not-insignificant cost of providing these devices, has made bring your own device (BYOD) policies more and more attractive to firms and companies. (ID: 14-50088) See http://www.informationsecuritybuzz.com/firms-must-byod-policy-risk-major-security-breaches/
  • "Will Technology Replace Security Analysts?", Security Week, 15 September 2014. Joshua Goldfarb, Chief Security Strategist of the Enterprise Forensics Group at FireEye, shares his thoughts on the automation of security analysis: Could technology keep up with the ever-changing cyber landscape? Are cyber threats themselves too dynamic to be stopped by an automated process, or is human intelligence required? (ID: 14-50091) See http://www.securityweek.com/will-technology-replace-security-analysts
  • "Next Generation Firewall: Looking Back to See Ahead", Security Week, 15 September 2014. By looking back at the sequence of cat-and-mouse battles that constitutes the history of the firewall, we can predict where this imperative security tool is headed. Learning from history will be essential to keeping modern firewalls up to the task they were created for. (ID: 14-50092) See http://www.securityweek.com/next-generation-firewall-looking-back-see-ahead
  • "Top security concerns, need-to-know industry trends on agenda for ASIS 2014", Government Security News, 09 September 2014. ASIS International's 2014 Annual Seminar and Exhibits is set to take place in Atlanta, Georgia from September 29th to October 2nd. Guests will be able to attend lectures, addresses, and other educational exhibits and sessions covering a wide range of security subjects from hundreds of companies. (ID: 14-50093) See http://www.gsnmagazine.com/node/42427?c=cyber_security
  • "XSS Flaw Burns a Hole in Kindle Security", TechNewsWorld, 16 September 2014. A cross-site scripting (XSS) flaw in Amazon's Kindle e-book library was discovered by security consultant Benjamin Mussler. The flaw, which was fixed but later re-introduced, allows hackers to inject malicious code and steal a user's Amazon-associated cookies. (ID: 14-50094) See http://www.technewsworld.com/story/81055.html
  • "DoD Ramps Up Security as It Drifts Toward Cloud", TechNewsWorld, 12 September 2014. Amazon Web Services and two other vendors, which have received authorization to be used for certain security levels of the DoD's Cloud Security Model, allow DoD agencies to better utilize cloud technologies. This is part of the DoD's move towards embracing cloud technologies as an effective tool to aid in its missions. (ID: 14-50095) See http://www.technewsworld.com/story/81035.html
  • "Millions of Gmail Users Victims of Latest Password Heist", TechNewsWorld, 11 September 2014. A simple text file of approximately five million Gmail usernames and passwords was posted to a Russian security forum and distributed across the web. Google released a statement saying that there is no evidence that any of its systems were compromised, and notified the users listed in the file. (ID: 14-50096) See http://www.technewsworld.com/story/81026.html
  • "IBM Enlists Intel to Shore Up Hybrid Cloud", TechNewsWorld, 10 September 2014. To better embrace the potential of cloud technology, IBM announced that it will use Intel's Trusted Execution Technology to provide hardware-level security assurance for its SoftLayer cloud platform. Security concerns are considered the biggest obstacle to adoption of cloud technologies. (ID: 14-50097) See http://www.technewsworld.com/story/81022.html
  • "Virtually every agency of the U.S. government has been hacked: Experts", Homeland Security Newswire, 12 September 2014. Despite measures to bolster the United States' cyber defenses, the FBI's Robert Anderson explained to lawmakers that, in some way or another, nearly every one of the government's agencies has been hacked. Anderson, who is the executive assistant director for the FBI's Criminal, Cyber, Response, and Services branch, also cited cooperation between government and private sector cybersecurity teams as crucial for responding to and preventing cyber attacks. (ID: 14-50098) See http://www.homelandsecuritynewswire.com/dr20140912-virtually-every-agency-of-the-u-s-government-has-been-hacked-experts



Best Scientific Cybersecurity Paper



NSA AWARD FOR THE BEST SCIENTIFIC CYBERSECURITY PAPER

Laurel, MD--19 September 2014.

Presentations by, and awards to, five academic researchers from the Universities of Maryland, Bonn, and Leibniz were the order of the day at a special ceremony in the Emerson Cafe. The scholars were recognized as the authors of the winning and honorable mention papers in the 2013 Best Scientific Cybersecurity Paper competition.

Dr. Deborah Frincke, NSA Director of Research, welcomed the researchers and thanked them for their contribution to the evolving Science of Security. Dr. Michael Hicks of the University of Maryland led the winning team, which included Dr. Elaine Shi and graduate student Chang Liu. Their work, "Memory Trace Oblivious Program Execution", showed that combining programming languages (PL) techniques and cryptography can yield memory trace obliviousness (MTO). Their goal was to address the problem that, in the cloud, data encryption can mask content but not header information. Using Oblivious RAM, which had been around as a "curiosity" since the 1980s, they demonstrated a hybrid system that masks both headers and content at a relatively small overhead.

Dr. Matthew Smith, now at the University of Bonn, and his colleague Sascha Fahl, University of Leibniz, presented the Honorable Mention paper, "Rethinking SSL Development in an Appified World." Dr. Smith told the audience about the problem of SSL certificate failure on Android devices and iPhones. Their research showed that 14% to 18% of the applications they examined were subject to man-in-the-middle attacks (MITMA) because SSL certificates were invalid or validation was bypassed. To find the reasons for this security failure, they interviewed developers and looked at the nature of each specific certificate problem. Their conclusions indicate that developers often disable SSL certificate validation during development and inadvertently leave it off when their apps ship, including, in one example they cited, an antivirus app.
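The failure mode the researchers describe, certificate validation switched off during development and never restored, can be reproduced in most languages. As a minimal illustrative sketch (not taken from the paper; the standard-library `ssl` usage is ordinary Python, and the variable names are ours), compare a properly validating TLS context with the broken one:

```python
import ssl

# Secure default: verifies the server's certificate chain and hostname.
secure_ctx = ssl.create_default_context()

# The anti-pattern described above: validation disabled "temporarily"
# during development and shipped that way.  Any certificate -- including
# an attacker's -- is now accepted, enabling man-in-the-middle attacks.
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False
insecure_ctx.verify_mode = ssl.CERT_NONE
```

A connection made with `insecure_ctx` will succeed against any server, which is exactly why the pattern survives testing and reaches production unnoticed.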

Following the presentation, a lively group discussion and question and answer period ensued, moderated by longtime cybersecurity expert Dr. Carl Landwehr.

Stuart Krohn, Technical Director for the Science of Security, closed the session with praise for the research and the researchers' contribution to the advancement of the science of security. Copies of the papers and short descriptions of the researchers are available on the CPS-VO website at: http://cps-vo.org/group/sos/papercompetition

(ID: 14-2283)




General Topics of Interest



General Topics of Interest reflects today's most popularly discussed challenges and issues in the Cybersecurity space. GToI includes news items related to Cybersecurity, updated information regarding academic SoS research, interdisciplinary SoS research, profiles on leading researchers in the field of SoS, and global research being conducted on related topics.

(ID#:14-2285)






System Science of SecUrity and REsilience (SURE)



Research Project SURE to launch in October.
Laurel, MD
October 15, 2014

Four university research teams, representing Vanderbilt University, MIT, the University of California at Berkeley, and the University of Hawaii, are about to kick off the research project "System Science of SecUrity and REsilience for cyber-physical systems" (SURE), a research effort funded by NSA. SURE's goals are to develop foundations and tools for designing, building, and assuring cyber-physical systems (CPS) that can maintain essential system properties despite adversaries. Its technology base will provide cyber-physical system (CPS) designers and operators with models, methods, and tools that can be integrated with an end-to-end model-based design flow and tool chain.

According to the team, security and resilience have been largely disjoint, or entirely missing, aspects of CPS design. Due to advances in and integration of wireless sensor-actuator networks, the Internet of Things, data-driven analytics, and machine-to-machine interfaces, modern CPS can no longer permit such separation. CPS now have the ability to inter-operate and adapt to open, dynamic environments.

New trends in CPS include faster operational time-scales, greater spatial interconnectedness, larger numbers of mixed initiative interactions, and increased heterogeneity of components. These trends are forcing the physical and cyber sides of systems to become tightly coupled. The failure of loosely coupled physical and cyber schemes shows up in unresolved design conflicts between performance and resilience against faults and intrusions, and conflicts between the need for performance optimization and maintaining robustness against adversarial impacts.

As an integral part of the proposed research program, the group will launch a sustained effort to create a new generation of engineers comfortable with understanding, exploiting and managing security and resilience in the context of integrated computational, physical phenomena interacting with human designers and operators.

SURE's research thrusts will focus on:

  • Hierarchical coordination and control;
  • Resilient monitoring and control of the networked control system infrastructure;
  • Science of decentralized security, which aims to develop a framework that will enable reasoning about the security of all the integrated constituent CPS components;
  • Reliable and practical reasoning about secure computation and communication in networks, which aims to contribute a formal framework for reasoning about security in CPS;
  • Evaluation and experimentation using modeling and simulation integration of cyber and physical platforms that directly interface with human decision-making; and
  • An education and outreach component that aims at educating the next generation of researchers in the field of security and resilience of CPS.

The Lead PI, Xenofon Koutsoukos, is an Associate Professor in the Department of Electrical Engineering and Computer Science at Vanderbilt University. He is also a Senior Research Scientist in the Institute for Software Integrated Systems (ISIS). Before joining Vanderbilt, Dr. Koutsoukos was a Member of Research Staff in the Xerox Palo Alto Research Center (PARC) (2000-2002), working in the Embedded Collaborative Computing Area. He received his PhD in Electrical Engineering from the University of Notre Dame in 2000. His research work is in the area of cyber-physical systems with emphasis on formal methods, distributed algorithms, diagnosis and fault tolerance, and adaptive resource management.

Saurabh Amin (MIT PI) is an Assistant Professor in the MIT Department of Civil and Environmental Engineering. His research focuses on the design and implementation of resilient network control algorithms for infrastructure systems. He works on robust diagnostics and control problems that involve using networked systems to facilitate the monitoring and control of large-scale critical infrastructures, including energy, transportation, and water distribution systems. He also studies the effect of security attacks and random faults on the survivability of these systems, and designs incentive mechanisms to reduce network risks.

Dusko Pavlovic (U. of Hawaii PI) was born in Sarajevo, studied mathematics at Utrecht, and was a postdoc at McGill, before starting an academic career in computer science at Imperial College and at Sussex. He left academia from 1999 to 2009 to work in software research at the Kestrel Institute in Palo Alto. He was a Visiting Professor at Oxford University from 2008-2012, Professor of Information Security at Royal Holloway, University of London (part time at University of Twente in the Netherlands) 2010-2013. He took his current chair in Computer Science at University of Hawaii at Manoa in 2013. Through the years, Dusko's publications covered a wide area of research interests, from mathematics (graphs, categories) through theoretical computer science (semantics, symbolic computation) and software engineering (behavioral specifications, adaptation), to security (protocols, trust, physical security) and network computation (information extraction). Dusko's past publications and the slides of some of his recent talks are available from his web page.

S. Shankar Sastry (UC Berkeley PI) received his B.Tech. from the Indian Institute of Technology, Bombay, in 1977, and an M.S. in EECS, an M.A. in Mathematics, and a Ph.D. in EECS from UC Berkeley in 1979, 1980, and 1981, respectively. He is currently Dean of the College of Engineering at UC Berkeley. He was formerly the Director of CITRIS (Center for Information Technology Research in the Interest of Society) and the Banatao Institute @ CITRIS Berkeley. He served as chair of the EECS department from January 2001 through June 2004. In 2000, he served as Director of the Information Technology Office at DARPA. From 1996 to 1999, he was the Director of the Electronics Research Laboratory at Berkeley, an organized research unit on the Berkeley campus conducting research in computer science and all aspects of electrical engineering. He is the NEC Distinguished Professor of Electrical Engineering and Computer Sciences and holds faculty appointments in the Departments of Bioengineering, EECS, and Mechanical Engineering. Prior to joining the EECS faculty in 1983, he was a professor at MIT.

The SURE project kickoff meeting is scheduled for Monday, October 27th. (ID#:14-2622)




Wyvern Programming Language



OVERVIEW

Researchers at Carnegie Mellon University, led by associate professor Jonathan Aldrich of the university's Institute for Software Research (ISR), have been working to develop an innovative, breakthrough programming language for building secure web and mobile applications. The new programming language, called Wyvern - aptly named for the legendary two-legged, winged dragon fiercely protective of its treasure - aims to help software engineers build secure mobile and web applications using several type-based, domain-specific languages (DSLs) within the same program. Wyvern is able to identify sublanguages (SQL, HTML, etc.) used in the program based on types and their context, which signify the format of the data, according to CMU's press release (Spice 2014, accessed at http://www.cs.cmu.edu/news/carnegie-mellon-developing-programming-language-accommodates-multiple-languages-same-program). Just as a wyvern dragon ensures protection of its treasure, the Wyvern language is designed to help create secure programs.

Dr. Aldrich and his team recognized that the proliferation of programming languages used in the development of web and mobile applications is incredibly inefficient. Though software development has come a long way, from Fortran to JavaScript, the web and mobile arenas struggle to cobble together a "...mishmash of artifacts written in different languages, file formats, and technologies", according to CMU's web page on Wyvern rationale (http://www.cs.cmu.edu/~aldrich/wyvern/spec-rationale.html). For example, constructing most commercial web pages requires HTML for structure, CSS for design, and JavaScript for user interaction, as well as SQL to access the database back-end. The diversity of current languages and tools used to create an application increases the associated time, cost, and security risks, opening the door for the particularly prevalent cross-site scripting and SQL injection attacks. In light of this, Wyvern eliminates the need to use character strings as commands, as is done, for instance, in SQL. When character strings are allowed, malicious users with a rough knowledge of a system's structure can execute destructive commands such as DROP TABLE, or manipulate instituted access controls.
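The string-pasting risk described above can be seen in miniature with Python's built-in sqlite3 module (the table and inputs here are hypothetical, and this is ordinary parameterized SQL rather than Wyvern itself): concatenating user input into a command string lets an attacker rewrite the query, while a placeholder binds the same input strictly as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: the input is pasted into the command string, so the
# OR '1'='1' clause becomes part of the query and matches every row.
leaked = conn.execute(
    "SELECT role FROM users WHERE name = '%s'" % user_input).fetchall()

# Safe: the ? placeholder binds the input as a literal value, so the
# payload is just an oddly named user that matches nothing.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)).fetchall()

print(leaked)  # [('admin',)] -- the injection succeeded
print(safe)    # []           -- the same input, treated as data
```

Wyvern's type-based approach aims to make the unsafe variant inexpressible, rather than relying on the programmer to remember the safe one.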

Dr. Aldrich likens Wyvern's capabilities to those of a "...skilled international negotiator who can smoothly switch between languages...", able to discern which sublanguage is being used through context, much like the way "...a person would realize that a conversation about gourmet dining might include some French words and phrases" (Spice 2014). Wyvern strives to provide:
* Flexible syntax, using an internal DSL strategy;
* Typechecking, with static type checking based on rules defined in Wyvern-internal DSLs;
* Secure language and library constructs, providing secure built-in datatypes and database access through an internal DSL; and
* High-level abstractions, wherein programmers will be able to define an application's architecture, to be enforced by the type system and implemented by the compiler and runtime.

A succinct PowerPoint presentation of Wyvern and examples may be accessed at http://www.cs.cmu.edu/~comar/GlobalDSL13-Wyvern.pdf.

TECHNICAL SPECS

Similar to languages such as Python, Wyvern is a pure object-oriented language that is value-based, statically type-safe, and supports functional programming (Nistor et al. 2013, accessed at http://www.cs.cmu.edu/~aldrich/papers/maspeghi13.pdf). Wyvern follows the principle that objects should only be accessible by invoking their methods. As such, with Wyvern's use of type-specific languages (TSLs), a type is invoked only when a literal appears in the context of the expected type, ensuring non-interference (Omar 2014, accessed at http://www.cs.cmu.edu/~aldrich/papers/ecoop14-tsls.pdf ).

Wyvern is under active development and is an open-source project. Interested users may explore the language at https://github.com/wyvernlang/wyvern .

WYVERN IN THE NEWS

The Wyvern programming language has attracted enthusiastic interest in the security world. Gizmag, which covers new and emerging technological innovations, describes Wyvern as "something of a meta-language", and agrees that the web would be a much more secure place if not for vulnerabilities due to the common coding practice of "pasted-together strings of database commands" (Moss 2014, accessed at http://www.gizmag.com/wyvern-multiple-programming-languages/33302/#comments). The CMU Lablet and Wyvern were featured in a press release by SD Times, which mentions the integration of multiple languages, citing flexibility in terms of additional sublanguages and easy-to-implement compilers. The article may be accessed at http://sdtimes.com/wyvern-language-works-platforms-interchangeably/. ACM Communications explains Wyvern as a host language that allows developers to import other languages for use on a project, but warns that Wyvern, as a meta-language, could itself be vulnerable to attack. The ACM article can be accessed at http://cacm.acm.org/news/178649-new-nsa-funded-programming-language-could-close-long-standing-security-holes/fulltext.

Learn more about Wyvern at http://www.cs.cmu.edu/~aldrich/wyvern/ .

References

Spice, Byron (2014). Carnegie Mellon developing programming language that accommodates multiple languages in same program. Carnegie Mellon University School of Computer Science. Retrieved from http://www.cs.cmu.edu/news/carnegie-mellon-developing-programming-language-accommodates-multiple-languages-same-program

Cyrus Omar, Darya Kurilova, Ligia Nistor, Benjamin Chung, Alex Potanin, and Jonathan Aldrich. Safely composable type-specific languages. Proc. European Conference on Object-Oriented Programming, 2014. Retrieved from http://www.cs.cmu.edu/~aldrich/papers/ecoop14-tsls.pdf

Ligia Nistor, Darya Kurilova, Stephanie Balzer, Benjamin Chung, Alex Potanin, and Jonathan Aldrich. Wyvern: a simple, typed, and pure object-oriented language. In Mechanisms for Specialization, Generalization, and Inheritance (MASPEGHI), 2013. Retrieved from http://www.cs.cmu.edu/~aldrich/papers/maspeghi13.pdf

Moss, Richard (2014). Wyvern system allows multiple programming languages within one computer system. Gizmag. Retrieved from http://www.gizmag.com/wyvern-multiple-programming-languages/33302/#comments

(ID: 14-2494)




Publications of Interest



The Publications of Interest section contains bibliographical citations, abstracts if available and links on specific topics and research problems of interest to the Science of Security community.

How recent are these publications?

These bibliographies include recent scholarly research on topics which have been presented or published within the past year. Some represent updates from work presented in previous years, others are new topics.

How are topics selected?

The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness for current researchers.

How can I submit or suggest a publication?

Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.

Submissions and suggestions may be sent to: research (at) securedatabank.net

(ID#:14-2287)




Anonymity



Minimizing privacy risk is one of the major problems attendant on the development of social media and hand-held smart phone technologies. K-anonymity is one main method for anonymizing data. Many of the articles cited here focus on k-anonymity to ensure privacy. Others look at elliptic keys and privacy enhancing techniques more generally. These articles were presented between January and September, 2014.
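As a minimal illustration of the k-anonymity property the articles below build on (the records, generalizations, and function name here are hypothetical, not drawn from any cited paper), a released dataset is k-anonymous when every combination of quasi-identifier values is shared by at least k records:

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every quasi-identifier combination occurs in >= k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical release: age generalized to a decade, ZIP to a prefix.
records = [
    {"age": "30-39", "zip": "021**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "021**", "diagnosis": "cold"},
    {"age": "30-39", "zip": "021**", "diagnosis": "flu"},
    {"age": "40-49", "zip": "021**", "diagnosis": "asthma"},
]
print(is_k_anonymous(records, ["age", "zip"], 3))  # False: the 40-49 group has one record
print(is_k_anonymous(records, ["zip"], 4))         # True: all four share the ZIP prefix
```

The research challenge, as several of the papers below discuss, is choosing generalizations that achieve a target k while losing as little information as possible.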

  • Wu, S.; Wang, X.; Wang, S.; Zhang, Z.; Tung, A.K.H., "K-Anonymity for Crowdsourcing Database," Knowledge and Data Engineering, IEEE Transactions on, vol.26, no.9, pp.2207,2221, Sept. 2014. doi: 10.1109/TKDE.2013.93 In a crowdsourcing database, human operators are embedded into the database engine and collaborate with other conventional database operators to process the queries. Each human operator publishes small HITs (Human Intelligent Tasks) to the crowdsourcing platform, which consist of a set of database records and corresponding questions for human workers. The human workers complete the HITs and return the results to the crowdsourcing database for further processing. In practice, published records in HITs may contain sensitive attributes, probably causing privacy leakage, so that malicious workers could link them with other public databases to reveal individual private information. Conventional privacy protection techniques, such as K-Anonymity, can be applied to partially solve the problem. However, after generalizing the data, the result of standard K-Anonymity algorithms may render uncontrollable information loss and affect the accuracy of crowdsourcing. In this paper, we first study the tradeoff between privacy and accuracy for the human operator within the data anonymization process. A probability model is proposed to estimate the lower bound and upper bound of the accuracy for general K-Anonymity approaches. We show that searching for the optimal anonymity approach is NP-Hard and only a heuristic approach is available. The second contribution of the paper is a general feedback-based K-Anonymity scheme. In our scheme, synthetic samples are published to the human workers, the results of which are used to guide the selection of anonymity strategies. We apply the scheme to the Mondrian algorithm by adaptively cutting the dimensions based on our feedback results on the synthetic samples. We evaluate the performance of the feedback-based approach on the U.S. census dataset, and show that given a predefined K, our proposal outperforms standard K-Anonymity approaches on retaining the effectiveness of crowdsourcing. Keywords: Crowdsourcing; Database Management; General; Information Technology and Systems; K-Anonymity; Query design and implementation languages; Security and protection; data partition; database privacy; integrity (ID#:14-2289) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6529080&isnumber=6871455
  • Jianpei Zhang; Ying Zhao; Yue Yang; Jing Yang, "A K-anonymity Clustering Algorithm Based On The Information Entropy," Computer Supported Cooperative Work in Design (CSCWD), Proceedings of the 2014 IEEE 18th International Conference on , vol., no., pp.319,324, 21-23 May 2014. doi: 10.1109/CSCWD.2014.6846862 Data anonymization techniques are the main way to achieve privacy protection, and as a classical anonymity model, K-anonymity is the most effective and frequently-used. But the majority of K-anonymity algorithms can hardly balance the data quality and efficiency, and ignore the privacy of the data to improve the data quality. To solve the problems above, by introducing the concept of "diameter" and a new clustering criterion based on the parameter of the maximum threshold of equivalence classes, we proposed a K-anonymity clustering algorithm based on the information entropy. The results of experiments showed that both the algorithm efficiency and data security are improved, and meanwhile the total information loss is acceptable, so the proposed algorithm has some practicability in application. Keywords: data privacy; entropy; pattern clustering; security of data; K-anonymity clustering algorithm; classical anonymity model; data anonymization techniques; data efficiency; data quality improvement; data security; information entropy; maximum equivalence class threshold; privacy protection; Algorithm design and analysis; Classification algorithms; Clustering algorithms; Data security; Entropy; Information entropy; Loss measurement; K-anonymity; clustering; information entropy; privacy preserving (ID#:14-2290) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846862&isnumber=6846800
  • Liu, J.K.; Man Ho Au; Susilo, W.; Jianying Zhou, "Linkable Ring Signature with Unconditional Anonymity," Knowledge and Data Engineering, IEEE Transactions on, vol.26, no.1, pp.157,165, Jan. 2014. doi: 10.1109/TKDE.2013.17 In this paper, we construct a linkable ring signature scheme with unconditional anonymity. It has been regarded as an open problem in [22] since 2004 for the construction of an unconditional anonymous linkable ring signature scheme. We are the first to solve this open problem by giving a concrete instantiation, which is proven secure in the random oracle model. Our construction is even more efficient than other schemes that can only provide computational anonymity. Simultaneously, our scheme can act as a counterexample to show that [19, Theorem 1] is not always true, which stated that a linkable ring signature scheme cannot provide strong anonymity. Yet we prove that our scheme can achieve strong anonymity (under one of the interpretations). Keywords: cryptography; digital signatures; computational anonymity; random oracle model; unconditional anonymity; unconditional anonymous linkable ring signature scheme; Adaptive systems; Electronic voting; Games; Indexes; Mathematical model; Public key; Ring signature; anonymity; linkable (ID#:14-2291) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6420832&isnumber=6674933
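The "linkability" property discussed above is commonly realized with a linking tag (key image) that is a deterministic function of the signer's secret key and the event. The toy sketch below is not the paper's scheme: it uses an ordinary discrete-log group and omits the ring signature entirely, illustrating only the linking idea. All names and constants are invented for illustration.

```python
import hashlib

P = 2**61 - 1  # a Mersenne prime; the multiplicative group mod P serves as a toy group

def h_point(event: bytes) -> int:
    """Hash an event identifier to a group element (toy hash-to-group)."""
    return int.from_bytes(hashlib.sha256(event).digest(), "big") % P or 1

def linking_tag(secret_key: int, event: bytes) -> int:
    """Deterministic tag: the same signer always produces the same tag for
    the same event, so double-signing is detectable without revealing who
    the signer is among the ring members."""
    return pow(h_point(event), secret_key, P)

alice, bob = 123456789, 987654321   # toy secret keys
vote = b"election-2014"             # event identifier (e.g., one e-voting round)
```

Two signatures on the same event by the same key carry the same tag, while different signers (or different events) yield different tags with overwhelming probability.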
  • Ren-Hung Hwang; Fu-Hui Huang, "SocialCloaking: A Distributed Architecture For K-Anonymity Location Privacy Protection," Computing, Networking and Communications (ICNC), 2014 International Conference on , vol., no., pp.247,251, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785340 As location information becomes commonly available in smart phones, applications of Location Based Services (LBS) have also become very popular and are widely used by smart phone users. Since an LBS query contains the user's location, it raises the privacy concern of exposing the user's location. K-anonymity is a commonly adopted technique for location privacy protection. In the literature, a centralized architecture which consists of a trusted anonymity server is widely adopted. However, this approach exhibits several apparent weaknesses, such as single point of failure, performance bottleneck, serious security threats, and not being trustable to users. In this paper, we re-examine the location privacy protection problem in LBS applications. We first provide an overview of the problem itself, including types of query, privacy protection methods, adversary models, system architectures, and their related works in the literature. We then discuss the challenges of adopting a distributed architecture which does not need to set up a trusted anonymity server and propose a solution by combining unique features of structured peer-to-peer architecture and trust relationships among users of their on-line social networking relations. 
Keywords: data privacy; mobile computing; query processing; social networking (online);trusted computing; K-anonymity location privacy protection; LBS query; SocialCloaking; adversary model; centralized architecture; distributed architecture; failure point; location information; location-based service; on-line social networking relation; security threat; smart phones; structured peer-to-peer architecture; system architecture;trust relationship; trusted anonymity server; user location; Computer architecture; Mobile communication; Mobile handsets; Peer-to-peer computing; Privacy; Servers; Trajectory; Distributed Anonymity Server Architecture; Location Based Service; Location Privacy; Peer-to-Peer; Social Networking (ID#:14-2292) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785340&isnumber=6785290
  • Shinganjude, R.D.; Theng, D.P., "Inspecting the Ways of Source Anonymity in Wireless Sensor Network," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp.705,707, 7-9 April 2014. doi: 10.1109/CSNT.2014.148 Sensor networks are mainly deployed to monitor and report real events, and thus it is very difficult and expensive to achieve event source anonymity for them, as sensor networks are very limited in resources. Data obscurity, i.e. the source anonymity problem, implies that an unauthorized observer must be unable to detect the origin of events by analyzing the network traffic; this problem has emerged as an important topic in the security of wireless sensor networks. This work inspects the different approaches carried out for attaining source anonymity in wireless sensor networks, with a variety of techniques based on different adversarial assumptions. The approach meeting the best result in source anonymity is proposed for further improvement in source location privacy. The paper suggests the implementation of the most prominent and effective LSB steganography technique for the improvement. Keywords: steganography; telecommunication traffic; wireless sensor networks; LSB steganography technique; adversarial assumptions; event source anonymity; network traffic; source location privacy; wireless sensor networks; Communication systems; Wireless sensor network; anonymity; coding theory; persistent dummy traffic; statistical test; steganography (ID#:14-2293) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821490&isnumber=6821334
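The LSB steganography technique the survey recommends can be sketched in a few lines: hide each payload bit in the least-significant bit of successive cover bytes, so the cover data changes by at most one unit per byte. This is a generic textbook illustration, not the paper's implementation; the function names are invented.

```python
def embed_lsb(cover: bytes, payload: bytes) -> bytes:
    """Hide payload bits (MSB-first per byte) in the least-significant bit
    of each cover byte."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for payload")
    stego = bytearray(cover)
    for pos, bit in enumerate(bits):
        stego[pos] = (stego[pos] & 0xFE) | bit   # overwrite only the LSB
    return bytes(stego)

def extract_lsb(stego: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of payload from the LSBs of the stego data."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for bit_index in range(8):
            byte = (byte << 1) | (stego[i * 8 + bit_index] & 1)
        out.append(byte)
    return bytes(out)

cover = bytes(range(64))            # stand-in for sensor readings or pixel bytes
stego = embed_lsb(cover, b"event")
```

Because only the lowest bit of each byte changes, the stego data is statistically close to the cover, which is what makes LSB embedding attractive for resource-limited sensor nodes.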
  • Sabra, Z.; Artail, H., "Preserving Anonymity And Quality Of Service For VOIP Applications Over Hybrid Networks," Mediterranean Electrotechnical Conference (MELECON), 2014 17th IEEE , vol., no., pp.421,425, 13-16 April 2014. doi: 10.1109/MELCON.2014.6820571 In this work we seek to achieve VoIP end users' profile privacy without violating the QoS constraints on the throughput, end-to-end delay, and jitter, as these parameters are the most sensitive factors in multimedia applications. We propose an end-to-end user anonymity design that takes into consideration these constraints in a hybrid environment that involves ad-hoc and infrastructure networks. Using clusterheads for communication and encryption of the RTP payload, we prove, using analysis and OPNET simulations, that our model can be easily integrated into present network infrastructures. Keywords: Internet telephony; cryptography; jitter; quality of service; OPNET simulations; QoS constraints; RTP payload; VoIP applications; anonymity preservation; encryption; end to end delay; hybrid networks; jitter; quality of service; Authentication; Conferences; Cryptography; Delays; Privacy; Protocols; Quality of service; Anonymity; Multimedia; QoS; VoIP; WLAN (ID#:14-2294) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6820571&isnumber=6820492
  • Liping Zhang; Shanyu Tang; Zhihua Cai, "Robust and Efficient Password Authenticated Key Agreement With User Anonymity For Session Initiation Protocol-Based Communications," Communications, IET , vol.8, no.1, pp.83,91, Jan. 3 2014. doi: 10.1049/iet-com.2012.0783 A suitable key agreement protocol plays an essential role in protecting the communications over open channels among users using voice over Internet protocol (VoIP). This study presents a robust and flexible password authenticated key agreement protocol with user anonymity for session initiation protocol (SIP) used by VoIP communications. Security analysis demonstrates that the proposed protocol enjoys many unique properties, such as user anonymity, no password table, session key agreement, mutual authentication, password updating freely, conveniently revoking lost smartcards and so on. Furthermore, the proposed protocol can resist the replay attack, the impersonation attack, the stolen-verifier attack, the man-in-the-middle attack, the Denning-Sacco attack and the offline dictionary attack with or without smartcards. Finally, the performance analysis shows that the protocol is more suitable for practical application in comparison with other related protocols. Keywords: Internet telephony; computer network security; cryptographic protocols; private key cryptography; public key cryptography; signaling protocols; Denning-Sacco attack; SIP; VoIP communications; flexible password authenticated key agreement protocol; impersonation attack; man-in-the-middle attack; offline dictionary attack; replay attack; security analysis; session initiation protocol-based communications; smartcards; stolen-verifier attack; user anonymity; voice over Internet protocol (ID#:14-2295) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6711996&isnumber=6711983
  • Burke, M.-J.; Kayem, A.V.D.M., "K-Anonymity for Privacy Preserving Crime Data Publishing in Resource Constrained Environments," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on , vol., no., pp.833,840, 13-16 May 2014. doi: 10.1109/WAINA.2014.131 Mobile crime report services have become a pervasive approach to enabling community-based crime reporting (CBCR) in developing nations. These services hold the advantage of facilitating law enforcement when resource constraints make using standard crime investigation approaches challenging. However, CBCRs have failed to achieve widespread popularity in developing nations because of concerns for privacy. Users are hesitant to make crime reports without strong guarantees of privacy preservation. Furthermore, an oftentimes lack of data mining expertise within law enforcement agencies implies that the reported data needs to be processed manually, which is a time-consuming process. In this paper we make two contributions to facilitate effective and efficient CBCR and crime data mining as well as to address the user privacy concern. The first is a practical framework for mobile CBCR and the second is a hybrid k-anonymity algorithm to guarantee privacy preservation of the reported crime data. We use a hierarchy-based generalization algorithm to classify the data to minimize information loss by optimizing the nodal degree of the classification tree. Results from our proof-of-concept implementation demonstrate that in addition to guaranteeing privacy, our proposed scheme offers a classification accuracy of about 38% and a drop in information loss of nearly 50% over previous schemes when compared on various sizes of datasets. Performance-wise we observe an average improvement of about 50ms proportionate to the size of the dataset. 
Keywords: criminal law; data mining; data privacy; generalisation (artificial intelligence); mobile computing; pattern classification; CBCR; classification accuracy; classification tree; community-based crime reporting; crime data mining; crime investigation approach; hierarchy-based generalization algorithm; k-anonymity; law enforcement; mobile crime report services; pervasive approach; privacy preserving crime data publishing; resource constrained environment; user privacy concern; Cloud computing; Data privacy; Encryption; Law enforcement; Mobile communication; Privacy; Anonymity; Developing Countries; Encryption; Information Loss; Public/Private Key Cryptography; Resource Constrained Environments (ID#:14-2296) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844743&isnumber=6844560
  • Sharma, V., "Methods For Privacy Protection Using K-Anonymity," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, vol., no., pp.149,152, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798301 A large amount of data is produced in electronic form by various governmental and nongovernmental organizations. This data also has information related to specific individuals. Information related to a specific individual needs to be protected, so that it may not harm privacy. Moreover, sensitive information related to organizations also needs to be protected. Data is released from various organizations as it is demanded by researchers and data mining companies to develop newer and better methods for finding patterns and trends. Any organization that wishes to release data has two goals: one is to release the data as close as possible to the original form, and the second is to protect the privacy of individuals and sensitive information from being released. K-anonymity has been used as a successful technique in this regard. This method provides a guarantee that released data is at least k-anonymous. Various methods have been suggested to achieve k-anonymity for a given dataset. I categorize these methods into four main domains based on the principles they build on and the methods they apply to achieve k-anonymous data. These methods have their respective advantages and disadvantages relating to loss of information, feasibility in the real world, and suitability to the number of tuples in the dataset. Keywords: data mining; data protection; data mining; data privacy protection; governmental organizations; information loss; k-anonymous data; nongovernmental organizations; Computers; Data privacy; Diseases; Hypertension; Anonymity; generalization; privacy (ID#:14-2297) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798301&isnumber=6798279
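As a concrete illustration of the generalization-based domain the survey describes, the hypothetical sketch below coarsens two quasi-identifiers (ZIP code and age) and then checks whether every equivalence class reaches size k. The function names and sample table are invented for illustration.

```python
from collections import Counter

def generalize(record, zip_digits=3, age_bucket=10):
    """Coarsen quasi-identifiers: keep a ZIP prefix and bucket the age."""
    zipc = record["zip"][:zip_digits] + "*" * (len(record["zip"]) - zip_digits)
    lo = (record["age"] // age_bucket) * age_bucket
    return (zipc, f"{lo}-{lo + age_bucket - 1}")

def is_k_anonymous(records, k, **params):
    """Every combination of generalized quasi-identifiers must occur >= k times."""
    counts = Counter(generalize(r, **params) for r in records)
    return all(c >= k for c in counts.values())

table = [
    {"zip": "13053", "age": 28}, {"zip": "13068", "age": 21},
    {"zip": "14850", "age": 35}, {"zip": "14853", "age": 39},
]
```

With a 3-digit ZIP prefix the four records fall into two classes of size two, so the table is 2-anonymous; releasing full ZIP codes breaks the guarantee, which is exactly the information-loss/privacy trade-off the survey categorizes.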
  • Ma, R.; Rath, H.K.; Balamuralidhar, P., "Design of a Mix Network Using Connectivity Index -- A Novel Privacy Enhancement Approach," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp.512, 517, 13-16 May 2014. doi: 10.1109/WAINA.2014.86 Privacy Enhancing Techniques (PET) are key to success in building trust among the users of the digital world. Enhancing communication privacy is getting attention nowadays. In this direction, anonymity schemes such as mixes, mix networks, onion routing, crowds, etc., have started making inroads into deployment at individual and community network levels. To measure the effectiveness and accuracy of such schemes, degree of anonymity is proposed as a privacy metric in the literature. To measure the degree of anonymity, many empirical techniques have been proposed. We observe that these techniques are computationally intensive and are infeasible for real-time requirements, and thus may not be suitable to measure the degree of anonymity under dynamic changes in the configuration of the network in real time. In this direction, we propose a novel lightweight privacy metric to measure the degree of anonymity for mixes, mix networks and their variants using a graph theoretic approach based on the Connectivity Index (CI). Further, we also extend this approach with the Weighted Connectivity Index (WCI) and have demonstrated the usefulness of the metric through analysis. 
Keywords: data privacy; graph theory; anonymity schemes; communication privacy; crowds; digital world; graph theoretic approach; lightweight privacy metric; mix network design; mix networks; onion routing; privacy enhancing techniques; real-time requirements; user trust; weighted connectivity index; Algorithm design and analysis; Complexity theory ;Indexes; Measurement; Ports (Computers); Privacy; Real-time systems; Anonymity; Connectivity Index; Mix; Mix Network; Privacy (ID#:14-2298) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844688&isnumber=6844560
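The classical connectivity (Randic) index that the paper's metric is named after is straightforward to compute; the sketch below is a generic illustration of that graph-theoretic quantity (not the authors' weighted variant), evaluated on two toy mix topologies.

```python
import math
from collections import defaultdict

def randic_index(edges):
    """Classical (Randic) connectivity index: the sum over edges (u, v) of
    1/sqrt(deg(u) * deg(v)). A lightweight, purely structural measure of
    how branched/connected a topology is."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(1.0 / math.sqrt(deg[u] * deg[v]) for u, v in edges)

# Two toy mix topologies on four nodes: a cascade (path) vs. a star.
path = [("a", "b"), ("b", "c"), ("c", "d")]
star = [("hub", "x"), ("hub", "y"), ("hub", "z")]
```

Because the index depends only on node degrees, it can be recomputed cheaply when the mix network's configuration changes, which matches the paper's real-time motivation.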
  • Pervaiz, Z.; Aref, W.G.; Ghafoor, A.; Prabhu, N., "Accuracy-Constrained Privacy-Preserving Access Control Mechanism for Relational Data," Knowledge and Data Engineering, IEEE Transactions on , vol.26, no.4, pp.795,807, April 2014. doi: 10.1109/TKDE.2013.71 Access control mechanisms protect sensitive information from unauthorized users. However, when sensitive information is shared and a Privacy Protection Mechanism (PPM) is not in place, an authorized user can still compromise the privacy of a person leading to identity disclosure. A PPM can use suppression and generalization of relational data to anonymize and satisfy privacy requirements, e.g., k-anonymity and l-diversity, against identity and attribute disclosure. However, privacy is achieved at the cost of precision of authorized information. In this paper, we propose an accuracy-constrained privacy-preserving access control framework. The access control policies define selection predicates available to roles while the privacy requirement is to satisfy the k-anonymity or l-diversity. An additional constraint that needs to be satisfied by the PPM is the imprecision bound for each selection predicate. The techniques for workload-aware anonymization for selection predicates have been discussed in the literature. However, to the best of our knowledge, the problem of satisfying the accuracy constraints for multiple roles has not been studied before. In our formulation of the aforementioned problem, we propose heuristics for anonymization algorithms and show empirically that the proposed approach satisfies imprecision bounds for more permissions and has lower total imprecision than the current state of the art. 
Keywords: authorisation; data protection; query processing; relational databases; PPM; access control policies; accuracy constraints; accuracy-constrained privacy-preserving access control mechanism; anonymization algorithms; attribute disclosure; authorized information precision; authorized user; empirical analysis; identity disclosure; imprecision bound; imprecision bounds; k-anonymity; l-diversity; person privacy; privacy protection mechanism; privacy requirement anonymization; privacy requirement satisfaction; query processing; relational data generalization; relational data suppression; selection predicates; sensitive information protection; sensitive information sharing; unauthorized users; workload-aware anonymization; k-anonymity; Access control; privacy; query evaluation (ID#:14-2299) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6512493&isnumber=6777369
  • Zakhary, S.; Radenkovic, M.; Benslimane, A, "Efficient Location Privacy-Aware Forwarding in Opportunistic Mobile Networks," Vehicular Technology, IEEE Transactions on , vol.63, no.2, pp.893,906, Feb. 2014. doi: 10.1109/TVT.2013.2279671 This paper proposes a novel fully distributed and collaborative k-anonymity protocol (LPAF) to protect users' location information and ensure better privacy while forwarding queries/replies to/from untrusted location-based service (LBS) over opportunistic mobile networks (OppMNets). We utilize a lightweight multihop Markov-based stochastic model for location prediction to guide queries toward the LBS's location and to reduce required resources in terms of retransmission overheads. We develop a formal analytical model and present theoretical analysis and simulation of the proposed protocol performance. We further validate our results by performing extensive simulation experiments over a pseudorealistic city map using map-based mobility models and using real-world data trace to compare LPAF to existing location privacy and benchmark protocols. We show that LPAF manages to keep higher privacy levels in terms of k-anonymity and quality of service in terms of success ratio and delay, as compared with other protocols, while maintaining lower overheads. Simulation results show that LPAF achieves up to an 11% improvement in success ratio for pseudorealistic scenarios, whereas real-world data trace experiments show up to a 24% improvement with a slight increase in the average delay. 
Keywords: Markov processes; mobile ad hoc networks; mobility management (mobile radio);protocols; quality of service; telecommunication security ;LBS; LPAF; OppMNets; benchmark protocols; collaborative k-anonymity protocol; lightweight multihop Markov-based stochastic model; location prediction; location privacy-aware forwarding; location-based service; map-based mobility models; opportunistic mobile networks; pseudorealistic city map; quality of service; retransmission overhead; success ratio; Analytical models; Delays; Equations; Markov processes; Mathematical model; Privacy; Protocols; Anonymity; distributed computing; location privacy; mobile ad hoc network (ID#:14-2300) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6587139&isnumber=6739143
  • Banerjee, D.; Bo Dong; Biswas, S.; Taghizadeh, M., "Privacy-Preserving Channel Access Using Blindfolded Packet Transmissions," Communication Systems and Networks (COMSNETS), 2014 Sixth International Conference on, pp.1,8, 6-10 Jan. 2014. doi: 10.1109/COMSNETS.2014.6734887 This paper proposes a novel wireless MAC-layer approach towards achieving channel access anonymity. Nodes autonomously select periodic TDMA-like time-slots for channel access by employing a novel channel sensing strategy, and they do so without explicitly sharing any identity information with other nodes in the network. An add-on hardware module for the proposed channel sensing has been developed and the proposed protocol has been implemented in Tinyos-2.x. Extensive evaluation has been done on a test-bed consisting of Mica2 hardware, where we have studied the protocol's functionality and convergence characteristics. The functionality results collected at a sniffer node using RSSI traces validate the syntax and semantics of the protocol. Experimentally evaluated convergence characteristics from the Tinyos test-bed were also found to be satisfactory. Keywords: data privacy; time division multiple access; wireless channels; wireless sensor networks; Mica2 hardware; RSSI; Tinyos-2.x test-bed implementation; add-on hardware module; blindfolded packet transmission; channel sensing strategy; periodic TDMA-like time-slot; privacy-preserving channel access anonymity; protocol; wireless MAC-layer approach; Convergence; Cryptography; Equations; Google; Heating; Interference; Noise; Anonymity; MAC protocols; Privacy; TDMA (ID#:14-2301) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6734887&isnumber=6734849
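The idea of identity-free slot selection can be illustrated with a toy simulation: each node keeps a tentative slot and re-picks at random whenever it senses a collision, until every node holds a distinct slot. This is an invented sketch of the general mechanism, not the protocol implemented on the Mica2 test-bed.

```python
import random

def converge_slots(n_nodes, n_slots, seed=1, max_rounds=1000):
    """Each node holds a tentative slot; if it senses another transmission
    in its slot (a collision), it re-picks uniformly at random. No node
    identities are ever exchanged -- only carrier sensing is assumed."""
    rng = random.Random(seed)
    slots = [rng.randrange(n_slots) for _ in range(n_nodes)]
    for round_no in range(max_rounds):
        counts = {}
        for s in slots:
            counts[s] = counts.get(s, 0) + 1
        if all(c == 1 for c in counts.values()):
            return slots, round_no           # collision-free schedule reached
        slots = [s if counts[s] == 1 else rng.randrange(n_slots)
                 for s in slots]
    raise RuntimeError("did not converge")

slots, rounds = converge_slots(n_nodes=8, n_slots=16)
```

With more slots than nodes, the process converges quickly in expectation; the paper's experimental convergence study addresses exactly this behavior on real radios.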
  • Ullah, R.; Nizamuddin; Umar, A.I.; ul Amin, N., "Blind Signcryption Scheme Based On Elliptic Curves," Information Assurance and Cyber Security (CIACS), 2014 Conference on , vol., no., pp.51,54, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861332 In this paper, blind signcryption using an elliptic curve cryptosystem is presented. It satisfies the functionalities of Confidentiality, Message Integrity, Unforgeability, Signer Non-repudiation, Message Unlink-ability, Sender Anonymity and Forward Secrecy. The proposed scheme has low computation and communication overhead as compared to existing blind signcryption schemes and is best suited for mobile phone voting and m-commerce. Keywords: public key cryptography; blind signcryption scheme; communication overhead; confidentiality; elliptic curves cryptosystem; forward secrecy; m-commerce; message integrity; message unlink-ability; mobile phone voting; sender anonymity; signer nonrepudiation; unforgeability; Digital signatures; Elliptic curve cryptography; Elliptic curves; Equations; Mobile handsets; Anonymity; Blind Signature; Blind Signcryption (ID#:14-2302) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861332&isnumber=6861314
  • Perez-Gonzalez, F.; Troncoso, C.; Oya, S., "A Least Squares Approach to the Static Traffic Analysis of High-Latency Anonymous Communication Systems," Information Forensics and Security, IEEE Transactions on , vol.9, no.9, pp.1341,1355, Sept. 2014. doi: 10.1109/TIFS.2014.2330696 Mixes, relaying routers that hide the relation between incoming and outgoing messages, are the main building block of high-latency anonymous communication networks. A number of so-called disclosure attacks have been proposed to effectively deanonymize traffic sent through these channels. Yet, the dependence of their success on the system parameters is not well-understood. We propose the least squares disclosure attack (LSDA), in which user profiles are estimated by solving a least squares problem. We show that LSDA is not only suitable for the analysis of threshold mixes, but can be easily extended to attack pool mixes. Furthermore, contrary to previous heuristic-based attacks, our approach allows us to analytically derive expressions that characterize the profiling error of LSDA with respect to the system parameters. We empirically demonstrate that LSDA recovers users' profiles with greater accuracy than its statistical predecessors and verify that our analysis closely predicts actual performance. Keywords: cryptography; least squares approximations; LSDA; cryptographic means; disclosure attacks; high-latency anonymous communication systems; least squares disclosure attack; pool mixes; static traffic analysis; statistical predecessors; Accuracy; Bayes methods; Estimation; Least squares approximations; Random variables; Receivers; Vectors; Anonymity; disclosure attacks; mixes (ID#:14-2304) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6832564&isnumber=6867417
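The core of a least squares disclosure attack is solving the normal equations relating observed per-round input counts to output counts. The sketch below recovers sender profiles exactly on noiseless synthetic data; it is a simplified illustration of the approach on a threshold-mix-style model, not the authors' estimator for pool mixes, and all variable names are invented.

```python
def lsda_profiles(X, Y):
    """Estimate sender-to-receiver probabilities P from per-round input
    counts X (rounds x senders) and output counts Y (rounds x receivers)
    by solving the normal equations (X^T X) P = X^T Y via Gauss-Jordan
    elimination with partial pivoting. Pure Python, no dependencies."""
    n, m = len(X[0]), len(Y[0])
    XtX = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    XtY = [[sum(X[t][i] * Y[t][j] for t in range(len(X))) for j in range(m)]
           for i in range(n)]
    A = [XtX[i] + XtY[i] for i in range(n)]   # augmented system [XtX | XtY]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [[A[i][n + j] / A[i][i] for j in range(m)] for i in range(n)]

X = [[2, 1], [1, 3], [4, 0]]                  # observed inputs per round
P_true = [[1.0, 0.0], [0.25, 0.75]]           # hidden sender profiles
Y = [[sum(x[i] * P_true[i][j] for i in range(2)) for j in range(2)] for x in X]
P_est = lsda_profiles(X, Y)
```

On real mix traffic the outputs are noisy counts rather than exact expectations, so the estimate only converges to the true profiles as the number of observed rounds grows, which is the error behavior the paper characterizes analytically.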
  • Fouad, M.R.; Elbassioni, K.; Bertino, E., "A Supermodularity-Based Differential Privacy Preserving Algorithm for Data Anonymization," Knowledge and Data Engineering, IEEE Transactions on , vol.26, no.7, pp.1591,1601, July 2014. doi: 10.1109/TKDE.2013.107 Maximizing data usage and minimizing privacy risk are two conflicting goals. Organizations always apply a set of transformations on their data before releasing it. While determining the best set of transformations has been the focus of extensive work in the database community, most of this work suffered from one or both of the following major problems: scalability and privacy guarantee. Differential Privacy provides a theoretical formulation for privacy that ensures that the system essentially behaves the same way regardless of whether any individual is included in the database. In this paper, we address both scalability and privacy risk of data anonymization. We propose a scalable algorithm that meets differential privacy when applying a specific random sampling. The contribution of the paper is two-fold: 1) we propose a personalized anonymization technique based on an aggregate formulation and prove that it can be implemented in polynomial time; and 2) we show that combining the proposed aggregate formulation with specific sampling gives an anonymization algorithm that satisfies differential privacy. Our results rely heavily on exploring the supermodularity properties of the risk function, which allow us to employ techniques from convex optimization. Through experimental studies we compare our proposed algorithm with other anonymization schemes in terms of both time and privacy risk. 
Keywords: data privacy; optimisation; convex optimization; data anonymization; data usage maximization; database community; privacy risk; privacy risk minimization; random sampling; scalability risk; supermodularity-based differential privacy preserving algorithm; Aggregates; Communities; Data privacy; Databases; Privacy; Scalability; Security; Data; Data sharing; Database Management; Database design; Differential privacy; General; Information Storage and Retrieval; Information Technology and Systems; Knowledge and data engineering tools and techniques; Online Information Services; Security and protection; anonymity; data sharing; data utility; integrity; modeling and management; risk management; scalability; security (ID#:14-2305) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6709680&isnumber=6851230
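For context, the baseline differential-privacy building block (not the paper's supermodularity-based algorithm) is the Laplace mechanism: a count query has sensitivity 1, so adding Laplace(1/epsilon) noise makes it epsilon-differentially private. A minimal sketch with invented example data:

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) via the inverse CDF of a uniform draw."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_count(records, predicate, epsilon, rng):
    """epsilon-DP count query: adding or removing one person changes a
    count by at most 1 (sensitivity 1), so Laplace(1/epsilon) noise
    suffices for epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(0)
ages = [23, 35, 41, 29, 52, 33]   # toy sensitive dataset
noisy = private_count(ages, lambda a: a >= 30, epsilon=0.5, rng=rng)
```

Smaller epsilon means stronger privacy but noisier answers; the paper's contribution is making that trade-off scale to full anonymization rather than single queries.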




Digital Signatures

Digital Signatures


Digital signatures are a common method of demonstrating the authenticity of a message. But such signatures can, of course, be forged. Research into digital signatures cited here has looked at digital signatures in the context of the Internet of Things, the elliptic curve digital signature algorithm, a hardware quantum-based algorithm, and the use of DNA cryptography. These papers were presented or published between January and August of 2014.

  • Skarmeta, A.F.; Hernandez-Ramos, J.L.; Moreno, M.V., "A Decentralized Approach For Security And Privacy Challenges In The Internet Of Things," Internet of Things (WF-IoT), 2014 IEEE World Forum on , vol., no., pp.67,72, 6-8 March 2014. doi: 10.1109/WF-IoT.2014.6803122 The strong development of the Internet of Things (IoT) is dramatically changing traditional perceptions of the current Internet towards an integrated vision of smart objects interacting with each other. While in recent years many technological challenges have already been solved through the extension and adaptation of wireless technologies, security and privacy still remain as the main barriers for the IoT deployment on a broad scale. In this emerging paradigm, typical scenarios manage particularly sensitive data, and any leakage of information could severely damage the privacy of users. This paper provides a concise description of some of the major challenges related to these areas that still need to be overcome in the coming years for a full acceptance of all IoT stakeholders involved. In addition, we propose a distributed capability-based access control mechanism which is built on public key cryptography in order to cope with some of these challenges. Specifically, our solution is based on the design of a lightweight token used for access to CoAP Resources, and an optimized implementation of the Elliptic Curve Digital Signature Algorithm (ECDSA) inside the smart object. The results obtained from our experiments demonstrate the feasibility of the proposal and show promise in covering more complex scenarios in the future, as well as its application in specific IoT use cases. 
Keywords: Internet of Things; authorisation; computer network security; data privacy; digital signatures; personal area networks; public key cryptography;6LoWPAN;CoAP resources; ECDSA; Internet of Things; IoT deployment; IoT stakeholders; distributed capability-based access control mechanism; elliptic curve digital signature algorithm; information leakage; lightweight token; public key cryptography; security challenges; sensitive data management; user privacy; wireless technologies; Authentication; Authorization; Cryptography; Internet; Privacy; 6LoWPAN; Internet of Things; Privacy; Security; cryptographic primitives; distributed access control (ID#:14-2306) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803122&isnumber=6803102
  • Qawaqneh, Z.; Elleithy, K.; Alotaibi, B.; Alotaibi, M., "A New Hardware Quantum-Based Encryption Algorithm," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island , vol., no., pp.1,5, 2-2 May 2014. doi: 10.1109/LISAT.2014.6845201 Cryptography is entering a new age since the first steps that have been made towards quantum computing, which also poses a threat to the classical cryptosystem in general. In this paper, we introduce a new novel encryption technique and algorithm to improve quantum cryptography. The aim of the suggested scheme is to generate a digital signature in quantum computing. An arbitrated digital signature is introduced instead of the directed digital signature to avoid the denial of sending the message from the sender and pretending that the sender's private key was stolen or lost and the signature has been forged. The one-time pad operation used by most quantum cryptography algorithms proposed in the past is avoided to decrease the possibility of channel eavesdropping. The presented algorithm in this paper uses quantum gates to do the encryption and decryption processes. In addition, new quantum gates are introduced, analyzed, and investigated in the encryption and decryption processes. The authors believe the gates that are used in the proposed algorithm improve the security for both classical and quantum computing. The proposed gates in the paper have plausible properties that position them as suitable candidates for encryption and decryption processes in quantum cryptography. To demonstrate the security features of the algorithm, it was simulated using a MATLAB simulator, in particular through the Quack Quantum Library. 
Keywords: digital signatures; quantum computing; quantum cryptography; quantum gates; Matlab simulator; Quack Quantum Library; arbitrated digital signature; channel eavesdropping; decryption process; encryption process; hardware quantum-based encryption algorithm; quantum computing; quantum cryptography improvement; quantum gates; sender private key; signature forging; Encryption; Logic gates; Protocols; Quantum computing; Quantum mechanics; algorithms; quantum; quantum cryptography; qubit key; secure communications (ID#:14-2307) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845201&isnumber=6845183
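Quantum gates of the kind the Qawaqneh et al. abstract builds on act on qubit state vectors as unitary matrices. The toy classical simulation below (plain Python, standard Pauli-X and Hadamard gates; the paper's new gates are not reproduced here, so these stand in as illustrations of the general mechanism) sketches that action:

```python
# Hedged sketch: classical simulation of single-qubit gates as 2x2 unitary
# matrices applied to a 2-element state vector. The gates shown are the
# textbook Pauli-X and Hadamard, not the specific gates the paper proposes.

def apply_gate(gate, state):
    """Multiply a 2x2 gate matrix by a 2-element state vector."""
    return [gate[0][0] * state[0] + gate[0][1] * state[1],
            gate[1][0] * state[0] + gate[1][1] * state[1]]

PAULI_X = [[0, 1], [1, 0]]            # bit flip: |0> <-> |1>
H = 2 ** -0.5
HADAMARD = [[H, H], [H, -H]]          # equal superposition from |0>

ket0 = [1.0, 0.0]                     # the basis state |0>
assert apply_gate(PAULI_X, ket0) == [0.0, 1.0]        # X|0> = |1>
sup = apply_gate(HADAMARD, ket0)                      # H|0> = (|0>+|1>)/sqrt(2)
assert abs(sup[0] ** 2 + sup[1] ** 2 - 1.0) < 1e-12   # still a unit vector
```

Because the gates are unitary, applying a gate twice (for self-inverse gates like Pauli-X) recovers the original state, which is the property an encryption/decryption pair exploits.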
  • Chouhan, D.S.; Mahajan, R.P., "An Architectural Framework For Encryption & Generation Of Digital Signature Using DNA Cryptography," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, vol., no., pp.743,748, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828061 As most modern encryption algorithms are broken fully or partially, the world of information security looks in new directions to protect the data it transmits. The concept of using DNA computing in the field of cryptography has been identified as a possible technology that may bring forward a new hope for hybrid and unbreakable algorithms. Currently, several DNA computing algorithms are proposed for cryptography, cryptanalysis and steganography problems, and they have proven to be very powerful in these areas. This paper gives an architectural framework for encryption and generation of digital signatures using DNA cryptography. To analyze performance, the original plaintext size and the key size, together with the encryption and decryption times, are examined; experiments on plaintexts with different contents are also performed to test the robustness of the program. Keywords: biocomputing; digital signatures; DNA computing; DNA cryptography; architectural framework; cryptanalysis; decryption time; digital signature encryption; digital signature generation; encryption algorithms; encryption time; information security; key size; plaintext size; steganography; Ciphers; DNA; DNA computing; Digital signatures; Encoding; Encryption; DNA; DNA computing; DNA cryptography; DNA digital coding (ID#:14-2308) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828061&isnumber=6827395
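The "DNA digital coding" keyword above usually denotes a 2-bit-per-nucleotide mapping between binary data and DNA bases. A minimal sketch, assuming the common A=00, C=01, G=10, T=11 assignment (the paper's actual coding table may differ), shows the encoding layer only, not an encryption scheme:

```python
# Hedged sketch: binary <-> DNA-strand encoding at 2 bits per nucleotide.
# The base assignment below is a common convention, assumed for illustration.

ENCODE = {"00": "A", "01": "C", "10": "G", "11": "T"}
DECODE = {v: k for k, v in ENCODE.items()}

def bytes_to_dna(data: bytes) -> str:
    """Encode bytes as a nucleotide string (4 nucleotides per byte)."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(ENCODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_to_bytes(strand: str) -> bytes:
    """Invert bytes_to_dna."""
    bits = "".join(DECODE[n] for n in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

msg = b"DNA"
strand = bytes_to_dna(msg)
assert len(strand) == 4 * len(msg)     # 4 nucleotides per byte
assert dna_to_bytes(strand) == msg     # lossless round trip
```

A DNA cryptosystem would then operate on such strands (e.g., via substitution or biologically inspired operations); the mapping itself provides no secrecy.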
  • Kishore Dutta, M.; Singh, A; Travieso, C.M.; Burget, R., "Generation Of Digital Signature From Multi-Feature Biometric Traits For Digital Right Management Control," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in, vol., no., pp.1,4, 6-8 March 2014. doi: 10.1109/RAECS.2014.6799558 This paper addresses the issue of ownership of digital images by embedding an imperceptible digital pattern in the image. The digital pattern is generated from multiple biometric features in a strategic manner so that identification of the individual subject can be done. Features from an iris image and a fingerprint image are strategically combined to generate the pattern. This digital pattern was embedded into and extracted from the host image, and experiments were also carried out when the image was subjected to signal processing attacks. Experimental results indicate that insertion of this digital pattern does not change the perceptual properties of the image, and the digital pattern survives signal processing attacks and can be extracted for unique identification. Keywords: biometrics (access control); digital rights management; digital signatures; image watermarking; biometric features; digital right management control; digital signature; fingerprint image; host image; imperceptible digital pattern; iris image; multifeature biometric traits; signal processing attacks; Biomedical imaging; Discrete cosine transforms; Fingerprint recognition; Gabor filters; Image recognition; PSNR; Watermarking; Digital Right Management; Fingerprint Recognition; Iris Pattern Recognition; Multimode Biometric Feature; Robustness; Signal Processing Attacks (ID#:14-2309) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799558&isnumber=6799496
  • Oder, Tobias; Pöppelmann, Thomas; Güneysu, Tim, "Beyond ECDSA and RSA: Lattice-based Digital Signatures On Constrained Devices," Design Automation Conference (DAC), 2014 51st ACM/EDAC/IEEE, vol., no., pp.1,6, 1-5 June 2014. doi: 10.1109/DAC.2014.6881437 All currently deployed asymmetric cryptography is broken with the advent of powerful quantum computers. We thus have to consider alternative solutions for systems with long-term security requirements (e.g., for long-lasting vehicular and avionic communication infrastructures). In this work we present an efficient implementation of BLISS, a recently proposed, post-quantum secure, and formally analyzed novel lattice-based signature scheme. We show that we can achieve significant performance of 35.3 ms and 6 ms for signing and verification, respectively, at a 128-bit security level on an ARM Cortex-M4F microcontroller. This shows that lattice-based cryptography can be efficiently deployed on today's hardware and provides security solutions for many use cases that can even withstand future threats. Keywords: (not provided) (ID#:14-2310) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881437&isnumber=6881325
  • Fisher, P.S.; Min Gyung Kwak; Eunjung Lee; Jinsuk Baek, "A Signature Scheme for Digital Imagery," Information Science and Applications (ICISA), 2014 International Conference on, vol., no., pp.1,4, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847337 We propose a signature scheme for identifying a related class of images based upon the content of the images. With the proposed scheme, we represent an image as a collection of rules based upon a technique using relationships derived from the pixels of images. This collection of relationships or rules is called a Finite Inductive sequence. These rules make up a collective storage structure which can be used to process an image. The rules used in processing an unknown image characterize the image. The storage requirement increases with the number of rules for an image, which is on the order of the number of pixels within the image. One way to alleviate the storage requirement associated with large images is to process the image using a wavelet transform, and then consider only the resulting high-frequency component of the transform as the input to this process. When a new image is submitted, the rules are used to recognize similarities between the stored image and the new image. The process will provide an interlinking mesh to images that are similar or have similar components, as a background process. Retrieval can then be done without additional work at the moment of retrieval. Keywords: content-based retrieval; image retrieval; wavelet transforms; collective storage structure; digital imagery; finite inductive sequences; high frequency component; interlinking mesh; signature scheme; wavelet transform; Databases; Face; Image recognition; Search problems; Tagging; Wavelet transforms (ID#:14-2311) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847337&isnumber=6847317
  • Huang Lu; Jie Li; Guizani, M., "Secure and Efficient Data Transmission for Cluster-Based Wireless Sensor Networks," Parallel and Distributed Systems, IEEE Transactions on , vol.25, no.3, pp.750,761, March 2014. doi: 10.1109/TPDS.2013.43 Secure data transmission is a critical issue for wireless sensor networks (WSNs). Clustering is an effective and practical way to enhance the system performance of WSNs. In this paper, we study a secure data transmission for cluster-based WSNs (CWSNs), where the clusters are formed dynamically and periodically. We propose two secure and efficient data transmission (SET) protocols for CWSNs, called SET-IBS and SET-IBOOS, by using the identity-based digital signature (IBS) scheme and the identity-based online/offline digital signature (IBOOS) scheme, respectively. In SET-IBS, security relies on the hardness of the Diffie-Hellman problem in the pairing domain. SET-IBOOS further reduces the computational overhead for protocol security, which is crucial for WSNs, while its security relies on the hardness of the discrete logarithm problem. We show the feasibility of the SET-IBS and SET-IBOOS protocols with respect to the security requirements and security analysis against various attacks. The calculations and simulations are provided to illustrate the efficiency of the proposed protocols. The results show that the proposed protocols have better performance than the existing secure protocols for CWSNs, in terms of security overhead and energy consumption. 
Keywords: digital signatures; protocols; telecommunication security; wireless sensor networks; Diffie-Hellman problem; SET-IBOOS; SET-IBS; cluster-based wireless sensor networks; computational overhead; discrete logarithm problem; efficient data transmission; identity-based digital signature scheme; identity-based online/offline digital signature scheme; protocol security; secure data transmission; security analysis; Cryptography; Data communication; Digital signatures; Protocols; Steady-state; Wireless sensor networks; Cluster-based WSNs; ID-based digital signature; ID-based online/offline digital signature; secure data transmission protocol (ID#:14-2312) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6464257&isnumber=6731354
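SET-IBS's security above rests on a Diffie-Hellman hardness assumption. For reference, the classic Diffie-Hellman key agreement that this family of assumptions generalizes can be sketched with a toy modulus (the paper works in a pairing domain, which this sketch does not model, and the prime below is far too small for real deployments):

```python
# Hedged sketch: textbook Diffie-Hellman key agreement. The 64-bit prime is
# illustrative only; real systems use groups of 2048 bits or more.
import secrets

p = 0xFFFFFFFFFFFFFFC5          # 2**64 - 59, a small prime modulus (toy size)
g = 5                           # public generator

a = secrets.randbelow(p - 2) + 1    # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1    # Bob's secret exponent

A = pow(g, a, p)                # Alice sends A to Bob
B = pow(g, b, p)                # Bob sends B to Alice

# Both sides derive the same shared secret g^(ab) mod p; an eavesdropper
# seeing only (p, g, A, B) must solve the Diffie-Hellman problem to get it.
assert pow(B, a, p) == pow(A, b, p)
```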
  • Kishore, N.; Kapoor, B., "An Efficient Parallel Algorithm For Hash Computation In Security And Forensics Applications," Advance Computing Conference (IACC), 2014 IEEE International, vol., no., pp.873,877, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779437 Hashing algorithms are used extensively in information security and digital forensics applications. This paper presents an efficient parallel algorithm for hash computation. It is a modification of the SHA-1 algorithm for faster parallel implementation in applications such as digital signatures and data preservation in digital forensics. The algorithm implements a recursive hash to break the chain dependencies of the standard hash function. We discuss the theoretical foundation for the work, including the collision probability and the performance implications. The algorithm is implemented using the OpenMP API and experiments performed using machines with multicore processors. The results show a performance gain of more than a factor of 3 when running on the 8-core configuration of the machine. Keywords: application program interfaces; cryptography; digital forensics; digital signatures; file organisation; parallel algorithms; probability; OpenMP API; SHA-1 algorithm; collision probability; data preservation; digital forensics; digital signature; hash computation; hashing algorithms; information security; parallel algorithm; standard hash function; Algorithm design and analysis; Conferences; Cryptography; Multicore processing; Program processors; Standards; Cryptographic Hash Function; Digital Forensics; Digital Signature; MD5; Multicore Processors; OpenMP; SHA-1 (ID#:14-2313) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779437&isnumber=6779283
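The "recursive hash to break the chain dependencies" idea above is commonly realized as a two-level hash tree: hash fixed-size chunks independently (these are the parallelizable units), then hash the concatenated chunk digests. A minimal sequential sketch with stdlib hashlib, using an illustrative chunk size and combining rule that are not necessarily the paper's:

```python
# Hedged sketch: two-level recursive hashing over independent chunks.
# Each leaf digest could be computed on a separate core; only the short
# final combining hash is inherently sequential.
import hashlib

CHUNK = 64 * 1024  # 64 KiB leaves (an assumed, illustrative size)

def recursive_sha1(data: bytes) -> str:
    """Hash each chunk independently, then hash the concatenated digests."""
    leaves = [hashlib.sha1(data[i:i + CHUNK]).digest()
              for i in range(0, len(data), CHUNK)]
    return hashlib.sha1(b"".join(leaves)).hexdigest()

payload = b"\x00" * (3 * CHUNK + 17)
d1 = recursive_sha1(payload)
assert d1 == recursive_sha1(payload)              # deterministic
assert d1 != recursive_sha1(payload + b"x")       # sensitive to any change
```

Note that the resulting digest differs from plain SHA-1 of the whole input, so both sides of a comparison must use the same construction; production systems today would also prefer SHA-256 over SHA-1.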
  • Dinu, D.D.; Togan, M., "DHCP Server Authentication Using Digital Certificates," Communications (COMM), 2014 10th International Conference on, pp.1,6, 29-31 May 2014. doi: 10.1109/ICComm.2014.6866756 In this paper we give an overview of the DHCP security issues and the related work done to secure the protocol. Then we propose a method based on the use of public key cryptography and digital certificates in order to authenticate the DHCP server and DHCP server responses, and to prevent in this way the rogue DHCP server attacks. We implemented and tested the proposed solution using different key and certificate types in order to find out the packet overhead and time consumed by the new added authentication option. Keywords: certification; cryptographic protocols; digital signatures; public key cryptography; DHCP security; DHCP server attacks; DHCP server authentication; digital certificates; digital signature; public key cryptography; Authentication; Digital signatures; IP networks; Message authentication; Protocols; Servers; DHCP; DHCP authentication; DHCP security; digital certificate; digital signature; replay detection method URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866756&isnumber=6866648
  • Benzaid, C.; Saiah, A; Badache, N., "An Enhanced Secure Pairwise Broadcast Time Synchronization Protocol in Wireless Sensor Networks," Parallel, Distributed and Network-Based Processing (PDP), 2014 22nd Euromicro International Conference on, vol., no., pp.569,573, 12-14 Feb. 2014. doi: 10.1109/PDP.2014.114 This paper proposes an Enhanced Secure Pairwise Broadcast Time Synchronization (E-SPBS) protocol that allows authenticated MAC-layer timestamping on high-data-rate radio interfaces. E-SPBS ensures the security of the receiver-only synchronization approach using a public-key-based cryptography authentication scheme. The robustness and accuracy of E-SPBS were evaluated through simulations and experiments on a MICAz platform. Both simulation and experimental results demonstrate that E-SPBS achieves high robustness to external and internal attacks with low energy consumption. However, while the simulation results indicate that E-SPBS can achieve an average accuracy of less than 1 ms, the experimental results show that the synchronization error is higher and not stable. This comparison gives a good indication of how much confidence can be put into simulation results. Keywords: access protocols; cryptographic protocols; public key cryptography; radio receivers; synchronisation; telecommunication security; wireless sensor networks; E-SPBS protocol; MAC-layer timestamping; MICAz platform; energy consumption; enhanced secure pairwise broadcast time synchronization protocol; high-data rate radio interfaces; public-key-based cryptography authentication scheme; receiver-only synchronization approach; wireless sensor networks; Accuracy; Authentication; Delays; Protocols; Synchronization; Wireless sensor networks; Digital Signatures; Receiver-Only Synchronization approach; Secure Time Synchronization; Sensor Networks (ID#:14-2314) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6787330&isnumber=6787236
  • Gulhane, G.; Mahajan, N.V., "Securing Multipath Routing Protocol Using Authentication Approach for Wireless Sensor Network," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, vol., no., pp.729,733, 7-9 April 2014. doi: 10.1109/CSNT.2014.153 Wireless sensor networks (WSNs) suffer from a variety of threats, such as the limited operational lifetime of sensor nodes and the security of information carried by sensor nodes. There is an increasing threat of malicious node attacks on WSNs. The black hole attack is one such security threat, in which traffic is redirected to a node that does not actually exist in the network. With a multipath routing protocol, the lifespan of the wireless sensor network is increased by distributing traffic among several paths instead of a single optimal path. Secured data communication is also one of the important research challenges in wireless sensor networks. A secure and authentic multipath routing protocol for wireless sensor networks is therefore proposed, which overcomes black hole attacks and provides secure data transmission in the network. Performance is measured in terms of different network parameters such as packet delivery fraction, energy consumption, normalized routing load and end-to-end delay. Keywords: delays; multipath channels; routing protocols; telecommunication security; wireless sensor networks; authentication approach; black hole attacks; end-to-end delay; energy consumption; multipath routing protocol; normalized routing load; operational lifetime; packet delivery fraction; secured data communication; wireless sensor network; Ad hoc networks; Energy efficiency; Routing; Routing protocols; Security; Wireless sensor networks; Ad hoc On Demand Multipath Vector Routing Protocol; Black Hole Attack; Digital Signature; Multipath routing protocol; wireless sensor network (ID#:14-2315) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821495&isnumber=6821334
  • Soderstrom, H., "Self-Contained Digitally Signed Documents: Approaching "What You See Is What You Sign"," Information Science and Applications (ICISA), 2014 International Conference on , vol., no., pp.1,4, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847461 The "what you see is what you sign" challenge has been part of digital signatures since the very start. Digital signatures apply to the bit level. Users see a higher level, so how can they know what they sign? A sample of real-life applications indicates that the issue is still open. We propose a method for improved assurance based on simple tenets. The document to be signed is a well-defined visual impression. Exactly that visual impression is signed. After signing all parties have a copy of the signed document, including its signatures. PDF makes it possible to store signatures and metadata in the document. The method is being implemented in an e-government web platform for a major Swedish city. Keywords: digital signatures; document handling; meta data; PDF; Swedish city; digital signature; e-government Web platform; metadata; self-contained digitally signed documents; visual impression; Digital signatures; Portable document format; Smart cards; Software; Visualization; XML (ID#:14-2316) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847461&isnumber=6847317
  • Benitez, Yesica Imelda Saavedra; Ben-Othman, Jalel; Claude, Jean-Pierre, "Performance Evaluation Of Security Mechanisms In RAOLSR Protocol for Wireless Mesh Networks," Communications (ICC), 2014 IEEE International Conference on, vol., no., pp.1808,1812, 10-14 June 2014. doi: 10.1109/ICC.2014.6883585 In this paper, we propose the IBE-RAOLSR and ECDSA-RAOLSR protocols for WMNs (Wireless Mesh Networks), which contribute to secure routing protocols. We have implemented the IBE (Identity Based Encryption) and ECDSA (Elliptic Curve Digital Signature Algorithm) methods to secure messages in RAOLSR (Radio Aware Optimized Link State Routing), namely TC (Topology Control) and Hello messages. We then compare the ECDSA-based RAOLSR with the IBE-based RAOLSR protocol. This study shows the great benefits of the IBE technique in securing the RAOLSR protocol for WMNs. Through extensive ns-3 (Network Simulator-3) simulations, results have shown that IBE-RAOLSR outperforms ECDSA-RAOLSR in terms of overhead and delay. Simulation results show that the use of the IBE-based RAOLSR provides a greater level of security with light overhead. Keywords: Delays; Digital signatures; IEEE 802.11 Standards; Routing; Routing protocols; IBE; Identity Based Encryption; Radio Aware Optimized Link State Routing; Routing Protocol; Security; Wireless Mesh Networks (ID#:14-2317) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883585&isnumber=6883277
  • Tsai, J., "An Improved Cross-Layer Privacy-Preserving Authentication in WAVE-enabled VANETs," Communications Letters, IEEE, vol. PP, no.99, pp.1, 1, May 2014. doi: 10.1109/LCOMM.2014.2323291 In 2013, Biswas and Misic proposed a new privacy preserving authentication scheme for WAVE-based vehicular ad hoc networks (VANETs), claiming that they used a variant of the Elliptic Curve Digital Signature Algorithm (ECDSA). However, our study has discovered that the authentication scheme proposed by them is vulnerable to a private key reveal attack. Any malicious receiving vehicle who receives a valid signature from a legal signing vehicle can gain access to the signing vehicle private key from the learned valid signature. Hence, the authentication scheme proposed by Biswas and Misic is insecure. We thus propose an improved version to overcome this weakness. The proposed improved scheme also supports identity revocation and trace. Based on this security property, the CA and a receiving entity (RSU or OBU) can check whether a received signature has been generated by a revoked vehicle. Security analysis is also conducted to evaluate the security strength of the proposed authentication scheme. Keywords: Authentication; Digital signatures; Elliptic curves; Law; Public key; Vehicles (ID#:14-2318) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814798&isnumber=5534602
  • Shah, N.; Desai, N.; Vashi, V., "Efficient Cryptography for Data Security," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, vol., no., pp.908,910, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828095 In today's world, sensitive data are increasingly used in communication over the Internet, so data security is a major concern for Internet users. The best solution is to use a cryptographic algorithm that encrypts data into a cipher, transfers it over the Internet, and then decrypts it back to the original data. This paper provides a solution to the data security problem through a cryptography technique based on ASCII values. Keywords: Internet; cryptography; ASCII value; Internet; cipher; cryptography algorithm; cryptography technique; data security; sensitive data; Digital signatures; Encryption; Internet; Public key; Reflective binary codes; Cryptography; Data Security (ID#:14-2319) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828095&isnumber=6827395
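The abstract does not specify its ASCII-value construction, so the sketch below only illustrates the general idea of such a cipher: shifting each character's code point by a key-derived amount. It is a hypothetical stand-in for the paper's (unspecified) scheme, and trivially breakable:

```python
# Hedged sketch: a toy ASCII-value cipher (repeating-key shift mod 128).
# Illustrative only -- this offers no real security.

def ascii_encrypt(plaintext: str, key: str) -> str:
    """Shift each code point by the matching key character's value."""
    return "".join(chr((ord(c) + ord(key[i % len(key)])) % 128)
                   for i, c in enumerate(plaintext))

def ascii_decrypt(ciphertext: str, key: str) -> str:
    """Undo ascii_encrypt with the same key."""
    return "".join(chr((ord(c) - ord(key[i % len(key)])) % 128)
                   for i, c in enumerate(ciphertext))

msg = "Sensitive data"
ct = ascii_encrypt(msg, "k3y")
assert ct != msg
assert ascii_decrypt(ct, "k3y") == msg
```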
  • Premnath, AP.; Ju-Yeon Jo; Yoohwan Kim, "Application of NTRU Cryptographic Algorithm for SCADA Security," Information Technology: New Generations (ITNG), 2014 11th International Conference on, vol., no., pp.341,346, 7-9 April 2014. doi: 10.1109/ITNG.2014.38 Critical infrastructure represents the basic facilities, services and installations necessary for the functioning of a community, such as water, power lines, transportation, or communication systems. Any act or practice that causes a real-time critical infrastructure system to impair its normal function and performance will have a debilitating impact on security and the economy, with direct implications for society. A SCADA (Supervisory Control and Data Acquisition) system is a control system widely used in critical infrastructure to monitor and control industrial processes autonomously. As SCADA architecture relies on computers, networks, applications and programmable controllers, it is more vulnerable to security threats/attacks. Traditional SCADA communication protocols such as IEC 60870, DNP3, IEC 61850, or Modbus did not provide any security services. Newer standards such as IEC 62351 and AGA-12 offer security features to handle attacks on SCADA systems. However, there are performance issues with the cryptographic solutions of these specifications when applied to SCADA systems. This research is aimed at improving the performance of SCADA security standards by employing NTRU, a fast and lightweight public key algorithm, for providing end-to-end security.
Keywords: SCADA systems; critical infrastructures; cryptographic protocols; process control; process monitoring; production engineering computing; programmable controllers; public key cryptography; transport protocols; AGA-12; DNP3; IEC 60870; IEC 61850; IEC 62351; Modbus; NTRU cryptographic algorithm; NTRU public key algorithm; SCADA architecture; SCADA communication protocols; SCADA security standards; TCP/IP; communication systems; end-to-end security; industrial process control; industrial process monitoring; power lines; programmable controllers; real-time critical infrastructure system; security threats/attacks; supervisory control and data acquisition system; transportation; water; Authentication; Digital signatures; Encryption; IEC standards; SCADA systems; AGA-12; Critical Infrastructure System; IEC 62351; NTRU cryptographic algorithm; SCADA communication protocols over TCP/IP (ID#:14-2320) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822221&isnumber=6822158
  • Ullah, R.; Nizamuddin; Umar, AI; ul Amin, N., "Blind Signcryption Scheme Based On Elliptic Curves," Information Assurance and Cyber Security (CIACS), 2014 Conference on, vol., no., pp.51,54, 12-13 June 2014. doi: 10.1109/CIACS.2014.6861332 In this paper, blind signcryption using an elliptic curve cryptosystem is presented. It satisfies the functionalities of confidentiality, message integrity, unforgeability, signer non-repudiation, message unlinkability, sender anonymity and forward secrecy. The proposed scheme has low computation and communication overhead compared to existing blind signcryption schemes and is best suited for mobile phone voting and m-commerce. Keywords: public key cryptography; blind signcryption scheme; communication overhead; confidentiality; elliptic curve cryptosystem; forward secrecy; m-commerce; message integrity; message unlink-ability; mobile phone voting; sender anonymity; signer nonrepudiation; unforgeability; Digital signatures; Elliptic curve cryptography; Elliptic curves; Equations; Mobile handsets; Anonymity; Blind Signature; Blind Signcryption; Elliptic curves; Signcryption (ID#:14-2321) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6861332&isnumber=6861314
  • Daehee Kim; Sunshin An, "Efficient And Scalable Public Key Infrastructure For Wireless Sensor Networks," Networks, Computers and Communications, The 2014 International Symposium on, vol., no., pp.1,5, 17-19 June 2014. doi: 10.1109/SNCC.2014.6866514 Ensuring security is essential in wireless sensor networks (WSNs) since a variety of WSN applications, including military, medical and industrial sectors, require several kinds of security services such as confidentiality, authentication, and integrity. However, ensuring security is not trivial in WSNs because of the limited resources of the sensor nodes. This has led many researchers to focus on symmetric key cryptography, which is computationally lightweight but requires a shared key between the sensor nodes. Public key cryptography (PKC) not only solves this problem gracefully, but also provides enhanced security services such as non-repudiation and digital signatures. To take advantage of PKC, each node must obtain the public key of the corresponding node via an authenticated method. The most widely used way is digital signatures signed by a certificate authority, which is part of a public key infrastructure (PKI). Since traditional PKI requires a huge amount of computation and communication, it can be a heavy burden for WSNs. In this paper, we propose an energy-efficient and scalable PKI for WSNs. This is accomplished by taking advantage of heterogeneous sensor networks and elliptic curve cryptography. Our proposed PKI is analyzed in terms of security, energy efficiency, and scalability, and is shown to be secure, energy efficient, and scalable.
Keywords: digital signatures; energy conservation; public key cryptography; telecommunication power management; wireless sensor networks; PKC; PKI; WSN; authenticated method; certificate authority; digital signatures; elliptic curve cryptography; energy efficiency; heterogeneous sensor networks; industrial sectors; medical sectors; military sectors; public key cryptography; public key infrastructure; security services; sensor nodes; symmetric key cryptography; wireless sensor networks; Cryptography; IP networks; Servers; Wireless communication; Wireless sensor networks; (k, n) Threshold Scheme; Certificate Authority; Elliptic Curve Cryptography; Heterogeneous Sensor Networks; Public Key Infrastructure; Wireless Sensor Networks (ID#:14-2322) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866514&isnumber=6866503
  • Vollala, S.; Varadhan, V.V.; Geetha, K.; Ramasubramanian, N., "Efficient Modular Multiplication Algorithms For Public Key Cryptography," Advance Computing Conference (IACC), 2014 IEEE International, pp.74,78, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779297 Modular exponentiation is an important operation for cryptographic transformations in public key cryptosystems like the Rivest-Shamir-Adleman, Diffie-Hellman and ElGamal schemes. Computing a^x mod n and a^x b^y mod n for very large x, y and n is fundamental to the efficiency of almost all public key cryptosystems and digital signature schemes. To achieve a high level of security, the word length in the modular exponentiations should be significantly large. The performance of public key cryptography is primarily determined by the implementation efficiency of the modular multiplication and exponentiation. As the words are usually large, and in order to optimize the time taken by these operations, it is essential to minimize the number of modular multiplications. In this paper we present efficient algorithms for computing a^x mod n and a^x b^y mod n. We propose four algorithms to evaluate modular exponentiation: Bit Forwarding (BFW) algorithms to compute a^x mod n, and two algorithms, namely Substitute and Reward (SRW) and Store and Forward (SFW), to compute a^x b^y mod n. All the proposed algorithms are efficient in terms of time and demand only minimal additional space to store the pre-computed values. These algorithms are suitable for devices with low computational power and limited storage.
Keywords: digital signatures; public key cryptography; BFW algorithms; bit forwarding algorithms; cryptographic transformations; digital signature schemes; modular exponentiation; modular multiplication algorithms; public key cryptography; public key cryptosystems; store and forward algorithms; substitute and reward algorithms; word length; Algorithm design and analysis; Ciphers; Conferences; Encryption; Public key cryptography; Modular Multiplication; Public key cryptography (PKC); RSA; binary exponentiation (ID#:14-2324) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779297&isnumber=6779283
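The baseline such algorithms improve on is binary (square-and-multiply) exponentiation, which computes a^x mod n in O(log x) modular multiplications by scanning the exponent's bits. A minimal sketch (Python's built-in three-argument pow(a, x, n) performs the same computation; the explicit loop is shown for clarity):

```python
# Hedged sketch: right-to-left binary (square-and-multiply) modular
# exponentiation, the classical baseline -- not the paper's BFW/SRW/SFW.

def binary_modexp(a: int, x: int, n: int) -> int:
    """Compute a**x % n in O(log x) modular multiplications."""
    result = 1
    a %= n
    while x > 0:
        if x & 1:                # current exponent bit is 1: multiply in a
            result = (result * a) % n
        a = (a * a) % n          # square for the next bit
        x >>= 1
    return result

def dual_modexp(a, x, b, y, n):
    """a^x * b^y mod n, the paper's second target, as two exponentiations."""
    return (binary_modexp(a, x, n) * binary_modexp(b, y, n)) % n

assert binary_modexp(7, 560, 561) == pow(7, 560, 561)
assert dual_modexp(3, 100, 5, 77, 1009) == (pow(3, 100, 1009) * pow(5, 77, 1009)) % 1009
```

Minimizing the number of multiplications in this loop (e.g., by precomputing and reusing partial products, as the paper's algorithms do) is exactly where the performance gains come from.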




Efficient Encryption

Efficient Encryption


The term "efficient encryption" generally refers to the speed of an algorithm, that is, the time needed to complete the calculations to encrypt or decrypt a coded text. The research cited here takes a broader view, looking at both hardware and software, and several of these works also address power consumption. The works appeared between January and August of 2014.

  • Pathak, S.; Kamble, R.; Chaursia, D., "An Efficient Data Encryption Standard Image Encryption Technique With RGB Random Uncertainty," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, vol., no., pp.413,421, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798366 Image encryption is an emerging area of focus nowadays. Creating heavy distortion between the original image and the encrypted image is a crucial aspect. In this paper we propose an efficient approach based on the Data Encryption Standard (DES). In our approach we use XOR in combination with DES encryption, which produces greater changes in the RGB combination as well as in the histogram. We also discuss our results, which show the variations: the higher the variation, the better the security. Keywords: cryptography; image processing; DES; RGB random uncertainty; XOR; efficient data encryption standard; heavy distortion; histogram; image encryption technique; variation security; Cryptography; IP networks; Image color analysis; Irrigation; Uncertainty; Chaos; DES; Image Encryption; Security Measures (ID#:14-2325) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798366&isnumber=6798279
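Stdlib Python has no DES, so the sketch below shows only the XOR stage that the abstract combines with DES encryption: XORing each RGB byte with a keystream so that the pixel histogram of the output no longer resembles the input's. The keystream derivation used here (SHA-256 in counter mode) is our assumption for illustration, not the paper's construction:

```python
# Hedged sketch: XOR-whitening of raw RGB bytes with a hash-derived
# keystream. Illustrative stand-in for the abstract's XOR stage only.
import hashlib

def xor_rgb(pixels: bytes, key: bytes) -> bytes:
    """XOR pixel bytes with a SHA-256 counter-mode keystream (toy)."""
    out = bytearray()
    for offset in range(0, len(pixels), 32):
        ks = hashlib.sha256(key + offset.to_bytes(8, "big")).digest()
        chunk = pixels[offset:offset + 32]
        out.extend(p ^ k for p, k in zip(chunk, ks))
    return bytes(out)

rgb = bytes([200, 10, 30] * 20)          # a flat-colored 20-pixel row
ct = xor_rgb(rgb, b"secret")
assert ct != rgb
assert xor_rgb(ct, b"secret") == rgb     # XOR is its own inverse
assert len(set(ct)) > len(set(rgb))      # histogram is spread out
```

Because XOR is self-inverse, the same function decrypts; a real pipeline would then pass the whitened bytes through DES (or, today, AES) block encryption.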
  • Seo, S.; Nabeel, M.; Ding, X.; Bertino, E., "An Efficient Certificateless Encryption for Secure Data Sharing in Public Clouds," Knowledge and Data Engineering, IEEE Transactions on , vol.26, no.9, pp.2107,2119, Sept. 2014. doi: 10.1109/TKDE.2013.138 We propose a mediated certificateless encryption scheme without pairing operations for securely sharing sensitive information in public clouds. Mediated certificateless public key encryption (mCL-PKE) solves the key escrow problem in identity based encryption and certificate revocation problem in public key cryptography. However, existing mCL-PKE schemes are either inefficient because of the use of expensive pairing operations or vulnerable against partial decryption attacks. In order to address the performance and security issues, in this paper, we first propose a mCL-PKE scheme without using pairing operations. We apply our mCL-PKE scheme to construct a practical solution to the problem of sharing sensitive information in public clouds. The cloud is employed as a secure storage as well as a key generation center. In our system, the data owner encrypts the sensitive data using the cloud generated users' public keys based on its access control policies and uploads the encrypted data to the cloud. Upon successful authorization, the cloud partially decrypts the encrypted data for the users. The users subsequently fully decrypt the partially decrypted data using their private keys. The confidentiality of the content and the keys is preserved with respect to the cloud, because the cloud cannot fully decrypt the information. We also propose an extension to the above approach to improve the efficiency of encryption at the data owner. We implement our mCL-PKE scheme and the overall cloud based system, and evaluate its security and performance. Our results show that our schemes are efficient and practical. 
Keywords: Access control; Artificial intelligence; Cloud computing; Encryption; Public key; Cloud computing; Data encryption; Public key cryptosystems; access control; certificateless cryptography; confidentiality (ID#:14-2326) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6574849&isnumber=6871455
  • Jiantao Zhou; Xianming Liu; Au, O.C.; Yuan Yan Tang, "Designing an Efficient Image Encryption-Then-Compression System via Prediction Error Clustering and Random Permutation," Information Forensics and Security, IEEE Transactions on , vol.9, no.1, pp.39,50, Jan. 2014. doi: 10.1109/TIFS.2013.2291625 In many practical scenarios, image encryption has to be conducted prior to image compression. This has led to the problem of how to design a pair of image encryption and compression algorithms such that compressing the encrypted images can still be efficiently performed. In this paper, we design a highly efficient image encryption-then-compression (ETC) system, where both lossless and lossy compression are considered. The proposed image encryption scheme operated in the prediction error domain is shown to be able to provide a reasonably high level of security. We also demonstrate that an arithmetic coding-based approach can be exploited to efficiently compress the encrypted images. More notably, the proposed compression approach applied to encrypted images is only slightly worse, in terms of compression efficiency, than the state-of-the-art lossless/lossy image coders, which take original, unencrypted images as inputs. In contrast, most of the existing ETC solutions induce significant penalty on the compression efficiency. Keywords: arithmetic codes; data compression; image coding; pattern clustering; prediction theory; random codes; ETC; arithmetic coding-based approach; image encryption-then-compression system design; lossless compression; lossless image coder; lossy compression; lossy image coder; prediction error clustering; random permutation; security; Bit rate; Decoding; Encryption; Image coding; Image reconstruction; Compression of encrypted image; encrypted domain signal processing (ID#:14-2327) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6670767&isnumber=6684617
  • Haojie Shen; Li Zhuo; Yingdi Zhao, "An Efficient Motion Reference Structure Based Selective Encryption Algorithm For H.264 Videos," Information Security, IET , vol.8, no.3, pp.199,206, May 2014. doi: 10.1049/iet-ifs.2012.0349 In this study, based on both the prediction mechanism of H.264 encoder and the syntax of H.264 bitstream, an efficient selective video encryption algorithm is proposed. The contributions of the study include two aspects. First, motion reference ratio (MRR) of macroblock (MB) is proposed to describe the inter-frame dependency among the adjacent frames. At the MB layer, MRRs of MBs are statistically analysed, and MBs to be encrypted are selected based on the statistical results. Second, at the bitstream layer of MBs, bit-sensitivity is proposed to represent the degree of importance of each bit in the compressed bitstream for reconstructed video quality. The most significant bits for reconstructed video quality are selected to be encrypted based on the bit-sensitivity of H.264 bitstream. The intra-prediction mode codewords, the sign bits of the non-zero coefficients and the info_suffix of motion vector difference codewords are extracted to be encrypted. The proposed two-layer selection scheme improves the encryption efficiency significantly. Experimental results demonstrate that both perceptual security and cryptographic security are achieved, and compared with the existing SEH264 algorithm, the proposed selective encryption algorithm can reduce the computational complexity by 50% on average. 
Keywords: computational complexity; cryptography; data compression; image motion analysis; image reconstruction; video codecs; video coding; H.264 bitstream; H.264 encoder; MB layer; MRR; SEH264 algorithm; bit-sensitivity; computational complexity; cryptographic security; interframe dependency; intra-prediction mode codewords; macroblock; motion reference ratio; motion reference structure; motion vector difference codewords; non-zero coefficients; perceptual security; prediction mechanism; selective video encryption algorithm; sign bits; two-layer selection scheme; video quality reconstruction (ID#:14-2328) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786860&isnumber=6786849
  • Yuhao Wang; Hao Yu; Sylvester, D.; Pingfan Kong, "Energy Efficient In-Memory AES Encryption Based On Nonvolatile Domain-Wall Nanowire," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014 , vol., no., pp.1,4, 24-28 March 2014. doi: 10.7873/DATE.2014.196 The widely applied Advanced Encryption Standard (AES) encryption algorithm is critical in secure big-data storage. Data oriented applications have imposed high throughput and low power, i.e., energy efficiency (J/bit), requirements when applying AES encryption. This paper explores an in-memory AES encryption using the newly introduced domain-wall nanowire. We show that all AES operations can be fully mapped to a logic-in-memory architecture by non-volatile domain-wall nanowire, called DW-AES. The experimental results show that DW-AES can achieve the best energy efficiency of 24 pJ/bit, which is 9X and 6.5X better than CMOS ASIC and memristive CMOL implementations, respectively. Under the same area budget, the proposed DW-AES exhibits 6.4X higher throughput and 29% power saving compared to a CMOS ASIC implementation; 1.7X higher throughput and 74% power reduction compared to a memristive CMOL implementation. Keywords: cryptography; low-power electronics; nanowires; random-access storage; Advanced Encryption Standard; CMOS ASIC implementations; DW-AES; data oriented applications; energy efficient in-memory AES encryption; logic-in-memory architecture; low power; memristive CMOL implementations; nonvolatile domain-wall nanowire; secure big-data storage; Application specific integrated circuits; CMOS integrated circuits; Ciphers; Encryption; Nanoscale devices; Nonvolatile memory; Throughput (ID#:14-2329) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800397&isnumber=6800201
  • Fei Huo; Guang Gong, "A New Efficient Physical Layer OFDM Encryption Scheme," INFOCOM, 2014 Proceedings IEEE , vol., no., pp.1024,1032, April 27 2014-May 2, 2014. doi: 10.1109/INFOCOM.2014.6848032 In this paper, we propose a new encryption scheme for OFDM systems. The reason for a physical layer approach is that it has the least impact on the system and is the fastest among all layers. The scheme is computationally secure against the adversary and requires fewer key streams compared with other approaches. The idea comes from the importance of orthogonality in OFDM symbols. Destroying the orthogonality creates intercarrier interference, which in turn causes a higher bit and symbol decoding error rate. The encryption is performed on the time domain OFDM symbols, which is equivalent to performing nonlinear masking in the frequency domain. Various attacks are explored in this paper, including known plaintext and ciphertext attacks, frequency domain attacks, time domain attacks, statistical attacks and random guessing attacks. We show our scheme is resistant against these attacks. Finally, simulations are conducted to compare the new scheme with conventional cipher encryption. Keywords: OFDM modulation; cryptography; decoding; intercarrier interference; OFDM symbols; OFDM systems; cipher encryption; ciphertext attack; efficient physical layer OFDM encryption scheme; frequency domain; frequency domain attack; intercarrier interferences; nonlinear masking; orthogonality; physical layer approach; plaintext attack; random guessing attack; statistical attack; symbol decoding error rate; time domain OFDM symbols; time domain attack; Ciphers; Encryption; Frequency-domain analysis; OFDM; Receivers; Time-domain analysis (ID#:14-2330) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848032&isnumber=6847911
  • Hamdi, M.; Hermassi, H.; Rhouma, R.; Belghith, S., "A New Secure And Efficient Scheme Of ADPCM Encoder Based On Chaotic Encryption," Advanced Technologies for Signal and Image Processing (ATSIP), 2014 1st International Conference on , vol., no., pp.7,11, 17-19 March 2014. doi: 10.1109/ATSIP.2014.6834580 This paper presents a new secure variant of the ADPCM encoders adopted by the CCITT as Adaptive Differential Pulse Code Modulation. This version provides encryption and decryption of voice simultaneously with the ADPCM encoding and decoding operations. The evaluation of the scheme showed better performance in terms of speed and security. Keywords: adaptive modulation; cryptography; differential pulse code modulation; speech coding; CCITT; adaptive differential pulse code modulation; chaotic encryption; efficient ADPCM encoder; secure ADPCM encoder; voice decryption; voice encryption; Chaotic communication; Decoding; Encoding; Encryption; Speech; Encryption-Compression; Speech coding; chaotic encryption (ID#:14-2331) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834580&isnumber=6834578
  • Hongchao Zhou; Wornell, G., "Efficient Homomorphic Encryption On Integer Vectors And Its Applications," Information Theory and Applications Workshop (ITA), 2014 , vol., no., pp.1,9, 9-14 Feb. 2014. doi: 10.1109/ITA.2014.6804228 Homomorphic encryption, aimed at enabling computation in the encrypted domain, is becoming important to a wide and growing range of applications, from cloud computing to distributed sensing. In recent years, a number of approaches to fully (or nearly fully) homomorphic encryption have been proposed, but to date the space and time complexity of the associated schemes has precluded their use in practice. In this work, we demonstrate that more practical homomorphic encryption schemes are possible when we require that not all encrypted computations be supported, but rather only those of interest to the target application. More specifically, we develop a homomorphic encryption scheme operating directly on integer vectors that supports three operations of fundamental interest in signal processing applications: addition, linear transformation, and weighted inner products. Moreover, when used in combination, these primitives allow us to efficiently and securely compute arbitrary polynomials. Some practically relevant examples of the computations supported by this framework are described, including feature extraction, recognition, classification, and data aggregation. Keywords: computational complexity; cryptography; polynomials; arbitrary polynomials; cloud computing; distributed sensing; homomorphic encryption scheme; integer vectors; space and time complexity; Encryption; Noise; Polynomials; Servers; Switches; Vectors (ID#:14-2332) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6804228&isnumber=6804199
  • Yongsung Jeon; Youngsae Kim; Jeongnyeo Kim, "Implementation of a Video Streaming Security System For Smart Device," Consumer Electronics (ICCE), 2014 IEEE International Conference on , vol., no., pp.97,100, 10-13 Jan. 2014. doi: 10.1109/ICCE.2014.6775925 This paper proposes an efficient hardware architecture to implement a video surveillance camera for security. The proposed smart camera will combine the Digital Media SoC with the low-cost FPGA. Each can perform video processing and security functions independently, and the FPGA hosts a novel video security module. This security module encrypts video stream raw data by using an efficient encryption method; the high 4 bits from the MSB of video data are encrypted by an AES algorithm. The proposed security module can encrypt raw video data with a maximum operation frequency of 39 MHz, which is possible on a low-cost FPGA. This paper also asserts that the proposed encryption method can obtain a similar video data security level while using less hardware resources than when all of the video data is encrypted. Keywords: cameras; cryptography; field programmable gate arrays; system-on-chip; telecommunication security; video streaming; video surveillance; AES algorithm; FPGA; MSB; digital media SoC; encryption method; frequency 39 MHz; hardware architecture; most significant bit; smart camera; smart device; system on chip; video data security; video stream raw data; video streaming security system; video surveillance camera; Computer architecture; Encryption; Field programmable gate arrays; Hardware; Streaming media (ID#:14-2333) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6775925&isnumber=6775879
  • Milioris, D.; Jacquet, P., "SecLoc: Encryption System Based On Compressive Sensing Measurements For Location Estimation," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on , vol., no., pp.171,172, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849210 In this paper we present an efficient encryption system based on Compressive Sensing, without the additional computational cost of a separate encryption protocol, when applied to indoor location estimation problems. The breakthrough of the method is the use of the weakly encrypted measurement matrices which are generated when solving the optimization problem to localize the source. It must be noted that in this method an alternative key is required to secure the system. Keywords: compressed sensing; cryptographic protocols; matrix algebra; optimisation; SecLoc system; compressive sensing measurements; encryption protocol; encryption system; location estimation; optimization problem; weakly encrypted measurement matrices; Bayes methods; Compressed sensing; Encryption; Estimation; Runtime; Servers; Vectors (ID#:14-2334) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849210&isnumber=6849127
  • Zibideh, W.Y.; Matalgah, M.M., "Energy Consumptions Analysis For A Class Of Symmetric Encryption Algorithm," Radio and Wireless Symposium (RWS), 2014 IEEE , vol., no., pp.268,270, 19-23 Jan. 2014. doi: 10.1109/RWS.2014.6830130 Due to the increased demand on wireless devices and their applications, the necessity for efficient and secure encryption algorithms is critical. A secure encryption algorithm is considered energy efficient if it uses a minimum number of CPU operations. In this paper we use numerical calculations to analyze the energy consumption for a class of encryption algorithms. We compute the number of arithmetic and logical instructions, in addition to the number of memory accesses, used by each of the algorithms under study. Given some information about the microprocessor used in encryption, we can compute the energy consumed per instruction and hence the total energy consumed by the encryption algorithm. In addition, we use computer simulations to compare the energy loss of transmitting encrypted information over the wireless channel. Therefore, in this paper we introduce a comprehensive analysis that combines these two approaches to characterize the energy consumption of encryption algorithms. Keywords: cryptography; energy conservation; energy consumption; error statistics; microcomputers; telecommunication channels; telecommunication power management; CPU operations; arithmetic instructions; encrypted information; energy consumptions analysis; energy efficiency; energy loss; logical instructions; memory access; microprocessor; secure encryption algorithms; symmetric encryption algorithm; wireless channel; wireless devices; Bit error rate; Clocks; Encryption; Energy consumption; Microprocessors; Wireless communication (ID#:14-2335) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830130&isnumber=6830066
  • Bhatnagar, G.; Wu, Q.M.J., "Biometric Inspired Multimedia Encryption Based on Dual Parameter Fractional Fourier Transform," Systems, Man, and Cybernetics: Systems, IEEE Transactions on , vol.44, no.9, pp.1234,1247, Sept. 2014. doi: 10.1109/TSMC.2014.2303789 In this paper, a novel biometric inspired multimedia encryption technique is proposed. For this purpose, a new advent in the definition of the fractional Fourier transform, namely, the dual parameter fractional Fourier transform (DP-FrFT), is proposed and used in multimedia encryption. The core idea behind the proposed encryption technique is to obtain a biometrically encoded bitstream followed by the generation of the keys used in the encryption process. Since the key generation process directly determines the security of the technique, this paper proposes an efficient method for generating a biometrically encoded bitstream from biometrics and its usage to generate the keys. Then, the encryption of multimedia data is done in the DP-FrFT domain with the help of Hessenberg decomposition and a nonlinear chaotic map. Finally, a reliable decryption process is proposed to construct the original multimedia data from the encrypted data. Theoretical analyses and computer simulations both confirm the high security and efficiency of the proposed encryption technique. Keywords: Eigenvalues and eigenfunctions; Encryption; Fourier transforms; Iris recognition; Multimedia communication; Biometrics; Hessenberg Decomposition; dual parameter fractional Fourier transform (DP-FrFT); encryption techniques; fractional Fourier transform; nonlinear chaotic map (ID#:14-2336) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748100&isnumber=6878502
  • Huang Qinlong; Ma Zhaofeng; Yang Yixian; Niu Xinxin; Fu Jingyi, "Attribute Based DRM Scheme With Dynamic Usage Control In Cloud Computing," Communications, China , vol.11, no.4, pp.50,63, April 2014. doi: 10.1109/CC.2014.6827568 In order to achieve fine-grained access control in cloud computing, existing digital rights management (DRM) schemes adopt attribute-based encryption as the main encryption primitive. However, these schemes suffer from inefficiency and cannot support dynamic updating of usage rights stored in the cloud. In this paper, we propose a novel DRM scheme with secure key management and dynamic usage control in cloud computing. We present a secure key management mechanism based on attribute-based encryption and proxy re-encryption. Only the users whose attributes satisfy the access policy of the encrypted content and who have effective usage rights can be able to recover the content encryption key and further decrypt the content. The attribute based mechanism allows the content provider to selectively provide fine-grained access control of contents among a set of users, and also enables the license server to implement immediate attribute and user revocation. Moreover, our scheme supports privacy-preserving dynamic usage control based on additive homomorphic encryption, which allows the license server in the cloud to update the users' usage rights dynamically without disclosing the plaintext. Extensive analytical results indicate that our proposed scheme is secure and efficient. 
Keywords: authorisation; cloud computing; data privacy; digital rights management; private key cryptography; public key cryptography; access policy; additive homomorphic encryption; attribute based DRM scheme; attribute-based encryption; cloud computing; content decryption; content encryption key; digital rights management; encrypted content recovery; fine-grained access control; immediate attribute; license server; privacy-preserving dynamic usage control; proxy re-encryption; secure key management; user revocation; Access control; Cloud computing; Encryption; Licenses; Privacy; attribute-based encryption; cloud computing; digital rights management; homomorphic encryption; usage control (ID#:14-2337) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827568&isnumber=6827540
  • Lembrikov, B.I; Ben-Ezra, Y.; Yurchenko, Yu., "Transmission of Chaotically Encrypted Signals Over An Optical Channel," Transparent Optical Networks (ICTON), 2014 16th International Conference on , vol., no., pp.1,1, 6-10 July 2014. doi: 10.1109/ICTON.2014.6876414 Privacy and security are important problems for contemporary information transmission systems. Traditional cryptosystems are based on software techniques in which a short secret parameter, the key, is used, or the message is encoded directly. A novel approach to encryption is based on a hardware communication system, where the encryption is applied directly at the physical layer of the communication system. Chaos communication is a direct encoding and decoding scheme for messages in a communication system. Optical communication with chaotic laser systems has attracted wide interest, and optical-fiber communication systems using chaotic semiconductor lasers have been investigated both theoretically and experimentally. The advantages of chaotic communications are the following: (i) efficient use of the bandwidth of the communication channel; (ii) utilization of the intrinsic nonlinearities in communication devices such as semiconductor diode lasers; (iii) large-signal modulation for efficient use of carrier power; (iv) a reduced number of components in the communication system; (v) security of communication based on chaotic encryption. Typically, chaotic signals can be generated by introducing delayed all-optical or electro-optical feedback into diode lasers. We propose a novel system of coupled-laser synchronization based on master and slave lasers in both the transmitter and the receiver. We carried out numerical simulations of an optical communication channel containing such a transmitter and receiver, and investigated theoretically the influence of optical fiber dispersion and nonlinearity on the chaotically encoded signal transmission efficiency.
The numerical simulations show that efficient transmission of the chaotically modulated waveform over a 100 km optical channel, and subsequent decoding, are possible. Keywords: (not provided) (ID#:14-2338) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876414&isnumber=6876260
  • Hazarika, N.; Saikia, M., "A Novel Partial Image Encryption Using Chaotic Logistic Map," Signal Processing and Integrated Networks (SPIN), 2014 International Conference on , vol., no., pp.231,236, 20-21 Feb. 2014. doi: 10.1109/SPIN.2014.6776953 Transmitted images may have many different applications, such as commercial, military, and medical. To protect the information from unauthorized access, secure image transfer is required, and this can be achieved by image data encryption. But encryption of the whole image is time consuming. This paper proposes selective encryption techniques using the spatial or DCT domain. The results of several experiments, statistical analyses, and sensitivity tests show that the proposed image encryption scheme provides an efficient and secure way for real-time image encryption and transmission. A chaotic logistic map is used to perform the different encryption/decryption operations in this method. Keywords: chaos; cryptography; discrete cosine transforms; image processing; statistical analysis; DCT domain; chaotic logistic map; decryption operation; discrete cosine transform; novel partial image data encryption; real-time image transmission; selective encryption techniques; sensitivity test; spatial domain; statistical analysis; unauthorized secure image transfer access; Chaos; Ciphers; Discrete cosine transforms; Encryption; Histograms; Logistics; Block Cipher; Chaos; DCT; Logistic map; Partial Encryption (ID#:14-2339) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6776953&isnumber=6776904
  • Wenhai Sun; Shucheng Yu; Wenjing Lou; Hou, Y.T.; Hui Li, "Protecting Your Right: Attribute-Based Keyword Search With Fine-Grained Owner-Enforced Search Authorization In The Cloud," INFOCOM, 2014 Proceedings IEEE , vol., no., pp.226,234, April 27 2014-May 2, 2014. doi: 10.1109/INFOCOM.2014.6847943 Search over encrypted data is a critically important enabling technique in cloud computing, where encryption-before-outsourcing is a fundamental solution to protecting user data privacy in the untrusted cloud server environment. Many secure search schemes have been focusing on the single-contributor scenario, where the outsourced dataset or the secure searchable index of the dataset are encrypted and managed by a single owner, typically based on symmetric cryptography. In this paper, we focus on a different yet more challenging scenario where the outsourced dataset can be contributed from multiple owners and are searchable by multiple users, i.e. multi-user multi-contributor case. Inspired by attribute-based encryption (ABE), we present the first attribute-based keyword search scheme with efficient user revocation (ABKS-UR) that enables scalable fine-grained (i.e. file-level) search authorization. Our scheme allows multiple owners to encrypt and outsource their data to the cloud server independently. Users can generate their own search capabilities without relying on an always online trusted authority. Fine-grained search authorization is also implemented by the owner-enforced access policy on the index of each file. Further, by incorporating proxy re-encryption and lazy re-encryption techniques, we are able to delegate heavy system update workload during user revocation to the resourceful semi-trusted cloud server. We formalize the security definition and prove the proposed ABKS-UR scheme selectively secure against chosen-keyword attack. Finally, performance evaluation shows the efficiency of our scheme. 
Keywords: authorisation; cloud computing; cryptography; data privacy; information retrieval; trusted computing; ABE; ABKS-UR scheme; always online trusted authority; attribute-based encryption; attribute-based keyword search; chosen-keyword attack; cloud computing; cloud server environment; data privacy; encryption; encryption-before-outsourcing; fine-grained owner-enforced search authorization; lazy re-encryption technique; owner-enforced access policy; proxy re-encryption technique; resourceful semi-trusted cloud server; searchable index; security definition; single-contributor search scenario; symmetric cryptography; user revocation; Authorization; Data privacy; Encryption; Indexes; Keyword search; Servers (ID#:14-2340) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847943&isnumber=6847911
  • Areed, N.F.F.; Obayya, S.S.A, "Multiple Image Encryption System Based on Nematic Liquid Photonic Crystal Layers," Lightwave Technology, Journal of, vol.32, no.7, pp.1344,1350, April 1, 2014. doi: 10.1109/JLT.2014.2300553 A novel design for a multiple symmetric image encryption system based on phase encoding is presented. The proposed encryptor utilizes a photonic bandgap (PBG) block in order to ensure high reflectivity over a relatively wide frequency range of interest. Also, the proposed encryptor can be utilized to encrypt two images simultaneously through the use of two nematic liquid crystal (NLC) layers across the PBG block. The whole system has been simulated numerically using the rigorous finite difference time domain method. To describe the robustness of the encryption, a root mean square of error and the signal to noise ratio are calculated. The statistical analysis of the retrieved images shows that the proposed image encryption system provides an efficient and secure way for real time image encryption and transmission. In addition, as the proposed system offers a number of advantages over existing systems such as simple design, symmetry allowing integrated encryptor/decryptor system, ultra high bandwidth and encrypting two images at the same time, it can be suitably exploited in optical imaging system applications.
Keywords: cryptography; finite difference time-domain analysis; image processing; nematic liquid crystals; photonic crystals; reflectivity; statistical analysis; finite difference time domain method; multiple image encryption system; nematic liquid photonic crystal layers; photonic bandgap block; reflectivity; root mean square error; signal to noise ratio; statistical analysis; Encryption; Histograms; Laser beams; Optical imaging; Optical reflection; Photonic crystals; Encryption; finite difference time domain (FDTD); liquid crystal (LC); photonic crystal (PhC) (ID#:14-2341) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6712899&isnumber=6740872
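Two of the entries above rely on the additive-homomorphic property: Zhou and Wornell operate directly on integer vectors, and Huang et al. update usage rights without decrypting them. Neither paper's actual construction is reproduced here; the toy scheme below (all names and the modulus M are illustrative assumptions) merely demonstrates the property those abstracts depend on: each vector element is masked with a one-time random key modulo M, so element-wise ciphertext addition yields a valid ciphertext of the element-wise sum under the summed keys.

```python
import random

M = 2 ** 16  # illustrative plaintext/ciphertext modulus

def keygen(n: int, rng: random.Random) -> list[int]:
    """One-time additive key: one random mask per vector element."""
    return [rng.randrange(M) for _ in range(n)]

def encrypt(vec: list[int], key: list[int]) -> list[int]:
    """Mask each element additively modulo M."""
    return [(v + k) % M for v, k in zip(vec, key)]

def decrypt(ct: list[int], key: list[int]) -> list[int]:
    """Remove the additive mask."""
    return [(c - k) % M for c, k in zip(ct, key)]

def add_ct(c1: list[int], c2: list[int]) -> list[int]:
    """Element-wise ciphertext addition; decrypts under the sum of the keys."""
    return [(a + b) % M for a, b in zip(c1, c2)]

rng = random.Random(7)
k1, k2 = keygen(3, rng), keygen(3, rng)
ct_sum = add_ct(encrypt([1, 2, 3], k1), encrypt([10, 20, 30], k2))
k_sum = [(a + b) % M for a, b in zip(k1, k2)]
assert decrypt(ct_sum, k_sum) == [11, 22, 33]  # additive homomorphism holds
```

A one-time mask like this supports only addition; the cited schemes are far richer (linear transforms, weighted inner products, key reuse), but the accounting of "add ciphertexts, add keys" is the same.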
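Jeon et al.'s camera design above encrypts only the high 4 bits of each video byte with AES on a low-cost FPGA. The software sketch below mimics just the partial-masking idea; a SHA-256 counter-mode keystream stands in for the AES hardware, and all function names are assumptions for illustration, not the paper's design.

```python
import hashlib

def nibble_keystream(key: bytes, n: int) -> bytes:
    """Expand a key into n pseudorandom 4-bit values (SHA-256 in counter mode)."""
    out = bytearray()
    counter = 0
    while len(out) < n:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        out.extend(b & 0x0F for b in block)  # keep one nibble per digest byte
        counter += 1
    return bytes(out[:n])

def encrypt_high_nibbles(data: bytes, key: bytes) -> bytes:
    """XOR only the top 4 bits of each byte; the low nibble passes through."""
    ks = nibble_keystream(key, len(data))
    return bytes(((b >> 4) ^ k) << 4 | (b & 0x0F) for b, k in zip(data, ks))

frame = bytes([0x1F, 0xA3, 0x07, 0xC8])           # stand-in for raw video bytes
enc = encrypt_high_nibbles(frame, b"camera-key")
assert encrypt_high_nibbles(enc, b"camera-key") == frame  # XOR is its own inverse
```

Because only half of each byte is masked, half the keystream (and, in hardware, half the cipher throughput) suffices; the paper's claim is that distorting the most significant bits alone already renders the video unusable.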
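Zibideh and Matalgah's energy analysis above reduces to a simple accounting identity: total energy is the sum, over instruction classes, of instruction count times per-instruction energy. The sketch below uses made-up counts and costs (not the paper's measurements) purely to show the bookkeeping.

```python
# Per-class instruction counts for a hypothetical cipher and per-instruction
# energy costs in nanojoules; both tables are illustrative assumptions.
INSTR_COUNTS = {"arith": 120_000, "logic": 80_000, "mem": 30_000}
ENERGY_NJ = {"arith": 0.5, "logic": 0.4, "mem": 1.2}

def total_energy_nj(counts: dict[str, int], cost: dict[str, float]) -> float:
    """Energy = sum over instruction classes of count x per-instruction energy."""
    return sum(counts[k] * cost[k] for k in counts)

e = total_energy_nj(INSTR_COUNTS, ENERGY_NJ)
assert abs(e - 128_000) < 1e-6  # 60,000 + 32,000 + 36,000 nJ
```

The paper pairs this static count with simulated retransmission losses over a noisy wireless channel; only the first half is sketched here.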
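Several entries above (Hazarika and Saikia's partial image encryption, and the chaotic schemes of Hamdi et al.) derive keystreams from chaotic maps such as the logistic map x_{n+1} = r * x_n * (1 - x_n). The sketch below is a toy illustration of that idea, not any of the cited schemes: it quantizes the map's orbit into bytes and XORs them with the data, so the same key (the seed and parameter r, arbitrary illustrative values here) both encrypts and decrypts.

```python
def logistic_keystream(seed: float, r: float, n: int) -> bytes:
    """Derive n keystream bytes from the orbit of the chaotic logistic map."""
    x = seed
    out = bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)           # logistic map iteration
        out.append(int(x * 256) % 256)  # quantize the state to one byte
    return bytes(out)

def xor_cipher(data: bytes, seed: float = 0.3141, r: float = 3.9999) -> bytes:
    """Encrypt or decrypt by XOR with the logistic-map keystream (involution)."""
    ks = logistic_keystream(seed, r, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

pixels = bytes(range(16))          # stand-in for image data
enc = xor_cipher(pixels)
assert xor_cipher(enc) == pixels   # the same key decrypts
```

Real chaotic ciphers add permutation and diffusion stages and must handle the non-portability of floating-point orbits across platforms; this sketch shows only the keystream idea.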
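Sun et al.'s ABKS-UR scheme above builds on attribute-based cryptography, which is too heavy to sketch here. The simpler symmetric searchable-index idea it generalizes can, however, be illustrated with keyed keyword tokens: the data owner HMACs each keyword, so the server can match search tokens against the index without learning the underlying words. Function names and the toy index layout are assumptions, not the paper's construction.

```python
import hashlib
import hmac

def keyword_token(key: bytes, word: str) -> str:
    """Deterministic keyed token for a keyword (symmetric searchable index)."""
    return hmac.new(key, word.lower().encode(), hashlib.sha256).hexdigest()

def build_index(key: bytes, doc_keywords: dict[str, list[str]]) -> dict[str, set[str]]:
    """Map keyword tokens to document ids; the server never sees plaintext words."""
    index: dict[str, set[str]] = {}
    for doc_id, words in doc_keywords.items():
        for w in words:
            index.setdefault(keyword_token(key, w), set()).add(doc_id)
    return index

def search(index: dict[str, set[str]], key: bytes, word: str) -> set[str]:
    """A key holder derives the token; the server matches it blindly."""
    return index.get(keyword_token(key, word), set())

key = b"owner-secret"
idx = build_index(key, {"d1": ["cloud", "encryption"], "d2": ["cloud"]})
assert search(idx, key, "cloud") == {"d1", "d2"}
assert search(idx, key, "revocation") == set()
```

Deterministic tokens leak search patterns and support only single-owner symmetric keys; the point of ABKS-UR is precisely to replace this shared key with per-user attribute credentials and revocable search capabilities.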


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Information Assurance

Information Assurance


The term "Information Assurance" was adopted in the late 1990s to cover what is now often referred to generically as "cybersecurity." Many still use the phrase, particularly in the U.S. government, for both teaching and research. Since it is a rather generic phrase, this topic covers a wide area. The articles cited here, from January to September of 2014, cover topics related both to technology and pedagogy.

  • Xiaohong Yuan; Williams, K.; Huiming Yu; Bei-Tseng Chu; Rorrer, A; Li Yang; Winters, K.; Kizza, J., "Developing Faculty Expertise in Information Assurance through Case Studies and Hands-On Experiences," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp.4938,4945, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.606 Though many Information Assurance (IA) educators agree that hands-on exercises and case studies improve student learning, hands-on exercises and case studies are not widely adopted due to the time needed to develop them and integrate them into curriculum. Under the support of the National Science Foundation (NSF) Scholarship for Service program, we implemented two faculty development workshops to disseminate effective hands-on exercises and case studies developed through multiple previous and ongoing grants, and to develop faculty expertise in IA. This paper reports our experience of holding the faculty summer workshops on teaching information assurance through case studies and hands-on experiences. The topics presented at the workshops are briefly described and the evaluation results of the workshops are discussed. The workshops provided a valuable opportunity for IA educators to connect with each other and form collaborations in teaching and research in IA. Keywords: computer science education; continuing professional development; teacher training; teaching; IA educators; NSF Scholarship for Service program; National Science Foundation Scholarship for Service program; case studies; curriculum; faculty development workshops; faculty expertise; faculty summer workshops; hands-on exercises; hands-on experiences; information assurance educators; student learning; teaching; Access control; Authentication; Conferences; Cryptography; Educational institutions (ID#:14-2342) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759209&isnumber=6758592
  • Romero-Mariona, J., "DITEC (DoD-Centric and Independent Technology Evaluation Capability): A Process for Testing Security," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on , vol., no., pp.24,25, March 31 2014-April 4 2014. doi: 10.1109/ICSTW.2014.52 Information Assurance (IA) is one of the Department of Defense's (DoD) top priorities today. IA technologies are constantly evolving to protect critical information from the growing number of cyber threats. Furthermore, DoD spends millions of dollars each year procuring, maintaining, and discontinuing various IA and Cyber technologies. Today, there is no process and/or standardized method for making informed decisions about which IA technologies are better/best. Due to this, efforts for selecting technologies go through very disparate evaluations that are often times non-repeatable and very subjective. DITEC (DoD-centric and Independent Technology Evaluation Capability) is a new capability that streamlines IA technology evaluation. DITEC defines a Process for evaluating whether or not a product meets DoD needs, Security Metrics for measuring how well needs are met, and a Framework for comparing various products that address the same IA technology area. DITEC seeks to reduce the time and cost of creating a test plan and expedite the test and evaluation effort for considering new IA technologies, consequently streamlining the deployment of IA products across DoD and increasing the potential to meet its needs. 
Keywords: data protection; decision making; military computing; security of data; DITEC; Department of Defense; DoD-centric and independent technology evaluation capability; IA technologies; critical information protection; cyber technologies; cyber threats; information assurance; informed decision making; security metrics; security testing process; Computer security; Conferences; Measurement; US Department of Defense; Usability; Decision-making Support; Evaluation; Information Assurance; Security; Security Metrics (ID#:14-2343) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825634&isnumber=6825623
  • Schumann, M.A; Drusinsky, D.; Michael, J.B.; Wijesekera, D., "Modeling Human-in-the-Loop Security Analysis and Decision-Making Processes," Software Engineering, IEEE Transactions on, vol.40, no.2, pp.154,166, Feb. 2014. doi: 10.1109/TSE.2014.2302433 This paper presents a novel application of computer-assisted formal methods for systematically specifying, documenting, statically and dynamically checking, and maintaining human-centered workflow processes. This approach provides for end-to-end verification and validation of process workflows, which is needed for process workflows that are intended for use in developing and maintaining high-integrity systems. We demonstrate the technical feasibility of our approach by applying it on the development of the US government's process workflow for implementing, certifying, and accrediting cross-domain computer security solutions. Our approach involves identifying human-in-the-loop decision points in the process activities and then modeling these via statechart assertions. We developed techniques to specify and enforce workflow hierarchies, which was a challenge due to the existence of concurrent activities within complex workflow processes. Some of the key advantages of our approach are: it results in development of a model that is executable, supporting both upfront and runtime checking of process-workflow requirements; aids comprehension and communication among stakeholders and process engineers; and provides for incorporating accountability and risk management into the engineering of process workflows. 
Keywords: decision making; formal specification; formal verification; government data processing; security of data; workflow management software; US government process workflow; United States; accountability; computer-assisted formal methods; cross-domain computer security solutions; decision-making process; end-to-end validation; end-to-end verification; high-integrity systems; human-centered workflow process; human-in-the-loop decision points; human-in-the-loop security analysis; process activities; process documentation; process dynamically checking; process maintenance; process specification; process statically checking; process workflows engineering ;risk management; statechart assertions; workflow hierarchies; Analytical models; Business; Formal specifications; Object oriented modeling; Runtime; Software; Unified modeling language; Formal methods; information assurance; process modeling; software engineering; statechart assertions; verification and validation (ID#:14-2344) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6727512&isnumber=6755497
  • Hershey, P.C.; Rao, S.; Silio, C.B.; Narayan, A, "System of Systems for Quality-of-Service Observation and Response in Cloud Computing Environments," Systems Journal, IEEE, vol. PP, no.99, pp.1, 11, January 2014. doi: 10.1109/JSYST.2013.2295961 As military, academic, and commercial computing systems evolve from autonomous entities that deliver computing products into network centric enterprise systems that deliver computing as a service, opportunities emerge to consolidate computing resources, software, and information through cloud computing. Along with these opportunities come challenges, particularly to service providers and operations centers that struggle to monitor and manage quality of service (QoS) for these services in order to meet customer service commitments. Traditional approaches fall short in addressing these challenges because they examine QoS from a limited perspective rather than from a system-of-systems (SoS) perspective applicable to a net-centric enterprise system in which any user from any location can share computing resources at any time. This paper presents a SoS approach to enable QoS monitoring, management, and response for enterprise systems that deliver computing as a service through a cloud computing environment. A concrete example is provided for application of this new SoS approach to a real-world scenario (viz., distributed denial of service). Simulated results confirm the efficacy of the approach. Keywords: Cloud computing; Delays; Monitoring; Quality of service; Security; Cloud computing; distributed denial of service (DDoS);enterprise systems; information assurance; net centric; quality of service (QoS); security; service-oriented architecture (SOA); systems of systems (SoS) (ID#:14-2345) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6729062&isnumber=4357939
  • Kowtko, M.A, "Biometric Authentication For Older Adults," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, pp.1,6, 2-2 May 2014. doi: 10.1109/LISAT.2014.6845213 In recent times, cyber-attacks and cyber warfare have threatened network infrastructures from across the globe. The world has reacted by increasing security measures through the use of stronger passwords, strict access control lists, and new authentication means; however, while these measures are designed to improve security and Information Assurance (IA), they may create accessibility challenges for older adults and people with disabilities. Studies have shown that the memory performance of older adults declines with age. Therefore, it becomes increasingly difficult for older adults to remember random strings of characters or passwords that are 12 or more characters long. How are older adults challenged by security measures (passwords, CAPTCHA, etc.) and how does this affect their accessibility to engage in online activities or with mobile platforms? While username/password authentication, CAPTCHA, and security questions do provide adequate protection, they are still vulnerable to cyber-attacks. Passwords can be compromised by brute force, dictionary, and social engineering style attacks. CAPTCHA, a type of challenge-response test, was developed to ensure that user inputs were not manipulated by machine-based attacks. Unfortunately, CAPTCHA are now being undermined by new vulnerabilities and exploits. Insecure implementations through code or server interaction have circumvented CAPTCHA. New viruses and malware now utilize character recognition as a means to circumvent CAPTCHA [1]. Security questions, another challenge-response test that attempts to authenticate users, can also be compromised through social engineering attacks and spyware.
Since these common security measures are increasingly being compromised, many security professionals are turning towards biometric authentication. Biometric authentication is any form of human biological measurement or metric that can be used to identify and authenticate an authorized user of a secure system. Biometric authentication can include fingerprint, voice, iris, facial, keystroke, and hand geometry [2]. Biometric authentication is also less affected by traditional cyber-attacks. However, is biometric authentication completely secure? This research will examine the security challenges and attacks that may risk the security of biometric authentication. Recently, medical professionals in the TeleHealth industry have begun to investigate the effectiveness of biometrics. In the United States alone, the population of older adults has increased significantly, with nearly 10,000 adults per day reaching the age of 65 and older [3]. Although people are living longer, that does not mean that they are living healthier. Studies have shown the U.S. healthcare system is being inundated by older adults. As security within the healthcare industry increases, many believe that biometric authentication is the answer. However, there are potential problems, especially in the older adult population. The largest problem is authentication of older adults with medical complications. Cataracts, stroke, congestive heart failure, hard veins, and other ailments may challenge biometric authentication. Since biometrics often rely on metrics and measurements of biological features, any one of these conditions, among others, could potentially affect the verification of users. This research will analyze how such conditions among older adults affect the biometric verification process.
Keywords: authorisation; biometrics (access control); invasive software; medical administrative data processing; mobile computing; CAPTCHA; Cataracts; IA; TeleHealth industry ;US healthcare system; access control lists; authentication means; biometric authentication; challenge-response test; congestive heart failure; cyber warfare; cyber-attacks; dictionary; hard veins; healthcare industry; information assurance; machine-based attacks; medical professionals; mobile platforms; network infrastructures; older adults; online activities; security measures; security professionals; social engineering style attacks; spyware; stroke; username-password authentication; Authentication; Barium; CAPTCHAs; Computers; Heart; Iris recognition; Biometric Authentication; CAPTCHA; Cyber-attacks; Information Security; Older Adults; Telehealth (ID#:14-2346) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845213&isnumber=6845183
  • Yier Jin, "EDA Tools Trust Evaluation Through Security Property Proofs," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp.1,4, 24-28 March 2014. doi: 10.7873/DATE.2014.260 The security concerns of EDA tools have long been ignored because IC designers and integrators only focus on their functionality and performance. This lack of trusted EDA tools hampers hardware security researchers' efforts to design trusted integrated circuits. To address this concern, a novel EDA tools trust evaluation framework has been proposed to ensure the trustworthiness of EDA tools through its functional operation, rather than scrutinizing the software code. As a result, the newly proposed framework lowers the evaluation cost and is a better fit for hardware security researchers. To support the EDA tools evaluation framework, a new gate-level information assurance scheme is developed for security property checking on any gate-level netlist. Helped by the gate-level scheme, we expand the territory of proof-carrying based IP protection from RT-level designs to gate-level netlists, so that most commercially traded third-party IP cores are under the protection of proof-carrying based security properties. Using a sample AES encryption core, we successfully prove the trustworthiness of Synopsys Design Compiler in generating a synthesized netlist. Keywords: cryptography; electronic design automation; integrated circuit design; AES encryption core; EDA tools trust evaluation; Synopsys design compiler; functional operation; gate-level information assurance scheme; gate-level netlist; hardware security researchers; proof-carrying based IP protection; security property proofs; software code; third-party IP cores; trusted integrated circuits; Hardware; IP networks; Integrated circuits; Logic gates; Sensitivity; Trojan horses (ID#:14-2347) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800461&isnumber=6800201
  • Whitmore, J.; Turpe, S.; Triller, S.; Poller, A; Carlson, C., "Threat Analysis In The Software Development Lifecycle," IBM Journal of Research and Development, vol.58, no.1, pp.6:1, 6:13, Jan.-Feb. 2014. doi: 10.1147/JRD.2013.2288060 Businesses and governments that deploy and operate IT (information technology) systems continue to seek assurance that software they procure has the security characteristics they expect. The criteria used to evaluate the security of software are expanding from static sets of functional and assurance requirements to complex sets of evidence related to development practices for design, coding, testing, and support, plus consideration of security in the supply chain. To meet these evolving expectations, creators of software are faced with the challenge of consistently and continuously applying the most current knowledge about risks, threats, and weaknesses to their existing and new software assets. Yet the practice of threat analysis remains an art form that is highly subjective and reserved for a small community of security experts. This paper reviews the findings of an IBM-sponsored project with the Fraunhofer Institute for Secure Information Technology (SIT) and the Technische Universitat Darmstadt. This project investigated aspects of security in software development, including practical methods for threat analysis. The project also examined existing methods and tools, assessing their efficacy for software development within an open-source software supply chain. These efforts yielded valuable insights plus an automated tool and knowledge base that has the potential for overcoming some of the current limitations of secure development on a large scale. 
Keywords: Analytical models; Business; Computer security; Encoding; Government; Information technology; Software development; information assurance (ID#:14-2348) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6717070&isnumber=6717043
  • Beato, F.; Peeters, R., "Collaborative Joint Content Sharing For Online Social Networks," Pervasive Computing and Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on , vol., no., pp.616,621, 24-28 March 2014. doi: 10.1109/PerComW.2014.6815277 Online social networks' (OSNs) epic popularity has accustomed users to the ease of sharing information. At the same time, OSNs have been a focus of privacy concerns with respect to the information shared. Therefore, it is important that users have some assurance when sharing on OSNs: popular OSNs provide users with mechanisms to protect shared information access rights. However, these mechanisms do not allow collaboration when defining access rights for joint content related to more than one user (e.g., party pictures in which different users are being tagged). In fact, the access rights list for such content is represented by the union of the access lists defined by each related user, which could result in unwanted leakage. We propose a collaborative access control scheme, based on secret sharing, in which sharing of content on OSNs is decided collaboratively by a number of related users. We demonstrate that such a mechanism is feasible and benefits users' privacy. Keywords: authorisation; data privacy; groupware; social networking (online); OSN; access rights list; collaborative access control scheme; collaborative joint content sharing; information sharing; online social networks; privacy concerns; secret sharing; unwanted leakage; user privacy; Access control; Collaboration; Encryption; Joints; Privacy (ID#:14-2349) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815277&isnumber=6815123
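The collaborative access-control idea in the Beato and Peeters entry above can be illustrated with a minimal sketch. This is not the paper's construction: the toy below assumes an n-of-n XOR secret-sharing scheme (all function names are hypothetical), so a content key is recoverable only when every tagged co-owner contributes a share, i.e., access is decided jointly rather than by the union of individual access lists.

```python
import os

def split_key(key: bytes, n: int) -> list:
    """n-of-n XOR secret sharing: every co-owner must contribute a share."""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]  # n-1 random shares
    last = key
    for s in shares:                     # final share makes the XOR equal key
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares: list) -> bytes:
    """XOR all contributed shares together to reconstruct the content key."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = os.urandom(16)                     # symmetric key protecting the photo
shares = split_key(key, 3)               # three users tagged in the content
assert combine(shares) == key            # all three consent: key recovered
assert combine(shares[:2]) != key        # any missing share blocks access
```

With an XOR scheme the policy is strict unanimity; a threshold scheme (e.g., Shamir's) would allow k-of-n consent instead.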
  • Adjei, J.K., "Explaining the Role of Trust in Cloud Service Acquisition," Mobile Cloud Computing, Services, and Engineering (MobileCloud), 2014 2nd IEEE International Conference on, pp.283, 288, 8-11 April 2014. doi: 10.1109/MobileCloud.2014.48 Effective digital identity management system is a critical enabler of cloud computing, since it supports the provision of the required assurances to the transacting parties. Such assurances sometimes require the disclosure of sensitive personal information. Given the prevalence of various forms of identity abuses on the Internet, a re-examination of the factors underlying cloud services acquisition has become critical and imperative. In order to provide better assurances, parties to cloud transactions must have confidence in service providers' ability and integrity in protecting their interest and personal information. Thus a trusted cloud identity ecosystem could promote such user confidence and assurances. Using a qualitative research approach, this paper explains the role of trust in cloud service acquisition by organizations. The paper focuses on the processes of acquisition of cloud services by financial institutions in Ghana. The study forms part of a comprehensive study on the monetization of personal identity information. Keywords: cloud computing; data protection; trusted computing; Ghana; Internet; cloud computing; cloud services acquisition; cloud transactions; digital identity management system; financial institutions; identity abuses; interest protection; organizations; personal identity information; sensitive personal information; service provider ability; service provider integrity; transacting parties; trusted cloud identity ecosystem; user assurances; user confidence; Banking; Cloud computing; Context; Law; Organizations; Privacy; cloud computing; information privacy; mediating; trust (ID#:14-2350) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834977&isnumber=6823830
  • Kekkonen, T.; Kanstren, T.; Hatonen, K., "Towards Trusted Environment in Cloud Monitoring," Information Technology: New Generations (ITNG), 2014 11th International Conference on, pp.180,185, 7-9 April 2014. doi: 10.1109/ITNG.2014.104 This paper investigates the problem of providing trusted monitoring information on a cloud environment to the cloud customers. The general trust between customer and provider is taken as a starting point. The paper discusses possible methods to strengthen this trust. It focuses on establishing a chain of trust inside the provider infrastructure to supply monitoring data for the customer. The goal is to enable delivery of state and event information to parties outside the cloud infrastructure. The current technologies and research are reviewed for the solution and the usage scenario is presented. Based on such technology, higher assurance of the cloud can be presented to the customer. This allows customers with high security requirements and responsibilities to have more confidence in accepting the cloud as their platform of choice. Keywords: cloud computing; security of data; trusted computing; cloud customers; cloud monitoring; cloud service provider infrastructure; monitoring data; security requirements; trusted environment; trusted monitoring information; Hardware; Monitoring; Operating systems; Probes; Registers; Security; Virtual machining; TPM; cloud; integrity measurement; remote attestation; security concerns; security measurement (ID#:14-2351) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822195&isnumber=6822158
  • Dubrova, E.; Naslund, M.; Selander, G., "Secure and Efficient LBIST For Feedback Shift Register-Based Cryptographic Systems," Test Symposium (ETS), 2014 19th IEEE European, pp.1,6, 26-30 May 2014. doi: 10.1109/ETS.2014.6847821 Cryptographic methods are used to protect confidential information against unauthorised modification or disclosure. Cryptographic algorithms providing high assurance exist, e.g. AES. However, many open problems related to assuring security of a hardware implementation of a cryptographic algorithm remain. Security of a hardware implementation can be compromised by a random fault or a deliberate attack. The traditional testing methods are good at detecting random faults, but they do not provide adequate protection against malicious alterations of a circuit known as hardware Trojans. For example, a recent attack on Intel's Ivy Bridge processor demonstrated that the traditional Logic Built-In Self-Test (LBIST) may fail even the simple case of stuck-at fault type of Trojans. In this paper, we present a novel LBIST method for Feedback Shift Register (FSR)-based cryptographic systems which can detect such Trojans. The specific properties of FSR-based cryptographic systems allow us to reach 100% single stuck-at fault coverage with a small set of deterministic tests. The test execution time of the proposed method is at least two orders of magnitude shorter than the one of the pseudo-random pattern-based LBIST. Our results enable an efficient protection of FSR-based cryptographic systems from random and malicious stuck-at faults. 
Keywords: cryptography; logic testing; shift registers; FSR-based cryptographic systems; Ivy Bridge processor; LBIST method; confidential information protection; cryptographic algorithms; cryptographic methods; deliberate attack; feedback shift register-based cryptographic systems; hardware Trojans; logic built-in self-test; random fault attack; stuck-at fault coverage; Boolean functions; Circuit faults; Clocks; Cryptography; Logic gates; Trojan horses; Vectors (ID#:14-2352) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847821&isnumber=6847779
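To make the stuck-at fault idea in the Dubrova et al. entry concrete, here is a minimal software model, not the paper's method: a toy 4-bit Fibonacci LFSR whose output sequence diverges from the fault-free reference once one register bit is forced stuck-at-0. Detecting exactly this kind of deviation, with a small deterministic test set rather than long pseudo-random patterns, is what their LBIST scheme targets. The tap positions and seed below are illustrative assumptions.

```python
def lfsr_run(seed, taps, steps, stuck_at=None):
    """Simulate a Fibonacci LFSR; optionally model one bit stuck at 0."""
    state = list(seed)
    out = []
    for _ in range(steps):
        if stuck_at is not None:
            state[stuck_at] = 0          # injected stuck-at-0 fault
        fb = 0
        for t in taps:                   # feedback = XOR of tapped bits
            fb ^= state[t]
        out.append(state[-1])            # observe the output bit
        state = [fb] + state[:-1]        # shift register one step
    return out

seed = [1, 0, 0, 1]                      # deterministic test seed
good = lfsr_run(seed, taps=(0, 1), steps=15)               # fault-free run
bad = lfsr_run(seed, taps=(0, 1), steps=15, stuck_at=2)    # faulty run
assert good != bad  # the deterministic pattern exposes the stuck-at fault
```

A real LBIST would compare a compacted signature (e.g., a MISR value) of the observed sequence against the fault-free reference rather than the raw bit stream.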
  • Zlomislic, Vinko; Fertalj, Kresimir; Sruk, Vlado, "Denial of Service Attacks: An Overview," Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on, vol., no., pp.1,6, 18-21 June 2014. doi: 10.1109/CISTI.2014.6876979 Denial of service (DoS) attacks present one of the most significant threats to assurance of dependable and secure information systems. Rapid development of new and increasingly sophisticated attacks requires resourcefulness in designing and implementing reliable defenses. This paper presents an overview of current DoS attack and defense concepts, from a theoretical and practical point of view. Considering the elaborated DoS mechanisms, main directions are proposed for future research required in defending against the evolving threat. Keywords: Computer crime; Filtering; Floods; Protocols; Reliability; Servers; DDoS; Denial of Service; Denial of Sustainability; DoS; Network Security; System Security (ID#:14-2353) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876979&isnumber=6876860
  • Almohri, H.M.J.; Danfeng Yao; Kafura, D., "Process Authentication for High System Assurance," Dependable and Secure Computing, IEEE Transactions on, vol.11, no.2, pp.168,180, March-April 2014. doi: 10.1109/TDSC.2013.29 This paper points out the need in modern operating system kernels for a process authentication mechanism, where a process of a user-level application proves its identity to the kernel. Process authentication is different from process identification. Identification is a way to describe a principal; PIDs or process names are identifiers for processes in an OS environment. However, the information such as process names or executable paths that is conventionally used by OS to identify a process is not reliable. As a result, malware may impersonate other processes, thus violating system assurance. We propose a lightweight secure application authentication framework in which user-level applications are required to present proofs at runtime to be authenticated to the kernel. To demonstrate the application of process authentication, we develop a system call monitoring framework for preventing unauthorized use or access of system resources. It verifies the identity of processes before completing the requested system calls. We implement and evaluate a prototype of our monitoring architecture in Linux. The results from our extensive performance evaluation show that our prototype incurs reasonably low overhead, indicating the feasibility of our approach for cryptographically authenticating applications and their processes in the operating system. 
Keywords: Linux; authorization; cryptography; operating system kernels; software architecture; software performance evaluation; system monitoring; Linux; cryptographic authenticating applications; high system assurance; modern operating system kernels; monitoring architecture; performance evaluation; process authentication mechanism; process identification; requested system calls; secure application authentication framework; system call monitoring framework; unauthorized system resource access prevention; unauthorized system resource use prevention; user-level application; Authentication; Kernel; Malware; Monitoring; Runtime; Operating system security; process authentication; secret application credential; system call monitoring (ID#:14-2354) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6560050&isnumber=6785951
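The Almohri et al. framework runs inside the OS kernel; the user-space toy below only illustrates the underlying idea of authenticating a process by a secret credential rather than by its name or executable path, which the paper argues are unreliable identifiers. All names here (REGISTRY, prove_identity, authenticate) are hypothetical stand-ins, and a simple HMAC challenge-response replaces the paper's actual protocol.

```python
import hashlib
import hmac
import os

# Kernel-side registry: application identity -> secret credential
# (a stand-in for the paper's secret application credentials).
REGISTRY = {"trusted_app": os.urandom(32)}

def prove_identity(app_name: str, nonce: bytes, secret: bytes) -> bytes:
    """Application side: answer the kernel's challenge with an HMAC proof."""
    return hmac.new(secret, app_name.encode() + nonce, hashlib.sha256).digest()

def authenticate(app_name: str, nonce: bytes, proof: bytes) -> bool:
    """Kernel side: verify the proof before honoring a system call."""
    secret = REGISTRY.get(app_name)
    if secret is None:
        return False
    expected = hmac.new(secret, app_name.encode() + nonce,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, proof)

nonce = os.urandom(16)                   # fresh challenge per request
proof = prove_identity("trusted_app", nonce, REGISTRY["trusted_app"])
assert authenticate("trusted_app", nonce, proof)
assert not authenticate("malware", nonce, proof)   # impersonation fails
```

Because the proof depends on a secret the malware does not hold, merely copying a process name or path no longer suffices to impersonate an authorized application.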
  • Xixiang Lv; Yi Mu; Hui Li, "Non-Interactive Key Establishment for Bundle Security Protocol of Space DTNs," Information Forensics and Security, IEEE Transactions on, vol.9, no.1, pp.5,13, Jan. 2014. doi: 10.1109/TIFS.2013.2289993 To ensure the authenticity, integrity, and confidentiality of bundles, the in-transit Protocol Data Units of bundle protocol (BP) in space delay/disruption tolerant networks (DTNs), the Consultative Committee for Space Data Systems bundle security protocol (BSP) specification suggests four IPsec style security headers to provide four aspects of security services. However, this specification leaves key management as an open problem. Aiming to address the key establishment issue for BP, in this paper, we utilize a time-evolving topology model and two-channel cryptography to design efficient and noninteractive key exchange protocol. A time-evolving model is used to formally model the periodic and predetermined behavior patterns of space DTNs, and therefore, a node can schedule when and to whom it should send its public key. Meanwhile, the application of two-channel cryptography enables DTN nodes to exchange their public keys or revocation status information, with authentication assurance and in a noninteractive manner. The proposed scheme helps to establish a secure context to support for BSP, tolerating high delays, and unexpected loss of connectivity of space DTNs. 
Keywords: cryptographic protocols; delay tolerant networks; space communication links; telecommunication channels; telecommunication security; BSP specification; DTN nodes; IPsec style security headers; authentication assurance; authenticity; bundle security protocol; connectivity loss; consultative committee; delay-disruption tolerant networks; in-transit protocol data units; noninteractive key establishment; noninteractive key exchange protocol; noninteractive manner; revocation status information; security services; space DTN; space data systems bundle security protocol; time-evolving model; time-evolving topology model; two-channel cryptography; Authentication; Delays; Message authentication; Protocols; Public key; Space-based delay tolerant networks; bundle authentication; key establishment (ID#:14-2355) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6657823&isnumber=6684617


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Insider Threat

Insider Threat


The insider threat continues to grow, and with it the need to develop technical solutions to the problem. Through August of 2014, however, little original scholarship had been published about research in this important area. The half dozen articles cited here are all of the works found in the academic literature for the year.

  • Szott, S., "Selfish Insider Attacks In IEEE 802.11s Wireless Mesh Networks," Communications Magazine, IEEE, vol.52, no.6, pp.227,233, June 2014. doi: 10.1109/MCOM.2014.6829968 The IEEE 802.11s amendment for wireless mesh networks does not provide incentives for stations to cooperate and is particularly vulnerable to selfish insider attacks in which a legitimate network participant hopes to increase its QoS at the expense of others. In this tutorial we describe various attacks that can be executed against 802.11s networks and also analyze existing attacks and identify new ones. We also discuss possible countermeasures and detection methods and attempt to quantify the threat of the attacks to determine which of the 802.11s vulnerabilities need to be secured with the highest priority. Keywords: telecommunication security; wireless LAN; wireless mesh networks; IEEE 802.11s wireless mesh networks; selfish insider attacks; Ad hoc networks; IEEE 802.11 Standards; Logic gates; Protocols; Quality of service; Routing; Wireless mesh networks (ID#:14-2356) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6829968&isnumber=6829933
  • Flores, D.A, "An Authentication And Auditing Architecture For Enhancing Security On Egovernment Services," eDemocracy & eGovernment (ICEDEG), 2014 First International Conference on , vol., no., pp.73,76, 24-25 April 2014. doi: 10.1109/ICEDEG.2014.6819952 eGovernment deploys governmental information and services for citizens and general society. As the Internet is being used as underlying platform for information exchange, these services are exposed to data tampering and unauthorised access as main threats against citizen privacy. These issues have been usually tackled by applying controls at application level, making authentication stronger and protecting credentials in transit using digital certificates. However, these efforts to enhance security on governmental web sites have been only focused on what malicious users can do from the outside, and not in what insiders can do to alter data straight on the databases. In fact, the lack of security controls at back-end level hinders every effort to find evidence and investigate events related to credential misuse and data tampering. Moreover, even though attackers can be found and prosecuted, there is no evidence and audit trails on the databases to link illegal activities with identities. In this article, a Salting-Based Authentication Module and a Database Intrusion Detection Module are proposed as enhancements to eGovernment security to provide better authentication and auditing controls. 
Keywords: Internet; Web sites; access control; digital signatures; government data processing; information systems; public administration; security of data; Internet platform; auditing control; citizen privacy; data tampering; database intrusion detection module; digital certificates; eGovernment security enhancement; eGovernment services; governmental Web sites; governmental information deployment; salting-based authentication module; unauthorised access; Access control; Authentication; Databases; Intrusion detection; Servers; Web sites; architecture; auditing; authentication; database; eGovernment; intrusion detection; log; salting (ID#:14-2357) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6819952&isnumber=6819917
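The "Salting-Based Authentication Module" proposed by Flores is not specified in detail in the abstract. As background, a minimal sketch of salted, slow password hashing (PBKDF2 here, chosen as an illustrative assumption) shows the general technique: a random per-user salt means identical passwords produce different stored digests, which blunts precomputed-dictionary attacks even if the credential database leaks.

```python
import hashlib
import hmac
import os

def register(password: str):
    """Create a per-user salt and a slow salted digest; store both."""
    salt = os.urandom(16)                # random per-user salt
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest                  # the plaintext password is never stored

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = register("correct horse")
assert verify("correct horse", salt, digest)
assert not verify("wrong guess", salt, digest)
```

The 100,000-iteration count deliberately slows each guess; combined with unique salts, an attacker must brute-force every account separately.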
  • Greitzer, F.L.; Strozer, J.; Cohen, S.; Bergey, J.; Cowley, J.; Moore, A; Mundie, D., "Unintentional Insider Threat: Contributing Factors, Observables, and Mitigation Strategies," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.2025,2034, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.256 Organizations often suffer harm from individuals who bear them no malice but whose actions unintentionally expose the organizations to risk in some way. This paper examines initial findings from research on such cases, referred to as unintentional insider threat (UIT). The goal of this paper is to inform government and industry stakeholders about the problem and its possible causes and mitigation strategies. As an initial approach to addressing the problem, we developed an operational definition for UIT, reviewed research relevant to possible causes and contributing factors, and provided examples of UIT cases and their frequencies across several categories. We conclude the paper by discussing initial recommendations on mitigation strategies and countermeasures. Keywords: organisational aspects; security of data; UIT; contributing factors; government; industry stakeholders; mitigation strategy; organizations unintentional insider threat; Electronic mail; Human factors; law; Organizations; Security; Stress; Contributing; Definition; Ethical; Factors; Feature; Human; Insider; Legal; Mitigation; Model; Organizational; Observables; Psychosocial; Strategies; Threat; Unintentional; demographic (ID#:14-2358) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758854&isnumber=6758592
  • Yi-Lu Wang; Sang-Chin Yang, "A Method of Evaluation for Insider Threat," Computer, Consumer and Control (IS3C), 2014 International Symposium on , vol., no., pp.438,441, 10-12 June 2014. doi: 10.1109/IS3C.2014.121 Cyber security is an important issue in cloud computing, and the insider threat is an increasingly important, and considerably more complex, part of it. To date, however, there is no equivalent to a vulnerability scanner for the insider threat. We survey and discuss the history of research on insider threat analysis, which indicates that system dynamics is the best method for mitigating insider threat across people, process, and technology. In this paper, we present a system dynamics method to model insider threat, and we offer conclusions for future researchers interested in the insider threat issue. Keywords: cloud computing; security of data; cloud computing; cyber security; insider threat analysis; insider threat evaluation; insider threat mitigation; vulnerability scanner; Analytical models; Computer crime; Computers; Educational institutions; Organizations; Insider threat; System Dynamic (ID#:14-2359) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845913&isnumber=6845429
  • Gritzalis, D.; Stavrou, V.; Kandias, M.; Stergiopoulos, G., "Insider Threat: Enhancing BPM through Social Media," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on , vol., no., pp.1,6, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814027 Modern business environments have a constant need to increase their productivity, reduce costs and offer competitive products and services. This can be achieved via modeling their business processes. Yet, even in light of modelling's widespread success, one can argue that it lacks built-in security mechanisms able to detect and fight threats that may manifest throughout the process. Academic research has proposed a variety of different solutions which focus on different kinds of threat. In this paper we focus on insider threat, i.e. insiders participating in an organization's business process, who, depending on their motives, may cause severe harm to the organization. We examine existing security approaches to tackle down the aforementioned threat in enterprise business processes. We discuss their pros and cons and propose a monitoring approach that aims at mitigating the insider threat. This approach enhances business process monitoring tools with information evaluated from Social Media. It exams the online behavior of users and pinpoints potential insiders with critical roles in the organization's processes. We conclude with some observations on the monitoring results (i.e. psychometric evaluations from the social media analysis) concerning privacy violations and argue that deployment of such systems should be only allowed on exceptional cases, such as protecting critical infrastructures. 
Keywords: business data processing; organisational aspects; process monitoring; social networking (online); BPM enhancement; built-in security mechanism; business process monitoring tools; cost reduction; enterprise business processes; insider threat; organization business process management; privacy violations; social media; Media; Monitoring; Organizations; Privacy; Security; Unified modeling language (ID#:14-2360) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814027&isnumber=6813963
  • Kajtazi, M.; Bulgurcu, B.; Cavusoglu, H.; Benbasat, I, "Assessing Sunk Cost Effect on Employees' Intentions to Violate Information Security Policies in Organizations," System Sciences (HICSS), 2014 47th Hawaii International Conference on, vol., no., pp.3169,3177, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.393 It has been widely known that employees pose insider threats to the information and technology resources of an organization. In this paper, we develop a model to explain insiders' intentional violation of the requirements of an information security policy. We propose sunk cost as a mediating factor. We test our research model on data collected from three information-intensive organizations in banking and pharmaceutical industries (n=502). Our results show that sunk cost acts as a mediator between the proposed antecedents of sunk cost (i.e., completion effect and goal in congruency) and intentions to violate the ISP. We discuss the implications of our results for developing theory and for re-designing current security agendas that could help improve compliance behavior in the future. keywords: organisational aspects; personnel; security of data; ISP; banking; compliance behavior; employees intentions ;information security policy; information-intensive organizations; insider intentional violation; mediating factor; pharmaceutical industries; sunk cost effect assessment; technology resources; Educational institutions; Information security; Mathematical model; Organizations; Pharmaceuticals; Reliability; completion effect; goal incongruency; information security violation; insider threats; sunk cost (ID#:14-2361) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758995&isnumber=6758592




Lightweight Cryptography

Lightweight Cryptography


Lightweight cryptography is a major research direction. The release of SIMON in June 2013 has generated significant interest and a number of studies evaluating and comparing it to other cipher algorithms. The articles cited here are the first results of these studies and were presented in the first half of 2014. In addition, articles on other lightweight ciphers are included from the same period.

  • Min Chen; Shigang Chen; Qingjun Xiao, "Pandaka: A Lightweight Cipher For RFID Systems," INFOCOM, 2014 Proceedings IEEE , vol., no., pp.172,180, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6847937 The ubiquitous use of RFID tags raises concern about potential security risks in RFID systems. Because low-cost tags are extremely resource-constrained devices, common security mechanisms adopted in resource-rich equipment such as computers are no longer applicable to them. Hence, one challenging research topic is to design a lightweight cipher that is suitable for low-cost RFID tags. Traditional cryptography generally assumes that the two communicating parties are equipotent entities. In contrast, there is a large capability gap between readers and tags in RFID systems. We observe that the readers, which are much more powerful, should take more responsibility in RFID cryptographic protocols. In this paper, we make a radical shift from traditional cryptography, and design a novel cipher called Pandaka, in which most workload is pushed to the readers. As a result, Pandaka is particularly hardware-efficient for tags. We perform extensive simulations to evaluate the effectiveness of Pandaka. In addition, we present security analysis of Pandaka facing different attacks. Keywords: cryptographic protocols; radiofrequency identification; telecommunication security; Pandaka security analysis; RFID cryptographic protocols; RFID systems; lightweight cipher; low-cost RFID tags; resource-constrained devices; resource-rich equipment; security mechanisms; security risks; Ciphers; Computers; Indexes; Radiofrequency identification; Servers (ID#:14-2362) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847937&isnumber=6847911
  • Lin Ding; Chenhui Jin; Jie Guan; Qiuyan Wang, "Cryptanalysis of Lightweight WG-8 Stream Cipher," Information Forensics and Security, IEEE Transactions on , vol.9, no.4, pp.645,652, April 2014. doi: 10.1109/TIFS.2014.2307202 WG-8 is a new lightweight variant of the well-known Welch-Gong (WG) stream cipher family, and takes an 80-bit secret key and an 80-bit initial vector (IV) as inputs. So far no attack on the WG-8 stream cipher has been published except the attacks by the designers. This paper shows that there exist Key-IV pairs for WG-8 that can generate keystreams, which are exact shifts of each other throughout the keystream generation. By exploiting this slide property, an effective key recovery attack on WG-8 in the related key setting is proposed, which has a time complexity of 2^53.32 and requires 2^52 chosen IVs. The attack is minimal in the sense that it only requires one related key. Furthermore, we present an efficient key recovery attack on WG-8 in the multiple related key setting. As confirmed by the experimental results, our attack recovers all 80 bits of WG-8 on a PC with a 2.5-GHz Intel Pentium 4 processor. This is the first time that a weakness is presented for WG-8, assuming that the attacker can obtain only a few dozen consecutive keystream bits for each IV. Finally, we give a new Key/IV loading proposal for WG-8, which takes an 80-bit secret key and a 64-bit IV as inputs. The new proposal keeps the basic structure of WG-8 and provides enough resistance against our related key attacks.
Keywords: computational complexity; cryptography; microprocessor chips; 80-bit initial vector; 80-bit secret key; Intel Pentium 4 processor; Welch-Gong stream cipher; frequency 2.5 GHz; key recovery attack; keystream generation; lightweight WG-8 stream cipher cryptanalysis; related key attack; slide property; time complexity; Ciphers; Clocks; Equations; Proposals; Time complexity; Cryptanalysis; WG-8; lightweight stream cipher; related key attack (ID#:14-2363) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6746224&isnumber=6755552
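The slide property the attack exploits — two internal states that are a few clockings apart produce keystreams that are exact shifts of each other — can be demonstrated on a toy stream generator (a small Fibonacci LFSR standing in for WG-8; this is an illustration of the property, not the WG-8 cipher itself):

```python
def clock(state: int, taps: int, nbits: int) -> int:
    """One step of a Fibonacci LFSR: feedback bit is the parity of the tapped bits."""
    fb = bin(state & taps).count("1") & 1
    return (state >> 1) | (fb << (nbits - 1))

def keystream(state: int, taps: int, nbits: int, n: int) -> list:
    """Emit n output bits, clocking the state after each bit."""
    out = []
    for _ in range(n):
        out.append(state & 1)
        state = clock(state, taps, nbits)
    return out

# A state clocked k times ahead yields the same keystream shifted by k bits.
s0 = 0b10011
slid = s0
for _ in range(3):
    slid = clock(slid, 0b10101, 5)
```

For a real cipher, finding Key-IV pairs whose loaded states are related this way is the hard part; once found, the shifted keystreams leak key material.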
  • Xuanxia Yao; Xiaoguang Han; Xiaojiang Du, "A lightweight access control mechanism for mobile cloud computing," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on , vol., no., pp.380,385, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849262 In order to meet the security requirement, most data are stored in cloud as cipher-texts. Hence, a cipher-text based access control mechanism is needed for data sharing in cloud. A popular solution is to use the attribute-based encryption. However, it is not suitable for mobile cloud due to the heavy computation overhead caused by bilinear pairing, which also makes it difficult to change the access control policy. In addition, attribute-based encryption can't achieve fine-grained access control yet. In this paper, we present a lightweight cipher-text access control mechanism for mobile cloud computing, which is based on authorization certificates and secret sharing. Only the certificate owner can reconstruct decryption keys for his/her files. Our analyses show that the mechanism can achieve efficient and fine-grained access control on cipher-text at a much lower cost than the attribute-based encryption solution. Keywords: authorisation; cloud computing; cryptography; mobile computing; access control policy; attribute-based encryption; authorization certificates; bilinear pairing; certificate owner; cipher-text based access control mechanism; data sharing; decryption key reconstruction; fine-grained access control ;lightweight cipher-text access control mechanism; mobile cloud computing; secret sharing; security requirement; Authorization; Cloud computing; Encryption; Mobile communication; Servers; Authorization; access control; certificate; mobile cloud storage (ID#:14-2364) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849262&isnumber=6849127
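The paper's certificate-based construction is not reproduced here, but the secret-sharing ingredient it builds on can be sketched generically: a decryption key is split into two XOR shares, so that the cloud's share alone reveals nothing and only a party holding both shares (the certificate owner) can reconstruct the key (Python sketch; function names are ours):

```python
import os

def split_key(key: bytes):
    """2-of-2 XOR secret sharing: either share alone is a uniformly random string."""
    share_cloud = os.urandom(len(key))
    share_owner = bytes(a ^ b for a, b in zip(key, share_cloud))
    return share_cloud, share_owner

def reconstruct(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the two shares back together to recover the key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))
```

Unlike bilinear-pairing-based attribute encryption, this kind of reconstruction costs only a few XORs, which is the efficiency argument made for mobile clients.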
  • Fujishiro, M.; Yanagisawa, M.; Togawa, N., "Scan-based attack on the LED block cipher using scan signatures," Circuits and Systems (ISCAS), 2014 IEEE International Symposium on , vol., no., pp.1460,1463, 1-5 June 2014. doi: 10.1109/ISCAS.2014.6865421 LED (Light Encryption Device) block cipher, one of the lightweight block ciphers, is very compact in hardware. Its encryption process is composed of AES-like rounds. Recently, a scan-based side-channel attack was reported which retrieves the secret information inside the cryptosystem utilizing scan chains, one of the design-for-test techniques. In this paper, a scan-based attack method on the LED block cipher using scan signatures is proposed. In our proposed method, we focus on a particular 16-bit position in scanned data obtained from an LED LSI chip and retrieve its secret key using scan signatures. Experimental results show that our proposed method successfully retrieves its 64-bit secret key using 73 plaintexts on average if the scan chain is only connected to the LED block cipher. These experimental results also show the key is successfully retrieved even if the scan chain includes some 4000 additional 1-bit registers. Keywords: design for testability; digital signatures; large scale integration; private key cryptography; AES-like rounds; LED LSI chip; LED block cipher; cryptosystem; design-for-test techniques; encryption process; light encryption device; lightweight block ciphers; plaintexts; scan chain; scan signatures; scan-based attack method; scan-based side-channel attack; secret information; secret key; word length 16 bit; word length 64 bit; Ciphers; Encryption; Hardware; Large scale integration; Light emitting diodes; Registers (ID#:14-2365) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6865421&isnumber=6865048
  • Bhasin, S.; Graba, T.; Danger, J.-L.; Najm, Z., "A Look Into SIMON From A Side-Channel Perspective," Hardware-Oriented Security and Trust (HOST), 2014 IEEE International Symposium on , vol., no., pp.56,59, 6-7 May 2014. doi: 10.1109/HST.2014.6855568 SIMON is a lightweight block cipher, specially designed for resource constrained devices that was recently presented by the National Security Agency (NSA). This paper deals with a hardware implementation of this algorithm from a side-channel point of view as it is a prime concern for embedded systems. We present the implementation of SIMON on a Xilinx Virtex-5 FPGA and propose a low-overhead countermeasure using first-order Boolean masking exploiting the simplistic construction of SIMON. Finally we evaluate the side-channel resistance of both implementations. Keywords: Boolean algebra; cryptography; field programmable gate arrays; SIMON; Xilinx Virtex-5 FPGA; embedded system; first-order Boolean masking; lightweight block cipher; resource constrained device; side-channel perspective; side-channel resistance; Ciphers; Field programmable gate arrays; Hardware; Magnetohydrodynamics; Registers; Table lookup; Countermeasures; Lightweight Cryptography; SIMON; Side-Channel Analysis (ID#:14-2366) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855568&isnumber=6855557
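First-order Boolean masking, the countermeasure evaluated above, splits every intermediate value into two random shares whose XOR is the true value; linear operations apply share-wise, while the nonlinear AND needs fresh randomness. A minimal software model (illustrative only — on real hardware the evaluation order of the partial products matters for security, which a Python sketch cannot capture):

```python
import os

def mask(x: int):
    """Split a 32-bit value into two shares whose XOR equals x."""
    r = int.from_bytes(os.urandom(4), "big")
    return x ^ r, r

def unmask(shares) -> int:
    return shares[0] ^ shares[1]

def masked_xor(a, b):
    # XOR is linear over GF(2), so it can be applied to each share independently.
    return a[0] ^ b[0], a[1] ^ b[1]

def masked_and(a, b):
    # Trichina-style masked AND: fresh randomness r blinds the cross terms,
    # and r cancels when the two output shares are XORed together.
    r = int.from_bytes(os.urandom(4), "big")
    s0 = ((a[0] & b[0]) ^ r) ^ (a[0] & b[1])
    s1 = ((a[1] & b[0]) ^ r) ^ (a[1] & b[1])
    return s0, s1
```

SIMON's round function uses only AND, XOR, and rotations, which is why the authors can mask it with low overhead: rotations and XORs are free to mask, and only one AND per round needs the gadget above.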
  • Cioranesco, J.-M.; Danger, J.-L.; Graba, T.; Guilley, S.; Mathieu, Y.; Naccache, D.; Xuan Thuy Ngo, "Cryptographically Secure Shields," Hardware-Oriented Security and Trust (HOST), 2014 IEEE International Symposium on , vol., no., pp.25,31, 6-7 May 2014. doi: 10.1109/HST.2014.6855563 Probing attacks are serious threats on integrated circuits. Security products often include a protective layer called shield that acts like a digital fence. In this article, we demonstrate a new shield structure that is cryptographically secure. This shield is based on the newly proposed SIMON lightweight block cipher and independent mesh lines to ensure the security against probing attacks of the hardware located behind the shield. Such structure can be proven secure against state-of-the-art invasive attacks. For the first time in the open literature, we describe a chip designed with a digital shield, and give an extensive report of its cost, in terms of power, metal layer(s) to sacrifice and of logic (including the logic to connect it to the CPU). Also, we explain how "Through Silicon Vias" (TSV) technology can be used for the protection against both frontside and backside probing. Keywords: cryptography; integrated circuit design; three-dimensional integrated circuits; SIMON lightweight block cipher; TSV technology; chip design; cryptographically secure shield; digital fence; digital shield; integrated circuit invasive attacks; mesh lines; metal layer; probing attacks; protective layer; security product; shield structure; through silicon vias; Ciphers; Integrated circuits; Metals; Registers; Routing; Cryptographically secure shield; Focused Ion Beam (FIB); SIMON block cipher; Through Silicon Vias (TSV) (ID#:14-2367) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855563&isnumber=6855557
  • Hwajeong Seo; Jongseok Choi; Hyunjin Kim; Taehwan Park; Howon Kim, "Pseudo Random Number Generator And Hash Function For Embedded Microprocessors," Internet of Things (WF-IoT), 2014 IEEE World Forum on , vol., no., pp.37,40, 6-8 March 2014. doi: 10.1109/WF-IoT.2014.6803113 Embedded microprocessors are commonly used for future technologies such as the Internet of Things (IoT), RFID and Wireless Sensor Networks (WSN). However, these microprocessors have limited computing power and storage, so straightforward implementation of traditional services on resource-constrained devices is not recommended. To overcome this problem, lightweight implementation techniques should be considered for practical implementations. Among various requirements, security applications should be conducted on microprocessors for secure and robust service environments. In this paper, we present lightweight implementation techniques for an efficient Pseudo Random Number Generator (PRNG) and Hash function. To reduce memory consumption and accelerate performance, we adopted an AES-accelerator-based implementation. This technique was first introduced at INDOCRYPT'12; its idea exploits peripheral devices for efficient hash computations. With this technique, we present a block-cipher-based lightweight pseudo random number generator and a simple hash function for embedded microprocessors. Keywords: cryptography; embedded systems; microprocessor chips; random number generation; AES accelerator; INDOCRYPT'12; PRNG; block cipher based lightweight pseudo random number generator; embedded microprocessors; future technologies; hash computations; hash function; lightweight implementation techniques; peripheral devices; resource constrained devices; robust service environments; secure service environments; security applications; straight-forward implementation; Ciphers; Clocks; Encryption; Generators; Microprocessors (ID#:14-2368) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803113&isnumber=6803102
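The construction behind a block-cipher-based PRNG is a counter mode: run a fixed primitive over a secret seed concatenated with an incrementing counter. A minimal model (SHA-256 stands in for the AES accelerator used in the paper; the class name is ours):

```python
import hashlib

class CounterPrng:
    """Counter-mode PRNG: output block i = H(seed || i).
    SHA-256 is an illustrative stand-in for an AES hardware accelerator."""

    def __init__(self, seed: bytes):
        self.seed = seed
        self.counter = 0

    def next_block(self) -> bytes:
        # Each call hashes seed || counter, then advances the counter,
        # so successive blocks are distinct and reproducible from the seed.
        out = hashlib.sha256(self.seed + self.counter.to_bytes(8, "big")).digest()
        self.counter += 1
        return out
```

Reusing one hardware primitive for both the PRNG and the hash function is the memory-saving point the authors make for constrained microprocessors.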
  • At, N.; Beuchat, J.-L.; Okamoto, E.; San, I; Yamazaki, T., "Compact Hardware Implementations of ChaCha, BLAKE, Threefish, and Skein on FPGA," Circuits and Systems I: Regular Papers, IEEE Transactions on , vol.61, no.2, pp.485,498, Feb. 2014. doi: 10.1109/TCSI.2013.2278385 The cryptographic hash functions BLAKE and Skein are built from the ChaCha stream cipher and the tweakable Threefish block cipher, respectively. Interestingly enough, they are based on the same arithmetic operations, and the same design philosophy allows one to design lightweight coprocessors for hashing and encryption. The key element of our approach is to take advantage of the parallelism of the algorithms considered in this work to deeply pipeline our Arithmetic and Logic Units, and to avoid data dependencies by interleaving independent tasks. We show for instance that a fully autonomous implementation of BLAKE and ChaCha on a Xilinx Virtex-6 device occupies 144 slices and three memory blocks, and achieves competitive throughputs. In order to offer the same features, a coprocessor implementing Skein and Threefish requires a substantially higher slice count. Keywords: coprocessors; cryptography; field programmable gate arrays; BLAKE function; ChaCha stream cipher; FPGA; Skein function; Threefish block cipher; Xilinx Virtex-6 device; algorithm parallelism; arithmetic operations; arithmetic-and-logic units; competitive throughput; cryptographic hash functions; data dependencies; encryption; field programmable gate array; lightweight coprocessors; memory blocks; slice count; Ciphers; Coprocessors; Encryption; Field programmable gate arrays; Hardware; Pipelines; Ciphers; cryptography; coprocessors; field programmable gate arrays (ID#:14-2369) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6607237&isnumber=6722960
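The shared arithmetic the authors exploit is the add-rotate-xor quarter-round at the heart of ChaCha (and, with different constants, BLAKE). For reference, here is the ChaCha quarter-round in software; the FPGA design pipelines exactly these adds, rotations, and XORs:

```python
M32 = 0xFFFFFFFF  # all arithmetic is on 32-bit words

def rotl(x: int, n: int) -> int:
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & M32

def quarter_round(a: int, b: int, c: int, d: int):
    """ChaCha's quarter-round: four add-xor-rotate steps on four words."""
    a = (a + b) & M32; d = rotl(d ^ a, 16)
    c = (c + d) & M32; b = rotl(b ^ c, 12)
    a = (a + b) & M32; d = rotl(d ^ a, 8)
    c = (c + d) & M32; b = rotl(b ^ c, 7)
    return a, b, c, d
```

Since the quarter-rounds within a ChaCha round are independent of one another, they can be interleaved in a deep pipeline without stalls, which is the data-dependency trick described in the abstract.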
  • Verma, S.; Pal, S.K.; Muttoo, S.K., "A new Tool For Lightweight Encryption On Android," Advance Computing Conference (IACC), 2014 IEEE International , vol., no., pp.306,311, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779339 Theft or loss of a mobile device could be an information security risk as it can result in loss of confidential personal data. Traditional cryptographic algorithms are not suitable for resource constrained and handheld devices. In this paper, we have developed an efficient and user friendly tool called "NCRYPT" on the Android platform. The "NCRYPT" application is used to secure data at rest on Android, thus making it inaccessible to unauthorized users. It is based on a lightweight encryption scheme, i.e. Hummingbird-2. The application provides secure storage by making use of password based authentication so that an adversary cannot access the confidential data stored on the mobile device. The cryptographic key is derived through the password based key generation method PBKDF2 from the standard SUN JCE cryptographic provider. Various tools for encryption are available in the market which are based on AES or DES encryption schemes. The reported tool is based on Hummingbird-2 and is faster than most of the other existing schemes. It is also resistant to most of the attacks applicable to Block and Stream Ciphers. Hummingbird-2 has been coded in C language and embedded in the Android platform with the help of JNI (Java Native Interface) for faster execution. This application provides the choice of encrypting the entire data on the SD card or selective files on the smart phone, and protects personal or confidential information available on such devices.
Keywords: C language; cryptography; smart phones; AES encryption scheme; Android platform; C language; DES encryption scheme;Hummingbird-2 scheme; JNI; Java native interface; NCRYPT application;PBKDF2 password based key generation method; SUN JCE cryptographic provider; block ciphers; confidential data; cryptographic algorithms; cryptographic key; information security risk; lightweight encryption scheme; mobile device; password based authentication; stream ciphers; Ciphers; Encryption; Smart phones; Standards; Throughput; Android; HummingBird2; Information Security ;Lightweight Encryption;PBKDF2 (ID#:14-2370) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779339&isnumber=6779283
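PBKDF2, the key-derivation step the tool takes from the SUN JCE provider, is available in most standard libraries. An equivalent derivation in Python's stdlib (the function name and the 128-bit output length are our illustrative choices, not details from the paper):

```python
import hashlib
import os

def derive_key(password: str, salt: bytes = None, iterations: int = 100_000):
    """Derive a 128-bit cipher key from a password via PBKDF2-HMAC-SHA256.
    A fresh random salt is generated when none is supplied."""
    if salt is None:
        salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations,
                              dklen=16)  # 16 bytes = 128-bit key
    return salt, key
```

The iteration count deliberately slows each guess, so a stolen device does not let an attacker brute-force the password cheaply; the salt must be stored alongside the ciphertext to re-derive the key.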
  • Ahmadi, S.; Ahmadian, Z.; Mohajeri, J.; Aref, M.R., "Low Data Complexity Biclique Cryptanalysis of Block Ciphers with Application to Piccolo and HIGHT," Information Forensics and Security, IEEE Transactions on, vol.PP, no.99, pp.1, 1, July 2014. doi: 10.1109/TIFS.2014.2344445 In this paper, we present a framework for biclique cryptanalysis of block ciphers which requires an extremely low amount of data. To that end, we employ a new representation of the biclique attack based on a new concept of cutset that describes our attack more clearly. Then, an algorithm for choosing two differential characteristics is presented to simultaneously minimize the data complexity and control the computational complexity. We then characterize those block ciphers that are vulnerable to this technique and, among them, we apply this attack on the lightweight block ciphers Piccolo-80, Piccolo-128 and HIGHT. The data complexity of these attacks is only 16 plaintext-ciphertext pairs, which is considerably less than the existing cryptanalytic results. In all the attacks the computational complexity remains the same as the previous ones or is even slightly improved. Keywords: (not provided) (ID#:14-2371) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868260&isnumber=4358835
  • Aysu, A; Gulcan, E.; Schaumont, P., "SIMON Says: Break Area Records of Block Ciphers on FPGAs," Embedded Systems Letters, IEEE , vol.6, no.2, pp.37,40, June 2014. doi: 10.1109/LES.2014.2314961 While advanced encryption standard (AES) is extensively in use in a number of applications, its area cost limits its deployment in resource constrained platforms. In this letter, we have implemented SIMON, a recent promising low-cost alternative of AES on reconfigurable platforms. The Feistel network, the construction of the round function and the key generation of SIMON, enables bit-serial hardware architectures which can significantly reduce the cost. Moreover, encryption and decryption can be done using the same hardware. The results show that with an equivalent security level, SIMON is 86% smaller than AES, 70% smaller than PRESENT (a standardized low-cost AES alternative), and its smallest hardware architecture only costs 36 slices (72 LUTs, 30 registers). To our best knowledge, this work sets the new area records as we propose the hardware architecture of the smallest block cipher ever published on field-programmable gate arrays (FPGAs) at 128-bit level of security. Therefore, SIMON is a strong alternative to AES for low-cost FPGA-based applications. 
Keywords: cryptography; field programmable gate arrays; Feistel network; SIMON; advanced encryption standard; bit-serial hardware architectures; block ciphers; break area records; cost reduction; decryption; equivalent security level; field-programmable gate arrays; hardware architecture; low-cost FPGA-based applications; reconfigurable platforms; resource constrained platforms; round function; standardized low-cost AES alternative; Ciphers; Encryption; Field programmable gate arrays; Hardware; Parallel processing; Table lookup; Block ciphers; SIMON; field-programmable gate arrays (FPGAs) implementation; lightweight cryptography (ID#:14-2372) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6782431&isnumber=6820801
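The Feistel structure that lets SIMON share one datapath for encryption and decryption is easy to see in software. A round sketch (round-key schedule omitted; the rotation amounts 1, 8, 2 are SIMON's standard round function, while the sample keys in the test are arbitrary):

```python
def rotl(x: int, n: int, w: int = 32) -> int:
    """Rotate a w-bit word left by n bits."""
    return ((x << n) | (x >> (w - n))) & ((1 << w) - 1)

def f(x: int) -> int:
    # SIMON's round function: AND of two rotations, XORed with a third rotation.
    return (rotl(x, 1) & rotl(x, 8)) ^ rotl(x, 2)

def enc_round(x: int, y: int, k: int):
    """One Feistel round: the new left half mixes in f(x) and the round key."""
    return y ^ f(x) ^ k, x

def dec_round(x: int, y: int, k: int):
    """The inverse round reuses the same f — hence one datapath for both."""
    return y, x ^ f(y) ^ k
```

Because decryption runs the identical AND/XOR/rotate circuit with the key schedule reversed, the FPGA design needs no separate decryption logic, which is part of why the slice count is so small.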
  • Mathew, S.; Satpathy, S.; Suresh, V.; Kaul, H.; Anders, M.; Chen, G.; Agarwal, A; Hsu, S.; Krishnamurthy, R., "340mV-1.1V, 289Gbps/W, 2090-gate NanoAES Hardware Accelerator With Area-Optimized Encrypt/Decrypt GF((2^4)^2) Polynomials In 22nm Tri-Gate CMOS," VLSI Circuits Digest of Technical Papers, 2014 Symposium on , vol., no., pp.1,2, 10-13 June 2014. doi: 10.1109/VLSIC.2014.6858420 An on-die, lightweight nanoAES hardware accelerator is fabricated in 22nm tri-gate CMOS, targeted for ultra-low power mobile SOCs. Compared to conventional 128-bit AES implementations, this design uses an 8-bit Sbox datapath along with ShiftRow byte-order processing to compute all AES rounds in the native GF((2^4)^2) composite field. This approach along with a serial-accumulating MixColumns circuit, area-optimized encrypt and decrypt Galois-field polynomials and an integrated on-the-fly key generation circuit results in a compact 2090-gate design, enabling peak energy-efficiency of 289Gbps/W and AES-128 encrypt/decrypt throughput of 432/671Mbps with total energy consumption of 4.7/3nJ measured at 0.9V, 25 °C. Keywords: CMOS digital integrated circuits; Galois fields; cryptography; low-power electronics; system-on-chip; AES rounds; Sbox datapath; ShiftRow byte-order processing; area-optimized encrypt polynomials; compact 2090-gate design; decrypt Galois-field polynomials; integrated on-the-fly key generation circuit; lightweight nanoAES hardware accelerator; native composite-field; serial-accumulating MixColumns circuit; size 22 nm; temperature 25 degC; trigate CMOS; ultra-low power mobile SOC; voltage 340 mV to 1.1 V; word length 8 bit; Abstracts; Area measurement; Ciphers; Energy measurement; IP networks; Logic gates (ID#:14-2373) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6858420&isnumber=6858353




Locking

Locking


In computer science, a lock is a synchronization mechanism that enforces a mutual-exclusion policy on access to a shared resource. Locks have some advantages and many disadvantages, and to be efficient they typically require hardware support. The articles cited here look at cache locking, injection locking, phase locking, and a lock-free approach to multicore computing. These articles appeared in the first half of 2014.
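For readers outside the concurrency literature, the canonical use of a lock is protecting a shared counter from racing updates (a minimal Python illustration; variable names are ours):

```python
import threading

counter = 0
counter_lock = threading.Lock()

def worker(iterations: int):
    global counter
    for _ in range(iterations):
        with counter_lock:  # only one thread may update the counter at a time
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock, the read-modify-write of `counter += 1` can interleave across threads and lose updates; with it, the final count is exact — at the cost of serializing the critical section, which motivates the lock-free techniques surveyed below.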

  • Huping Ding; Yun Liang; Mitra, T., "WCET-Centric Dynamic Instruction Cache Locking," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014 , vol., no., pp.1,6, 24-28 March 2014. doi: 10.7873/DATE.2014.040 Cache locking is an effective technique to improve timing predictability in real-time systems. In static cache locking, the locked memory blocks remain unchanged throughout the program execution. Thus static locking may not be effective for large programs where multiple memory blocks are competing for few cache lines available for locking. In comparison, dynamic cache locking overcomes cache space limitation through time-multiplexing of locked memory blocks. Prior dynamic locking technique partitions the program into regions and takes independent locking decisions for each region. We propose a flexible loop-based dynamic cache locking approach. We not only select the memory blocks to be locked but also the locking points (e.g., loop level). We judiciously allow memory blocks from the same loop to be locked at different program points for WCET improvement. We design a constraint-based approach that incorporates a global view to decide on the number of locking slots at each loop entry point and then select the memory blocks to be locked for each loop. Experimental evaluation shows that our dynamic cache locking approach achieves substantial improvement of WCET compared to prior techniques. 
Keywords: cache storage; real-time systems; WCET-centric dynamic instruction cache locking; cache lines; constraint-based approach; flexible loop-based dynamic cache locking approach; independent locking decisions; locked memory blocks; locking points; loop entry point; multiple memory blocks; program execution; program points; real-time systems; time-multiplexing; timing predictability; worst-case execution time; Abstracts; Benchmark testing; Educational institutions; Electronic mail; Nickel; Resilience; Timing (ID#:14-2374) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800241&isnumber=6800201
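Why locking a memory block can help is easy to see in a toy cache model: locking a hot block removes its accesses from the replacement traffic, at the price of shrinking the space left for everything else. A sketch (an LRU model of our own devising, not the paper's constraint-based WCET analysis):

```python
from collections import OrderedDict

def count_misses(trace, capacity, locked=frozenset()):
    """LRU cache model: `locked` blocks are preloaded into dedicated lines,
    leaving capacity - len(locked) lines for normal LRU replacement."""
    cache = OrderedDict()
    unlocked_capacity = capacity - len(locked)
    misses = 0
    for block in trace:
        if block in locked:
            continue  # a preloaded, locked block always hits
        if block in cache:
            cache.move_to_end(block)  # refresh LRU position
        else:
            misses += 1
            if len(cache) >= unlocked_capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[block] = block
    return misses
```

On a cyclic trace A,B,C over a 2-line cache, plain LRU thrashes (every access misses), while locking A converts all of A's accesses into hits; choosing which blocks to lock, and at which loop level, is precisely the optimization the paper formulates as constraints.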
  • Raj, M.; Emami, A., "A Wideband Injection-Locking Scheme and Quadrature Phase Generation in 65-nm CMOS," Microwave Theory and Techniques, IEEE Transactions on , vol.62, no.4, pp.763,772, April 2014. doi: 10.1109/TMTT.2014.2310172 A novel technique for wideband injection locking in an LC oscillator is proposed. Phased-lock-loop and injection-locking elements are combined symbiotically to achieve wide locking range while retaining the simplicity of the latter. This method does not require a phase frequency detector or a loop filter to achieve phase lock. A mathematical analysis of the system is presented and the expression for new locking range is derived. A locking range of 13.4-17.2 GHz and an average jitter tracking bandwidth of up to 400 MHz were measured in a high-Q LC oscillator. This architecture is used to generate quadrature phases from a single clock without any frequency division. It also provides high-frequency jitter filtering while retaining the low-frequency correlated jitter essential for forwarded clock receivers. Keywords: CMOS integrated circuits; LC circuits; MMIC oscillators; injection locked oscillators; jitter; phase locked loops; voltage-controlled oscillators; forwarded clock receivers; frequency 13.4 GHz to 17.2 GHz; high-Q LC oscillator; high-frequency jitter filtering; injection-locking elements; jitter tracking bandwidth; low-frequency correlated jitter; mathematical analysis; phased-lock-loop; quadrature phase generation; size 65 nm; wide locking range; wideband injection locking scheme; Clocks; Jitter; Mathematical model; Phase locked loops; Varactors; Voltage-controlled oscillators; Adler's equation; injection-locked (IL) phase-locked loop (PLL); injection-locked oscillator (ILO); jitter transfer function; locking range; quadrature; voltage-controlled oscillator (VCO) (ID#:14-2375) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6766809&isnumber=6782343
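For readers unfamiliar with the locking-range notion this paper extends, Adler's classical first-order estimate for a weakly injected LC oscillator (referenced in the keywords above) is a useful baseline. This is textbook background, not the paper's new expression:

```latex
\Delta\omega_L \;\approx\; \frac{\omega_0}{2Q}\,\frac{I_{\mathrm{inj}}}{I_{\mathrm{osc}}},
\qquad I_{\mathrm{inj}} \ll I_{\mathrm{osc}}
```

Here \(\omega_0\) is the free-running frequency, \(Q\) the tank quality factor, and \(I_{\mathrm{inj}}/I_{\mathrm{osc}}\) the injection ratio; the paper derives a wider locking range by combining phase-locked-loop and injection-locking dynamics.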
  • Asaduzzaman, A; Allen, M.P.; Jareen, T., "An Effective Locking-Free Caching Technique For Power-Aware Multicore Computing Systems," Informatics, Electronics & Vision (ICIEV), 2014 International Conference on , vol., no., pp.1,6, 23-24 May 2014. doi: 10.1109/ICIEV.2014.6850861 In multicore/manycore systems, multiple caches increase the total power consumption and intensify latency because it is nearly impossible to hide last-level latency. Studies suggest that there are opportunities to increase the performance to power ratio by locking selected memory blocks inside the caches during runtime. However, the cache locking technique reduces the effective cache size and may introduce additional configuration difficulties, especially for multicore architectures. Furthermore, there may be other restrictions (example: PowerPC 750GX processor does not allow cache locking at level-1). In this paper, we propose a Smart Victim Cache (SVC) assisted caching technique that eliminates traditional cache locking without compromising the performance to power ratio. In addition to functioning as a normal victim cache, the proposed SVC holds memory blocks that may cause higher cache misses and supports stream buffering to increase cache hits. We model a Quad-Core System that has Private First Level Caches (PFLCs), a Shared Last Level Cache (SLLC), and a shared SVC located between the PFLCs and SLLC. We run simulation programs using a diverse group of applications including MPEG-4 and H.264/AVC. Experimental results suggest that the proposed SVC added multicore cache memory subsystem helps decrease the total power consumption and average latency up to 21% and 17%, respectively, when compared with that of SLLC cache locking mechanism without SVC. 
Keywords: cache storage; multiprocessing systems; power aware computing; PFLCs; PowerPC 750GX processor; SLLC; SVC; cache locking technique; effective locking free caching technique; intensify latency; multicore cache memory subsystem; multicore-manycore systems; power aware multicore computing systems; power consumption; power ratio; private first level caches; quadcore system; selected memory blocks; shared last level cache; smart victim cache; Informatics; Memory management; Multicore processing; Power demand; Static VAr compensators; Transform coding; Video coding; Cache locking; green technology; low-power computing; multicore architecture; victim cache (ID#:14-2376) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850861&isnumber=6850678
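The role a victim cache plays in the Asaduzzaman et al. paper can be illustrated with a toy model: a direct-mapped cache whose evicted blocks are parked in a small fully associative LRU buffer, so two blocks ping-ponging in the same set stop missing after the first round trips. This is a generic textbook victim cache, not the paper's SVC (which additionally holds miss-prone blocks and supports stream buffering):

```python
from collections import OrderedDict

class VictimCachedL1:
    """Direct-mapped cache plus a small fully associative victim cache."""
    def __init__(self, n_sets, victim_entries):
        self.sets = [None] * n_sets       # one block tag per set
        self.victim = OrderedDict()       # insertion order gives LRU eviction
        self.victim_entries = victim_entries
        self.hits = self.misses = 0

    def access(self, block):
        idx = block % len(self.sets)
        if self.sets[idx] == block:       # main-cache hit
            self.hits += 1
            return
        if block in self.victim:          # victim-cache hit: promote block
            del self.victim[block]
            self.hits += 1
        else:
            self.misses += 1
        evicted, self.sets[idx] = self.sets[idx], block
        if evicted is not None:           # park the loser in the victim cache
            self.victim[evicted] = True
            if len(self.victim) > self.victim_entries:
                self.victim.popitem(last=False)

cache = VictimCachedL1(n_sets=4, victim_entries=2)
for block in [0, 4] * 4:                  # blocks 0 and 4 conflict in set 0
    cache.access(block)
print(cache.hits, cache.misses)           # 6 2
```

Without the victim buffer every one of the eight accesses would miss; with it, only the two cold misses remain, which is the conflict-miss relief that lets the paper drop cache locking entirely.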
  • Dong Hou; Bo Ning; Shuangyou Zhang; Jiutao Wu; Jianye Zhao, "Long-Term Stabilization of Fiber Laser Using Phase-Locking Technique With Ultra-Low Phase Noise and Phase Drift," Selected Topics in Quantum Electronics, IEEE Journal of, vol.20, no.5, pp.1,8, Sept.-Oct. 2014. doi: 10.1109/JSTQE.2014.2316592 We investigated the phase noise performance of a conventional phase-locking technique in the long-term stabilization of a mode-locked fiber laser (MLFL). The investigation revealed that the electronic noise introduced by the electronic phase detector is a key contributor to the phase noise of the stabilization system. To eliminate this electronic noise, we propose an improved phase-locking technique with an optic-microwave phase detector and a pump-tuning-based technique. The mechanism and the theoretical model of the novel phase-locking technique are discussed. Long-term stabilization experiments demonstrated that the improved technique can achieve long-term stabilization of MLFLs with ultra-low phase noise and phase drift. The excellent locking performance of the improved phase-locking technique implies that this technique can be used to stabilize fiber lasers with a highly stable H-maser or an optical clock without stability loss. Keywords: fibre lasers; laser mode locking; laser tuning; optical pumping; phase detectors; phase noise; electronic noise; electronic phase detector; fiber laser; long-term stabilization; mode-locked fiber laser; optic-microwave phase detector; phase drift; phase-locking technique; pump-tuning-based technique; ultra-low phase noise; Adaptive optics; Optical fibers; Optical noise; Optical pulses; Phase locked loops; Phase noise; Modeling; mode-locked fiber laser (MLFL); phase detection; phase-locking loop; stabilization (ID#:14-2377) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6797883&isnumber=6603383
  • Jing Jin; Bukun Pan; Xiaoming Liu; Jianjun Zhou, "Injection-Locking Frequency Divider based dual-modulus prescalers with extended locking range," Circuits and Systems (ISCAS), 2014 IEEE International Symposium on , vol., no., pp.502,505, 1-5 June 2014. doi: 10.1109/ISCAS.2014.6865182 A new Injection-Locking Frequency Divider (ILFD) based dual-modulus prescaler with extended locking range is presented in this paper. The tuning capacitor inserted into the ring oscillator loop can widen the common locking range of two operating modes of the prescaler. A dual-modulus prescaler using the proposed method is designed and simulated in a 65nm CMOS process. Simulation results show that the locking range of the divide-by-4/5, from 11.5 GHz to 19.1 GHz, is extended by more than 40% compared with the 14 GHz to 19.4 GHz range of the conventional design. Keywords: CMOS integrated circuits; field effect MMIC; frequency dividers; injection locked oscillators; microwave oscillators; CMOS process; dual modulus prescaler; extended locking range; frequency 11.5 GHz to 19.1 GHz; injection locking frequency divider; ring oscillator loop; size 65 nm; tuning capacitor; Capacitors; Frequency conversion; Phase locked loops; Power demand; Ring oscillators; Tuning (ID#:14-2378) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6865182&isnumber=6865048
  • Hwi Don Lee; Zhongping Chen; Myung Yung Jeong; Chang-Seok Kim, "Simultaneous Dual-Band Wavelength-Swept Fiber Laser Based on Active Mode Locking," Photonics Technology Letters, IEEE , vol.26, no.2, pp.190,193, Jan.15, 2014. doi: 10.1109/LPT.2013.2291834 We report a simultaneous dual-band wavelength-swept laser based on the active mode locking method. By applying a single modulation signal, synchronized sweeping of two lasing-wavelengths is demonstrated without the use of a mechanical wavelength-selecting filter. Two free spectral ranges are independently controlled with a dual path-length configuration of a laser cavity. The static and dynamic performances of a dual-band wavelength-swept active mode locking fiber laser are characterized in both the time and wavelength regions. Two lasing wavelengths were swept simultaneously from 1263.0 to 1333.3 nm for the 1310 nm band and from 1493 to 1563.3 nm for the 1550 nm band. The application of a dual-band wavelength-swept fiber laser was also demonstrated with a dual-band optical coherence tomography imaging system. 
Keywords: fibre lasers; laser beam applications; laser cavity resonators; laser mode locking; optical filters; optical modulation; optical tomography; active mode locking method; dual path-length configuration; dual-band optical coherence tomography imaging system; dual-band wavelength-swept active mode locking fiber laser; dynamic performances; laser cavity; lasing-wavelengths; mechanical wavelength-selecting filter; simultaneous dual-band wavelength-swept fiber laser; single modulation signal; static performances; synchronized sweeping; wavelength 1263.0 nm to 1333.3 nm; wavelength 1310 nm; wavelength 1493 nm to 1563.3 nm; wavelength 1550 nm; wavelength regions; Cavity resonators; Dual band; Fiber lasers; Frequency modulation; Laser mode locking; Optical fibers; Fiber lasers; laser mode locking; optical imaging (ID#:14-2379) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6674061&isnumber=6693740
  • Simos, H.; Bogris, A; Syvridis, D.; Elsasser, W., "Intensity Noise Properties of Mid-Infrared Injection Locked Quantum Cascade Lasers: I. Modeling," Quantum Electronics, IEEE Journal of, vol.50, no.2, pp.98,105, Feb. 2014. doi: 10.1109/JQE.2013.2295434 In this paper, we numerically investigate the effect of optical injection locking on the noise properties of mid-infrared quantum cascade lasers. The analysis is carried out by means of a rate equation model, which takes into account the various noise contributions and the injection of the master laser. The obtained results indicate that the locked slave laser may operate under reduced intensity noise levels compared with the free running operation. In addition, optimization of the locking process leads to further suppression of the intensity noise when the slave laser is biased close to the free-running threshold current. The main factors that significantly affect the locking process and the achievable noise levels are the injected optical power and the master-slave frequency detuning. Keywords: infrared spectra; laser mode locking; laser tuning; numerical analysis; optical noise; optimisation; quantum cascade lasers; free-running threshold current; intensity noise suppression; master-slave frequency detuning; midinfrared injection locking; midinfrared quantum cascade lasers; numerical investigation; optical injection locking; optical power injection; optimization; rate equation model; Laser noise; Mathematical model; Optical noise; Power lasers; Quantum cascade lasers; Quantum cascade lasers; injection locking; intensity noise; optical injection (ID#:14-2380) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6690160&isnumber=6685877
  • Wenrui Wang; Jinlong Yu; Bingchen Han; Ju Wang; Lingyun Ye; Enze Yang, "Tunable Microwave Frequency Multiplication by Injection Locking of DFB Laser With a Weakly Phase Modulated Signal," Photonics Journal, IEEE , vol.6, no.2, pp.1,8, April 2014. doi: 10.1109/JPHOT.2014.2308634 We have demonstrated in this paper a novel tunable microwave frequency multiplication by injecting a weakly phase-modulated optical signal into a DFB laser diode. Signals with multiple weak sidebands are generated by cross-phase modulation of a continuous wave (CW) with short pulses from mode-locked fiber laser. Then, frequency multiplication is achieved by injection and phase locking a commercially available DFB laser to one of the harmonics of the phase modulated signal. The multiplication factor can be tuned by changing the frequency difference between the CW and the free oscillating wavelength of the DFB laser. The experimental results show that, with an original signal at a repetition rate of 1 GHz, a microwave signal with high spectral purity and stability is generated with a multiplication factor up to 60. The side-mode suppression ratio over 40 dB and phase noise lower than -90 dBc/Hz at 10 kHz are demonstrated over a continuous tuning range from 20 to 40. 
Keywords: distributed feedback lasers; laser frequency stability; laser mode locking; laser noise; laser tuning; microwave generation; microwave photonics; optical modulation; phase modulation; phase noise; semiconductor lasers; CW wavelength; DFB laser diode; cross-phase modulation; distributed feedback laser; free oscillating wavelength; frequency 10 kHz; high spectral purity; injection locking; microwave signal generation; mode-locked fiber laser; phase locking; phase noise; side-mode suppression ratio; stability; tunable microwave frequency multiplication; weakly phase modulated signal; Laser mode locking; Masers; Microwave filters; Microwave photonics; Optical filters; Phase modulation; Semiconductor lasers; Microwave photonics; frequency multiplication; injection locking (ID#:14-2381) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748869&isnumber=6750774
  • Arsenijevic, D.; Kleinert, M.; Bimberg, D., "Breakthroughs in Photonics 2013: Passive Mode-Locking of Quantum-Dot Lasers," Photonics Journal, IEEE , vol.6, no.2, pp.1,6, April 2014. doi: 10.1109/JPHOT.2014.2308195 Most recent achievements in passive mode-locking of quantum-dot lasers, with the main focus on jitter reduction and frequency tuning, are described. Different techniques, leading to record values for integrated jitter of 121 fs and a locking range of 342 MHz, are presented for a 40-GHz laser. Optical feedback is observed to be the method of choice in this field. For the first time, five different optical-feedback regimes are discovered, including the resonant one yielding a radio-frequency linewidth reduction by 99%. Keywords: jitter; laser feedback; laser mode locking; laser tuning; quantum dot lasers; frequency 40 GHz; frequency tuning; jitter reduction; optical feedback; passive mode-locking; photonics; quantum-dot lasers; radio-frequency linewidth reduction; Jitter; Laser mode locking; Optical attenuators; Optical feedback; Quantum dot lasers; Tuning; Mode-locked lasers; optical feedback; phase noise; quantum dots (ID#:14-2382) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6747957&isnumber=6750774
  • Nagashima, T.; Wei, X.; Tanaka, H.-A; Sekiya, H., "Locking Range Derivations for Injection-Locked Class-E Oscillator Applying Phase Reduction Theory," Circuits and Systems I: Regular Papers, IEEE Transactions on, vol. PP, no.99, pp.1,8, June 2014. doi: 10.1109/TCSI.2014.2327276 This paper presents a numerical locking-range prediction for the injection-locked class-E oscillator using the phase reduction theory (PRT). By applying this method to the injection-locked class-E oscillator designs, which is in the field of electrical engineering, the locking ranges of the oscillator on any injection-signal waveform can be efficiently obtained. The locking ranges obtained from the proposed method quantitatively agreed with those obtained from the simulations and circuit experiments, showing the validity and effectiveness of the locking-range derivation method based on PRT. Keywords: Capacitance; Equations; Limit-cycles; MOSFET; Oscillators; Switches; Synchronization; Injection-locked class-E oscillator; locking range; phase reduction theory (ID#:14-2383) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842684&isnumber=4358591
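Phase reduction theory, the tool behind this locking-range prediction, reduces a limit-cycle oscillator under weak periodic injection to a single averaged phase equation. The following is standard PRT background (up to sign conventions), not the paper's circuit-specific derivation:

```latex
\frac{d\phi}{dt} = \Delta\omega + \Gamma(\phi),
\qquad
\Gamma(\phi) = \frac{1}{2\pi}\int_{0}^{2\pi} Z(\theta + \phi)\, s(\theta)\, d\theta
```

Here \(Z\) is the oscillator's phase sensitivity function, \(s\) the injection waveform, and \(\Delta\omega\) the detuning. Locking requires a stable fixed point with \(d\phi/dt = 0\), so the locking range is bounded by the extrema of \(\Gamma\); because \(\Gamma\) can be evaluated for an arbitrary injection waveform, this is what lets the method handle any injection signal, as the abstract notes.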
  • Habruseva, T.; Arsenijevic, D.; Kleinert, M.; Bimberg, D.; Huyet, G.; Hegarty, S.P., "Optimum Phase Noise Reduction And Repetition Rate Tuning In Quantum-Dot Mode-Locked Lasers," Applied Physics Letters , vol.104, no.2, pp.021112,021112-4, Jan 2014. doi: 10.1063/1.4861604 Competing approaches exist, which allow control of phase noise and frequency tuning in mode-locked lasers, but no judgement of pros and cons based on a comparative analysis was presented yet. Here, we compare results of hybrid mode-locking, hybrid mode-locking with optical injection seeding, and sideband optical injection seeding performed on the same quantum dot laser under identical bias conditions. We achieved the lowest integrated jitter of 121 fs and a record large radio-frequency (RF) tuning range of 342 MHz with sideband injection seeding of the passively mode-locked laser. The combination of hybrid mode-locking together with optical injection-locking resulted in 240 fs integrated jitter and a RF tuning range of 167 MHz. Using conventional hybrid mode-locking, the integrated jitter and the RF tuning range were 620 fs and 10 MHz, respectively. Keywords: (not provided) (ID#:14-2384) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6715601&isnumber=6712870
  • Jun-Chau Chien; Upadhyaya, P.; Jung, H.; Chen, S.; Fang, W.; Niknejad, A.M.; Savoj, J.; Ken Chang, "2.8 A pulse-position-modulation phase-noise-reduction technique for a 2-to-16GHz injection-locked ring oscillator in 20nm CMOS," Solid-State Circuits Conference Digest of Technical Papers (ISSCC), 2014 IEEE International , vol., no., pp.52,53, 9-13 Feb. 2014. doi: 10.1109/ISSCC.2014.6757334 High-speed transceivers embedded inside FPGAs require software-programmable clocking circuits to cover a wide range of data rates across different channels [1]. These transceivers use high-frequency PLLs with LC oscillators to satisfy stringent jitter requirements at increasing data rates. However, the large area of these oscillators limits the number of independent LC-based clocking sources and reduces the flexibility offered by the FPGA. A ring-based PLL occupies smaller area but produces higher jitter. With injection-locking (IL) techniques [2-3], ring-based oscillators achieve comparable performance with their LC counterparts [4-5] at frequencies below 10 GHz. Moreover, addition of a PLL to an injection-locked VCO (IL-PLL) provides injection-timing calibration and frequency tracking against PVT [3,5]. Nevertheless, applying injection-locking techniques to high-speed ring oscillators in deep submicron CMOS processes, with high flicker-noise corner frequencies at tens of MHz, poses a design challenge for low-jitter operation. Shown in Fig. 2.8.1, injection locking can be modeled as a single-pole feedback system that achieves 20dB/dec of in-band noise shaping against intrinsic VCO phase noise over a wide bandwidth [6]. As a consequence, this technique suppresses the 1/f² noise of the VCO but not its 1/f³ noise. Note that the conventional IL-PLL is capable of shaping the VCO in-band noise at 40dB/dec [6]; however, its noise shaping is limited by the narrow PLL bandwidth due to significant attenuation of the loop gain by injection locking. 
To achieve wideband 2nd-order noise shaping in 20nm ring oscillators, we present a circuit technique that applies pulse-position-modulated (PPM) injection through feedback control. Keywords: 1/f noise; CMOS integrated circuits; flicker noise; injection locked oscillators; microwave oscillators; phase locked loops; phase noise; pulse position modulation; voltage-controlled oscillators; 1/f² noise; FPGA; LC oscillator; VCO phase noise; deep submicron CMOS process; feedback control; frequency 2 GHz to 16 GHz; frequency tracking; high-frequency PLL; high-speed ring oscillator; high-speed transceiver; injection-locked VCO; injection-locked ring oscillator; injection-locking technique; injection-timing calibration; phase-noise-reduction technique; pulse-position-modulation; ring-based PLL; single-pole feedback system; size 20 nm; software-programmable clocking circuit; Bandwidth; Injection-locked oscillators; Jitter; Noise; Phase locked loops; Ring oscillators; Voltage-controlled oscillators (ID#:14-2385) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6757334&isnumber=6757318
  • Mangold, M.; Link, S.M.; Klenner, A; Zaugg, C.A; Golling, M.; Tilma, B.W.; Keller, U., "Amplitude Noise and Timing Jitter Characterization of a High-Power Mode-Locked Integrated External-Cavity Surface Emitting Laser," Photonics Journal, IEEE , vol.6, no.1, pp.1,9, Feb. 2014. doi: 10.1109/JPHOT.2013.2295464 We present a timing jitter and amplitude noise characterization of a high-power mode-locked integrated external-cavity surface emitting laser (MIXSEL). In the MIXSEL, the semiconductor saturable absorber of a SESAM is integrated into the structure of a VECSEL to start and stabilize passive mode-locking. In comparison to previous noise characterization of SESAM-mode-locked VECSELs, this first noise characterization of a MIXSEL is performed at a much higher average output power. In a free-running operation, the laser generates 14.3-ps pulses at an average output power of 645 mW at a 2-GHz pulse repetition rate and an RMS amplitude noise of 0.15% [1 Hz, 10 MHz]. We measured an RMS timing jitter of 129 fs [100 Hz, 10 MHz], which represents the lowest value for a free-running passively mode-locked semiconductor disk laser to date. Additionally, we stabilized the pulse repetition rate with a piezo actuator to control the cavity length. With the laser generating 16.7-ps pulses at an average output power of 701 mW, the repetition frequency was phase-locked to a low-noise electronic reference using a feedback loop. In actively stabilized operation, the RMS timing jitter was reduced to less than 70 fs [1 Hz, 100 MHz]. In the 100-Hz to 10-MHz bandwidth, we report the lowest timing jitter measured from a passively mode-locked semiconductor disk laser to date with a value of 31 fs. These results show that the MIXSEL technology provides compact ultrafast laser sources combining high-power and low-noise performance similar to diode-pumped solid-state lasers, which enable world-record optical communication rates and low-noise frequency combs. 
Keywords: integrated optoelectronics; laser beams; laser cavity resonators; laser feedback; laser mode locking; laser noise; laser stability; optical pulse generation; optical saturable absorption; piezoelectric actuators; semiconductor lasers; surface emitting lasers; timing jitter; MIXSEL; RMS amplitude noise; RMS timing jitter; SESAM; VECSEL; actively stabilized operation; average output power; cavity length; compact ultrafast laser sources; feedback loop; free-running passively mode-locked semiconductor disk laser; frequency 1 Hz to 100 MHz; frequency 2 GHz; high-power mode-locked integrated external-cavity surface emitting laser; low-noise electronic reference; low-noise frequency combs; low-noise performance; optical communication rates; phase-locking; piezoactuator; power 645 mW; power 701 mW; pulse generation; pulse repetition rate; repetition frequency; semiconductor saturable absorber; stabilize passive mode-locking; time 129 fs; time 14.3 ps; time 16.7 ps; Cavity resonators; Laser mode locking; Laser noise; Vertical cavity surface emitting lasers; Diode-pumped lasers; infrared lasers; mode-locked lasers; semiconductor lasers; ultrafast lasers (ID#:14-2386) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6690115&isnumber=6689334
  • Yu-Sheng Lin; Cheng-Han Wu; Chia-Chen Huang; Chun-Lin Lu; Yeong-Her Wang, "Ultra-Wide Locking Range Regenerative Frequency Dividers With Quadrature-Injection Current-Mode-Logic Loop Divider," Microwave and Wireless Components Letters, IEEE , vol.24, no.3, pp.179,181, March 2014. doi: 10.1109/LMWC.2013.2291864 The ÷3 and ÷5 regenerative frequency dividers (RFDs) with ultra-wide locking ranges are presented. The proposed dividers were fabricated by a TSMC 90 nm CMOS process, using ÷2 and ÷4 quadrature-injected current-mode-logic loop dividers to widen the locking ranges. The dividers also achieved quadrature input and quadrature output. Using a 1.2 V supply voltage, the power consumptions of the ÷3 and the ÷5 divider cores were 10.2 and 14.8 mW, respectively. Without using the tuning techniques, the measured locking ranges for the ÷3 and the ÷5 dividers were from 9 to 14.7 GHz (48.1%) and 7.2 to 19 GHz (90.1%), respectively. The phase deviation of the quadrature outputs for the two dividers were less than 0.8 deg and 1.1 deg. Compared with the reported data, the outstanding figure-of-merit values of the proposed ÷3 and ÷5 RFDs can be observed. 
Keywords: CMOS integrated circuits; circuit tuning; cores; current-mode circuits; frequency dividers; integrated circuit design; integrated circuit measurement; logic circuits; microwave integrated circuits; RFD; TSMC CMOS process; core; frequency 7.2 GHz to 19 GHz; integrated circuit design; phase deviation; power 10.2 mW; power 14.8 mW; power consumption; quadrature-injection current-mode-logic loop divider; size 90 nm; tuning technique; ultrawide locking range regenerative frequency divider; voltage 1.2 V; CMOS integrated circuits; Frequency measurement; Mixers; Noise measurement; Phase measurement; Phase noise; CMOS; quadrature input and quadrature output (QIQO); quadrature-injected current-mode-logic; regenerative frequency divider (ID#:14-2387) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6710207&isnumber=6759771



Machine Learning



Machine learning offers potential efficiencies and is an important tool in data mining. However, the "learned" or derived data must maintain integrity. Machine learning can also be used to identify threats and attacks. Research in this field is of particular interest in sensitive industries, including healthcare. The works cited here appeared in the first half of 2014.

  • Mozaffari Kermani, M.; Sur-Kolay, S.; Raghunathan, A; Jha, N.K., "Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare," Biomedical and Health Informatics, IEEE Journal of, vol. PP, no.99, pp.1,1, July 2014. doi: 10.1109/JBHI.2014.2344095 Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive and, thus, any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. Hindrance of a diagnosis may have life threatening consequences and could cause distrust. On the other hand, not only may a false diagnosis prompt users to distrust the machine learning algorithm and even abandon the entire system but also such a false positive classification may cause patient distress. In this paper, we present a systematic, algorithm independent approach for mounting poisoning attacks across a wide range of machine learning algorithms and healthcare datasets. The proposed attack procedure generates input data, which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class), or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at a lower efficacy. 
We establish the effectiveness of the proposed attacks using a suite of six machine learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness. Keywords: (not provided) (ID#:14-2388) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868201&isnumber=6363502
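A toy version of a poisoning attack makes the threat above concrete: with a nearest-centroid classifier, an attacker who can append a handful of mislabeled training points drags one class centroid toward a target sample and flips its predicted class. This two-feature example is invented for illustration and is far simpler than the paper's algorithm-independent attack procedure:

```python
# Toy nearest-centroid classifier and a label-poisoning attack on it.
# Feature values and class names are hypothetical.

def centroid(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def predict(train, x):
    by_label = {}
    for point, label in train:
        by_label.setdefault(label, []).append(point)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, x))
    return min(by_label, key=lambda lbl: dist(centroid(by_label[lbl])))

clean = [((0.0, 0.0), "healthy"), ((1.0, 1.0), "healthy"),
         ((4.0, 4.0), "sick"), ((5.0, 5.0), "sick")]
target = (3.0, 3.0)
print(predict(clean, target))            # 'sick'

# Attacker injects mislabeled points near the target region, dragging
# the "healthy" centroid toward it: a targeted classification error.
poison = [((3.0, 3.0), "healthy")] * 4
print(predict(clean + poison, target))   # 'healthy'
```

The countermeasure direction sketched in the abstract follows naturally: track accuracy metrics over time and flag training updates that shift them anomalously.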
  • Baughman, AK.; Chuang, W.; Dixon, K.R.; Benz, Z.; Basilico, J., "DeepQA Jeopardy! Gamification: A Machine-Learning Perspective," Computational Intelligence and AI in Games, IEEE Transactions on , vol.6, no.1, pp.55,66, March 2014. doi: 10.1109/TCIAIG.2013.2285651 DeepQA is a large-scale natural language processing (NLP) question-and-answer system that responds across a breadth of structured and unstructured data, from hundreds of analytics that are combined with over 50 models, trained through machine learning. After the 2011 historic milestone of defeating the two best human players in the Jeopardy! game show, the technology behind IBM Watson, DeepQA, is undergoing gamification into real-world business problems. Gamifying a business domain for Watson is a composite of functional, content, and training adaptation for nongame play. During domain gamification for medical, financial, government, or any other business, each system change affects the machine-learning process. As opposed to the original Watson Jeopardy!, whose class distribution of positive-to-negative labels is 1:100, in adaptation the computed training instances, question-and-answer pairs transformed into true-false labels, result in a very low positive-to-negative ratio of 1:100 000. Such initial extreme class imbalance during domain gamification poses a big challenge for the Watson machine-learning pipelines. The combination of ingested corpus sets, question-and-answer pairs, configuration settings, and NLP algorithms contribute toward the challenging data state. We propose several data engineering techniques, such as answer key vetting and expansion, source ingestion, oversampling classes, and question set modifications to increase the computed true labels. In addition, algorithm engineering, such as an implementation of the Newton-Raphson logistic regression with a regularization term, relaxes the constraints of class imbalance during training adaptation. 
We conclude by empirically demonstrating that data and algorithm engineering are complementary and indispensable to overcome the challenges in this first Watson gamification for real-world business problems. Keywords: business data processing; computer games; learning (artificial intelligence); natural language processing; question answering (information retrieval); text analysis; DeepQA Jeopardy! gamification; NLP algorithms; NLP question-and-answer system; Newton-Raphson logistic regression; Watson gamification; Watson machine-learning pipelines; algorithm engineering; business domain; configuration settings; data engineering techniques; domain gamification; extreme class imbalance; ingested corpus sets; large-scale natural language processing question-and-answer system; machine-learning process; nongame play; positive-to-negative ratio; question-and-answer pairs; real-world business problems; regularization term; structured data; training instances; true-false labels; unstructured data; Accuracy; Games; Logistics; Machine learning algorithms; Pipelines; Training; Gamification; machine learning; natural language processing (NLP); pattern recognition (ID#:14-2389) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6632881&isnumber=6766678
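The "Newton-Raphson logistic regression with a regularization term" mentioned in the abstract can be sketched in a few lines for the one-feature case, where the 2x2 Hessian inverts in closed form. The data, regularization strength, and iteration count here are illustrative, not Watson's actual configuration:

```python
import math

# Minimal sketch: L2-regularized logistic regression fitted by
# Newton-Raphson on one feature plus a bias term.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit(xs, ys, lam=0.1, iters=25):
    w, b = 0.0, 0.0
    for _ in range(iters):
        # Gradient of negative log-likelihood + (lam/2)(w^2 + b^2)
        gw = gb = 0.0
        hww = hwb = hbb = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            gw += (p - y) * x
            gb += (p - y)
            s = p * (1 - p)              # Hessian weight per sample
            hww += s * x * x
            hwb += s * x
            hbb += s
        gw += lam * w
        gb += lam * b
        hww += lam                       # regularizer keeps Hessian well-conditioned
        hbb += lam
        det = hww * hbb - hwb * hwb      # invert the 2x2 Hessian in closed form
        dw = (hbb * gw - hwb * gb) / det
        db = (hww * gb - hwb * gw) / det
        w, b = w - dw, b - db            # Newton step
    return w, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit(xs, ys)
print(sigmoid(w * 0.5 + b) < 0.5, sigmoid(w * 4.5 + b) > 0.5)  # True True
```

The regularization term plays the same role here as in the abstract: it bounds the weights on separable or heavily imbalanced data, where unregularized Newton steps would otherwise diverge.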
  • Stevanovic, M.; Pedersen, J.M., "An Efficient Flow-Based Botnet Detection Using Supervised Machine Learning," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp.797, 801, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785439 Botnet detection represents one of the most crucial prerequisites of successful botnet neutralization. This paper explores how accurate and timely detection can be achieved by using supervised machine learning as the tool of inferring about malicious botnet traffic. In order to do so, the paper introduces a novel flow-based detection system that relies on supervised machine learning for identifying botnet network traffic. For use in the system we consider eight highly regarded machine learning algorithms, indicating the best performing one. Furthermore, the paper evaluates how much traffic needs to be observed per flow in order to capture the patterns of malicious traffic. The proposed system has been tested through the series of experiments using traffic traces originating from two well-known P2P botnets and diverse non-malicious applications. The results of experiments indicate that the system is able to accurately and timely detect botnet traffic using purely flow-based traffic analysis and supervised machine learning. Additionally, the results show that in order to achieve accurate detection traffic flows need to be monitored for only a limited time period and number of packets per flow. This indicates a strong potential of using the proposed approach within a future on-line detection framework. 
Keywords: computer network security; invasive software; learning (artificial intelligence); peer-to-peer computing; telecommunication traffic; P2P botnets; botnet neutralization; flow-based botnet detection; flow-based traffic analysis; malicious botnet network traffic identification; nonmalicious applications; packet flow; supervised machine learning; Accuracy; Bayes methods; Feature extraction; Protocols; Support vector machines; Training; Vegetation; Botnet; Botnet detection; Machine learning; Traffic analysis; Traffic classification (ID#:14-2390) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785439&isnumber=6785290
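As a concrete illustration of purely flow-based detection, the sketch below summarizes a flow from its first few packets and classifies it with a toy nearest-centroid rule. The feature set and the classifier are stand-ins chosen for brevity; the paper benchmarks eight established supervised algorithms rather than this one:

```python
import math
from statistics import mean, pstdev

def flow_features(packet_sizes, timestamps):
    """Summarize one traffic flow from its first observed packets; the
    paper's point is that a short window like this can already suffice."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return [
        float(len(packet_sizes)),      # packets observed
        mean(packet_sizes),            # mean packet size
        pstdev(packet_sizes),          # packet-size variability
        mean(gaps) if gaps else 0.0,   # mean inter-arrival time
    ]

class NearestCentroid:
    """Toy classifier standing in for the supervised learners the paper tests."""
    def fit(self, X, y):
        self.centroids = {
            label: [mean(col) for col in zip(*(x for x, l in zip(X, y) if l == label))]
            for label in set(y)
        }
        return self

    def predict(self, x):
        return min(self.centroids, key=lambda l: math.dist(x, self.centroids[l]))
```

The design point mirrors the paper's finding: features computed from a bounded number of packets per flow, rather than full payloads, can separate botnet from benign traffic.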
  • Aroussi, S.; Mellouk, A, "Survey on Machine Learning-Based QoE-QoS Correlation Models," Computing, Management and Telecommunications (ComManTel), 2014 International Conference on, pp.200,204, 27-29 April 2014. doi: 10.1109/ComManTel.2014.6825604 Machine learning provides a theoretical and methodological framework to quantify the relationship between user QoE (Quality of Experience) and network QoS (Quality of Service). This paper presents an overview of QoE-QoS correlation models based on machine learning techniques. According to the learning type, we propose a categorization of correlation models. For each category, we review the main existing works by citing deployed learning methods and model parameters (QoE measurement, QoS parameters and service type). Moreover, the survey will provide researchers with the latest trends and findings in this field. Keywords: learning (artificial intelligence); quality of experience; quality of service; telecommunication computing; QoE measurement; QoE-QoS correlation model; QoS parameter; QoS service type; machine learning; quality of experience; quality of service; Correlation; Data models; Packet loss; Predictive models; Quality of service; Streaming media; Correlation model; Machine Learning; Quality of Experience; Quality of Service (ID#:14-2391) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825604&isnumber=6825559
  • Alsheikh, M.A; Lin, S.; Niyato, D.; Tan, Hwee-Pink, "Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications," Communications Surveys & Tutorials, IEEE, vol. PP, no.99, pp.1,1, April 2014. doi: 10.1109/COMST.2014.2320099 Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review over the period 2002-2013 of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. Keywords: Algorithm design and analysis; Classification algorithms; Clustering algorithms; Machine learning algorithms; Principal component analysis; Routing; Wireless sensor networks (ID#:14-2392) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805162&isnumber=5451756
  • Fangming Ye; Zhaobo Zhang; Chakrabarty, K.; Xinli Gu, "Board-Level Functional Fault Diagnosis Using Multikernel Support Vector Machines and Incremental Learning," Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on , vol.33, no.2, pp.279,290, Feb. 2014. doi: 10.1109/TCAD.2013.2287184 Advanced machine learning techniques offer an unprecedented opportunity to increase the accuracy of board-level functional fault diagnosis and reduce product cost through successful repair. Ambiguous or incorrect diagnosis results lead to long debug times and even wrong repair actions, which significantly increase repair cost. We propose a smart diagnosis method based on multikernel support vector machines (MK-SVMs) and incremental learning. The MK-SVM method leverages a linear combination of single kernels to achieve accurate faulty-component classification based on the errors observed. The MK-SVMs thus generated can also be updated based on incremental learning, which allows the diagnosis system to quickly adapt to new error observations and provide even more accurate fault diagnosis. Two complex boards from industry, currently in volume production, are used to validate the proposed diagnosis approach in terms of diagnosis accuracy (success rate) and quantifiable improvements over previously proposed machine-learning methods based on several single-kernel SVMs and artificial neural networks. 
Keywords: electronic engineering computing; fault diagnosis; learning (artificial intelligence); neural nets; printed circuit testing; support vector machines; MK-SVM method; advanced machine learning technique; artificial neural network; board level functional fault diagnosis; faulty component classification; linear combination; multikernel support vector machine; smart diagnosis method; Accuracy; Circuit faults; Fault diagnosis; Kernel; Maintenance engineering; Support vector machines; Training; Board-level fault diagnosis; functional failures; incremental learning; kernel; machine learning; support-vector machines (ID#:14-2393) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6714627&isnumber=6714471
  • Breuker, D., "Towards Model-Driven Engineering for Big Data Analytics -- An Exploratory Analysis of Domain-Specific Languages for Machine Learning," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.758,767, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.101 Graphical models and general purpose inference algorithms are powerful tools for moving from imperative towards declarative specification of machine learning problems. Although graphical models define the principle information necessary to adapt inference algorithms to specific probabilistic models, entirely model-driven development is not yet possible. However, generating executable code from graphical models could have several advantages. It could reduce the skills necessary to implement probabilistic models and may speed up development processes. Both advantages address pressing industry needs. They come along with increased supply of data scientist labor, the demand of which cannot be fulfilled at the moment. To explore the opportunities of model-driven big data analytics, I review the main modeling languages used in machine learning as well as inference algorithms and corresponding software implementations. Gaps hampering direct code generation from graphical models are identified and closed by proposing an initial conceptualization of a domain-specific modeling language. 
Keywords: Big Data; computer graphics; data analysis; inference mechanisms; learning (artificial intelligence);program compilers; specification languages; big data analytics; direct code generation; domain-specific languages; domain-specific modeling language; general purpose inference algorithms; graphical models; machine learning problems; model-driven development; model-driven engineering; modeling languages; probabilistic models; Adaptation models; Computational modeling; Data models; Graphical models; Inference algorithms; Random variables; Unified modeling language; Graphical Models; Machine Learning; Model-driven Engineering (ID#:14-2394) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758697&isnumber=6758592
  • Aydogan, E.; Sen, S., "Analysis of Machine Learning Methods On Malware Detection," Signal Processing and Communications Applications Conference (SIU), 2014 22nd , vol., no., pp.2066,2069, 23-25 April 2014. doi: 10.1109/SIU.2014.6830667 Nowadays, one of the most important security threats are new, unseen malicious executables. Current anti-virus systems have been fairly successful against known malicious softwares whose signatures are known. However they are very ineffective against new, unseen malicious softwares. In this paper, we aim to detect new, unseen malicious executables using machine learning techniques. We extract distinguishing structural features of softwares and, employ machine learning techniques in order to detect malicious executables. Keywords: invasive software; learning (artificial intelligence); anti-virus systems; machine learning methods; malicious executables detection; malicious softwares; malware detection; security threats; software structural features; Conferences; Internet; Malware; Niobium; Signal processing; Software; machine learning; malware analysis and detection (ID#:14-2395) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830667&isnumber=6830164
  • Kandasamy, K.; Koroth, P., "An Integrated Approach To Spam Classification On Twitter Using URL Analysis, Natural Language Processing And Machine Learning Techniques," Electrical, Electronics and Computer Science (SCEECS), 2014 IEEE Students' Conference on, vol., no., pp.1,5, 1-2 March 2014. doi: 10.1109/SCEECS.2014.6804508 In the present day world, people are so much habituated to Social Networks. Because of this, it is very easy to spread spam contents through them. One can access the details of any person very easily through these sites. No one is safe inside the social media. In this paper we are proposing an application which uses an integrated approach to the spam classification in Twitter. The integrated approach comprises the use of URL analysis, natural language processing and supervised machine learning techniques. In short, this is a three step process. Keywords: classification; learning (artificial intelligence); natural language processing; social networking (online); unsolicited e-mail; Twitter; URL analysis; natural language processing; social media; social networks; spam classification; spam contents; supervised machine learning techniques; Accuracy; Machine learning algorithms; Natural language processing; Training; Twitter; Unsolicited electronic mail; URLs; machine learning; natural language processing; tweets (ID#:14-2396) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6804508&isnumber=6804412
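The three-step integration the authors describe (URL analysis, then lightweight NLP, then a learned decision) might look roughly like the sketch below. The shortener blocklist, spam lexicon, weights, and threshold are all invented placeholders, not the paper's actual features:

```python
import re

SHORTENERS = {"bit.ly", "tinyurl.com", "goo.gl"}    # assumed blocklist
SPAM_TOKENS = {"free", "win", "click", "follower"}  # assumed lexicon

def url_score(tweet):
    """Step 1, URL analysis: count links whose host is a known shortener."""
    hosts = re.findall(r"https?://([^/\s]+)", tweet.lower())
    return sum(h in SHORTENERS for h in hosts)

def text_score(tweet):
    """Step 2, lightweight NLP: tokenize and count spam-lexicon hits."""
    tokens = re.findall(r"[a-z']+", tweet.lower())
    return sum(t.rstrip("s") in SPAM_TOKENS for t in tokens)

def classify(tweet, threshold=2):
    """Step 3, the learned decision collapsed to a fixed linear rule."""
    return "spam" if 2 * url_score(tweet) + text_score(tweet) >= threshold else "ham"
```

In the paper's full design the third step would be a trained supervised model over such features rather than fixed weights.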
  • Singh, N.; Chandra, N., "Integrating Machine Learning Techniques to Constitute a Hybrid Security System," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, vol., no., pp.1082,1087, 7-9 April 2014. doi: 10.1109/CSNT.2014.221 Computer Security has been discussed and improvised in many forms and using different techniques as well as technologies. The enhancements keep on adding as the security remains the fastest updating unit in a computer system. In this paper we propose a model for securing the system along with the network and enhance it more by applying machine learning techniques SVM (support vector machine) and ANN (Artificial Neural Network). Both the techniques are used together to generate results which are appropriate for analysis purpose and thus, prove to be the milestone for security. Keywords: learning (artificial intelligence); neural nets; security of data; support vector machines; ANN; SVM; artificial neural network; computer security; hybrid security system; machine learning techniques; support vector machine; Artificial neural networks; Intrusion detection; Neurons; Probabilistic logic; Support vector machines; Training; Artificial neural network; Host logs; Machine Learning; Network logs; Support vector machine (ID#:14-2397) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821566&isnumber=6821334
  • Asmitha, K.A; Vinod, P., "A Machine Learning Approach For Linux Malware Detection," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on , vol., no., pp.825,830, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781387 The increasing number of malware is becoming a serious threat to the private data as well as to the expensive computer resources. Linux is a Unix based machine and gained popularity in recent years. The malware attack targeting Linux has been increased recently and the existing malware detection methods are insufficient to detect malware efficiently. We are introducing a novel approach using machine learning for identifying malicious Executable Linkable Files. The system calls are extracted dynamically using system call tracer Strace. In this approach we identified best feature set of benign and malware specimens to build classification model that can classify malware and benign efficiently. The experimental results are promising which depict a classification accuracy of 97% to identify malicious samples. Keywords: Linux; invasive software; learning (artificial intelligence);pattern classification; Linux malware detection; Unix based machine; benign specimens; classification model; machine learning approach; malicious executable linkable files identification; malware specimens; system call tracer Strace; Accuracy; Malware; Testing; dynamic analysis; feature selection; system call (ID#:14-2398) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781387&isnumber=6781240
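A minimal version of the dynamic feature-extraction step, turning strace output into a fixed-length system-call frequency vector that a classifier could consume, is sketched here. The regular expression and the vocabulary are assumptions, not the authors' selected best feature set:

```python
import re
from collections import Counter

def syscall_histogram(strace_output, vocab):
    """Turn raw strace text into a normalized frequency vector over a
    fixed system-call vocabulary. The vocabulary passed in plays the role
    of the feature set; the paper selects its own best-performing one."""
    calls = re.findall(r"^(\w+)\(", strace_output, flags=re.MULTILINE)
    counts = Counter(calls)
    total = sum(counts.values()) or 1
    return [counts[name] / total for name in vocab]
```

A vector like this, computed per ELF sample run under strace, is the kind of input on which the abstract's classification model could be trained.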
  • Esmalifalak, M.; Liu, L.; Nguyen, N.; Zheng, R.; Han, Z., "Detecting Stealthy False Data Injection Using Machine Learning in Smart Grid," Systems Journal, IEEE , vol. PP, no.99, pp.1,9, August 2014. doi: 10.1109/JSYST.2014.2341597 Aging power industries, together with the increase in demand from industrial and residential customers, are the main incentive for policy makers to define a road map to the next-generation power system called the smart grid. In the smart grid, the overall monitoring costs will be decreased, but at the same time, the risk of cyber attacks might be increased. Recently, a new type of attacks (called the stealth attack) has been introduced, which cannot be detected by the traditional bad data detection using state estimation. In this paper, we show how normal operations of power networks can be statistically distinguished from the case under stealthy attacks. We propose two machine-learning-based techniques for stealthy attack detection. The first method utilizes supervised learning over labeled data and trains a distributed support vector machine (SVM). The design of the distributed SVM is based on the alternating direction method of multipliers, which offers provable optimality and convergence rate. The second method requires no training data and detects the deviation in measurements. In both methods, principal component analysis is used to reduce the dimensionality of the data to be processed, which leads to lower computation complexities. The results of the proposed detection methods on IEEE standard test systems demonstrate the effectiveness of both schemes. Keywords: Anomaly detection; bad data detection (BDD); power system state estimation; support vector machines (SVMs) (ID#:14-2399) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880823&isnumber=4357939
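The paper's training-free second method detects deviation of measurements from a low-dimensional subspace found by principal component analysis. A sketch of that idea follows; the function names and the residual-norm score are assumptions, not the authors' exact detector:

```python
import numpy as np

def pca_fit(X, k):
    """Learn a k-dimensional principal subspace from normal-operation
    measurements (rows of X)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]            # mean and top-k principal directions

def deviation_score(x, mu, components):
    """Norm of the residual after projecting a new measurement onto the
    learned subspace; a large residual flags a suspect injection."""
    r = x - mu
    recon = components.T @ (components @ r)
    return float(np.linalg.norm(r - recon))
```

Reducing to k components first is also how both of the paper's methods cut computational complexity before detection.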



Multidimensional Signal Processing

Multidimensional Signal Processing


Research in multidimensional signal processing deals with issues such as those arising in automatic target detection and recognition problems, geophysical inverse problems, and medical estimation problems. Its goal is to develop methods to extract information from diverse data sources amid uncertainty. Research cited here was published or presented between January and September, 2014. It covers a range of subtopics including hidden communications channels, wave digital filters, SAR interferometry, and SAR tomography.

  • Seleym, A, "High-rate Hidden Communications Channel: A Multi-Dimensional Signaling Approach," Integrated Communications, Navigation and Surveillance Conference (ICNS), 2014 , vol., no., pp.W4-1,W4-8, 8-10 April 2014. doi: 10.1109/ICNSurv.2014.6820026 Hidden communications is one recent method to provide reliable security in transferring information between entities. Data hiding in media carriers is a power limited and band-limited system, as a consequence, there is a tradeoff between the host media perceptual fidelity and the transferred data error rate. In this paper, a developed embedding approach is proposed by considering the altering process as a signaling communications problem. This approach uses a structured scheme of Multiple Trellis-Coded Quantization jointed with Multiple Trellis-Coded Modulation (MTCQ/MTCM) to generate the stego-cover space. The developed scheme allows transferring a high volume of information without causing a severe perceptual or statistical degradation, and also be robust to additive noise attacks. Keywords: quantisation (signal); steganography; trellis coded modulation; additive noise attack; data hiding; high rate hidden communications channel; host media perceptual fidelity; media carrier; multidimensional signaling; multiple trellis coded modulation; multiple trellis coded quantization; reliable security; signaling communications problem; stego cover space; Constellation diagram; Encoding; Noise; Nonlinear distortion; Quantization (signal); Vectors (ID#:14-2400) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6820026&isnumber=6819972
  • Balasa, F.; Abuaesh, N.; Gingu, C.V.; Nasui, D.V., "Leakage-aware Scratch-Pad Memory Banking For Embedded Multidimensional Signal Processing," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on , vol., no., pp.5026,5030, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854559 Partitioning a memory into multiple banks that can be independently accessed is an approach mainly used for the reduction of the dynamic energy consumption. When leakage energy comes into play as well, the idle memory banks must be put in a low-leakage `dormant' state to save static energy when not accessed. The energy savings must be large enough to compensate the energy overhead spent by changing the bank status from active to dormant, then back to active again. This paper addresses the problem of energy-aware on-chip memory banking, taking into account - during the exploration of the search space - the idleness time intervals of the data mapped into the memory banks. As on-chip storage, we target scratch-pad memories (SPMs) since they are commonly used in embedded systems as an alternative to caches. The proposed approach proved to be computationally fast and very efficient when tested for several data-intensive applications, whose behavioral specifications contain multidimensional arrays as main data structures. 
Keywords: embedded systems; power aware computing; signal processing; storage management; SPMs; data structures; dynamic energy consumption reduction; embedded multidimensional signal processing; embedded systems; energy-aware on-chip memory banking; leakage energy; leakage-aware scratch-pad memory banking; low-leakage dormant state; multidimensional arrays; on-chip storage; Arrays; Banking; Energy consumption; Lattices; Memory management; Signal processing algorithms; System-on-chip memory banking; memory management; multidimensional signal processing; scratch-pad memory (ID#:14-2401) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854559&isnumber=6853544
  • Schwerdtfeger, T.; Kummert, A, "A Multidimensional Signal Processing Approach To Wave Digital Filters With Topology-Related Delay-Free Loops," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on , vol., no., pp.389,393, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6853624 To avoid the occurrence of noncomputable, delay-free loops, classic Wave Digital Filters (WDFs) usually exhibit a tree-like topology. For the realization of prototype circuits that contain ring-like subnetworks, prior approaches require the decomposition of the structure and thus neglect the notion of modularity of the original Wave Digital concept. In this paper, a new modular approach based on Multidimensional Wave Digital Filters (MDWDFs) is presented. For this, the contractivity property of WDFs is shown. On that basis, the new approach is studied with respect to possible side-effects and an appropriate modification is proposed that counteracts these effects and significantly improves the convergence behaviour. Keywords: digital filters; network topology; delay-free loops; multidimensional signal processing; multidimensional wave digital filter; ring-like subnetwork; structure decomposition; topology related loops; Convergence; Delays; Digital filters; Mathematical model; Ports (Computers); Prototypes; Topology; Bridged-T Model; Contractivity; Delay-Free Loop; Multidimensional; Wave Digital Filter (ID#:14-2402) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853624&isnumber=6853544
  • Holt, K.M., "Total Nuclear Variation and Jacobian Extensions of Total Variation for Vector Fields," Image Processing, IEEE Transactions on, vol.23, no.9, pp.3975,3989, Sept. 2014. doi: 10.1109/TIP.2014.2332397 We explore a class of vectorial total variation (VTV) measures formed as the spatial sum of a pixel-wise matrix norm of the Jacobian of a vector field. We give a theoretical treatment that indicates that, while color smearing and affine-coupling bias (often reported as gray-scale bias) are typically cited as drawbacks for VTV, these are actually fundamental to smoothing vector direction (i.e., smoothing hue and saturation in color images). In addition, we show that encouraging different vector channels to share a common gradient direction is equivalent to minimizing Jacobian rank. We thus propose total nuclear variation (TNV), and since nuclear norm is the convex envelope of matrix rank, we argue that TNV is the optimal convex regularizer for enforcing shared directions. We also propose extended Jacobians, which use larger neighborhoods than the conventional finite difference operator, and we discuss efficient VTV optimization algorithms. In simple color image denoising experiments, TNV outperformed other common VTV regularizers, and was further improved by using extended Jacobians. TNV was also competitive with the method of nonlocal means, often outperforming it by 0.25-2 dB when using extended Jacobians. Keywords: Color; Image color analysis; Image reconstruction; Jacobian matrices; Materials; TV; Vectors; Color imaging; convex optimization; denoising; image reconstruction; inverse problems; multidimensional signal processing; regularization; total variation; vector-valued images (ID#:14-2403) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841619&isnumber=6862127
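The quantity this paper proposes, the spatial sum of the nuclear norm of the per-pixel Jacobian of a vector-valued image, can be computed directly. The sketch below uses forward differences with replicated borders, a boundary choice that is an assumption rather than the paper's (which also proposes extended Jacobians over larger neighborhoods):

```python
import numpy as np

def total_nuclear_variation(img):
    """Sum over pixels of the nuclear norm (sum of singular values) of the
    2 x C Jacobian assembled from forward differences of an H x W x C image.
    Penalizing the nuclear norm pushes each per-pixel Jacobian toward low
    rank, i.e. toward channels sharing a common gradient direction."""
    dx = np.diff(img, axis=1, append=img[:, -1:])  # horizontal differences
    dy = np.diff(img, axis=0, append=img[-1:])     # vertical differences
    J = np.stack([dx, dy], axis=-2)                # (H, W, 2, C) Jacobians
    s = np.linalg.svd(J, compute_uv=False)         # singular values per pixel
    return float(s.sum())
```

For a single-channel image this reduces to ordinary isotropic total variation, since the lone singular value is the gradient magnitude.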
  • Lombardini, F.; Cai, F., "Temporal Decorrelation-Robust SAR Tomography," Geoscience and Remote Sensing, IEEE Transactions on , vol.52, no.9, pp.5412,5421, Sept. 2014. doi: 10.1109/TGRS.2013.2288689 Much interest is continuing to grow in advanced interferometric synthetic aperture radar (SAR) methods for full 3-D imaging, particularly of volumetric forest scatterers. Multibaseline (MB) SAR tomographic elevation beam forming, i.e., spatial spectral estimation, is a promising technique in this framework. In this paper, the important effect of temporal decorrelation during the repeat-pass MB acquisition is tackled, analyzing the impact on superresolution (MUSIC) tomography with limited sparse data. Moreover, new tomographic methods robust to temporal decorrelation phenomena are proposed, exploiting the advanced differential tomography concept that produces "space-time" signatures of scattering dynamics in the SAR cell. To this aim, a 2-D version of MUSIC and a generalized MUSIC method matched to nonline spectra are applied to decouple the nuisance temporal signal components in the spatial spectral estimation. Simulated analyses are reported for different geometrical and temporal parameters, showing that the new concept of restoring tomographic performance in temporal decorrelating forest scenarios through differential tomography is promising. 
Keywords: array signal processing; decorrelation; forestry; image matching; image resolution; image restoration; optical tomography; radar imaging; synthetic aperture radar; 2D MUSIC version; 3D imaging; MB SAR tomographic elevation beam forming; SAR; interferometric synthetic aperture radar method; multibaseline SAR tomographic elevation beam forming; nuisance temporal signal component; repeat-pass MB acquisition; space-time signature; spatial spectral estimation; superresolution tomography; temporal decorrelation-robust SAR tomography; volumetric forest scattering dynamics; Decorrelation; Estimation; Frequency estimation; Multiple signal classification; Synthetic aperture radar; Tomography; Decorrelation; electromagnetic tomography; multidimensional signal processing; radar interferometry; spectral analysis; synthetic aperture radar (SAR) (ID#:14-2404) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6679227&isnumber=6756973
  • Hao Fang; Vorobyov, S.A; Hai Jiang; Taheri, O., "Permutation Meets Parallel Compressed Sensing: How to Relax Restricted Isometry Property for 2D Sparse Signals," Signal Processing, IEEE Transactions on , vol.62, no.1, pp.196,210, Jan.1, 2014. doi: 10.1109/TSP.2013.2284762 Traditional compressed sensing considers sampling a 1D signal. For a multidimensional signal, if reshaped into a vector, the required size of the sensing matrix becomes dramatically large, which increases the storage and computational complexity significantly. To solve this problem, the multidimensional signal is reshaped into a 2D signal, which is then sampled and reconstructed column by column using the same sensing matrix. This approach is referred to as parallel compressed sensing, and it has much lower storage and computational complexity. For a given reconstruction performance of parallel compressed sensing, if a so-called acceptable permutation is applied to the 2D signal, the corresponding sensing matrix is shown to have a smaller required order of restricted isometry property condition, and thus, lower storage and computation complexity at the decoder are required. A zigzag-scan-based permutation is shown to be particularly useful for signals satisfying the newly introduced layer model. As an application of the parallel compressed sensing with the zigzag-scan-based permutation, a video compression scheme is presented. It is shown that the zigzag-scan-based permutation increases the peak signal-to-noise ratio of reconstructed images and video frames. 
Keywords: compressed sensing; matrix algebra; parallel processing; 2D sparse signals; computational complexity; image reconstruction; isometry property; multidimensional signal; parallel compressed sensing; peak signal-to-noise ratio; sensing matrix; video compression scheme; video frames; zigzag scan based permutation; Compressed sensing; Computational complexity; Educational institutions; Image reconstruction; Sensors; Size measurement; Sparse matrices; Compressed sensing; multidimensional signal processing; parallel processing; permutation (ID#:14-2405) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6619412&isnumber=6678249
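The zigzag-scan-based permutation at the heart of this scheme reorders a 2D signal along anti-diagonals, JPEG-style, so that large coefficients end up spread across the columns that are then sensed independently with the same matrix. A sketch (function names are illustrative):

```python
import numpy as np

def zigzag_indices(h, w):
    """Index pairs of an h x w grid in JPEG-style zigzag order:
    anti-diagonals traversed in alternating directions."""
    order = []
    for d in range(h + w - 1):
        diag = [(i, d - i) for i in range(max(0, d - w + 1), min(h, d + 1))]
        order.extend(diag if d % 2 else diag[::-1])
    return order

def zigzag_permute(X):
    """Read a 2D signal along the zigzag scan and refold it to the same
    shape, so coefficients of similar magnitude (e.g. low-frequency DCT
    terms clustered in a corner) are spread across the columns that are
    sampled column by column in parallel compressed sensing."""
    flat = np.array([X[i, j] for i, j in zigzag_indices(*X.shape)])
    return flat.reshape(X.shape)
```

Balancing per-column sparsity in this way is what lets the shared sensing matrix get by with a weaker restricted isometry property condition, per the paper's argument.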
  • Lyons, S.M.J.; Sarkka, S.; Storkey, A.J., "Series Expansion Approximations of Brownian Motion for Non-Linear Kalman Filtering of Diffusion Processes," Signal Processing, IEEE Transactions on, vol.62, no.6, pp.1514,1524, March 15, 2014. doi: 10.1109/TSP.2014.2303430 In this paper, we describe a novel application of sigma-point methods to continuous-discrete filtering. The nonlinear continuous-discrete filtering problem is often computationally intractable to solve. Assumed density filtering methods attempt to match statistics of the filtering distribution to some set of more tractable probability distributions. Filters such as these usually decompose the problem into two sub-problems. The first of these is a prediction step, in which one uses the known dynamics of the signal to predict its state at time t_(k+1) given observations up to time t_k. In the second step, one updates the prediction upon arrival of the observation at time t_(k+1). The aim of this paper is to describe a novel method that improves the prediction step. We decompose the Brownian motion driving the signal in a generalised Fourier series, which is truncated after a number of terms. This approximation to Brownian motion can be described using a relatively small number of Fourier coefficients, and allows us to compute statistics of the filtering distribution with a single application of a sigma-point method. Assumed density filters that exist in the literature usually rely on discretisation of the signal dynamics followed by iterated application of a sigma point transform (or a limiting case thereof). Iterating the transform in this manner can lead to loss of information about the filtering distribution in highly non-linear settings. We demonstrate that our method is better equipped to cope with such problems.
Keywords: Fourier series; Kalman filters; approximation theory; iterative methods; nonlinear filters; statistical distributions; Brownian motion approximation; Fourier coefficients; assumed density filtering methods; assumed density filters; diffusion processes; generalised Fourier series; nonlinear Kalman filtering; nonlinear continuous-discrete filtering problem; series expansion approximations; sigma-point methods; signal dynamic discretisation; tractable probability distributions; Approximation methods; Differential equations; Kalman filters; Mathematical model; Noise; Stochastic processes; Transforms; Kalman filters; Markov processes; multidimensional signal processing; nonlinear filters (ID#:14-2406) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6728679&isnumber=6744712
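The core construction, truncating a generalised Fourier series of the driving Brownian motion, can be illustrated with the standard Karhunen-Loeve sine basis on [0, 1]; this particular basis is a common textbook choice and is only assumed here to correspond to the authors' expansion:

```python
import numpy as np

def brownian_kl(t, coeffs):
    """Truncated series expansion of Brownian motion on [0, 1]:
    W(t) ~ sum_k Z_k * sqrt(2) * sin((k - 1/2) pi t) / ((k - 1/2) pi),
    with `coeffs` playing the role of the i.i.d. N(0, 1) coefficients Z_k."""
    k = np.arange(1, len(coeffs) + 1)
    freq = (k - 0.5) * np.pi
    basis = np.sqrt(2.0) * np.sin(np.outer(t, freq)) / freq  # (len(t), K)
    return basis @ coeffs
```

Sampling `coeffs` from a standard normal yields approximate Brownian paths whose variance at time t approaches t as the truncation grows; representing the path by this finite coefficient vector is what allows a single application of a sigma-point transform.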
  • Xuefeng Liu; Bourennane, S.; Fossati, C., "Reduction of Signal-Dependent Noise From Hyperspectral Images for Target Detection," Geoscience and Remote Sensing, IEEE Transactions on , vol.52, no.9, pp.5396,5411, Sept. 2014. doi: 10.1109/TGRS.2013.2288525 Tensor-decomposition-based methods for reducing random noise components in hyperspectral images (HSIs), both dependent and independent from signal, are proposed. In this paper, noise is described by a parametric model that accounts for the dependence of noise variance on the signal. This model is thus suitable for the cases where photon noise is dominant compared with the electronic noise contribution. To denoise HSIs distorted by both signal-dependent (SD) and signal-independent (SI) noise, some hybrid methods, which reduce noise by two steps according to the different statistical properties of those two types of noise, are proposed in this paper. The first one, named as the PARAFACSI- PARAFACSD method, uses a multilinear algebra model, i.e., parallel factor analysis (PARAFAC) decomposition, twice to remove SI and SD noise, respectively. The second one is a combination of the well-known multiple-linear-regression-based approach termed as the HYperspectral Noise Estimation (HYNE) method and PARAFAC decomposition, which is named as the HYNE-PARAFAC method. The last one combines the multidimensional Wiener filter (MWF) method and PARAFAC decomposition and is named as the MWF-PARAFAC method. For HSIs distorted by both SD and SI noise, first, most of the SI noise is removed from the original image by PARAFAC decomposition, the HYNE method, or the MWF method based on the statistical property of SI noise; then, the residual SD components can be further reduced by PARAFAC decomposition due to its own statistical property. The performances of the proposed methods are assessed on simulated HSIs. 
The results on the real-world airborne HSI Hyperspectral Digital Imagery Collection Experiment (HYDICE) are also presented and analyzed. These experiments show that it is worth taking the noise signal-dependence hypothesis into account when processing HYDICE data. Keywords: Wiener filters; geophysical image processing; hyperspectral imaging; image denoising; interference suppression; multidimensional signal processing; object detection; random noise; singular value decomposition; statistical analysis; tensors; HSI distortion; HYDICE; HYNE method; MWF method; PARAFAC decomposition; PARAFACSD method; PARAFACSI method; SD noise removal; SI noise removal; airborne HSI; hybrid method; hyperspectral digital imagery collection experiment; hyperspectral image; hyperspectral noise estimation; image denoising; multidimensional Wiener filter; multilinear algebra model; noise variance; parallel factor analysis; parametric model; random noise component reduction; residual SD component reduction; signal dependent noise reduction; signal independent noise; statistical property; target detection; tensor decomposition-based method; Covariance matrices; Hyperspectral sensors; Noise; Noise reduction; Silicon; Tensile stress; Vectors; Denoising; PARAFAC; hyperspectral image (HSI); signal-dependent (SD) noise; target detection (ID#:14-2407) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6675784&isnumber=6756973
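The signal-dependent noise model underlying the entry above — noise variance that grows with signal level, as when photon noise dominates electronic noise — can be illustrated with a small sketch. The linear mean-variance law and the fitting routine below are our simplification, not the paper's estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.5, 2.0    # photon-like (SD) slope and electronic (SI) floor

# Homogeneous patches at increasing signal levels; noise variance = a*mu + b.
levels = np.linspace(10.0, 100.0, 40)
means, variances = [], []
for mu in levels:
    patch = mu + rng.normal(0.0, np.sqrt(a_true * mu + b_true), 5000)
    means.append(patch.mean())
    variances.append(patch.var())

# Regressing local variance on local mean recovers the two noise parameters,
# in the spirit of the multiple-linear-regression (HYNE-like) estimation step.
a_hat, b_hat = np.polyfit(means, variances, 1)
```

A purely signal-independent model would fit a flat line here; the nonzero slope is what motivates treating SD and SI noise in separate steps.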
  • Xun Chen; Aiping Liu; McKeown, M.J.; Poizner, H.; Wang, Z.J., "An EEMD-IVA Framework for Concurrent Multidimensional EEG and Unidimensional Kinematic Data Analysis," Biomedical Engineering, IEEE Transactions on , vol.61, no.7, pp.2187,2198, July 2014. doi: 10.1109/TBME.2014.2319294 Joint blind source separation (JBSS) is a means to extract common sources simultaneously found across multiple datasets, e.g., electroencephalogram (EEG) and kinematic data jointly recorded during reaching movements. Existing JBSS approaches are designed to handle multidimensional datasets, yet to our knowledge, there is no existing means to examine common components that may be found across a unidimensional dataset and a multidimensional one. In this paper, we propose a simple, yet effective method to achieve the goal of JBSS when concurrent multidimensional EEG and unidimensional kinematic datasets are available, by combining ensemble empirical mode decomposition (EEMD) with independent vector analysis (IVA). We demonstrate the performance of the proposed method through numerical simulations and application to data collected from reaching movements in Parkinson's disease. The proposed method is a promising JBSS tool for real-world biomedical signal processing applications. 
Keywords: biomechanics; blind source separation; data analysis; diseases; electroencephalography; kinematics; medical signal processing; multidimensional signal processing; numerical analysis; EEMD-IVA framework; Parkinson disease; concurrent multidimensional EEG; electroencephalogram; ensemble empirical mode decomposition; independent vector analysis; joint blind source separation; kinematic data joint recording; multidimensional datasets; multiple datasets; numerical simulations; reaching movements; real-world biomedical signal processing applications; unidimensional kinematic data analysis; unidimensional kinematic datasets; Data analysis; Data mining; Electroencephalography; Joints; Kinematics; Noise; Vectors; Data fusion; EEG; EEMD; IVA; JBSS; unidimensional (ID#:14-2408) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6803885&isnumber=6835114
  • Paskaleva, B.S.; Godoy, S.E.; Woo-Yong Jang; Bender, S.C.; Krishna, S.; Hayat, M.M., "Model-Based Edge Detector for Spectral Imagery Using Sparse Spatiospectral Masks," Image Processing, IEEE Transactions on , vol.23, no.5, pp.2315,2327, May 2014. doi: 10.1109/TIP.2014.2315154 Two model-based algorithms for edge detection in spectral imagery are developed that specifically target capturing intrinsic features such as isoluminant edges that are characterized by a jump in color but not in intensity. Given prior knowledge of the classes of reflectance or emittance spectra associated with candidate objects in a scene, a small set of spectral-band ratios, which most profoundly identify the edge between each pair of materials, are selected to define an edge signature. The bands that form the edge signature are fed into a spatial mask, producing a sparse joint spatiospectral nonlinear operator. The first algorithm achieves edge detection for every material pair by matching the response of the operator at every pixel with the edge signature for the pair of materials. The second algorithm is a classifier-enhanced extension of the first algorithm that adaptively accentuates distinctive features before applying the spatiospectral operator. Both algorithms are extensively verified using spectral imagery from the airborne hyperspectral imager and from a dots-in-a-well midinfrared imager. In both cases, the multicolor gradient (MCG) and the hyperspectral/spatial detection of edges (HySPADE) edge detectors are used as a benchmark for comparison. The results demonstrate that the proposed algorithms outperform the MCG and HySPADE edge detectors in accuracy, especially when isoluminant edges are present. By requiring only a few bands as input to the spatiospectral operator, the algorithms enable significant levels of data compression in band selection.
In the presented examples, the required operations per pixel are reduced by a factor of 71 with respect to those required by the MCG edge detector. Keywords: data compression; edge detection; image colour analysis; infrared imaging; multidimensional signal processing; HySPADE edge detectors; MCG edge detector; airborne hyperspectral imager; data compression; dots-in-a-well midinfrared imager; edge signature; hyperspectral-spatial detection of edges; isoluminant edges; model based edge detector; multicolor gradient; sparse joint spatiospectral nonlinear operator; sparse spatiospectral masks; spatial mask; spectral band ratio; spectral imagery; Detectors; Gray-scale; Hyperspectral imaging; Image color analysis; Image edge detection; Materials; Standards; Edge detection; classification; isoluminant edge; multicolor edge detection; spatio-spectral mask; spectral ratios (ID#:14-2409) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781601&isnumber=6779706
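The isoluminant-edge idea above — a jump in color but not in intensity, caught by spectral-band ratios — can be seen in a toy example. The log-ratio operator here is our simplification of the paper's spatiospectral masks:

```python
import numpy as np

# Toy two-band scene: total intensity is constant everywhere (isoluminant),
# but the spectral composition jumps between the left and right halves.
img = np.zeros((8, 8, 2))
img[:, :4] = [3.0, 1.0]      # material A: band ratio 3, intensity 4
img[:, 4:] = [1.0, 3.0]      # material B: band ratio 1/3, same intensity 4

# An intensity-gradient detector sees nothing at the material boundary...
gray_edges = np.abs(np.diff(img.sum(axis=2), axis=1))

# ...while a band-ratio operator localizes it exactly.
log_ratio = np.log(img[..., 0] / img[..., 1])
ratio_edges = np.abs(np.diff(log_ratio, axis=1))
```

Only two bands are needed to form the ratio, which mirrors the data-compression point made in the abstract: the edge signature selects a few discriminative bands rather than the full spectrum.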
  • Kamislioglu, B.; Karaboga, N., "Design of FIR QMF Bank Using Windowing Functions," Signal Processing and Communications Applications Conference (SIU), 2014 22nd , vol., no., pp.95,99, 23-25 April 2014. doi: 10.1109/SIU.2014.6830174 Over the years, filter banks have been used efficiently in applications such as single- and multi-dimensional signal processing, communication systems, biomedical signal processing, word coding, and sub-band coding, where a bank of filters is designed jointly rather than as multiple custom filters. In this study, a special case of the two-channel filter bank known as the QMF (Quadrature Mirror Filter) bank is designed using the Kaiser, Chebyshev, and Hanning windowing methods, with the design based on optimizing the filter's cutoff frequency. The QMF bank design is driven by the peak reconstruction error (PRE). Numerical results and comparisons for the designed filter banks are given. Keywords: Chebyshev approximation; channel bank filters; quadrature mirror filters; Chebyshev methods; FIR QMF bank design; Hanning windowing methods; Kaiser design; QMF bank design; biomedical signal processing; communication systems; design optimization; filter banks; filter cutoff frequency; multidimensional signal processing; peak reconstruction error; quadrature mirror filter quarter-mirror filter bank; subband coding; two-channel filter banks; windowing functions; word coding; Chebyshev approximation; Conferences; Encoding; Filter banks; Finite impulse response filters; Mirrors (ID#:14-2410) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830174&isnumber=6830164
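The design loop described above can be sketched in a few lines: build a lowpass prototype by windowing an ideal sinc (a Hann window here, one of the three the paper considers), mirror it for the highpass channel, and pick the cutoff frequency that minimizes the peak reconstruction error (PRE). This is our simplified illustration, not the authors' algorithm:

```python
import numpy as np

def fir_lowpass(num_taps, cutoff):
    """Window-method FIR lowpass: ideal sinc impulse response shaped by a
    Hann window. `cutoff` is normalized so that 1.0 is the Nyquist rate."""
    n = np.arange(num_taps) - (num_taps - 1) / 2
    return cutoff * np.sinc(cutoff * n) * np.hanning(num_taps)

def peak_reconstruction_error(h0):
    """PRE of the two-channel QMF bank built from lowpass prototype h0."""
    h1 = h0 * (-1.0) ** np.arange(len(h0))             # mirror highpass
    w = np.linspace(0, np.pi, 512)
    E = np.exp(-1j * np.outer(w, np.arange(len(h0))))
    recon = np.abs(E @ h0) ** 2 + np.abs(E @ h1) ** 2  # ideally flat = 1
    return np.max(np.abs(recon - 1.0))

# Optimizing the prototype's cutoff (the paper's design variable) lowers PRE
# compared with naively placing it at the half-band point 0.5.
cutoffs = np.linspace(0.45, 0.65, 81)
pres = [peak_reconstruction_error(fir_lowpass(32, c)) for c in cutoffs]
best = cutoffs[int(np.argmin(pres))]
```

With the cutoff fixed at exactly 0.5 the windowed response drops to about half at the band edge, so the reconstruction magnitude dips sharply there; sweeping the cutoff is what recovers a nearly flat response.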
  • Deyun Wei; Yuanmin Li, "Reconstruction of Multidimensional Bandlimited Signals From Multichannel Samples In Linear Canonical Transform Domain," Signal Processing, IET , vol.8, no.6, pp.647,657, August 2014. doi: 10.1049/iet-spr.2013.0240 The linear canonical transform (LCT) has been shown to be a powerful tool for optics and signal processing. In this study, the authors address the problem of signal reconstruction from the multidimensional multichannel samples in the LCT domain. Firstly, they pose and solve the problem of expressing the kernel of the multidimensional LCT in the elementary functions. Then, they propose the multidimensional multichannel sampling (MMS) for the bandlimited signal in the LCT domain based on a basis expansion of an exponential function. The MMS expansion which is constructed by the ordinary convolution structure can reduce the effect of the spectral leakage and is easy to implement. Thirdly, based on the MMS expansion, they obtain the reconstruction method for the multidimensional derivative sampling and the periodic non-uniform sampling by designing the system filter transfer functions. Finally, the simulation results and the potential applications of the MMS are presented. Especially, the application of the multidimensional derivative sampling in the context of the image scaling about the image super-resolution is discussed. Keywords: signal processing; transforms; LCT; MMS; bandlimited signal; image scaling; image super resolution; linear canonical transform domain; multichannel samples; multidimensional bandlimited signal reconstruction; multidimensional multichannel samples; multidimensional multichannel sampling; optics processing; signal processing; transfer functions (ID#:14-2411) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6869171&isnumber=6869162
  • Wen-Long Chin; Chun-Wei Kao; Hsiao-Hwa Chen; Teh-Lu Liao, "Iterative Synchronization-Assisted Detection of OFDM Signals in Cognitive Radio Systems," Vehicular Technology, IEEE Transactions on , vol.63, no.4, pp.1633,1644, May 2014. doi: 10.1109/TVT.2013.2285389 Despite many attractive features of an orthogonal frequency-division multiplexing (OFDM) system, the signal detection in an OFDM system over multipath fading channels remains a challenging issue, particularly in a relatively low signal-to-noise ratio (SNR) scenario. This paper presents an iterative synchronization-assisted OFDM signal detection scheme for cognitive radio (CR) applications over multipath channels in low-SNR regions. To detect an OFDM signal, a log-likelihood ratio (LLR) test is employed without additional pilot symbols using a cyclic prefix (CP). Analytical results indicate that the LLR of received samples at a low SNR can be approximated by their log-likelihood (LL) functions, thus allowing us to estimate synchronization parameters for signal detection. The LL function is complex and depends on various parameters, including correlation coefficient, carrier frequency offset (CFO), symbol timing offset, and channel length. Decomposing a synchronization problem into several relatively simple parameter estimation subproblems eliminates a multidimensional grid search. An iterative scheme is also devised to implement a synchronization process. Simulation results confirm the effectiveness of the proposed detector. 
Keywords: OFDM modulation; cognitive radio; fading channels; iterative methods; multipath channels; parameter estimation; signal detection; synchronisation; LLR; OFDM signal detection; SNR; carrier frequency offset; cognitive radio systems; correlation coefficient; cyclic prefix; iterative synchronization; log likelihood functions; log-likelihood ratio; multidimensional grid search; multipath channels; multipath fading channels; orthogonal frequency division multiplexing; parameter estimation subproblems; signal-to-noise ratio; synchronization problem; Correlation; Detectors; OFDM; Signal to noise ratio; Synchronization; Cognitive radio; Cognitive radio (CR); cyclic prefix; cyclic prefix (CP); orthogonal frequency-division multiplexing; orthogonal frequency-division multiplexing (OFDM); synchronization (ID#:14-2412) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6627985&isnumber=6812142
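The cyclic-prefix statistic the detector above builds on can be demonstrated directly: the CP is a copy of the OFDM symbol's tail, so their correlation is high when a signal is present and low for noise alone. A simplified sketch (no CFO, timing offset, or multipath, unlike the paper's full model; all names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)
N, CP = 64, 16                       # subcarriers and cyclic-prefix length

def ofdm_symbol():
    """One QPSK-loaded OFDM symbol with its cyclic prefix prepended."""
    qpsk = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
    body = np.fft.ifft(qpsk) * np.sqrt(N)        # unit average power
    return np.concatenate([body[-CP:], body])    # CP = copy of the tail

def cp_correlation(rx):
    """Normalized correlation between the CP and the symbol tail."""
    head, tail = rx[:CP], rx[N:N + CP]
    return abs(np.vdot(tail, head)) / (np.linalg.norm(head) * np.linalg.norm(tail))

def noise(scale=1.0):
    return scale * (rng.standard_normal(N + CP) + 1j * rng.standard_normal(N + CP))

signal_metric = np.mean([cp_correlation(ofdm_symbol() + noise(0.1)) for _ in range(50)])
noise_metric = np.mean([cp_correlation(noise()) for _ in range(50)])
```

Thresholding this statistic separates the two hypotheses; the paper's contribution is doing so at low SNR while jointly estimating the synchronization parameters the simple version here ignores.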
  • Alvarez-Perez, J.L., "A Multidimensional Extension of the Concept of Coherence in Polarimetric SAR Interferometry," Geoscience and Remote Sensing, IEEE Transactions on, vol.PP, no.99, pp.1, 14, July 2014. doi: 10.1109/TGRS.2014.2336805 Interferometric synthetic aperture radar (InSAR) is a phase-based radar signal processing technique that has been addressed from a polarimetric point of view since the late 1990s, starting with Cloude and Papathanassiou's foundational work. Polarimetric InSAR (PolInSAR) has consolidated as an active field of research in parallel to non-PolInSAR. Regarding the latter, there have been a number of issues that were discussed in an earlier paper from which some other questions related to Cloude's PolInSAR come out naturally. In particular, they affect the usual understanding of coherence and statistical independence. Coherence involves the behavior of electromagnetic waves in at least a pair of points, and it is crucially related to the statistical independence of scatterers in a complex scene. Although this would seem to allow PolInSAR to overcome the difficulties involving the controversial confusion between statistical independence and polarization as present in PolSAR, Cloude's PolInSAR originally inherited the idea of separating physical contributors to the scattering phenomenon through the use of singular values and vectors. This was an assumption consistent with Cloude's PolSAR postulates that was later set aside. We propose the introduction of a multidimensional coherence tensor that includes PolInSAR's polarimetric interferometry matrix $\Omega_{12}$ as its 2-D case. We show that some important properties of the polarimetric interferometry matrix are incidental to its bidimensionality. Notably, this exceptional behavior in 2-D seems to suggest that the singular value decomposition (SVD) of $\Omega_{12}$ does not provide a physical insight into the scattering problem in the sense of splitting different scattering contributors.
It might be argued that Cloude's PolInSAR in its current form does not rely on the SVD of $\Omega_{12}$ but on other underlying optimization schemes. The drawbacks of such ulterior developments and the failure of the maximum coherence separation procedure to be a consistent scheme for surface topography estimation in a two-layer model are discussed in depth in this paper. Nevertheless, turning back to the SVD of $\Omega_{12}$, the use of the singular values of a prewhitened version of $\Omega_{12}$ is consistent with a leading method of characterizing coherence in modern Optics. For this reason, the utility of the SVD of $\Omega_{12}$ as a means of characterizing coherence is analyzed here and extended to higher dimensionalities. Finally, these extensions of the concept of coherence to the multidimensional case are tested and compared with the 2-D case by numerically simulating the scattered electromagnetic field from a rough surface. Keywords: Coherence; Interferometry; Matrix decomposition; Tensile stress; Vectors; Coherence; electromagnetic scattering; polarimetric synthetic aperture radar interferometry (PolInSAR) (ID#:14-2413) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868983&isnumber=4358825
  • Di Franco, Carmelo; Franchino, Gianluca; Marinoni, Mauro, "Data Fusion For Relative Localization Of Wireless Mobile Nodes," Industrial Embedded Systems (SIES), 2014 9th IEEE International Symposium on, vol., no., pp.58,65, 18-20 June 2014. doi: 10.1109/SIES.2014.6871187 Monitoring teams of mobile nodes is becoming crucial in a growing number of activities. When it is not possible to use fixed references or external measurements, a practicable solution is to derive relative positions from local communication. In this work, we propose an anchor-free Received Signal Strength Indicator (RSSI) method aimed at small multi-robot teams. Information from an Inertial Measurement Unit (IMU) mounted on the nodes, processed with a Kalman filter, is used to estimate the robot dynamics, thus increasing the quality of RSSI measurements. A Multidimensional Scaling algorithm is then used to compute the network topology from the improved RSSI data provided by all nodes. A set of experiments performed on data acquired from a real scenario shows the improvements over RSSI-only localization methods. With respect to previous work, only an extra IMU is required, and no constraints are imposed on its placement, unlike with camera-based approaches. Moreover, no a-priori knowledge of the environment is required and no fixed anchor nodes are needed. Keywords: Accuracy; Channel models; Covariance matrices; Equations; Estimation; Mobile nodes; Sensors (ID#:14-2414) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6871187&isnumber=6871170
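Classical multidimensional scaling — the step that turns pairwise RSSI-derived distances like those above into a relative map, anchor-free and up to rotation and translation — can be sketched as follows (our illustration, not the authors' implementation):

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover relative coordinates (up to rotation/translation/reflection)
    from a matrix of pairwise distances D via classical MDS."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)             # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:dim]         # keep the top `dim` eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

# Four nodes on a unit square; MDS recovers the shape from distances alone.
pts = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
rec = classical_mds(D)
D_rec = np.linalg.norm(rec[:, None] - rec[None, :], axis=-1)
```

With exact Euclidean distances the recovered inter-point distances match the input exactly; with noisy RSSI ranges the same eigendecomposition yields a least-squares embedding, which is why the paper's IMU/Kalman cleanup of the distances matters.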



Network Accountability

Network Accountability


The term "accountability" suggests that an entity should be held responsible for its specific actions. Once an event has transpired, it must be traceable so that its causes can be determined afterwards. The goal of network accountability research is to provide accountability within networks and computers by building trace files of events. The research cited here was presented or published between January and September of 2014. The focus of these articles is on the smart grid, wireless, cloud, and telemedicine.

  • Tongtong Li; Abdelhakim, M.; Jian Ren, "N-Hop Networks: A General Framework For Wireless Systems," Wireless Communications, IEEE, vol.21, no.2, pp.98, 105, April 2014. doi: 10.1109/MWC.2014.6812297 This article introduces a unified framework for quantitative characterization of various wireless networks. We first revisit the evolution of centralized, ad-hoc and hybrid networks, and discuss the trade-off between structure-ensured reliability and efficiency, and ad-hoc enabled flexibility. Motivated by the observation that the number of hops for a basic node in the network to reach the base station or the sink has a direct impact on the network capacity, delay, efficiency and their evaluation techniques, we introduce the concept of the N-hop networks. It can serve as a general framework that includes most existing network models as special cases, and can also make the analytical characterization of the network performance more tractable. Moreover, for network security, it is observed that hierarchical structure enables easier tracking of user accountability and malicious node detection; on the other hand, the multi-layer diversity increases the network reliability under unexpected network failure or malicious attacks, and at the same time, provides a flexible platform for privacy protection. Keywords: ad hoc networks; diversity reception; telecommunication security; wireless channels; N-hop networks; ad hoc networks; ad-hoc enabled flexibility; hybrid networks; malicious attacks; malicious node detection; multilayer diversity; network capacity; network reliability; network security; unexpected network failure; user accountability; wireless systems; Ad hoc networks; Delays; Mobile communication; Mobile computing; Sensors; Throughput; Wireless sensor networks (ID#:14-2415) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6812297&isnumber=6812279
  • Jing Liu; Yang Xiao; Jingcheng Gao, "Achieving Accountability in Smart Grid," Systems Journal, IEEE, vol.8, no.2, pp.493, 508, June 2014. doi: 10.1109/JSYST.2013.2260697 Smart grid is a promising power infrastructure that is integrated with communication and information technologies. Nevertheless, privacy and security concerns arise simultaneously. Failure to address these issues will hinder the modernization of the existing power system. After critically reviewing the current status of smart grid deployment and its key cyber security concerns, the authors argue that accountability mechanisms should be involved in smart grid designs. We design two separate accountable communication protocols using the proposed architecture with certain reasonable assumptions under both home area network and neighborhood area network. Analysis and simulation results indicate that the design works well, and it may cause all power loads to become accountable. Keywords: computer network security; power engineering computing; protocols; smart power grids; accountable communication protocols; cyber security concern; home area network; neighborhood area network; power system modernization; smart grid accountability; smart grid deployment; smart grid design; Accountability; advanced metering infrastructure (AMI); security; smart grid (ID#:14-2416) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6545310&isnumber=6819870
  • Jeyanthi, N.; Thandeeswaran, R.; Mcheick, H., "SCT: Secured Cloud based Telemedicine," Networks, Computers and Communications, The 2014 International Symposium on , vol., no., pp.1,4, 17-19 June 2014. doi: 10.1109/SNCC.2014.6866531 Telemedicine has been on a journey of successful deployment for several decades, but it has yet to make a remarkable contribution in either rural or urban areas. People realize its impact when it saves a life. Telemedicine connects patients and specialized doctors remotely and also allows them to share sensitive medical records. Irrespective of the mode of data exchange, all types of media are vulnerable to security and performance issues. Remote data exchange during an emergency should not be delayed, and at the same time should not be altered: while in transit, a single bit change could be interpreted differently at the other end. Hence telemedicine faces both performance and security challenges. Delay, cost, and scalability are the pressing performance factors, whereas integrity, availability, and accountability are the security issues that need to be addressed. This paper focuses on security without compromising quality of service. Telemedicine has moved from the standard PSTN to wireless mobile phones and satellites. Secure Cloud-based Telemedicine (SCT) uses the Cloud, which can free people from administrative and accounting burdens.
Keywords: biomedical equipment; cloud computing; data integrity; delays; electronic data interchange; emergency services; mobile computing; mobile handsets; security of data; telemedicine; telephone networks; SCT; accounting burdens; administrative burdens; emergency situation; medical record sharing; performance factors; quality of service; remote data exchange alteration; remote data exchange delay; remote data exchange mode; secured cloud based telemedicine; single bit change effect; standard PSTN; telemedicine accountability; telemedicine availability; telemedicine cost; telemedicine delay; telemedicine effect; telemedicine integrity; telemedicine performance issues; telemedicine scalability; telemedicine security issues; wireless mobile phones; Availability; Cloud computing; Educational institutions; Medical services; Read only memory; Security; Telemedicine; Cloud; Security; Telemedicine; availability; confidentiality (ID#:14-2417) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866531&isnumber=6866503
  • Gueret, Christophe; de Boer, Victor; Schlobach, Stefan, "Let's "Downscale" Linked Data," Internet Computing, IEEE , vol.18, no.2, pp.70,73, Mar.-Apr. 2014. doi: 10.1109/MIC.2014.29 Open data policies and linked data publication are powerful tools for increasing transparency, participatory governance, and accountability. The linked data community proudly emphasizes the economic and societal impact such technology shows. But a closer look proves that the design and deployment of these technologies leave out most of the world's population. The good news is that it will take small but fundamental changes to bridge this gap. Research agendas should be updated to design systems for small infrastructure, provide multimodal interfaces to data, and account better for locally relevant, contextualized data. Now is the time to act, because most linked data technologies are still in development. Keywords: Data processing; Digital systems; Linked technologies; Open systems; digital divide; linked data technologies; multimodal interfaces; open linked data (ID#:14-2418) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777473&isnumber=6777469
  • Chen, X.; Li, J.; Huang, X.; Li, J.; Xiang, Y.; Wong, D., "Secure Outsourced Attribute-based Signatures," Parallel and Distributed Systems, IEEE Transactions on, vol. PP, no.99, pp.1,1, January 2014. doi: 10.1109/TPDS.2013.2295809 Attribute-based signature (ABS) enables users to sign messages over attributes without revealing any information other than the fact that they have attested to the messages. However, heavy computational cost is required during signing in existing work of ABS, which grows linearly with the size of the predicate formula. As a result, this presents a significant challenge for resource-constrained devices (such as mobile devices or RFID tags) to perform such heavy computations independently. Aiming at tackling the challenge above, we first propose and formalize a new paradigm called Outsourced ABS, i.e., OABS, in which the computational overhead at user side is greatly reduced through outsourcing intensive computations to an untrusted signing-cloud service provider (S-CSP). Furthermore, we apply this novel paradigm to existing ABS schemes to reduce the complexity. As a result, we present two concrete OABS schemes: i) in the first OABS scheme, the number of exponentiations involved in signing is reduced from O(d) to O(1) (nearly three), where d is the upper bound of the threshold value defined in the predicate; ii) our second scheme is built on Herranz et al.'s construction with constant-size signatures. The number of exponentiations in signing is reduced from O(d^2) to O(d) and the communication overhead is O(1). Security analysis demonstrates that both OABS schemes are secure in terms of the unforgeability and attribute-signer privacy definitions specified in the proposed security model. Finally, to allow for high efficiency and flexibility, we discuss extensions of OABS and show how to achieve accountability as well.
Keywords: Educational institutions; Electronic mail; Games; Outsourcing; Polynomials; Privacy; Security (ID#:14-2419) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6714536&isnumber=4359390



Network Coding

Network Coding


Network coding methods are used to improve a network's throughput, efficiency, and scalability. Network coding can also serve as a defense against attacks and eavesdropping. Research into network coding seeks optimal solutions to the general network problems that remain open. The articles cited here were presented or published between January and September 2014.

  • Shiyu Ji; Tingting Chen; Sheng Zhong; Kak, S., "DAWN: Defending Against Wormhole Attacks In Wireless Network Coding Systems," INFOCOM, 2014 Proceedings IEEE , vol., no., pp.664,672, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6847992 Network coding has been shown to be an effective approach to improve the wireless system performance. However, many security issues impede its wide deployment in practice. Besides the well-studied pollution attacks, there is another severe threat, that of wormhole attacks, which undermines the performance gain of network coding. Since the underlying characteristics of network coding systems are distinctly different from traditional wireless networks, the impact of wormhole attacks and countermeasures are generally unknown. In this paper, we quantify wormholes' devastating harmful impact on network coding system performance through experiments. Then we propose DAWN, a Distributed detection Algorithm against Wormhole in wireless Network coding systems, by exploring the change of the flow directions of the innovative packets caused by wormholes. We rigorously prove that DAWN guarantees a good lower bound of successful detection rate. We perform analysis on the resistance of DAWN against collusion attacks. We find that the robustness depends on the node density in the network, and prove a necessary condition to achieve collusion-resistance. DAWN does not rely on any location information, global synchronization assumptions or special hardware/middleware. It is only based on the local information that can be obtained from regular network coding protocols, and thus does not introduce any overhead by extra test messages. Extensive experimental results have verified the effectiveness and the efficiency of DAWN. 
Keywords: network coding; radio networks; synchronisation; telecommunication security; DAWN; collusion attacks; collusion-resistance; detection rate; distributed detection algorithm; flow directions; global synchronization assumptions; location information; node density; pollution attacks; regular network coding protocols; test messages; wireless network coding systems; wireless system performance; wormhole attacks; Encoding; Network coding; Probability; Protocols; Routing; Throughput; Wireless networks (ID#:14-2420) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847992&isnumber=6847911
  • Shang Tao; Pei Hengli; Liu Jianwei, "Secure network coding based on lattice signature," Communications, China , vol.11, no.1, pp.138,151, Jan. 2014. doi: 10.1109/CC.2014.6821316 To provide a high-security guarantee to network coding and lower the computing complexity induced by signature scheme, we take full advantage of homomorphic property to build lattice signature schemes and secure network coding algorithms. Firstly, by means of the distance between the message and its signature in a lattice, we propose a Distance-based Secure Network Coding (DSNC) algorithm and stipulate its security to a new hard problem Fixed Length Vector Problem (FLVP), which is harder than Shortest Vector Problem (SVP) on lattices. Secondly, considering the boundary on the distance between the message and its signature, we further propose an efficient Boundary-based Secure Network Coding (BSNC) algorithm to reduce the computing complexity induced by square calculation in DSNC. Simulation results and security analysis show that the proposed signature schemes have stronger unforgeability due to the natural property of lattices than traditional Rivest-Shamir-Adleman (RSA)-based signature scheme. DSNC algorithm is more secure and BSNC algorithm greatly reduces the time cost on computation. Keywords: computational complexity; digital signatures; network coding; telecommunication security; BSNC; DSNC; FLVP; boundary-based secure network coding; computing complexity; distance-based secure network coding; fixed length vector problem; hard problem; high-security guarantee; homomorphic property; lattice signature; signature scheme; Algorithm design and analysis; Cryptography; Lattices; Network coding; Network security; fixed length vector problem; lattice signature; pollution attack; secure network coding (ID#:14-2421) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821316&isnumber=6821299
  • Keshavarz-Haddad, A; Riedi, R.H., "Bounds on the Benefit of Network Coding for Wireless Multicast and Unicast," Mobile Computing, IEEE Transactions on , vol.13, no.1, pp.102,115, Jan. 2014. doi: 10.1109/TMC.2012.234 In this paper, we explore fundamental limitations of the benefit of network coding in multihop wireless networks. We study two well-accepted scenarios in the field: single multicast session and multiple unicast sessions. We assume arbitrary but fixed topology and traffic patterns for the wireless network. We prove that the gain of network coding in terms of throughput and energy saving of a single multicast session is at most a constant factor. Also, we present a lower bound on the average number of transmissions of multiple unicast sessions under any arbitrary network coding. We identify scenarios under which network coding provides no gain at all, in the sense that there exists a simple flow scheme that achieves the same performance. Moreover, we prove that the gain of network coding in terms of the maximum transport capacity is bounded by a constant factor of at most π in any arbitrary wireless network under all traditional Gaussian channel models. As a corollary, we find that the gain of network coding on the throughput of large homogeneous wireless networks is asymptotically bounded by a constant. Furthermore, we establish theorems which relate a network coding scheme to a simple routing scheme for multiple unicast sessions. The theorems can be used as criteria for evaluating the potential gain of network coding in a given wired or wireless network. Based on these criteria, we find more scenarios where network coding has no gain on throughput or energy saving.
Keywords: Gaussian channels; multicast communication; network coding; Gaussian channel models; arbitrary wireless network; constant factor; large homogeneous wireless networks; maximum transport capacity; multihop wireless networks; multiple unicast sessions; network coding scheme; single multicast session; wireless multicast; wireless unicast; Channel models; Energy consumption; Network coding; Throughput; Unicast; Wireless networks; Channel models; Energy consumption; Network coding; Network coding gain; Throughput; Unicast; Wireless networks; energy consumption; multicast throughput; transport capacity (ID#:14-2422) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6357191&isnumber=6674931
  • Tae-hwa Kim; Hyungwoo Choi; Hong-Shik Park, "Centrality-based Network Coding Node Selection Mechanism For Improving Network Throughput," Advanced Communication Technology (ICACT), 2014 16th International Conference on , vol., no., pp.864,867, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779083 The problem of minimizing the number of coding nodes is caused by network coding overhead and is proved to be NP-hard. To resolve this issue, this paper proposes Centrality-based Network Coding Node Selection (CNCNS), a heuristic, distributed mechanism that minimizes the number of network coding (NC) nodes without compromising the achievable network throughput. CNCNS iteratively analyses node centrality and selects NC nodes in a specific area. Since CNCNS operates in a distributed manner, it can dynamically adapt to the network status while approximately minimizing the number of network coding nodes. In particular, CNCNS adjusts network throughput and reliability using a control indicator. Simulation results show that well-selected network coding nodes improve network throughput to nearly that of a system in which all network nodes perform network coding. Keywords: network coding; radio networks; NP hard problem; centrality based network coding node selection mechanism; coding nodes; distributed mechanism; heuristic mechanism; network coding overhead; network reliability; network status; network throughput improvement; Decoding; Delays; Encoding; Network coding; Receivers; Reliability; Throughput; Centrality; Degree; Network coding; Throughput; Weight (ID#:14-2423) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779083&isnumber=6778899
  • Min Yang; Yuanyuan Yang, "Applying Network Coding to Peer-to-Peer File Sharing," Computers, IEEE Transactions on , vol.63, no.8, pp.1938,1950, Aug. 2014. doi: 10.1109/TC.2013.88 Network coding is a promising enhancement of routing to improve network throughput and provide high reliability. It allows a node to generate output messages by encoding its received messages. Peer-to-peer networks are a perfect place to apply network coding due to two reasons: the topology of a peer-to-peer network is constructed arbitrarily, thus it is easy to tailor the topology to facilitate network coding; the nodes in a peer-to-peer network are end hosts which can perform more complex operations such as decoding and encoding than simply storing and forwarding messages. In this paper, we propose a scheme to apply network coding to peer-to-peer file sharing which employs a peer-to-peer network to distribute files resided in a web server or a file server. The scheme exploits a special type of network topology called combination network. It was proved that combination networks can achieve unbounded network coding gain measured by the ratio of network throughput with network coding to that without network coding. Our scheme encodes a file into multiple messages and divides peers into multiple groups with each group responsible for relaying one of the messages. The encoding scheme is designed to satisfy the property that any subset of the messages can be used to decode the original file as long as the size of the subset is sufficiently large. To meet this requirement, we first define a deterministic linear network coding scheme which satisfies the desired property, then we connect peers in the same group to flood the corresponding message, and connect peers in different groups to distribute messages for decoding. 
Moreover, the scheme can be readily extended to support link heterogeneity and topology awareness to further improve system performance in terms of throughput, reliability and link stress. Our simulation results show that the new scheme can achieve 15%-20% higher throughput than another peer-to-peer multicast system, Narada, which does not employ network coding. In addition, it achieves good reliability and robustness to link failure or churn. Keywords: network coding; peer-to-peer computing; telecommunication network reliability; telecommunication network topology; Web server; combination network; decoding; deterministic linear network; encoding; file server; network topology; peer-to-peer file sharing; Network coding; file sharing; multicast; peer-to-peer networks; web-based applications (ID#:14-2424) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6497042&isnumber=6857445
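Several entries in this list, including the one above, build on linear network coding: each coded packet carries a coefficient vector and a payload formed as the corresponding linear combination of the source blocks, and a receiver decodes once it has collected enough linearly independent packets. As a rough illustration only (a hypothetical toy over GF(2); practical systems, including the one in this paper, typically use larger fields such as GF(2^8), and this is not the authors' combination-network construction):

```python
import random

def encode(chunks, num_coded, seed=0):
    """Produce coded packets over GF(2): each packet is a pair
    (coefficient vector as a k-bit int, XOR of the chunks whose bit is set)."""
    rng = random.Random(seed)
    k = len(chunks)
    coded = []
    for _ in range(num_coded):
        coeffs = rng.getrandbits(k) or 1  # avoid the all-zero vector
        payload = 0
        for i in range(k):
            if (coeffs >> i) & 1:
                payload ^= chunks[i]
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Gaussian elimination over GF(2). Returns the k original chunks
    once k linearly independent packets have arrived, else None."""
    pivots = {}  # pivot bit -> (reduced coefficient vector, payload)
    for coeffs, payload in coded:
        while coeffs:
            top = coeffs.bit_length() - 1
            if top in pivots:
                pc, pp = pivots[top]
                coeffs ^= pc
                payload ^= pp
            else:
                pivots[top] = (coeffs, payload)
                break
    if len(pivots) < k:
        return None  # not enough independent packets yet
    for bit in sorted(pivots):  # back-substitution, lowest pivot first
        c, p = pivots[bit]
        for other in range(bit):
            if (c >> other) & 1:
                oc, op = pivots[other]
                c ^= oc
                p ^= op
        pivots[bit] = (c, p)
    return [pivots[i][1] for i in range(k)]
```

The "any sufficiently large subset decodes" property in the abstract corresponds to the independence check above: which particular packets arrive does not matter, only that k of them are linearly independent.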
  • Bourtsoulatze, E.; Thomos, N.; Frossard, P., "Decoding Delay Minimization in Inter-Session Network Coding," Communications, IEEE Transactions on , vol.62, no.6, pp.1944,1957, June 2014. doi: 10.1109/TCOMM.2014.2318701 Intra-session network coding has been shown to offer significant gains in terms of achievable throughput and delay in settings where one source multicasts data to several clients. In this paper, we consider a more general scenario where multiple sources transmit data to sets of clients over a wireline overlay network. We propose a novel framework for efficient rate allocation in networks where intermediate network nodes have the opportunity to combine packets from different sources using randomized network coding. We formulate the problem as the minimization of the average decoding delay in the client population and solve it with a gradient-based stochastic algorithm. Our optimized inter-session network coding solution is evaluated in different network topologies and is compared with basic intra-session network coding solutions. Our results show the benefits of proper coding decisions and effective rate allocation for lowering the decoding delay when the network is used by concurrent multicast sessions. 
Keywords: computer networks; decoding; delays; gradient methods; minimisation; network coding; overlay networks; stochastic processes; telecommunication network topology; client population; coding decisions; concurrent multicast sessions; decoding delay minimization; gradient-based stochastic algorithm; intermediate network nodes ;intersession network coding solution; intrasession network coding solutions; network topologies; randomized network coding; rate allocation; wireline overlay network; Decoding; Delays; Encoding; Network coding; Resource management; Throughput; Vectors; Network coding; decoding delay; inter-session network coding; overlay networks; rate allocation (ID#:14-2425) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6804664&isnumber=6839072
  • Yin, X.; Wang, Y.; Li, Z.; Wang, X.; Xue, X., "A Graph Minor Perspective to Multicast Network Coding," Information Theory, IEEE Transactions on , vol.60, no.9, pp.5375,5386, Sept. 2014. doi: 10.1109/TIT.2014.2336836 Network coding encourages information coding across a communication network. While the necessity, benefit and complexity of network coding are sensitive to the underlying graph structure of a network, existing theory on network coding often treats the network topology as a black box, focusing on algebraic or information theoretic aspects of the problem. This paper aims at an in-depth examination of the relation between algebraic coding and network topologies. We mathematically establish a series of results along the direction of: if network coding is necessary/beneficial, or if a particular finite field is required for coding, then the network must have a corresponding hidden structure embedded in its underlying topology, and such embedding is computationally efficient to verify. Specifically, we first formulate a meta-conjecture, the NC-minor conjecture, that articulates such a connection between graph theory and network coding, in the language of graph minors. We next prove that the NC-minor conjecture for multicasting two information flows is almost equivalent to the Hadwiger conjecture, which connects graph minors with graph coloring. Such equivalence implies the existence of K_4, K_5, K_6, and K_{O(q/log q)} minors for networks that require F_3, F_4, F_5, and F_q to multicast two flows, respectively. We finally prove that, for the general case of multicasting an arbitrary number of flows, network coding can make a difference from routing only if the network contains a K_4 minor, and this minor containment result is tight. Practical implications of the above results are discussed.
Keywords: Color; Encoding; Network coding; Network topology; Receivers; Routing; Vectors; Network coding; graph minor; multicast; treewidth (ID#:14-2426) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850047&isnumber=6878505
  • Coondu, S.; Mitra, A; Chattopadhyay, S.; Chattopadhyay, M.; Bhattacharya, M., "Network-coded Broadcast Incremental Power Algorithm For Energy-Efficient Broadcasting In Wireless Ad-Hoc Network," Applications and Innovations in Mobile Computing (AIMoC), 2014, pp.42, 47, Feb. 27, 2014-March 1, 2014. doi: 10.1109/AIMOC.2014.6785517 An important operation in multi-hop wireless ad-hoc networks is broadcasting, which propagates information throughout the network. We are interested in exploring the issue of broadcasting, where all nodes of the network are sources that want to transmit information to all other nodes, in an ad-hoc wireless network. Our performance metric is energy efficiency, a vital defining factor for wireless networks as it directly concerns the battery life and thus network longevity. We show the benefits network coding has to offer in a wireless ad-hoc network as far as energy-savings is concerned, compared to the store-and-forward strategy. Network coded broadcasting concentrates on reducing the number of transmissions performed by each forwarding node in the all-to-all broadcast application, where each forwarding node combines the incoming messages for transmission. The total number of transmissions can be reduced using network coding, compared to broadcasting using the same forwarding nodes without coding. In this paper, we present the performance of a network coding-based Broadcast Incremental Power (BIP) algorithm for all-to-all broadcast. Simulation results show that optimisation using the network coding method leads to a substantial improvement in the cost associated with BIP.
Keywords: ad hoc networks; network coding; telecommunication network reliability; all-to-all broadcast application; battery life; energy-efficient broadcasting; energy-savings; forwarding node; multihop wireless ad hoc networks; network coding-based BIP algorithm; network longevity; network nodes; network-coded broadcast incremental power algorithm; store-and-forward strategy; vital defining factor; Ad hoc networks; Broadcasting; Encoding; Energy consumption; Network coding; Space vehicles; Wireless communication; Broadcast Incremental Power; Energy-Efficiency; Minimum Power Broadcast Problem; Network Coding; Wireless Ad-Hoc Network; Wireless Multicast Advantage (ID#:14-2427) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785517&isnumber=6785503
  • Deze Zeng; Song Guo; Yong Xiang; Hai Jin, "On the Throughput of Two-Way Relay Networks Using Network Coding," Parallel and Distributed Systems, IEEE Transactions on , vol.25, no.1, pp.191,199, Jan. 2014. doi: 10.1109/TPDS.2013.187 Network coding has shown the promise of significant throughput improvement. In this paper, we study the network throughput using network coding and explore how the maximum throughput can be achieved in a two-way relay wireless network. Unlike previous studies, we consider a more general network with arbitrary structure of overhearing status between receivers and transmitters. To efficiently utilize the coding opportunities, we invent the concept of network coding cliques (NCCs), upon which a formal analysis on the network throughput using network coding is elaborated. In particular, we derive the closed-form expression of the network throughput under certain traffic load in a slotted ALOHA network with basic medium access control. Furthermore, the maximum throughput as well as optimal medium access probability at each node is studied under various network settings. Our theoretical findings have been validated by simulation as well. Keywords: access protocols; network coding; radio receivers; radio transmitters; relay networks (telecommunication); telecommunication traffic; NCCs; closed-form expression; medium access control; network coding clique; network traffic load; optimal medium access probability; receiver; slotted ALOHA network; transmitter; two-way relay wireless network; Encoding; Network coding;Receivers;Relays;Throughput;Transmitters;Unicast;Performance analysis; network coding; slotted ALOHA (ID#:14-2428) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6573287&isnumber=6674937
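The closed-form throughput analysis in the entry above is carried out for a slotted ALOHA medium access model. The paper's NCC-based expression is more involved; as a baseline reminder only (the classical slotted ALOHA success probability, a standard textbook formula rather than the paper's result), the expected number of successful transmissions per slot with n backlogged nodes and access probability p is n·p·(1−p)^(n−1), maximized at p = 1/n:

```python
def aloha_throughput(n: int, p: float) -> float:
    """Expected successful transmissions per slot when each of n
    backlogged nodes transmits independently with probability p."""
    return n * p * (1 - p) ** (n - 1)

# The expression peaks at p = 1/n, giving (1 - 1/n)**(n - 1),
# which approaches 1/e (about 0.368) as n grows.
peak = aloha_throughput(10, 1 / 10)
```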
  • Lili Wei; Wen Chen; Hu, R.Q.; Geng Wu, "Network Coding In Multiple Access Relay Channel With Multiple Antenna Relay," Computing, Networking and Communications (ICNC), 2014 International Conference on , vol., no., pp.656,661, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785414 Network coding is a paradigm for modern communication networks by allowing intermediate nodes to mix messages received from multiple sources. In this paper, we carry out a study on network coding in multiple access relay channel (MARC) with multiple antenna relay. Under the same transmission time slots constraint, we investigate several different transmission strategies applicable to the system model, including direct transmission, decode-and-forward, digital network coding, digital network coding with Alamouti space time coding, analog network coding, and compare the error rate performance. Interestingly, simulation studies show that in the system model under investigation, the schemes with network coding do not show any performance gain compared with the traditional schemes with same time slots consumption. Keywords: antenna arrays; decode and forward communication; network coding; radio access networks; relay networks (telecommunication);simulation; space-time codes; Alamouti space time coding; MARC; analog network coding; decode-and-forward transmission; digital network coding; direct transmission; multiple access relay channel; multiple antenna relay; transmission time slots constraint; Encoding; Erbium; Network coding; Relays; Slot antennas; Vectors; Wireless communication; cooperative; multiple access relay channel; network coding; space-time coding (ID#:14-2429) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785414&isnumber=6785290
  • Ye Liu; Chi Wan Sung, "Quality-Aware Instantly Decodable Network Coding," Wireless Communications, IEEE Transactions on , vol.13, no.3, pp.1604,1615, March 2014. doi: 10.1109/TWC.2014.012314.131046 In erasure broadcast channels, network coding has been demonstrated to be an efficient way to satisfy each user's demand. However, the erasure broadcast channel model does not fully characterize the information available in a "lost" packet, and therefore any retransmission schemes designed based on the erasure broadcast channel model cannot make use of that information. In this paper, we characterize the quality of erroneous packets by Signal-to-Noise Ratio (SNR) and then design a network coding retransmission scheme with the knowledge of the SNRs of the erroneous packets, so that a user can immediately decode two source packets upon reception of a useful retransmission packet. We demonstrate that our proposed scheme, namely Quality-Aware Instantly Decodable Network Coding (QAIDNC), can increase the transmission efficiency significantly compared to the existing Instantly Decodable Network Coding (IDNC) and Random Linear Network Coding (RLNC). Keywords: broadcast channels; decoding; linear codes; network coding; QAIDNC; RLNC; SNR; erasure broadcast channel model; lost packet; quality of erroneous packets; quality-aware instantly decodable network coding; random linear network coding; retransmission schemes; signal-to-noise ratio; source packets; transmission efficiency; user demand; Decoding; Encoding; Network coding; Phase shift keying; Signal to noise ratio; Vectors; Broadcast channel; Rayleigh fading; instantly decodable network coding; maximal-ratio combining; network coding (ID#:14-2430) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6725590&isnumber=6776574
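The "instantly decodable" property underlying the entry above comes from XOR-coded retransmission: if user 1 lost packet p2 and user 2 lost packet p1, a single broadcast of p1 XOR p2 lets each user recover its missing packet immediately from the packet it already holds. A minimal sketch of that baseline idea (the paper's SNR-aware scheme refines which packets to combine, which this toy example does not model):

```python
def xor_packets(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length packets byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

# Sender's state: user 1 received p1 but lost p2; user 2 the opposite.
p1, p2 = b"ALPHA", b"BRAVO"
coded = xor_packets(p1, p2)                  # one retransmission serves both
recovered_by_user1 = xor_packets(coded, p1)  # == p2
recovered_by_user2 = xor_packets(coded, p2)  # == p1
```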
  • Amerimehr, M.H.; Ashtiani, F.; Valaee, S., "Maximum Stable Throughput of Network-Coded Multiple Broadcast Sessions for Wireless Tandem Random Access Networks," Mobile Computing, IEEE Transactions on, vol.13, no.6, pp.1256,1267, June 2014. doi: 10.1109/TMC.2013.2296502 This paper presents an analytical study of the stable throughput for multiple broadcast sessions in a multi-hop wireless tandem network with random access. Intermediate nodes leverage the broadcast nature of wireless medium access to perform inter-session network coding among different flows. This problem is challenging due to the interaction among nodes, and has been addressed so far only in the saturated mode where all nodes always have a packet to send, which results in infinite packet delay. In this paper, we provide a novel model based on multi-class queueing networks to investigate the problem in unsaturated mode. We devise a theoretical framework for computing maximum stable throughput of network coding for a slotted ALOHA-based random access system. Using our formulation, we compare the performance of network coding and traditional routing. Our results show that network coding leads to high throughput gain over traditional routing. We also define a new metric, network unbalance ratio (NUR), that indicates the unbalance status of the utilization factors at different nodes. We show that although the throughput gain of the network coding compared to the traditional routing decreases when the number of nodes tends to infinity, NUR of the former outperforms the latter. We carry out simulations to confirm our theoretical analysis.
Keywords: access protocols; broadcast communication; network coding; queueing theory; radio access networks; infinite packet delay; inter-session network coding; maximum stable throughput; multiclass queueing networks; multihop wireless tandem network; multiple broadcast sessions; network coding; network routing; network unbalance ratio; network-coded multiple broadcast sessions; slotted ALOHA-based random access system; theoretical analysis; wireless medium access; wireless tandem random access networks; Analytical models; Multicast communication; Network coding; Routing; Spread spectrum communication; Throughput; Wireless communication; Network coding; queueing networks; random access; routing; stable throughput; vehicular networks (ID#:14-2431) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6697896&isnumber=6824285
  • Gang Wang; Xia Dai; Yonghui Li, "On the Network Sharing of Mixed Network Coding and Routing Data Flows in Congestion Networks," Vehicular Technology, IEEE Transactions on , vol.63, no.5, pp.2420,2428, Jun 2014. doi: 10.1109/TVT.2013.2291859 In this paper, we study the congestion game for a network where multiple network coding (NC) and routing users sharing a single common congestion link to transmit their information. The data flows using NC and routing will compete network resources, and we need to determine the optimal allocation of network resources between NC and routing data flows to maximize the network payoff. To facilitate the design, we formulate this process using a cost-sharing game model. A novel average-cost-sharing (ACS) pricing mechanism is developed to maximize the overall network payoff. We analyze the performance of ACS in terms of price of anarchy (PoA). We formulate an analytical expression to compute PoA under the ACS mechanism. In contrast to the previous affine marginal cost (AMC) mechanism, where the overall network payoff decreases when NC is applied, the proposed ACS mechanism can considerably improve the overall network payoff by optimizing the number and the spectral resource allocation of NC and routing data flows sharing the network link. 
Keywords: game theory; network coding; radio networks; telecommunication congestion control; telecommunication network routing; anarchy price; congestion game; congestion networks; cost sharing game model; data flow routing; mixed network coding; network sharing; optimal network resource allocation; pricing mechanism; single common congestion link; Aggregates; Games; Nash equilibrium; Network coding; Pricing; Resource management; Routing; Affine Marginal Cost (AMC); Affine marginal cost (AMC); Average Cost Sharing (ACS); Network Coding (NC); Price of Anarchy (PoA); average cost sharing (ACS); network coding (NC); price of anarchy (PoA) (ID#:14-2432) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6671460&isnumber=6832681
  • Kramarev, D.; Yi Hong; Viterbo, E., "Software Defined Radio Implementation Of A Two-Way Relay Network With Digital Network Coding," Communications Theory Workshop (AusCTW), 2014 Australian, pp.120, 125, 3-5 Feb. 2014. doi: 10.1109/AusCTW.2014.6766439 Network coding is a technology which has the potential to increase network throughput beyond existing standards based on routing. Despite the fact that the theoretical understanding is mature, there have been only a few papers on the implementation of network coding and the demonstration of a working testbed. This paper presents the implementation of a two-way relay network with digital network coding. Unlike previous work, where the testbeds are implemented on custom hardware, we implement the testbed on GNU Radio, an open-source software defined radio platform. In this paper we discuss the implementation issues and the ways to overcome the hardware imperfections and software inadequacies of the GNU Radio platform. Using our testbed we measure the throughput of the system in an indoor environment. The experimental results show that the network coding outperforms the traditional routing as predicted by the theoretical analysis. Keywords: network coding; public domain software; relay networks (telecommunication); software radio; GNU Radio platform; digital network coding; hardware imperfections; open-source software; radio implementation; software inadequacies; testbed; two-way relay network; Hardware; Network coding; Packet switching; Relays; Software; Synchronization; Throughput; GNU radio; Software-defined radio; network coding; network coding implementation; testbed; two-way relay network (ID#:14-2433) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6766439&isnumber=6766413
  • Carrillo, E.; Ramos, V., "On the Impact of Network Coding Delay for IEEE 802.11s Infrastructure Wireless Mesh Networks," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on , vol., no., pp.305,312, 13-16 May 2014. doi: 10.1109/AINA.2014.39 The distributed coordination function (DCF) may reduce the potential of network coding in 802.11 wireless networks. Due to the randomness of DCF, the coding delay, defined as the time that a packet must wait for a coding opportunity, may increase and degrade the network performance. In this paper, we study the potential impact of the coding delay in the performance of TCP over IEEE 802.11s infrastructure wireless mesh networks. By means of simulation, we evaluate the formation of coding opportunities at the mesh access points. We find that as TCP traffic increases, the coding opportunities rise up to 70% and the coding delay increases considerably. We propose to adjust dynamically the maximum time that a packet can wait in the coding queues to reduce the coding delay. We evaluate different moving-average estimation methods for this aim. Our results show that the coding delay may be reduced with these methods using at the same time an estimation threshold. This threshold increases the estimation's mean in order to exploit a high percentage of the coding opportunities. Keywords: moving average processes; network coding; transport protocols; wireless LAN; wireless mesh networks; DCF; IEEE 802.11s infrastructure wireless mesh networks; TCP traffic increases; coding delay; coding opportunity; coding queues; distributed coordination function; mesh access points; moving-average estimation methods; network coding; Delays; Encoding; IEEE 802.11 Standards; Markov processes; Network coding; Wireless networks; IEEE 802.11s;mesh networks; network coding (ID#:14-2434) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838680&isnumber=6838626
  • Nabaee, M.; Labeau, F., "Bayesian Quantized Network Coding Via Generalized Approximate Message Passing," Wireless Telecommunications Symposium (WTS), 2014, pp.1,7, 9-11 April 2014. doi: 10.1109/WTS.2014.6834995 In this paper, we study message passing-based decoding of real network coded packets. We explain our developments on the idea of using real field network codes for distributed compression of inter-node correlated messages. Then, we discuss the use of iterative message passing-based decoding for the described network coding scenario, as the main contribution of this paper. Motivated by Bayesian compressed sensing, we discuss the possibility of approximate decoding, even with fewer received measurements (packets) than the number of messages. As a result, our real field network coding scenario, called quantized network coding, is capable of inter-node compression without the need to know the inter-node redundancy of messages. We also present our numerical and analytic arguments on the robustness and computational simplicity (relative to the previously proposed linear programming and standard belief propagation) of our proposed decoding algorithm for the quantized network coding. Keywords: Bayes methods; compressed sensing; iterative decoding; linear programming; message passing; network coding; Bayesian compressed sensing; Bayesian quantized network coding; distributed compression; internode compression; internode correlated messages; iterative message passing-based decoding; linear programming; network coded packets; Bayes methods; Decoding; Message passing; Network coding; Noise; Noise measurement; Quantization (signal);Bayesian compressed sensing; Network coding; approximate message passing (ID#:14-2435) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834995&isnumber=6834983
  • Shalaby, A; Ragab, M.E.-S.; Goulart, V.; Fujiwara, I; Koibuchi, M., "Hierarchical Network Coding for Collective Communication on HPC Interconnects," Parallel, Distributed and Network-Based Processing (PDP), 2014 22nd Euromicro International Conference on, vol., no., pp.98,102, 12-14 Feb. 2014. doi: 10.1109/PDP.2014.58 Network bandwidth is a performance concern especially for collective communication because the bisection bandwidth of recent supercomputers is far less than their full bisection bandwidth. In this context we propose to exploit the use of a network coding technique to reduce the number of unicasts and the size of transferred data generated by latency-sensitive collective communication in supercomputers. Our proposed network coding scheme has a hierarchical multicasting structure with intra-group and inter-group unicasts. Quantitative analysis shows that the aggregate path hop counts by our hierarchical network coding decrease by as much as 94% when compared to conventional unicast-based multicasts. We validate these results by cycle-accurate network simulations. In 1,024-switch networks, our scheme reduces the execution time of collective communication by as much as 64%. We also show that our hierarchical network coding is beneficial for any packet size. Keywords: network coding; parallel machines; parallel processing; HPC interconnects; hierarchical multicasting structure; hierarchical network coding technique; inter-group unicasts; intra-group unicasts; latency-sensitive collective communication; network bandwidth; supercomputers; Aggregates; Bandwidth; Network coding; Network topology; Routing; Supercomputers; Topology; Interconnection networks; collective communication; high-performance computing; multicast algorithm; network coding (ID#:14-2436) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6787258&isnumber=6787236

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Network Intrusion Detection

Network Intrusion Detection


Network intrusion detection is one of the chronic problems in cybersecurity. The growth of cellular and ad hoc networks has increased the threat and risks. Research into this area of concern reflects its importance. The articles cited here were presented or published between January and August of 2014.

  • Weiming Hu; Jun Gao; Yanguo Wang; Ou Wu; Maybank, S., "Online Adaboost-Based Parameterized Methods for Dynamic Distributed Network Intrusion Detection," Cybernetics, IEEE Transactions on, vol.44, no.1, pp.66,82, Jan. 2014. doi: 10.1109/TCYB.2013.2247592 Current network intrusion detection systems lack adaptability to the frequently changing network environments. Furthermore, intrusion detection in the new distributed architectures is now a major requirement. In this paper, we propose two online Adaboost-based intrusion detection algorithms. In the first algorithm, a traditional online Adaboost process is used where decision stumps are used as weak classifiers. In the second algorithm, an improved online Adaboost process is proposed, and online Gaussian mixture models (GMMs) are used as weak classifiers. We further propose a distributed intrusion detection framework, in which a local parameterized detection model is constructed in each node using the online Adaboost algorithm. A global detection model is constructed in each node by combining the local parametric models using a small number of samples in the node. This combination is achieved using an algorithm based on particle swarm optimization (PSO) and support vector machines. The global model in each node is used to detect intrusions. Experimental results show that the improved online Adaboost process with GMMs obtains a higher detection rate and a lower false alarm rate than the traditional online Adaboost process that uses decision stumps. Both algorithms outperform existing intrusion detection algorithms. It is also shown that our PSO- and SVM-based algorithm effectively combines the local detection models into the global model in each node; the global model in a node can handle the intrusion types that are found in other nodes, without sharing the samples of these intrusion types.
Keywords: Gaussian processes; computer architecture; computer network security; distributed processing; learning (artificial intelligence); particle swarm optimisation; support vector machines; GMM; PSO; SVM-based algorithm; distributed architectures; dynamic distributed network intrusion detection; local parameterized detection model; network attack detection; network information security; online Adaboost process; online Adaboost-based intrusion detection algorithms; online Adaboost-based parameterized methods; online Gaussian mixture models; particle swarm optimization; support vector machines; weak classifiers; Dynamic distributed detection; network intrusions; online Adaboost learning; parameterized model (ID#:14-2437) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6488798&isnumber=6683070
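The core of the online Adaboost scheme described above can be illustrated with a single boosting round over a decision stump. This stdlib-Python sketch is illustrative only, not the authors' implementation; the "fails" feature and the toy connection records are invented for the example.

```python
import math

def stump_predict(x, feature, threshold):
    """Weak classifier: +1 (attack) if the feature exceeds the threshold, else -1."""
    return 1 if x[feature] > threshold else -1

def adaboost_round(examples, labels, weights, feature, threshold):
    """One boosting round: the stump's weighted error, its vote weight
    alpha, and the renormalized example weights for the next round."""
    preds = [stump_predict(x, feature, threshold) for x in examples]
    err = sum(w for w, p, y in zip(weights, preds, labels) if p != y)
    err = min(max(err, 1e-10), 1 - 1e-10)      # avoid log(0) at the extremes
    alpha = 0.5 * math.log((1 - err) / err)
    new_w = [w * math.exp(-alpha * y * p)
             for w, p, y in zip(weights, preds, labels)]
    z = sum(new_w)
    return alpha, [w / z for w in new_w]

# Toy connection records; "fails" (failed-login count) is an invented feature.
X = [{"fails": 9}, {"fails": 1}, {"fails": 7}, {"fails": 0}]
y = [1, -1, 1, -1]                             # +1 = attack, -1 = normal
alpha, w1 = adaboost_round(X, y, [0.25] * 4, "fails", 5)
```

Misclassified examples gain weight after each round, which is what lets later weak classifiers (stumps here, GMMs in the paper's improved variant) focus on the hard cases.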
  • Al-Jarrah, O.; Arafat, A, "Network Intrusion Detection System Using Attack Behavior Classification," Information and Communication Systems (ICICS), 2014 5th International Conference on, vol., no., pp.1,6, 1-3 April 2014. doi: 10.1109/IACS.2014.6841978 Intrusion Detection Systems (IDS) have become a necessity in computer security systems because of the increase in unauthorized accesses and attacks. Intrusion Detection is a major component in computer security systems that can be classified as Host-based Intrusion Detection System (HIDS), which protects a certain host or system, and Network-based Intrusion Detection System (NIDS), which protects a network of hosts and systems. This paper addresses probe, or reconnaissance, attacks, which try to collect any possible relevant information in the network. Network probe attacks have two types: host sweep and port scan attacks. Host sweep attacks determine the hosts that exist in the network, while port scan attacks determine the available services that exist in the network. This paper uses an intelligent system to maximize the recognition rate of network attacks by embedding the temporal behavior of the attacks into a TDNN neural network structure. The proposed system consists of five modules: packet capture engine, preprocessor, pattern recognition, classification, and monitoring and alert module. We have tested the system in a real environment where it has shown good capability in detecting attacks. In addition, the system has been tested using the DARPA 1998 dataset with a 100% recognition rate. In fact, our system can recognize attacks in constant time.
Keywords: computer network security; neural nets; pattern classification; HIDS; NIDS; TDNN neural network structure; alert module; attack behavior classification; computer security systems; host sweep attacks; host-based intrusion detection system; network intrusion detection system; network probe attacks; packet capture engine; pattern classification; pattern recognition; port scan attacks; preprocessor; reconnaissance attacks; unauthorized accesses; IP networks; Intrusion detection; Neural networks; Pattern recognition; Ports (Computers); Probes; Protocols; Host sweep; Intrusion Detection Systems; Network probe attack; Port scan; TDNN neural network (ID#:14-2438) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841978&isnumber=6841931
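The idea of embedding temporal behavior into a TDNN can be sketched by the windowing step that precedes the network itself: the per-packet feature stream is sliced into overlapping time-delay windows. This minimal illustration covers only that windowing, not the paper's neural network:

```python
def time_delay_windows(events, d):
    """Slice a per-packet feature stream into overlapping windows of d
    consecutive time steps, the input layout a time-delay neural network
    consumes so it can learn temporal attack patterns (for example, the
    probe sequence of a port scan)."""
    return [events[i:i + d] for i in range(len(events) - d + 1)]
```

Each window then becomes one input vector to the network, so the classifier sees short histories rather than isolated packets.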
  • Jaic, K.; Smith, M.C.; Sarma, N., "A Practical Network Intrusion Detection System for Inline FPGAs on 10GbE Network Adapters," Application-specific Systems, Architectures and Processors (ASAP), 2014 IEEE 25th International Conference on, pp.180,181, 18-20 June 2014. doi: 10.1109/ASAP.2014.6868655 A network intrusion detection system (NIDS), such as SNORT, analyzes incoming packets to identify potential security threats. Pattern matching is arguably the most important and most computationally intensive component of a NIDS. Software-based NIDS implementations drop up to 90% of packets during increased network load, even at lower network bandwidth. We propose an alternative hybrid NIDS that couples an FPGA with a network adapter to provide hardware support for pattern matching and software support for post-processing. The proposed system, SFAOENIDS, offers an extensible open-source NIDS for Solarflare AOE devices. The pattern matching engine, the primary component of the hardware architecture, was designed based on the requirements of typical NIDS implementations. In testing on a real network environment, the SFAOENIDS hardware implementation, operating at 200 MHz, handles a 10Gbps data rate without dropping packets while simultaneously minimizing the server CPU load. Keywords: field programmable gate arrays; security of data; SFAOENIDS; SNORT; Solarflare AOE devices; inline FPGA; lower network bandwidth; network adapters; network load; open-source NIDS; pattern matching; pattern matching engine; practical network intrusion detection system; real network environment; security threats; software based NIDS implementations; Engines; Field programmable gate arrays; Hardware; Intrusion detection; Memory management; Pattern matching; Software (ID#:14-2439) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868655&isnumber=6868606
  • Valgenti, V.C.; Hai Sun; Min Sik Kim, "Protecting Run-Time Filters for Network Intrusion Detection Systems," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on , vol., no., pp.116,122, 13-16 May 2014. doi: 10.1109/AINA.2014.19 Network Intrusion Detection Systems (NIDS) examine millions of network packets searching for malicious traffic. Multi-gigabit line-speeds combined with growing databases of rules lead to dropped packets as the load exceeds the capacity of the device. Several areas of research have attempted to mitigate this problem through improving packet inspection efficiency, increasing resources, or reducing the examined population. A popular method for reducing the population examined is to employ run-time filters that can provide a quick check to determine that a given network packet cannot match a particular rule set. While this technique is an excellent method for reducing the population under examination, rogue elements can trivially bypass such filters with specially crafted packets and render the run-time filters effectively useless. Since the filtering comes at the cost of extra processing a filtering solution could actually perform worse than a non-filtered solution under such pandemic circumstances. To defend against such attacks, it is necessary to consider run-time filters as an independent anomaly detector capable of detecting attacks against itself. Such anomaly detection, together with judicious rate-limiting of traffic forwarded to full packet inspection, allows the detection, logging, and mitigation of attacks targeted at the filters while maintaining the overall improvements in NIDS performance garnered from using run-time filters. 
Keywords: filters; security of data; telecommunication traffic; NIDS performance; anomaly detector; crafted packets; filtering solution; malicious traffic; multigigabit line-speeds; network intrusion detection systems; network packets; packet inspection; run-time filters; run-time filters protection; Automata; Detectors; Inspection; Intrusion detection; Limiting; Matched filters; Sociology; Deep Packet Inspection; Filters; IDS; Intrusion Detection; Network Security; Run-time Filters; Security (ID#:14-2440) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838655&isnumber=6838626
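A run-time filter of the kind discussed above, combined with the rate limiting the authors recommend, might be sketched as follows. The port-based filter, the field names, and the rule set are invented simplifications for illustration, not the paper's design:

```python
# Hypothetical rule set: only these destination ports appear in any rule,
# so packets to other ports provably cannot match and need no full inspection.
RULE_PORTS = {22, 80, 443}

def prefilter(packets, budget):
    """Run-time filter with rate limiting: forward at most `budget`
    filter-passing packets per batch to full inspection. A surge of
    packets that pass the filter but exceed the budget is itself an
    anomaly signal (possibly crafted traffic targeting the filter)."""
    forwarded, filtered_out, over_budget = [], 0, 0
    for pkt in packets:
        if pkt["dport"] not in RULE_PORTS:
            filtered_out += 1            # cannot match any rule
        elif len(forwarded) < budget:
            forwarded.append(pkt)
        else:
            over_budget += 1             # log and rate-limit, do not inspect
    return forwarded, filtered_out, over_budget
```

The `over_budget` counter captures the paper's point: treating the filter as its own anomaly detector lets the system notice floods of specially crafted filter-passing packets instead of collapsing under them.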
  • Chakchai So-In; Mongkonchai, N.; Aimtongkham, P.; Wijitsopon, K.; Rujirakul, K., "An Evaluation of Data Mining Classification Models for Network Intrusion Detection," Digital Information and Communication Technology and Its Applications (DICTAP), 2014 Fourth International Conference on, vol., no., pp.90,94, 6-8 May 2014. doi: 10.1109/DICTAP.2014.6821663 Due to the rapid growth of the Internet, the number of network attacks has risen, making network intrusion detection systems (IDS) essential to securing the network. With heterogeneous accesses and huge traffic volumes, several pattern identification techniques have been brought into the research community. Data mining is one of the analyses which many IDSs have adopted as an attack recognition scheme. Thus, in this paper, a classification methodology including attribute and data selection was developed based on well-known classification schemes, i.e., Decision Tree, Ripper Rule, Neural Networks, Naive Bayes, k-Nearest-Neighbour, and Support Vector Machine, for intrusion detection analysis using both the KDD CUP dataset and recent HTTP BOTNET attacks. Performance was evaluated using recent Weka tools with standard cross-validation and a confusion matrix. Keywords: Internet; computer network security; data mining; invasive software; pattern classification; telecommunication traffic; HTTP BOTNET attacks; IDS; Internet; KDD CUP dataset; Weka tools; attack recognition scheme; attribute selection; confusion matrix; data mining classification models; data selection; network attack; network intrusion detection system; pattern identification techniques; traffic volumes; Accuracy; Computational modeling; Data mining; Internet; Intrusion detection; Neural networks; Probes; BOTNET; Classification; Data Mining; Intrusion Detection; KDD CUP dataset; Network Security (ID#:14-2441) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821663&isnumber=6821645
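Two of the evaluation tools the paper relies on, the confusion matrix and k-fold cross-validation splits, are simple enough to sketch directly. This is a generic stdlib-Python illustration, not Weka's implementation:

```python
def confusion_matrix(y_true, y_pred, labels=("attack", "normal")):
    """Rows are the actual class, columns the predicted class."""
    m = {a: {p: 0 for p in labels} for a in labels}
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

def k_folds(n, k):
    """(train_indices, test_indices) pairs for standard k-fold
    cross-validation: each example appears in exactly one test fold."""
    idx = list(range(n))
    return [(idx[:i * n // k] + idx[(i + 1) * n // k:],
             idx[i * n // k:(i + 1) * n // k])
            for i in range(k)]
```

Off-diagonal cells of the matrix are the misclassifications (false alarms and misses) that the paper's per-classifier comparison is built on.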
  • do Carmo, R.; Hollick, M., "Analyzing Active Probing For Practical Intrusion Detection in Wireless Multihop Networks," Wireless On-demand Network Systems and Services (WONS), 2014 11th Annual Conference on , vol., no., pp.77,80, 2-4 April 2014. doi: 10.1109/WONS.2014.6814725 Practical intrusion detection in Wireless Multihop Networks (WMNs) is a hard challenge. It has been shown that an active-probing-based network intrusion detection system (AP-NIDS) is practical for WMNs. However, understanding its interworking with real networks is still an unexplored challenge. In this paper, we investigate this in practice. We identify the general functional parameters that can be controlled, and by means of extensive experimentation, we tune these parameters and analyze the trade-offs between them, aiming at reducing false positives, overhead, and detection time. The traces we collected help us to understand when and why the active probing fails, and let us present countermeasures to prevent it. Keywords: frequency hop communication; security of data; wireless mesh networks; active-probing-based network intrusion detection system; wireless mesh network; wireless multihop networks; Ad hoc networks; Communication system security; Intrusion detection; Routing protocols; Testing; Wireless communication; Wireless sensor networks (ID#:14-2442) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814725&isnumber=6814711
  • Al-Obeidat, F.N.; El-Alfy, E.-S.M., "Network Intrusion Detection Using Multi-Criteria PROAFTN Classification," Information Science and Applications (ICISA), 2014 International Conference on , vol., no., pp.1,5, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847436 Network intrusion is recognized as a chronic and recurring problem. Hacking techniques continually change and several countermeasure methods have been suggested in the literature including statistical and machine learning approaches. However, no single solution can be claimed as a rule of thumb for the wide spectrum of attacks. In this paper, a novel methodology is proposed for network intrusion detection based on the multicriteria PROAFTN classification. The algorithm is evaluated and compared on a publicly available and widely used dataset. The results in this paper show that the proposed algorithm is promising in detecting various types of intrusions with high classification accuracy. Keywords: computer crime; learning (artificial intelligence); statistical analysis; hacking techniques; machine learning approach; multicriteria PROAFTN classification; network intrusion detection; statistical approach; Accuracy; Computers; Decision making; Educational institutions; Intrusion detection; Prototypes; Support vector machines (ID#:14-2443) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847436&isnumber=6847317
  • Weller-Fahy, D.; Borghetti, B.J.; Sodemann, AA, "A Survey of Distance and Similarity Measures used within Network Intrusion Anomaly Detection," Communications Surveys & Tutorials, IEEE, vol. PP, no.99, pp.1,1, July 2014. doi: 10.1109/COMST.2014.2336610 The use of Anomaly Detection (AD) within the Network Intrusion Detection (NID) field of research, or Network Intrusion Anomaly Detection (NIAD), depends on the proper use of similarity and distance measures, but the measures used are often not documented in published research. As a result, while the body of NIAD research has grown extensively, knowledge of the utility of similarity and distance measures within the field has not grown correspondingly. NIAD research covers a myriad of domains and employs a diverse array of techniques, from simple k-means clustering through advanced multi-agent distributed anomaly detection systems. This review presents an overview of the use of similarity and distance measures within NIAD research. The analysis provides a theoretical background in distance measures and a discussion of various types of distance measures and their uses. Exemplary uses of distance measures in published research are presented, as is the overall state of distance measure rigor in the field. Finally, areas which require further focus on improving distance measure rigor in the NIAD field are presented. Key words: (not provided) (ID#:14-2444) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853338&isnumber=5451756
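As a small illustration of why the survey's concern matters, here are two common measures in plain Python: the Euclidean distance and a standardized variant. The feature examples in the comment are assumptions for illustration, not taken from the survey:

```python
import math

def euclidean(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def standardized_euclidean(a, b, stdevs):
    """Scale each dimension by its standard deviation, a diagonal special
    case of the Mahalanobis distance, so high-variance features (say,
    byte counts) do not swamp low-variance ones (say, flag bits)."""
    return math.sqrt(sum(((x - y) / s) ** 2
                         for x, y, s in zip(a, b, stdevs)))
```

The same two points can be "close" under one measure and "far" under the other, which is exactly why the survey argues the chosen measure must be documented.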
  • Kumar, G.V.P.; Reddy, D.K., "An Agent Based Intrusion Detection System for Wireless Network with Artificial Immune System (AIS) and Negative Clone Selection," Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on, vol., no., pp.429,433, 9-11 Jan. 2014. doi: 10.1109/ICESC.2014.73 Intrusion in wireless networks differs from that in IP networks in the sense that wireless intrusion occurs at both the packet level and the signal level. Hence a wireless intrusion signature may be as simple as, say, a changed MAC address or a jamming signal, or as complicated as session hijacking. Therefore merely managing and cross-verifying the patterns from an intrusion source is difficult in such a network. Beside the difficulty of detecting the intrusion at different layers, network credentials vary from node to node due to factors like mobility, congestion, node failure, and so on. Hence conventional techniques for intrusion detection fail to prevail in wireless networks. Therefore, in this work we devise a unique agent-based technique to gather information from various nodes and use this information with an evolutionary artificial immune system to detect intrusions and prevent them by bypassing or delaying transmission over the intrusive paths. Simulation results show that the overhead of running the AIS system does not vary and is consistent under topological changes. The results also show that the proposed system is well suited for intrusion detection and prevention in wireless networks.
Keywords: access protocols; artificial immune systems; jamming; packet radio networks; radio networks; security of data; AIS system; IP network; MAC address; agent based intrusion detection system; artificial immune system; jamming signal; negative clone selection; network topology; session hijacking; wireless intrusion signature; wireless network; Bandwidth; Delays; Immune system; Intrusion detection; Mobile agents; Wireless networks; Wireless sensor networks; AIS; congestion; intrusion detection; mobility (ID#:14-2445) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6745417&isnumber=6745317
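The negative (clone) selection principle behind AIS detectors can be sketched in a few lines. This generic illustration uses the common r-contiguous-bits matching rule and toy 8-bit "self" patterns; it is not the paper's agent-based system:

```python
import random

def r_contiguous_match(a, b, r):
    """Common AIS matching rule: two bit strings match if they agree on
    at least r contiguous positions."""
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= r:
            return True
    return False

def negative_selection(self_set, n_detectors, length, r, rng):
    """Negative selection: keep only randomly generated detectors that
    match NO self pattern; anything a surviving detector later matches
    is flagged as non-self, i.e. a possible intrusion."""
    detectors = []
    while len(detectors) < n_detectors:
        cand = [rng.randint(0, 1) for _ in range(length)]
        if not any(r_contiguous_match(cand, s, r) for s in self_set):
            detectors.append(cand)
    return detectors

SELF = [[0] * 8, [1] * 8]        # toy "normal traffic" bit patterns
detectors = negative_selection(SELF, 5, 8, 4, random.Random(42))
```

Because detectors are censored against self before deployment, anything they match at run time is by construction non-self, which is the immune-system analogy the paper builds on.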
  • Junho Hong; Chen-Ching Liu; Govindarasu, M., "Detection of Cyber Intrusions Using Network-Based Multicast Messages for Substation Automation," Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES, vol., no., pp.1,5, 19-22 Feb. 2014. doi: 10.1109/ISGT.2014.6816375 This paper proposes a new network-based cyber intrusion detection system (NIDS) using multicast messages in substation automation systems (SASs). The proposed network-based intrusion detection system monitors anomalies and malicious activities of multicast messages based on IEC 61850, e.g., Generic Object Oriented Substation Event (GOOSE) and Sampled Value (SV). NIDS detects anomalies and intrusions that violate predefined security rules using a specification-based algorithm. The performance test has been conducted for different cyber intrusion scenarios (e.g., packet modification, replay, and denial-of-service attacks) using a cyber security testbed. The IEEE 39-bus system model has been used for testing of the proposed intrusion detection method for simultaneous cyber attacks. The false negative ratio (FNR) is the number of misclassified abnormal packets divided by the total number of abnormal packets. The results demonstrate that the proposed NIDS achieves a low false negative ratio.
Keywords: power engineering computing; security of data; substation automation; FNR; GOOSE; IEC 61850; IEEE 39-bus system model; NIDS; SAS; SV; anomaly detection; cyber security testbed; denial-of-service attacks; false negative ratio; generic object-oriented substation event; low false negative rate; misclassified abnormal packets; network-based cyber intrusion detection system; network-based multicast messages; packet modification; predefined security rules; replay; sampled value; simultaneous cyber attacks; specification-based algorithm; substation automation systems; Computer security; Educational institutions; IEC standards; Intrusion detection; Substation automation; Cyber Security of Substations; GOOSE and SV; Intrusion Detection System; Network Security (ID#:14-2446) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816375&isnumber=6816367
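A specification-based check of the kind the paper applies to GOOSE traffic might look like the following sketch. The two rules shown (stNum never decreases; sqNum resets after a state change) are a simplified subset chosen for illustration, not the paper's full rule set:

```python
def check_goose_stream(msgs):
    """Specification-based check on a stream of GOOSE-like messages:
    the status number (stNum) must never decrease, and the sequence
    number (sqNum) must reset to 0 whenever stNum increments. Field
    names follow IEC 61850 conventions."""
    alerts, prev = [], None
    for i, m in enumerate(msgs):
        if prev is not None:
            if m["stNum"] < prev["stNum"]:
                alerts.append((i, "stNum decreased: possible replay"))
            elif m["stNum"] > prev["stNum"] and m["sqNum"] != 0:
                alerts.append((i, "sqNum not reset after state change"))
        prev = m
    return alerts
```

Because the protocol's legal behavior is fully specified, any deviation is flagged without needing attack signatures, which is why this style suits replayed or modified multicast packets.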
  • Arya, A; Kumar, S., "Information theoretic feature extraction to reduce dimensionality of Genetic Network Programming based intrusion detection model," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, vol., no., pp.34,37, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781248 Intrusion detection techniques require examining a high volume of audit records, so it is always challenging to extract a minimal set of features to reduce the dimensionality of the problem while maintaining efficient performance. Previous researchers analyzed the Genetic Network Programming framework using all 41 features of the KDD Cup 99 dataset and found an efficiency of more than 90% at the cost of high dimensionality. We propose a new technique for the same framework with low dimensionality, using an information theoretic approach to select a minimal set of features, resulting in six attributes and giving accuracy very close to their result. Feature selection is based on the hypothesis that not all features are at the same relevance level with respect to a specific class. Simulation results with the KDD Cup 99 dataset indicate that our solution gives accurate results as well as minimizing additional overheads. Keywords: feature extraction; feature selection; genetic algorithms; information theory; security of data; KDD cup 99 dataset; audit records; dimensionality reduction; feature selection; genetic network programming based intrusion detection model; information theoretic feature extraction; Artificial intelligence; Correlation; Association rule; Discretization; Feature Selection; GNP (ID#:14-2447) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781248&isnumber=6781240
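The information theoretic relevance score underlying such feature selection is typically the mutual information between a (discretized) feature and the class label. A minimal stdlib-Python sketch, not the paper's exact procedure:

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    """I(X;Y) in bits between a discrete feature column and the class
    label; higher means the feature is more relevant to the class."""
    n = len(labels)
    px, py = Counter(feature), Counter(labels)
    pxy = Counter(zip(feature, labels))
    # I(X;Y) = sum over (x, y) of p(x,y) * log2(p(x,y) / (p(x) p(y)))
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())
```

Ranking all attributes by this score and keeping the top few is the spirit of the paper's reduction from 41 features to six.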
  • Nafir, Abdenacer; Mazouzi, Smaine; Chikhi, Salim, "Collective intrusion detection in wide area networks," Innovations in Intelligent Systems and Applications (INISTA) Proceedings, 2014 IEEE International Symposium on, vol., no., pp.46,51, 23-25 June 2014. doi: 10.1109/INISTA.2014.6873596 We present in this paper a collective approach for intrusion detection in wide area networks. We use the multi-agent paradigm to model the proposed distributed system. In this system, an agent, which plays several roles, is situated on each node of the net. The first role of an agent is to perform the work of a local intrusion detection system (IDS). Periodically, it exchanges security data within its local neighbourhood. The agent's neighbourhood consists of the IDS agents of neighbouring nodes. The goal of such an approach is to consolidate the decision regarding every suspected security event. Unlike previous works that proposed distributed systems for intrusion detection, our system is not restricted to data sharing. In the case of a conflict, it proceeds to a negotiation between neighbouring agents in order to produce a consensual decision. So, the proposed system is fully distributed. It does not require any central or hierarchical control, which would compromise its scalability, especially in wide area networks such as the Internet. Indeed, in this kind of network, some attacks, like distributed denial of service (DDoS), require a fully distributed defence. Experiments on our system show its potential for satisfactory DDoS attack detection. Keywords: Computer crime; Computer hacking; Internet; Intrusion detection; Multi-agent systems; Wide area networks; DDoS; IDS; Intrusion detection; Multi-agent systems; Network security (ID#:14-2448) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6873596&isnumber=6873584
  • Soo Young Moon; Ji Won Kim; Tae Ho Cho, "An Energy-Efficient Routing Method With Intrusion Detection And Prevention For Wireless Sensor Networks," Advanced Communication Technology (ICACT), 2014 16th International Conference on, vol., no., pp.467,470, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779004 Because of features such as limited resources, wireless communication, and harsh environments, wireless sensor networks (WSNs) are prone to various security attacks. Therefore, we need intrusion detection and prevention methods in WSNs. When the two types of schemes are applied together, heavy communication overhead and the resulting excessive energy consumption of nodes occur. For this reason, we propose an energy-efficient routing method for an environment where both intrusion detection and prevention schemes are used in WSNs. We confirmed through experiments that the proposed scheme reduces communication overhead and energy consumption compared to existing schemes. Keywords: security of data; telecommunication network routing; wireless sensor networks; energy-efficient routing method; excessive energy consumption; heavy communication overhead; intrusion detection scheme; intrusion prevention scheme; security attacks; wireless communication; wireless sensor networks; Energy consumption; Intrusion detection; Network topology; Routing; Sensors; Topology; Wireless sensor networks; intrusion detection; intrusion prevention; network layer attacks; wireless sensor network (ID#:14-2449) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779004&isnumber=6778899
  • Chaudhary, A; Tiwari, V.N.; Kumar, A, "Design an Anomaly Based Fuzzy Intrusion Detection System For Packet Dropping Attack In Mobile Ad Hoc Networks," Advance Computing Conference (IACC), 2014 IEEE International, vol., no., pp.256,261, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779330 Due to advances in communication technologies, mobile ad hoc networks increase the ability of mobile nodes to communicate with one another in an ad hoc manner. Mobile ad hoc networks do not use any predefined infrastructure during communication, so all present mobile nodes that want to communicate with each other immediately form the topology and initiate requests to send or receive data packets. From a security perspective, communication via wireless links makes mobile ad hoc networks more vulnerable to attacks because anyone can join or leave the network at any time. In particular, one very common attack in mobile ad hoc networks is the packet dropping attack by malicious node(s). This paper develops an anomaly based fuzzy intrusion detection system to detect the packet dropping attack in mobile ad hoc networks; the proposed solution also saves the resources of mobile nodes by removing the malicious nodes. For implementation, the QualNet 6.1 simulator and a Sugeno-type fuzzy inference system are used to build the fuzzy rule base and analyze the results. The simulation results show that the proposed system detects the packet dropping attack with a high detection rate and a low false positive rate at each level (low, medium, and high) of mobile node speed.
Keywords: fuzzy logic; fuzzy reasoning; fuzzy set theory; mobile ad hoc networks; telecommunication network topology; telecommunication security; anomaly based fuzzy intrusion detection system; data packets; fuzzy rule base; malicious nodes; mobile ad hoc networks; mobile nodes; network topology; packet dropping attack; QualNet simulator 6.1; Sugeno-type fuzzy inference system; wireless communication; Fuzzy logic; Intrusion detection; Mobile ad hoc networks; Mobile computing; Mobile nodes; MANETs security issues; detection methods; fuzzy logic; intrusion detection system (IDS); mobile ad hoc networks (MANETs); packet dropping attack (ID#:14-2450) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779330&isnumber=6779283
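A zero-order Sugeno-type inference step of the general kind the paper uses can be sketched with one input variable. The membership function shapes and the single drop-ratio input are invented for illustration; the paper's rule base is richer:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def suspicion(drop_ratio):
    """Zero-order Sugeno inference with three rules on a node's observed
    packet-drop ratio: LOW drop -> suspicion 0.0, MEDIUM -> 0.5,
    HIGH -> 1.0; the output is the firing-strength-weighted average."""
    w_low = tri(drop_ratio, -0.4, 0.0, 0.4)
    w_med = tri(drop_ratio, 0.2, 0.5, 0.8)
    w_high = tri(drop_ratio, 0.6, 1.0, 1.4)
    den = w_low + w_med + w_high
    return (w_med * 0.5 + w_high * 1.0) / den if den else 0.0
```

Nodes whose suspicion score stays high across observations would then be classified as packet-dropping and removed from routing, which is how the paper conserves the resources of the remaining nodes.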
  • Holm, H., "Signature Based Intrusion Detection for Zero-Day Attacks: (Not) A Closed Chapter?," System Sciences (HICSS), 2014 47th Hawaii International Conference on, vol., no., pp.4895,4904, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.600 A frequent claim that has not been validated is that signature based network intrusion detection systems (SNIDS) cannot detect zero-day attacks. This paper studies this property by testing 356 severe attacks on the SNIDS Snort, configured with an old official rule set. Of these attacks, 183 are zero-days to the rule set and 173 are theoretically known to it. The results from the study show that Snort clearly is able to detect zero-days (a mean of 17% detection). The detection rate is, however, overall greater for theoretically known attacks (a mean of 54% detection). The paper then investigates how the zero-days are detected, how prone the corresponding signatures are to false alarms, and how easily they can be evaded. Analyses of these aspects suggest that a conservative estimate of zero-day detection by Snort is 8.2%. Keywords: computer network security; digital signatures; SNIDS; false alarm; signature based network intrusion detection; zero day attacks; zero day detection; Computer architecture; Payloads; Ports (Computers); Reliability; Servers; Software; Testing; Computer security; NIDS; code injection; exploits (ID#:14-2451) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759203&isnumber=6758592
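The content matching at the heart of SNIDS such as Snort can be caricatured in a few lines, which also hints at why some zero-days are caught: a new exploit may still contain an old rule's content string. The two rules below are invented, Snort-flavoured examples, not real Snort rules:

```python
# A rule fires when all of its content strings occur in the payload.
# (Real Snort rules also constrain ports, flow state, offsets, etc.,
# which are omitted in this sketch.)
RULES = {
    "sid:1001 web-cgi probe": [b"GET ", b"/cgi-bin/"],
    "sid:1002 NOP sled": [b"\x90\x90\x90\x90"],
}

def match_signatures(payload):
    """Return the identifiers of all rules the payload matches."""
    return [sid for sid, contents in RULES.items()
            if all(c in payload for c in contents)]
```

A zero-day that reuses an old delivery technique (a CGI probe, a NOP sled) trips the old signature even though the rule set has never seen the exploit itself, consistent with the paper's observed nonzero zero-day detection rate.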

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Pervasive Computing

Pervasive Computing


Also called ubiquitous computing, pervasive computing is the concept that all man-made and some natural products will have embedded hardware and software technology and connectivity. This evolution has been proceeding exponentially as computing devices become progressively smaller and more powerful. The goal of pervasive computing, which combines current network technologies with wireless computing, voice recognition, Internet capability, and artificial intelligence, is to create an environment where connectivity is embedded in devices in such a way that it is unobtrusive and always available. Such an approach offers security challenges. The articles cited here were published in the first half of 2014.

  • Chopra, A; Tokas, S.; Sinha, S.; Panchal, V.K., "Integration of Semantic Search Technique and Pervasive Computing," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp.283,285, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828144 The main goal of pervasive computing is to provide services that can be used by the user in the given context with minimal user intervention. To support such an environment, services or applications should be able to interact seamlessly with the other devices or applications present, to gather relevant information in the current context. The main challenge is that devices are resource constrained. To support such systems, so that they can utilize the resources of other sensor nodes/mobile devices, I propose a system that integrates semantic search into pervasive computing. Information associated with mobile devices and sensor nodes is used in a way that results in minimal inexact matching and efficient, improved service discovery. Keywords: information retrieval; ubiquitous computing; information gathering; mobile devices; pervasive computing; resource utilization; semantic search technique; sensor nodes; service discovery; user intervention; Context; Decision support systems; Mobile handsets; Pervasive computing; Resource description framework; Semantics; Wireless sensor networks; RDF; pervasive computing; semantic search; service discovery (ID#:14-2452) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828144&isnumber=6827395
  • Kiljander, J.; D'Elia, A; Morandi, F.; Hyttinen, P.; Takalo-Mattila, J.; Ylisaukko-oja, A; Soininen, J.; Salmon Cinotti, T., "Semantic interoperability architecture for pervasive computing and Internet of Things," Access, IEEE, vol. PP, no.99, pp.1,1, August 2014. doi: 10.1109/ACCESS.2014.2347992 Pervasive computing and Internet of Things (IoT) paradigms have created a huge potential for new business. To fully realize this potential, there is a need for a common way to abstract the heterogeneity of devices so that their functionality can be represented as a virtual computing platform. To this end, we present a novel semantic-level interoperability architecture for pervasive computing and the IoT. There are two main principles in the proposed architecture. First, information and capabilities of devices are represented with Semantic Web knowledge representation technologies, and interaction with devices and the physical world is achieved by accessing and modifying their virtual representations. Second, the global IoT is divided into numerous local smart spaces managed by a Semantic Information Broker (SIB) that provides a means to monitor and update the virtual representation of the physical world. An integral part of the architecture is a Resolution Infrastructure that provides a means to resolve the network address of a SIB either by using a physical object identifier as a pointer to information or by searching SIBs matching a specification represented with SPARQL. We present several reference implementations and applications that we have developed to evaluate the architecture in practice. The evaluation also includes performance studies that, together with the applications, demonstrate the suitability of the architecture to real-life IoT scenarios. Additionally, to validate that the proposed architecture conforms to the common IoT-A Architecture Reference Model (ARM), we map the central components of the architecture to the IoT-ARM.
Keywords: Computer architecture; Context awareness; Interoperability; Pervasive computing; Resource description framework; Semantics; Sensors (ID#:14-2453) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6879461&isnumber=6514899
  • Strobel, D.; Oswald, D.; Richter, B.; Schellenberg, F.; Paar, C., "Microcontrollers as (In)Security Devices for Pervasive Computing Applications," Proceedings of the IEEE , vol.102, no.8, pp.1157,1173, Aug. 2014. doi: 10.1109/JPROC.2014.2325397 Often overlooked, microcontrollers are the central component in embedded systems which drive the evolution toward the Internet of Things (IoT). They are small, easy to handle, low cost, and with myriads of pervasive applications. An increasing number of microcontroller-equipped systems are security and safety critical. In this tutorial, we take a critical look at the security aspects of today's microcontrollers. We demonstrate why the implementation of sensitive applications on a standard microcontroller can lead to severe security problems. To this end, we summarize various threats to microcontroller-based systems, including side-channel analysis and different methods for extracting embedded code. In two case studies, we demonstrate the relevance of these techniques in real-world applications: Both analyzed systems, a widely used digital locking system and the YubiKey 2 onetime password generator, turned out to be susceptible to attacks against the actual implementations, allowing an adversary to extract the cryptographic keys which, in turn, leads to a total collapse of the system security. 
Keywords: Internet of Things; cryptography; embedded systems; microcontrollers; ubiquitous computing; Internet of Things; IoT; YubiKey 2 onetime password generator; cryptographic key extraction; digital locking system; embedded code extraction; embedded systems; microcontroller-equipped systems; pervasive computing applications; security devices; side-channel analysis; Algorithm design and analysis; Cryptography; Embedded systems; Field programmable gate arrays; Integrated circuit modeling; Microcontrollers; Pervasive computing; Security; Code extraction; microcontroller; real-world attacks; reverse engineering; side-channel analysis (ID#:14-2455) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6826474&isnumber=6860340
  • Alomair, B.; Poovendran, R., "Efficient Authentication for Mobile and Pervasive Computing," Mobile Computing, IEEE Transactions on, vol.13, no.3, pp. 469,481, March 2014. doi: 10.1109/TMC.2012.252 With today's technology, many applications rely on the existence of small devices that can exchange information and form communication networks. In a significant portion of such applications, the confidentiality and integrity of the communicated messages are of particular interest. In this work, we propose two novel techniques for authenticating short encrypted messages that are directed to meet the requirements of mobile and pervasive applications. By taking advantage of the fact that the message to be authenticated must also be encrypted, we propose provably secure authentication codes that are more efficient than any message authentication code in the literature. The key idea behind the proposed techniques is to utilize the security that the encryption algorithm can provide to design more efficient authentication mechanisms, as opposed to using standalone authentication primitives. Keywords: cryptography; message authentication; mobile computing; communicated message confidentiality; communicated message integrity; communication networks; encryption algorithm; information exchange; mobile applications; mobile computing; pervasive applications; pervasive computing; provably secure authentication codes; short encrypted message authentication mechanism; Algorithm design and analysis; Authentication; Encryption; Message authentication; Authentication; computational security; pervasive computing; unconditional security; universal hash-function families (ID#:14-2456) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6380496&isnumber=6731368
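The key idea in the entry above, masking a cheap universal hash of the message with pseudorandomness that the cipher already produces, is the classic Carter-Wegman pattern. The sketch below illustrates that generic pattern only; the field size, variable names, and the use of `secrets` as a stand-in for cipher keystream are illustrative assumptions, not the authors' exact construction.

```python
import secrets

P = 2**127 - 1  # a Mersenne prime; hash values live mod P (toy parameter)

def mac(a: int, msg: int, pad: int) -> int:
    # universal hash a*msg mod P, masked by a fresh one-time pad
    return (a * msg + pad) % P

a = secrets.randbelow(P)                      # long-term hash key shared by both sides
msg = int.from_bytes(b"short encrypted message", "big") % P
pad = secrets.randbelow(P)                    # stands in for cipher keystream output
tag = mac(a, msg, pad)
assert mac(a, msg, pad) == tag                # receiver recomputes with same key and pad
```

Because the pad is fresh per message, an attacker who sees one tag learns nothing useful about `a`, which is what makes the hash-then-mask combination cheaper than a standalone MAC.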
  • Vihavainen, S.; Lampinen, A; Oulasvirta, A; Silfverberg, S.; Lehmuskallio, A, "The Clash between Privacy and Automation in Social Media," Pervasive Computing, IEEE, vol.13, no.1, pp.56, 63, Jan.-Mar. 2014. doi: 10.1109/MPRV.2013.25 Classic research on human factors has found that automation never fully eliminates the human operator from the loop. Instead, it shifts the operator's responsibilities to the machine and changes the operator's control demands, sometimes with adverse consequences, called the "ironies of automation." In this article, the authors revisit the problem of automation in the era of social media, focusing on privacy concerns. Present-day social media automatically discloses information, such as users' whereabouts, likings, and undertakings. This review of empirical studies exposes three recurring privacy-related issues in automated disclosure: insensitivity to situational demands, inadequate control of nuance and veracity, and inability to control disclosure with service providers and third parties. The authors claim that "all-or-nothing" automation has proven problematic and that social network services should design their user controls with all stages of the disclosure process in mind. Keywords: data privacy; human factors; social networking (online); automated disclosure; human factors; privacy-related issues; social media; social network services; Automation; Context awareness; Human factors; Media; Pervasive computing; Privacy; Social implications of technology; Social network services; automation; pervasive computing; privacy; social media (ID#:14-2457) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6419690&isnumber=6750476
  • Arbit, A; Oren, Y.; Wool, A, "A Secure Supply-Chain RFID System that Respects Your Privacy," Pervasive Computing, IEEE , vol.13, no.2, pp.52,60, Apr.-June. 2014. doi: 10.1109/MPRV.2014.22 Supply-chain RFID systems introduce significant privacy issues to consumers, making it necessary to encrypt communications. Because the resources available on tags are very small, it is generally assumed that only symmetric-key cryptography can be used in such systems. Unfortunately, symmetric-key cryptography imposes negative trust issues between the various stake-holders, and risks compromising the security of the whole system if even a single tag is reverse engineered. This work presents a working prototype implementation of a secure RFID system which uses public-key cryptography to simplify deployment, reduce trust issues between the supply-chain owner and tag manufacturer, and protect user privacy. The authors' prototype system consists of a UHF tag running custom firmware, a standard off-the-shelf reader and custom point-of-sale terminal software. No modifications were made to the reader or the air interface, proving that high-security EPC tags and standard EPC tags can coexist and share the same infrastructure. Keywords: data privacy; manufacturing data processing; public key cryptography; radiofrequency identification; supply chain management; UHF tag; custom point-of-sale terminal software; data privacy; high-security EPC tags; off-the-shelf reader; privacy issues; public key cryptography; radiofrequency identification; reverse engineering; secure supply-chain RFID system; supply-chain owner; symmetric-key cryptography; system security; tag manufacturer; trust issues user privacy; Encryption; Payloads; Protocols; Public key; Radiofrequency identification; Supply chain management; RFID; pervasive computing; security; supply chain (ID#:14-2458) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818503&isnumber=6818495
  • Abas, K.; Porto, C.; Obraczka, K., "Wireless Smart Camera Networks for the Surveillance of Public Spaces," Computer, vol.47, no.5, pp.37,44, May 2014. doi: 10.1109/MC.2014.140 A taxonomy of wireless visual sensor networks for surveillance offers design goals that try to balance energy efficiency and application performance requirements. SWEETcam, a wireless smart camera network platform, tries to address the challenges raised by achieving adequate energy-performance tradeoffs. Keywords: cameras; video surveillance; wireless sensor networks; SWEETcam; energy-performance tradeoffs; public space surveillance; wireless smart camera networks; Bandwidth; Cameras; Data visualization; Energy efficiency; Smart cameras; Surveillance; Wireless communication; Wireless sensor networks; computer vision; distributed systems; embedded systems; hardware; image processing; pervasive computing; surveillance systems; visualization; wireless sensor networks (ID#:14-2459) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818944&isnumber=6818895
  • Avoine, G.; Coisel, I.; Martin, T., "Untraceability Model for RFID," Mobile Computing, IEEE Transactions on, vol. PP, no.99, pp.1, 1, December 2013. doi: 10.1109/TMC.2013.161 After several years of research on cryptographic models for privacy in RFID systems, it appears that no universally accepted model exists yet. Experience shows that security experts usually prefer using their own ad-hoc models rather than the existing ones. In particular, the inability of the existing models to refine the privacy assessment of different protocols has been highlighted in several studies. The paper emphasizes the necessity of defining a new model capable of comparing protocols meaningfully. It introduces an untraceability model that is operational where the previous models are not. The model aims to be easily usable to design proofs or describe attacks. This spirit led to a modular model where adversary actions (oracles), capabilities (selectors and restrictions), and goals (experiment) follow an intuitive and practical approach. This design enhances the ability to formalize new adversarial assumptions and future evolutions of the technology, and provides a finer privacy evaluation of protocols. Keywords: Pervasive computing; Security; Systems and Information Theory; protection; integrity (ID#:14-2460) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6692838&isnumber=4358975
  • Chia-Mei Chen; Peng-Yu Yang; Ya-Hui Ou; Han-Wei Hsiao, "Targeted Attack Prevention at Early Stage," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, vol., no., pp.866,870, 13-16 May 2014. doi: 10.1109/WAINA.2014.134 Targeted cyber attacks play a critical role in disrupting network infrastructure and information privacy. Based on incident investigations, intelligence gathering is the first phase of such attacks. To evade detection, a hacker may make use of a botnet, a set of zombie machines, to gain access to a target, with the zombies sending the collected results back to the hacker. Even though the zombies may be blocked by the detection system, the hacker, using the access information obtained from the botnet, can log in to the target from another machine without being noticed by the detection system. Such an information-gathering tactic evades detection and grants the hacker initial access to the target. The proposed defense system analyzes multiple logs from the network and extracts the reconnaissance attack sequences related to targeted attacks. A state-based model is adopted to model the steps of this early-phase attack performed by multiple scouts and an intruder, and such attack events over a long time frame become significant in the state-aware model. The results show that the proposed system can identify the attacks at the early stage efficiently to prevent further damage in the networks. Keywords: authorisation; data privacy; invasive software; ubiquitous computing; botnet; cyber attack; information privacy; intelligence gathering; network infrastructure; state-based model; targeted attack prevention; Computer hacking; Hidden Markov models; IP networks; Joints; Reconnaissance; Servers; intrusion detection; pervasive computing; targeted attacks (ID#:14-2461) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844748&isnumber=6844560
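The correlation the entry above describes, a scan by botnet zombies followed later by a login from a machine that never scanned, can be caricatured in a few lines. This is a toy illustration with invented event tuples, not the authors' state-aware model:

```python
# Toy state-based correlation: track which sources scanned each target; a
# later login to that target from a source that never scanned it raises an
# alert, matching the "zombies scout, attacker logs in" pattern.
def detect_targeted(events):
    scanners = {}                      # target -> set of sources that scanned it
    alerts = []
    for src, action, target in events:
        if action == "scan":
            scanners.setdefault(target, set()).add(src)
        elif action == "login":
            seen = scanners.get(target, set())
            if seen and src not in seen:
                alerts.append((src, target))   # login from a fresh machine
    return alerts

events = [("zombie1", "scan", "srv"), ("zombie2", "scan", "srv"),
          ("attacker", "login", "srv")]
assert detect_targeted(events) == [("attacker", "srv")]
```

The point the paper makes is that neither event is suspicious alone; only a stateful model spanning a long time frame connects them.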
  • Mirzadeh, S.; Cruickshank, H.; Tafazolli, R., "Secure Device Pairing: A Survey," Communications Surveys & Tutorials, IEEE, vol.16, no.1, pp.17,40, First Quarter 2014. doi: 10.1109/SURV.2013.111413.00196 In this paper, we discuss secure device pairing mechanisms in detail. We explain the man-in-the-middle attack problem in unauthenticated Diffie-Hellman key agreement protocols and show how it can be solved by using out-of-band channels in the authentication procedure. We categorize out-of-band channels into three categories of weak, public, and private channels and demonstrate their properties through some familiar scenarios. A wide range of current device pairing mechanisms are studied, and their design circumstances, problems, and security issues are explained. We also study group device pairing mechanisms and discuss their application in constructing authenticated group key agreement protocols. We divide the mechanisms into two categories, protocols with and without a trusted leader, and show that protocols with a trusted leader are more communication- and computation-efficient. In our study, we consider both insider and outsider adversaries and present protocols that provide secure group device pairing for uncompromised nodes even in the presence of corrupted group members. Keywords: cryptographic protocols; authenticated group key agreement protocol; authentication procedure; device pairing mechanism; man-in-the-middle attack problem; out-of-band channel; private channel; public channel; unauthenticated Diffie-Hellman key agreement protocols; Authentication; DH-HEMTs; Protocols; Public key; Wireless communication; key management; machine-to-machine communication; pervasive computing; security (ID#:14-2462) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6687314&isnumber=6734839
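The survey's core scenario, unauthenticated Diffie-Hellman hardened by an out-of-band comparison, can be sketched as follows. The group parameters are deliberately toy-sized and unsafe, and the six-character fingerprint is an illustrative choice; real pairing protocols use standardized groups and commitment schemes.

```python
import hashlib, secrets

P = 2**127 - 1   # toy prime modulus -- NOT a safe DH group
G = 3

def dh_keypair():
    x = secrets.randbelow(P - 2) + 1
    return x, pow(G, x, P)

def fingerprint(pub_a: int, pub_b: int, shared: int) -> str:
    digest = hashlib.sha256(f"{pub_a}|{pub_b}|{shared}".encode()).hexdigest()
    return digest[:6]   # short enough for a human to compare on two screens

ax, apub = dh_keypair()
bx, bpub = dh_keypair()
shared_a = pow(bpub, ax, P)   # Alice's view of the shared secret
shared_b = pow(apub, bx, P)   # Bob's view
assert shared_a == shared_b
# A man-in-the-middle must substitute apub/bpub, so the two devices'
# fingerprints would then differ -- the out-of-band comparison catches it.
assert fingerprint(apub, bpub, shared_a) == fingerprint(apub, bpub, shared_b)
```

This is the "weak" out-of-band channel of the survey's taxonomy: the channel carries only a short, human-comparable value, but the attacker cannot forge what the user sees on both screens.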
  • Thuong Nguyen, "Bayesian Nonparametric Extraction Of Hidden Contexts From Pervasive Honest Signals," Pervasive Computing And Communications Workshops (PERCOM Workshops), 2014 IEEE International Conference on, vol., no., pp.168,170, 24-28 March 2014. doi: 10.1109/PerComW.2014.6815190 Hidden patterns and contexts play an important part in intelligent pervasive systems. Most existing work has focused on simple forms of context derived directly from raw signals. High-level constructs and patterns have been largely neglected or remain under-explored in pervasive computing, mainly due to their growing complexity over time and the lack of efficient, principled methods to extract them. Traditional parametric modeling approaches from machine learning find it difficult to discover new, unseen patterns and contexts arising from the continuous growth of data streams because of their training-then-prediction paradigm. In this work, we propose to apply Bayesian nonparametric models as a systematic and rigorous paradigm to continuously learn hidden patterns and contexts from raw social signals to provide basic building blocks for context-aware applications. Bayesian nonparametric models allow the model complexity to grow with the data, fitting naturally to several problems encountered in pervasive computing. Under this framework, we use nonparametric prior distributions to model the data generative process, which helps toward learning the number of latent patterns automatically, adapting to changes in the data, and discovering never-seen-before patterns, contexts, and activities. The proposed methods are agnostic to data types; however, we demonstrate them on two types of signals: accelerometer activity data and Bluetooth proximal data.
Keywords: data mining; learning (artificial intelligence); ubiquitous computing; Bayesian nonparametric extraction; Bayesian nonparametric models; Bluetooth proximal data; accelerometer activity data; context-aware applications; data streams; hidden contexts extraction; high-level constructs; high-level patterns; intelligent pervasive systems; machine learning; parametric modeling approach; pervasive computing; pervasive honest signals; social signals; training-then-prediction paradigm; Adaptation models; Context; Context modeling; Data mining; Data models; Hidden Markov models; Pervasive computing (ID#:14-2463) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815190&isnumber=6815123

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Router Systems Security

Router Systems Security


Routers are among the most ubiquitous electronic devices in use. Basic security through protocols and encryption is readily achieved, but routing itself remains prone to leaks and abuse. The articles cited here look at route leaks, stack protection, and mobile platforms using Tor, iOS, and Android OS. They were published in the first half of 2014.

  • Siddiqui, M.S.; Montero, D.; Yannuzzi, M.; Serral-Gracia, R.; Masip-Bruin, X., "Diagnosis of Route Leaks Among Autonomous Systems In The Internet," Smart Communications in Network Technologies (SaCoNeT), 2014 International Conference on, vol., no., pp.1,6, 18-20 June 2014. doi: 10.1109/SaCoNeT.2014.6867765 Border Gateway Protocol (BGP) is the de facto inter-domain routing protocol in the Internet. It was designed without an inherent security mechanism and hence is prone to a number of vulnerabilities which can cause large-scale disruption in the Internet. Route leaks are one such inter-domain routing security problem, with the potential to cause wide-scale Internet service failure. Route leaks occur when autonomous systems violate export policies while exporting routes. As BGP security has been an active research area for over a decade now, several security strategies have been proposed, some of which advocated complete replacement of BGP or the addition of new features to it, but they failed to achieve global acceptance. Even the most recent effort in this regard, led by the Secure Inter-Domain Routing (SIDR) working group (WG) of the IETF, fails to counter all BGP anomalies, especially route leaks. In this paper we look at the efforts to counter policy-related BGP problems and provide analytical insight into why they are ineffective. We argue for a new direction for future research in managing the broader security issues in inter-domain routing. In that light, we propose a naive approach for countering the route leak problem by analyzing the information available at hand, such as the RIB of the router. The main purpose of this paper is to position and highlight the autonomous smart analytical approach for tackling policy-related BGP security issues.
Keywords: Internet; computer network security; routing protocols; BGP security issue; IETF; Internet autonomous systems; Secure Inter-Domain Routing working group; border gateway protocol; interdomain routing protocol; interdomain routing security problem; route leak diagnosis; security issues; IP networks; Internet; Radiation detectors; Routing; Routing protocols; Security (ID#:14-2464) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6867765&isnumber=6867755
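For context, the export-policy violation the authors target is commonly modeled by the Gao-Rexford "valley-free" rule: a route learned from a provider or a peer may be exported only to customers. The textbook check below illustrates that rule only; it is not the paper's RIB-based diagnosis, and the relationship labels are from the exporting AS's viewpoint.

```python
# Valley-free export rule: exporting a route learned from a peer or a
# provider to anyone other than a customer constitutes a route leak.
def is_leak(learned_from: str, export_to: str) -> bool:
    # each argument is one of "customer", "peer", "provider"
    return learned_from in ("peer", "provider") and export_to != "customer"

assert not is_leak("customer", "provider")  # customer routes go to everyone
assert not is_leak("peer", "customer")      # peer routes may go to customers
assert is_leak("provider", "peer")          # classic route leak
assert is_leak("peer", "provider")          # also a leak
```

The difficulty the paper highlights is that business relationships are confidential, so a neighboring AS cannot directly evaluate this predicate and must infer violations from routing data such as the RIB.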
  • Peng Wu; Wolf, T., "Stack Protection In Packet Processing Systems," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp.53, 57, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785304 Network security is a critical aspect of Internet operations. Most network security research has focused on protecting end-systems from hacking and denial-of-service attacks. In our work, we address hacking attacks on the network infrastructure itself. In particular, we explore data plane stack smashing attacks that have been demonstrated successfully on network processor systems. We explore their use in the context of software routers that are implemented on top of general-purpose processors and operating systems. We discuss how such attacks can be adapted to these router systems and how stack protection mechanisms can be used as a defense. We show experimental results that demonstrate the effectiveness of these stack protection mechanisms. Keywords: Internet; computer crime; computer network security; general purpose computers; operating systems (computers); packet switching; telecommunication network routing; Internet; computer network security; denial of service attacks; end systems protection; general purpose processor; hacking attacks; network infrastructure; network processor systems; operating systems; packet processing system; router systems; smashing attacks; software routers; stack protection mechanism; Computer architecture; Information security; Linux; Operating systems; Protocols; attack; defense; network security; stack smashing (ID#:14-2465) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785304&isnumber=6785290
  • Frantti, Tapio; Roning, Juha, "A Risk-Driven Security Analysis For A Bluetooth Low Energy Based Microdata Ecosystem," Ubiquitous and Future Networks (ICUFN), 2014 Sixth International Conference on, vol., no., pp.69,74, 8-11 July 2014. doi: 10.1109/ICUFN.2014.6876753 This paper presents the security requirements, risk survey, security objectives, and security controls of the Bluetooth Low Energy (BLE) based Catcher devices and the related Microdata Ecosystem of the Ceruus company for secure, energy-efficient, and scalable wireless content distribution. The system architecture is composed of Mobile Cellular Network (MCN) based gateway/edge router devices, such as smartphones, Catchers, and web-based application servers. It is assumed that MCN-based gateways communicate with application servers and surrounding Catcher devices. The analysis of the developed scenarios highlighted common aspects and led to security requirements, objectives, and controls that were used to define and develop the Catcher and MCN-based router devices and to guide the system architecture design of the Microdata Ecosystem. Keywords: Authentication; Ecosystems; Encryption; Logic gates; Protocols; Servers; Internet of Things; authentication; authorization; confidentiality; integrity; security; timeliness (ID#:14-2466) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876753&isnumber=6876727
  • Wassel, H.M.G.; Ying Gao; Oberg, J.K.; Huffmire, T.; Kastner, R.; Chong, F.T.; Sherwood, T., "Networks on Chip with Provable Security Properties," Micro, IEEE , vol.34, no.3, pp.57,68, May-June 2014. doi: 10.1109/MM.2014.46 In systems where a lack of safety or security guarantees can be catastrophic or even fatal, noninterference is used to separate domains handling critical (or confidential) information from those processing normal (or unclassified) data for purposes of fault containment and ease of verification. This article introduces SurfNoC, an on-chip network that significantly reduces the latency incurred by strict temporal partitioning. By carefully scheduling the network into waves that flow across the interconnect, data from different domains carried by these waves are strictly noninterfering while avoiding the significant overheads associated with cycle-by-cycle time multiplexing. The authors describe the scheduling policy and router microarchitecture changes required, and evaluate the information-flow security of a synthesizable implementation through gate-level information flow analysis. When comparing their approach for varying numbers of domains and network sizes, they find that in many cases SurfNoC can reduce the latency overhead of implementing cycle-level noninterference by up to 85 percent. 
Keywords: network-on-chip; processor scheduling; security of data; SurfNoC; cycle-by-cycle time multiplexing; cycle-level noninterference; gate-level information flow analysis; information-flow security; network scheduling; networks on chip; provable security properties; Computer architecture; Computer security; Microarchitecture; Network-on-chip; Ports (Computers); Quality of service; Schedules; high performance computing; high-assurance systems; networks on chip; noninterference; security; virtualization (ID#:14-2467) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828567&isnumber=6828565
  • Sivaraman, V.; Matthews, J.; Russell, C.; Ali, S.T.; Vishwanath, A, "Greening Residential WiFi Networks under Centralized Control," Mobile Computing, IEEE Transactions on, vol. PP, no.99, pp.1, 1, May 2014. doi: 10.1109/TMC.2014.2324582 Residential broadband gateways (comprising modem, router, and WiFi access point), though individually consuming only 5-10 Watts of power, are significant contributors to overall network energy consumption due to large deployment numbers. Moreover, home gateways are typically always on, so as to provide continuous online presence to household devices for VoIP, smart metering, security surveillance, medical monitoring, etc. A natural solution for reducing the energy consumption of home gateways is to leverage the overlap of WiFi networks common in urban environments and aggregate user traffic on to fewer gateways, thus putting the remaining to sleep. In this paper we propose, evaluate, and prototype an architecture that overcomes significant challenges in making this solution feasible at large-scale. We advocate a centralized approach, whereby a single authority coordinates the home gateways to maximize energy savings in a fair manner. Our solution can be implemented across heterogeneous ISPs, avoids client-side modifications (thus encompassing arbitrary user devices and operating systems), and permits explicit control of session migrations. We apply our solution to WiFi traces collected in a building with 30 access points and 25,000 client connections, and evaluate via simulation the trade-offs between energy savings, session disruptions, and fairness. We then prototype our system on commodity WiFi access points, test it in a two-storey building emulating 6 residences, and demonstrate radio energy reduction of over 60% with little impact on user experience.
Keywords: Bandwidth; Buildings; Energy consumption; Green products; IEEE 802.11 Standards; Logic gates; Security (ID#:14-2468) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816063&isnumber=4358975
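The gateway-aggregation idea in the entry above is, at heart, a set-cover-style assignment: serve every client with as few awake gateways as possible. The greedy toy below only conveys that flavor; the topology is invented and it omits the paper's fairness, migration, and multi-ISP concerns.

```python
# Greedy sketch: attach each client to an already-awake gateway when one is
# in range, waking a new gateway only when necessary, so the rest can sleep.
def assign(clients_in_range):
    # clients_in_range: {client: set of gateways it can reach}
    awake, assignment = set(), {}
    # serve the most constrained clients (fewest reachable gateways) first
    for client, gws in sorted(clients_in_range.items(), key=lambda kv: len(kv[1])):
        on = gws & awake
        if not on:
            on = {min(gws)}      # wake one reachable gateway (deterministic pick)
            awake |= on
        assignment[client] = min(on)
    return awake, assignment

demo = {"c1": {"g1", "g2"}, "c2": {"g2"}, "c3": {"g2", "g3"}}
awake, assignment = assign(demo)
assert awake == {"g2"}           # one gateway serves all three clients
```

Even this naive version shows why centralized coordination helps: only a party that sees every client's reachable set can safely decide which gateways may sleep.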
  • Tennekoon, R.; Wijekoon, J.; Harahap, E.; Nishi, H.; Saito, E.; Katsura, S., "Per Hop Data Encryption Protocol for Transmission of Motion Control Data over Public Networks," Advanced Motion Control (AMC), 2014 IEEE 13th International Workshop on, vol., no., pp.128,133, 14-16 March 2014. doi: 10.1109/AMC.2014.6823269 Bilateral controllers are a widely used, vital technology for performing remote operations and telesurgeries. The nature of the bilateral controller enables control of objects that are geographically far from the operation location; therefore, the control data has to travel through public networks. As a result, to maintain the effectiveness and consistency of applications such as teleoperation and telesurgery, fast data delivery and data integrity are essential. The Service-oriented Router (SoR) was introduced to maintain the rich information on the Internet and to achieve maximum benefit from networks. In particular, the security, privacy, and integrity of bilateral communication have not been discussed in spite of their significance, given the underlying skill information or personal vital information involved. An SoR can analyze all packet or network stream transactions on its interfaces and store them in high-throughput databases. In this paper, we introduce a hop-by-hop routing protocol which provides hop-by-hop data encryption using functions of the SoR. This infrastructure can provide security, privacy, and integrity by using these functions. Furthermore, we present an implementation of the proposed system in the ns-3 simulator; the test results show that in a given scenario, the protocol incurs a processing delay of only 46.32 ms for the encryption and decryption processes per packet.
Keywords: Internet; computer network security; control engineering computing; cryptographic protocols; data communication; data integrity; data privacy; force control; medical robotics; motion control; position control; routing protocols; surgery; telecontrol; telemedicine; telerobotics; Internet; SoR; bilateral communication; bilateral controller; control objects; data delivery; data integrity; decryption process; hop-by-hop data encryption; hop-by-hop routing protocol; motion control data transmission; network stream transaction analysis; ns-3 simulator; operation location; packet analysis; per hop data encryption protocol; personal vital information; privacy; processing delay; public network; remote operation; security; service-oriented router; skill information; teleoperation; telesurgery; throughput database; Delays; Encryption; Haptic interfaces; Routing protocols; Surgery; Bilateral Controllers; Service-oriented Router; hop-by-hop routing; motion control over networks; ns-3 (ID#:14-2469) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823269&isnumber=6823244
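Hop-by-hop encryption as described above means each router decrypts with the inbound link key and re-encrypts with the outbound one, so plaintext exists only inside trusted routers. The sketch below shows only that key-handling pattern; the XOR keystream stands in for a real cipher, the link keys are invented, and none of this is the SoR implementation.

```python
import hashlib

# Toy XOR stream "cipher" -- illustration only, never use as real crypto.
def xor_crypt(key: bytes, data: bytes) -> bytes:
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))  # XOR is its own inverse

link_keys = [b"key-hop1", b"key-hop2", b"key-hop3"]    # one key per link (made up)
packet = b"motion control sample"

ciphertext = xor_crypt(link_keys[0], packet)            # sender encrypts for hop 1
for inbound, outbound in zip(link_keys, link_keys[1:]):
    plaintext = xor_crypt(inbound, ciphertext)          # router decrypts inbound link
    ciphertext = xor_crypt(outbound, plaintext)         # ...re-encrypts for outbound
assert xor_crypt(link_keys[-1], ciphertext) == packet   # receiver decrypts last hop
```

The per-hop decrypt/re-encrypt step is exactly where the paper's measured 46.32 ms processing delay per packet is incurred.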
  • Bingyang Liu; Jun Bi; Vasilakos, AV., "Toward Incentivizing Anti-Spoofing Deployment," Information Forensics and Security, IEEE Transactions on, vol.9, no.3, pp.436,450, March 2014. doi: 10.1109/TIFS.2013.2296437 IP spoofing-based flooding attacks are a serious and open security problem on the current Internet. The best current antispoofing practices have long been implemented in modern routers. However, they are not sufficiently applied due to the lack of deployment incentives, i.e., an autonomous system (AS) can hardly gain additional protection by deploying them. In this paper, we propose mutual egress filtering (MEF), a novel antispoofing method, which provides continuous deployment incentives. The MEF is implemented on the AS border routers using access control lists (ACLs). It drops an outbound packet whose source address does not belong to the local AS if the packet is related to a spoofing attack against other MEF-enabled ASes. By this means, only the deployers of the MEF can gain protection, whereas nondeployers cannot free ride. As more ASes deploy MEF, deployment incentives become higher. We present the system design of MEF, and propose an optimal prefix compression algorithm to compact the ACL into the routers' limited hardware resource. With theoretical analysis and simulations with real Internet data, our evaluation results show that MEF is the only method that achieves monotonically increasing deployment incentives for all types of spoofing attacks, and the system design is lightweight and practical. The prefix compression algorithm advances the state-of-the-art by generalizing the functionalities and reducing the overhead in both time and space. 
Keywords: IP networks; Internet; authorisation; computer network security; telecommunication network routing; ACL; AS border routers; IP spoofing-based flooding attacks; Internet; MEF; access control lists; antispoofing deployment incentivization; deployment incentives; functionality generalization; mutual egress filtering; open security problem; optimal prefix compression resource; overhead reduction; Compression algorithms; Filtering; Hardware; IP networks; Internet; Routing protocols; System analysis and design; DoS defense; IP spoofing; deployment incentive; spoofing prevention (ID#:14-2470) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6697842&isnumber=6727454
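The MEF policy above can be paraphrased as: drop an outbound packet whose source address is outside the local AS's prefixes when its destination belongs to another MEF deployer; non-deployers get no such protection, so they cannot free ride. A minimal sketch with invented prefixes (real deployments compile this policy into router ACLs):

```python
import ipaddress

LOCAL_PREFIXES = [ipaddress.ip_network("203.0.113.0/24")]        # this AS (made up)
MEF_DEPLOYER_PREFIXES = [ipaddress.ip_network("198.51.100.0/24")]  # other deployers (made up)

def egress_allows(src: str, dst: str) -> bool:
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    src_is_local = any(s in p for p in LOCAL_PREFIXES)
    dst_is_deployer = any(d in p for p in MEF_DEPLOYER_PREFIXES)
    return src_is_local or not dst_is_deployer   # spoofed + deployer dest -> drop

assert egress_allows("203.0.113.7", "198.51.100.9")    # legitimate source: pass
assert not egress_allows("192.0.2.1", "198.51.100.9")  # spoofed toward deployer: drop
assert egress_allows("192.0.2.1", "192.0.2.200")       # non-deployer dest: rule not applied
```

The last case is the incentive mechanism in miniature: spoofed traffic toward non-deployers is untouched, so only joining MEF buys an AS protection.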
  • Naito, K.; Mori, K.; Kobayashi, H.; Kamienoo, K.; Suzuki, H.; Watanabe, A, "End-to-end IP Mobility Platform In Application Layer for iOS and Android OS," Consumer Communications and Networking Conference (CCNC), 2014 IEEE 11th, vol., no., pp.92,97, 10-13 Jan. 2014. doi: 10.1109/CCNC.2014.6866554 Smartphones are a new type of mobile device on which users can easily install additional software. In almost all smartphone applications, the client-server model is used because end-to-end communication is prevented by NAT routers. Recently, some smartphone applications provide real-time services such as voice and video communication, online games, etc. In these applications, end-to-end communication is preferable to reduce transmission delay and achieve efficient network usage. IP mobility and security are also important matters. However, conventional IP mobility mechanisms are not suitable for these applications because most mechanisms are assumed to be installed in the OS kernel. We have developed a novel IP mobility mechanism called NTMobile (Network Traversal with Mobility). NTMobile supports end-to-end IP mobility in IPv4 and IPv6 networks; however, it is assumed to be installed in the Linux kernel, as with other technologies. In this paper, we propose a new type of end-to-end mobility platform that provides end-to-end communication, mobility, and secure data exchange functions in the application layer for smartphone applications. In the platform, we use NTMobile, which is ported as an application program. We then extend NTMobile to suit smartphone devices and to provide secure data exchange. Client applications can achieve secure end-to-end communication and secure data exchange by sharing an encryption key between clients. Users also enjoy IP mobility, the main function of NTMobile, in each application. Finally, we confirmed that the developed module works on Android and iOS.
Keywords: Android (operating system); IP networks; client-server systems; cryptography; electronic data interchange; iOS (operating system);real-time systems; smart phones; Android OS;IPv4 networks; IPv6 networks ;Linux kernel; NAT routers; NTMobile; OS kernel; application layer; client-server model encryption key; end-to-end IP mobility platform; end-to-end communication; iOS system; network traversal with mobility; network usage; real time services; secure data exchange ;smartphones; transmission delay; Authentication; Encryption; IP networks; Manganese; Relays; Servers (ID#:14-2471) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866554&isnumber=6866537
  • Zhen Ling; Junzhou Luo; Kui Wu; Wei Yu; Xinwen Fu, "TorWard: Discovery of Malicious Traffic Over Tor," INFOCOM, 2014 Proceedings IEEE, vol., no., pp.1402,1410, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848074 Tor is a popular low-latency anonymous communication system. However, it is currently abused in various ways. Tor exit routers are frequently troubled by administrative and legal complaints. To gain insight into such abuse, we design and implement a novel system, TorWard, for the discovery and systematic study of malicious traffic over Tor. The system can avoid legal and administrative complaints and allows the investigation to be performed in a sensitive environment such as a university campus. An IDS (Intrusion Detection System) is used to discover and classify malicious traffic. We performed comprehensive analysis and extensive real-world experiments to validate the feasibility and effectiveness of TorWard. Our data shows that around 10% of Tor traffic can trigger IDS alerts. Malicious traffic includes P2P traffic, malware traffic (e.g., botnet traffic), DoS (Denial-of-Service) attack traffic, spam, and others. Around 200 known malware have been identified. To the best of our knowledge, we are the first to perform malicious traffic categorization over Tor. Keywords: computer network security; peer-to-peer computing; telecommunication network routing; telecommunication traffic; DoS; IDS; IDS alerts; P2P traffic; Tor exit routers; denial-of-service attack traffic; intrusion detection system; low-latency anonymous communication system; malicious traffic categorization; malicious traffic discovery; spam; Bandwidth; Computers; Logic gates; Malware; Mobile handsets; Ports (Computers); Servers; Intrusion Detection System; Malicious Traffic; Tor (ID#:14-2472) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848074&isnumber=6847911
  • Ganegedara, T.; Weirong Jiang; Prasanna, V.K., "A Scalable and Modular Architecture for High-Performance Packet Classification," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.5, pp.1135,1144, May 2014. doi: 10.1109/TPDS.2013.261 Packet classification is widely used as a core function for various applications in network infrastructure. With increasing demands in throughput, performing wire-speed packet classification has become challenging. The performance of today's packet classification solutions also depends on the characteristics of rulesets. In this work, we propose a novel modular Bit-Vector (BV) based architecture to perform high-speed packet classification on Field Programmable Gate Array (FPGA). We introduce an algorithm named StrideBV and modularize the BV architecture to achieve better scalability than traditional BV methods. Further, we incorporate range search in our architecture to eliminate ruleset expansion caused by range-to-prefix conversion. The post place-and-route results of our implementation on a state-of-the-art FPGA show that the proposed architecture is able to operate at 100+ Gbps for minimum size packets while supporting large rulesets of up to 28 K rules using only the on-chip memory resources. Our solution is ruleset-feature independent, i.e., the above performance can be guaranteed for any ruleset regardless of the composition of the ruleset. 
Keywords: field programmable gate arrays; packet switching; FPGA; core function ;field programmable gate array; high performance packet classification solutions; high speed packet classification; modular architecture; modular bit vector; network infrastructure; on-chip memory resources; range-to-prefix conversion; ruleset expansion; ruleset-feature independent; scalable architecture; wire speed packet classification; Arrays; Field programmable gate arrays; Hardware; Memory management; Pipelines; Throughput; Vectors; ASIC; FPGA; Packet classification; firewall; hardware architectures; network security; networking; router (ID#:14-2473) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6627892&isnumber=6786006
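The bit-vector (BV) approach named in this entry can be illustrated in software: each header field is looked up to produce a bit-vector of the rules it matches, the per-field vectors are ANDed together, and the lowest set bit identifies the highest-priority matching rule. A minimal sketch using Python integers as bit-vectors; the rule table and string-prefix matching below are illustrative stand-ins, not the paper's FPGA pipeline:

```python
# Illustrative rule table: (src_prefix, dst_prefix, action), highest priority first.
# '*' matches anything; string-prefix matching stands in for real IP-prefix lookup.
RULES = [
    ("10.0", "20.1", "deny"),
    ("10.0", "*",    "allow"),
    ("*",    "*",    "deny"),
]

def field_bv(field_idx, value):
    """Bit-vector of rules whose given field matches: bit i set means rule i matches."""
    bv = 0
    for i, rule in enumerate(RULES):
        pattern = rule[field_idx]
        if pattern == "*" or value.startswith(pattern):
            bv |= 1 << i
    return bv

def classify(src, dst):
    """AND the per-field bit-vectors; the lowest set bit is the winning rule."""
    match = field_bv(0, src) & field_bv(1, dst)
    if match == 0:
        return None
    winner = (match & -match).bit_length() - 1  # index of the lowest set bit
    return RULES[winner][2]

print(classify("10.0.1.2", "30.0.0.1"))  # second rule wins: "allow"
```

The hardware version parallelizes exactly this AND-and-priority-encode step across pipeline stages, which is why performance is independent of ruleset composition.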
  • Sgouras, K.I; Birda, AD.; Labridis, D.P., "Cyber Attack Impact On Critical Smart Grid Infrastructures," Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES , vol., no., pp.1,5, 19-22 Feb. 2014. doi: 10.1109/ISGT.2014.6816504 Electrical Distribution Networks face new challenges by the Smart Grid deployment. The required metering infrastructures add new vulnerabilities that need to be taken into account in order to achieve Smart Grid functionalities without considerable reliability trade-off. In this paper, a qualitative assessment of the cyber attack impact on the Advanced Metering Infrastructure (AMI) is initially attempted. Attack simulations have been conducted on a realistic Grid topology. The simulated network consisted of Smart Meters, routers and utility servers. Finally, the impact of Denial-of-Service and Distributed Denial-of-Service (DoS/DDoS) attacks on distribution system reliability is discussed through a qualitative analysis of reliability indices. Keywords: computer network security; power distribution reliability; power engineering computing; power system security; smart meters; smart power grids; AMI; DoS-DDoS attacks; advanced metering infrastructure ;critical smart grid infrastructures; cyber attack impact; distributed denial-of-service attacks; distribution system reliability; electrical distribution networks; grid topology; qualitative assessment; routers; smart grid deployment; smart meters; utility servers; Computer crime; Reliability; Servers; Smart grids; Topology; AMI; Cyber Attack; DDoS; DoS; Reliability; Simulation; Smart Grid (ID#:14-2474) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816504&isnumber=6816367
  • Sarma, K.J.; Sharma, R.; Das, R., "A Survey Of Black Hole Attack Detection In Manet," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, vol., no., pp.202,205, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781279 A MANET is an infrastructureless, dynamic, decentralised network. Any node can join or leave the network at any point of time. Due to its simplicity and flexibility, it is widely used in military communication, emergency communication, academic settings, and mobile conferencing. Since a MANET has no infrastructure, each node acts as both a host and a router, and nodes are connected to each other in a peer-to-peer fashion. Decentralised means there are no dedicated clients and servers: every node acts as both a client and a server. Due to the dynamic nature of a mobile ad hoc network, it is more vulnerable to attack; since any node can join or leave the network without permission, the security issues are more challenging than in other types of network. One of the major security problems in ad hoc networks is the black hole problem. It occurs when a malicious node, referred to as a black hole, joins the network. The black hole conducts its malicious behavior during the process of route discovery. For any received RREQ, the black hole claims to have a route and propagates a faked RREP. The source node responds to these faked RREPs and sends its data through the received routes; once the data is received by the black hole, it is dropped instead of being forwarded to the desired destination. This paper discusses some of the techniques put forward by researchers to detect and prevent black hole attacks in MANETs using the AODV protocol, and based on their flaws a new methodology is also proposed. 
Keywords: client-server systems; mobile ad hoc networks; network servers; peer-to-peer computing; radio wave propagation; routing protocols; telecommunication security; AODV protocol; MANET; academic purpose; black hole attack detection; client; decentralised network; emergency communication; military communication; mobile ad-hoc network; mobile conferencing; peer-to-peer network; received RREQ; route discovery; security; server; Europe; Mobile communication; Routing protocols; Ad-HOC; Black hole attack; MANET; RREP; RREQ (ID#:14-2475) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781279&isnumber=6781240



Steganography

Steganography


Digital steganography is one of the primary areas of science of security research; detection and countermeasures are the topics pursued. The articles cited here were presented between January and August of 2014. They cover a range of topics, including Least Significant Bit (LSB) embedding, LDPC codes, combinations with DES encryption, and Hamming codes.

  • Akhtar, N.; Khan, S.; Johri, P., "An Improved Inverted LSB Image Steganography," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, vol., no., pp.749,755, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781374 In this paper, an improvement to plain LSB-based image steganography is proposed and implemented. The paper proposes the use of a bit inversion technique to improve stego-image quality. Two schemes of the bit inversion technique are proposed and implemented. In these techniques, the LSBs of some pixels of the cover image are inverted if they occur with a particular pattern of some bits of the pixels. In this way, fewer pixels are modified compared to the plain LSB method, so the PSNR of the stego-image is improved. For correct de-steganography, the bit patterns for which LSBs have been inverted need to be stored within the stego-image somewhere. The proposed bit inversion technique provides a good improvement to LSB steganography and could be combined with other methods to improve steganography further. Keywords: image processing; steganography; PSNR; bit inversion technique; bit patterns; cover image pixels; de-steganography; inverted LSB image steganography; least significant bit; plain LSB-based image steganography; steganography quality improvement; stego-image; Clocks; Cryptography; Laser transitions; PSNR; bit inversion; least significant bit; quality; steganography (ID#:14-2476) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781374&isnumber=6781240
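For readers unfamiliar with the baseline these papers improve on, plain LSB embedding replaces the least significant bit of each cover pixel with one message bit, changing each pixel value by at most 1. A minimal sketch over a flat list of 8-bit pixel values (illustrative of the plain method only, not the paper's inverted-LSB variant):

```python
def lsb_embed(pixels, message_bits):
    """Embed message bits into the least significant bits of the pixel values."""
    if len(message_bits) > len(pixels):
        raise ValueError("cover too small for message")
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit  # clear the LSB, then set it to the message bit
    return stego

def lsb_extract(pixels, n_bits):
    """Recover the first n_bits message bits from the pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [200, 13, 57, 42, 128, 99, 7, 255]
bits = [1, 0, 1, 1]
stego = lsb_embed(cover, bits)
assert lsb_extract(stego, 4) == bits
```

Because each pixel changes by at most one intensity level, the distortion is visually imperceptible; the bit-inversion improvement above further reduces how many pixels change at all.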
  • Islam, M.R.; Siddiqa, A; Uddin, M.P.; Mandal, AK.; Hossain, M.D., "An Efficient Filtering Based Approach Improving LSB Image Steganography Using Status Bit Along With AES Cryptography," Informatics, Electronics & Vision (ICIEV), 2014 International Conference on, vol., no., pp.1,6, 23-24 May 2014. doi: 10.1109/ICIEV.2014.6850714 In steganography, the entire message is made invisible inside a cover medium such as text, audio, video, or image, so that attackers have no idea of the original message that the medium contains or of the algorithm used to embed or extract it. In this paper, the proposed technique focuses on the Bitmap image format, as it is uncompressed and more convenient than other image formats for implementing the LSB steganography method. For better security, the AES cryptography technique is also used in the proposed method. Before applying the steganography technique, AES cryptography changes the secret message into cipher text to ensure two-layer security of the message. In the proposed technique, a new steganography technique is developed to hide large data in a Bitmap image using a filtering-based algorithm, which uses MSB bits for the filtering purpose. This method uses the concept of status checking for insertion and retrieval of the message, and is an improvement of the Least Significant Bit (LSB) method for hiding information in images. It is predicted that the proposed method will be able to hide large data in a single image, retaining the advantages and discarding the disadvantages of the traditional LSB method. Various sizes of data are stored inside the images, and the PSNR is calculated for each of the images tested. Based on the PSNR values, the stego image has a higher PSNR value compared to those of other methods. Hence the proposed steganography technique is very efficient for hiding secret information inside an image. 
Keywords: cryptography; filtering theory; image processing; image retrieval; steganography; AES cryptography technique; Bitmap image; LSB image steganography; PSNR value; bitmap image; cover media; efficient filtering image format least significant bit method; message retrieval; secret message; steganography technique; Ciphers; Encryption; Histograms; Image color analysis; PSNR; AES Cryptography; Conceal of Message; Filtering Algorithm; Image Steganography; LSB Image Steganography (ID#:14-2477) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850714&isnumber=6850678
  • Yang Ren-er; Zheng Zhiwei; Tao Shun; Ding Shilei, "Image Steganography Combined with DES Encryption Pre-processing," Measuring Technology and Mechatronics Automation (ICMTMA), 2014 Sixth International Conference on, vol., no., pp.323,326, 10-11 Jan. 2014. doi: 10.1109/ICMTMA.2014.80 To improve the security of steganography, this paper studies image steganography combined with a pre-processing step of DES encryption. When transmitting secret information, the information to be hidden is first encrypted with DES and then written into the image through LSB steganography. The encryption changes the statistical characteristics of the secret information, lowering its correlation with the image and enhancing the anti-detection capability of the image steganography. The experimental results showed that the anti-detection robustness of image steganography combined with DES encryption pre-processing is much better than applying LSB steganography algorithms directly. Keywords: cryptography; image matching; steganography; DES encryption preprocessing; LSB steganography; image matching performance; image steganography; least significant bit; secret information; statistical characteristics; Algorithm design and analysis; Encryption; Histograms; Media; Probability distribution; DES encryption; High security; Information hiding; Steganography (ID#:14-2478) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6802697&isnumber=6802614
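The encrypt-then-embed pipeline described in this entry can be sketched end to end. To keep the example self-contained, a SHA-256-derived XOR keystream stands in for DES (the paper uses actual DES; the stand-in only illustrates that ciphertext bits look statistically random before embedding). Since `keystream_xor` is symmetric, the same call decrypts:

```python
import hashlib

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """Toy stream cipher standing in for DES: XOR with a SHA-256-derived keystream."""
    stream, counter = bytearray(), 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

def bytes_to_bits(data: bytes):
    return [(byte >> (7 - i)) & 1 for byte in data for i in range(8)]

def bits_to_bytes(bits):
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[j:j + 8]))
        for j in range(0, len(bits), 8)
    )

def embed(pixels, secret: bytes, key: bytes):
    """Encrypt first, then hide the ciphertext bits in the pixel LSBs."""
    bits = bytes_to_bits(keystream_xor(secret, key))
    if len(bits) > len(pixels):
        raise ValueError("cover too small")
    return [(p & ~1) | b for p, b in zip(pixels, bits)] + list(pixels[len(bits):])

def extract(pixels, n_bytes: int, key: bytes) -> bytes:
    """Read the LSBs back and decrypt them with the shared key."""
    cipher = bits_to_bytes([p & 1 for p in pixels[:8 * n_bytes]])
    return keystream_xor(cipher, key)
```

The point of the pre-processing step is visible here: even if an attacker extracts the LSB plane, they recover only the ciphertext, whose statistics carry no trace of the plaintext.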
  • Singla, D.; Juneja, M., "An Analysis Of Edge Based Image Steganography Techniques In Spatial Domain," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in, vol., no., pp.1,5, 6-8 March 2014. doi: 10.1109/RAECS.2014.6799604 Steganography is a branch of information security that aims at hiding the existence of the actual communication. This aim is achieved by hiding the actual information inside other information in such a way that an intruder cannot detect it. A variety of carrier file formats can be used to carry out steganography, e.g. images, text, videos, audio, radio waves, etc., but mainly images are used for this purpose because of their prevalence on the internet. A number of image steganography techniques have been introduced, each with drawbacks and advantages. These techniques are evaluated on the basis of three parameters: imperceptibility, robustness, and capacity. In this paper we review various edge-based image steganography techniques. The main idea behind these techniques is that edges can bear more variation than smooth areas without being detected. Keywords: image coding; steganography; Internet; carrier file formats; edge based image steganography techniques; information security; smooth areas; spatial domain; Algorithm design and analysis; Cryptography; Detectors; Image edge detection; PSNR; Robustness; LSB substitution; Pixel Value Differencing steganography; edge based image steganography; peak signal to noise ratio; random edge pixel embedding (ID#:14-2478) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799604&isnumber=6799496
  • Kaur, S.; Bansal, S.; Bansal, R.K., "Steganography and Classification Of Image Steganography Techniques," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, vol., no., pp.870,875, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828087 Information is the wealth of any organization, and in the present era, in which information is transferred through digital media and the internet, protecting this wealth has become a top priority for any organization. Whatever technique we adopt for security purposes, the degree and level of security always remain the top concern. Steganography is one such technique, in which the presence of a secret message cannot be detected; we can use it as a tool to transmit confidential information in a secure way. It is an ongoing research area with a vast number of applications in distinct fields such as defense and intelligence, medicine, online banking, online transactions, stopping music piracy, and other financial and commercial purposes. Various steganography approaches exist, differing in the message to be embedded, the file type used as carrier, or the compression method used. The focus of this paper is to classify distinct image steganography techniques, besides giving an overview of the importance and challenges of steganography techniques. Other related security techniques are also discussed briefly in this paper. The classification of steganography techniques may provide not only understanding of and guidelines for research in this field but also directions for future work. 
Keywords: Internet; image classification; image coding; steganography ;Internet; confidential information transmission; digital media; image steganography technique classification; music piracy; online banking; online transaction; secret message; security purpose; Algorithm design and analysis; Discrete cosine transforms; Frequency-domain analysis; Image coding; Robustness; Security; Confidential; Cover Object ;Data Security; Steganalysis etc; Steganography; Stego Object; information (ID#:14-2479) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828087&isnumber=6827395
  • Bugar, G.; Banoci, V.; Broda, M.; Levicky, D.; Dupak, D., "Data Hiding In Still Images Based On Blind Algorithm Of Steganography," Radioelektronika (RADIOELEKTRONIKA), 2014 24th International Conference, vol., no., pp.1,4, 15-16 April 2014. doi: 10.1109/Radioelek.2014.6828423 Steganography is the science of hiding secret information inside other, unsuspicious data. Generally, a steganographic secret message can be any widely used multimedia object: a picture, an audio file, a video file, or a message in clear text (the covertext). The most recent steganography techniques tend to hide a secret message in digital images. We propose and analyze experimentally a blind steganography method based on specific attributes of the two-dimensional discrete wavelet transform with the Haar mother wavelet. Blind steganography methods do not require the original image in the extraction process, which helps keep the secret communication undetected by third-party users or steganalysis tools. The secret message is encoded with a Huffman code in order to achieve a better imperceptibility result. Moreover, this modification also increases the security of the hidden communication. Keywords: Huffman codes; discrete wavelet transforms; image coding; steganography; Haar mother wavelet; Huffman code; blind algorithm; blind steganography method; covertext; data hiding; digital images; hidden communication security; multimedia; secret communication; secret information hiding; steganalysis tool; steganographic secret message; steganography techniques; still images; third party user; two dimensional discrete wavelet transform; unsuspicious data; Decoding; Discrete wavelet transforms; Huffman coding; Image coding; Pixel; DWT; message hiding; steganography (ID#:14-2480) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828423&isnumber=6828396
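The one-dimensional building block of the Haar DWT used in such schemes is simple to state: pairwise averages form the approximation band, pairwise half-differences form the detail band, and the transform is exactly invertible. A sketch of the 1-D step only (the paper works with the 2-D transform, applied along rows and then columns, plus Huffman-coded payloads):

```python
def haar_1d(signal):
    """One level of the 1-D Haar transform: averages (approximation) and half-differences (detail)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def haar_1d_inverse(approx, detail):
    """Exactly reconstruct the original signal from the two bands."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]  # sum and difference recover each sample pair
    return out

a, d = haar_1d([9, 7, 3, 5])   # a == [8.0, 4.0], d == [1.0, -1.0]
assert haar_1d_inverse(a, d) == [9.0, 7.0, 3.0, 5.0]
```

Embedding typically perturbs the detail coefficients, because small changes there spread imperceptibly over the reconstructed pixels, and the blind extractor needs only the stego image's own transform, not the original cover.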
  • Devi, M.; Sharma, N., "Improved Detection Of Least Significant Bit Steganography Algorithms In Color And Gray Scale Images," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in , vol., no., pp.1,5, 6-8 March 2014. doi: 10.1109/RAECS.2014.6799507 This paper proposes an improved LSB (least Significant bit) based Steganography technique for images imparting better information security for hiding secret information in images. There is a large variety of steganography techniques some are more complex than others and all of them have respective strong and weak points. It ensures that the eavesdroppers will not have any suspicion that message bits are hidden in the image and standard steganography detection methods can not estimate the length of the secret message correctly. In this paper we present improved steganalysis methods, based on the most reliable detectors of thinly-spread LSB steganography presently known, focusing on the case when grayscale Bitmaps are used as cover images. Keywords: image coding; image colour analysis; security of data; steganography; color scale images; gray scale images; grayscale bitmaps; information security; least significant bit steganography algorithm detection; secret information hiding; steganalysis methods; steganography detection methods; Conferences; Gray-scale; Image coding; Image color analysis; Image edge detection; PSNR; Security; Gray Images; LSB; RGB; Steganalysis; Steganography (ID#:14-2481) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799507&isnumber=6799496
  • Mstafa, R.J.; Elleithy, K.M., "A Highly Secure Video Steganography Using Hamming Code (7, 4)," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, pp.1,6, 2-2 May 2014. doi: 10.1109/LISAT.2014.6845191 Due to the high speed of internet and advances in technology, people are becoming more worried about information being hacked by attackers. Recently, many algorithms of steganography and data hiding have been proposed. Steganography is a process of embedding the secret information inside the host medium (text, audio, image and video). Concurrently, many of the powerful steganographic analysis software programs have been provided to unauthorized users to retrieve the valuable secret information that was embedded in the carrier files. Some steganography algorithms can be easily detected by steganalytical detectors because of the lack of security and embedding efficiency. In this paper, we propose a secure video steganography algorithm based on the principle of linear block code. Nine uncompressed video sequences are used as cover data and a binary image logo as a secret message. The pixels' positions of both cover videos and a secret message are randomly reordered by using a private key to improve the system's security. Then the secret message is encoded by applying Hamming code (7, 4) before the embedding process to make the message even more secure. The result of the encoded message will be added to random generated values by using XOR function. After these steps that make the message secure enough, it will be ready to be embedded into the cover video frames. In addition, the embedding area in each frame is randomly selected and it will be different from other frames to improve the steganography scheme's robustness. Furthermore, the algorithm has high embedding efficiency as demonstrated by the experimental results that we have obtained. 
Regarding the system's quality, the Peak Signal-to-Noise Ratio (PSNR) of the stego videos is above 51 dB, which is close to the original video quality. The embedding payload is also acceptable: in each video frame we can embed 16 Kbits, and this can go up to 90 Kbits without noticeably degrading the stego video's quality. Keywords: block codes; image sequences; private key cryptography; steganography; video coding; Hamming code (7, 4); XOR function; binary image logo; cover data; data hiding; highly secure video steganography algorithm; linear block code; private key; secret information; steganalytical detectors; steganographic analysis software programs; uncompressed video sequences; Algorithm design and analysis; Block codes; Image color analysis; PSNR; Security; Streaming media; Vectors; Embedding Efficiency; Embedding Payload; Hamming Code; Linear Block Code; Security; Video Steganography (ID#:14-2482) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845191&isnumber=6845183
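Hamming(7,4), as used in the entry above to protect the secret message, encodes 4 data bits into 7 bits with 3 parity bits and corrects any single-bit error. A compact sketch (this is the standard code, with parity bits at the conventional positions 1, 2, and 4; the paper's surrounding key-based permutation and XOR masking are omitted):

```python
def hamming74_encode(d):
    """Encode 4 data bits as a 7-bit codeword [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    err = s1 + 2 * s2 + 4 * s3   # syndrome gives the 1-based error position, 0 if clean
    c = list(c)
    if err:
        c[err - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

code = hamming74_encode([1, 0, 1, 1])
code[2] ^= 1  # simulate a single-bit channel error
assert hamming74_decode(code) == [1, 0, 1, 1]
```

In the steganographic setting the "channel error" is any single disturbed LSB per block, so the hidden message survives minor distortion of the stego video at the cost of a 7/4 expansion.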
  • Diop, I; Farss, S.M.; Tall, K.; Fall, P.A; Diouf, M.L.; Diop, AK., "Adaptive Steganography Scheme Based On LDPC Codes," Advanced Communication Technology (ICACT), 2014 16th International Conference on, vol., no., pp.162,166, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778941 Steganography is the art of secret communication. Since the advent of modern steganography in the 2000s, many approaches based on error-correcting codes (Hamming, BCH, RS, STC, ...) have been proposed to reduce the number of changes to the cover medium while inserting the maximum number of bits. The work of L. Diop et al. [1], inspired by that of T. Filler [2], has shown that LDPC codes are good candidates for minimizing the impact of insertion. This work continues the use of LDPC codes in steganography. We propose in this paper a steganography scheme based on these codes, inspired by the adaptive approach to the calculation of the detectability map. We evaluated the performance of our method by applying a steganalysis algorithm. Keywords: parity check codes; steganography; LDPC codes; adaptive steganography scheme; error correcting codes; map detectability; secret communication; steganalysis; Complexity theory; Distortion measurement; Educational institutions; Histograms; PSNR; Parity check codes; Vectors; Adaptive steganography; complexity; detectability; steganalysis (ID#:14-2483) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778941&isnumber=6778899
  • Bansal, D.; Chhikara, R., "Performance Evaluation of Steganography Tools Using SVM and NPR Tool," Advanced Computing & Communication Technologies (ACCT), 2014 Fourth International Conference on , vol., no., pp.483,487, 8-9 Feb. 2014. doi: 10.1109/ACCT.2014.17 Steganography is the art of hiding the secret messages in an innocent medium like images, audio, video, text, etc. such that the existence of any secret message is not revealed. There are various Steganography tools available. In this paper, we are considering three algorithms - nsF5, PQ,Outguess. To compare the robustness and to withstand the steganalytic attack of the above three algorithms, an algorithm based on sensitive features is presented. SVM and Neural Network Pattern Recognition Tool is used on sensitive features extracted from DCT domain. A comparison between the accuracy obtained from SVM and NPR is also shown. Experimental results show that the Outguess method can withstand steganalytic attack by a margin of 35% accuracy as compared to nsF5 and PQ, hence Outguess is more reliable for Steganography. Keywords: data compression; discrete cosine transforms; feature extraction; image coding; neural nets; performance evaluation; steganography; support vector machines; DCT domain; JPEG feature set; NPR tool; Outguess algorithm; PQ algorithm; SVM tool; discrete cosine transform; neural network pattern recognition tool;nsF5 algorithm performance evaluation; secret message hiding; sensitive feature extraction; steganalytic attack; steganography tools; support vector machine; Accuracy; Discrete cosine transforms; Feature extraction; Histograms; Pattern recognition; Support vector machines; Training; Discrete Cosine Transform; Neural Network Pattern Recognition; Outguess; PQ; SVM; Steganography; nsF5 (ID#:14-2484) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6783501&isnumber=6783406
  • Bin Li; Shunquan Tan; Ming Wang; Jiwu Huang, "Investigation on Cost Assignment in Spatial Image Steganography," Information Forensics and Security, IEEE Transactions on , vol.9, no.8, pp.1264,1277, Aug. 2014. doi: 10.1109/TIFS.2014.2326954 Relating the embedding cost in a distortion function to statistical detectability is an open vital problem in modern steganography. In this paper, we take one step forward by formulating the process of cost assignment into two phases: 1) determining a priority profile and 2) specifying a cost-value distribution. We analytically show that the cost-value distribution determines the change rate of cover elements. Furthermore, when the cost-values are specified to follow a uniform distribution, the change rate has a linear relation with the payload, which is a rare property for content-adaptive steganography. In addition, we propose some rules for ranking the priority profile for spatial images. Following such rules, we propose a five-step cost assignment scheme. Previous steganographic schemes, such as HUGO, WOW, S-UNIWARD, and MG, can be integrated into our scheme. Experimental results demonstrate that the proposed scheme is capable of better resisting steganalysis equipped with high-dimensional rich model features. Keywords: image coding; steganography; content-adaptive steganography; cost assignment investigation; cost-value distribution; distortion function; five-step cost assignment scheme; high-dimensional rich model features; spatial image steganography; spatial images; statistical detectability; Additives ;Educational institutions; Encoding; Feature extraction; Payloads; Security; Vectors; Cost-value distribution; distortion function; priority profile; steganalysis; steganography (ID#:14-2485) URL:http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822611&isnumber=6846399
  • Banerjee, I; Bhattacharyya, S.; Sanyal, G., "Robust Image Steganography With Pixel Factor Mapping (PFM) Technique," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, vol., no., pp.692,698, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828050 Our daily routines depend heavily on Internet technologies, which has both advantages and disadvantages. This dependence requires information hiding expertise for maintaining the secrecy of information, and steganography is one of the fashionable information hiding techniques. Extensive effort has been devoted to this area by different researchers. In this contribution, a frequency-domain image steganography method using DCT coefficients is proposed, designed on a pixel factor mapping technique. Keywords: Internet; discrete cosine transforms; image processing; steganography; DCT coefficient; Internet technology; frequency domain image steganography method; information hiding technique; pixel factor mapping technique; robust image steganography; Discrete cosine transforms; Entropy; Frequency-domain analysis; PSNR; Security; Cover Image; DCT; Pixel Factor Mapping (PFM) method; Steganography; Stego Image (ID#:14-2486) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828050&isnumber=6827395
  • Linjie Guo; Jiangqun Ni; Yun Qing Shi, "Uniform Embedding for Efficient JPEG Steganography," Information Forensics and Security, IEEE Transactions on, vol.9, no.5, pp.814,825, May 2014. doi: 10.1109/TIFS.2014.2312817 Steganography is the science and art of covert communication, which aims to hide secret messages in a cover medium while achieving the least possible statistical detectability. To this end, the framework of minimal-distortion embedding is widely adopted in the development of steganographic systems, in which a well-designed distortion function is of vital importance. In this paper, a class of new distortion functions known as uniform embedding distortion functions (UED) is presented for both side-informed and non-side-informed secure JPEG steganography. By incorporating syndrome trellis coding, the best codeword with minimal distortion for a given message is determined with UED, which, instead of random modification, tries to spread the embedding modification uniformly across quantized discrete cosine transform (DCT) coefficients of all possible magnitudes. In this way, less statistical detectability is achieved, owing to the reduction of the average changes of the first- and second-order statistics of DCT coefficients as a whole. The effectiveness of the proposed scheme is verified with evidence obtained from exhaustive experiments using popular steganalyzers with various feature sets on the BOSSbase database. Compared with prior art, the proposed scheme gains favorable performance in terms of secure embedding capacity against steganalysis. Keywords: discrete cosine transforms; distortion; higher order statistics; image coding; steganography; trellis codes; BOSSbase database; DCT; UED; distortion functions; first-order statistics; minimal distortion embedding framework; nonside-informed secure JPEG steganography; quantized discrete cosine transform coefficients; second-order statistics; secure embedding capacity; side-informed secure JPEG steganography; statistical detectability; syndrome trellis coding; uniform embedding distortion function; Additives; Discrete cosine transforms; Encoding; Histograms; Payloads; Security; Transform coding; JPEG steganography; distortion function design; minimal-distortion embedding; uniform embedding (ID#:14-2487) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6776485&isnumber=6776454
  • Gupta, N.; Sharma, N., "DWT and LSB Based Audio Steganography," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, pp.428,431, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798368 Steganography is a fascinating and effective method of hiding data that has been used throughout history. Methods exist that can uncover such devious tactics, but the first step is awareness that such methods exist at all. There are also many good reasons to use this type of data hiding, including watermarking and more secure central storage of sensitive items such as passwords or key processes. Regardless, the technology is easy to use and difficult to detect. Researchers have devoted considerable effort to finding effective methods for image hiding. The proposed system aims to provide improved robustness and security through a new audio steganography method that combines DWT (Discrete Wavelet Transform) and LSB (Least Significant Bit) techniques. The emphasis is on the proposed scheme for hiding an image in audio and its comparison with the simple LSB insertion method for data hiding in audio. Keywords: audio watermarking; data encapsulation; discrete wavelet transforms; steganography; DWT based audio steganography; LSB based audio steganography; audio watermarking; data hiding; discrete wavelet transform; image hiding; least significant bit insertion method; secure central storage method; Cryptography; Discrete wavelet transforms; Generators; Audio steganography; DWT; LSB; PSNR (ID#:14-2488) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798368&isnumber=6798279
  • Balakrishna, C.; Naveen Chandra, V.; Pal, R., "Image Steganography Using Single Digit Sum With Varying Base," Electronics, Computing and Communication Technologies (IEEE CONECCT), 2014 IEEE International Conference on , vol., no., pp.1,5, 6-7 Jan. 2014. doi: 10.1109/CONECCT.2014.6740336 Hiding an important message within an image is known as image steganography. Imperceptibility of the message is a major concern of an image steganography scheme. A novel single digit sum (SDS) based image steganography scheme has been proposed in this paper. At first, the computation of SDS has been generalized to support a number system with any given base. Then, an image steganography scheme has been developed, where the base for computing SDS is varied from one pixel to another. Therefore, the number of embedding bits in a pixel is varied across pixels. The purpose of this technique is to control the amount of change in a pixel. A lossy compressed version of the cover image is used to determine the upper limit of change in each pixel value. The base for computing SDS is determined by using this upper limit for a pixel. Thus, it is ensured that the stego image does not degrade beyond the degradation in the lossily compressed image. Keywords: data compression; image coding; steganography; SDS; lossy compressed version; message hiding; novel single digit sum based image steganography scheme; pixel value; Payloads (ID#:14-2489) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6740336&isnumber=6740167
  • Odeh, A.; Elleithy, K.; Faezipour, M., "Fast Real-Time Hardware Engine for Multipoint Text Steganography," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, vol., no., pp.1,5, 2 May 2014. doi: 10.1109/LISAT.2014.6845184 Different strategies have been introduced in the literature to protect data. Some techniques change the data's form, while others hide the data inside another file. Steganography techniques conceal information inside digital media such as image, audio, and text files. Most of the techniques introduced to date use a software implementation to embed secret data inside the carrier file, and most software implementations are not fast enough for real-time applications. In this paper, we present a new real-time steganography technique that hides data inside a text file using a hardware engine achieving a hidden data rate of 11.27 Gbps. Keywords: data protection; steganography; text analysis; carrier file; data hiding; data protection; digital media; multipoint text steganography; real-time hardware engine; real-time steganography technique; secret data; text file; Algorithm design and analysis; Engines; Field programmable gate arrays; Hardware; Real-time systems; Signal processing algorithms; Streaming media (ID#:14-2490) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845184&isnumber=6845183
  • Karakis, R.; Guler, I., "An Application of Fuzzy Logic-Based Image Steganography," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, vol., no., pp.156,159, 23-25 April 2014. doi: 10.1109/SIU.2014.6830189 Today, securing data in digital environments (such as text, image, and video files) has become critical as technology develops. Steganography and cryptology are both important for protecting data: cryptology protects the message contents, while steganography hides the message's presence. In this study, an application of fuzzy logic (FL)-based image steganography was performed. First, the hidden messages were encrypted with an XOR (eXclusive OR) algorithm. Second, an FL algorithm was used to select the least significant bits (LSBs) of the image pixels. Then, the LSBs of the selected image pixels were replaced with the bits of the hidden messages. The FL-based LSB algorithm makes the LSB method more robust and secure against steganalysis. Keywords: cryptography; fuzzy logic; image coding; steganography; FL-based LSB algorithm; XOR algorithm; cryptology; data security; eXclusive OR algorithm; fuzzy logic; image steganography; least significant bits; Conferences; Cryptography; Fuzzy logic; Internet; PSNR; Signal processing algorithms (ID#:14-2491) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830189&isnumber=6830164
  • Sarreshtedari, S.; Akhaee, M.A, "One-third Probability Embedding: A New +-1 Histogram Compensating Image Least Significant Bit Steganography Scheme," Image Processing, IET vol.8, no.2, pp.78,89, February 2014. doi: 10.1049/iet-ipr.2013.0109 A new method is introduced for the least significant bit (LSB) image steganography in spatial domain providing the capacity of one bit per pixel. Compared to the recently proposed image steganography techniques, the new method called one-third LSB embedding reduces the probability of change per pixel to one-third without sacrificing the embedding capacity. This improvement results in a better imperceptibility and also higher robustness against well-known LSB detectors. Bits of the message are carried using a function of three adjacent cover pixels. It is shown that no significant improvement is achieved by increasing the length of the pixel sequence employed. A closed-form expression for the probability of change per pixel in terms of the number of pixels used in the pixel groups has been derived. Another advantage of the proposed algorithm is to compensate, as much as possible, for any changes in the image histogram. It has been demonstrated that one-third probability embedding outperforms histogram compensating version of the LSB matching in terms of keeping the image histogram unchanged. Keywords: image coding; image enhancement; probability; steganography; LSB image steganography; closed-form expression; histogram compensating image least significant bit steganography scheme; image histogram; one-third LSB embedding; one-third probability embedding; pixel sequence; spatial domain (ID#:14-2492) URL:http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6733839&isnumber=6733837
  • Pathak, P.; Chattopadhyay, A.K.; Nag, A., "A New Audio Steganography Scheme Based on Location Selection with Enhanced Security," Automation, Control, Energy and Systems (ACES), 2014 First International Conference on, vol., no., pp.1,4, 1-2 Feb. 2014. doi: 10.1109/ACES.2014.6807979 Steganography is the art and science of secret communication. In this paper a new scheme for digital audio steganography is presented in which the bits of a secret message are embedded into the coefficients of a cover audio signal. Each secret bit is embedded at a selected position within a cover coefficient; the position, from the 0th (least significant bit) to the 7th LSB, is selected based on the upper three MSBs (most significant bits). This scheme provides high audio quality, robustness, and lossless recovery from the cover audio. Keywords: security of data; steganography; telecommunication security; LSB; MSB; communication security; cover audio coefficient; digital audio quality; digital audio steganography scheme; information security; least significant bit; location selection; message security; most significant bit; Decoding; Encoding; Encryption; Information security; Signal processing algorithms; LSB; Steganography; digital audio; secret communication (ID#:14-2493) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6807979&isnumber=6807973
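Several of the entries above build on least-significant-bit (LSB) embedding. As a generic illustration only (not a reproduction of any cited author's scheme; the function names and the fixed sequential one-bit-per-byte layout are simplifying assumptions), a minimal LSB embedder over a byte stream can be sketched as:

```python
def embed_lsb(cover: bytearray, message: bytes) -> bytearray:
    """Embed message bits into the LSB of each cover byte (1 bit per byte)."""
    # Flatten the message into bits, most significant bit first.
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | bit  # clear the LSB, then set the message bit
    return stego


def extract_lsb(stego: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of message from the LSBs of the stego bytes."""
    out = bytearray()
    for j in range(n_bytes):
        byte = 0
        for i in range(8):
            byte = (byte << 1) | (stego[8 * j + i] & 1)
        out.append(byte)
    return bytes(out)
```

The papers above differ precisely in how they improve on this naive layout: varying the embedding position per coefficient, spreading changes across DCT coefficients, or reducing the per-pixel change probability, all to lower statistical detectability.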



SoS Lablet Publications


This page highlights publications by SoS University Lablets. Universities featured are part of the ongoing effort to quantify and establish security properties and behaviours. Please feel free to click on a university to see current publications and highlights.

(ID#:14-2622)




CMU – Carnegie Mellon University

CMU Publications


These publications were produced as part of the Lablet activities at this school and were listed in the Quarterly Reports to the government. Please direct any questions or concerns regarding these publications to research (at) securedatabank.net.


CMU - Carnegie Mellon University
Topic: Recertification
Title: Analyzing Interactions and Isolation among Configuration Options
Author(s): Kaestner, Christian & Pfeffer, Juergen
Hard Problem: Scalability and Composability, Metrics
Abstract: PDF not found; the following abstract is listed on the CPS-VO SoS site: In highly configurable systems the configuration space is too big for (re-)certifying every configuration in isolation. In this project, we combine software analysis with network analysis to detect which configuration options interact and which have local effects. Instead of analyzing a system such as Linux and SELinux for every combination of configuration settings one by one (>10^2000 even considering compile-time configurations only), we analyze the effect of each configuration option once for the entire configuration space. The analysis will guide us to designs separating interacting configuration options in a core system and isolating orthogonal and less trusted configuration options from this core. (ID#:14-2495)
URL: http://cps-vo.org/node/11855
Publication Location: HotSoS 2014

CMU - Carnegie Mellon University
Topic: Recertification
Title: Limiting Recertification in Highly Configurable Systems: Analyzing Interactions and Isolation among Configuration Options
Author(s): Kaestner, Christian & Pfeffer, Juergen
Hard Problem: Scalability and Composability, Metrics
Abstract: Christian Kastner and Jurgen Pfeffer. 2014. Limiting recertification in highly configurable systems: analyzing interactions and isolation among configuration options. In Proceedings of the 2014 Symposium and Bootcamp on the Science of Security (HotSoS '14). ACM, New York, NY, USA, Article 23, 2 pages. DOI=10.1145/2600176.2600199 http://doi.acm.org/10.1145/2600176.2600199
In highly configurable systems the configuration space is too big for (re-)certifying every configuration in isolation. In this project, we combine software analysis with network analysis to detect which configuration options interact and which have local effects. Instead of analyzing a system such as Linux and SELinux for every combination of configuration settings one by one (>10^2000 even considering compile-time configurations only), we analyze the effect of each configuration option once for the entire configuration space. The analysis will guide us to designs separating interacting configuration options in a core system and isolating orthogonal and less trusted configuration options from this core. (ID#:14-2496)
URL: http://dl.acm.org/citation.cfm?id=2600199&dl=ACM&coll=DL&CFID=552978724&CFTOKEN=96539078
Publication Location: HotSoS 2014
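The scale argument in the abstract above can be checked with quick arithmetic: with n independent boolean options the configuration space grows as 2^n, while analyzing each option once grows only linearly in n. The option count below is an illustrative assumption, not a figure from the paper:

```python
# With n independent boolean options, exhaustive recertification must cover
# 2**n configurations, while per-option analysis needs only n analyses.
n_options = 10_000           # illustrative assumption for a Linux-scale system
exhaustive = 2 ** n_options  # configurations to (re-)certify one by one
per_option = n_options       # analyses when each option is analyzed once

assert exhaustive > 10 ** 2000  # consistent with the ">10^2000" figure quoted above
```

Even modest option counts make exhaustive recertification infeasible, which is the motivation for analyzing option interactions instead.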


CMU - Carnegie Mellon University
Topic: Geo-Temporal Characterization of Security Threats
Title: Longitudinal Analysis of a Large Corpus of Cyber Threat Descriptions
Author(s): Ghita Mezzour, L. Richard Carley and Kathleen M. Carley
Hard Problem: Policy-Governed Secure Collaboration
Abstract: Available from Springer via link listed below. (ID#:14-2497)
URL: http://link.springer.com/article/10.1007%2Fs11416-014-0217-8
Publication Location: Journal of Computer Virology and Hacking Techniques



CMU - Carnegie Mellon University
Topic: Usable Formal Methods for the Design and Composition of Security and Privacy Policies (CMU/UTSA Collaborative Proposal)
Title: Less is More? Investigating the Role of Examples in Security Studies using Analogical Transfer
Author(s): Ashwini Rao, Hanan Hibshi, Travis Breaux, Jean-Michel Lehker, Jianwei Niu
Hard Problem: Metrics, Human Behavior
Collaborating Institution(s): UTSA
Abstract: Ashwini Rao, Hanan Hibshi, Travis Breaux, Jean-Michel Lehker, and Jianwei Niu. 2014. Less is more?: investigating the role of examples in security studies using analogical transfer. In Proceedings of the 2014 Symposium and Bootcamp on the Science of Security (HotSoS '14). ACM, New York, NY, USA, Article 7, 12 pages. DOI=10.1145/2600176.2600182 http://doi.acm.org/10.1145/2600176.2600182
Information system developers and administrators often overlook critical security requirements and best practices. This may be due to lack of tools and techniques that allow practitioners to tailor security knowledge to their particular context. In order to explore the impact of new security methods, we must improve our ability to study the impact of security tools and methods on software and system development. In this paper, we present early findings of an experiment to assess the extent to which the number and type of examples used in security training stimuli can impact security problem solving. To motivate this research, we formulate hypotheses from analogical transfer theory in psychology. The independent variables include number of problem surfaces and schemas, and the dependent variable is the answer accuracy. Our study results do not show a statistically significant difference in performance when the number and types of examples are varied. We discuss the limitations, threats to validity and opportunities for future studies in this area. (ID#:14-2498)
URL: http://dl.acm.org/citation.cfm?id=2600182
Publication Location: HotSoS 2014



CMU - Carnegie Mellon University
Topic: Usable Formal Methods for the Design and Composition of Security and Privacy Policies (CMU/UTSA Collaborative Proposal)
Title: Managing Security Requirement Patterns Using Feature Diagram Hierarchies
Author(s): R. Slavin, J.-M. Lehker, J. Niu, T. Breaux
Hard Problem: Metrics, Human Behavior
Collaborating Institution(s): UTSA
Abstract: Hosted on UTSA site: Security requirements patterns represent reusable security practices that software engineers can apply to improve security in their system. Reusing best practices that others have employed could have a number of benefits, such as decreasing the time spent in the requirements elicitation process or improving the quality of the product by reducing product failure risk. Pattern selection can be difficult due to the diversity of applicable patterns from which an analyst has to choose. The challenge is that identifying the most appropriate pattern for a situation can be cumbersome and time-consuming. We propose a new method that combines an inquiry-cycle based approach with the feature diagram notation to review only relevant patterns and quickly select the most appropriate patterns for the situation. Similar to patterns themselves, our approach captures expert knowledge to relate patterns based on decisions made by the pattern user. The resulting pattern hierarchies allow users to be guided through these decisions by questions, which elicit and refine requirements as well as introduce related patterns. Furthermore, our approach is unique in the way that pattern forces are interpreted as quality attributes to balance trade-offs in the resulting requirements. We evaluate our approach using access control patterns in a pattern user study. (ID#:14-2500)
URL: http://venom.cs.utsa.edu/dmz/techrep/2014/CS-TR-2014-002.pdf
Publication Location: IEEE International Requirements Engineering Conference, 2014

CMU - Carnegie Mellon University
Topic: Usable Formal Methods for the Design and Composition of Security and Privacy Policies (CMU/UTSA Collaborative Proposal)
Title: Discovering Security Requirements from Natural Language
Author(s): Slankas, J., Riaz, M., King, J., Williams, L.
Hard Problem: Metrics, Human Behavior
Collaborating Institution(s): UTSA
Abstract: Hosted on NCSU.edu: Natural language artifacts, such as requirements specifications, often explicitly state the security requirements for software systems. However, these artifacts may also imply additional security requirements that developers may overlook but should consider to strengthen the overall security of the system. The goal of this research is to aid requirements engineers in producing a more comprehensive and classified set of security requirements by (1) automatically identifying security-relevant sentences in natural language requirements artifacts, and (2) providing context-specific security requirements templates to help translate the security-relevant sentences into functional security requirements. Using machine learning techniques, we have developed a tool-assisted process that takes as input a set of natural language artifacts. Our process automatically identifies security-relevant sentences in the artifacts and classifies them according to the security objectives, either explicitly stated or implied by the sentences. We classified 10,963 sentences in six different documents from the healthcare domain and extracted corresponding security objectives. Our manual analysis showed that 46% of the sentences were security-relevant. Of these, 28% explicitly mention security while 72% of the sentences are functional requirements with security implications. Using our tool, we correctly predict and classify 82% of the security objectives for all the sentences (precision). We identify 79% of all security objectives implied by the sentences within the documents (recall). Based on our analysis, we develop context-specific templates that can be instantiated into a set of functional security requirements by filling in key information from security-relevant sentences. (ID#:14-2501)
URL: http://www4.ncsu.edu/~mriaz/docs/re14main-hidden-in-plain-sight-preprint.pdf
Publication Location: IEEE International Requirements Engineering Conference, 2014


CMU - Carnegie Mellon University
Topic: Usable Formal Methods for the Design and Composition of Security and Privacy Policies (CMU/UTSA Collaborative Proposal)
Title: Rethinking Security Requirements in RE Research
Author(s): H. Hibshi, R. Slavin, J. Niu, T. Breaux
Hard Problem: Metrics, Human Behavior
Collaborating Institution(s): UTSA
Abstract: From UTSA.edu: As information security became an increasing concern for software developers and users, requirements engineering (RE) researchers brought new insight to security requirements. Security requirements aim to address security at the early stages of system design while accommodating the complex needs of different stakeholders. Meanwhile, other research communities, such as usable privacy and security, have also examined these requirements with the specialized goal of making security more usable for stakeholders, from product owners to system users and administrators. In this paper we report results from conducting a literature survey to compare security requirements research from RE conferences with the Symposium on Usable Privacy and Security (SOUPS). We report similarities between the two research areas, such as common goals, technical definitions, research problems, and directions. Further, we clarify the differences between these two communities to understand how they can leverage each other's insights. From our analysis, we recommend new directions in security requirements research, mainly to expand the meaning of security requirements in RE to reflect the technological advancements that the broader field of security is experiencing. These recommendations to encourage cross-collaboration with other communities are not limited to the security requirements area; in fact, we believe they can be generalized to other areas of RE. (ID#:14-2502)
URL: http://venom.cs.utsa.edu/dmz/techrep/2014/CS-TR-2014-001.pdf
Publication Location: University of Texas at San Antonio, Technical Report #CS-TR-2014-001, January, 2014


CMU - Carnegie Mellon University
Topic: Usable Formal Methods for the Design and Composition of Security and Privacy Policies (CMU/UTSA Collaborative Proposal)
Title: On the Design of Empirical Studies to Evaluate Software Patterns: A Survey
Author(s): Riaz, M., Breaux, T., Williams, L.
Hard Problem: Metrics, Human Behavior
Collaborating Institution(s): UTSA
Abstract: From NCSU.edu: Software patterns are created with the goal of capturing expert knowledge so it can be efficiently and effectively shared with the software development community. However, patterns in practice may or may not achieve these goals. Empirical studies of the use of software patterns can help in providing deeper insight into whether these goals have been met. The objective of this paper is to aid researchers in designing empirical studies of software patterns by summarizing the study designs of software patterns available in the literature. The important components of these study designs include the evaluation criteria and how the patterns are presented to the study participants. We select and analyze 19 distinct empirical studies and identify 17 independent variables in three different categories (participant demographics; pattern presentation; problem presentation). We also extract 10 evaluation criteria with 23 associated observable measures. Additionally, by synthesizing the reported observations, we identify challenges faced during study execution. Provision of multiple domain-specific examples of pattern application and tool support to assist in pattern selection are helpful for the study participants in understanding and completing the study task. Capturing data regarding the cognitive processes of participants can provide insights into the findings of the study. (ID#:14-2503)
URL: http://repository.lib.ncsu.edu/dr/bitstream/1840.4/8269/1/tr-2012-9.pdf
Publication Location: ACM SIGSOFT'12


CMU - Carnegie Mellon University
Topic: Usable Formal Methods for the Design and Composition of Security and Privacy Policies (CMU/UTSA Collaborative Proposal)
Title: Towards a Framework for Pattern Experimentation: Understanding empirical validity in requirements engineering patterns
Author(s): Breaux, T., Hibshi, H., Rao, A., Lehker, J.-M.
Hard Problem: Metrics, Human Behavior
Collaborating Institution(s): UTSA
Abstract: Breaux, T.D.; Hibshi, H.; Rao, A; Lehker, J., "Towards a framework for pattern experimentation: Understanding empirical validity in requirements engineering patterns," Requirements Patterns (RePa), 2012 IEEE Second International Workshop on , vol., no., pp.41,47, 24-24 Sept. 2012 doi: 10.1109/RePa.2012.6359975
Despite the abundance of information security guidelines, system developers have difficulties implementing technical solutions that are reasonably secure. Security patterns are one possible solution to help developers reuse security knowledge. The challenge is that it takes experts to develop security patterns. To address this challenge, we need a framework to identify and assess patterns and pattern application practices that are accessible to non-experts. In this paper, we narrowly define what we mean by patterns by focusing on requirements patterns and the considerations that may inform how we identify and validate patterns for knowledge reuse. We motivate this discussion using examples from the requirements pattern literature and theory in cognitive psychology.(ID#:14-2504)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6359975&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F6338383%2F6359964%2F06359975.pdf%3Farnumber%3D6359975
Publication Location: 2nd IEEE Workshop on Requirements Engineering Patterns (RePa'12), Chicago, Illinois, Sep. 2012


CMU - Carnegie Mellon University
Topic: Usable Formal Methods for the Design and Composition of Security and Privacy Policies (CMU/UTSA Collaborative Proposal)
Title: Characterizations and Boundaries of Security Requirements Patterns
Author(s): Slavin, R., Shen, H., Niu, J.
Hard Problem: Metrics, Human Behavior
Collaborating Institution(s): UTSA
Abstract: Slavin, R.; Hui Shen; Jianwei Niu, "Characterizations and boundaries of security requirements patterns," Requirements Patterns (RePa), 2012 IEEE Second International Workshop on , vol., no., pp.48,53, 24-24 Sept. 2012 doi: 10.1109/RePa.2012.6359974
Very often in the software development life cycle, security is applied too late or important security aspects are overlooked. Although the use of security patterns is gaining popularity, the current state of security requirements patterns is such that there is not much in terms of a defining structure. To address this issue, we are working towards defining the important characteristics as well as the boundaries for security requirements patterns in order to make them more effective. By examining an existing general pattern format that describes how security patterns should be structured and comparing it to existing security requirements patterns, we are deriving characterizations and boundaries for security requirements patterns. From these attributes, we propose a defining format. We hope that these can reduce user effort in elicitation and specification of security requirements patterns. (ID#:14-2505)
URL: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?reload=true&arnumber=6359974
Publication Location: 2nd IEEE Workshop on Requirements Engineering Patterns (RePa'12)



CMU - Carnegie Mellon University
Topic: Science of Secure Frameworks (CMU/Wayne State University/George Mason University Collaborative Proposal)
Title: Language-Based Architectural Control
Author(s): Jonathan Aldrich, Cyrus Omar, Alex Potanin, and Du Li
Hard Problem: Scalability and Composability
Collaborating Institution(s): Victoria University of Wellington
Abstract: From CMU.edu: Software architects design systems to achieve quality attributes like security, reliability, and performance. Key to achieving these quality attributes are design constraints governing how components of the system are configured, communicate, and access resources. Unfortunately, identifying, specifying, communicating, and enforcing important design constraints - achieving architectural control - can be difficult, particularly in large software systems.

We argue for the development of architectural frameworks, built to leverage language mechanisms that provide for domain-specific syntax, editor services, and explicit control over capabilities, that help increase architectural control. In particular, we argue for concise, centralized architectural descriptions which are responsible for specifying constraints and passing a minimal set of capabilities to downstream system components, or explicitly entrusting them to individuals playing defined roles within a team. By integrating these architectural descriptions directly into the language, the type system can help enforce technical constraints and editor services can help enforce social constraints. We sketch our approach in the context of distributed systems. (ID#:14-2506)
URL: http://www.cs.cmu.edu/~aldrich/papers/iwaco2014-arch-control.pdf
Publication Location: Submitted to 6th International Workshop on Aliasing, Capabilities, and Ownership (IWACO '14)




CMU - Carnegie Mellon University
Topic: Secure Frameworks
Title: A Systematic Survey of Self-Protecting Software Systems
Author(s): Eric Yuan, Naeem Esfahani, and Sam Malek
Hard Problem: Scalability and Composability
Abstract: Eric Yuan, Naeem Esfahani, and Sam Malek. 2014. A Systematic Survey of Self-Protecting Software Systems. ACM Trans. Auton. Adapt. Syst. 8, 4, Article 17 (January 2014), 41 pages. DOI=10.1145/2555611 http://doi.acm.org/10.1145/2555611
Self-protecting software systems are a class of autonomic systems capable of detecting and mitigating security threats at runtime. They are growing in importance, as the stovepipe static methods of securing software systems have been shown to be inadequate for the challenges posed by modern software systems. Self-protection, like other self-* properties, allows the system to adapt to the changing environment through autonomic means without much human intervention, and can thereby be responsive, agile, and cost effective. While existing research has made significant progress towards autonomic and adaptive security, gaps and challenges remain. This article presents a significant extension of our preliminary study in this area. In particular, unlike our preliminary study, here we have followed a systematic literature review process, which has broadened the scope of our study and strengthened the validity of our conclusions. By proposing and applying a comprehensive taxonomy to classify and characterize the state-of-the-art research in this area, we have identified key patterns, trends and challenges in the existing approaches, which reveals a number of opportunities that will shape the focus of future research efforts. (ID#:14-2507)
URL: http://dl.acm.org/citation.cfm?id=2555611
Publication Location: ACM Transactions on Autonomous and Adaptive Systems (TAAS)




CMU - Carnegie Mellon University
Topic: A Language and Framework for Development of Secure Mobile Applications
Title: Safely Composable Type-Specific Languages
Author(s): Cyrus Omar, Darya Kurilova, Ligia Nistor, Benjamin Chung, Alex Potanin, and Jonathan Aldrich
Hard Problem: Scalability and Composability, Human Behavior
Collaborating Institution(s): Victoria University of Wellington
Abstract: Available from Springer via link listed below. (ID#:14-2508)
URL: http://link.springer.com/chapter/10.1007%2F978-3-662-44202-9_5
Publication Location: To appear in proceedings of the European Conference on Object-Oriented Programming, 2014


CMU - Carnegie Mellon University
Topic: A Language and Framework for Development of Secure Mobile Applications
Title: Structuring Documentation to Support State Search: A Laboratory Experiment about Protocol Programming
Author(s): Joshua Sunshine, James D. Herbsleb, and Jonathan Aldrich
Hard Problem: Scalability and Composability, Human Behavior
Abstract: From Springer: Application Programming Interfaces (APIs) often define object protocols. Objects with protocols have a finite number of states and in each state a different set of method calls is valid. Many researchers have developed protocol verification tools because protocols are notoriously difficult to follow correctly. However, recent research suggests that a major challenge for API protocol programmers is effectively searching the state space. Verification is an ineffective guide for this kind of search. In this paper we instead propose Plaiddoc, which is like Javadoc except it organizes methods by state instead of by class and it includes explicit state transitions, state-based type specifications, and rich state relationships. We compare Plaiddoc to a Javadoc control in a between-subjects laboratory experiment. We find that Plaiddoc participants complete state search tasks in significantly less time and with significantly fewer errors than Javadoc participants. (ID#:14-2509)
URL: http://link.springer.com/chapter/10.1007%2F978-3-662-44202-9_7
Publication Location: To appear in proceedings of the European Conference on Object-Oriented Programming, 2014
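The Plaiddoc idea above, organizing an API's documentation and checks around object protocol states rather than classes, can be illustrated with a minimal sketch. The `Connection` class, its two states, and all method names here are hypothetical examples, not taken from the paper; the sketch only shows what it means for each state to admit a different set of valid method calls.

```python
from enum import Enum

class State(Enum):
    CLOSED = "closed"
    OPEN = "open"

class ProtocolError(Exception):
    """Raised when a method is called in a state where it is not valid."""
    pass

class Connection:
    """Hypothetical object with a protocol: methods are grouped by the
    state in which they are valid, mirroring a state-based doc view."""
    def __init__(self):
        self.state = State.CLOSED

    # Valid in CLOSED; transitions CLOSED -> OPEN
    def open(self):
        if self.state is not State.CLOSED:
            raise ProtocolError("open() is only valid in state CLOSED")
        self.state = State.OPEN

    # Valid in OPEN; no state transition
    def send(self, data):
        if self.state is not State.OPEN:
            raise ProtocolError("send() is only valid in state OPEN")
        return len(data)

    # Valid in OPEN; transitions OPEN -> CLOSED
    def close(self):
        if self.state is not State.OPEN:
            raise ProtocolError("close() is only valid in state OPEN")
        self.state = State.CLOSED
```

A state-organized reference for this class would list `open` under CLOSED and `send`/`close` under OPEN, together with the transitions each call makes, which is the search structure the experiment found helpful.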


CMU - Carnegie Mellon University
Topic: A Language and Framework for Development of Secure Mobile Applications
Title: In-Nimbo Sandboxing
Author(s): Michael Maass, Bill Scherlis, and Jonathan Aldrich
Hard Problem: Scalability and Composability, Human Behavior
Abstract: Michael Maass, William L. Scherlis, and Jonathan Aldrich. 2014. In-nimbo sandboxing. In Proceedings of the 2014 Symposium and Bootcamp on the Science of Security (HotSoS '14). ACM, New York, NY, USA, Article 1, 12 pages. DOI=10.1145/2600176.2600177 http://doi.acm.org/10.1145/2600176.2600177
Sandboxes impose a security policy, isolating applications and their components from the rest of a system. While many sandboxing techniques exist, state of the art sandboxes generally perform their functions within the system that is being defended. As a result, when the sandbox fails or is bypassed, the security of the surrounding system can no longer be assured. We experiment with the idea of in-nimbo sandboxing, encapsulating untrusted computations away from the system we are trying to protect. The idea is to delegate computations that may be vulnerable or malicious to virtual machine instances in a cloud computing environment.

This may not reduce the possibility of an in-situ sandbox compromise, but it could significantly reduce the consequences should that possibility be realized. To achieve this advantage, there are additional requirements, including: (1) A regulated channel between the local and cloud environments that supports interaction with the encapsulated application, (2) Performance design that acceptably minimizes latencies in excess of the in-situ baseline.

To test the feasibility of the idea, we built an in-nimbo sandbox for Adobe Reader, an application that historically has been subject to significant attacks. We undertook a prototype deployment with PDF users in a large aerospace firm. In addition to thwarting several examples of existing PDF-based malware, we found that the added increment of latency, perhaps surprisingly, does not overly impair the user experience with respect to performance or usability. (ID#:14-2510)
URL: http://dl.acm.org/citation.cfm?id=2600177
Publication Location: To appear in proceedings of HotSoS, 2014.
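The core move in the abstract above, delegating a possibly vulnerable computation away from the system being protected and interacting with it only over a regulated channel, can be sketched with a local analogue. This is not the authors' cloud-based implementation; it is a hypothetical simplification that uses a separate interpreter process in place of a remote VM, with stdin/stdout standing in for the regulated channel.

```python
import subprocess
import sys

def render_untrusted(document: str, timeout_s: float = 10.0) -> str:
    """Hypothetical analogue of in-nimbo sandboxing: the untrusted
    computation runs in a separate process, and only a narrow channel
    (stdin in, stdout out) connects it to the local environment."""
    # Stand-in for the delegated, possibly vulnerable computation.
    untrusted_program = (
        "import sys; sys.stdout.write(sys.stdin.read().upper())"
    )
    proc = subprocess.run(
        [sys.executable, "-c", untrusted_program],
        input=document,
        capture_output=True,
        text=True,
        timeout=timeout_s,
    )
    # Only the channel's output crosses back; a compromise of the
    # delegated process does not directly reach local state.
    return proc.stdout
```

In the paper's setting the delegated process would instead be a cloud VM instance running the real application (Adobe Reader), and the channel design would have to keep the added latency acceptable.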


CMU - Carnegie Mellon University
Topic: A Language and Framework for Development of Secure Mobile Applications
Title: The Power of Interoperability: Why Objects Are Inevitable
Author(s): Jonathan Aldrich
Hard Problem: Scalability and Composability, Human Behavior
Abstract: Jonathan Aldrich. 2013. The power of interoperability: why objects are inevitable. In Proceedings of the 2013 ACM international symposium on New ideas, new paradigms, and reflections on programming & software (Onward! '13). ACM, New York, NY, USA, 101-116. DOI=10.1145/2509578.2514738 http://doi.acm.org/10.1145/2509578.2514738
Three years ago in this venue, Cook argued that in their essence, objects are what Reynolds called procedural data structures. His observation raises a natural question: if procedural data structures are the essence of objects, has this contributed to the empirical success of objects, and if so, how?

This essay attempts to answer that question. After reviewing Cook's definition, I propose the term service abstractions to capture the essential nature of objects. This terminology emphasizes, following Kay, that objects are not primarily about representing and manipulating data, but are more about providing services in support of higher-level goals. Using examples taken from object-oriented frameworks, I illustrate the unique design leverage that service abstractions provide: the ability to define abstractions that can be extended, and whose extensions are interoperable in a first-class way. The essay argues that the form of interoperable extension supported by service abstractions is essential to modern software: many modern frameworks and ecosystems could not have been built without service abstractions. In this sense, the success of objects was not a coincidence: it was an inevitable consequence of their service abstraction nature. (ID#:14-2511)
URL: http://dl.acm.org/citation.cfm?id=2514738
Publication Location: In Onward! Essays, 2013.


CMU - Carnegie Mellon University
Topic: A Language and Framework for Development of Secure Mobile Applications
Title: Type-Directed, Whitespace-Delimited Parsing for Embedded DSLs
Author(s): Cyrus Omar, Benjamin Chung, Darya Kurilova, Alex Potanin, and Jonathan Aldrich
Hard Problem: Scalability and Composability, Human Behavior
Abstract: From CMU.edu: Domain-specific languages improve ease-of-use, expressiveness and verifiability, but defining and using different DSLs within a single application remains difficult. We introduce an approach for embedded DSLs where 1) whitespace delimits DSL-governed blocks, and 2) the parsing and type checking phases occur in tandem so that the expected type of the block determines which domain-specific parser governs that block. We argue that this approach occupies a sweet spot, providing high expressiveness and ease-of-use while maintaining safe composability. We introduce the design, provide examples and describe an ongoing implementation of this strategy in the Wyvern programming language. We also discuss how a more conventional keyword-directed strategy for parsing of DSLs can arise as a special case of this type-directed strategy. (ID#:14-2512)
URL: http://www.cs.cmu.edu/~aldrich/papers/globaldsl13.pdf
Publication Location: Proceedings of the International Workshop on the Globalization of Domain Specific Languages (GlobalDSL), 2013
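The type-directed dispatch described above can be sketched in miniature: the expected type of a block selects which domain-specific parser processes its body. The parser names, registry, and the two toy DSLs here are hypothetical illustrations, not Wyvern's actual mechanism, which integrates this into the language's type checker.

```python
# Toy DSL parsers; each returns a tagged parse result.
def parse_html(body: str):
    return ("html", body.strip())

def parse_query(body: str):
    return ("query", body.strip().upper())

# Hypothetical registry: each expected type names its own parser,
# standing in for type declarations that carry parsing metadata.
PARSERS = {"HTML": parse_html, "Query": parse_query}

def parse_dsl_block(expected_type: str, body: str):
    """Type-directed dispatch: because parsing and type checking run
    in tandem, the type expected at the block's position determines
    which domain-specific parser governs the block."""
    try:
        parser = PARSERS[expected_type]
    except KeyError:
        raise TypeError(f"no DSL parser registered for type {expected_type}")
    return parser(body)
```

In the paper's design the "expected type" comes from the surrounding typed context (e.g., a function parameter's declared type), so the same literal syntax can mean different things in different positions without the DSLs interfering with one another.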


CMU - Carnegie Mellon University
Topic: A Language and Framework for Development of Secure Mobile Applications
Title: Wyvern: A Simple, Typed, and Pure Object-Oriented Language
Author(s): Ligia Nistor, Darya Kurilova, Stephanie Balzer, Benjamin Chung, Alex Potanin, and Jonathan Aldrich
Hard Problem: Scalability and Composability, Human Behavior

Abstract: Ligia Nistor, Darya Kurilova, Stephanie Balzer, Benjamin Chung, Alex Potanin, and Jonathan Aldrich. 2013. Wyvern: a simple, typed, and pure object-oriented language. In Proceedings of the 5th Workshop on MechAnisms for SPEcialization, Generalization and inHerItance (MASPEGHI '13), Markku Sakkinen (Ed.). ACM, New York, NY, USA, 9-16. DOI=10.1145/2489828.2489830 http://doi.acm.org/10.1145/2489828.2489830
The simplest and purest practical object-oriented language designs today are seen in dynamically-typed languages, such as Smalltalk and Self. Static types, however, have potential benefits for productivity, security, and reasoning about programs. In this paper, we describe the design of Wyvern, a statically typed, pure object-oriented language that attempts to retain much of the simplicity and expressiveness of these iconic designs.

Our goals lead us to combine pure object-oriented and functional abstractions in a simple, typed setting. We present a foundational object-based language that we believe to be as close as one can get to simple typed lambda calculus while keeping object-orientation. We show how this foundational language can be translated to the typed lambda calculus via standard encodings. We then define a simple extension to this language that introduces classes and show that classes are no more than sugar for the foundational object-based language. Our future intention is to demonstrate that modules and other object-oriented features can be added to our language as not more than such syntactical extensions while keeping the object-oriented core as pure as possible.

The design of Wyvern closely follows both historical and modern ideas about the essence of object-orientation, suggesting a new way to think about a minimal, practical, typed core language for objects. (ID#:14-2513)
URL: http://dl.acm.org/citation.cfm?id=2489830
Publication Location: Proceedings of the Workshop on Mechanisms for Specialization, Generalization, and Inheritance (MASPEGHI), 2013


CMU - Carnegie Mellon University
Topic: A Language and Framework for Development of Secure Mobile Applications
Title: How Does Your Password Measure Up? The Effect of Strength Meters on Password Creation
Author(s): Blase Ur, Patrick Gage Kelley, Saranga Komanduri, Joel Lee, Michael Maass, Michelle Mazurek, Timothy Passaro, Richard Shay, Timothy Vidas, Lujo Bauer, Nicolas Christin, and Lorrie Faith Cranor
Hard Problem: Scalability and Composability, Human Behavior
Abstract: Blase Ur, Patrick Gage Kelley, Saranga Komanduri, Joel Lee, Michael Maass, Michelle L. Mazurek, Timothy Passaro, Richard Shay, Timothy Vidas, Lujo Bauer, Nicolas Christin, and Lorrie Faith Cranor. 2012. How does your password measure up? the effect of strength meters on password creation. In Proceedings of the 21st USENIX conference on Security symposium (Security'12). USENIX Association, Berkeley, CA, USA, 5-5.
To help users create stronger text-based passwords, many web sites have deployed password meters that provide visual feedback on password strength. Although these meters are in wide use, their effects on the security and usability of passwords have not been well studied.

We present a 2,931-subject study of password creation in the presence of 14 password meters. We found that meters with a variety of visual appearances led users to create longer passwords. However, significant increases in resistance to a password-cracking algorithm were only achieved using meters that scored passwords stringently. These stringent meters also led participants to include more digits, symbols, and uppercase letters.

Password meters also affected the act of password creation. Participants who saw stringent meters spent longer creating their password and were more likely to change their password while entering it, yet they were also more likely to find the password meter annoying. However, the most stringent meter and those without visual bars caused participants to place less importance on satisfying the meter. Participants who saw more lenient meters tried to fill the meter and were averse to choosing passwords a meter deemed "bad" or "poor." Our findings can serve as guidelines for administrators seeking to nudge users towards stronger passwords. (ID#:14-2514)
URL: http://dl.acm.org/citation.cfm?id=2362793.2362798
Publication Location: Proceedings of the 21st USENIX Security Symposium.
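The study above found that only stringent meters, ones that reward length and a mix of character classes harshly enough, measurably improved cracking resistance. A toy stringent scorer along those lines might look like the following; the point thresholds and labels are invented for illustration and are not the meters from the study.

```python
import re

def score_password(pw: str) -> int:
    """Toy stringent scoring: points for length (capped) plus points
    for each character class present, the two factors stringent meters
    in the study rewarded."""
    score = min(len(pw), 16)  # length contribution, capped at 16
    for pattern in (r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"):
        if re.search(pattern, pw):
            score += 4  # one bonus per character class present
    return score

def meter_label(score: int) -> str:
    """Map a score to a visual-meter label (thresholds are arbitrary)."""
    if score >= 24:
        return "strong"
    if score >= 16:
        return "fair"
    return "weak"
```

A stringent meter in this style rates a long all-lowercase password no better than "fair", nudging users toward the digits, symbols, and uppercase letters the study observed stringent meters eliciting.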


CMU - Carnegie Mellon University
Topic: A Language and Framework for Development of Secure Mobile Applications
Title: Declarative Access Policies based on Objects, Relationships, and States
Author(s): Simin Chen
Hard Problem: Scalability and Composability, Human Behavior
Abstract: Simin Chen. 2012. Declarative access policies based on objects, relationships, and states. In Proceedings of the 3rd annual conference on Systems, programming, and applications: software for humanity (SPLASH '12). ACM, New York, NY, USA, 99-100. DOI=10.1145/2384716.2384757 http://doi.acm.org/10.1145/2384716.2384757
Access policies are hard to express in existing programming languages. However, their accurate expression is a prerequisite for many of today's applications. We propose a new language that uses classes, first-class relationships, and first-class states to express access policies in a more declarative and fine-grained way than existing solutions allow. (ID#:14-2515)
URL: http://dl.acm.org/citation.cfm?id=2384757
Publication Location: Proceedings of the SPLASH 2012 Student Research Competition


CMU - Carnegie Mellon University
Topic: A Language and Framework for Development of Secure Mobile Applications
Title: Domain Specific Security through Extensible Type Systems
Author(s): Nathan Fulton
Hard Problem: Scalability and Composability, Human Behavior
Abstract: Nathan Fulton. 2012. Security through extensible type systems. In Proceedings of the 3rd annual conference on Systems, programming, and applications: software for humanity (SPLASH '12). ACM, New York, NY, USA, 107-108. DOI=10.1145/2384716.2384761 http://doi.acm.org/10.1145/2384716.2384761
Researchers interested in security often wish to introduce new primitives into a language. Extensible languages hold promise in such scenarios, but only if the extension mechanism is sufficiently safe and expressive. This paper describes several modifications to an extensible language motivated by end-to-end security concerns. (ID#:14-2516)
URL: http://dl.acm.org/citation.cfm?id=2384761
Publication Location: Proceedings of the SPLASH 2012 Student Research Competition


CMU - Carnegie Mellon University
Topic: Race Vulnerability Study and Hybrid Race Detection (CMU/University of Nebraska, Lincoln Collaborative Proposal)
Title: SimRT: An Automated Framework to Support Regression Testing for Data Races
Author(s): Tingting Yu, Witawas Srisa-an, and Gregg Rothermel
Abstract: Tingting Yu, Witawas Srisa-an, and Gregg Rothermel. 2014. SimRT: an automated framework to support regression testing for data races. In Proceedings of the 36th International Conference on Software Engineering (ICSE 2014). ACM, New York, NY, USA, 48-59. DOI=10.1145/2568225.2568294 http://doi.acm.org/10.1145/2568225.2568294
Concurrent programs are prone to various classes of difficult-to-detect faults, of which data races are particularly prevalent. Prior work has attempted to increase the cost-effectiveness of approaches for testing for data races by employing race detection techniques, but to date, no work has considered cost-effective approaches for re-testing for races as programs evolve. In this paper we present SimRT, an automated regression testing framework for use in detecting races introduced by code modifications. SimRT employs a regression test selection technique, focused on sets of program elements related to race detection, to reduce the number of test cases that must be run on a changed program to detect races that occur due to code modifications, and it employs a test case prioritization technique to improve the rate at which such races are detected. Our empirical study of SimRT reveals that it is more efficient and effective for revealing races than other approaches, and that its constituent test selection and prioritization components each contribute to its performance. (ID#:14-2517)

URL: http://dl.acm.org/citation.cfm?id=2568294

Publication Location: Proceedings of the International Conference on Software Engineering (ICSE) 2014
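SimRT's two constituent components, regression test selection and test case prioritization, can be sketched together in a few lines. This is a deliberately simplified stand-in for the paper's race-aware techniques: here "coverage" is just a set of program elements per test, and tests are ordered by how many changed elements they touch.

```python
def select_and_prioritize(tests, coverage, changed):
    """Simplified sketch of SimRT's pipeline: (1) selection keeps only
    tests covering at least one changed program element; (2)
    prioritization orders them so tests touching more changed elements
    run first, improving the rate at which new races would surface."""
    selected = [t for t in tests if coverage[t] & changed]
    return sorted(
        selected,
        key=lambda t: len(coverage[t] & changed),
        reverse=True,
    )
```

In SimRT proper the selected elements are those relevant to race detection (e.g., shared-memory accesses affected by the code modification), not raw statement coverage, and the prioritization metric is tuned to race-revealing potential.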


CMU - Carnegie Mellon University
Topic: Architecture-based Self-securing Systems
Title: Measuring Attack Surface in Software Architecture
Author(s): Jeffrey Gennari and David Garlan
Abstract: From CMU.edu: In this report we show how to adapt the notion of "attack surface" to formally evaluate security properties at the architectural level of design and to identify the vulnerabilities in architectural designs. Further, we explore the application of this metric in the context of architecture-based transformations to improve security by reducing the attack surface. The transformations are described in detail and validated with a simple experiment. (ID#:14-2518)
URL: http://reports-archive.adm.cs.cmu.edu/anon/isr2011/CMU-ISR-11-121.pdf
Publication Location: Technical Report, CMU-ISR-11-121


CMU - Carnegie Mellon University
Topic: Architecture-based Self-securing Systems
Title: Evolution Styles: foundations and models for software architecture evolution
Author(s): David Garlan, Jeffrey M. Barnes and Bradley Schmerl
Abstract: Jeffrey M. Barnes, David Garlan, and Bradley Schmerl. 2014. Evolution styles: foundations and models for software architecture evolution. Softw. Syst. Model. 13, 2 (May 2014), 649-678. DOI=10.1007/s10270-012-0301-9 http://dx.doi.org/10.1007/s10270-012-0301-9
As new market opportunities, technologies, platforms, and frameworks become available, systems require large-scale and systematic architectural restructuring to accommodate them. Today's architects have few techniques to help them plan this architecture evolution. In particular, they have little assistance in planning alternative evolution paths, trading off various aspects of the different paths, or knowing best practices for particular domains. In this paper, we describe an approach for planning and reasoning about architecture evolution. Our approach focuses on providing architects with the means to model prospective evolution paths and supporting analysis to select among these candidate paths. To demonstrate the usefulness of our approach, we show how it can be applied to an actual architecture evolution. In addition, we present some theoretical results about our evolution path constraint specification language. (ID#:14-2519)
URL: http://dl.acm.org/citation.cfm?id=2617332
Publication Location: Software and Systems Modeling


CMU - Carnegie Mellon University
Topic: Architecture-based Self-securing Systems
Title: Architecture-based run-time diagnosis of multiple, correlated faults
Author(s): Paulo Casanova, Bradley Schmerl, David Garlan, and Rui Abreu
Abstract: Paulo Casanova, David Garlan, Bradley Schmerl, and Rui Abreu. 2013. Diagnosing architectural run-time failures. In Proceedings of the 8th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '13). IEEE Press, Piscataway, NJ, USA, 103-112
Self-diagnosis is a fundamental capability of self-adaptive systems. In order to recover from faults, systems need to know which part is responsible for the incorrect behavior. In previous work we showed how to apply a design-time diagnosis technique at run time to identify faults at the architectural level of a system. Our contributions address three major shortcomings of our previous work: 1) we present an expressive, hierarchical language to describe system behavior that can be used to diagnose when a system is behaving different to expectation; the hierarchical language facilitates mapping low level system events to architecture level events; 2) we provide an automatic way to determine how much data to collect before an accurate diagnosis can be produced; and 3) we develop a technique that allows the detection of correlated faults between components. Our results are validated experimentally by injecting several failures in a system and accurately diagnosing them using our algorithm. (ID#:14-2520)

URL: http://dl.acm.org/citation.cfm?id=2487354

Publication Location: Software Architecture

CMU - Carnegie Mellon University
Topic: Architecture-based Self-securing Systems
Title: Software engineering for self-adaptive systems: A second research roadmap
Author(s): Rogerio de Lemos, Holger Giese, Hausi A. Muller, et al.
Hard Problem: Resilient Architectures
Abstract: Available from Springer via link listed below. (ID#:14-2521)
URL: http://link.springer.com/chapter/10.1007%2F978-3-642-35813-5_1
Publication Location: Software Engineering for Self-Adaptive Systems II


CMU - Carnegie Mellon University
Topic: Architecture-based Self-securing Systems
Title: Diagnosing architectural run-time failures
Author(s): Paulo Casanova, David Garlan, Bradley Schmerl and Rui Abreu
Abstract: Paulo Casanova, David Garlan, Bradley Schmerl, and Rui Abreu. 2013. Diagnosing architectural run-time failures. In Proceedings of the 8th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '13). IEEE Press, Piscataway, NJ, USA, 103-112.
Self-diagnosis is a fundamental capability of self-adaptive systems. In order to recover from faults, systems need to know which part is responsible for the incorrect behavior. In previous work we showed how to apply a design-time diagnosis technique at run time to identify faults at the architectural level of a system. Our contributions address three major shortcomings of our previous work: 1) we present an expressive, hierarchical language to describe system behavior that can be used to diagnose when a system is behaving different to expectation; the hierarchical language facilitates mapping low level system events to architecture level events; 2) we provide an automatic way to determine how much data to collect before an accurate diagnosis can be produced; and 3) we develop a technique that allows the detection of correlated faults between components. Our results are validated experimentally by injecting several failures in a system and accurately diagnosing them using our algorithm. (ID#:14-2522)
URL: http://dl.acm.org/citation.cfm?id=2487354
Publication Location: Proceedings of the 8th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS '13)


CMU - Carnegie Mellon University
Topic: Using Crowdsourcing to Analyze and Summarize the Security of Mobile Applications
Title: Expectation and Purpose: Understanding Users' Mental Models of Mobile App Privacy through Crowdsourcing
Author(s): Jialiu Lin, Shahriyar Amini, Jason Hong, Norman Sadeh, Janne Lindqvist, Joy Zhang
Abstract: Jialiu Lin, Shahriyar Amini, Jason I. Hong, Norman Sadeh, Janne Lindqvist, and Joy Zhang. 2012. Expectation and purpose: understanding users' mental models of mobile app privacy through crowdsourcing. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing (UbiComp '12). ACM, New York, NY, USA, 501-510. DOI=10.1145/2370216.2370290 http://doi.acm.org/10.1145/2370216.2370290
Smartphone security research has produced many useful tools to analyze the privacy-related behaviors of mobile apps. However, these automated tools cannot assess people's perceptions of whether a given action is legitimate, or how that action makes them feel with respect to privacy. For example, automated tools might detect that a blackjack game and a map app both use one's location information, but people would likely view the map's use of that data as more legitimate than the game. Our work introduces a new model for privacy, namely privacy as expectations. We report on the results of using crowdsourcing to capture users' expectations of what sensitive resources mobile apps use. We also report on a new privacy summary interface that prioritizes and highlights places where mobile apps break people's expectations. We conclude with a discussion of implications for employing crowdsourcing as a privacy evaluation technique. (ID#:14-2523)
URL: http://dl.acm.org/citation.cfm?id=2370290
Publication Location: UbiComp 2012



CMU - Carnegie Mellon University
Topic: Secure Composition of Systems and Policies
Title: A Type System for Reasoning about Trace Properties For Distributed Systems
Author(s): Limin Jia, Deepak Garg, Anupam Datta
Abstract: Not Found (ID#:14-2524)
URL: Not Found
Publication Location: Draft CMU Technical Paper


CMU - Carnegie Mellon University
Topic: Secure Composition of Systems and Policies
Title: Compositional Flaws - A Technical Account
Author(s): Anupam Datta, Limin Jia and Jeannette Wing
Abstract: not found (ID#:14-2525)
URL: not found
Publication Location: Draft CMU Technical Paper


CMU - Carnegie Mellon University
Topic: Secure Composition of Systems and Policies
Title: Compositional Security for Higher-Order Programs
Author(s): Limin Jia, Deepak Garg and Anupam Datta
Abstract: Located as a PPT on CPSVO-SoS (ID#:14-2526)
URL: http://cps-vo.org/node/9531
Publication Location: not available


CMU - Carnegie Mellon University
Topic: Secure Composition of Systems and Policies
Title: An Epistemic Formulation of Information Flow Analysis
Author(s): Arbob Ahmad and Robert Harper
Abstract: From cmu.edu: The non-interference (NI) property defines a program to be secure if changes to high-security inputs cannot alter the values of low-security outputs. NI indirectly states the epistemic property that no low-security principal acquires knowledge of high-security data. We consider a directly epistemic account of information flow (IF) security focusing on the knowledge flows engendered by the program's execution. Storage effects are of primary interest, since principals acquire knowledge from the execution only through these effects. The IF properties of the individual effectful actions are characterized using a substructural epistemic logic that accounts for the knowledge transferred through their execution. We prove that a low-security principal never acquires knowledge of a high-security input through execution of a well-typed program.
The epistemic approach has several advantages over NI. First, it directly accounts for the knowledge flow engendered by a program. Second, in contrast to the bimodal NI property, the epistemic approach accounts for authorized declassification. We prove that a low-security principal acquires knowledge of a high-security input only if it is authorized by a proof in authorization logic. Third, the explicit formulation of IF properties as an epistemic theory provides a crisp treatment of "side channels". Rather than prove that a principal does not know a secret, we instead prove that it is not provable that the principal knows that secret. The latter statement characterizes the "minimal model," for which a precise statement may be made, whereas the former applies to "any model," including those with "side channels" that violate the model's basic premises. Fourth, the NI property is re-positioned as providing an adequacy proof of the epistemic theory of effects, ensuring that the logical theory corresponds to the actual program behavior. In this way we obtain a generalization of the classical approach to IF security that extends to authorized declassification. (ID#:14-2527)
URL: http://www.cs.cmu.edu/~adahmad/epi-if-draft.pdf
Publication Location: IEEE 26th Computer Security Foundations Symposium (CSF 2013)
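The information-flow discipline discussed above, that no low-security observer learns high-security data, can be made concrete with a tiny dynamic label-tracking sketch. This is a hypothetical runtime analogue for illustration only; the paper's account is a static, logic-based one, and none of these class or function names come from it.

```python
class IFError(Exception):
    """Raised when high-security data would flow to a low output."""
    pass

class Labeled:
    """A value tagged with a security label, 'low' or 'high'."""
    def __init__(self, value, label):
        self.value = value
        self.label = label

def combine(a, b, op):
    """Any value derived from high-security data is itself high:
    the result's label is the join of the operands' labels."""
    label = "high" if "high" in (a.label, b.label) else "low"
    return Labeled(op(a.value, b.value), label)

def low_output(v):
    """A low-security sink: only 'low'-labeled data may be emitted,
    the dynamic counterpart of the non-interference guarantee."""
    if v.label != "low":
        raise IFError("high-security data cannot flow to a low output")
    return v.value
```

Under this discipline, writing `combine(public, secret, add)` to a low output fails, just as a well-typed program in the paper's system cannot leak a high-security input to a low-security principal (absent an authorized declassification, which this sketch does not model).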



CMU - Carnegie Mellon University
Topic: Trust from Explicit Evidence: Integrating Digital Signatures and Formal Proofs
Title: Inductive types in homotopy type theory
Author(s): Steven Awodey, Nicola Gambino, and Kristina Sojakova
Abstract: Steve Awodey, Nicola Gambino, and Kristina Sojakova. 2012. Inductive Types in Homotopy Type Theory. In Proceedings of the 2012 27th Annual IEEE/ACM Symposium on Logic in Computer Science (LICS '12). IEEE Computer Society, Washington, DC, USA, 95-104. DOI=10.1109/LICS.2012.21 http://dx.doi.org/10.1109/LICS.2012.21
Homotopy type theory is an interpretation of Martin-Löf's constructive type theory into abstract homotopy theory. There results a link between constructive mathematics and algebraic topology, providing topological semantics for intensional systems of type theory as well as a computational approach to algebraic topology via type theory-based proof assistants such as Coq. The present work investigates inductive types in this setting. Modified rules for inductive types, including types of well-founded trees, or W-types, are presented, and the basic homotopical semantics of such types are determined. Proofs of all results have been formally verified by the Coq proof assistant, and the proof scripts for this verification form an essential component of this research. (ID#:14-2528)
URL: http://dl.acm.org/citation.cfm?id=2359495
Publication Location: Proceedings of the 27th Conference on Logic in Computer Science (LICS 2012)


CMU - Carnegie Mellon University
Topic: Trust from Explicit Evidence: Integrating Digital Signatures and Formal Proofs
Title: Higher-Order Processes, Functions, and Sessions: A monadic integration
Author(s): Bernardo Toninho, Luis Caires, and Frank Pfenning
Abstract: Bernardo Toninho, Luis Caires, and Frank Pfenning. 2013. Higher-Order processes, functions, and sessions: a monadic integration. In Proceedings of the 22nd European conference on Programming Languages and Systems (ESOP'13), Matthias Felleisen and Philippa Gardner (Eds.). Springer-Verlag, Berlin, Heidelberg, 350-369. DOI=10.1007/978-3-642-37036-6_20 http://dx.doi.org/10.1007/978-3-642-37036-6_20
In prior research we have developed a Curry-Howard interpretation of linear sequent calculus as session-typed processes. In this paper we uniformly integrate this computational interpretation in a functional language via a linear contextual monad that isolates session-based concurrency. Monadic values are open process expressions and are first class objects in the language, thus providing a logical foundation for higher-order session typed processes. We illustrate how the combined use of the monad and recursive types allows us to cleanly write a rich variety of concurrent programs, including higher-order programs that communicate processes. We show the standard metatheoretic result of type preservation, as well as a global progress theorem, which to the best of our knowledge, is new in the higher-order session typed setting. (ID#:14-2529)
URL: http://dl.acm.org/citation.cfm?id=2450295
Publication Location: Programming Languages and Systems


CMU - Carnegie Mellon University
Topic: Systematic Testing of Distributed and Multi-threaded Systems at Scale
Title: Estimating Runtime of Stateless Exploration
Author(s): Jiri Simsa, Randy Bryant, Garth Gibson, Jason Hickey, John Wilkes
Abstract: From CMU.edu: In the past 15 years, stateless exploration, a collection of techniques for automated and systematic testing of concurrent programs, has experienced wide-spread adoption. As stateless exploration moves into practice, becoming part of testing infrastructures of large-scale system developers, new practical challenges are being identified. In this paper we address the problem of efficient allocation of resources to stateless exploration runs. To this end, this paper presents techniques for estimating the total runtime of stateless exploration runs and policies for allocating resources among tests based on these runtime estimates.
Evaluating our techniques on a collection of traces from a real-world deployment at Google, we demonstrate the techniques' success at providing accurate runtime estimations, achieving estimation accuracy above 60% after as little as 1% of the state space has been explored. We further show that these estimates can be used to implement intelligent resource allocation policies that meet testing objectives more than twice as efficiently as the round-robin policy. (ID#:14-2530)
URL: http://www.pdl.cmu.edu/PDL-FTP/Storage/CMU-PDL-12-113.pdf
Publication Location: CMU-PDL-12-113, 2012


CMU - Carnegie Mellon University
Topic: Systematic Testing of Distributed and Multi-threaded Systems at Scale
Title: Borrow checking: A safe alias analysis for types with ownership and mutability
Author(s): Niko Matsakis, Brian Anderson, Ben Blum, Tim Chevalier, Graydon Hoare, Patrick Walton, David Herman
Abstract: not found (ID#:14-2531)
URL: not found
Publication Location: PLDI 2013.


CMU - Carnegie Mellon University
Topic: Validating Productivity Benefits of Type-Like Behavioral Specifications
Title: Searching the State Space: A Qualitative Study of API Protocol Usability
Author(s): Joshua Sunshine, James D Herbsleb, Jonathan Aldrich
Abstract: From CMU.edu: Application Programming Interfaces (APIs) often define protocols - restrictions on the order of client calls to API methods. API protocols are common and difficult to use, which has generated tremendous research effort in alternative specifications, implementation, and verification techniques. However, little is understood about the barriers programmers face when using these APIs, and therefore the research effort may be misdirected.

To understand these barriers better, we perform a two-part qualitative study. First, we study developer forums to identify problems that developers have with protocols. Second, we perform a think-aloud observational study, in which we systematically observe professional programmers struggle with these same problems to get more detail on the nature of their struggles and how they use available resources. In our observations, programmer time was spent primarily on four types of searches of the protocol state space. These observations suggest protocol-targeted tools, languages, and verification techniques will be most effective if they enable programmers to efficiently perform state search. (ID#:14-2532)
URL: http://www.cs.cmu.edu/~jssunshi/pubs/searchingfse14draft.pdf
Publication Location: not available


CMU - Carnegie Mellon University
Topic: Improving the Usability of Security Requirements by Software Developers through Empirical Studies and Analysis
Title: Security Requirements Patterns: Understanding the Science Behind the Art of Pattern Writing
Author(s): Maria Riaz, Laurie Williams
Abstract: Riaz, M.; Williams, L., "Security requirements patterns: understanding the science behind the art of pattern writing," Requirements Patterns (RePa), 2012 IEEE Second International Workshop on , vol., no., pp.29,34, 24-24 Sept. 2012
doi: 10.1109/RePa.2012.6359977
Security requirements engineering ideally combines expertise in software security with proficiency in requirements engineering to provide a foundation for developing secure systems. However, security requirements are often inadequately understood and improperly specified, often due to lack of security expertise and a lack of emphasis on security during early stages of system development. Software systems often have common and recurrent security requirements in addition to system-specific security needs. Security requirements patterns can provide a means of capturing common security requirements while documenting the context in which a requirement manifests itself and the tradeoffs involved. The objective of this paper is to aid in understanding of the process for pattern development and provide considerations for writing effective security requirements patterns. We analyzed existing literature on software patterns, problem solving and cognition to outline the process for developing software patterns. We also reviewed strategies for specifying reusable security requirements and security requirements patterns. Our proposed considerations can aid pattern writers in capturing necessary contextual information when documenting security requirements patterns to facilitate application and integration of security requirements. (ID#:14-2533)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6359977&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F6338383%2F6359964%2F06359977.pdf%3Farnumber%3D6359977
Publication Location: 2nd IEEE Workshop on Requirements Engineering Patterns (RePa'12)


CMU - Carnegie Mellon University
Topic: Proofs and Signatures
Title: Behavioral Polymorphism and Parametricity in Session-Based Communication
Author(s): Luis Caires, Jorge A. Perez, Frank Pfenning, and Bernardo Toninho
Abstract: Luis Caires, Jorge A. Perez, Frank Pfenning, and Bernardo Toninho. 2013. Behavioral polymorphism and parametricity in session-based communication. In Proceedings of the 22nd European conference on Programming Languages and Systems (ESOP'13), Matthias Felleisen and Philippa Gardner (Eds.). Springer-Verlag, Berlin, Heidelberg, 330-349. DOI=10.1007/978-3-642-37036-6_19 http://dx.doi.org/10.1007/978-3-642-37036-6_19
We investigate a notion of behavioral genericity in the context of session type disciplines. To this end, we develop a logically motivated theory of parametric polymorphism, reminiscent of the Girard-Reynolds polymorphic λ-calculus, but cast in the setting of concurrent processes. In our theory, polymorphism accounts for the exchange of abstract communication protocols and dynamic instantiation of heterogeneous interfaces, as opposed to the exchange of data types and dynamic instantiation of individual message types. Our polymorphic session-typed process language satisfies strong forms of type preservation and global progress, is strongly normalizing, and enjoys a relational parametricity principle. Combined, our results confer strong correctness guarantees for communicating systems. In particular, parametricity is key to derive non-trivial results about internal protocol independence, a concurrent analogue of representation independence, and non-interference properties of modular, distributed systems. (ID#:14-2534)
URL: http://dl.acm.org/citation.cfm?id=2450294
Publication Location: ESOP'13 Proceedings of the 22nd European conference on Programming Languages and Systems


CMU - Carnegie Mellon University
Topic: Proofs and Signatures
Title: LiquidPi: Inferrable Dependent Session Types
Author(s): Dennis Griffith and Elsa Gunter
Abstract: Available from Springer via the link listed below. (ID#:14-2535)
URL: http://link.springer.com/chapter/10.1007%2F978-3-642-38088-4_13
Publication Location: 5th NASA Formal Methods Symposium (NFM), May 2013


CMU - Carnegie Mellon University
Topic: Proofs and Signatures
Title: Linear Logic Propositions as Session Types
Author(s): Luis Caires, Frank Pfenning, and Bernardo Toninho
Abstract: From CMU.edu: Throughout the years, several typing disciplines for the π-calculus have been proposed. Arguably, the most widespread of these typing disciplines consists of session types. Session types describe the input/output behavior of processes and traditionally provide strong guarantees about this behavior (i.e., deadlock freedom and fidelity). While these systems exploit a fundamental notion of linearity, the precise connection between linear logic and session types has not been well understood.
This paper proposes a type system for the π-calculus that corresponds to a standard sequent calculus presentation of intuitionistic linear logic, interpreting linear propositions as session types and thus providing a purely logical account of all key features and properties of session types. We show the deep correspondence between linear logic and session types by exhibiting a tight operational correspondence between cut elimination steps and process reductions. We also discuss an alternative presentation of linear session types based on classical linear logic, and compare our development with other more traditional session type systems. (ID#:14-2536)
URL: http://www.cs.cmu.edu/~btoninho/mscs12.pdf
Publication Location: Mathematical Structures in Computer Science (MSCS)



CMU - Carnegie Mellon University
Topic: Security Reasoning for Distributed Systems with Uncertainty
Title: A Generalization of SAT and #SAT for Robust Policy Evaluation
Author(s): E. Zawadzki, A. Platzer, G. Gordon
Abstract: Erik Zawadzki, Andre Platzer, and Geoffrey J. Gordon. 2013. A generalization of SAT and #SAT for robust policy evaluation. In Proceedings of the Twenty-Third international joint conference on Artificial Intelligence (IJCAI'13), Francesca Rossi (Ed.). AAAI Press 2583-2589.
Both SAT and #SAT can represent difficult problems in seemingly dissimilar areas such as planning, verification, and probabilistic inference. Here, we examine an expressive new language, #∃SAT, that generalizes both of these languages. #∃SAT problems require counting the number of satisfiable formulas in a concisely-describable set of existentially-quantified, propositional formulas. We characterize the expressiveness and worst-case difficulty of #∃SAT by proving it is complete for the complexity class #P^NP[1], and relating this class to more familiar complexity classes. We also experiment with three new general-purpose #∃SAT solvers on a battery of problem distributions including a simple logistics domain. Our experiments show that, despite the formidable worst-case complexity of #P^NP[1], many of the instances can be solved efficiently by noticing and exploiting a particular type of frequent structure. (ID#:14-2537)
URL: http://dl.acm.org/citation.cfm?id=2540500
Publication Location: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence
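The model-counting problem (#SAT) at the core of this generalization can be made concrete with a minimal sketch. This brute-force counter over small CNF formulas is illustrative only and is unrelated to the solvers described in the paper:

```python
from itertools import product

def count_models(clauses, n_vars):
    """Brute-force #SAT: count assignments satisfying a CNF formula.

    Clauses use DIMACS-style literals: 3 means x3, -3 means NOT x3.
    """
    count = 0
    for bits in product([False, True], repeat=n_vars):
        # A clause is satisfied if any literal agrees with the assignment.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# (x1 OR x2) AND (NOT x1 OR x3): 4 of the 8 assignments satisfy it.
formula = [[1, 2], [-1, 3]]
print(count_models(formula, 3))  # 4
```

Real #SAT solvers avoid this exponential enumeration via component caching and clause learning; the sketch only fixes the semantics of the counting problem.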



CMU - Carnegie Mellon University
Topic: Geo-Temporal Characterization of Security Threats
Title: A Large-Scale Exploratory Analysis of the Cyber-Threat Landscape over Time
Author(s): Ghita Mezzour, L. Richard Carley, Kathleen M. Carley
Hard Problem: Policy governed secure collaboration, resilient architectures.
Abstract: Available from Springer via the link listed below. (ID#:14-2538)
URL: http://link.springer.com/article/10.1007%2Fs11416-014-0217-8
Publication Location: Springer


CMU - Carnegie Mellon University
Topic: No Listed Topic
Title: Efficient Exploratory Testing of Concurrent Systems
Author(s): Jiri Simsa, Randy Bryant, Garth Gibson, Jason Hickey
Abstract: From CMU.edu: In our experience, exploratory testing has reached a level of maturity that makes it a practical and often the most cost-effective approach to testing. Notably, previous work has demonstrated that exploratory testing is capable of finding bugs even in well-tested systems [4, 17, 24, 23]. However, the number of bugs found gives little indication of the efficiency of a testing approach. To drive testing efficiency, this paper focuses on techniques for measuring and maximizing the coverage achieved by exploratory testing. In particular, this paper describes the design, implementation, and evaluation of Eta, a framework for exploratory testing of multi-threaded components of a large-scale cluster management system at Google. For simpler tests (with millions to billions of possible executions), Eta achieves complete coverage one to two orders of magnitude faster than random testing. For complex tests, Eta adopts a state space reduction technique to avoid the need to explore over 85% of executions and harnesses parallel processing to explore multiple test executions concurrently, achieving a throughput of up to 17.5x. (ID#:14-2539)
URL: http://www.pdl.cmu.edu/PDL-FTP/associated/CMU-PDL-11-113.pdf
Publication Location: Carnegie Mellon University Parallel Data Laboratory Technical Report CMU-PDL-11-113
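The exhaustive schedule coverage that frameworks like Eta measure can be illustrated with a toy sketch of systematic interleaving exploration (this illustrates the general technique, not Eta's implementation): enumerate every interleaving of two threads performing a non-atomic counter increment and observe that some schedules lose an update.

```python
def interleavings(threads):
    """Yield every schedule of per-thread operation lists that
    respects each thread's program order."""
    if all(not ops for ops in threads):
        yield []
        return
    for tid, ops in enumerate(threads):
        if ops:
            rest = list(threads)
            rest[tid] = ops[1:]  # consume this thread's next operation
            for tail in interleavings(rest):
                yield [ops[0]] + tail

def run(schedule):
    """Simulate a non-atomic increment: each thread reads the shared
    counter, then later writes its local copy + 1."""
    shared, local = 0, {}
    for op, tid in schedule:
        if op == 'r':
            local[tid] = shared
        else:
            shared = local[tid] + 1
    return shared

# Two threads, each doing read-then-write of the shared counter.
threads = [[('r', 0), ('w', 0)], [('r', 1), ('w', 1)]]
finals = {run(s) for s in interleavings(threads)}
print(sorted(finals))  # [1, 2]: racy schedules lose an update
```

With two threads of two operations there are only 6 interleavings; the combinatorial explosion for realistic tests is exactly why the coverage-estimation and reduction techniques in these papers matter.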


CMU - Carnegie Mellon University
Topic: No Listed Topic
Title: Landslide: Systematic Exploration for Kernel-Space Race Detection
Author(s): Ben Blum
Abstract: From pitt.edu: Systematic exploration is an approach to finding race conditions by deterministically executing every possible interleaving of thread transitions and identifying which ones expose bugs. Current systematic exploration techniques are suitable for testing user-space programs, but are inadequate for testing kernels, where the testing framework's control over concurrency is more complicated.

We present Landslide, a systematic exploration tool for finding races in kernels. Landslide targets Pebbles, the kernel specification that students implement in the undergraduate Operating Systems course at Carnegie Mellon University (15-410). We discuss the techniques Landslide uses to address the general challenges of kernel-level concurrency, and we evaluate its effectiveness and usability as a debugging aid. We show that our techniques make systematic testing in kernel-space feasible and that Landslide is a useful tool for doing so in the context of 15-410. (ID#:14-2540)
URL: http://www.pdl.cmu.edu/PDL-FTP/associated/CMU-CS-12-118.pdf
Publication Location: Carnegie Mellon University Technical Report CMU-CS-12-118


CMU - Carnegie Mellon University
Topic: No Listed Topic
Title: Scalable Dynamic Partial Order Reduction
Author(s): Jiri Simsa, Randal Bryant, Garth Gibson and Jason Hickey
Abstract: From Springer: Systematic testing, first demonstrated in small, specialized cases 15 years ago, has matured sufficiently for large-scale systems developers to begin to put it into practice. With actual deployment come new, pragmatic challenges to the usefulness of the techniques. In this paper we are concerned with scaling dynamic partial order reduction, a key technique for mitigating the state space explosion problem, to very large clusters. In particular, we present a new approach for distributed dynamic partial order reduction. Unlike previous work, our approach is based on a novel exploration algorithm that 1) enables trading space complexity for parallelism, 2) achieves efficient load-balancing through time-slicing, 3) provides for fault tolerance, which we consider a mandatory aspect of scalability, 4) scales to more than a thousand parallel workers, and 5) is guaranteed to avoid redundant exploration of overlapping portions of the state space. (ID#:14-2541)
URL: http://link.springer.com/chapter/10.1007%2F978-3-642-35632-2_4
Publication Location: 3rd International Conference on Runtime Verification (RV2012) 2012


CMU - Carnegie Mellon University
Topic: No Listed Topic
Title: Architecture-Based Self-Protecting Software Systems
Author(s): Eric Yuan, Sam Malek, Bradley Schmerl, David Garlan and Jeffrey Gennari
Abstract: Eric Yuan, Sam Malek, Bradley Schmerl, David Garlan, and Jeff Gennari. 2013. Architecture-based self-protecting software systems. In Proceedings of the 9th international ACM Sigsoft conference on Quality of software architectures (QoSA '13). ACM, New York, NY, USA, 33-42. DOI=10.1145/2465478.2465479 http://doi.acm.org/10.1145/2465478.2465479
Since conventional software security approaches are often manually developed and statically deployed, they are no longer sufficient against today's sophisticated and evolving cyber security threats. This has motivated the development of self-protecting software that is capable of detecting security threats and mitigating them through runtime adaptation techniques. In this paper, we argue for an architecture-based self-protection (ABSP) approach to address this challenge. In ABSP, detection and mitigation of security threats are informed by an architectural representation of the running system, maintained at runtime. With this approach, it is possible to reason about the impact of a potential security breach on the system, assess the overall security posture of the system, and achieve defense in depth. To illustrate the effectiveness of this approach, we present several architecture adaptation patterns that provide reusable detection and mitigation strategies against well-known web application security threats. Finally, we describe our ongoing work in realizing these patterns on top of Rainbow, an existing architecture-based adaptation framework. (ID#:14-2542)
URL: http://dl.acm.org/citation.cfm?id=2465479
Publication Location: QoSA '13 Proceedings of the 9th international ACM Sigsoft conference on Quality of software architectures


CMU - Carnegie Mellon University
Topic: No Listed Topic
Title: Finding Security Vulnerabilities that are Architectural Flaws using Constraints
Author(s): Radu Vanciu and Marwan Abi-Antoun
Abstract: Vanciu, R.; Abi-Antoun, M., "Finding architectural flaws using constraints," Automated Software Engineering (ASE), 2013 IEEE/ACM 28th International Conference on , vol., no., pp.334,344, 11-15 Nov. 2013 doi: 10.1109/ASE.2013.6693092
During Architectural Risk Analysis (ARA), security architects use a runtime architecture to look for security vulnerabilities that are architectural flaws rather than coding defects. The current ARA process, however, is mostly informal and manual. In this paper, we propose Scoria, a semi-automated approach for finding architectural flaws. Scoria uses a sound, hierarchical object graph with abstract objects and dataflow edges, where edges can refer to nodes in the graph. The architects can augment the object graph with security properties, which can express security information unavailable in code. Scoria allows architects to write queries on the graph in terms of the hierarchy, reachability, and provenance of a dataflow object. Based on the query results, the architects enhance their knowledge of the system security and write expressive constraints. The expressiveness is richer than previous approaches that check only for the presence or absence of communication or do not track a dataflow as an object. To evaluate Scoria, we apply these constraints to several extended examples adapted from the CERT standard for Java to confirm that Scoria can detect injected architectural flaws. Next, we write constraints to enforce an Android security policy and find one architectural flaw in one Android application. (ID#:14-2543)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6693092&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6693092
Publication Location: Location Not Available






NCSU – North Carolina State University

NCSU Publications


These publications resulted from Lablet activities at this school and were listed in the Quarterly Reports to the government. Please direct any questions or concerns regarding these publications to research (at) securedatabank.net.


NCSU - North Carolina State University
Topic: Developing a User Profile to Predict Phishing Susceptibility and Security Technology Acceptance
Title: Keeping up with the Joneses: Assessing phishing susceptibility in an email task
Author(s): Hong, K. W., Kelley, C. M., Mayhorn, C. B., & Murphy-Hill, E
Hard Problem: Human Behavior
Abstract: From NCSU.edu: Most prior research on preventing phishing attacks focuses on technology to identify and prevent the delivery of phishing emails. The current study supports an ongoing effort to develop a user-profile that predicts when phishing attacks will be successful. We sought to identify the behavioral, cognitive and perceptual attributes that make some individuals more vulnerable to phishing attack than others. Fifty-three participants responded to a number of self-report measures (e.g., dispositional trust) and completed the 'Bob Jones' email task that was designed to empirically evaluate phishing susceptibility. Over 92% of participants were to some extent vulnerable to phishing attacks. Additionally, individual differences in gender, trust, and personality were associated with phishing vulnerability. Application and implications for future research are discussed. (ID#:14-2544)
URL: http://www4.ncsu.edu/~khong/papers/kwh_etal_hfes_13.pdf
Publication Location: Human Factors and Ergonomics Society 57th Annual Meeting 2013


NCSU - North Carolina State University
Topic: Developing a User Profile to Predict Phishing Susceptibility and Security Technology Acceptance
Title: Something smells phishy: Exploring definitions, consequences, and reactions to phishing
Author(s): Kelley, C. M., Hong, K. W., Mayhorn, C. B., & Murphy-Hill, E
Hard Problem: Human Behavior
Abstract: From NCSU.edu: One hundred fifty-five participants completed a survey on Amazon's Mechanical Turk that assessed characteristics of phishing attacks and requested participants to describe their previous experiences and the related consequences. Results indicated almost all participants had been targets of a phishing attempt, with 22% reporting these attempts were successful. Participants reported actively engaging in efforts to protect themselves online by noticing the "padlock icon" and seeking additional information to verify the legitimacy of e-retailers. Moreover, participants indicated that phishers most frequently pose as members of organizations and that phishing typically occurs via email yet they are aware that other media might also make them susceptible to phishing scams. The reported consequences of phishing attacks go beyond financial loss, with many participants describing social ramifications such as embarrassment and reduced trust. Implications for research in risk communication and design roles by human factors/ergonomics (HF/E) professionals are discussed. (ID#:14-2545)
URL: http://www4.ncsu.edu/~khong/papers/ck_etal_hfes_12.pdf
Publication Location: Human Factors and Ergonomics Society 56th Annual Meeting 2012


NCSU - North Carolina State University
Topic: Developing a User Profile to Predict Phishing Susceptibility and Security Technology Acceptance
Title: Have you smelled something phishy? A cross-cultural study on conceptions and experiences of phishing between China and the U.S.
Author(s): Liu, Y., & Mayhorn, C. B.
Hard Problem: Human Behavior
Abstract: Not found; see "American and Indian conceptualizations of phishing" below (ID#:14-2546)
URL: Not found; see "American and Indian..." below
Publication Location: Twelfth Annual North Carolina State University Undergraduate Summer Research Symposium 2013


NCSU - North Carolina State University
Topic: Developing a User Profile to Predict Phishing Susceptibility and Security Technology Acceptance
Title: American and Indian conceptualizations of phishing
Author(s): Tembe, R., Hong, K. W., Mayhorn, C. B., Murphy-Hill, E., & Kelley, C. M.
Hard Problem: Human Behavior
Abstract: Published as "Phishing in international waters: exploring cross-national differences in phishing conceptualizations between Chinese, Indian and American samples":
Rucha Tembe, Olga Zielinska, Yuqi Liu, Kyung Wha Hong, Emerson Murphy-Hill, Chris Mayhorn, and Xi Ge. 2014. Phishing in international waters: exploring cross-national differences in phishing conceptualizations between Chinese, Indian and American samples. In Proceedings of the 2014 Symposium and Bootcamp on the Science of Security (HotSoS '14). ACM, New York, NY, USA, Article 8, 7 pages. DOI=10.1145/2600176.2600178 http://doi.acm.org/10.1145/2600176.2600178
One hundred-sixty four participants from the United States, India and China completed a survey designed to assess past phishing experiences and whether they engaged in certain online safety practices (e.g., reading a privacy policy). The study investigated participants' reported agreement regarding the characteristics of phishing attacks, types of media where phishing occurs and the consequences of phishing. A multivariate analysis of covariance indicated that there were significant differences in agreement regarding phishing characteristics, phishing consequences and types of media where phishing occurs for these three nationalities. Chronological age and education did not influence the agreement ratings; therefore, the samples were demographically equivalent with regards to these variables. A logistic regression analysis was conducted to analyze the categorical variables and nationality data. Results based on self-report data indicated that (1) Indians were more likely to be phished than Americans, (2) Americans took protective actions more frequently than Indians by destroying old documents, and (3) Americans were more likely to notice the "padlock" security icon than either Indian or Chinese respondents. The potential implications of these results are discussed in terms of designing culturally sensitive anti-phishing solutions. (ID#:14-2547)
URL: http://dl.acm.org/citation.cfm?id=2600178
Publication Location: International Workshop on the Socio-Technical Aspects of Security and Trust 2013


NCSU - North Carolina State University
Topic: Developing a User Profile to Predict Phishing Susceptibility and Security Technology Acceptance
Title: Phishing in international waters: Exploring cross-cultural differences in phishing conceptualizations between Chinese, Indian, and American samples
Author(s): Tembe, R., Zielinska, O., Liu, Y., Hong, K. W., Mayhorn, C. B., & Murphy-Hill
Hard Problem: Human Behavior
Abstract:
Rucha Tembe, Olga Zielinska, Yuqi Liu, Kyung Wha Hong, Emerson Murphy-Hill, Chris Mayhorn, and Xi Ge. 2014. Phishing in international waters: exploring cross-national differences in phishing conceptualizations between Chinese, Indian and American samples. In Proceedings of the 2014 Symposium and Bootcamp on the Science of Security (HotSoS '14). ACM, New York, NY, USA, Article 8, 7 pages. DOI=10.1145/2600176.2600178 http://doi.acm.org/10.1145/2600176.2600178
One hundred-sixty four participants from the United States, India and China completed a survey designed to assess past phishing experiences and whether they engaged in certain online safety practices (e.g., reading a privacy policy). The study investigated participants' reported agreement regarding the characteristics of phishing attacks, types of media where phishing occurs and the consequences of phishing. A multivariate analysis of covariance indicated that there were significant differences in agreement regarding phishing characteristics, phishing consequences and types of media where phishing occurs for these three nationalities. Chronological age and education did not influence the agreement ratings; therefore, the samples were demographically equivalent with regards to these variables. A logistic regression analysis was conducted to analyze the categorical variables and nationality data. Results based on self-report data indicated that (1) Indians were more likely to be phished than Americans, (2) Americans took protective actions more frequently than Indians by destroying old documents, and (3) Americans were more likely to notice the "padlock" security icon than either Indian or Chinese respondents. The potential implications of these results are discussed in terms of designing culturally sensitive anti-phishing solutions. (ID#:14-2548)
URL: http://dl.acm.org/citation.cfm?id=2600178
Publication Location: First HotSoS: Symposium and Bootcamp on the Science of Security 2014


NCSU - North Carolina State University
Topic: Developing a User Profile to Predict Phishing Susceptibility and Security Technology Acceptance
Title: One Phish, Two Phish, How to Avoid the Internet Phish: Analysis of Training Strategies to Detect Phishing Emails
Author(s): Zielinska, O., Tembe, R., Hong, K. W., Xe, G., Murphy-Hill, E. & Mayhorn, C. B.
Hard Problem: Human Behavior
Abstract: Not found (ID#:14-2549)
URL: Not found
Publication Location: Human Factors and Ergonomics Society.


NCSU - North Carolina State University
Topic: Software Security Metrics
Title: Using Templates to Elicit Implied Security Requirements from Functional Requirements - A Controlled Experiment
Author(s): M. Riaz, J. Slankas, J. King, L. Williams
Hard Problem: Metrics
Abstract: not found (ID#:14-2550)
URL: not found
Publication Location: International Symposium on Empirical Software Engineering and Measurement (ESEM) 2014


NCSU - North Carolina State University
Topic: Software Security Metrics
Title: Hidden in Plain Sight: Automatically Identifying Security Requirements from Natural Language Artifacts
Author(s): M. Riaz, J. Slankas, J. King, L. Williams.
Hard Problem: Metrics
Abstract: Appears to be the same as "Discovering Security Requirements from Natural Language"; from NCSU.edu: Natural language artifacts, such as requirements specifications, often explicitly state the security requirements for software systems. However, these artifacts may also imply additional security requirements that developers may overlook but should consider to strengthen the overall security of the system. The goal of this research is to aid requirements engineers in producing a more comprehensive and classified set of security requirements by (1) automatically identifying security-relevant sentences in natural language requirements artifacts, and (2) providing context-specific security requirements templates to help translate the security-relevant sentences into functional security requirements. Using machine learning techniques, we have developed a tool-assisted process that takes as input a set of natural language artifacts. Our process automatically identifies security-relevant sentences in the artifacts and classifies them according to the security objectives, either explicitly stated or implied by the sentences. We classified 10,963 sentences in six different documents from the healthcare domain and extracted corresponding security objectives. Our manual analysis showed that 46% of the sentences were security-relevant. Of these, 28% explicitly mention security while 72% of the sentences are functional requirements with security implications. Using our tool, we correctly predict and classify 82% of the security objectives for all the sentences (precision). We identify 79% of all security objectives implied by the sentences within the documents (recall). Based on our analysis, we develop context-specific templates that can be instantiated into a set of functional security requirements by filling in key information from security-relevant sentences. (ID#:14-2551)
URL: http://www4.ncsu.edu/~mriaz/docs/re14main-hidden-in-plain-sight-preprint.pdf
Publication Location: IEEE International Requirements Engineering Conference (RE) 2014
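The identification step described above can be illustrated with a deliberately naive sketch. The paper uses trained machine-learning classifiers; the fixed keyword list and example sentences here are purely hypothetical, serving only to make the task concrete:

```python
import re

# Hypothetical security lexicon -- the paper learns its classifier from
# labeled data rather than matching a fixed word list.
SECURITY_TERMS = {"password", "encrypt", "audit", "log", "authenticate",
                  "authorize", "access", "confidential"}

def security_relevant(sentence):
    """Flag a sentence as security-relevant if it mentions any term."""
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    return bool(words & SECURITY_TERMS)

requirements = [
    "The system shall encrypt patient records at rest.",
    "The user may change the display theme.",
]
print([security_relevant(s) for s in requirements])  # [True, False]
```

A keyword baseline like this would miss the implied requirements the paper targets (functional sentences with security implications), which is why the authors turn to learned classifiers and objective-specific templates.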


NCSU - North Carolina State University
Topic: Software Security Metrics
Title: Integration of Network and Application Access Control Configuration Verification
Author(s): Mohammed Alsaleh, and Ehab Al-Shaer
Hard Problem: Metrics
Abstract: Not found. See: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=5990556 (ID#:14-2552)
URL: Not found
Publication Location: Journal of Advanced Research 2014


NCSU - North Carolina State University
Topic: Software Security Metrics
Title: A Formal Framework for Network Security Design Synthesis
Author(s): Rahman, M. A. and Al-Shaer, E.
Hard Problem: Metrics
Abstract: Mohammad Ashiqur Rahman and Ehab Al-Shaer. 2013. A Formal Framework for Network Security Design Synthesis. In Proceedings of the 2013 IEEE 33rd International Conference on Distributed Computing Systems (ICDCS '13). IEEE Computer Society, Washington, DC, USA, 560-570. DOI=10.1109/ICDCS.2013.70 http://dx.doi.org/10.1109/ICDCS.2013.70
Due to the extensive use of Internet services and emerging security threats, most enterprise networks deploy varieties of security devices for controlling resource access based on organizational security requirements. These requirements are becoming more fine-grained, where access control depends on heterogeneous isolation patterns like access deny, trusted communication, and payload inspection. However, organizations are looking to design usable and optimal security configurations that can harden the network security within enterprise budget constraints. This requires analyzing various alternative security architectures in order to find a security design that satisfies the organizational security requirements as well as the business constraints. In this paper, we present ConfigSynth, an automated framework for synthesizing network security configurations by exploring various security design alternatives to provide an optimal solution. The main design alternatives include different kinds of isolation patterns for traffic flows in different segments of the network. ConfigSynth takes security requirements and business constraints along with the network topology as inputs. Then it synthesizes optimal and cost-effective security configurations satisfying the constraints. ConfigSynth also provides optimal placements of different security devices in the network according to the given network topology. ConfigSynth uses Satisfiability Modulo Theories (SMT) for modeling this synthesis problem. We demonstrate the scalability of the tool using simulated experiments. (ID#:14-2553)

URL: http://dl.acm.org/citation.cfm?id=2549698
Publication Location: International Conference on Distributed Computing Systems (ICDCS), 2013


NCSU - North Carolina State University
Topic: Software Security Metrics
Title: A Formal Approach for Network Security Management Based on Qualitative Risk Analysis
Author(s): Rahman, M. A. and Al-Shaer, E.
Hard Problem: Metrics
Abstract: Rahman, M.A; Al-Shaer, E., "A formal approach for network security management based on qualitative risk analysis," Integrated Network Management (IM 2013), 2013 IFIP/IEEE International Symposium on , vol., no., pp.244,251, 27-31 May 2013
The risk analysis is an important process for enforcing and strengthening efficient and effective security. Due to the significant growth of the Internet, application services, and associated security attacks, information professionals face challenges in assessing risk of their networks. The assessment of risk may vary with the enterprise's requirements. Hence, a generic risk analysis technique is suitable. Moreover, configuring a network with correct security policy is a difficult problem. The assessment of risk aids in realizing necessary security policy. Risk is a function of security threat and impact. Security threats depend on the traffic reachability. Security devices like firewalls are used to selectively allow or deny traffic. However, the connection between the network risk and the security policy is not easy to establish. A small modification in the network topology or in the security policy, can change the risk significantly. It is hard to manually follow a systematic process for configuring the network towards security hardening. Hence, an automatic generation of proper security controls, e.g., firewall rules and host placements in the network topology, is crucial to keep the overall security risk low. In this paper, we first present a declarative model for the qualitative risk analysis. We consider transitive reachability, i.e., reachability considering one or more intermediate hosts, in order to compute exposure of vulnerabilities. Next, we formalize our risk analysis model and the security requirements as a constraint satisfaction problem using the satisfiability modulo theories (SMT). A solution to the problem synthesizes necessary firewall policies and host placements. We also evaluate the scalability of the proposed risk analysis technique as well as the synthesis model. (ID#:14-2554)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6572992&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6572992
Publication Location: IFIP/IEEE International Symposium on Integrated Network Management (IM), IEEE, 2013
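The synthesis idea in this line of work can be illustrated with a deliberately tiny sketch. The flows, risk scores, and requirements below are invented, and plain enumeration stands in for the SMT encoding the authors actually use:

```python
# Illustrative stand-in for constraint-based firewall policy synthesis.
# Hypothetical flows, risk scores, and requirements; the cited papers
# encode this as an SMT problem rather than enumerating assignments.
from itertools import product

# Each candidate flow carries an assumed risk score if allowed.
flows = ["web->app", "web->db", "app->db", "internet->web"]
risk = {"web->app": 1, "web->db": 5, "app->db": 2, "internet->web": 3}

# Security/business requirements: these flows must be reachable.
required = {"web->app", "app->db", "internet->web"}

best_policy, best_risk = None, float("inf")
for bits in product([False, True], repeat=len(flows)):
    policy = dict(zip(flows, bits))
    # Constraint: every required flow is allowed.
    if not all(policy[f] for f in required):
        continue
    # Objective: minimize total exposed risk.
    total = sum(risk[f] for f, allowed in policy.items() if allowed)
    if total < best_risk:
        best_policy, best_risk = policy, total

print(best_policy)  # denies the risky, non-required web->db flow
print(best_risk)
```

An SMT solver replaces the exhaustive loop with symbolic search, which is what makes the approach scale to realistic topologies and rule sets.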


NCSU - North Carolina State University
Topic: Software Security Metrics
Title: ConfigSynth: A Formal Framework for Optimal Network Security Design
Author(s): Rahman, M. A. and Al-Shaer, E.
Hard Problem: Metrics
Abstract: Rahman, M.A; Al-Shaer, E., "A Formal Framework for Network Security Design Synthesis," Distributed Computing Systems (ICDCS), 2013 IEEE 33rd International Conference on , vol., no., pp.560,570, 8-11 July 2013, doi: 10.1109/ICDCS.2013.70
Due to the extensive use of Internet services and emerging security threats, most enterprise networks deploy varieties of security devices for controlling resource access based on organizational security requirements. These requirements are becoming more fine-grained, where access control depends on heterogeneous isolation patterns like access deny, trusted communication, and payload inspection. However, organizations are looking to design usable and optimal security configurations that can harden the network security within enterprise budget constraints. This requires analyzing various alternative security architectures in order to find a security design that satisfies the organizational security requirements as well as the business constraints. In this paper, we present ConfigSynth, an automated framework for synthesizing network security configurations by exploring various security design alternatives to provide an optimal solution. The main design alternatives include different kinds of isolation patterns for traffic flows in different segments of the network. ConfigSynth takes security requirements and business constraints along with the network topology as inputs. Then it synthesizes optimal and cost-effective security configurations satisfying the constraints. ConfigSynth also provides optimal placements of different security devices in the network according to the given network topology. ConfigSynth uses Satisfiability Modulo Theories (SMT) for modeling this synthesis problem. We demonstrate the scalability of the tool using simulated experiments. (ID#:14-2556)
URL: http://ieeeexplore.com/xpl/articleDetails.jsp?tp=&arnumber=6681625&queryText%3Dnetwork+security
Publication Location: Network & Distributed System Security Symposium (NDSS), February 2013 (Short paper)


NCSU - North Carolina State University
Topic: Software Security Metrics
Title: Objective Metrics for Firewall Security: A Holistic View
Author(s): Mohammed Noraden Alsaleh, Saeed Al-Haj and Ehab Al-Shaer
Abstract: Alsaleh, M.N.; Al-Haj, S.; Al-Shaer, E., "Objective metrics for firewall security: A holistic view," Communications and Network Security (CNS), 2013 IEEE Conference on , vol., no., pp.470,477, 14-16 Oct. 2013 doi: 10.1109/CNS.2013.6682762
Firewalls are the primary security devices in cyber defense. Yet, the security of firewalls depends on the quality of protection provided by the firewall policy. The lack of metrics and attack incident data makes measuring the security of firewall policies a challenging task. In this paper, we present a new set of quantitative metrics that can be used to measure, as well as compare, the security level of firewall policies in an enterprise network. The proposed metrics measure the risk of attacks on the network that is imposed due to weaknesses in the firewall policy. We also measure the feasibility of mitigating or removing that risk. The presented metrics are proven to be (1) valid as compared with the ground truth, and (2) practically useful as each one implies actionable security hardening. (ID#:14-2557)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6682762&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6682762
Publication Location: IEEE Conference on Communications and Network Security (CNS), 2013






UIUC – University of Illinois at Urbana-Champaign

UIUC Publications


These publications resulted from Lablet activities at this school and were listed in the quarterly reports to the government. Please direct any questions or concerns regarding these publications to research (at) securedatabank.net.


UIUC - University of Illinois at Urbana-Champaign
Topic: Static-Dynamic Analysis of Security Metrics for Cyber-Physical Systems
Title: Proving Abstractions of Dynamical Systems through Numerical Simulations
Author(s): Sayan Mitra
Hard Problem: Scalability and Composability, Metrics
Abstract: ACM Library
Sayan Mitra. 2014. Proving abstractions of dynamical systems through numerical simulations. In Proceedings of the 2014 Symposium and Bootcamp on the Science of Security (HotSoS '14). ACM, New York, NY, USA, Article 12, 9 pages. DOI=10.1145/2600176.2600188 http://doi.acm.org/10.1145/2600176.2600188

A key question that arises in rigorous analysis of cyberphysical systems under attack involves establishing whether or not the attacked system deviates significantly from the ideal allowed behavior. This is the problem of deciding whether or not the ideal system is an abstraction of the attacked system. A quantitative variation of this question can capture how much the attacked system deviates from the ideal. Thus, algorithms for deciding abstraction relations can help measure the effect of attacks on cyberphysical systems and to develop attack detection strategies. In this paper, we present a decision procedure for proving that one nonlinear dynamical system is a quantitative abstraction of another. Directly computing the reach sets of these nonlinear systems is undecidable in general, and reach set over-approximations do not give a direct way for proving abstraction. Our procedure uses (possibly inaccurate) numerical simulations and a model annotation to compute tight approximations of the observable behaviors of the system and then uses these approximations to decide on abstraction. We show that the procedure is sound and that it is guaranteed to terminate under reasonable robustness assumptions. (ID#:14-2558)
URL: http://dl.acm.org/citation.cfm?id=2600188
Publication Location: Hot Topics in the Science of Security (HotSoS), 2014
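The core idea of deciding abstraction from simulations can be conveyed with a much-simplified sketch. The scalar dynamics, injected bias, and tolerance below are invented; the paper's procedure handles nonlinear systems and comes with soundness and termination guarantees that this toy check does not:

```python
# Simplified illustration of simulation-based abstraction checking:
# decide whether an "attacked" scalar system stays within a delta tube
# around the ideal system's simulated behavior. All numbers are made up.

def simulate(f, x0, dt=0.01, steps=200):
    """Forward-Euler simulation of x' = f(x) from initial state x0."""
    xs, x = [x0], x0
    for _ in range(steps):
        x = x + dt * f(x)
        xs.append(x)
    return xs

ideal = lambda x: -x               # nominal (allowed) dynamics
attacked = lambda x: -x + 0.05     # same dynamics plus a small injected bias

x0 = 1.0
tube = simulate(ideal, x0)
obs = simulate(attacked, x0)

# Quantitative abstraction check: is the attacked behavior within delta
# of the ideal behavior at every sampled time?
delta = 0.1
deviation = max(abs(a - b) for a, b in zip(obs, tube))
print(deviation < delta)   # True: the attack's effect stays within the bound
```

The deviation value itself is the kind of quantitative measure of attack effect that the abstract describes; the paper additionally bloats simulations using model annotations so the containment check is sound.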


UIUC - University of Illinois at Urbana-Champaign
Topic: Static-Dynamic Analysis of Security Metrics for Cyber-Physical Systems
Title: Invariant Verification of Nonlinear Hybrid Automata Networks of Cardiac Cells
Author(s): Zhenqi Huang, Chuchu Fan, Alexandru Mereacre, Sayan Mitra and Marta Kwiatkowska
Hard Problem: Scalability and Composability, Metrics
Abstract: Available from Springer via the link listed below. (ID#:14-2559)
URL: http://link.springer.com/chapter/10.1007%2F978-3-319-08867-9_25#
Publication Location: Computer Aided Verification (CAV 2014)


UIUC - University of Illinois at Urbana-Champaign
Topic: Static-Dynamic Analysis of Security Metrics for Cyber-Physical Systems
Title: Decentralized Control of Switched Nested Systems with l2-induced Norm Performance
Author(s): Anshuman Mishra, Cedric Langbort, and Geir Dullerud
Hard Problem: Scalability and Composability, Metrics
Abstract: Not available (ID#:14-2560)
URL: Not available
Similar paper with the same authors and topic: "Optimal decentralized control of a stochastically switched system..."
Publication Location: Proceedings of the American Control Conference (ACC) 2014

UIUC - University of Illinois at Urbana-Champaign
Topic: Data Driven Security Models and Analysis
Title: An Experiment Using Factor Graph for Early Attack Detection
Author(s): P. Cao, K.-W. Chung, A. Slagell, Z. Kalbarczyk, R. Iyer
Hard Problem: Metrics, Resilient Architectures, Human Behavior
Abstract: Not Found (ID#:14-2561)
URL: Not found
Similar paper with the same authors and topic: "Preemptive Intrusion Detection" by P. Cao, K.-W. Chung, A. Slagell, Z. Kalbarczyk, R. Iyer
Publication Location: Workshop on Learning from Authoritative Security Experiment Results (LASER) 2014


UIUC - University of Illinois at Urbana-Champaign
Topic: A Hypothesis Testing Framework for Network Security
Title: Towards Correct Network Virtualization
Author(s): Soudeh Ghorbani and Brighten Godfrey
Hard Problem: Scalability and Composability, Policy-Governed Secure Collaboration, Metrics, Resilient Architectures
Abstract: ACM Digital Library
In SDN, the underlying infrastructure is usually abstracted for applications that can treat the network as a logical or virtual entity. Commonly, the "mappings" between virtual abstractions and their actual physical implementations are not one-to-one, e.g., a single "big switch" abstract object might be implemented using a distributed set of physical devices. A key question is, what abstractions could be mapped to multiple physical elements while faithfully preserving their native semantics? E.g., can an application developer always expect her abstract "big switch" to act exactly as a physical big switch, despite being implemented using multiple physical switches in reality?
We show that the answer to that question is "no" for existing virtual-to-physical mapping techniques: behavior can differ between the virtual "big switch" and the physical network, producing incorrect application-level behavior. We also show that those incorrect behaviors occur despite the fact that the most pervasive and commonly-used correctness invariants, such as per-packet consistency, are preserved throughout. These examples demonstrate that for practical notions of correctness, new systems and a new analytical framework are needed. We take the first steps by defining end-to-end correctness, a correctness condition that focuses on applications only, and outline a research vision to obtain virtualization systems with correct virtual-to-physical mappings. (ID#:14-2562)
URL: http://dl.acm.org/citation.cfm?id=2620754
Publication Location: ACM Workshop on Hot Topics in Software Defined Networks (HotSDN), August 2014



UIUC - University of Illinois at Urbana-Champaign
Topic: Science of Human Circumvention of Security
Title: Agent-Based Modeling of User Circumvention of Security
Author(s): V. Kothari, J. Blythe, S.W. Smith, and R. Koppel
Hard Problem: Human Behavior
Collaborating: In collaboration with Dartmouth College
Abstract: ACM Digital Library
Vijay Kothari, Jim Blythe, Sean Smith, and Ross Koppel. 2014. Agent-based modeling of user circumvention of security. In Proceedings of the 1st International Workshop on Agents and CyberSecurity (ACySE '14). ACM, New York, NY, USA, Article 5, 4 pages. DOI=10.1145/2602945.2602948 http://doi.acm.org/10.1145/2602945.2602948
Security subsystems are often designed with flawed assumptions arising from system designers' faulty mental models. Designers tend to assume that users behave according to some textbook ideal, and to consider each potential exposure/interface in isolation. However, fieldwork continually shows that even well-intentioned users often depart from this ideal and circumvent controls in order to perform daily work tasks, and that "incorrect" user behaviors can create unexpected links between otherwise "independent" interfaces. When it comes to security features and parameters, designers try to find the choices that optimize security utility---except these flawed assumptions give rise to an incorrect curve, and lead to choices that actually make security worse, in practice.
We propose that improving this situation requires giving designers more accurate models of real user behavior and how it influences aggregate system security. Agent-based modeling can be a fruitful first step here. In this paper, we study a particular instance of this problem, propose user-centric techniques designed to strengthen the security of systems while simultaneously improving the usability of them, and propose further directions of inquiry. (ID#:14-2563)
URL: http://dl.acm.org/citation.cfm?id=2602948
Publication Location: 1st International Workshop on Agents and CyberSecurity. ACM. May 2014


UIUC - University of Illinois at Urbana-Champaign
Topic: Theoretical Foundations of Threat Assessment by Inverse Optimal Control
Title: Equilibrium configurations of a Kirchhoff elastic rod under quasi-static manipulation
Author(s): Timothy Bretl and Zoe McCarthy
Abstract: Available from Springer via the link listed below. (ID#:14-2564)
URL: http://link.springer.com/chapter/10.1007%2F978-3-642-36279-8_5
ACM Library link for "Quasi-static manipulation of a Kirchhoff elastic rod based on a geometric analysis of equilibrium configurations": http://dl.acm.org/citation.cfm?id=2568347
Publication Location: Workshop on Algorithmic Foundations of Robotics 12




UIUC - University of Illinois at Urbana-Champaign
Topic: Science of Human Circumvention of Security
Title: Mechanics and manipulation of planar elastic kinematic chains
Author(s): Zoe McCarthy and Timothy Bretl
Abstract: McCarthy, Z.; Bretl, T., "Mechanics and manipulation of planar elastic kinematic chains," Robotics and Automation (ICRA), 2012 IEEE International Conference on , vol., no., pp.2798,2805, 14-18 May 2012
doi: 10.1109/ICRA.2012.6224693
In this paper, we study quasi-static manipulation of a planar kinematic chain with a fixed base in which each joint is a linearly-elastic torsional spring. The shape of this chain when in static equilibrium can be represented as the solution to a discrete-time optimal control problem, with boundary conditions that vary with the position and orientation of the last link. We prove that the set of all solutions to this problem is a smooth manifold that can be parameterized by a single chart. For manipulation planning, we show several advantages of working in this chart instead of in the space of boundary conditions, particularly in the context of a sampling-based planning algorithm. Examples are provided in simulation. (ID#:14-2565)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6224693&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6224693
Publication Location: IEEE International Conference on Robotics and Automation, 2012


UIUC - University of Illinois at Urbana-Champaign
Topic: Science of Human Circumvention of Security
Title: Experiments in quasi-static manipulation of a planar elastic rod
Author(s): D. Matthews and Timothy Bretl
Abstract: Matthews, D.; Bretl, T., "Experiments in quasi-static manipulation of a planar elastic rod," Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on , vol., no., pp.5420,5427, 7-12 Oct. 2012
doi: 10.1109/IROS.2012.6385876
In this paper, we introduce and experimentally validate a sampling-based planning algorithm for quasi-static manipulation of a planar elastic rod. Our algorithm is an immediate consequence of deriving a global coordinate chart of finite dimension that suffices to describe all possible configurations of the rod that can be placed in static equilibrium by fixing the position and orientation of each end. Hardware experiments confirm this derivation in the case where the "rod" is a thin, flexible strip of metal that has a fixed base and that is held at the other end by an industrial robot. We show an example in which a path of the robot that was planned by our algorithm causes the metal strip to move between given start and goal configurations while remaining in quasi-static equilibrium. (ID#:14-2566)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6385876&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6385876

Publication Location: IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012


UIUC - University of Illinois at Urbana-Champaign
Topic: Science of Human Circumvention of Security
Title: A brain-machine interface to navigate mobile robots along human-like paths amidst obstacles
Author(s): A. Akce, J. Norton, and T. Bretl
Abstract: Akce, A; Norton, J.; Bretl, T., "A brain-machine interface to navigate mobile robots along human-like paths amidst obstacles," Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on , vol., no., pp.4084,4089, 7-12 Oct. 2012
doi: 10.1109/IROS.2012.6386024
This paper presents an interface that allows a human user to specify a desired path for a mobile robot in a planar workspace with noisy binary inputs that are obtained at low bit-rates through an electroencephalograph (EEG). We represent desired paths as geodesics with respect to a cost function that is defined so that each path-homotopy class contains exactly one (local) geodesic. We apply max-margin structured learning to recover a cost function that is consistent with observations of human walking paths. We derive an optimal feedback communication protocol to select a local geodesic-equivalently, a path-homotopy class-using a sequence of noisy bits. We validate our approach with experiments that quantify both how well our learned cost function characterizes human walking data and how well human subjects perform with the resulting interface in navigating a simulated robot with EEG. (ID#:14-2567)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6386024&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6386024
Publication Location: IEEE/RSJ International Conference on Intelligent Robots and Systems, 2012



UIUC - University of Illinois at Urbana-Champaign
Topic: Science of Human Circumvention of Security
Title: Quasi-Static Manipulation of a Kirchhoff Elastic Rod based on a Geometric Analysis of Equilibrium Configurations
Author(s): T. Bretl and Z. McCarthy
Abstract: Timothy Bretl and Zoe McCarthy. 2014. Quasi-static manipulation of a Kirchhoff elastic rod based on a geometric analysis of equilibrium configurations. Int. J. Rob. Res. 33, 1 (January 2014), 48-68. DOI=10.1177/0278364912473169 http://dx.doi.org/10.1177/0278364912473169
Consider a thin, flexible wire of fixed length that is held at each end by a robotic gripper. Any curve traced by this wire when in static equilibrium is a local solution to a geometric optimal control problem, with boundary conditions that vary with the position and orientation of each gripper. We prove that the set of all local solutions to this problem over all possible boundary conditions is a smooth manifold of finite dimension that can be parameterized by a single chart. We show that this chart makes it easy to implement a sampling-based algorithm for quasi-static manipulation planning. We characterize the performance of such an algorithm with experiments in simulation. (ID#:14-2568)
URL: http://dl.acm.org/citation.cfm?id=2568347
Publication Location: International Journal of Robotics Research, 2014


UIUC - University of Illinois at Urbana-Champaign
Topic: Science of Human Circumvention of Security
Title: Mechanics and Quasi-Static Manipulation of Planar Elastic Kinematic Chains
Author(s): T. Bretl and Z. McCarthy
Abstract: Bretl, T.; McCarthy, Z., "Mechanics and Quasi-Static Manipulation of Planar Elastic Kinematic Chains," Robotics, IEEE Transactions on , vol.29, no.1, pp.1,14, Feb. 2013 DOI: 10.1109/TRO.2012.2218911
In this paper, we study quasi-static manipulation of a planar kinematic chain with a fixed base in which each joint is a linearly elastic torsional spring. The shape of this chain when in static equilibrium can be represented as the solution to a discrete-time optimal control problem, with boundary conditions that vary with the position and orientation of the last link. We prove that the set of all solutions to this problem is a smooth three-manifold that can be parameterized by a single chart. Empirical results in simulation show that straight-line paths in this chart are uniformly more likely to be feasible (as a function of distance) than straight-line paths in the space of boundary conditions. These results, which are consistent with an analysis of visibility properties, suggest that the chart we derive is a better choice of space in which to apply a sampling-based algorithm for manipulation planning. We describe such an algorithm and show that it is easy to implement. (ID#:14-2569)
URL: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6327684
Publication Location: IEEE Transactions on Robotics, 2013


UIUC - University of Illinois at Urbana-Champaign
Topic: Science of Human Circumvention of Security
Title: Inverse optimal control for deterministic continuous-time nonlinear systems
Author(s): M. Johnson, N. Aghasadeghi, and T. Bretl
Abstract: Johnson, M.; Aghasadeghi, N.; Bretl, T., "Inverse optimal control for deterministic continuous-time nonlinear systems," Decision and Control (CDC), 2013 IEEE 52nd Annual Conference on , vol., no., pp.2906,2913, 10-13 Dec. 2013
doi: 10.1109/CDC.2013.6760325
Inverse optimal control is the problem of computing a cost function with respect to which observed state and input trajectories are optimal. We present a new method of inverse optimal control based on minimizing the extent to which observed trajectories violate first-order necessary conditions for optimality. We consider continuous-time deterministic optimal control systems with a cost function that is a linear combination of known basis functions. We compare our approach with three prior methods of inverse optimal control. We demonstrate the performance of these methods by performing simulation experiments using a collection of nominal system models. We compare the robustness of these methods by analysing how they perform under perturbations to the system. To this purpose, we consider two scenarios: one in which we exactly know the set of basis functions in the cost function, and another in which the true cost function contains an unknown perturbation. Results from simulation experiments show that our new method is more computationally efficient than prior methods, performs similarly to prior approaches under large perturbations to the system, and better learns the true cost function under small perturbations. (ID#:14-2570)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6760325&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6760325
Publication Location: IEEE Conference on Decision and Control (CDC), 2013
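A toy instance conveys the flavor of inverse optimal control, though not this paper's algorithm: for a scalar discrete-time LQR problem (A=1, B=1, R=1, all assumed for illustration), recover the state-cost weight q whose predicted optimal behavior best matches an observed optimal gain:

```python
# Toy inverse optimal control: recover the cost weight of a scalar LQR
# problem from the observed optimal feedback gain, by minimizing the
# mismatch between the gain predicted by each candidate weight and the
# observation. (The cited work minimizes first-order optimality residuals
# for general nonlinear systems; this grid search is only illustrative.)

def lqr_gain(q, iters=500):
    """Optimal gain K = P/(P+1) from the discrete Riccati fixed point
    for x[k+1] = x[k] + u[k], cost sum of q*x^2 + u^2."""
    P = q
    for _ in range(iters):
        P = q + P - P * P / (P + 1.0)   # Riccati iteration
    return P / (P + 1.0)

q_true = 2.0
K_obs = lqr_gain(q_true)                # observed "expert" behavior

# Forward problem solved for each candidate weight; pick the best match.
candidates = [0.1 + 0.01 * i for i in range(500)]
q_hat = min(candidates, key=lambda q: abs(lqr_gain(q) - K_obs))
print(round(q_hat, 2))
```

The point of the example is the problem structure: the forward optimal control problem is solved (or its optimality conditions checked) inside the search for the cost function that explains the observed trajectories.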



UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: A Dynamic Game-Theoretic Approach to Resilient Control System Design for Cascading Failures
Author(s): Quanyan Zhu and Tamer Basar
Abstract: Quanyan Zhu and Tamer Basar. 2012. A dynamic game-theoretic approach to resilient control system design for cascading failures. In Proceedings of the 1st International Conference on High Confidence Networked Systems (HiCoNS '12). ACM, New York, NY, USA, 41-46. DOI=10.1145/2185505.2185512 http://doi.acm.org/10.1145/2185505.2185512
The migration of many current critical infrastructures, such as power grids and transportation systems, into open public networks has posed many challenges in control systems. Modern control systems face uncertainties not only from the physical world but also from the cyber space. In this paper, we propose a hybrid game-theoretic approach to investigate the coupling between cyber security policy and robust control design. We study in detail the case of cascading failures in industrial control systems and provide a set of coupled optimality criteria in the linear-quadratic case. This approach can be further extended to more general cases of parallel cascading failures. (ID#:14-2571)
URL: http://dl.acm.org/citation.cfm?id=2185512&dl=ACM&coll=DL&CFID=551960065&CFTOKEN=77203732
Publication Location: Conference on High Confidence Networked Systems (HiCoNS) at CPSWeek 2012



UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Game-Theoretic Methods for Distributed Management of Energy Resources in the Smart Grid
Author(s): Quanyan Zhu and Tamer Basar
Abstract: Quanyan Zhu; Jiangmeng Zhang; Sauer, P.W.; Dominguez-Garcia, A; Basar, T., "A game-theoretic framework for control of distributed renewable-based energy resources in smart grids," American Control Conference (ACC), 2012 , vol., no., pp.3623,3628, 27-29 June 2012
doi: 10.1109/ACC.2012.6315275
Renewable energy plays an important role in distributed energy resources in smart grid systems. Deployment and integration of renewable energy resources require an intelligent management to optimize their usage in the current power grid. In this paper, we establish a game-theoretic framework for modeling the strategic behavior of buses that are connected to renewable energy resources and study the equilibrium distributed power generation at each bus. Our framework takes a cross-layer approach, taking into account the economic factors as well as system stability issues at each bus. We propose an iterative algorithm to compute Nash equilibrium solutions based on a sequence of linearized games. Simulations and numerical examples are used to illustrate the algorithm and corroborate the results. (ID#:14-2572)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6315275&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6315275
Publication Location: Annual CMU Electricity Conference 2012
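The iterative equilibrium computation can be illustrated with a minimal stand-in. This is a textbook Cournot-style game with made-up parameters, not the paper's cross-layer grid model, but it shows best-response iteration converging to a Nash equilibrium:

```python
# Minimal Nash-equilibrium computation by best-response iteration.
# Two "buses" choose generation levels g1, g2; inverse demand and cost
# parameters are invented for illustration.

a, c = 10.0, 1.0   # demand intercept and marginal cost (assumed)

def best_response(g_other):
    """Maximize g * (a - g - g_other) - c * g over g >= 0 (closed form)."""
    return max(0.0, (a - c - g_other) / 2.0)

g1, g2 = 0.0, 0.0
for _ in range(100):                      # best-response iteration
    g1, g2 = best_response(g2), best_response(g1)

print(round(g1, 3), round(g2, 3))         # converges to (a-c)/3 = 3.0
```

At the fixed point, each player's choice is optimal against the other's, which is exactly the Nash condition; the cited paper performs an analogous iteration over a sequence of linearized games.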


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Agent-based cyber control strategy design for resilient control systems: Concepts, architecture and methodologies
Author(s): C. Rieger, Quanyan Zhu and Tamer Basar
Abstract: Rieger, C.; Quanyan Zhu; Basar, T., "Agent-based cyber control strategy design for resilient control systems: Concepts, architecture and methodologies," Resilient Control Systems (ISRCS), 2012 5th International Symposium on , vol., no., pp.40,47, 14-16 Aug. 2012
doi: 10.1109/ISRCS.2012.6309291
The implementation of automated regulatory control has been around since the middle of the last century through analog means. It has allowed engineers to operate the plant more consistently by focusing on overall operations and settings instead of individual monitoring of local instruments (inside and outside of a control room). A similar approach is proposed for cyber security, where current border-protection designs have been inherited from information technology developments that lack consideration of the high-reliability, high consequence nature of industrial control systems. Instead of an independent development, however, an integrated approach is taken to develop a holistic understanding of performance. This performance takes shape inside a multi-agent design, which provides a notional context to model highly decentralized and complex industrial process control systems, the nervous system of critical infrastructure. The resulting strategy will provide a framework for researching solutions to security and unrecognized interdependency concerns with industrial control systems. (ID#:14-2573)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6309291&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Ftp%3D%26arnumber%3D6309291
Publication Location: Resilient Control Systems 2012


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Dependable demand response management in smart grid: A Stackelberg game approach
Author(s): S. Maharjan, Quanyan Zhu, Y. Zhang, S. Gjessing, and Tamer Basar
Abstract: Maharjan, S.; Quanyan Zhu; Yan Zhang; Gjessing, S.; Basar, T., "Dependable Demand Response Management in the Smart Grid: A Stackelberg Game Approach," Smart Grid, IEEE Transactions on , vol.4, no.1, pp.120,132, March 2013
doi: 10.1109/TSG.2012.2223766
Demand Response Management (DRM) is a key component in the smart grid to effectively reduce power generation costs and user bills. However, it has been an open issue to address the DRM problem in a network of multiple utility companies and consumers where every entity is concerned about maximizing its own benefit. In this paper, we propose a Stackelberg game between utility companies and end-users to maximize the revenue of each utility company and the payoff of each user. We derive analytical results for the Stackelberg equilibrium of the game and prove that a unique solution exists. We develop a distributed algorithm which converges to the equilibrium with only local information available for both utility companies and end-users. Though DRM helps to facilitate the reliability of power supply, the smart grid can be susceptible to privacy and security issues because of communication links between the utility companies and the consumers. We study the impact of an attacker who can manipulate the price information from the utility companies. We also propose a scheme based on the concept of shared reserve power to improve the grid reliability and ensure its dependability. (ID#:14-2574)
URL: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6464552
Publication Location: IEEE Trans. on Smart Grid, Special Issue on Smart Grid Communication Systems: Reliability, Dependability & Performance, 2012
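The leader-follower structure of a Stackelberg pricing game, as in the abstract above, can be sketched in a few lines. The user utility, the quadratic generation cost, and the grid search below are hypothetical simplifications for illustration, not the paper's model (which has multiple utility companies and a provably unique equilibrium):

```python
# Hypothetical single-utility Stackelberg pricing sketch: the utility
# (leader) announces a price p; each user (follower) with preference k
# buys x*(p) = max(k/p - 1, 0), the maximizer of k*ln(1+x) - p*x.
# The leader anticipates these responses and grid-searches p to
# maximize revenue minus a quadratic generation cost.

def user_demand(k, p):
    # Follower best response: argmax_x k*ln(1+x) - p*x over x >= 0.
    return max(k / p - 1.0, 0.0)

def leader_profit(p, prefs, cost=0.1):
    demand = sum(user_demand(k, p) for k in prefs)
    return p * demand - cost * demand ** 2

def stackelberg_price(prefs, grid=None):
    if grid is None:
        grid = [0.05 * i for i in range(1, 60)]  # candidate prices 0.05 .. 2.95
    return max(grid, key=lambda p: leader_profit(p, prefs))

prefs = [2.0, 3.0]           # made-up user preference parameters
p_star = stackelberg_price(prefs)
print(p_star, [user_demand(k, p_star) for k in prefs])
```

The key Stackelberg feature is that the leader optimizes over the followers' best-response map rather than over their raw actions.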


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Resilience in consensus dynamics via competitive interconnections
Author(s): Bahman Gharesifard and Tamer Basar
Abstract: We show that competitive engagements within the agents of a network can result in resilience in consensus dynamics with respect to the presence of an adversary. We first show that interconnections with an adversary, with linear dynamics, can make the consensus dynamics diverge, or drive its evolution to a state different from the average. We then introduce a second network, interconnected with the original network via an engagement topology. This network has no information about the adversary and each agent in it has only access to partial information about the state of the other network. We introduce a dynamics on the coupled network which corresponds to a saddle-point dynamics of a certain zero-sum game and is distributed over each network, as well as the engagement topology. We show that, by appropriately choosing a design parameter corresponding to the competition between these two networks, the coupled dynamics can be made resilient with respect to the presence of the adversary. Our technical approach combines notions of graph theory and stable perturbations of nonsymmetric matrices. We demonstrate our results on an example of kinematic-based flocking in presence of an adversary. (ID#:14-2575)
URL: http://www.ifac-papersonline.net/Detailed/56775.html
Publication Location: IFAC Workshop on Distributed Estimation and Control in Networked Systems 2012


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: SODEXO: A system framework for deployment and exploitation of deceptive honeybots in social networks
Author(s): Q. Zhu, A. Clark, R.Poovendran, T. Basar
Abstract: As social networking sites such as Facebook and Twitter are becoming increasingly popular, a growing number of malicious attacks, such as phishing and malware, are exploiting them. Among these attacks, social botnets have sophisticated infrastructure that leverages compromised users accounts, known as bots, to automate the creation of new social networking accounts for spamming and malware propagation. Traditional defense mechanisms are often passive and reactive to non-zero-day attacks. In this paper, we adopt a proactive approach for enhancing security in social networks by infiltrating botnets with honeybots. We propose an integrated system named SODEXO which can be interfaced with social networking sites for creating deceptive honeybots and leveraging them for gaining information from botnets. We establish a Stackelberg game framework to capture strategic interactions between honeybots and botnets, and use quantitative methods to understand the tradeoffs of honeybots for their deployment and exploitation in social networks. We design a protection and alert system that integrates both microscopic and macroscopic models of honeybots and optimally determines the security strategies for honeybots. We corroborate the proposed mechanism with extensive simulations and comparisons with passive defenses. (ID#:14-2576)
URL: http://arxiv.org/abs/1207.5844
Publication Location: IEEE International Conference on Computer Communications (INFOCOM) 2013


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Deceptive routing in relay networks
Author(s): A. Clark, Q. Zhu, R. Poovendran, T. Basar
Abstract: Available from Springer via link listed below. (ID#:14-2577)
URL: http://link.springer.com/chapter/10.1007%2F978-3-642-34266-0_10
Publication Location: Conference on Decision and Game Theory for Security (GameSec) 2012


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Game-theoretic analysis of node capture and cloning attack with multiple attackers in wireless sensor networks
Author(s): Q. Zhu, L. Bushnell, T. Basar
Abstract: Quanyan Zhu; Bushnell, L.; Basar, T., "Game-theoretic analysis of node capture and cloning attack with multiple attackers in wireless sensor networks," Decision and Control (CDC), 2012 IEEE 51st Annual Conference on , vol., no., pp.3404,3411, 10-13 Dec. 2012
doi: 10.1109/CDC.2012.6426481
Wireless sensor networks are subject to attacks such as node capture and cloning, where an attacker physically captures sensor nodes, replicates the nodes, which are deployed into the network, and proceeds to take over the network. In this paper, we develop models for such an attack when there are multiple attackers in a network, and formulate multi-player games to model the noncooperative strategic behavior between the attackers and the network. We consider two cases: a static case where the attackers' node capture rates are time-invariant and the network's clone detection/revocation rate is a linear function of the state, and a dynamic case where the rates are general functions of time. We characterize Nash equilibrium solutions for both cases and derive equilibrium strategies for the players. In the static case, we study both the single-attacker and the multi-attacker games within an optimization framework, provide conditions for the existence of Nash equilibria and characterize them in closed forms. In the dynamic case, we study the underlying multi-person differential game under an open-loop information structure and provide a set of conditions to characterize the open-loop Nash equilibrium. We show the equivalence of the Nash equilibrium for the multi-person game to the saddle-point equilibrium between the network and the attackers as a team. We illustrate our results with numerical examples. (ID#:14-2578)
URL: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6426481
Publication Location: IEEE Conference on Decision and Control (CDC) 2012


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Deceptive routing games
Author(s): Q. Zhu, A. Clark, R. Poovendran, T. Basar
Abstract: Quanyan Zhu; Clark, A; Poovendran, R.; Basar, T., "Deceptive routing games," Decision and Control (CDC), 2012 IEEE 51st Annual Conference on , vol., no., pp.2704,2711, 10-13 Dec. 2012
doi: 10.1109/CDC.2012.6426515
The use of a shared medium leaves wireless networks, including mobile ad hoc and sensor networks, vulnerable to jamming attacks. In this paper, we introduce a jamming defense mechanism for multiple-path routing networks based on maintaining deceptive flows, consisting of fake packets, between a source and a destination. An adversary observing a deceptive flow will expend energy on disrupting the fake packets, allowing the real data packets to arrive at the destination unharmed. We model this deceptive flow-based defense within a multi-stage stochastic game framework between the network nodes, which choose a routing path and flow rates for the real and fake data, and an adversary, which chooses which fraction of each flow to target at each hop. We develop an efficient, distributed procedure for computing the optimal routing at each hop and the optimal flow allocation at the destination. Furthermore, by studying the equilibria of the game, we quantify the benefit arising from deception, as reflected in an increase in the valid throughput. Our results are demonstrated via a simulation study. (ID#:14-2579)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6426515&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F6416474%2F6425800%2F06426515.pdf%3Farnumbe...
Publication Location: IEEE Conference on Decision and Control (CDC) 2012
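The benefit of deceptive flows described in the abstract above can be shown with a deliberately crude model. The jammer policy, the flow rates, and the `jam_fraction` parameter are all made-up assumptions for illustration, not the paper's multi-stage stochastic game:

```python
# Toy illustration of deception benefit: a jammer with limited energy
# drops a fraction of whichever flow it targets, and it greedily
# targets the flow with the highest rate. Maintaining a high-rate fake
# flow therefore diverts the jammer away from the real data.

def jammer_target(flows):
    # flows: dict name -> rate; the jammer attacks the largest flow.
    return max(flows, key=flows.get)

def valid_throughput(real_rate, fake_rate, jam_fraction=0.8):
    flows = {"real": real_rate, "fake": fake_rate}
    target = jammer_target(flows)
    delivered = dict(flows)
    delivered[target] *= (1.0 - jam_fraction)  # jammed packets are lost
    return delivered["real"]  # only real packets count as valid throughput

no_deception = valid_throughput(real_rate=10.0, fake_rate=0.0)
with_deception = valid_throughput(real_rate=10.0, fake_rate=12.0)
print(no_deception, with_deception)
```

Without a fake flow the jammer hits the real traffic; with a larger fake flow the real packets arrive untouched, which is the increase in valid throughput the abstract quantifies at equilibrium.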


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: GUIDEX: A game-theoretic Incentive-Based mechanism for intrusion detection networks
Author(s): Q. Zhu, C. Fung, R. Boutaba, T. Basar
Abstract: Quanyan Zhu; Fung, C.; Boutaba, R.; Basar, T., "GUIDEX: A Game-Theoretic Incentive-Based Mechanism for Intrusion Detection Networks," Selected Areas in Communications, IEEE Journal on , vol.30, no.11, pp.2220,2230, December 2012
doi: 10.1109/JSAC.2012.121214
Traditional intrusion detection systems (IDSs) work in isolation and can be easily compromised by unknown threats. An intrusion detection network (IDN) is a collaborative IDS network intended to overcome this weakness by allowing IDS peers to share detection knowledge and experience, and hence improve the overall accuracy of intrusion assessment. In this work, we design an IDN system, called GUIDEX, using game-theoretic modeling and trust management for peers to collaborate truthfully and actively. We first describe the system architecture and its individual components, and then establish a game-theoretic framework for the resource management component of GUIDEX. We establish the existence and uniqueness of a Nash equilibrium under which peers can communicate in a reciprocal incentive compatible manner. Based on the duality of the problem, we develop an iterative algorithm that converges geometrically to the equilibrium. Our numerical experiments and discrete event simulation demonstrate the convergence to the Nash equilibrium and the security features of GUIDEX against free riders, dishonest insiders and DoS attacks. (ID#:14-2580)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6354280&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel5%2F49%2F6354264%2F06354280.pdf%3Farnumber%3D6...
Publication Location: IEEE Journal on Selected Areas in Communications (JSAC) Special Issue on Economics of Communication Networks & Systems


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: An Impact-Aware Defense against Stuxnet
Author(s): A. Clark, Q. Zhu, R. Poovendran and T. Basar
Abstract: Clark, A; Quanyan Zhu; Poovendran, R.; Basar, T., "An impact-aware defense against Stuxnet," American Control Conference (ACC), 2013 , vol., no., pp.4140,4147, 17-19 June 2013
doi: 10.1109/ACC.2013.6580475
The Stuxnet worm is a sophisticated malware designed to sabotage industrial control systems (ICSs). It exploits vulnerabilities in removable drives, local area communication networks, and programmable logic controllers (PLCs) to penetrate the process control network (PCN) and the control system network (CSN). Stuxnet was successful in penetrating the control system network and sabotaging industrial control processes since the targeted control systems lacked security mechanisms for verifying message integrity and source authentication. In this work, we propose a novel proactive defense system framework, in which commands from the system operator to the PLC are authenticated using a randomized set of cryptographic keys. The framework leverages cryptographic analysis and control- and game-theoretic methods to quantify the impact of malicious commands on the performance of the physical plant. We derive the worst-case optimal randomization strategy as a saddle-point equilibrium of a game between an adversary attempting to insert commands and the system operator, and show that the proposed scheme can achieve arbitrarily low adversary success probability for a sufficiently large number of keys. We evaluate our proposed scheme, using a linear-quadratic regulator (LQR) as a case study, through theoretical and numerical analysis. (ID#:14-2581)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6580475&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6580475
Publication Location: American Control Conference (ACC) 2013



UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Price-based distributed control for networked plug-in electric vehicles
Author(s): B. Gharesifard, T. Basar, and A.D. Dominguez-Garcia
Abstract: Gharesifard, B.; Basar, T.; Dominguez-Garcia, AD., "Price-based distributed control for networked plug-in electric vehicles," American Control Conference (ACC), 2013 , vol., no., pp.5086,5091, 17-19 June 2013 doi: 10.1109/ACC.2013.6580628
We introduce a framework for controlling the charging and discharging processes of plug-in electric vehicles (PEVs) via pricing strategies. Our framework consists of a hierarchical decision-making setting with two layers, which we refer to as the aggregator layer and the retail market layer. In the aggregator layer, there is a set of aggregators that are requested (and will be compensated) to provide a certain amount of energy over a period of time. In the retail market layer, the aggregator offers some price for the energy that PEVs may provide; the objective is to choose a pricing strategy to incentivize the PEVs so that they collectively provide the amount of energy that the aggregator has been asked for. The focus of this paper is on the decision-making process that takes place in the retail market layer, where we assume that each individual PEV is a price-anticipating decision-maker. We cast this decision-making process as a game, and provide conditions on the pricing strategy of the aggregator under which this game has a unique Nash equilibrium. We propose a distributed consensus-based iterative algorithm through which the PEVs can seek this Nash equilibrium. Numerical simulations are included to illustrate our results. (ID#:14-2582)
URL: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6580628
Publication Location: American Control Conference (ACC) 2013


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Toward a theory of multi-resolution games
Author(s): Q. Zhu and T. Basar
Abstract: Modern critical infrastructures are highly integrated systems composed of many complex interactions between different system modules or agents, including cyber and physical components as well as human factors. Their growing complexity demands novel techniques. (ID#:14-2583)
URL: https://wiki.engr.illinois.edu/download/attachments/229421613/ZhuBasar_SIAM2013.pdf?version=1&modificationDate=1383488188000
Publication Location: 2013 SIAM Conference on Control and Its Applications (CT13)


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Resilient distributed control of multi-agent cyber-physical systems
Author(s): Q. Zhu, L. Bushnell, T. Basar
Abstract: Available from Springer via link listed below. (ID#:14-2584)
URL: http://link.springer.com/chapter/10.1007%2F978-3-319-01159-2_16
Publication Location: CISS Workshop on Cyber-Physical Systems 2013


Title: A price-based approach to control of networked distributed energy resources
(Alternate title: Price-Based Distributed Control for Networked Plug-in Electric Vehicles)
Author(s): B. Gharesifard, T. Basar, and A. D. Dominguez-Garcia
Abstract: No abstract available. (ID#:14-2585)
URL: http://energy.ece.illinois.edu/aledan/publications_files/ACC_2013.pdf
Publication Location: Special Issue on Cyber-Physical-Systems, IEEE Transactions on Automatic Control


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Resilient Control of Cyber-Physical Systems against Denial-of-Service Attacks
Author(s): Y. Yuan, Q. Zhu, F. Sun, Q. Wang, and T. Basar
Abstract: Yuan Yuan; Quanyan Zhu; Fuchun Sun; Qinyi Wang; Basar, T., "Resilient control of cyber-physical systems against Denial-of-Service attacks," Resilient Control Systems (ISRCS), 2013 6th International Symposium on , vol., no., pp.54,59, 13-15 Aug. 2013
doi: 10.1109/ISRCS.2013.6623750
The integration of control systems with modern information technologies has posed potential security threats for critical infrastructures. The communication channels of the control system are vulnerable to malicious jamming and Denial-of-Service (DoS) attacks, which lead to severe time-delays and degradation of control performances. In this paper, we design resilient controllers for cyber-physical control systems under DoS attacks. We establish a coupled design framework which incorporates the cyber configuration policy of Intrusion Detection Systems (IDSs) and the robust control of dynamical system. We propose design algorithms based on value iteration methods and linear matrix inequalities for computing the optimal cyber security policy and control laws. We illustrate the design principle with an example from power systems. The results are corroborated by numerical examples and simulations. (ID#:14-2586)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6623750&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel7%2F6601033%2F6623739%2F06623750.pdf%3Farnumber%3D6623750
Publication Location: International Symposium on Resilient Control Systems (ISRCS 2013)


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Hierarchical Architectures of Resilient Control Systems: Concepts, Metrics and Design Principles
Author(s): Q. Zhu, D. Wei, K. Ji, C. Rieger, and T. Basar
Abstract: Security of control systems is becoming a pivotal concern in critical national infrastructures such as the power grid and nuclear plants. In this paper, we adopt a hierarchical viewpoint to these security issues, addressing security concerns at each level and emphasizing a holistic cross-layer philosophy for developing security solutions. We propose a bottom-up framework that establishes a model from the physical and control levels to the supervisory level, incorporating concerns from network and communication levels. We show that the game-theoretical approach can yield cross-layer security strategy solutions to the cyber-physical systems. (ID#:14-2587)
URL: See: "A Hierarchical Security Architecture for Cyber-Physical Systems"
Publication Location: Special Issue of the IEEE Transactions on Cybernetics: "Resilient Control Architectures and Systems"


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Distributed optimization by myopic strategic interactions and the price of heterogeneity
Author(s): B. Gharesifard, B. Touri, T. Basar, and C. Langbort
Abstract: Gharesifard, B.; Touri, B.; Basar, T.; Langbort, C., "Distributed optimization by myopic strategic interactions and the price of heterogeneity," Decision and Control (CDC), 2013 IEEE 52nd Annual Conference on , vol., no., pp.1174,1179, 10-13 Dec. 2013
doi: 10.1109/CDC.2013.6760041
This paper is concerned with the tradeoffs between low-cost heterogeneous designs and optimality. We study a class of constrained myopic strategic games on networks which approximate the solutions to a constrained quadratic optimization problem; the Nash equilibria of these games can be found using best-response dynamical systems, which only use local information. The notion of price of heterogeneity captures the quality of our approximations. This notion relies on the structure and the strength of the interconnections between agents. We study the stability properties of these dynamical systems and demonstrate their complex characteristics, including abundance of equilibria on graphs with high sparsity and heterogeneity. We also introduce the novel notions of social equivalence and social dominance, and show some of their interesting implications, including their correspondence to consensus. Finally, using a classical result of Hirsch [1], we fully characterize the stability of these dynamical systems for the case of star graphs with asymmetric interactions. Various examples illustrate our results. (ID#:14-2588)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6760041&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6760041
Publication Location: IEEE Conference on Decision and Control (CDC), December 2013


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Game-theoretic methods for robustness, security and resilience of cyber-physical control systems: Games-in-games principle for optimal cross-layer resilient control systems
Author(s): Q. Zhu and T. Basar
Hard Problem: Resilient Architecture, Policy-Governed Secure Collaboration
Abstract: Modern critical infrastructures are highly integrated systems composed of many complex interactions between different system modules or agents, including cyber and physical components as well as human factors. Their growing complexity demands novel design techniques for scalable and efficient control and computations for providing system security and resilience. This dissertation develops new game-theoretic frameworks for addressing security and resilience problems residing at multiple layers of the cyber-physical systems including robust and resilient control, secure network routing and management of information security and smart grid energy systems. Hybrid distributed reinforcement learning algorithms are developed as practical modeling tools for defense systems with different levels of rationality and intelligence at different times. The learning algorithms enable online computations of defense strategies, such as routing decisions and configuration policies, for nonzero-sum security games with incomplete information. In addition, games-in-games frameworks are proposed for system-wide modeling of complex hierarchical systems, where games played at different levels interact through their outcomes, action spaces, and costs. This concept is applied to robust and resilient control of power systems in which a zero-sum differential game for physical robust control design is nested in and coupled with a zero-sum stochastic game for security policy design. At the networking layer of the system, multi-hop secure routing games also exhibit the games-in-games structure, and their equilibrium solutions are characterized by backward induction solving a sequence of nested games. This approach leads to a distributed secure routing protocol that enables the resilience of network routing and self-recovery mechanisms in the face of adversarial attacks.
Finally, in order to address emerging energy management issues of the smart grid, we establish a fundamental game-theoretic framework for analyzing system equilibrium under distributed generation, renewable energy sources and active participation of utility users. Furthermore, we develop a novel game framework and its equilibrium solution, named mirror Stackelberg equilibrium, for modeling the demand response management in the smart grid. This approach enables quantitative study of the value of demand response brought by emerging smart grid technologies as compared to the current supply-side economic dispatch model. It facilitates fundamental understanding of pricing, energy policies and infrastructural investment decisions in future highly interconnected and interdependent energy systems. Examples from power systems, cognitive radio communication networks, and the smart grid are used as driving examples for illustrating new solution concepts, distributed algorithms and analytical techniques presented in this dissertation. (ID#:14-2589)
URL: http://oatd.org/oatd/record?record=handle%5C:2142%5C%2F45479
Publication Location: IEEE Control Systems Magazine



UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: A three-stage Colonel Blotto game with applications to cyberphysical security
Author(s): A. Gupta, G. Schwartz, C. Langbort, S. Sastry, and T. Basar
Hard Problem: Resilient Architecture, Policy-Governed Secure Collaboration
Abstract: Gupta, A; Schwartz, G.; Langbort, C.; Sastry, S.S.; Basar, T., "A three-stage Colonel Blotto game with applications to cyberphysical security," American Control Conference (ACC), 2014 , vol., no., pp.3820,3825, 4-6 June 2014 doi: 10.1109/ACC.2014.6859164
We consider a three-stage three-player complete-information Colonel Blotto game in this paper, in which the first two players fight against a common adversary. Each player is endowed with a certain amount of resources at the beginning of the game, and the number of battlefields on which a player and the adversary fight is specified. The first two players are allowed to form a coalition if it improves their payoffs. In the first stage, the first two players may add battlefields and incur costs. In the second stage, the first two players may transfer resources among each other. The adversary observes this transfer, and decides on the allocation of its resources to the two battles with the players. In the third stage, the adversary and the other two players fight on the updated number of battlefields and receive payoffs. We characterize the subgame-perfect Nash equilibrium (SPNE) of the game in various parameter regions. In particular, we show that there are certain parameter regions in which, if the players act according to the SPNE strategies, then either (i) one of the first two players adds battlefields and transfers resources to the other player (a coalition is formed), or (ii) there is no addition of battlefields and no transfer of resources (no coalition is formed). We discuss the implications of the results on resource allocation for securing cyberphysical systems. (ID#:14-2590)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6859164&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel7%2F6849600%2F6858556%2F06859164.pdf%3Farnumber%3D6859164
Publication Location: American Control Conference (ACC) 2014
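The underlying Colonel Blotto contest can be made concrete with a small brute-force computation. This sketch covers only the basic one-shot game with integer allocations, finding a maximin pure strategy for the stronger side; the paper's three-stage coalition model is far richer, and every parameter below is invented for illustration:

```python
# Brute-force maximin analysis of a tiny discrete Colonel Blotto game:
# each side splits an integer budget over battlefields, and a
# battlefield is won by allocating strictly more there (ties score 0).

from itertools import combinations_with_replacement

def allocations(budget, fields):
    # All ways to split an integer budget over `fields` battlefields,
    # enumerated via sorted "cut points" in [0, budget].
    for cuts in combinations_with_replacement(range(budget + 1), fields - 1):
        yield tuple([cuts[0]]
                    + [b - a for a, b in zip(cuts, cuts[1:])]
                    + [budget - cuts[-1]])

def payoff(alloc_a, alloc_b):
    # Battlefields won by A minus battlefields won by B.
    score = 0.0
    for a, b in zip(alloc_a, alloc_b):
        score += 1.0 if a > b else (0.0 if a == b else -1.0)
    return score

# A has 6 units, B has 4, over 3 battlefields: find A's pure maximin play.
best = max(allocations(6, 3),
           key=lambda a: min(payoff(a, b) for b in allocations(4, 3)))
value = min(payoff(best, b) for b in allocations(4, 3))
print(best, value)
```

Spreading evenly, (2, 2, 2), guarantees A a net win of one battlefield here: B can always concentrate enough to take one field, but never two, against that allocation.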


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Stability properties of infected networks with low curing rates
Author(s): A. Khanafer, T. Basar, and B. Gharesifard
Hard Problem: Resilient Architecture, Policy-Governed Secure Collaboration
Abstract: Khanafer, A; Basar, T.; Gharesifard, B., "Stability properties of infected networks with low curing rates," American Control Conference (ACC), 2014 , vol., no., pp.3579,3584, 4-6 June 2014 doi: 10.1109/ACC.2014.6859418
In this work, we analyze the stability properties of a recently proposed dynamical system that describes the evolution of the probability of infection in a network. We show that this model can be viewed as a concave game among the nodes. This characterization allows us to provide a simple condition, that can be checked in a distributed fashion, for stabilizing the origin. When the curing rates at the nodes are low, a residual infection stays within the network. Using properties of Hurwitz Metzler matrices, we show that the residual epidemic state is locally exponentially stable. We also demonstrate that this state is globally asymptotically stable. Furthermore, we investigate the problem of stabilizing the network when the curing rates of a limited number of nodes can be controlled. In particular, we characterize the number of controllers required for a class of undirected graphs. Several simulations demonstrate our results. (ID#:14-2591)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6859418&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6859418
Publication Location: American Control Conference (ACC) 2014
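The two regimes in the abstract above, a residual infection for low curing rates versus stabilization of the origin for high ones, can be reproduced by simulating a standard networked SIS model with forward-Euler steps. The graph, rates, and step size below are illustrative assumptions, not parameters from the paper:

```python
# Euler simulation of networked SIS infection dynamics:
#   dp_i/dt = (1 - p_i) * beta * sum_j A[i][j] * p[j] - delta_i * p_i
# where p_i is node i's infection probability. With low curing rates
# delta_i a residual epidemic state persists; with high rates the
# infection-free origin is reached.

def simulate_sis(adj, beta, delta, p0, dt=0.01, steps=20000):
    p = list(p0)
    n = len(p)
    for _ in range(steps):
        rates = [(1 - p[i]) * beta * sum(adj[i][j] * p[j] for j in range(n))
                 - delta[i] * p[i] for i in range(n)]
        # Euler step, clamped so probabilities stay in [0, 1].
        p = [min(max(p[i] + dt * rates[i], 0.0), 1.0) for i in range(n)]
    return p

triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]  # 3-node complete graph
low_cure = simulate_sis(triangle, beta=1.0, delta=[0.5] * 3, p0=[0.3] * 3)
high_cure = simulate_sis(triangle, beta=1.0, delta=[4.0] * 3, p0=[0.3] * 3)
print(low_cure, high_cure)
```

For this symmetric example the residual state can be checked by hand: at equilibrium (1 - p) * 2p = 0.5 p, so p = 0.75 at every node, while delta = 4 exceeds beta times the spectral radius of the graph and drives the infection to zero.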


UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Control over lossy networks: A dynamic game approach
Author(s): J. Moon and T. Basar
Hard Problem: Resilient Architecture, Policy-Governed Secure Collaboration
Abstract: Jun Moon; Basar, T., "Control over TCP-like lossy networks: A dynamic game approach," American Control Conference (ACC), 2013 , vol., no., pp.1578,1583, 17-19 June 2013 doi: 10.1109/ACC.2013.6580060
This paper considers H∞ optimal control of LTI systems where the loop is closed over TCP-like lossy networks. Following a game-theoretic formulation of the problem, we first obtain an explicit H∞ controller. We then analyze the infinite-horizon behavior of the H∞ controller. In particular, we provide necessary and sufficient conditions in terms of the packet drop probability and the H∞ disturbance attenuation parameter under which the optimal controller is unique and stabilizes the closed-loop system in the mean-square sense. It is also shown that these conditions are coupled; therefore, they cannot be determined independently. A numerical example is presented to illustrate the main results. (ID#:14-2592)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6580060&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6580060
Publication Location: American Control Conference (ACC) 2013

UIUC - University of Illinois at Urbana-Champaign
Topic: Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Title: Actors Programming for the Mobile Cloud
Author(s): Gul Agha
Hard Problem: Resilient Architecture, Policy-Governed Secure Collaboration
Abstract: No abstract available; the linked copy (hosted at UCLA) carries a different title but the same content. (ID#:14-2593)
URL: http://www.cs.ucla.edu/~palsberg/course/cs239/papers/karmani-agha.pdf
Publication Location: International Symposium on Parallel and Distributed Computing 2014



UIUC - University of Illinois at Urbana-Champaign
Topic: The Science of Summarizing Systems: Generating Security Properties Using Data Mining and Formal Analysis
Title: Using Control-Flow Techniques in a Security Context: A Survey on Common Prototypes and Their Common Weakness
Author(s): Seeger, M.M.
Abstract: Seeger, M.M., "Using Control-Flow Techniques in a Security Context: A Survey on Common Prototypes and Their Common Weakness," Network Computing and Information Security (NCIS), 2011 International Conference on , vol.2, no., pp.133,137, 14-15 May 2011
doi: 10.1109/NCIS.2011.126
Practical approaches that use control-flow techniques to detect changes in the control-flow of a program have been the subject of many scientific works. This work focuses on three common tools that employ control- and data-flow analysis to detect alterations, and reveals their common weakness: an inability to react directly to a dynamic change in control-flow. With a general focus on static analysis of binaries or source code, dynamic changes in the execution flow cannot be detected. To emphasize this shortcoming of static analysis, we present an approach for dynamically changing a program's control-flow and validate it with a proof of concept. (ID#:14-2594)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5948809&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5948809
Publication Location: Network Computing and Information Security (NCIS), 2011




UIUC - University of Illinois at Urbana-Champaign
Topic: Scalable Methods for Security Against Distributed Attacks
Title: Parameterized concurrent multi-party session types
Author(s): Minas Charalambides, Peter Dinges, and Gul Agha
Abstract: From arXiv: Session types have been proposed as a means of statically verifying implementations of communication protocols. Although prior work has been successful in verifying some classes of protocols, it does not cope well with parameterized, multi-actor scenarios with inherent asynchrony. For example, the sliding window protocol is inexpressible in previously proposed session type systems. This paper describes System-A, a new typing language which overcomes many of the expressiveness limitations of prior work. System-A explicitly supports asynchrony and parallelism, as well as multiple forms of parameterization. We define System-A and show how it can be used for the static verification of a large class of asynchronous communication protocols. (ID#:14-2595)
URL: http://arxiv.org/pdf/1208.4632.pdf
Publication Location: International Workshop on Foundations of Coordination Languages and Self Adaptation 2012


UIUC - University of Illinois at Urbana-Champaign
Topic: Scalable Methods for Security Against Distributed Attacks
Title: Why Do Scala Developers Mix the Actor Model with Other Concurrency Models?
Author(s): Samira Tasharofi, Peter Dinges, and Ralph Johnson
Abstract: Samira Tasharofi, Peter Dinges, and Ralph E. Johnson. 2013. Why do scala developers mix the actor model with other concurrency models?. In Proceedings of the 27th European conference on Object-Oriented Programming (ECOOP'13), Giuseppe Castagna (Ed.). Springer-Verlag, Berlin, Heidelberg, 302-326. DOI=10.1007/978-3-642-39038-8_13 http://dx.doi.org/10.1007/978-3-642-39038-8_13
Mixing the actor model with other concurrency models in a single program can break the actor abstraction. This increases the chance of creating deadlocks and data races--two mistakes that are hard to make with actors. Furthermore, it prevents the use of many advanced testing, modeling, and verification tools for actors, as these require pure actor programs. This study is the first to point out the phenomenon of mixing concurrency models by Scala developers and to systematically identify the factors leading to it. We studied 15 large, mature, and actively maintained actor programs written in Scala and found that 80% of them mix the actor model with another concurrency model. Consequently, a large part of real-world actor programs does not use actors to their fullest advantage. Inspection of the programs and discussion with the developers reveal two reasons for mixing that can be influenced by researchers and library-builders: weaknesses in the actor library implementations, and shortcomings of the actor model itself. (ID#:14-2596)
URL: http://dl.acm.org/citation.cfm?id=2525001
Publication Location: ECOOP 2013 - Object-Oriented Programming


UIUC - University of Illinois at Urbana-Champaign
Topic: Scalable Methods for Security Against Distributed Attacks
Title: Automated inference of atomic sets for safe concurrent execution
Author(s): Peter Dinges, Minas Charalambides, and Gul Agha
Abstract: Peter Dinges, Minas Charalambides, and Gul Agha. 2013. Automated inference of atomic sets for safe concurrent execution. In Proceedings of the 11th ACM SIGPLAN-SIGSOFT Workshop on Program Analysis for Software Tools and Engineering (PASTE '13). ACM, New York, NY, USA, 1-8. DOI=10.1145/2462029.2462030 http://doi.acm.org/10.1145/2462029.2462030
Atomic sets are a synchronization mechanism in which the programmer specifies the groups of data that must be accessed as a unit. The compiler can check this specification for consistency, detect deadlocks, and automatically add the primitives to prevent interleaved access. Atomic sets relieve the programmer from the burden of recognizing and pruning execution paths which lead to interleaved access, thereby reducing the potential for data races. However, manually converting programs from lock-based synchronization to atomic sets requires reasoning about the program's concurrency structure, which can be a challenge even for small programs. Our analysis eliminates the challenge by automating the reasoning. Our implementation of the analysis allowed us to derive the atomic sets for large code bases such as the Java collections framework in a matter of minutes. The analysis is based on execution traces; assuming all traces reflect intended behavior, our analysis enables safe concurrency by preventing unobserved interleavings which may harbor latent Heisenbugs. (ID#:14-2597)
URL: http://dl.acm.org/citation.cfm?id=2462030
Publication Location: Workshop on Program Analysis for Software Tools and Engineering 2013
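The trace-based inference described in the abstract above can be illustrated with a toy sketch: given a trace of (synchronized-region, field) access pairs, fields observed inside the same region are merged into candidate atomic sets with a union-find. The trace format and function name are hypothetical illustrations, not the paper's actual analysis.

```python
def infer_atomic_sets(trace):
    """Group fields accessed in the same synchronized region into
    candidate atomic sets via union-find over an execution trace.
    Trace entries are (region_id, field) pairs."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    by_region = {}
    for region, field in trace:
        find(field)  # register the field even if it appears alone
        by_region.setdefault(region, []).append(field)
    for fields in by_region.values():
        for f in fields[1:]:
            union(fields[0], f)  # fields seen together belong together

    sets = {}
    for f in parent:
        sets.setdefault(find(f), set()).add(f)
    return sorted(map(sorted, sets.values()))
```

Fields linked transitively across regions (e.g. `x` with `y`, then `y` with `z`) end up in one candidate set, mirroring how observed interleaving constraints compose.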


UIUC - University of Illinois at Urbana-Champaign
Topic: Scalable Methods for Security Against Distributed Attacks
Title: Performance Evaluation of Sensor Networks by Statistical Modeling and Euclidean Model Checking
Author(s): YoungMin Kwon and Gul Agha
Abstract: Youngmin Kwon and Gul Agha. 2013. Performance evaluation of sensor networks by statistical modeling and euclidean model checking. ACM Trans. Sen. Netw. 9, 4, Article 39 (July 2013), 38 pages. DOI=10.1145/2489253.2489256 http://doi.acm.org/10.1145/2489253.2489256
Modeling and evaluating the performance of large-scale wireless sensor networks (WSNs) is a challenging problem. The traditional method for representing the global state of a system as a cross product of the states of individual nodes in the system results in a state space whose size is exponential in the number of nodes. We propose an alternative way of representing the global state of a system: namely, as a probability mass function (pmf) which represents the fraction of nodes in different states. A pmf corresponds to a point in a Euclidean space of possible pmf values, and the evolution of the state of a system is represented by trajectories in this Euclidean space. We propose a novel performance evaluation method that examines all pmf trajectories in a dense Euclidean space by exploring only finite relevant portions of the space. We call our method Euclidean model checking. Euclidean model checking is useful both in the design phase--where it can help determine system parameters based on a specification--and in the evaluation phase--where it can help verify performance properties of a system. We illustrate the utility of Euclidean model checking by using it to design a time difference of arrival (TDoA) distance measurement protocol and to evaluate the protocol's implementation on a 90-node WSN. To facilitate such performance evaluations, we provide a Markov model estimation method based on applying a standard statistical estimation technique to samples resulting from the execution of a system. (ID#:14-2599)
URL: http://dl.acm.org/citation.cfm?id=2489256
Publication Location: ACM Transactions on Sensor Networks
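The pmf-as-state view in the abstract above can be made concrete with a small sketch: the fraction of nodes in each state is a vector that evolves under a per-node Markov transition matrix, tracing a trajectory in Euclidean space. The matrix and function name below are illustrative assumptions; Euclidean model checking itself explores all relevant pmf trajectories, not a single one.

```python
def evolve_pmf(pmf, P, steps):
    """Advance a population pmf by repeated multiplication with a
    row-stochastic transition matrix P:
        pmf'[j] = sum_i pmf[i] * P[i][j]
    Each iterate is one point on the pmf trajectory in Euclidean space."""
    k = len(P)
    for _ in range(steps):
        pmf = [sum(pmf[i] * P[i][j] for i in range(k)) for j in range(k)]
    return pmf
```

For instance, with P = [[0.9, 0.1], [0.2, 0.8]] the trajectory converges toward the stationary distribution [2/3, 1/3], so a performance property such as "at most 40% of nodes are ever in the busy state" can be checked along the trajectory rather than over an exponential global state space.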




UIUC - University of Illinois at Urbana-Champaign
Topic: Beyond Reachability Properties
Title: A Formal Definition of Protocol Indistinguishability and its Verification on Maude-NPA
Author(s): S. Escobar, C. Meadows, J. Meseguer, and S. Santiago
Abstract: Not available; the link below may be broken. (ID#:14-2600)
URL: http://users.dsic.upv.es/~sescobar/papers-security.html
Publication Location: UIUC Technical Report



UIUC - University of Illinois at Urbana-Champaign
Topic: Classification of Cyber-Physical System Adversaries
Title: Differentially Private Iterative Synchronous Consensus
Author(s): Zhenqi Huang, Sayan Mitra, Geir Dullerud
Abstract: Zhenqi Huang, Sayan Mitra, and Geir Dullerud. 2012. Differentially private iterative synchronous consensus. In Proceedings of the 2012 ACM workshop on Privacy in the electronic society (WPES '12). ACM, New York, NY, USA, 81-90. DOI=10.1145/2381966.2381978 http://doi.acm.org/10.1145/2381966.2381978
The iterative consensus problem requires a set of processes or agents with different initial values to interact and update their states to eventually converge to a common value. Protocols solving iterative consensus serve as building blocks in a variety of systems where distributed coordination is required, such as load balancing, data aggregation, sensor fusion, filtering, and synchronization. In this paper, we introduce the private iterative consensus problem, where agents are required to converge while protecting the privacy of their initial values from honest-but-curious adversaries. In many applications, protecting the initial states suffices to protect all subsequent states of the individual participants.
We adapt the notion of differential privacy to this setting of iterative computation. Next, we present (i) a server-based and (ii) a completely distributed randomized mechanism for solving differentially private iterative consensus with adversaries who can observe the messages as well as the internal states of the server and a subset of the clients. Our analysis establishes the tradeoff between privacy and accuracy: for given e, b > 0, the e-differentially private mechanism for N agents is guaranteed to converge to a value within O(1/(ebN)) of the average of the initial values, with probability at least (1-b). (ID#:14-2601)
URL: http://dl.acm.org/citation.cfm?id=2381978
Publication Location: Workshop on Privacy in the Electronic Society (WPES), co-located with the 19th ACM Conference on Computer and Communications Security (CCS) 2012
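A minimal sketch of the noise-adding idea behind the distributed mechanism: each round, agents publish Laplace-perturbed states and move a small step toward the noisy average. The decaying noise schedule, step size, and function names are illustrative assumptions, not the paper's calibrated mechanism.

```python
import random

def laplace(scale):
    # The difference of two exponentials is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_consensus(values, rounds=200, eps=1.0, step=0.2):
    """Iterative averaging consensus in which each agent shares only a
    Laplace-noised copy of its state. A geometrically decaying noise
    schedule (an illustrative choice) keeps the total privacy loss
    bounded while still allowing approximate convergence."""
    states = list(values)
    n = len(states)
    for t in range(rounds):
        scale = (0.9 ** t) / eps
        noisy = [s + laplace(scale) for s in states]   # privacy step
        avg = sum(noisy) / n                            # shared aggregate
        states = [s + step * (avg - s) for s in states] # consensus step
    return states
```

Smaller eps means larger noise: privacy improves but the final common value drifts further from the true average, which is exactly the privacy/accuracy tradeoff the abstract quantifies.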


UIUC - University of Illinois at Urbana-Champaign
Topic: Classification of Cyber-Physical System Adversaries
Title: Using Run-Time Checking to Provide Safety and Progress for Distributed Cyber-Physical Systems
Author(s): Stanley Bak, Fardin Abdi, Zhenqi Huang and Marco Caccamo
Abstract: Bak, S.; Abad, F.A.T.; Zhenqi Huang; Caccamo, M., "Using run-time checking to provide safety and progress for distributed cyber-physical systems," Embedded and Real-Time Computing Systems and Applications (RTCSA), 2013 IEEE 19th International Conference on, vol., no., pp.287,296, 19-21 Aug. 2013
doi: 10.1109/RTCSA.2013.6732229
Cyber-physical systems (CPS) may interact and manipulate objects in the physical world, and therefore ideally would have formal guarantees about their behavior. Performing static-time proofs of safety invariants, however, may be intractable for systems with distributed physical-world interactions. This is further complicated when realistic communication models are considered, for which there may not be bounds on message delays, or even that messages will eventually reach their destination. In this work, we address the challenge of proving safety and progress in distributed CPS communicating over an unreliable communication layer. This is done in two parts. First, we show that system safety can be verified by partially relying upon run-time checks, and that dropping messages if the run-time checks fail will maintain safety. Second, we use a notion of compatible action chains to guarantee system progress, despite unbounded message delays. We demonstrate the effectiveness of our approach on a multi-agent vehicle flocking system, and show that the overhead of the proposed run-time checks is not overbearing. (ID#:14-2602)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6732229&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel7%2F6720220%2F6732192%2F06732229.pdf%3Farnumber%3D6732229
Publication Location: IEEE International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA 2013)


UIUC - University of Illinois at Urbana-Champaign
Topic: Classification of Cyber-Physical System Adversaries
Title: Classification of Cyber-physical System Adversaries
Author(s): R. Essick, J.-W. Lee, and G.E. Dullerud
Hard Problem: Security-Metrics-Driven Evaluation, Design, Development and Deployment
Abstract: Essick, R.; Lee, J.; Dullerud, G.E., "Control of Linear Switched Systems With Receding Horizon Modal Information," Automatic Control, IEEE Transactions on , vol.59, no.9, pp.2340,2352, Sept. 2014
doi: 10.1109/TAC.2014.2321251
We provide an exact solution to two performance problems--one of disturbance attenuation and one of windowed variance minimization--subject to exponential stability. Considered are switched systems, whose parameters come from a finite set and switch according to a language such as that specified by an automaton. The controllers are path-dependent, having finite memory of past plant parameters and finite foreknowledge of future parameters. Exact, convex synthesis conditions for each performance problem are expressed in terms of nested linear matrix inequalities. The resulting semidefinite programming problem may be solved offline to arrive at a suitable controller. A notion of path-by-path performance is introduced for each performance problem, leading to improved system performance. Non-regular switching languages are considered and the results are extended to these languages. Two simple, physically motivated examples are given to demonstrate the application of these results.(ID#:14-2603)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6808501&url=http%3A%2F%2Fieeexplore.ieee.org%2Fstamp%2Fstamp.jsp%3Ftp%3D%26arnumber%3D6808501
Publication Location: IEEE Transactions on Automatic Control, 2014


UIUC - University of Illinois at Urbana-Champaign
Topic: Classification of Cyber-Physical System Adversaries
Title: Path-By-Path Output Regulation of Switched Systems With a Receding Horizon of Modal Knowledge
Author(s): R. Essick, J.-W. Lee, and G.E. Dullerud
Hard Problem: Security-Metrics-Driven Evaluation, Design, Development and Deployment
Abstract: Essick, R.; Ji-Woong Lee; Dullerud, G., "Path-by-path output regulation of switched systems with a receding horizon of modal knowledge," American Control Conference (ACC), 2014 , vol., no., pp.2650,2655, 4-6 June 2014
doi: 10.1109/ACC.2014.6859318
We address a discrete-time LQG control problem over a fixed performance window and apply a receding-horizon type control strategy, resulting in an exact solution to the problem in terms of semidefinite programming. The systems considered take parameters from a finite set, and switch between them according to an automaton. The controller has a finite preview of future parameters, beyond which only the set of parameters is known. We provide necessary and sufficient convex conditions for the existence of a controller which guarantees both exponential stability and finite-horizon performance levels for the system; the performance levels may differ according to the particular parameter sequence within the performance window. A simple, physics-based example is provided to illustrate the main results. (ID#:14-2604)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6859318&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D6859318
Publication Location: Proceedings of the American Control Conference (ACC), 2014


UIUC - University of Illinois at Urbana-Champaign
Topic: Classification of Cyber-Physical System Adversaries
Title: Proofs from Simulations and Modular Annotations
Author(s): Zhenqi Huang and Sayan Mitra
Hard Problem: Security-Metrics-Driven Evaluation, Design, Development and Deployment
Abstract: Zhenqi Huang and Sayan Mitra. 2014. Proofs from simulations and modular annotations. InProceedings of the 17th international conference on Hybrid systems: computation and control(HSCC '14). ACM, New York, NY, USA, 183-192. DOI=10.1145/2562059.2562126 http://doi.acm.org/10.1145/2562059.2562126
We present a modular technique for simulation-based bounded verification for nonlinear dynamical systems. We introduce the notion of input-to-state discrepancy of each subsystem Ai in a larger nonlinear dynamical system A which bounds the distance between two (possibly diverging) trajectories of Ai in terms of their initial states and inputs. Using the IS discrepancy functions, we construct a low dimensional deterministic dynamical system M(d). For any two trajectories of A starting d distance apart, we show that one of them bloated by a factor determined by the trajectory of M contains the other. Further, by choosing appropriately small d's the overapproximations computed by the above method can be made arbitrarily precise. Using the above results we develop a sound and relatively complete algorithm for bounded safety verification of nonlinear ODEs. Our preliminary experiments with a prototype implementation of the algorithm show that the approach can be effective for verification of nonlinear models. (ID#:14-2605)
URL: http://dl.acm.org/citation.cfm?id=2562126
Publication Location: International Conference on Hybrid Systems: Computation and Control (HSCC 2014)


UIUC - University of Illinois at Urbana-Champaign
Topic: Classification of Cyber-Physical System Adversaries
Title: On Price of Privacy in Distributed Control Systems
Author(s): Zhenqi Huang, Yu Wang, Sayan Mitra, and Geir Dullerud
Hard Problem: Security-Metrics-Driven Evaluation, Design, Development and Deployment
Abstract: Zhenqi Huang, Yu Wang, Sayan Mitra, and Geir E. Dullerud. 2014. On the cost of differential privacy in distributed control systems. In Proceedings of the 3rd international conference on High confidence networked systems (HiCoNS '14). ACM, New York, NY, USA, 105-114. DOI=10.1145/2566468.2566474 http://doi.acm.org/10.1145/2566468.2566474
Individuals sharing information can improve the cost or performance of a distributed control system. But sharing may also violate privacy. We develop a general framework for studying the cost of differential privacy in systems where a collection of agents, with coupled dynamics, communicate for sensing their shared environment while pursuing individual preferences. First, we propose a communication strategy that relies on adding carefully chosen random noise to agent states and show that it preserves differential privacy. Of course, the higher the standard deviation of the noise, the higher the cost of privacy. For linear distributed control systems with quadratic cost functions, the standard deviation becomes independent of the number of agents, and it decays with the maximum eigenvalue of the dynamics matrix. Furthermore, for stable dynamics, the noise to be added is independent of the number of agents as well as the time horizon up to which privacy is desired. Finally, we show that the cost of e-differential privacy up to time T, for a linear stable system with N agents, is upper bounded by O(T^3/(Ne^2)). (ID#:14-2606)
URL: http://dl.acm.org/citation.cfm?id=2566468.2566474
Publication Location: ACM International Conference on High Confidence Networked Systems (HiCoNS) 2014



UIUC - University of Illinois at Urbana-Champaign
Topic: End-to-end Analysis of Side Channels
Title: Website Detection Using Remote Traffic Analysis
Author(s): Xun Gong, Nikita Borisov, Negar Kiyavash, Nabil Schear
Hard Problem: Security-Metrics-Driven Evaluation, Design, Development, and Deployment
Abstract: Xun Gong, Nikita Borisov, Negar Kiyavash, and Nabil Schear. 2012. Website detection using remote traffic analysis. In Proceedings of the 12th international conference on Privacy Enhancing Technologies (PETS'12), Simone Fischer-Hubner and Matthew Wright (Eds.). Springer-Verlag, Berlin, Heidelberg, 58-78. DOI=10.1007/978-3-642-31680-7_4 http://dx.doi.org/10.1007/978-3-642-31680-7_4
Recent work in traffic analysis has shown that traffic patterns leaked through side channels can be used to recover important semantic information. For instance, attackers can find out which website, or which page on a website, a user is accessing simply by monitoring the packet size distribution. We show that traffic analysis is an even greater threat to privacy than previously thought by introducing a new attack that can be carried out remotely. In particular, we show that, to perform traffic analysis, adversaries do not need to directly observe the traffic patterns. Instead, they can gain sufficient information by sending probes from a far-off vantage point that exploits a queuing side channel in routers.
To demonstrate the threat of such remote traffic analysis, we study a remote website detection attack that works against home broadband users. Because the remotely observed traffic patterns are noisier than those obtained with previous schemes based on direct local traffic monitoring, we take a dynamic time warping (DTW) based approach to detecting fingerprints from the same website. As a new twist on website fingerprinting, we consider a website detection attack, where the attacker aims to find out whether a user browses a particular website, and examine its privacy implications. We show experimentally that, although the success of the attack is highly variable depending on the target site, for some sites it achieves very low error rates. We also show how such website detection can be used to deanonymize message board users. (ID#:14-2607)
URL: http://dl.acm.org/citation.cfm?id=2359021
Publication Location: 12th Privacy Enhancing Technologies Symposium (PETS) 2012
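The DTW step at the core of the matching above can be sketched in a few lines: two time series (e.g. of observed queuing delays or packet sizes, an assumed feature choice) are compared with the classic dynamic-programming recurrence, which tolerates local stretching and compression of the traces.

```python
def dtw(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance with an
    absolute-difference cost. A minimal sketch of the matching step,
    not the paper's full fingerprinting pipeline."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of: insertion, deletion, or match.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]
```

Because a trace with a repeated or stretched segment still aligns at zero cost, DTW is a natural fit for the noisy, time-warped patterns seen from a remote vantage point.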



UIUC - University of Illinois at Urbana-Champaign
Topic: Attack-Tolerant Systems
Title: An Algorithmic Approach to Error Localization and Partial Recomputation for Low-Overhead Fault Tolerance on Parallel Systems
Author(s): Joseph Sloan, Greg Bronevetsky, and Rakesh Kumar
Abstract: Sloan, J.; Kumar, R.; Bronevetsky, G., "An algorithmic approach to error localization and partial recomputation for low-overhead fault tolerance," Dependable Systems and Networks (DSN), 2013 43rd Annual IEEE/IFIP International Conference on , vol., no., pp.1,12, 24-27 June 2013 doi: 10.1109/DSN.2013.6575309
The increasing size and complexity of massively parallel systems (e.g. HPC systems) is making it increasingly likely that individual circuits will produce erroneous results. For this reason, novel fault tolerance approaches are increasingly needed. Prior fault tolerance approaches often rely on checkpoint-rollback based schemes. Unfortunately, such schemes are primarily limited to rare error event scenarios, as their overheads become prohibitive if faults are common. In this paper, we propose a novel approach for algorithmic correction of faulty application outputs. The key insight behind this approach is that, even under high-error scenarios, most of an erroneous algorithm result is still correct. Instead of simply rolling back to the most recent checkpoint and repeating the entire segment of computation, our novel resilience approach uses algorithmic error localization and partial recomputation to efficiently correct the corrupted results. We evaluate our approach in the specific algorithmic scenario of linear algebra operations, focusing on matrix-vector multiplication (MVM) and iterative linear solvers. We develop a novel technique for localizing errors in MVM, show how to achieve partial recomputation within this algorithm, and demonstrate that this approach both improves the performance of the Conjugate Gradient solver in high-error scenarios by 3x-4x and increases the probability that it completes successfully by up to 60%, in parallel experiments on up to 100 nodes. (ID#:14-2608)
URL: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6575309
Publication Location: IEEE/IFIP International Conference on Dependable Systems and Networks, DSN, 2013
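A toy rendering of the localize-and-recompute idea for matrix-vector multiplication: per-block checksum rows flag which block of the output is corrupted, so only that block is recomputed (a transient fault is assumed not to recur on recomputation). The function name, block size, and tolerance are illustrative; the paper's scheme operates inside iterative solvers.

```python
def locate_and_correct(A, x, y, block=2, tol=1e-9):
    """Given a possibly corrupted output y of A*x, use per-block
    checksum rows (column sums over each block of rows of A) to locate
    corrupted blocks, then recompute only those rows of y."""
    n, m = len(A), len(x)
    bad = []
    for start in range(0, n, block):
        rows = range(start, min(start + block, n))
        # Checksum row: the sum of A's rows in this block.
        csum = [sum(A[i][j] for i in rows) for j in range(m)]
        expect = sum(csum[j] * x[j] for j in range(m))
        if abs(sum(y[i] for i in rows) - expect) > tol:
            bad.append(start)
            for i in rows:  # partial recomputation of the bad block only
                y[i] = sum(A[i][j] * x[j] for j in range(m))
    return y, bad
```

The checksum comparison costs one extra dot product per block, far cheaper than rolling back and redoing the whole multiplication, which is the source of the overhead savings the abstract reports.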


UIUC - University of Illinois at Urbana-Champaign
Topic: From Measurements to Security Science - Data-Driven Approach
Title: Adapting Bro into SCADA: building a specification-based intrusion detection system for the DNP3 protocol
Author(s): H. Lin, A. Slagell, C. Di Martino, Z. Kalbarczyk, and R. Iyer
Abstract: When SCADA systems are exposed to public networks, attackers can more easily penetrate the control systems that operate electrical power grids, water plants, and other critical infrastructures. To detect such attacks, SCADA systems require an intrusion detection technique that can understand the information carried by their usually proprietary network protocols.
To achieve that goal, we propose to attach to SCADA systems a specification-based intrusion detection framework based on Bro [7][8], a runtime network traffic analyzer. We have built a parser in Bro to support DNP3, a network protocol widely used in SCADA systems that operate electrical power grids. This built-in parser provides a clear view of all network events related to SCADA systems. Consequently, security policies to analyze SCADA-specific semantics related to the network events can be accurately defined. As a proof of concept, we specify a protocol validation policy to verify that the semantics of the data extracted from network packets conform to protocol definitions. We performed an experimental evaluation to study the processing capabilities of the proposed intrusion detection framework. (ID#:14-2609)
URL: http://dl.acm.org/citation.cfm?id=2459982
Publication Location: Annual Cyber Security and Information Intelligence Research Workshop (CSIIRW 2012)


UIUC - University of Illinois at Urbana-Champaign
Topic: From Measurements to Security Science - Data-Driven Approach
Title: Semantic Security Analysis of SCADA Networks to Detect Malicious Control Commands in Power Grids
Author(s): H. Lin, A. Slagell, Z. Kalbarczyk, P. Sauer, R. Iyer
Abstract: Hui Lin, Adam Slagell, Zbigniew Kalbarczyk, Peter W. Sauer, and Ravishankar K. Iyer. 2013. Semantic security analysis of SCADA networks to detect malicious control commands in power grids. In Proceedings of the first ACM workshop on Smart energy grid security (SEGS '13). ACM, New York, NY, USA, 29-34. DOI=10.1145/2516930.2516947 http://doi.acm.org/10.1145/2516930.2516947
In the current generation of SCADA (Supervisory Control And Data Acquisition) systems used in power grids, a sophisticated attacker can exploit system vulnerabilities and use a legitimate but maliciously crafted command to cause a wide range of system changes that traditional contingency analysis does not consider and remedial action schemes cannot handle. To detect such malicious commands, we propose a semantic analysis framework based on a distributed network of intrusion detection systems (IDSes). The framework combines system knowledge of both the cyber and physical infrastructure of the power grid to help the IDSes estimate the execution consequences of control commands and thus reveal an attacker's malicious intentions. We evaluated the approach on the IEEE 30-bus system. Our experiments demonstrate that (i) by opening 3 transmission lines, an attacker can avoid detection by traditional contingency analysis and instantly put the tested 30-bus system into an insecure state, and (ii) the semantic analysis provides reliable detection of malicious commands with a small amount of analysis time. (ID#:14-2610)
URL: http://dl.acm.org/citation.cfm?id=2516947
Publication Location: ACM Workshop on Smart Energy Grid Security (SEGS 2013)



UIUC - University of Illinois at Urbana-Champaign
Topic: Protocol Verification: Beyond Reachability Properties
Title: Asymmetric unification: A new unification paradigm for cryptographic protocol analysis
Author(s): Serdar Erbatur, Santiago Escobar, Deepak Kapur, Zhiqiang Liu, Christopher Lynch, Catherine Meadows, Jos'e Meseguer, Paliath Narendran, Sonia Santiago and Ralf Sasse
Abstract: Not available; the full text is available from Springer via the link below. (ID#:14-2611)
URL: http://link.springer.com/chapter/10.1007%2F978-3-642-38574-2_16
Publication Location: Intl. Conf. On Automated Deduction (CADE 2013)


UIUC - University of Illinois at Urbana-Champaign
Topic: Protocol Verification: Mathematical Foundations & Analysis Techniques for Protocol Indistinguishability
Title: Sequential Protocol Composition in Maude-NPA
Author(s): Santiago Escobar, Catherine Meadows, Jose Meseguer and Sonia Santiago
Hard Problem: Scalability and Composability
Abstract: Santiago Escobar, Catherine Meadows, José Meseguer, and Sonia Santiago. 2010. Sequential protocol composition in Maude-NPA. In Proceedings of the 15th European conference on Research in computer security (ESORICS'10), Dimitris Gritzalis, Bart Preneel, and Marianthi Theoharidou (Eds.). Springer-Verlag, Berlin, Heidelberg, 303-318.
Protocols do not work alone, but together, one protocol relying on another to provide needed services. Many of the problems in cryptographic protocols arise when such composition is done incorrectly or is not well understood. In this paper we discuss an extension to the Maude-NPA syntax and operational semantics to support dynamic sequential composition of protocols, so that protocols can be specified separately and composed when desired. This allows one to reason about many different compositions with minimal changes to the specification. Moreover, we show that, by a simple protocol transformation, we are able to analyze and verify this dynamic composition in the current Maude-NPA tool. We prove soundness and completeness of the protocol transformation with respect to the extended operational semantics, and illustrate our results on some examples. (ID#:14-2612)
URL: http://dl.acm.org/citation.cfm?id=1888906
Publication Location: Computer Security - ESORICS 2010


UIUC - University of Illinois at Urbana-Champaign
Topic: Protocol Verification: Mathematical Foundations & Analysis Techniques for Protocol Indistinguishability
Title: A Rewriting- based forward semantics for Maude-NPA
Author(s): Santiago Escobar, Catherine Meadows, Jose Meseguer and Sonia Santiago
Hard Problem: Scalability and Composability
Abstract: The Maude-NRL Protocol Analyzer (Maude-NPA) is a tool for reasoning about the security of cryptographic protocols in which the cryptosystems satisfy different equational properties. It tries to find secrecy or authentication attacks by searching backwards from an insecure attack state pattern that may contain logical variables, in such a way that logical variables become properly instantiated in order to find an initial state. The execution mechanism for this logical reachability is narrowing modulo an equational theory. Although Maude-NPA also possesses a forwards semantics naturally derivable from the backwards semantics, it is not suitable for state space exploration or protocol simulation.
In this paper we define an executable forwards semantics for Maude-NPA, instead of its usual backwards one, and restrict it to the case of concrete states, that is, to terms without logical variables. This case corresponds to standard rewriting modulo an equational theory. We prove soundness and completeness of the backwards narrowing-based semantics with respect to the rewriting-based forwards semantics. We show its effectiveness as an analysis method that complements the backwards analysis with new prototyping, simulation, and explicit-state model checking features by providing some experimental results. (ID#:14-2613)
URL: http://dl.acm.org/citation.cfm?id=2600186&dl=ACM&coll=DL&CFID=552978724&CFTOKEN=96539078
Publication Location: Proceedings of HotSoS '14
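The forwards semantics described in this abstract reduces to standard rewriting over concrete states (terms without logical variables), which enables explicit-state exploration. As a rough illustration of that idea (not the Maude-NPA implementation; the toy protocol, state encoding, and function names below are invented for the sketch), a breadth-first forward search over concrete intruder-knowledge states might look like:

```python
from collections import deque

def explore(initial, step, bad):
    """Breadth-first explicit-state search over concrete states.

    initial: starting state (hashable)
    step:    function state -> iterable of successor states (forward rewriting)
    bad:     predicate flagging an attack state
    Returns a counterexample trace to a bad state, or None if none is reachable.
    """
    frontier = deque([(initial, [initial])])
    seen = {initial}
    while frontier:
        state, trace = frontier.popleft()
        if bad(state):
            return trace
        for nxt in step(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [nxt]))
    return None

# Toy "protocol": states are sets of terms the intruder knows.
def step(knowledge):
    if "key" in knowledge:                 # a compromised key reveals the secret
        yield knowledge | {"secret"}
    yield knowledge | {"key"}              # the key leaks on the network

trace = explore(frozenset({"hello"}), step, lambda s: "secret" in s)
```

Maude-NPA's backwards narrowing search handles symbolic states containing logical variables; the point of the forwards semantics is that concrete-state exploration of this kind can complement it with simulation, prototyping, and explicit-state model checking.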



UIUC - University of Illinois at Urbana-Champaign
Topic: Science of Human Circumvention of Security (SHuCS)
Title: Circumvention of Security: Good Users Do Bad Things
Author(s): J. Blythe, R. Koppel, and S.W. Smith
Hard Problem: Human Behavior
Abstract: Blythe, J.; Koppel, R.; Smith, S.W., "Circumvention of Security: Good Users Do Bad Things," IEEE Security & Privacy, vol. 11, no. 5, pp. 80-83, Sept.-Oct. 2013. doi: 10.1109/MSP.2013.110
Conventional wisdom is that the textbook view describes reality, and only bad people (not good people trying to get their jobs done) break the rules. And yet it doesn't, and good people circumvent. (ID#:14-2614)
URL: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6630017
Publication Location: IEEE Security and Privacy 2013



UIUC - University of Illinois at Urbana-Champaign
Topic: Quantitative Security Metrics for Cyber-Human Systems
Title: Simulation debugging and visualization in the Mobius modeling framework
Author(s): Buchanan, Craig
Hard Problem: Metrics
Abstract: Available from Springer via the link listed below. (ID#:14-2615)
URL: http://link.springer.com/chapter/10.1007/978-3-319-10696-0_18
Publication Location: M.S. Thesis, ECE Dept., Univ. of Illinois


UIUC - University of Illinois at Urbana-Champaign
Topic: No Topic Listed
Title: VeriFlow: Verifying Network-Wide Invariants in Real Time
Author(s): Ahmed Khurshid, Kelvin Zou, Wenxuan Zhou, Matthew Caesar, P. Brighten Godfrey
Abstract: Networks are complex and prone to bugs. Existing tools that check configuration files and data-plane state operate offline at timescales of seconds to hours, and cannot detect or prevent bugs as they arise.

Is it possible to check network-wide invariants in real time, as the network state evolves? The key challenge here is to achieve extremely low latency during the checks so that network performance is not affected. In this paper, we present a preliminary design, VeriFlow, which suggests that this goal is achievable. VeriFlow is a layer between a software-defined networking controller and network devices that checks for network-wide invariant violations dynamically as each forwarding rule is inserted. Based on an implementation using a Mininet OpenFlow network and Route Views trace data, we find that VeriFlow can perform rigorous checking within hundreds of microseconds per rule insertion. (ID#:14-2616)
URL: http://dl.acm.org/citation.cfm?id=2342452
Publication Location: ACM SIGCOMM Workshop on Hot Topics in Software Defined Networking (HotSDN), August 2012
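VeriFlow's core idea, checking a network-wide invariant incrementally as each forwarding rule is inserted, can be sketched as follows. This is a deliberately simplified model (exact-match rules toward a single destination, with loop-freedom as the only invariant); the actual system partitions the header space into equivalence classes and verifies each affected class per rule update:

```python
class LoopChecker:
    """Check a 'no forwarding loop' invariant incrementally, in the spirit
    of VeriFlow: validate each rule at insertion time, before it takes
    effect. Simplified: one destination and exact-match rules only."""

    def __init__(self):
        self.next_hop = {}          # device -> next device for the destination

    def insert_rule(self, device, nxt):
        """Tentatively install the rule; reject it if it creates a loop."""
        old = self.next_hop.get(device)
        self.next_hop[device] = nxt
        node, visited = device, set()
        while node in self.next_hop:        # follow the forwarding path
            if node in visited:             # revisited a device: loop found
                if old is None:             # roll the tentative rule back
                    del self.next_hop[device]
                else:
                    self.next_hop[device] = old
                return False
            visited.add(node)
            node = self.next_hop[node]
        return True

v = LoopChecker()
ok1 = v.insert_rule("A", "B")   # A -> B: fine
ok2 = v.insert_rule("B", "C")   # B -> C: fine
ok3 = v.insert_rule("C", "A")   # would close the loop A -> B -> C -> A
```

Because each check walks only the forwarding path affected by the new rule, the per-insertion cost stays small, which is the property that lets the real system keep latency in the hundreds of microseconds.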




UIUC - University of Illinois at Urbana-Champaign
Topic: Data Driven Security Models and Analysis
Title: An Experiment Using Factor Graph for Early Attack Detection
Author(s): P. Cao, K-W. Chung, A. Slagell, Z. Kalbarczyk, R. Iyer
Hard Problem: Metrics, Resilient Architectures, Human Behavior
Abstract: Not found (ID#:14-2617)
URL: Not found
See: "Preemptive Intrusion Detection" by P. Cao, K-W. Chung, A. Slagell, Z. Kalbarczyk, R. Iyer at http://dl.acm.org/citation.cfm?id=2600177






UMD - University of Maryland

UMD Publications


These publications resulted from the Lablet activities at this school and were listed in the Quarterly Reports to the government. Please direct any comments to research (at) securedatabank.net if there are any questions or concerns regarding these publications.


UMD - University of Maryland, College Park
Topic: Trust, Recommendation Systems, and Collaboration
Title: A Fresh Look at Network Science: Interdependent Multigraphs Models Inspired From Statistical Physics
Author(s): J.S. Baras
Hard Problem: Scalability and Composability, Policy-Governed Secure Collaboration, Human Behavior
Abstract: Baras, J.S., "A Fresh Look at Network Science: Interdependent Multigraphs Models Inspired from Statistical Physics," 2014 6th International Symposium on Communications, Control and Signal Processing (ISCCSP), pp. 497-500, 21-23 May 2014. doi: 10.1109/ISCCSP.2014.6877921
We consider several challenging problems in complex networks (communication, control, social, economic, biological, hybrid) as problems in cooperative multi-agent systems. We describe a general model for cooperative multi-agent systems that involves several interacting dynamic multigraphs and identify three fundamental research challenges underlying these systems from a network science perspective. We show that the framework of constrained coalitional network games captures in a fundamental way the basic tradeoff of benefits vs. cost of collaboration, in multi-agent systems, and demonstrate that it can explain network formation and the emergence or not of collaboration. Multi-metric problems in such networks are analyzed via a novel multiple partially ordered semirings approach. We investigate the interrelationship between the collaboration and communication multigraphs in cooperative swarms and the role of the communication topology, among the collaborating agents, in improving the performance of distributed task execution. Expander graphs emerge as efficient communication topologies for collaborative control. We relate these models and approaches to statistical physics. (ID#:14-2618)
URL: http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=6877921&url=http%3A%2F%2Fieeexplore.ieee.org%2Fiel7%2F6862736%2F6877795%2F06877921.pdf%3Farnumber%3D6877921
Publication Location: Proceedings 6th International Symposium on Communications, Control and Signal Processing (ISCCSP 2014)
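The benefit-vs-cost tradeoff of collaboration that the constrained coalitional game framework captures can be illustrated with a toy greedy coalition-formation loop. The dynamics, payoff functions, and parameters below are all hypothetical, invented for the sketch rather than taken from the paper's model:

```python
def form_coalitions(agents, benefit, cost):
    """Greedy illustration of the benefit-vs-cost tradeoff in coalition
    formation: each agent joins the existing coalition that maximizes its
    net gain, or stays alone if every net gain is non-positive.
    benefit(agent, coalition) and cost(agent, coalition) are model inputs."""
    coalitions = []
    for a in agents:
        best, best_gain = None, 0.0
        for c in coalitions:
            gain = benefit(a, c) - cost(a, c)
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            coalitions.append([a])      # collaboration does not pay: stay alone
        else:
            best.append(a)
    return coalitions

# Benefit grows linearly with coalition size, but coordination cost grows
# quadratically, so coalitions stop growing once overhead dominates.
result = form_coalitions(
    range(6),
    benefit=lambda a, c: 2.0 * len(c),
    cost=lambda a, c: 0.6 * len(c) ** 2,
)
```

With these hypothetical payoffs, coalition growth stalls once the quadratic coordination cost outweighs the linear benefit, echoing the abstract's point that the framework can explain both network formation and the non-emergence of collaboration.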

UMD - University of Maryland, College Park
Topic: Trust, Recommendation Systems, and Collaboration
Title: Using Trust in Distributed Consensus With Adversaries in Sensor and Other Networks
Author(s): X. Liu and J.S. Baras
Hard Problem: Scalability and Composability, Policy-Governed Secure Collaboration, Human Behavior
Abstract: From UMD.edu: Extensive research efforts have been devoted to distributed consensus with adversaries. Many diverse applications drive this increased interest in this area, including distributed collaborative sensor networks, sensor fusion, and distributed collaborative control. We consider the problem of detecting Byzantine adversaries in a network of agents with the goal of reaching consensus. We propose a novel trust model that establishes both local trust based on local evidences and global trust based on local exchange of local trust values. We describe a trust-aware consensus algorithm that integrates the trust evaluation mechanism into the traditional consensus algorithm and propose various local decision rules based on local evidence. To further enhance the robustness of trust evaluation itself, we also provide a trust propagation scheme in order to take into account evidences of other nodes in the network. We then show by simulation that the trust-aware consensus algorithm can effectively detect Byzantine adversaries and exclude them from consensus iterations, even in sparse networks with connectivity less than 2f + 1, where f is the number of adversaries. These results can be applied to the fusion of trust evidences as well as to sensor fusion when malicious sensors are present, for example in power grid sensing and monitoring. (ID#:14-2619)
URL: https://www.isr.umd.edu/~baras/publications/papers/2014/X_%20Liu_J_S_%20Baras_Using_Trust_in_Distributed_Consensus.html
Publication Location: Proceedings of the 17th International Conference on Information Fusion (FUSION 2014)
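The trust-aware consensus idea in the abstract, using local evidence to discount and eventually exclude misbehaving neighbors, can be sketched in a toy simulation. Everything below (the trust-update rule, the constant Byzantine report, the trust budget) is an illustrative simplification, not the paper's algorithm:

```python
def trust_aware_consensus(values, neighbors, byzantine, rounds=60, budget=5.0):
    """Toy trust-aware average consensus. Each honest node accumulates
    distrust toward a neighbor in proportion to how far the neighbor's
    report deviates from its own estimate (the local evidence), and
    averages only over neighbors whose trust remains positive."""
    n = len(values)
    x = list(values)
    trust = [[budget] * n for _ in range(n)]
    for _ in range(rounds):
        # Byzantine nodes report a constant bogus value to everyone.
        reports = [10.0 if i in byzantine else x[i] for i in range(n)]
        new_x = list(x)
        for i in range(n):
            if i in byzantine:
                continue
            trusted = []
            for j in neighbors[i]:
                trust[i][j] -= abs(reports[j] - x[i])   # spend trust on deviation
                if trust[i][j] > 0:
                    trusted.append(reports[j])
            new_x[i] = sum(trusted + [x[i]]) / (len(trusted) + 1)
        x = new_x
    return x

# Sparse ring of 4 nodes; node 3 is Byzantine and always reports 10.0.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
final = trust_aware_consensus([0.0, 1.0, 2.0, 3.0], neighbors, byzantine={3})
```

In this run the honest nodes quickly exhaust their trust in node 3, drop its reports, and converge to a common value among themselves, while honest neighbors (whose mutual deviations shrink each round) remain trusted throughout.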

UMD - University of Maryland, College Park
Topic: Trust, Recommendation Systems, and Collaboration
Title: Soft Contract Verification
Author(s): Phuc C. Nguyen, Sam Tobin-Hochstadt, David Van Horn
Abstract: Phuc C. Nguyen, Sam Tobin-Hochstadt, and David Van Horn. 2014. Soft contract verification. In Proceedings of the 19th ACM SIGPLAN International Conference on Functional Programming (ICFP '14). ACM, New York, NY, USA, 139-152. DOI=10.1145/2628136.2628156 http://doi.acm.org/10.1145/2628136.2628156
Behavioral software contracts are a widely used mechanism for governing the flow of values between components. However, run-time monitoring and enforcement of contracts imposes significant overhead and delays discovery of faulty components to run-time.

To overcome these issues, we present soft contract verification, which aims to statically prove either complete or partial contract correctness of components, written in an untyped, higher-order language with first-class contracts. Our approach uses higher-order symbolic execution, leveraging contracts as a source of symbolic values including unknown behavioral values, and employs an updatable heap of contract invariants to reason about flow-sensitive facts. We prove the symbolic execution soundly approximates the dynamic semantics and that verified programs can't be blamed.

The approach is able to analyze first-class contracts, recursive data structures, unknown functions, and control-flow-sensitive refinements of values, which are all idiomatic in dynamic languages. It makes effective use of an off-the-shelf solver to decide problems without heavy encodings. The approach is competitive with a wide range of existing tools - including type systems, flow analyzers, and model checkers - on their own benchmarks. (ID#:14-2620)
URL: http://dl.acm.org/citation.cfm?id=2628156
Publication Location: Proceedings of the 19th ACM SIGPLAN International Conference on Functional Programming (ICFP 2014)
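For readers unfamiliar with behavioral contracts, the run-time monitoring and blame assignment that soft contract verification aims to discharge statically can be sketched as a simple first-order wrapper (illustrative only; the paper treats higher-order, first-class contracts in an untyped language):

```python
def contract(pre, post):
    """Attach a behavioral contract to a one-argument function: 'pre'
    constrains the argument and 'post' constrains the result. This is the
    run-time monitoring that soft contract verification tries to prove
    unnecessary: a call violating 'pre' blames the caller, while a bad
    result blames the function itself."""
    def wrap(fn):
        def checked(x):
            if not pre(x):
                raise AssertionError(f"blame caller of {fn.__name__}: {x!r}")
            result = fn(x)
            if not post(result):
                raise AssertionError(f"blame {fn.__name__}: {result!r}")
            return result
        return checked
    return wrap

@contract(pre=lambda n: isinstance(n, int) and n >= 0,
          post=lambda r: isinstance(r, int) and r > 0)
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)
```

The "verified programs can't be blamed" result means that once a component is statically proven to satisfy its contract, checks like these can be removed without changing observable behavior, eliminating the monitoring overhead the abstract mentions.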


UMD - University of Maryland, College Park
Topic: Verification of Hyperproperties
Title: Temporal Logics for Hyperproperties
Author(s): Michael R. Clarkson, Bernd Finkbeiner, Masoud Koleini, Kristopher K. Micinski, Markus N. Rabe, and Cesar Sanchez
Abstract: Available from Springer via the link listed below. (ID#:14-2621)
URL: http://link.springer.com/chapter/10.1007%2F978-3-642-54792-8_15
Publication Location: Conference on Principles of Security and Trust 2014



