Science of Security (SoS) Newsletter (2015 - Issue 5)

 

 



Each issue of the SoS Newsletter highlights achievements in current research, as conducted by various global members of the Science of Security (SoS) community. All presented materials are publicly available, and may link to the original work or web page for the respective program. The SoS Newsletter aims to showcase the wealth of exciting work going on in the security community, and hopes to serve as a portal between colleagues, research projects, and opportunities.

Please feel free to click on any section of the Newsletter listed below to go to its corresponding subsection:

Publications of Interest

The Publications of Interest section provides available abstracts and links for suggested academic and industry literature discussing specific topics and research problems in the field of SoS. Please check back regularly for new information, or sign up for the CPSVO-SoS Mailing List.

Table of Contents

Science of Security (SoS) Newsletter (2015 - Issue 5)

(ID#:15-5926)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


In the News

 

This section features topical, current news items of interest to the international security community. These articles and highlights are selected from various popular science and security magazines, newspapers, and online sources.


US News     

"Feds Extradite 'Most Wanted' ATM Hacker", GovInfoSecurity, 24 June 2015. [Online]. After his arrest by German police in late 2013, Turkish citizen Ercan Findikoglu is set to face trial in New York on June 23rd. Findikoglu is accused of stealing from ATMs since 2008, netting himself close to $60 million. He and his partners-in-crime used a hacking technique targeted at credit card processors, allowing them to withdraw vast sums of money from compromised accounts. (ID#: 15-50342) See http://www.govinfosecurity.com/feds-extradite-most-wanted-atm-hacker-a-8341

"iOS 9, Android M Place New Focus On Security, Privacy", Information Week, 24 June 2015. [Online]. Managing permissions on Apple and Android mobile devices has become a major battle between users and app developers, who often seek valuable user information for targeted advertising. Google and Apple both have announced plans to give the user more control over privacy and encryption with tools like iOS 9's "App Transport Security" tool. (ID#: 15-50345) See http://www.informationweek.com/it-life/ios-9-android-m-place-new-focus-on-security-privacy/a/d-id/1321005?

"Swift Key Vulnerability on Samsung Phones", Information Security Buzz, 24 June 2015. [Online]. Swift Key, a pre-installed keyboard software that comes on Samsung phones and cannot be disabled, was found to have a severe security flaw. When exploited, hackers can remotely control network traffic and execute high-privileged code. (ID#: 15-50346) See http://www.informationsecuritybuzz.com/swift-key-vulnerability-on-samsung-phones/

"NIST releases cyber security guidelines for government contractors", Cyber Defense Magazine, 24 June 2015. [Online]. NIST has released "Publication Citation: Protecting Controlled Unclassified Information in Nonfederal Information Systems and Organizations", a guideline on how federal agencies should handle the way they allow contractors and "outsiders" to handle their sensitive data. (ID#: 15-50361) See http://www.cyberdefensemagazine.com/nist-releases-cyber-security-guidelines-for-government-contractors/

"Hackers targeted the Polish Airline LOT, grounded 1,400 Passengers", Cyber Defense Magazine, 23 June 2015. [Online]. Over 1,400 passengers hoping to fly on flights with Polish carrier LOT had wrench throw in their travel plans when a cyber attack forced LOT to cancel 10 flights. The attack affected ground computer systems, preventing the creation of flight plans for outbound flights. The attack was resolved within five hours. (ID#: 15-50362) See http://www.cyberdefensemagazine.com/hackers-targeted-the-polish-airline-lot-grounded-1400-passengers/

"Major Banking Corporation Investing in Quantum Cryptography Company", Information Security Buzz, 23 June 2015. [Online]. The banking industry is becoming increasingly interested in the security that quantum cryptography can provide. QLabs has developed a way to use quantum technology to encrypt sensitive data with keys that are "unhackable", with help from banking investors. Though quantum cryptography has a great deal of potential, it is not a "fix-all" for cybersecurity issues. (ID#: 15-50347) See http://www.informationsecuritybuzz.com/major-banking-corporation-investing-in-quantum-cryptography-company/

"There's Another Adobe Flash Zero-Day And Chinese Hackers Are Abusing It", Forbes, 23 June 2015. [Online]. A group of Chinese hackers known as APT3 have been sending uncharacteristically unimaginative and generic phishing emails to exploit an Adobe Flash Zero-day. Once in a network, the attackers use advanced C&C techniques, backdoors, and other tricks to spy and steal information. The flaw, CVE-2015-3113, affects Internet Explorer and Firefox on certain operating systems. (ID#: 15-50348) See http://www.forbes.com/sites/thomasbrewster/2015/06/23/adobe-flash-zero-day-used-by-china-hackers/?ss=Security

"Facebook Makes It Free And Easy To Kill Latest Mac And iPhone Zero-Days", Forbes, 23 June 2015. [Online]. Facebook has developed "osquery", a tool to help detect vulnerabilities in Apple desktop and mobile operating systems. Osquery was developed in response to the discovery of cross-app resource attack vulnerability by Indiana University Bloomington researcher Luyi Xing. (ID#: 15-50349) See http://www.forbes.com/sites/thomasbrewster/2015/06/23/facebook-offers-apple-zero-day-protection/?ss=Security

"OPM Director Rejects Blame for Breach", GovInfoSecurity, 23 June 2015. [Online]. OPM Director Katherine Archuleta testified to a Senate panel that nobody in the OPM should be held personally accountable for the massive OPM breach, leaving the panel and many others with frustration over the lack of "clear lines of accountability". The blame, according to Archuleta, rests in the legacy systems used by the agency, as well as the attackers themselves. (ID#: 15-50343) See http://www.govinfosecurity.com/opm-director-rejects-blame-for-breach-a-8338

"IRS, industry to share data to fight tax fraud", GCN, 23 June 2015. [Online]. The IRS is taking steps towards fighting tax fraud by leveraging data sharing with industry. By using analytics and shared data, the IRS hopes to be able to better validate tax returns. (ID#: 15-50357) See http://gcn.com/articles/2015/06/23/irs-industry-tax-fraud-fight.aspx?admgarea=TC_SecCybersSec

"FitBit, Acer Liquid Leap Fail In Security Fitness", Dark Reading, 22 June 2016. [Online]. Security testing organization AV-TEST evaluated nine different fitness tracking devices, and found that all were lacking to at least a small degree in Bluetooth security. Some of them — especially the Acer Liquid Leap and FitBit Charge — are host to numerous serious security threats, including lack of code obfuscation and pairing with a smartphone without confirmation. (ID#: 15-50339) See http://www.darkreading.com/endpoint/fitbit-acer-liquid-leap-fail-in-security-fitness/d/d-id/1320984

"Getting Proactive Against Insider Threats", Security Week, 22 June 2015. [Online]. Mitigating insider threats requires a proactive, "forward-leaning" approach: allocating capital towards preventative measures, using "hyper-vigilant" security models, and focusing on the critical assets can vastly decrease the likeliness of suffering from a malicious insider. (ID#: 15-50350) See http://www.securityweek.com/getting-proactive-against-insider-threats

"How encryption keys could be stolen by your lunch", Computerworld, 22 June 2015. [Online]. Israeli researchers have developed a tool that can sniff electromagnetic signals from a computer and extrapolate information. Fittingly named "PITA", the device is around the size of a small piece of pita bread. It has been designed to sniff encryption keys from laptops running the open-source encryption software GnuPG 1.x. (ID#: 15-50354) See http://www.computerworld.com/article/2938753/security/how-encryption-keys-could-be-stolen-by-your-lunch.html

"Toshiba Working on Unbreakable Encryption Technology", The Wall Street Journal, 22 June 2014. [Online]. Toshiba has plans for a completely unbreakable encryption technology that relies on one of the fundamental concepts of the quantum world: observing a particle must change, in some manner, the state of the particle. One-time use encryption keys would be transmitted on a custom fiber optic cable; any attempt to steal these keys would change the data and make the snooping detectable. (ID#: 15-50355) See http://blogs.wsj.com/digits/2015/06/22/toshiba-working-on-unbreakable-encryption-technology/

"RubyGems Software Flaw Affects Millions of Installs", Infosecurity Magazine, 22 June 2015. [Online]. Researchers have discovered a security flaw in RubyGems, a distribution software that helps businesses and developers distribute software. The flaw allows attackers to bypass HTTPS and thereby force redirection to their own "gem servers", allowing them to distribute malware. (ID#: 15-50358) See http://www.infosecurity-magazine.com/news/rubygems-software-flaw-affects/

"Cybercrime is paying with 1,425% return on investment", Cyber Defense Magazine, 22 June 2015. [Online]. A report by Trustwave indicates that the typical cybercriminal should expect to make a 1,425% return on the money they spend executing attacks. They found that ransomware and exploit kits are among the most common methods used to compromise a victim's systems, along with extortion and ransoms. CTB-Locker was found to be one of the more notable pieces of malware. (ID#: 15-50363) See http://www.cyberdefensemagazine.com/cybercrime-is-paying-with-1425-return-on-investment/

"Underwriters of cyberinsurance policies need better understanding of cyber risks", Homeland Security Newswire, 19 June 2015. [Online]. With cyberattacks increasingly becoming a norm in the business world, cybersecurity insurance is becoming more popular. For the cyber insurance market to succeed, insurance underwriters need to have a firm grasp on the severity and significance of varying forms of cyber risks. This is made difficult by the constantly changing, dynamic nature of cyber threats. (ID#: 15-50353) See http://www.homelandsecuritynewswire.com/dr20150619-underwriters-of-cyberinsurance-policies-need-better-understanding-of-cyber-risks

"UC Irvine Medical Center announces breach affecting 4,859 patients", SC Magazine, 19 June 2015. [Online]. University of California (UC) Irvine Medical Center announced that it suffered a breach, affecting nearly 5,000 patients. The perpetrator was a single employee, who is said to have accessed records over the course of four years for a non-job-related purpose. (ID#: 15-50364) See http://www.scmagazine.com/uc-irvine-medical-center-announces-breach-affecting-4859-patients/article/421645/

"New Apple iOS, OS X Flaws Pose Serious Risk", Dark Reading, 18 June 2015. [Online]. Indiana University in Bloomington researchers discovered vulnerabilities in iOS and OS X that put passwords and other sensitive data at risk. The fault lies in a weakness in application isolation and authentication that could allow cross-application resource access attacks (XARAs). (ID#: 15-50340) See http://www.darkreading.com/endpoint/new-apple-ios-os-x-flaws-pose-serious-risk/d/d-id/1320949

"Routers Becoming Juicy Targets for Hackers", Tech News World, 18 June 2015. [Online]. Router vulnerabilities have become an item of increased interest for hackers, as they are often "low-hanging fruit": full of security holes and often under appreciated by users and ISPs. Backdoors and cross-site request forgery attacks make it easy for attackers to use routers to gain access to networks or mount Denial of Service attacks. (ID#: 15-50352) See http://www.technewsworld.com/story/82188.html

"White House: IRS, OMB appropriations bill increases cyber risk", FCW, 18 June 2015. [Online]. A bill approved by the House Appropriations Committee is set to reduce the IRS budget by nearly a billion dollars. Supporters point out the increased need for money elsewhere; however, others argue that the IRS needs money to better secure itself against attacks, such as the recent theft of taxpayer data. (ID#: 15-50356) See http://fcw.com/articles/2015/06/18/appropriations-cyber-risk.aspx

"U.S. Coast Guard commandant releases cyber strategy", GSN, 17 June 2015. [Online]. The U.S. Coast Guard has rolled out a new cyber strategy to help "ensure the prosperity and security of the nation's Maritime Transportation System (MTS) in the face of a rapidly evolving cyber domain." The framework is based on three priorities: maintaining information networks, combat adversaries by means of cyber capabilities, and to protect critical infrastructure. (ID#: 15-50351) See http://gsnmagazine.com/node/44709?c=cyber_security

"FBI Investigates Baseball Hack Attack", GovInfoSecurity, 17 June 2015. [Online]. An alleged cyber attack on the Houston Astros baseball team by rival team St. Louis Cardinals is being investigated by the FBI and DoJ. If it did indeed happen, the hack might be related to a data leak the Astros suffered in 2014, in which the team's internal trade talks were leaked onto a text-sharing site. (ID#: 15-50344) See http://www.govinfosecurity.com/fbi-investigates-baseball-hack-attack-a-8321

"New Malware Found Hiding Inside Image Files", Dark Reading, 16 June 2015. [Online]. Dell researchers published their findings on Stegoloader, a malware family that uses PNG files to store malicious code. This technique is a form of digital stenography, which has been used by malware authors for years; however, researchers fear that Stegoloader might indicate a new resurgence in stenography and similar countermeasures. (ID#: 15-50341) See http://www.darkreading.com/endpoint/new-malware-found-hiding-inside-image-files/d/d-id/1320895

"Wikipedia to Switch on HTTPS to Counter Surveillance Threat", InfoSecurity Magazine, 15 June 2015. [Online]. The Wikimedia Foundation announced that Wikipedia will be switching to HTTPS to hamper censorship and surveillance. The transition has been slow, as Wikimedia has been optimizing performance for people in regions where telecommunications infrastructure is less advanced. (ID#: 15-50326) See http://www.infosecurity-magazine.com/news/wikipedia-switch-https-counter/

"Update: Russia and China cracked Snowden's files, identified U.S., UK spies", Computerworld, 14 June 2015. [Online]. Agents with the U.K.'s MI6 have reportedly been withdrawn from foreign nations after Russian and Chinese agencies decrypted Snowden files that revealed the identities of UK and US agents. (ID#: 15-50329) See http://www.computerworld.com/article/2935558/cybercrime-hacking/russia-and-china-cracked-snowdens-files-identified-us-uk-spies.html

"Private security clearance info accessed in second OPM breach", SC Magazine, 13 June 2015. [Online]. In the wake of the massive OPM breach, the agency revealed that it suffered a separate breach. The attackers appear to have accessed Standard Form 86 documents, which contain information on not only security clearance applicants themselves but relatives and associates as well. (ID#: 15-50323) See http://www.scmagazine.com/detailed-sf-86-forms-may-have-been-tapped-by-chinese-operatives/article/420581/

"OPM's 'Cyber Pearl Harbor' might affect 14 million", FCW, 12 June 2015. [Online]. The OPM breach was first thought to have exposed sensitive information from 4 million current and former federal employees, though there is reason to believe that the number might be closer to 14 million, according to a scathing letter by the American Federation of Government Employees. (ID#: 15-50322) See http://fcw.com/articles/2015/06/12/opm-breach-gets-worse.aspx

"Apple iOS flaw exploitable to steal user password with a phishing email", Cyber Defense Magazine, 12 June 2015. [Online]. Forensics experts were able to create the iOS 8.3 Mail.app inject kit, which uses a vulnerability in Apple's default email app to inject HTML and collect user's passwords. Millions of users are vulnerable to the bug, which remains unpatched as of yet. (ID#: 15-50325) See http://www.cyberdefensemagazine.com/apple-ios-flaw-exploitable-to-steal-user-password-with-a-phishing-email/

"Chinese Hackers Circumvent Popular Web Privacy Tools", New York Times, 12 June 2015. [Online]. Silicon Valley security company AlienVault discovered that Chinese hackers have been able to circumvent TOR and VPNs, tools which Chinese citizens and journalists use to bypass China's internet censorship. Attackers targeted the websites of journalists and ethnic minorities, stealing personal info including names, addresses, and browser cookies. (ID#: 15-50334) See http://www.nytimes.com/2015/06/13/technology/chinese-hackers-circumvent-popular-web-privacy-tools.html?_r=1

"Hundreds Of Wind Turbines And Solar Systems Wide Open To Easy Exploits", Forbes, 12 June 2015. [Online]. German researcher Maxim Rupp discovered serious flaws in several clean energy systems, including wind turbines and solar-powered lighting. The vulnerabilities, which could allow an attacker to remotely disable or control the systems, were given a severity rating of 10 out of 10 due to their ease of implementation. (ID#: 15-50336) See http://www.forbes.com/sites/thomasbrewster/2015/06/12/hacking-wind-solar-systems-is-easy/?ss=Security

"Duqu2.0 knocks Kaspersky and security peers", SC Magazine, 11 June 2015. [Online]. After recovering from a major attack, Kaspersky Labs believes it was the victim of a version 2.0 of the highly sophisticated Duqu malware. Though it relies on numerous particularly advanced techniques and three individual zero-day vulnerabilities, the delivery method was nothing new — a simple email attachment. (ID#: 15-50321) See http://www.scmagazine.com/duqu20-knocks-kaspersky-and-security-peers/article/420251/

"SC Congress Toronto: Experts discuss incident response in a breach era", SC Magazine, 11 June 2015. [Online]. Security experts converged on the SC Congress Toronto to discuss management of and response to breaches. Transparency, straightforwardness, and a lack of speculation can help prevent further problems in the wake of a breach, while proper preparation can mitigate potential future security incidents. (ID#: 15-50324) See http://www.scmagazine.com/sc-congress-toronto-panel-plies-ways-to-improve-incident-response/article/420231/

"Why Organisations Should Consider Cloud Based Disaster Recovery", InformationSecurity Buzz, 11 June 2015. [Online]. Recent large-scale security issues with cloud-based data storage have damaged trust in cloud-based tech; however, cloud-based disaster recovery may not be under fair consideration by many businesses. Whereas "in-house" data takes up resources, a cloud-based solution can be used by an organization to make quick and flexible responses to problems without the need for worrying about hardware. (ID#: 15-50337) See http://www.informationsecuritybuzz.com/why-organisations-should-consider-cloud-based-disaster-recovery/

"Russia Pegged for ‘Cyber Caliphate’ Attack on TV5Monde", InfoSecurity Magazine, 10 June 2015. [Online]. Government-supported Russian hacking group APT28 is believed to be behind an attack on French television company TV5Monde this past Spring, which caused several channels to go down. The code was written in Cyrillic during St. Petersburg / Moscow business hours, and used the same server and registrar that has been used by the group in the past. (ID#: 15-50327) See http://www.infosecurity-magazine.com/news/russia-pegged-cyber-caliphate/

"VMware patches virtual machine escape flaw on Windows", Computerworld, 10 June 2015. [Online]. VMware has fixed an issue with a Windows printer interface feature in its virtual machine software that would allow hackers to jump out of the VM and execute code on the host operating system. This vulnerability, along with a few others that were also patched, would have helped facilitate DoS attacks. (ID#: 15-50330) See http://www.computerworld.com/article/2934185/security0/vmware-patches-virtual-machine-escape-flaw-on-windows.html

"Army fights a two-front cyber war", FCW, 10 June 2015. [Online]. A recent attack on the public website of the U.S. Army, believed to have been orchestrated by Syrian hackers, took down the website for several hours. The Army has taken extensive measures to keep communications systems secure, though high-level cyber personnel believe there is still work to be done — especially when it comes to fielded systems. (ID#: 15-50333) See http://fcw.com/articles/2015/06/10/army-fights-cyber-war.aspx

"Can the power grid survive a cyberattack?", Homeland Security Newswire, 10 June 2015. [Online]. As America's economy and capabilities become increasingly reliant on an already stressed power grid, the threat of cyber attack is becoming an item of concern. The increasing connectivity and networking of power systems could open new vectors from which to attack. Further research and development into securing the power grid may be one of the most significant challenges for our generation. (ID#: 15-50335) See http://www.homelandsecuritynewswire.com/dr20150610-can-the-power-grid-survive-a-cyberattack

"Flash Malware Soars Over 300% in Q1 2015", InfoSecurity Magazine, 09 June 2015. [Online]. A recent and dramatic rise in the prevalence of flash malware could be due to the CTB-Locker and other new malware families. These advanced examples of malware are very hard to detect; for example, CTB-locker uses C&C servers on the TOR network, as well as self-healing viruses that re-download themselves even after reformatting the HDD. (ID#: 15-50328) See http://www.infosecurity-magazine.com/news/flash-malware-soars-over-300-in-q1/

"Google: Let the machines remember the password", GCN, 09 June 2015. [Online]. Google’s Advanced Technology and Projects group has developed Project Vault, a microSD-based computing environment that uses complete encryption and other features to securely store and transmit data, especially passwords and biometric data. Google hopes that this technology, along with advanced behavioral biometrics being developed under Project Abacus, will vastly improve outdate alphanumeric password-based security. (ID#: 15-50331) See http://gcn.com/articles/2015/06/09/google-project-vault-abacus.aspx?admgarea=TC_SecCybersSec

"White House Calls For Encryption By Default On Federal Websites By Late 2016", Dark Reading, 09 June 2015. [Online]. With a mere third of federal agencies employing encrypted, HTTPS websites, the Office of Management and Budget (OMB) has decided on a deadline (December 31, 2016) by which all agencies will be required to use HTTPS. This move will help, but not fix, security issues: "HTTPS-only guarantees the integrity of the connection between two systems, not the systems themselves." (ID#: 15-50338) See http://www.darkreading.com/application-security/white-house-calls-for-encryption-by-default-on-federal-websites-by-late-2016/d/d-id/1320789

"Office 365 gets additional email protections", GCN, 08 June 2015. [Online]. Microsoft is offering Office 365 customers additional protection against malicious emails. Exchange Online Advanced Threat Protection is available to government and commercial customers; non-profit and educational organizations will have to wait till a later date. The software detects malicious attachments, blocks unsafe links, and lets IT staff track malicious links that users have clicked. (ID#: 15-50332) See http://gcn.com/articles/2015/06/08/office-365-email-apt.aspx?admgarea=TC_SecCybersSec

"Russia Blamed for Data Stealing Attack on German Parliament", Infosecurity Magazine, 05 June 2015. [Online]. Almost a month after the IT staff at Bundestag (the German Parliament) noticed that attackers were trying to penetrate their computer network, an investigation has indicated that the attack was likely state-sponsored. The trojan used in the attack resembles malware used previously in a suspected Russian attack on German computer networks, and given tensions between Russians and the West over Ukraine, many are pointing to the Russia as a culprit. (ID#: 15-50294) See http://www.infosecurity-magazine.com/news/russia-blamed-data-stealing-attack/

"Auditing GitHub users’ SSH key quality", Benjojo.co.uk (Blog post), 05 June 2015. [Online]. Analysis of nearly 1.4 million publicly-available GitHub SSH keys reveals that a large number of GitHub users have very small and weak encryption keys, opening their account to being compromised. Some of these users even have commit access to significant projects like Django and Python crypto libraries. (ID#: 15-50296) See https://blog.benjojo.co.uk/post/auditing-github-users-keys

"Data hacked from U.S. government dates back to 1985: U.S. official", Reuters, 05 June 2015. [Online]. Hackers believed to be Chinese have breached the Office of Personnel Management computers, compromising personal data of as many as 4 million current and retired government employees, dating back to 1985. It is not known if that attack was state-sponsored. (ID#: 15-50297) See http://www.reuters.com/article/2015/06/06/us-cybersecurity-usa-idUSKBN0OL1V320150606

"Situational Awareness: Elusive Key Ingredient of Worthwhile Cyber Threat Intelligence", Security Week, 05 June 2015. [Online]. Combatting cyber security threats requires situational awareness in all levels of an organization. Four different mindsets can provide a barrier to situational awareness: "Somebody else's problem", tunnel-vision, "a case of mistaken identity", and a "nice to have" attitude towards security. (ID#: 15-50300) See http://www.securityweek.com/situational-awareness-elusive-key-ingredient-worthwhile-cyber-threat-intelligence

"After breaches, higher-ed schools adopt two-factor authentication", Computerworld, 05 June 2015. [Online]. Early last year, an unusually sophisticated phishing attack allowed criminals to steal the paychecks of thirteen faculty members at Boston University. Similar incidents at other universities have prompted many universities to adopt better two-factor authentication techniques. (ID#: 15-50311) See http://www.computerworld.com/article/2931843/security0/after-breaches-higher-ed-schools-adopt-two-factor-authentication.html

"The NSA boosted Internet monitoring to catch hackers", Computerworld, 04 June 2015. [Online]. According to leaked NSA documents, the DoJ permitted the NSA to target internet traffic for the purpose of catching hackers from crime organizations or foreign governments. Despite controversy over the lack of warrants, the NSA points out that it "plays a pivotal role in producing intelligence against foreign powers that threaten our citizens and allies while safeguarding personal privacy and civil liberties." (ID#: 15-50312) See http://www.computerworld.com/article/2931724/internet/the-nsa-boosted-internet-monitoring-to-catch-hackers.html

"Tesla Debuts Bug Bounty Program", Infosecurity Magazine, 05 June 2015. [Online]. Tesla Motors has started a bug-bounty program for teslamotors.com and other domains owned by the website. It does not offer bounties for vulnerabilities in the cars themselves; the program for finding these vulnerabilities is much less publicized. (ID#: 15-50315) See http://www.infosecurity-magazine.com/news/tesla-debuts-bug-bounty-program/

"Windows PowerShell and OpenSSH: Together at last after nearly a decade", ExtremeTech, 04 June 2015. [Online]. PowerShell, Microsoft's response to UNIX and Linux shells like BASH, will be receiving an unexpected upgrade: PowerShell will soon be able to support OpenSSH. The ability to work with the open-source tool will give admins and network specialists a much better way to communicate data securely while mitigating attacks that unencrypted protocols like telnet, rlogin, and ftp are vulnerable to. (ID#: 15-50295) See http://www.extremetech.com/computing/207364-windows-powershell-and-openssh-together-at-last-after-nearly-a-decade

"Google's Android Permissions Get Granular", TechNewsWorld, 04 June 2015. [Online]. Google announced at Google I/O that the new Android M mobile operating system will take a more "granular" approach to app permissions. By simplifying the permission set and allowing the user to decide what data their apps can access, Google hopes to increase user freedom and increase consumer security. (ID#: 15-50301) See http://www.technewsworld.com/story/82139.html

"DISA redoing content delivery", FCW, 04 June 2015. [Online]. HP was recently awarded a $469 million contract that aims to create a unified platform that the Defense Department can use for both classified and unclassified telecommunications. The new system, called the Universal Content Delivery Service, will replace the existing Global Content Delivery Service. (ID#: 15-50307) See http://fcw.com/articles/2015/06/04/disa-content-delivery.aspx

"Mac zero-day makes rootkit infection very easy", Cyber Defense Magazine, 04 June 2015. [Online]. Security Researcher Pedro Vilaça discovered a flaw that makes EFI firmware in a few specific Apple computers vulnerable to attack. By utilizing a flaw in the way that the computer protects flash memory while the computer cycles in and out of sleep mode, an attacker could remotely access a very low level on the machine and install an EFI rootkit. (ID#: 15-50317) See http://www.cyberdefensemagazine.com/mac-zero-day-makes-rootkit-infection-very-easy/

" 'MEDJACK' tactic allows cyber criminals to enter healthcare networks undetected", SC Magazine, 04 June 2015. [Online]. Security company TrapX released a report describing "MEDJACK", a tactic which hackers could use (and likely have used) to move throughout the main network of healthcare providers by utilizing outdated, unpatched medical devices such as an X-ray scanner. So-called medjacking could be partly to blame for recent breaches like that of CareFirst BlueCross BlueShield. (ID#: 15-50318) See http://www.scmagazine.com/trapx-profiles-medjack-threat/article/418811/

"States flex cyber leadership muscle", GCN, 03 June 2015. [Online]. With the increase of cyberattacks on state and local governments, Governor Chris Christie followed a recent trend in state-level defensive cyber measures by signing the New Jersey Cybersecurity and Communications Integration Cell (NJCCIC). The NJCCIC is intended to promote cooperation and sharing between industry and government in the state. (ID#: 15-50308) See http://gcn.com/articles/2015/06/03/states-cyber-leadership.aspx?admgarea=TC_SecCybersSec

"NIST drafts framework for privacy risk", GCN, 03 June 2015 [Online]. With more and more of people's private information being put in potentially compromising situations, the NIST has responded with a draft of a framework for managing privacy risks in federal information systems. The document, called Privacy Risk Management for Federal Information Systems, will utilize systematic methods for both identifying and addressing security risks. (ID#: 15-50310) See http://gcn.com/articles/2015/06/03/nist-privacy-risk.aspx?admgarea=TC_SecCybersSec

"Locker ransomware author quickly apologizes, decrypts victims' files", SC Magazine, 03 June 2015. [Online]. In an unusual turn of events, the author of a new ransomware called "Locker" apologized to victims and posted decryption keys so they could retrieve their locked files. It's not certain exactly why the hacker, who made a mere $169, had such a sudden change of heart. (ID#: 15-50319) See http://www.scmagazine.com/locker-author-apologizes-for-resulting-scams/article/418529/

"FireEye Researchers Identify Malware Threat Targeting POS Terminals", Information Security Buzz, 02 June 2015. [Online]. Security researchers have identified a new instance of POS malware that masquerades as an attachment to a fake resume email. When opened, it uses an unusually well-engineered method to try to infiltrate a specific Windows-based POS system. (ID#: 15-50299) See http://www.informationsecuritybuzz.com/fireeye-researchers-identify-malware-threat-targeting-pos-terminals/

"Google Creates One-Stop Privacy and Security Shop", TechNewsWorld, 02 June 2015. [Online]. Tech giant Google released a feature called "My Account", which allows users to manage the privacy and security settings for their Google accounts. The changes "are mainly about multifactor authentication and SSO, and linking all of your devices so Google knows it's you," though centralization might make personal data more vulnerable to attack. (ID#: 15-50302) See http://www.technewsworld.com/story/82126.html

"Feds' Photobucket Strategy Could Hobble White Hats", TechNewsWorld, 02 June 2015. [Online]. To combat the spread of new malware, the DoJ and FBI are taking a new approach: by going after distributors, they hope to stem the ever-steady flow of new malware samples into the hands of those who use it. This new trend is indicated by an increase in arrests of distributors, notably the recent arrests of two men who made an app for stealing Photobucket content. (ID#: 15-50303) See http://www.technewsworld.com/story/82125.html

"USMobile launches Scrambl3 mobile, Top Secret communication-standard app", Homeland Security Newswire, 02 June 2015. [Online]. Mobile phone services developer USMobile announced a new tool called Scrambl3, which uses "tunnels" in the deep web to encrypt and protect smartphone communications. The technology can serve business, government, and personal users alike. (ID#: 15-50305) See http://www.homelandsecuritynewswire.com/dr20150602-usmobile-launches-scrambl3-mobile-top-secret-communicationstandard-app

"Facebook Can Now Encrypt Notification Emails", PC Mag., 02 June 2015. [Online]. Though Facebook has offered encryption for a few years already, users now also have the option of protecting their Facebook notification emails with encryption. This is accomplished with OpenPGP, which requires the user to manually manage two keys (one private, one public). (ID#: 15-50306) See http://www.pcmag.com/article2/0,2817,2485201,00.asp
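The two-key idea behind OpenPGP can be illustrated with a deliberately tiny RSA example. This Python sketch is purely conceptual and is not Facebook's or OpenPGP's actual implementation; real OpenPGP keys are thousands of bits long and encrypt a symmetric session key rather than the message itself:

```python
# Toy RSA with tiny primes -- for illustration of the public/private
# key concept only, never for real use.
p, q = 61, 53
n = p * q                      # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                         # public exponent (published openly)
d = pow(e, -1, phi)            # private exponent (kept secret)

def encrypt(m, pub=(e, n)):
    # Anyone who holds the public key can encrypt a message...
    return pow(m, pub[0], pub[1])

def decrypt(c, priv=(d, n)):
    # ...but only the private-key holder can decrypt it.
    return pow(c, priv[0], priv[1])

ciphertext = encrypt(42)
assert decrypt(ciphertext) == 42
```

In OpenPGP the user publishes the public key so services like Facebook can encrypt mail to them, while the private key never leaves the user's machine.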

"New SOHO router security audit uncovers more than 60 flaws in 22 models", Computerworld, 02 June 2015. [Online]. As part of a master's thesis, researchers at Spain's Universidad Europea de Madrid released a list of over 60 security flaws in 22 routers from various manufacturers. The vulnerabilities allow hackers to inject malicious code, take control of a router through malicious websites, and mount cross-site request forgery (CSRF) attacks, among others. (ID#: 15-50313) See http://www.computerworld.com/article/2930554/security/new-soho-router-security-audit-uncovers-more-than-60-flaws-in-22-models.html
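A standard defense against the CSRF attacks mentioned above is a per-session anti-forgery token that a cross-site attacker cannot know. The Python sketch below is a generic illustration of that check, not code from the audited routers; the function name and flow are assumptions for the example:

```python
import hmac
import secrets

# The server issues a random per-session token and embeds it in each
# legitimate form it serves.
session_token = secrets.token_hex(16)

def is_request_authentic(submitted_token: str) -> bool:
    # Constant-time comparison; a forged request from a malicious
    # website cannot supply the correct token and is rejected.
    return hmac.compare_digest(submitted_token, session_token)

assert is_request_authentic(session_token)
assert not is_request_authentic("attacker-guess")
```

Many of the audited routers reportedly accepted state-changing requests without any such token, which is what makes a malicious web page able to reconfigure them.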

"Cyber-pledge: US, Japan Have Each Other's Backs", Infosecurity Magazine, 02 June 2015. [Online]. The U.S. and Japan have agreed to lend help to each other in the event of a cyber attack. This is particularly important for Japan, which has a cybersecurity strength that some would consider disproportionately small considering the nation's prominence in the tech world. (ID#: 15-50314) See http://www.infosecurity-magazine.com/news/cyberpledge-us-japan-have-each/

"US Healthworks Suffers Data Breach Via Unencrypted Laptop", Forbes, 01 June 2015. [Online]. Early this past May, a US Healthworks laptop was stolen from an employee. The laptop, though password protected, was unencrypted and had personally identifiable information on it, potentially exposing names, addresses, dates of birth, and Social Security numbers. (ID#: 15-50359) See http://www.forbes.com/sites/davelewis/2015/06/01/us-healthworks-suffers-data-breach-via-unencrypted-laptop/?ss=Security

"Apple 'Text of Death' Flaw Hits Twitter, Snapchat", Infosecurity Magazine, 31 May 2015. [Online]. A flaw in CoreText, Apple's text-processing system, leaves all Apple products — everything from desktops to smart watches — vulnerable to DoS attacks. When CoreText tries to handle a specific sequence of non-Latin characters, the entire device crashes. The impact is not limited to Apple's own software, as both the Twitter and Snapchat apps were found to be vulnerable to the bug. (ID#: 15-50316) See http://www.infosecurity-magazine.com/news/apple-text-of-death-flaw-hits/

"FBI to Dig Into IRS Data Breach Debacle", TechNewsWorld, 29 May 2015. [Online]. Following warnings by the U.S. Government Accountability Office in March, hackers used the Get Transcript application to access IRS user accounts and steal personal data from at least 100,000 taxpayers. (ID#: 15-50304) See http://www.technewsworld.com/story/82111.html

"Biz Email Fraud Could Hit $1 Billion", GovInfoSecurity, 28 May 2015. [Online]. By using social engineering and "spearphishing" tactics, cyber criminals trick staff at businesses and banking institutions into facilitating wire fraud schemes: posing as someone they are not, criminals call or email staff with an urgent request for money to be wired. By the estimate of bank fraud prevention officer David Pollino, the cost of these "masquerading" schemes could reach $1 billion in 2015. (ID#: 15-50298) See http://www.govinfosecurity.com/biz-email-fraud-could-hit-1-billion-a-8266

"ACLU urges gov't to establish bug bounty programs, disclosure policies", SC Magazine, 28 May 2015. [Online]. In a recent letter to the U.S. Department of Commerce's Internet Policy Task Force, the ACLU called for more incentives for researchers to help the government's cybersecurity cause, saying "…we are not aware of any U.S government agency that has established a bug bounty program intended to reward researchers who find flaws in U.S. government systems and websites." The ACLU believes that, in addition to bug bounties, publishing a disclosure policy and publicizing bug bounty programs can help government agencies stay cyber-secure. (ID#: 15-50320) See http://www.scmagazine.com/govt-needs-to-catch-up-with-tech-coss-researcher-friendly-policies-aclu-says/article/417266/

"VA security holding in face of mounting threats", GCN, 27 May 2015. [Online]. Despite facing a never-ending onslaught of cyber attacks, the Department of Veterans Affairs appears to be faring well, with a lower-than-average number of medical devices compromised, fewer suspicious emails, and zero data lost through breaches. (ID#: 15-50309) See http://gcn.com/articles/2015/05/27/va-malware.aspx?admgarea=TC_SecCybersSec

 


    International News 

 

 

“How a Hacker Could Hijack a Plane From Their Seat”, Homeland Security News Wire, 21 May 2015. [Online]. If flying in an airplane did not scare you before, it may now. Reports have surfaced that an anonymous cybersecurity professional successfully took over an airplane’s controls, all from his seat. The safety of wireless networks on aircraft is now in question. (ID#: 15-60013)
See: http://www.homelandsecuritynewswire.com/dr20150521-how-a-hacker-could-hijack-a-plane-from-their-seat

“Iran Blames U.S. for Cyber-Attack on Oil Ministry”, Info Security, 29 May 2015. [Online]. The Iranian Oil Ministry was hit with a cyber-attack in March, and the country has since blamed the U.S., which is also reported to have partnered with Israel to create the Stuxnet attack in 2009. Meanwhile, Cylance, a cyber-security company, published a report in December that dubbed Iran “the new China” after uncovering the country’s plans to launch a massive cyber-attack with the intention of stealing military information. (ID#: 15-60019)
See: http://www.infosecurity-magazine.com/news/iran-blames-us-for-cyber-attack-on/

“U.S. to Bring Japan Under Its Cyber Defense Umbrella”, NBC News, 01 June 2015. [Online]. The U.S. agreed to lend a helping hand to Japan's cybersecurity forces. The partnership will provide much needed assistance to the Japanese, as their cybersecurity development has fallen behind that of the United States. The U.S. and Japan aim to protect against attacks from common adversaries such as China as well as attacks from independent parties. (ID#: 15-60015)
See: http://www.nbcnews.com/tech/security/u-s-bring-japan-under-its-cyber-defense-umbrella-n367651

“United Nations: We Need Strong Encryption to Defend Free Speech”, Info Security, 01 June 2015. [Online]. The United Nations is calling for stronger encryption to protect people’s right to anonymity online, arguing that anonymity helps safeguard free speech. Some governments counter that strong encryption aids criminals in committing cyber crime. (ID#: 15-60017)
See: http://www.infosecurity-magazine.com/news/united-nations-strong-encryption/

“Japan Pension Service hack used classic attack method”, The Japan Times, 02 June 2015. [Online]. Personal data of over 1 million people was stolen from the Japan Pension Service (JPS). Unlike many of the complex, headline-grabbing hacks of late, this attack is believed to have been carried out when employees accidentally opened a well-disguised email containing a virus. The JPS stated that the breach was limited to ID numbers, names, and addresses, and that other vital information remained secure. (ID#: 15-60016)
See: http://www.japantimes.co.jp/news/2015/06/02/national/social-issues/japan-pension-service-hack-used-classic-attack-method/#.VXrgsOsW15g

“U.K. Government Urges Action as Cost of Cyber Security Breaches Doubles”, Forbes, 02 June 2015. [Online]. Over the last year, certain cyber-attacks have cost U.K. businesses as much as $5 million. Despite more companies utilizing the government’s cyber security advice and resources, new threats continue to emerge, not least from employees themselves: a shocking 75% of big businesses suffered a cyber-attack involving one of their own employees. (ID#: 15-60017)
See: http://www.forbes.com/sites/dinamedland/2015/06/02/uk-government-urges-action-as-cost-of-cyber-security-breaches-doubles/?ss=Security

“Google Creates One-Stop Privacy and Security Shop”, Tech News World, 02 June 2015. [Online]. Google released a new centralized interface called “My Account” for users to manage all of their settings, security, and privacy in one location. It is intended to let users easily find and manage information they would previously have had to dig for; however, there are potential downsides. Jon Rudolph commented that if an attacker gets access to a user’s My Account page, it becomes that much easier for information to be taken in bulk. (ID#: 15-60022)
See: http://www.technewsworld.com/story/82126.html

“Microsoft lets EU governments inspect source code for security issues”, Computer World, 03 June 2015. [Online]. Microsoft opened the doors of what they are calling a “Transparency Center” in Belgium.  The goal of the new facility is to provide governments with a convenient place to come and look in depth at the code behind some of Microsoft’s products so they can review potential threats and security flaws. Microsoft hopes to strengthen their relationship with different governments through this new program. (ID#: 15-60018)  
See: http://www.computerworld.com/article/2931107/government-it/microsoft-lets-eu-governments-inspect-source-code-for-security-issues.html

“Obama: U.S. Cybersecurity Problems Will Get Worse”, NBC News, 08 June 2015. [Online]. The president reiterated that the government is well aware the country’s cyber security is not up to par. This follows the data breach at the Office of Personnel Management, in which roughly 4 million government workers had private information stolen. (ID#: 15-60023)
See: http://www.nbcnews.com/tech/security/obama-u-s-cybersecurity-problems-will-get-worse-n371651

“Federal Government Suffers Massive Hacking Attack”, Huffington Post, 09 June 2015. [Online]. The FBI is looking into a cyber-attack on government personnel records, believed to have originated in China. The hack targeted personal information held by the Office of Personnel Management and the Interior Department. At this time it is still unclear why the attack was not detected. (ID#: 15-60012)
See: http://www.huffingtonpost.com/2015/06/04/government-data-breach_n_7514620.html?utm_hp_ref=cybersecurity

“Cybersecurity Firm Says Spying Operation Targeted Hotels Hosting Iran Nuclear Talks”, Star Tribune, 10 June 2015. [Online]. Russia-based cybersecurity firm Kaspersky Lab was itself hacked via three separate, previously unknown exploits in Microsoft Software Installer. The firm also reported that the attack’s creator used it to hack into multiple hotels where Iranian government officials were meeting with other leaders. (ID#: 15-60014)
See: http://www.startribune.com/cybersecurity-firm-says-spying-campaign-targeted-iran-talks/306790151/

“The Dinosaurs of Cybersecurity Are Planes, Power Grids and Hospitals”, Tech Crunch, 10 July 2015. [Online]. Some of the most prominent cybersecurity risks lie in infrastructure: systems such as airplanes, power grids, and hospitals. As these systems are compromised, patches are developed to remedy the problems, but patches are slow to develop and roll out; by the time they are complete, the damage has often already been done. (ID#: 15-60040)
See: http://techcrunch.com/2015/07/10/the-dinosaurs-of-cybersecurity-are-planes-power-grids-and-hospitals/

“Why Cyber Security Experts Are Taking Aim At Sourced Traffic”, Ad Exchanger, 22 June 2015. [Online]. Almost 90% of all sourced traffic is generated by bots designed to behave as much like normal users as possible. Cutting off this source of income would have a large impact on criminals committing ad fraud. (ID#: 15-60024)
See: http://adexchanger.com/online-advertising/why-cyber-security-experts-are-taking-aim-at-sourced-traffic/

“Up to the US to resume cyber-security talks, says China”, Reuters, 23 June 2015. [Online]. Tensions are growing after the United States accused China of stealing personal information from the Office of Personnel Management. In the past the two nations had worked together to improve security; of late, however, that relationship has been strained. China now says that if the U.S. wants to continue negotiating on cyber security, it is up to Washington to restart the conversation. (ID#: 15-60020)
See: http://www.reuters.com/article/2015/06/23/us-usa-china-cybersecurity-idUSKBN0P30ZA20150623

“The Real Cost of Ignoring Cybersecurity”, Bisnow, 23 June 2015. [Online]. A troubling number of small businesses are being forced to shut down by cyber-attacks. Bytegrid chief revenue officer Drew Fassett estimates that losing just 20MB of data to a cyber-attack can cost a company nearly $20,000. (ID#: 15-60025)
See: https://www.bisnow.com/national/news/data-center-bisnow-national/the-real-cost-of-ignoring-cybersecurity-47421

“Jeb Bush blasts the White House on cybersecurity”, Fortune, 23 June 2015. [Online].  Jeb Bush called for the president to prioritize the country’s cybersecurity.  Bush was critical of President Obama’s handling of the OPM data breach. (ID#: 15-60026)
See: http://fortune.com/2015/06/23/jeb-bush-cybsersecurity/

“U.S., China agree to cybersecurity code of conduct”, SC Magazine, 26 June 2015. [Online]. The US and China hope to bring peace to their long-unsteady cyber relationship with a new agreement, one of the results of a three-day meeting between the two countries. (ID#: 15-60027)
See: http://www.scmagazine.com/us-china-summit-talks-turn-to-cybersecurity/article/423175/

“Report on Sony's Cybersecurity Blunder Shows the Pitfalls of Negligence”, Inc, 26 June 2015. [Online]. A new report claims that Sony was well aware of the risk it was taking in releasing The Interview. Instead of strengthening its cybersecurity, Sony opted to alter scenes from the movie to try to make it less offensive. According to the report, Sony lacked even basic precautions that more than likely would have been enough to repel the hack. (ID#: 15-60028)
See: http://www.inc.com/spencer-bokat-lindell/sony-was-made-aware-of-risks-to-their-cybersecurity-and-did-little-to-prevent-them.html

“Masdar and MIT to Collaborate in Cybersecurity”, Gulf News, 28 June 2015. [Online]. Over the next two years, MIT and the Masdar Institute of Science and Technology will collaborate to study, and hopefully prevent, cyber-attacks on the infrastructure of the United Arab Emirates. The two groups hope that this case study will not only help them share knowledge and skills, but also generate interest in cybersecurity in the UAE. (ID#: 15-60021)
See: http://gulfnews.com/news/uae/crime/masdar-and-mit-to-collaborate-in-cybersecurity-1.1542037

“New Tactics for Improving Critical Infrastructure Cybersecurity Pushed by MIT Consortium”, Search Compliance, 29 June 2015. [Online]. MIT created the (IC)3 consortium to advance the cybersecurity of critical infrastructure. Professor Stuart Madnick said the consortium wants to focus on management and strategy rather than technical issues. (ID#: 15-60029)
See: http://searchcompliance.techtarget.com/feature/New-tactics-for-improving-critical-infrastructure-cybersecurity-pushed-by-MIT-consortium

“A Bird’s Eye View of the Legal Landscape for Cybersecurity”, Inside Counsel, 29 June 2015. [Online]. The field of cybersecurity is constantly undergoing rapid changes. With so many laws, regulations, and guidelines to abide by, businesses must adapt quickly. (ID#: 15-60030)
See: http://www.insidecounsel.com/2015/06/29/a-birds-eye-view-of-the-legal-landscape-for-cybers

“Teach All Computing Students About Cybersecurity, Universities Told”, Times Higher Education, 30 June 2015. [Online]. New regulations for UK universities now require cybersecurity to be included as part of a computer science degree. The regulations say students need to learn about securing systems and responding to attacks. Universities will be given two years to comply with the new guidelines. (ID#: 15-60031)
See: https://www.timeshighereducation.co.uk/news/teach-all-computing-students-about-cybersecurity-universities-told

“Doctors See Big Cybersecurity Risks, Compliance as Key for Hospitals”, Xconomy, 01 July 2015. [Online]. A recent poll shows that doctors are not confident in their hospitals' cybersecurity. Poll results show where opinions on the issue differ between the doctors and some of the system administrators. Both parties agree that the key to solving their problem is compliance. (ID#: 15-60032)
See: http://www.xconomy.com/boston/2015/07/01/doctors-see-big-cybersecurity-risks-compliance-as-key-for-hospitals/

“Is the information security industry having a midlife crisis?”, CSO Online, 01 July 2015. [Online]. The battle for information security is being lost. Tsion Gonen, CSO of SafeNet, said that it is no longer reasonable to keep trying to prevent breaches outright. Instead, he suggested, the focus should be on preventing attackers from finding any important data once they are inside. (ID#: 15-60033)
See: http://www.csoonline.com/article/2941097/security-awareness/is-the-information-security-industry-having-a-midlife-crisis.html

"Germany passes strict cyber-security law to protect 'critical infrastructure'", RT, 11 July 2015. [Online]. A new law in Germany requires over 2000 service providers to comply with new information security standards.  The law covers any service labelled as "critical infrastructure" including transportation, utilities, finance and more. (ID#: 15-60034)
See: http://rt.com/news/273058-german-cyber-security-law/

“Google Bets Big on Cybersecurity with $100M Investment”, PYMNTS, 14 July 2015. [Online]. CrowdStrike, a cybersecurity company specializing in endpoint protection, confirmed they received $100 million from Google Capital.  The company believes that as firewalls and antivirus slowly become less and less effective, their endpoint protection service puts them ahead of the competition.  Google released a statement saying that they were very impressed with the rapid growth of the company. (ID#: 15-60035)
See: http://www.pymnts.com/news/2015/google-bets-big-on-cybersecurity-with-100m-investment/

“Can artificial intelligence stop hackers?”, Fortune, 14 July 2015. [Online]. Over the last 10 years, cybersecurity spending has seen an increase of almost 700%. Despite that staggering number, the results have not followed.  Symantec CTO Amit Mital believes that the answer lies with artificial intelligence.  He stated that humans simply take too long to identify an attack and then counter it. However, this raises the question of how much power humans are willing to turn over to machines. (ID#: 15-60036)
See: http://fortune.com/2015/07/14/artificial-intelligence-hackers/

“Chinese Bid for US Chipmaker Could Raise Cybersecurity Fears”, The Hill, 14 July 2015. [Online]. Tsinghua Unigroup, a Chinese state-run company, is attempting to purchase Micron Technology, an American chip producer. Meanwhile, China is pushing a new law that would allow it to review all networking equipment. Some fear that this may be an attempt to gain access to the source code of many foreign products and use it in hacking campaigns or other malicious activities. (ID#: 15-60037)
See: http://thehill.com/policy/cybersecurity/247801-chinese-bid-for-us-chip-maker-could-raise-cybersecurity-fears

“Automakers Unite to Prevent Cars from Being Hacked”, Fortune, 14 July 2015. [Online]. The Alliance of Automobile Manufacturers, a group of car companies including Ford and GM, teamed up to create an information hub that will be used primarily to share data in an effort to improve the cybersecurity of motor vehicles. The center is expected to be operational by the end of this calendar year. (ID#: 15-60038)
See: http://fortune.com/2015/07/14/automakers-share-security-data/

“Cybersecurity Intern Accused in Huge Hacking Bust”, CNN Money, 15 July 2015. [Online]. The US Department of Justice announced that it was able to take down Darkode, one of many online “black markets”. Among those charged was Morgan Culbertson, an intern at the cybersecurity firm FireEye, where he researched malware on Android devices. He is charged with creating a malicious program known as Dendroid, capable of stealing data and assuming control over any Android device it infects. (ID#: 15-60039)
See: http://money.cnn.com/2015/07/15/technology/hacker-fireeye-intern/

 


(ID#: 15-5612)




 

International Security Related Conferences

 

 
SoS Logo

Conferences

 

The following pages provide highlights of Science of Security-related research presented at the following international conferences:

(ID#: 15-5614)




 

International Conferences: Software Analysis, Evolution and Reengineering (SANER) Quebec, Canada

 
SoS Logo

International Conferences: Software Analysis, Evolution and Reengineering (SANER) Quebec, Canada

 

The 2015 IEEE 22nd International Conference on Software Analysis, Evolution and Reengineering (SANER) was held 2-6 March 2015 at École Polytechnique de Montréal in Montréal, Québec, Canada. SANER is a research conference on the theory and practice of recovering information from existing software and systems. It explores innovative methods of extracting the many kinds of information that can be recovered from software, software engineering documents, and systems artifacts, and examines innovative ways of using this information in system renovation and program understanding. The presentations cited here relate specifically to the Science of Security. Details about the conference can be found on its web page at: http://saner.soccerlab.polymtl.ca/doku.php?id=en:start


 

Saied, M.A.; Benomar, O.; Abdeen, H.; Sahraoui, H., "Mining Multi-level API Usage Patterns," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 23-32, 2-6 March 2015. doi: 10.1109/SANER.2015.7081812
Abstract: Software developers need to cope with complexity of Application Programming Interfaces (APIs) of external libraries or frameworks. However, typical APIs provide several thousands of methods to their client programs, and such large APIs are difficult to learn and use. An API method is generally used within client programs along with other methods of the API of interest. Despite this, co-usage relationships between API methods are often not documented. We propose a technique for mining Multi-Level API Usage Patterns (MLUP) to exhibit the co-usage relationships between methods of the API of interest across interfering usage scenarios. We detect multi-level usage patterns as distinct groups of API methods, where each group is uniformly used across variable client programs, independently of usage contexts. We evaluated our technique through the usage of four APIs having up to 22 client programs per API. For all the studied APIs, our technique was able to detect usage patterns that are, almost all, highly consistent and highly cohesive across a considerable variability of client programs.
Keywords: application program interfaces; data mining; software libraries; MLUP; application programming interface; multilevel API usage pattern mining; Clustering algorithms; Context; Documentation; Graphical user interfaces; Java; Layout; Security; API Documentation; API Usage; Software Clustering; Usage Pattern (ID#: 15-5411)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081812&isnumber=7081802
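The paper's MLUP technique detects usage patterns through clustering; as a rough, simplified flavor of co-usage mining (not the authors' algorithm), one can count how often pairs of API methods appear together across client programs. The client names and API methods below are hypothetical:

```python
from collections import Counter
from itertools import combinations

# Hypothetical usage data: which API methods each client program calls.
clients = {
    "clientA": {"open", "read", "close"},
    "clientB": {"open", "read", "close"},
    "clientC": {"open", "write", "close"},
}

# Count how often each pair of methods is co-used.
pair_counts = Counter()
for used in clients.values():
    for pair in combinations(sorted(used), 2):
        pair_counts[pair] += 1

# A pair co-used by most clients suggests a usage pattern.
support = {pair: n / len(clients) for pair, n in pair_counts.items()}
patterns = [pair for pair, s in support.items() if s >= 2 / 3]
```

Here `open`/`close` is co-used by every client, so it surfaces as a strong candidate pattern, while `open`/`write` does not meet the support threshold.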

 

Ladanyi, G.; Toth, Z.; Ferenc, R.; Keresztesi, T., "A Software Quality Model for RPG," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 91-100, 2-6 March 2015. doi: 10.1109/SANER.2015.7081819
Abstract: The IBM i mainframe was designed to manage business applications for which the reliability and quality is a matter of national security. The RPG programming language is the most frequently used one on this platform. The maintainability of the source code has big influence on the development costs, probably this is the reason why it is one of the most attractive, observed and evaluated quality characteristic of all. For improving or at least preserving the maintainability level of software it is necessary to evaluate it regularly. In this study we present a quality model based on the ISO/IEC 25010 international standard for evaluating the maintainability of software systems written in RPG. As an evaluation step of the quality model we show a case study in which we explain how we integrated the quality model as a continuous quality monitoring tool into the business processes of a mid-size software company which has more than twenty years of experience in developing RPG applications.
Keywords: DP industry; IBM computers; IEC standards; ISO standards; automatic programming; report generators; software maintenance; software quality; software reliability; software standards; source code (software); IBM i mainframe; ISO/IEC 25010 international standard; RPG programming language; business applications management; business processes; continuous quality monitoring tool; development costs; mid-size software company; national security; quality characteristic; reliability; reporting program generator; software maintainability level; software quality model; source code maintainability; Algorithms; Cloning; Complexity theory; Measurement; Object oriented modeling; Software; Standards; IBM i mainframe; ISO/IEC 25010; RPG quality model; Software maintainability; case study (ID#: 15-5412)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081819&isnumber=7081802

 

Xiaoli Lian; Li Zhang, "Optimized Feature Selection Towards Functional And Non-Functional Requirements In Software Product Lines," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 191-200, 2-6 March 2015. doi: 10.1109/SANER.2015.7081829
Abstract: As an important research issue in software product line, feature selection is extensively studied. Besides the basic functional requirements (FRs), the non-functional requirements (NFRs) are also critical during feature selection. Some NFRs have numerical constraints, while some have not. Without clear criteria, the latter are always expected to be the best possible. However, most existing selection methods ignore the combination of constrained and unconstrained NFRs and FRs. Meanwhile, the complex constraints and dependencies among features are perpetual challenges for feature selection. To this end, this paper proposes a multi-objective optimization algorithm IVEA to optimize the selection of features with NFRs and FRs by considering the relations among these features. Particularly, we first propose a two-dimensional fitness function. One dimension is to optimize the NFRs without quantitative constraints. The other one is to assure the selected features satisfy the FRs, and conform to the relations among features. Second, we propose a violation-dominance principle, which guides the optimization under FRs and the relations among features. We conducted comprehensive experiments on two feature models with different sizes to evaluate IVEA with state-of-the-art multi-objective optimization algorithms, including IBEAHD, IBEAε+, NSGA-II and SPEA2. The results showed that the IVEA significantly outperforms the above baselines in the NFRs optimization. Meanwhile, our algorithm needs less time to generate a solution that meets the FRs and the constraints on NFRs and fully conforms to the feature model.
Keywords: feature selection; genetic algorithms; software product lines; IBEAε+; IBEAHD; IVEA; NFR optimization; NSGA-II; SPEA2; multiobjective optimization algorithm; nonfunctional requirements; numerical constraint; optimized feature selection; selection method; software product line; two-dimensional fitness function; violation-dominance principle; Evolutionary computation; Optimization; Portals; Security; Sociology; Software; Statistics; Feature Models; Feature Selection; Multi-objective Optimization; Non-functional requirements optimization; Software Product Line (ID#: 15-5413)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081829&isnumber=7081802

 

Zekan, B.; Shtern, M.; Tzerpos, V., "Protecting Web Applications Via Unicode Extension," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 419-428, 2-6 March 2015. doi: 10.1109/SANER.2015.7081852
Abstract: Protecting web applications against security attacks, such as command injection, is an issue that has been attracting increasing attention as such attacks are becoming more prevalent. Taint tracking is an approach that achieves protection while offering significant maintenance benefits when implemented at the language library level. This allows the transparent re-engineering of legacy web applications without the need to modify their source code. Such an approach can be implemented at either the string or the character level.
Keywords: program debugging; security of data; software maintenance; command injection; language library level; legacy Web application; maintenance benefit; security attack; taint tracking; unicode extension; Databases; Java; Operating systems; Prototypes; Security; Servers (ID#: 15-5414)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081852&isnumber=7081802
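The abstract describes taint tracking at the string level without giving code. The following Python sketch illustrates the general idea only: a taint flag is propagated through string concatenation and checked at a sensitive sink. The class and sink below are hypothetical illustrations, not the paper's Unicode-extension mechanism.

```python
class TaintedStr(str):
    """A str subclass carrying a taint flag through concatenation."""
    def __new__(cls, value, tainted=True):
        obj = super().__new__(cls, value)
        obj.tainted = tainted
        return obj

    def __add__(self, other):
        tainted = self.tainted or getattr(other, "tainted", False)
        return TaintedStr(str.__add__(self, other), tainted)

    def __radd__(self, other):
        # Called when a plain str is on the left; keeps the taint flag alive.
        tainted = self.tainted or getattr(other, "tainted", False)
        return TaintedStr(str.__add__(other, self), tainted)

def execute_query(query):
    """A sensitive sink that rejects data still carrying taint."""
    if getattr(query, "tainted", False):
        raise ValueError("tainted data reached a sensitive sink")
    return "executed: " + query
```

A query assembled from untrusted input stays tainted through concatenation, so the sink can refuse it without the application code changing.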

 

Cadariu, M.; Bouwers, E.; Visser, J.; van Deursen, A., "Tracking Known Security Vulnerabilities In Proprietary Software Systems," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 516, 519, 2-6 March 2015. doi: 10.1109/SANER.2015.7081868
Abstract: Known security vulnerabilities can be introduced in software systems as a result of being dependent upon third-party components. These documented software weaknesses are “hiding in plain sight” and represent low hanging fruit for attackers. In this paper we present the Vulnerability Alert Service (VAS), a tool-based process to track known vulnerabilities in software systems throughout their life cycle. We studied its usefulness in the context of external software product quality monitoring provided by the Software Improvement Group, a software advisory company based in Amsterdam, the Netherlands. Besides empirically assessing the usefulness of the VAS, we have also leveraged it to gain insight and report on the prevalence of third-party components with known security vulnerabilities in proprietary applications.
Keywords: outsourcing; safety-critical software; software houses; software quality; Amsterdam; Netherlands; VAS usefulness assessment; documented software weaknesses; empirical analysis; external software product quality monitoring; known security vulnerability tracking; proprietary applications; proprietary software systems; software advisory company; software improvement group; software life cycle; software systems; third-party components; tool-based process; vulnerability alert service; Companies; Context; Java; Monitoring; Security; Software systems (ID#: 15-5415)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081868&isnumber=7081802
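The core check behind a tool like the Vulnerability Alert Service can be sketched as matching a system's declared third-party dependencies against a feed of known-vulnerable (component, version) pairs. The feed entries and advisory identifiers below are hypothetical placeholders, not real advisories or the VAS implementation.

```python
# Hypothetical vulnerability feed: (component, version) -> advisory id.
KNOWN_VULNS = {
    ("example-parser", "1.4.2"): "ADVISORY-0001",
    ("example-http-client", "2.0.0"): "ADVISORY-0002",
}

def scan_dependencies(dependencies):
    """Return (name, version, advisory) triples for known-vulnerable deps."""
    alerts = []
    for name, version in dependencies:
        advisory = KNOWN_VULNS.get((name, version))
        if advisory is not None:
            alerts.append((name, version, advisory))
    return alerts
```

Run periodically over each monitored system's dependency manifest, such a check turns newly published advisories into alerts throughout the software life cycle.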

 

Kula, R.G.; German, D.M.; Ishio, T.; Inoue, K., "Trusting A Library: A Study Of The Latency To Adopt The Latest Maven Release," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 520, 524, 2-6 March 2015. doi: 10.1109/SANER.2015.7081869
Abstract: With the popularity of open source library (re)use in both industrial and open source settings, `trust' plays a vital role in third-party library adoption. Trust involves the assumption of both functional and non-functional correctness. Even with the aid of dependency management build tools such as Maven and Gradle, research has still found a latency to trust the latest release of a library. In this paper, we investigate the trust of OSS libraries. Our study of 6,374 systems in the Maven Super Repository suggests that 82% of systems are more trusting of adopting the latest library release into existing systems. We uncover the impact of Maven on latent and trusted library adoptions.
Keywords: public domain software; security of data; software libraries; trusted computing; Maven superrepository; OSS library; open source software library; trusted library adoption; Classification algorithms; Data mining; Java; Libraries; Market research; Software systems (ID#: 15-5416)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081869&isnumber=7081802

 

Laverdiere, M.-A.; Berger, B.J.; Merloz, E., "Taint Analysis of Manual Service Compositions Using Cross-Application Call Graphs," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 585, 589, 2-6 March 2015. doi: 10.1109/SANER.2015.7081882
Abstract: We propose an extension over the traditional call graph to incorporate edges representing control flow between web services, named the Cross-Application Call Graph (CACG). We introduce a construction algorithm for applications built on the Jax-WS standard and validate its effectiveness on sample applications from Apache CXF and JBossWS. Then, we demonstrate its applicability for taint analysis over a sample application of our making. Our CACG construction algorithm accurately identifies service call targets 81.07% of the time on average. Our taint analysis obtains a F-Measure of 95.60% over a benchmark. The use of a CACG, compared to a naive approach, improves the F-Measure of a taint analysis from 66.67% to 100.00% for our sample application.
Keywords: Web services; data flow analysis; flow graphs; Apache CXF; CACG construction algorithm; F-measure; JBossWS; Jax-WS standard; Web services; control flow; cross-application call graph; manual service compositions; service call targets; taint analysis; Algorithm design and analysis; Androids; Benchmark testing; Java; Manuals; Security; Web services (ID#: 15-5417)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081882&isnumber=7081802
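The F-measure figures quoted in the abstract follow the standard definition as the harmonic mean of precision and recall. For reference, a minimal implementation (the counts passed in below are illustrative, not the paper's benchmark data):

```python
def f_measure(true_pos, false_pos, false_neg):
    """F1 score: harmonic mean of precision and recall."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)
```

For example, a detector with perfect precision but only 50% recall scores an F-measure of about 66.67%, which is how a naive analysis that misses half the tainted flows can trail a CACG-based one.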

 

Qingtao Jiang; Xin Peng; Hai Wang; Zhenchang Xing; Wenyun Zhao, "Summarizing Evolutionary Trajectory by Grouping and Aggregating Relevant Code Changes," Software Analysis, Evolution and Reengineering (SANER), 2015 IEEE 22nd International Conference on, pp. 361, 370, 2-6 March 2015. doi: 10.1109/SANER.2015.7081846
Abstract: The lifecycle of a large-scale software system can undergo many releases. Each release often involves hundreds or thousands of revisions committed by many developers over time. Many code changes are made in a systematic and collaborative way. However, such systematic and collaborative code changes are often undocumented and hidden in the evolution history of a software system. It is desirable to recover commonalities and associations among dispersed code changes in the evolutionary trajectory of a software system. In this paper, we present SETGA (Summarizing Evolutionary Trajectory by Grouping and Aggregation), an approach to summarizing historical commit records as trajectory patterns by grouping and aggregating relevant code changes committed over time. SETGA extracts change operations from a series of commit records from version control systems. It then groups extracted change operations by their common properties from different dimensions such as change operation types, developers and change locations. After that, SETGA aggregates relevant change operation groups by mining various associations among them. The proposed approach has been implemented and applied to three open-source systems. The results show that SETGA can identify various types of trajectory patterns that are useful for software evolution management and quality assurance.
Keywords: public domain software; software maintenance; software quality; SETGA; evolution history; historical commit records; large-scale software system; open-source systems; relevant code changes; software evolution management; software quality assurance; summarizing evolutionary trajectory by grouping and aggregation; trajectory patterns; Data mining; History; Software systems; Systematics; Trajectory; Code Change; Evolution; Mining; Pattern; Version Control System (ID#: 15-5418)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7081846&isnumber=7081802


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

International Conferences: CODASPY 15, San Antonio, Texas

 

 

The Fifth ACM Conference on Data and Application Security and Privacy (CODASPY 15) was held in San Antonio, Texas on March 2-5, 2015. The conference aims to provide a dedicated venue for high-quality research in the data and applications arena and seeks to foster a community with a focus on cyber security. The CODASPY web page is available at: http://codaspy.org/


 

Jonathan Dautrich, Chinya Ravishankar; “Tunably-Oblivious Memory: Generalizing ORAM to Enable Privacy-Efficiency Tradeoffs;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 313-324. Doi: 10.1145/2699026.2699097
Abstract: We consider the challenge of providing privacy-preserving access to data outsourced to an untrusted cloud provider. Even if data blocks are encrypted, access patterns may leak valuable information. Oblivious RAM (ORAM) protocols guarantee full access pattern privacy, but even the most efficient ORAMs to date require roughly L log2 N block transfers to satisfy an L-block query, for block store capacity N.  We propose a generalized form of ORAM called Tunably-Oblivious Memory (λ-TOM) that allows a query's public access pattern to assume any of λ possible lengths. Increasing λ yields improved efficiency at the cost of weaker privacy guarantees. 1-TOM protocols are as secure as ORAM.  We also propose a novel, special-purpose TOM protocol called Staggered-Bin TOM (SBT), which efficiently handles large queries that are not cache-friendly. We also propose a read-only SBT variant called Multi-SBT that can satisfy such queries with only O(L + log N) block transfers in the best case, and only O(L log N) transfers in the worst case, while leaking only O(log log log N) bits of information per query. Our experiments show that for N = 2^24 blocks, Multi-SBT achieves practical bandwidth costs as low as 6X those of an unprotected protocol for large queries, while leaking at most 3 bits of information per query.
Keywords: data privacy, oblivious ram, privacy trade off (ID#: 15-5533)
URL: http://doi.acm.org/10.1145/2699026.2699097
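The gap between the asymptotic costs quoted in the abstract can be made concrete with a back-of-envelope calculation. The functions below are hypothetical helpers that ignore all constant factors; they only illustrate the shape of the ORAM versus best-case Multi-SBT trade-off.

```python
import math

def oram_transfers(L, N):
    # Rough block-transfer count for an L-block query under an ORAM
    # needing about L * (log2 N)^2 transfers (constants ignored).
    return L * math.log2(N) ** 2

def multi_sbt_best_case(L, N):
    # Best-case O(L + log N) transfers for the read-only Multi-SBT variant.
    return L + math.log2(N)
```

For the paper's setting of N = 2^24 blocks and a large query of, say, L = 1024 blocks, the sketch gives roughly 590,000 versus about 1,050 transfers, a gap of several hundredfold before constants.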

 

Matthias Neugschwandtner, Paolo Milani Comparetti, Istvan Haller, Herbert Bos; “The BORG: Nanoprobing Binaries for Buffer Overreads;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 87-97. Doi: 10.1145/2699026.2699098
Abstract: Automated program testing tools typically try to explore, and cover, as much of a tested program as possible, while attempting to trigger and detect bugs. An alternative and complementary approach can be to first select a specific part of a program that may be subject to a specific class of bug, and then narrowly focus exploration towards program paths that could trigger such a bug.  In this work, we introduce the BORG (Buffer Over-Read Guard), a testing tool that uses static and dynamic program analysis, taint propagation and symbolic execution to detect buffer overread bugs in real-world programs. BORG works by first selecting buffer accesses that could lead to an overread and then guiding symbolic execution towards those accesses along program paths that could actually lead to an overread. BORG operates on binaries and does not require source code. To demonstrate BORG's effectiveness, we use it to detect overreads in six complex server applications and libraries, including lighttpd, FFmpeg and ClamAV.
Keywords: buffer overread, dynamic symbolic execution, out-of-bounds access, symbolic execution guidance, targeted testing (ID#: 15-5534)
URL: http://doi.acm.org/10.1145/2699026.2699098

 

Sebastian Banescu, Alexander Pretschner, Dominic Battre, Stefano Cazzulani, Robert Shield, Greg Thompson; “Software-Based Protection against ‘Changeware’;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 231-242. Doi: 10.1145/2699026.2699099
Abstract: We call changeware software that surreptitiously modifies resources of software applications, e.g., configuration files. Changeware is developed by malicious entities which gain profit if their changeware is executed by large numbers of end-users of the targeted software. Browser hijacking malware is one popular example that aims at changing web-browser settings such as the default search engine or the home page. Changeware tends to provoke end-user dissatisfaction with the target application, e.g. due to repeated failure of persisting the desired configuration. We describe a solution to counter changeware, to be employed by vendors of software targeted by changeware. It combines several protection mechanisms: white-box cryptography to hide a cryptographic key, software diversity to counter automated key retrieval attacks, and run-time process memory integrity checking to avoid illegitimate calls of the developed API.
Keywords: integrity protection, malware defense, obfuscation, software diversity, software protection, white-box cryptography (ID#: 15-5535)
URL: http://doi.acm.org/10.1145/2699026.2699099

 

Jan Henrik Ziegeldorf,  Fred Grossmann, Martin Henze, Nicolas Inden, Klaus Wehrle; “CoinParty: Secure Multi-Party Mixing of Bitcoins;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 75-86. Doi: 10.1145/2699026.2699100
Abstract: Bitcoin is a digital currency that uses anonymous cryptographic identities to achieve financial privacy. However, Bitcoin's promise of anonymity is broken, as recent work shows how Bitcoin's blockchain exposes users to reidentification and linking attacks. In consequence, different mixing services have emerged which promise to randomly mix a user's Bitcoins with other users' coins to provide anonymity based on the unlinkability of the mixing. However, proposed approaches suffer either from weak security guarantees and single points of failure, or from small anonymity sets and missing deniability. In this paper, we propose CoinParty, a novel decentralized mixing service for Bitcoin based on a combination of decryption mixnets with threshold signatures. CoinParty is secure against malicious adversaries, and the evaluation of our prototype shows that it scales easily to a large number of participants in real-world network settings. By applying threshold signatures to Bitcoin mixing, CoinParty achieves anonymity orders of magnitude higher than related work, as we quantify by analyzing transactions in the actual Bitcoin blockchain, and is the first among related approaches to provide plausible deniability.
Keywords: anonymity, bitcoin, secure multi-party computation (ID#: 15-5536)
URL: http://doi.acm.org/10.1145/2699026.2699100

 

Muhammad Ihsanulhaq Sarfraz, Mohamed Nabeel, Jianneng Cao, Elisa Bertino; “DBMask: Fine-Grained Access Control on Encrypted Relational Databases;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 1-11. Doi: 10.1145/2699026.2699101
Abstract: For efficient data management and economic benefits, organizations are increasingly moving towards the paradigm of "database as a service" by which their data are managed by a database management system (DBMS) hosted in a public cloud. However, data are the most valuable asset in an organization, and inappropriate data disclosure puts the organization's business at risk. Therefore, data are usually encrypted in order to preserve their confidentiality. Past research has extensively investigated query processing on encrypted data. However, a naive encryption scheme negates the benefits provided by the use of a DBMS. In particular, past research efforts have not adequately addressed flexible cryptographically enforced access control on encrypted data at different granularity levels which is critical for data sharing among different users and applications. In this paper, we propose DBMask, a novel solution that supports fine-grained cryptographically enforced access control, including column, row and cell level access control, when evaluating SQL queries on encrypted data. Our solution does not require modifications to the database engine, and thus maximizes the reuse of the existing DBMS infrastructures. Our experiments evaluate the performance and the functionality of an encrypted database and results show that our solution is efficient and scalable to large datasets.
Keywords: attribute-based group key management, database-as-a-service, encrypted query processing (ID#: 15-5537)
URL: http://doi.acm.org/10.1145/2699026.2699101

 

Mihai Maruseac, Gabriel Ghinita; “Differentially-Private Mining of Moderately-Frequent High-Confidence Association Rules;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 13-24. Doi: 10.1145/2699026.2699102
Abstract: Association rule mining enables the discovery of patterns in large data repositories, and benefits diverse application domains such as healthcare, marketing, social studies, etc. However, mining datasets that contain data about individuals may cause significant privacy breaches, and disclose sensitive information about one's health status, political orientation or alternative lifestyle. Recent research addressed the privacy threats that arise when mining sensitive data, and several techniques allow data mining with differential privacy guarantees. However, existing methods only discover rules that have very large support, i.e., occur in a large fraction of the dataset transactions (typically, more than 50%). This is a serious limitation, as numerous high-quality rules do not reach such high frequencies (e.g., rules about rare diseases, or luxury merchandise).  In this paper, we propose a method that focuses on mining high-quality association rules with moderate and low frequencies. We employ a novel technique for rule extraction that combines the exponential mechanism of differential privacy with reservoir sampling. The proposed algorithm allows us to directly mine association rules, without the need to compute noisy supports for large numbers of itemsets. We provide a privacy analysis of the proposed method, and we perform an extensive experimental evaluation which shows that our technique is able to sample low- and moderate-support rules with high precision.
Keywords: association rule mining, differential privacy (ID#: 15-5538)
URL: http://doi.acm.org/10.1145/2699026.2699102
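The exponential mechanism the authors build on is a standard differential-privacy primitive: it samples a candidate with probability proportional to the exponential of its (scaled) quality score. The sketch below is the textbook mechanism, not the paper's rule-extraction algorithm, and the candidate names are made up for illustration.

```python
import math
import random

def exponential_mechanism(candidates, quality, epsilon, sensitivity=1.0):
    """Sample one candidate with probability proportional to
    exp(epsilon * quality(c) / (2 * sensitivity))."""
    weights = [math.exp(epsilon * quality(c) / (2.0 * sensitivity))
               for c in candidates]
    r = random.uniform(0.0, sum(weights))
    cumulative = 0.0
    for c, w in zip(candidates, weights):
        cumulative += w
        if r <= cumulative:
            return c
    return candidates[-1]  # guard against floating-point drift
```

Higher-quality candidates (e.g., higher-confidence rules) are exponentially more likely to be drawn, while every candidate retains some probability, which is what yields the differential privacy guarantee.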

 

Zhongwen Zhang, Peng Liu, Ji Xiang, Jiwu Jing, Lingguang Lei; ”How Your Phone Camera Can Be Used to Stealthily Spy on You: Transplantation Attacks against Android Camera Service;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 99-110. Doi: 10.1145/2699026.2699103
Abstract: Based on the observation that spy-on-user attacks made by calling Android APIs are detected by Android API auditing, we studied the possibility of a "transplantation attack", through which a malicious app can take privacy-harming pictures to spy on users without the Android API auditing being aware of it. Usually, to take a picture, apps need to call APIs of the Android Camera Service, which runs in the mediaserver process. A transplantation attack transplants the picture-taking code from the mediaserver process into a malicious app process; the malicious app can then call this code to take a picture in its own address space without any IPC. As a result, the API auditing can be evaded. Our experiments confirm that the transplantation attack indeed exists, and that it makes the spy-on-user attack much more stealthy. The evaluation results show that nearly half of the 69 smartphones tested (manufactured by 8 vendors) are vulnerable to the transplantation attack we discovered. Moreover, the attack can evade 7 antivirus detectors, as well as Android Device Administration, a set of APIs that can be used for mobile device management in enterprise environments. The transplantation attack inspires us to uncover a subtle design/implementation deficiency of Android security.
Keywords: android, android camera service, spy on users, transplantation attack (ID#: 15-5539)
URL: http://doi.acm.org/10.1145/2699026.2699103

 

Irfan Ahmed, Vassil Roussev, Aisha Ali Gombe; “Robust Fingerprinting for Relocatable Code;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 219-229. Doi: 10.1145/2699026.2699104
Abstract: Robust fingerprinting of executable code contained in a memory image is a prerequisite for a large number of security and forensic applications, especially in a cloud environment. Prior state of the art has focused specifically on identifying kernel versions by means of complex differential analysis of several aspects of the kernel code implementation.  In this work, we present a novel technique that can identify any relocatable code, including the kernel, based on inherent patterns present in relocation tables. We show that such patterns are very distinct and can be used to accurately and efficiently identify known executables in a memory snapshot, including remnants of prior executions. We develop a research prototype, codeid, and evaluate its efficacy on more than 50,000 sample executables containing kernels, kernel modules, applications, dynamic link libraries, and malware. The empirical results show that our method achieves almost 100% accuracy with zero false negatives.
Keywords: cloud security, code fingerprinting, codeid, malware detection, memory analysis, virtual machine introspection (ID#: 15-5540)
URL: http://doi.acm.org/10.1145/2699026.2699104

 

Yury Zhauniarovich, Maqsood Ahmad, Olga Gadyatskaya, Bruno Crispo, Fabio Massacci; ”StaDynA: Addressing the Problem of Dynamic Code Updates in the Security Analysis of Android Applications;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 37-48. Doi: 10.1145/2699026.2699105
Abstract: Static analysis of Android applications can be hindered by the presence of the popular dynamic code update techniques: dynamic class loading and reflection. Recent Android malware samples do actually use these mechanisms to conceal their malicious behavior from static analyzers. These techniques defuse even the most recent static analyzers that usually operate under the "closed world" assumption (the targets of reflective calls can be resolved at analysis time; only classes reachable from the class path at analysis time are used at runtime). Our proposed solution allows existing static analyzers to remove this assumption. This is achieved by combining static and dynamic analysis of applications in order to reveal the hidden/updated behavior and extend static analysis results with this information. This paper presents design, implementation and preliminary evaluation results of our solution called StaDynA.
Keywords: android, dynamic code updates, security analysis (ID#: 15-5541)
URL: http://doi.acm.org/10.1145/2699026.2699105

 

Fang Liu,  Xiaokui Shu, Danfeng Yao, Ali R. Butt; “Privacy-Preserving Scanning of Big Content for Sensitive Data Exposure with MapReduce;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 195-206. Doi: 10.1145/2699026.2699106
Abstract: The exposure of sensitive data in storage and transmission poses a serious threat to organizational and personal security. Data leak detection aims at scanning content (in storage or transmission) for exposed sensitive data. Because of the large content and data volume, such a screening algorithm needs to be scalable for timely detection. Our solution uses the MapReduce framework for detecting exposed sensitive content, because it has the ability to arbitrarily scale and utilize public resources for the task, such as Amazon EC2. We design new MapReduce algorithms for computing collection intersection for data leak detection. Our prototype implemented with the Hadoop system achieves 225 Mbps analysis throughput with 24 nodes. Our algorithms support a useful privacy-preserving data transformation. This transformation enables the privacy-preserving technique to minimize the exposure of sensitive data during the detection. This transformation supports the secure outsourcing of the data leak detection to untrusted MapReduce and cloud providers.
Keywords: collection intersection, data leak detection, mapreduce, scalability (ID#: 15-5542)
URL: http://doi.acm.org/10.1145/2699026.2699106
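The collection-intersection step at the heart of this approach can be sketched as a toy in-memory map/reduce pair: the mapper emits (item, collection) pairs and the reducer keeps items that appear in both the sensitive-data collection and the scanned content. This is an illustrative analogue, not the paper's Hadoop algorithms or its privacy-preserving transformation.

```python
from collections import defaultdict

def map_phase(collection_id, items):
    """Mapper: emit (item, collection_id) pairs for a collection's items."""
    return [(item, collection_id) for item in set(items)]

def reduce_phase(pairs):
    """Reducer: group by item; keep items seen in both collections."""
    groups = defaultdict(set)
    for item, collection_id in pairs:
        groups[item].add(collection_id)
    return {item for item, ids in groups.items()
            if {"sensitive", "content"} <= ids}
```

Any item in the intersection (e.g., a social security number from the sensitive set appearing in outbound content) flags a potential leak; the real system shards this work across MapReduce nodes for throughput.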

 

Jason Gionta, William Enck, Peng Ning; “HideM: Protecting the Contents of Userspace Memory in the Face of Disclosure Vulnerabilities:” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 325-336. Doi: 10.1145/2699026.2699107
Abstract: Memory disclosure vulnerabilities have become a common component for enabling reliable exploitation of systems by leaking the contents of executable data. Previous research towards protecting executable data from disclosure has failed to gain popularity due to large performance penalties and required architectural changes. Other research has focused on protecting application data but fails to consider a vulnerable application that leaks its own executable data.  In this paper we present HideM, a practical system for protecting against memory disclosures in contemporary commodity systems. HideM addresses limitations in existing advanced security protections (e.g., fine-grained ASLR, CFI) wherein an adversary discloses executable data from memory, reasons about protection weaknesses, and builds corresponding exploits. HideM uses the split-TLB architecture, commonly found in CPUs, to enable fine-grained execute and read permission on memory. HideM enforces fine-grained permission based on policy generated from binary structure, thus enabling protection of Commercial-Off-The-Shelf (COTS) binaries. In our evaluation of HideM, we find application overhead ranges from a 6.5% increase to a 2% reduction in runtime and observe runtime memory overhead ranging from 0.04% to 25%. HideM requires adversaries to guess ROP gadget locations, making exploitation unreliable. We find adversaries have less than a 16% chance of correctly guessing a single gadget across all 28 evaluated applications. Thus, HideM is a practical system for protecting vulnerable applications which leak executable data.
Keywords: code reuse attacks, information leaks, memory disclosure exploits, memory protection, return-oriented programming (ID#: 15-5543)
URL: http://doi.acm.org/10.1145/2699026.2699107

 

Christopher S. Gates, Jing Chen, Zach Jorgensen, Ninghui Li, Robert W. Proctor, Ting Yu; “Understanding and Communicating Risk for Mobile Applications;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 49-60. Doi: 10.1145/2699026.2699108
Abstract: Mobile platforms, such as Android, warn users about the permissions an app requests and trust that the user will make the correct decision about whether or not to install the app. Unfortunately many users either ignore the warning or fail to understand the permissions and the risks they imply. As a step toward developing an indicator of risk that decomposes risk into several categories, or dimensions, we conducted two studies designed to assess the dimensions of risk deemed most important by experts and novices. In Study 1, semi-structured interviews were conducted with 19 security experts, who also performed a card sorting task in which they categorized permissions. The experts identified three major risk dimensions in the interviews (personal information privacy, monetary risk, and device availability/stability), and a fourth dimension (data integrity) in the card sorting task. In Study 2, 350 typical Android users, recruited via Amazon Mechanical Turk, filled out a questionnaire in which they (a) answered questions concerning their mobile device usage, (b) rated how often they considered each of several types of information when installing apps, (c) indicated what they considered to be the biggest risk associated with installing an app on their mobile device, and (d) rated their concerns with regard to specific risk types and about apps having access to specific types of information. In general, the typical users' concerns were similar to those of the security experts. The results of the studies suggest that risk information should be organized into several risk types that can be better understood by users and that a mid-level risk summary should incorporate the dimensions of personal information privacy, monetary risk, device availability/stability risk and data integrity risk.
Keywords: android, mobile security, risk, smartphones (ID#: 15-5544)
URL: http://doi.acm.org/10.1145/2699026.2699108

 

Jing Qiu, Babak Yadegari, Brian Johannesmeyer, Saumya Debray, Xiaohong Su; “Identifying and Understanding Self-Checksumming Defenses in Software;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 207-218. Doi: 10.1145/2699026.2699109
Abstract: Software self-checksumming is widely used as an anti-tampering mechanism for protecting intellectual property and deterring piracy. This makes it important to understand the strengths and weaknesses of various approaches to self-checksumming. This paper describes a dynamic information-flow-based attack that aims to identify and understand self-checksumming behavior in software. Our approach is applicable to a wide class of self-checksumming defenses, and the information obtained can be used to determine how the checksumming defenses may be bypassed. Experiments using a prototype implementation of our ideas indicate that our approach can successfully identify self-checksumming behavior in (our implementations of) proposals from the research literature.
Keywords: checksum, dynamic taint analysis, tamperproofing (ID#: 15-5545)
URL: http://doi.acm.org/10.1145/2699026.2699109
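For readers unfamiliar with the defense being attacked here: a self-checksumming program hashes its own code and refuses to run if the hash no longer matches a recorded baseline. The following is a toy Python analogue of that idea (real deployments checksum native code bytes in memory, not Python bytecode); all names are illustrative.

```python
import hashlib

def checksum_of(func):
    """Hash a function's bytecode and constants as its 'code bytes'."""
    code = func.__code__
    blob = code.co_code + repr(code.co_consts).encode()
    return hashlib.sha256(blob).hexdigest()

def protected():
    return 42

BASELINE = checksum_of(protected)  # recorded at "build time"

def guarded_call():
    # The self-check guard: recompute and compare before executing.
    if checksum_of(protected) != BASELINE:
        raise RuntimeError("tampering detected")
    return protected()
```

An information-flow attack like the one in the paper works by observing that the guard reads the protected code bytes as data, which is exactly the behavior it uses to locate and then bypass such checks.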

 

Erman Pattuk, Murat Kantarcioglu, Huseyin Ulusoy; “BigGate: Access Control Framework for Outsourced Key-Value Stores;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 171-182. Doi: 10.1145/2699026.2699110
Abstract: Due to their scalable design, key-value stores have become the backbone of many large-scale Internet companies that need to cope with millions of transactions every day. They are also an attractive cloud outsourcing technology: driven by economic benefits, many major companies like Amazon, Google, and Microsoft provide key-value storage services to their customers. However, customers are reluctant to utilize such services due to security and privacy concerns. Outsourced sensitive key-value data (e.g., social security numbers as keys, and health reports as values) may be stolen by third-party adversaries and/or malicious insiders. Furthermore, an institution that is utilizing key-value storage services may naturally desire to have access control mechanisms among its departments or users, while leaking as little information as possible to the cloud provider to preserve data privacy. We believe that addressing these security and privacy concerns is crucial in further adoption of key-value storage services. In this paper, we present a novel system, BigGate, that provides secure outsourcing and efficient processing of encrypted key-value data, and enforces access control policies. We formally prove the security of our system, and through a carefully implemented empirical analysis, show that the overhead induced by BigGate can be as low as 2%.
Keywords: access control, cloud computing, key-value stores, outsourcing, searchable encryption, security and privacy (ID#: 15-5546)
URL: http://doi.acm.org/10.1145/2699026.2699110

 

Syed Hussain, Asmaa Sallam, Elisa Bertino; “DetAnom: Detecting Anomalous Database Transactions by Insiders;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 25-35. Doi: 10.1145/2699026.2699111
Abstract: Database Management Systems (DBMSs) provide access control mechanisms that allow database administrators (DBAs) to grant application programs access privileges to databases. However, securing the database alone is not enough, as attackers aiming at stealing data can take advantage of vulnerabilities in the privileged applications and make applications issue malicious database queries. Therefore, even though the access control mechanism can prevent application programs from accessing the data to which the programs are not authorized, it is unable to prevent misuse of the data to which application programs are authorized for access. Hence, we need a mechanism able to detect malicious behavior resulting from previously authorized applications. In this paper, we design and implement an anomaly detection mechanism, DetAnom, that creates a profile of the application program which can succinctly represent the application's normal behavior in terms of its interaction (i.e., submission of SQL queries) with the database. For each query, the profile keeps a signature and also the corresponding constraints that the application program must satisfy to submit that query. Later, in the detection phase, whenever the application issues a query, the corresponding signature and constraints are checked against the current context of the application. If there is a mismatch, the query is marked as anomalous. The main advantage of our anomaly detection mechanism is that we need neither any previous knowledge of application vulnerabilities nor any example of possible attacks to build the application profiles. As a result, our DetAnom mechanism is able to protect the data from attacks tailored to database applications such as code modification attacks and SQL injections, and from other data-centric attacks as well. We have implemented our mechanism with a software testing technique called concolic testing and the PostgreSQL DBMS. Experimental results show that our profiling technique is close to accurate, requires an acceptable amount of time, and that the detection mechanism incurs low run-time overhead.
Keywords: anomaly detection, application profile, database, insider attacks, sql injection (ID#: 15-5547)
URL: http://doi.acm.org/10.1145/2699026.2699111
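The profile-then-detect loop described above can be caricatured in a few lines: reduce each query to a structural signature, record a context constraint per signature during profiling, and flag any runtime query whose signature is unknown or whose constraint fails. All names below are invented for illustration; DetAnom itself derives signatures and constraints via concolic testing rather than the crude literal-stripping used here.

```python
import hashlib, re

def signature(sql: str) -> str:
    # Strip literals so structurally identical queries share a signature
    # (a crude stand-in for the paper's program-analysis-derived signatures).
    skeleton = re.sub(r"'[^']*'|\b\d+\b", "?", sql.lower())
    return hashlib.sha256(skeleton.encode()).hexdigest()

class ToyProfile:
    def __init__(self):
        self.allowed = {}  # signature -> constraint predicate over app context

    def learn(self, sql, constraint):
        self.allowed[signature(sql)] = constraint

    def is_anomalous(self, sql, context) -> bool:
        constraint = self.allowed.get(signature(sql))
        return constraint is None or not constraint(context)
```

An injected or modified query produces an unknown signature, so it is flagged even though the application is otherwise authorized to reach the table.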

 

Khalid Bijon, Ram Krishnan, Ravi Sandhu; “Virtual Resource Orchestration Constraints in Cloud Infrastructure as a Service;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 183-194. Doi: 10.1145/2699026.2699112
Abstract: In an infrastructure as a service (IaaS) cloud, virtualized IT resources such as compute, storage and network are offered on demand by a cloud service provider (CSP) to its tenants (customers). A major problem for enterprise-scale tenants that typically obtain significant amount of resources from a CSP concerns orchestrating those resources in a secure manner. For instance, unlike configuring physical hardware, virtual resources in IaaS are configured using software, and hence prone to misconfigurations that can lead to critical security violations. Examples of such resource orchestration operations include creating virtual machines with appropriate operating system and software images depending on their purpose, creating networks, connecting virtual machines to networks, attaching a storage volume to a particular virtual machine, etc. In this paper, we propose attribute-based constraints specification and enforcement as a means to mitigate this issue. High-level constraints specified using attributes of virtual resources prevent resource orchestration operations that can lead to critical misconfigurations. Our model allows tenants to customize the attributes of their resources and specify fine-grained constraints. We further propose a constraint mining approach to automatically generate constraints once the tenants specify the attributes for virtual resources. We present our model, enforcement challenges, and its demonstration in OpenStack, the de facto open-source cloud IaaS software.
Keywords: cloud iaas, configuration policy, constraints, security policy mining (ID#: 15-5548)
URL: http://doi.acm.org/10.1145/2699026.2699112

 

Kadhim Hayawi, Alireza Mortezaei, Mahesh Tripunitara; “The Limits of the Trade-Off Between Query-Anonymity and Communication-Cost in Wireless Sensor Networks;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 337-348. Doi: 10.1145/2699026.2699113
Abstract: We address query-anonymity, the property that the destination of a client's query is indistinguishable from other potential destinations, in the context of wireless sensor networks. Prior work has established that this is an important issue, and has also pointed out that there appears to be a natural trade-off between query-anonymity and communication-cost. We explore what we call the limits of this trade-off: what is the communication-cost that is sufficient to achieve a certain query-anonymity, and what is the communication-cost that we must necessarily incur to achieve a certain query-anonymity? Towards this, we point out that two notions of query-anonymity that prior work in this context proposes are not meaningful. We propose an unconditional notion of query-anonymity that we argue has intuitive appeal. We then establish the limits of the trade-off. In particular, we show that in wireless sensor networks whose topology is a square grid and are source-routed, the necessary and sufficient communication-cost for query-anonymity asymptotically smaller than n, where n is the number of nodes in the network, is dependent on n only, and the necessary and sufficient communication-cost for query-anonymity larger than n is dependent on the desired query-anonymity only. We then generalize to topologies that are arbitrary connected undirected graphs, an exercise that involves a novel approach based on a spanning tree for the graph. We show that the diameter of the graph is the inflection point in the trade-off. We discuss extensions of our results to other settings, such as those in which routes are not necessarily shortest-paths. We also validate our analytical insights empirically, via simulations in Tossim, a de facto standard approach for wireless sensor networks. In summary, our work establishes sound and interesting theoretical results for query-anonymity in wireless sensor networks, and validates them empirically.
Keywords: query-anonymity, wireless sensor networks (ID#: 15-5549)
URL: http://doi.acm.org/10.1145/2699026.2699113

 

Zhi Xu, Sencun Zhu; “SemaDroid: A Privacy-Aware Sensor Management Framework for Smartphones;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 61-72. Doi: 10.1145/2699026.2699114
Abstract: While mobile sensing applications are booming, the sensor management mechanisms in current smartphone operating systems are left behind -- they are incomprehensive and coarse-grained, exposing a huge attack surface for malicious or aggressive third party apps to steal user's private information through mobile sensors.  In this paper, we propose a privacy-aware sensor management framework, called SemaDroid, which extends the existing sensor management framework on Android to provide comprehensive and fine-grained access control over onboard sensors. SemaDroid allows the user to monitor the sensor usage of installed apps, and to control the disclosure of sensing information while not affecting the app's usability. Furthermore, SemaDroid supports context-aware and quality-of-sensing based access control policies. The enforcement and update of the policies are in real-time. Detailed design and implementation of SemaDroid on Android are presented to show that SemaDroid works compatible with the existing Android security framework. Demonstrations are also given to show the capability of SemaDroid on sensor management and on defeating emerging sensor-based attacks. Finally, we show the high efficiency and security of SemaDroid.
Keywords: android, phone sensing, privacy-aware, sensor management, smartphone (ID#: 15-5550)
URL: http://doi.acm.org/10.1145/2699026.2699114

 

Keith Dyer, Rakesh Verma; “On the Character of Phishing URLs: Accurate and Robust Statistical Learning Classifiers;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 111-122. Doi: 10.1145/2699026.2699115
Abstract: Phishing attacks resulted in an estimated $3.2 billion worth of stolen property in 2007, and the success rate for phishing attacks is increasing each year [17]. Phishing attacks are becoming harder to detect and more elusive by using short time windows to launch attacks. In order to combat the increasing effectiveness of phishing attacks, we propose that combining statistical analysis of website URLs with machine learning techniques will give a more accurate classification of phishing URLs. Using a two-sample Kolmogorov-Smirnov test along with other features, we were able to accurately classify 99.3% of our dataset, with a false positive rate of less than 0.4%. Thus, the accuracy of phishing URL classification can be greatly increased through the use of these statistical measures.
Keywords: character distributions, kolmogorov-smirnov distance, kullback-leibler divergence, phishing url classification (ID#: 15-5551)
URL: http://doi.acm.org/10.1145/2699026.2699115
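The two-sample Kolmogorov-Smirnov statistic at the heart of the feature set is just the maximum gap between two empirical CDFs, here applied to URL character codes. The sample URLs, and any threshold one would set on the statistic, are illustrative only and not from the paper:

```python
import bisect

def ks_statistic(sample_a, sample_b) -> float:
    """Two-sample KS statistic: max vertical gap between empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)

    def ecdf(s, x):  # fraction of s that is <= x
        return bisect.bisect_right(s, x) / len(s)

    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in set(a) | set(b))

def char_codes(url: str):
    # Represent a URL as the distribution of its character codes.
    return [ord(c) for c in url.lower()]

# Invented example URLs: a plain benign URL vs. a digit-heavy suspect one.
benign = char_codes("http://www.example.com/index.html")
suspect = char_codes("http://a9f3k2q8z7.example.net/x0041/login.php")
```

A classifier would feed the resulting distance, alongside other URL features, into a standard learner.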

 

Mehmet Kuzu, Mohammad Saiful Islam, Murat Kantarcioglu; “Distributed Search over Encrypted Big Data;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 271-278. Doi: 10.1145/2699026.2699116
Abstract: Nowadays, huge amounts of documents are increasingly transferred to remote servers due to the appealing features of cloud computing. On the other hand, privacy and security of the sensitive information in an untrusted cloud environment is a big concern. To alleviate such concerns, encryption of sensitive data before its transfer to the cloud has become an important risk mitigation option. Encrypted storage provides protection at the expense of a significant increase in data management complexity. For effective management, it is critical to provide efficient selective document retrieval capability on the encrypted collection. In fact, a considerable number of searchable symmetric encryption schemes have been designed in the literature to achieve this task. However, with the emergence of big data everywhere, available approaches are insufficient to address some crucial real-world problems such as scalability. In this study, we focus on practical aspects of a secure keyword search mechanism over encrypted data. First, we propose a provably secure distributed index along with a parallelizable retrieval technique that can easily scale to big data. Second, we integrate authorization into the search scheme to limit the information leakage in a multi-user setting where users are allowed to access only particular documents. Third, we offer efficient updates on the distributed secure index. In addition, we conduct extensive empirical analysis on a real dataset to illustrate the efficiency of the proposed practical techniques.
Keywords: privacy, searchable encryption, security (ID#: 15-5552)
URL: http://doi.acm.org/10.1145/2699026.2699116

 

Jonathan Dautrich, Chinya Ravishankar; “Combining ORAM with PIR to Minimize Bandwidth Costs;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 289-296. Doi: 10.1145/2699026.2699117
Abstract: Cloud computing allows customers to outsource the burden of data management and benefit from economy of scale, but privacy concerns limit its reach. Even if the stored data are encrypted, access patterns may leak valuable information. Oblivious RAM (ORAM) protocols guarantee full access pattern privacy, but even the most efficient ORAMs proposed to date incur large bandwidth costs.  We combine Private Information Retrieval (PIR) techniques with the most bandwidth-efficient existing ORAM scheme known to date (ObliviStore), to create OS+PIR, a new ORAM with bandwidth costs only half those of ObliviStore. For data block counts ranging from 2^20 to 2^30, OS+PIR achieves a total bandwidth cost of only 11X-13X blocks transferred per client block read+write, down from ObliviStore's 18X-26X. OS+PIR introduces several enhancements in addition to PIR in order to achieve its lower costs, including mechanisms for eliminating unused dummy blocks.
Keywords: data privacy, oblivious ram, private information retrieval (ID#: 15-5553)
URL: http://doi.acm.org/10.1145/2699026.2699117

 

Steven Van Acker, Daniel Hausknecht, Andrei Sabelfeld; “Password Meters and Generators on the Web: From Large-Scale Empirical Study to Getting It Right;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 253-262. Doi: 10.1145/2699026.2699118
Abstract: Web services heavily rely on passwords for user authentication. To help users choose stronger passwords, password meter and password generator facilities are becoming increasingly popular. Password meters estimate the strength of passwords provided by users. Password generators help users with generating stronger passwords. This paper turns the spotlight on the state of the art of password meters and generators on the web. Orthogonal to the large body of work on password metrics, we focus on getting password meters and generators right in the web setting. We report on the state of affairs via a large-scale empirical study of web password meters and generators. Our findings reveal pervasive trust in third-party code with access to the passwords. We uncover three cases where this trust is abused to leak the passwords to third parties. Furthermore, we discover that often the passwords are sent out to the network, invisibly to users, and sometimes in the clear. To improve the state of the art, we propose SandPass, a general web framework that allows secure and modular porting of password meter and generation modules. We demonstrate the usefulness of the framework by a reference implementation and a case study with a password meter by the Swedish Post and Telecommunication Agency.
Keywords: passwords, sandboxing, web security (ID#: 15-5554)
URL: http://doi.acm.org/10.1145/2699026.2699118
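A key finding above is that many meters ship the password to third parties. A meter need not: a purely local estimate never lets the password leave the function. The crude pool-size heuristic below is illustrative only, and far weaker than the models in the password-metrics literature:

```python
import math, string

def estimate_strength(password: str) -> float:
    """Rough bit-strength estimate: length * log2(character pool size).
    Purely local -- the password never leaves this function."""
    pool = 0
    if any(c in string.ascii_lowercase for c in password):
        pool += 26
    if any(c in string.ascii_uppercase for c in password):
        pool += 26
    if any(c in string.digits for c in password):
        pool += 10
    if any(c in string.punctuation for c in password):
        pool += len(string.punctuation)
    return len(password) * math.log2(pool) if pool else 0.0
```

The point of the sketch is architectural: nothing in a strength estimate requires network access or third-party code.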

 

Mauro Conti, Luigi Mancini, Riccardo Spolaor, Nino Vincenzo Verde; “Can’t You Hear Me Knocking: Identification of User Actions On Android Apps Via Traffic Analysis;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 297-304. Doi: 10.1145/2699026.2699119
Abstract: While smartphone usage becomes more and more pervasive, people have also started asking to what extent such devices can be maliciously exploited as "tracking devices". The concern is not only related to an adversary taking physical or remote control of the device, but also to what a passive adversary without the above capabilities can observe from the device communications. Work in this latter direction has aimed, for example, at inferring the apps a user has installed on his device, or identifying the presence of a specific user within a network. In this paper, we move a step forward: we investigate to what extent it is feasible to identify the specific actions that a user is doing on mobile apps, by eavesdropping their encrypted network traffic. We design a system that achieves this goal by using advanced machine learning techniques. We did a complete implementation of this system and ran a thorough set of experiments, which show that it can achieve accuracy and precision higher than 95% for most of the considered actions.
Keywords: machine learning, mobile security, network traffic analysis, privacy (ID#: 15-5555)
URL: http://doi.acm.org/10.1145/2699026.2699119   
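The classification task, mapping an encrypted flow's observable features (packet sizes and directions) to a user action, can be sketched with a toy nearest-neighbour classifier. The features, training flows, and action labels below are invented; the paper uses considerably more sophisticated machine learning:

```python
def features(flow):
    """Summarize a flow (signed packet sizes: + outgoing, - incoming)
    into a tiny feature vector: bytes out, bytes in, packet count."""
    out = sum(p for p in flow if p > 0)
    inc = sum(-p for p in flow if p < 0)
    return (out, inc, len(flow))

def classify(flow, labelled):
    """1-nearest-neighbour over labelled (feature_vector, action) pairs."""
    f = features(flow)

    def dist(g):
        return sum((a - b) ** 2 for a, b in zip(f, g))

    return min(labelled, key=lambda ex: dist(ex[0]))[1]

# Invented training flows for two hypothetical app actions.
train = [
    (features([300, -1200, -1400, -1500]), "refresh_feed"),
    (features([900, 800, -200]), "post_message"),
]
```

Even this caricature shows why encryption alone does not hide *what* the user is doing: the size/direction profile of each action is distinctive.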

 

Mohammad Islam, Mehmet Kuzu, Murat Kantarcioglu; “A Dynamic Approach to Detect Anomalous Queries on Relational Databases;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 235-246. Doi: 10.1145/2557547.2557561
Abstract: To mitigate security concerns of outsourced databases, quite a few protocols have been proposed that outsource data in encrypted format and allow encrypted query execution on the server side. Among the more practical protocols, the "bucketization" approach facilitates query execution at the cost of reduced efficiency by allowing some false positives in the query results. Precise Query Protocols (PQPs), on the other hand, enable the server to execute queries without incurring any false positives. Even though these protocols do not reveal the underlying data, they reveal the query access pattern to an adversary. In this paper, we introduce a general attack on PQPs based on access pattern disclosure in the context of secure range queries. Our empirical analysis on several real world datasets shows that the proposed attack is able to disclose a significant amount of sensitive data with high accuracy, provided that the attacker has a reasonable amount of background knowledge. We further demonstrate that a slight variation of such an attack can also be used on imprecise protocols (e.g., bucketization) to disclose a significant amount of sensitive information.
Keywords: database-as-a-service, encrypted range query, inference attack (ID#: 15-5556)
URL: http://doi.acm.org/10.1145/2557547.2557561
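The general flavour of an access-pattern attack (though not this paper's specific construction) is classic frequency analysis: rank encrypted query targets by observed frequency and pair them with plaintext values ranked by background-knowledge frequency. All tokens and frequencies below are invented:

```python
def match_by_frequency(observed_counts, known_freqs):
    """Classic frequency analysis: pair encrypted identifiers with
    plaintext values by matching frequency ranks."""
    enc_ranked = sorted(observed_counts, key=observed_counts.get, reverse=True)
    pt_ranked = sorted(known_freqs, key=known_freqs.get, reverse=True)
    return dict(zip(enc_ranked, pt_ranked))

# Invented observations: how often each encrypted range token was queried,
# and the attacker's background knowledge of plaintext value frequencies.
observed = {"t1": 50, "t2": 5, "t3": 20}
background = {"age=30": 0.50, "age=80": 0.05, "age=45": 0.20}
```

The lesson the paper draws is that hiding data contents is not enough when the *pattern* of accesses correlates with public distributions.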

 

Xiaofeng Xu, Li Xiong, Jinfei Liu; “Database Fragmentation with Confidentiality Constraints: A Graph Search Approach;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 263-270. Doi: 10.1145/2699026.2699121
Abstract: Database fragmentation is a promising approach that can be used in combination with encryption to achieve secure data outsourcing which allows clients to securely outsource their data to remote untrusted server(s) while enabling query support using the outsourced data. Given a set of confidentiality constraints, it vertically partitions the database into fragments such that the set of attributes in each constraint do not appear together in any one fragment. The optimal fragmentation problem is to find a fragmentation with minimum cost for query support. In this paper, we propose an efficient graph search based approach which obtains near optimal fragmentation. We model the fragmentation search space as a graph and propose efficient search algorithms on the graph. We present static and dynamic search strategies as well as a novel level-wise graph expansion technique which dramatically reduces the search time. Extensive experiments showed that our method significantly outperforms other state-of-the-art methods.
Keywords: confidentiality constraints, fragmentation, graph search, secure data outsourcing (ID#: 15-5557)
URL: http://doi.acm.org/10.1145/2699026.2699121
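The safety condition the graph search optimizes over is simple to state: no fragment may contain every attribute of any confidentiality constraint. A checker for that condition (with an invented medical-records schema) looks like:

```python
def satisfies(fragments, constraints):
    """A fragmentation is safe iff no single fragment contains
    every attribute of any confidentiality constraint."""
    return all(
        not set(c) <= set(frag)
        for c in constraints
        for frag in fragments
    )
```

The optimization problem the paper addresses is then to find, among all fragmentations passing this check, one minimizing query-support cost.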

 

Bo Chen, Anil Kumar Ammula, Reza Curtmola; “Towards Server-side Repair for Erasure Coding-based Distributed Storage Systems;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 281-288. Doi: 10.1145/2699026.2699122
Abstract: Erasure coding is one of the main mechanisms to add redundancy in a distributed storage system, by which a file with k data segments is encoded into a file with n coded segments such that any k coded segments can be used to recover the original k data segments. Each coded segment is stored at a storage server. Under an adversarial setting in which the storage servers can exhibit Byzantine behavior, remote data checking (RDC) can be used to ensure that the stored data remains retrievable over time. The main previous RDC scheme to offer such strong security guarantees, HAIL, has an inefficient repair procedure, which puts a high load on the data owner when repairing even one corrupt data segment. In this work, we propose RDC-EC, a novel RDC scheme for erasure code-based distributed storage systems that can function under an adversarial setting. With RDC-EC we offer a solution to an open problem posed in previous work and build the first such system that has an efficient repair phase. The main insight is that RDC-EC is able to reduce the load on the data owner during the repair phase (i.e., lower bandwidth and computation) by shifting most of the burden from the data owner to the storage servers during repair. RDC-EC is able to maintain the advantages of systematic erasure coding: optimal storage for a certain reliability level and sub-file access. We build a prototype for RDC-EC and show experimentally that RDC-EC can efficiently handle large amounts of data.
Keywords: cloud storage, erasure coding, remote data integrity checking, server-side repair (ID#: 15-5558)
URL: http://doi.acm.org/10.1145/2699026.2699122
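The k-of-n recovery property the scheme builds on can be shown in its smallest nontrivial form: a (k=2, n=3) systematic code with one XOR parity segment, where any two segments reconstruct the data. Real deployments use Reed-Solomon-style codes over larger k and n; this is only the underlying idea:

```python
def encode(d0: bytes, d1: bytes):
    """(2,3) systematic erasure code: two data segments plus one XOR
    parity segment; any two of the three recover the originals."""
    assert len(d0) == len(d1)
    parity = bytes(a ^ b for a, b in zip(d0, d1))
    return [d0, d1, parity]

def decode(available):
    """available: list of (segment_index, segment_bytes), any two of three."""
    got = dict(available)
    if 0 in got and 1 in got:
        return got[0], got[1]
    p = got[2]
    if 0 in got:
        # d1 = d0 XOR parity
        return got[0], bytes(a ^ b for a, b in zip(got[0], p))
    # d0 = d1 XOR parity
    return bytes(a ^ b for a, b in zip(got[1], p)), got[1]
```

RDC-EC's contribution sits on top of such a code: verifying the stored segments and repairing a corrupt one without routing all surviving data through the owner.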

 

Dave Tian, Kevin Butler, Patrick McDaniel, Padma Krishnaswamy; “Securing ARP from the Ground Up;” CODASPY '15 Proceedings of the 5th ACM Conference on Data and Application Security and Privacy, March 2015, Pages 305-312. Doi: 10.1145/2699026.2699123
Abstract: The basis for all IPv4 network communication is the Address Resolution Protocol (ARP), which maps an IP address to a device's Media Access Control (MAC) identifier. ARP has long been recognized as vulnerable to spoofing and other attacks, and past proposals to secure the protocol have often involved modifying the basic protocol. This paper introduces arpsec, a secure ARP/RARP protocol suite which a) does not require protocol modification, b) enables continual verification of the identity of the target (respondent) machine by introducing an address binding repository derived using a formal logic that bases additions to a host's ARP cache on a set of operational rules and properties, c) utilizes the TPM, a commodity component now present in the vast majority of modern computers, to augment the logic-prover-derived assurance when needed, with TPM-facilitated attestations of system state achieved at viably low processing cost. Using commodity TPMs as our attestation base, we show that arpsec incurs an overhead ranging from 7% to 15.4% over the standard Linux ARP implementation and provides a first step towards a formally secure and trustworthy networking stack.
Keywords: arp, logic, spoofing, trusted computing, trusted protocols (ID#: 15-5559)
URL: http://doi.acm.org/10.1145/2699026.2699123
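The gating idea, admitting an ARP binding to the cache only if it is consistent with previously accepted bindings and escalating conflicts to a stronger check, can be caricatured as follows. The class and policy are invented for illustration; arpsec's actual rules are expressed in a formal logic and escalate conflicts to TPM attestation rather than simply rejecting:

```python
class ToyArpGuard:
    """Accept an ARP reply only if it is consistent with the
    previously accepted IP -> MAC bindings."""

    def __init__(self):
        self.bindings = {}  # stands in for arpsec's binding repository

    def offer(self, ip: str, mac: str) -> bool:
        known = self.bindings.get(ip)
        if known is not None and known != mac:
            # Conflicting claim: reject (arpsec would escalate to
            # a TPM-based attestation of the claimant here).
            return False
        self.bindings[ip] = mac
        return True
```

Crucially, this check sits entirely on the receiving host, which is how arpsec avoids modifying the ARP protocol itself.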


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

International Conferences: Cloud Engineering (IC2E), 2015 Arizona

 
SoS Logo

International Conferences: Cloud Engineering (IC2E), 2015 Arizona

 

2015 IEEE International Conference on Cloud Engineering (IC2E) was held 9-13 March 2015 in Tempe, Arizona. The conference addresses cloud computing as

“a new paradigm for the use and delivery of information technology (IT), including on-demand access, economies of scale, and dynamic sourcing options. In the cloud context, a wide range of IT resources and capabilities, including servers, networking, storage, middleware, data, security, applications, and business processes, are available as services enabled for rapid provisioning, flexible pricing, elastic scaling, and resilience. These new forms of IT services are challenging conventional wisdom and practices. Fully reaping the benefits of cloud computing calls for holistic treatment of key technical and business issues, as well as for engineering methodology that draws upon innovations from diverse areas of computer science and business informatics.”

The conference home page is available at: http://conferences.computer.org/IC2E/2015/  Articles cited here are deemed of interest to the Cyber Physical Systems Science of Security community.


 

Youngchoon Park, "Connected Smart Buildings, a New Way to Interact with Buildings," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 5, 5, 9-13 March 2015. doi: 10.1109/IC2E.2015.57
Abstract: Summary form only given. Devices, people, information and software applications rarely live in isolation in modern building management. For example, networked sensors that monitor the performance of a chiller are common, and collected data are delivered to building automation systems to optimize energy use. Detected possible failures are also handed to facility management staff for repairs. Physical and cyber security services have to be incorporated to prevent improper access to not only HVAC (Heating, Ventilation, Air Conditioning) equipment but also control devices. Harmonizing these connected sensors, control devices, equipment and people is a key to providing more comfortable, safe and sustainable buildings. Nowadays, devices with embedded intelligence and communication capabilities can interact with people directly. Traditionally, a few selected people (e.g., facility managers in the building industry) have access and program the device with a fixed operating schedule, while a device has very limited connectivity to an operating environment and context. Modern connected devices will learn and interact with users and other connected things. This would be a fundamental shift in communication, from unidirectional to bidirectional. A manufacturer will learn how their products and features are being accessed and utilized. An end user, or a device on behalf of a user, can interact and communicate with a service provider or a manufacturer without going through a distributor, on an almost real-time basis. This will require different business strategies and product development behaviors to serve connected customers' demands. Connected things produce an enormous amount of data that raises many questions and technical challenges in data management, analysis and associated services. In this talk, we brief some of the challenges that we have encountered in developing connected building solutions and services. More specifically, (1) semantic interoperability requirements among smart sensors, actuators, lighting, security and control and business applications, (2) engineering challenges in managing massively large time-sensitive multi-media data in a cloud at global scale, and (3) security and privacy concerns are presented.
Keywords: HVAC; building management systems; intelligent sensors; HVAC; actuators; building automation systems; building management; business strategy; chiller performance; connected smart buildings; control devices; cyber security services; data management; facility management staffs; heating-ventilation-air conditioning equipment; lighting; networked sensors; product development behaviors; service provider; smart sensors; time sensitive multimedia data; Building automation; Business; Conferences; Intelligent sensors; Security; Building Management; Cloud; Internet of Things (ID#: 15-5429)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092892&isnumber=7092808

 

Singh, J.; Pasquier, T.F.J.-M.; Bacon, J.; Eyers, D., "Integrating Messaging Middleware and Information Flow Control," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 54, 59, 9-13 March 2015. doi: 10.1109/IC2E.2015.13
Abstract: Security is an ongoing challenge in cloud computing. Currently, cloud consumers have few mechanisms for managing their data within the cloud provider's infrastructure. Information Flow Control (IFC) involves attaching labels to data, to govern its flow throughout a system. We have worked on kernel-level IFC enforcement to protect data flows within a virtual machine (VM). This paper makes the case for, and demonstrates the feasibility of an IFC-enabled messaging middleware, to enforce IFC within and across applications, containers, VMs, and hosts. We detail how such middleware can integrate with local (kernel) enforcement mechanisms, and highlight the benefits of separating data management policy from application/service-logic.
Keywords: cloud computing; data protection; middleware; security of data; virtual machines; VM; application logic; cloud computing; cloud consumers; cloud provider infrastructure; data flow protection; data management policy; information flow control; kernel enforcement mechanisms; kernel-level IFC enforcement; local enforcement mechanisms; messaging middleware integration; service-logic; virtual machine; Cloud computing; Context; Kernel; Runtime; Security; Servers; Information Flow Control; cloud computing; distributed systems; middleware; policy; security (ID#: 15-5430)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092899&isnumber=7092808
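The label-based check such middleware enforces reduces to a dominance test between secrecy labels: a message may be delivered only to a subscriber whose label includes all of the message's tags. The Label class and tag names below are invented for illustration and are not the paper's model:

```python
class Label:
    """Secrecy label as a set of tags: data may flow only to a
    destination whose label contains all of the source's tags."""

    def __init__(self, *tags):
        self.tags = frozenset(tags)

    def can_flow_to(self, other) -> bool:
        return self.tags <= other.tags

def deliver(message_label, subscriber_label) -> bool:
    # Middleware-level IFC check before handing a message to a subscriber.
    return message_label.can_flow_to(subscriber_label)

# Invented example labels.
medical = Label("patient-a", "medical")
ward_monitor = Label("patient-a", "medical", "audit")
billing_app = Label("billing")
```

Performing the test in the messaging layer is what lets the policy travel with the data across applications, containers, VMs, and hosts.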

 

Routray, R., "Cloud Storage Infrastructure Optimization Analytics," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 92, 92, 9-13 March 2015. doi: 10.1109/IC2E.2015.83
Abstract: Summary form only given. The emergence and adoption of cloud computing have become widely prevalent, given the value proposition it brings to an enterprise in terms of agility and cost effectiveness. Big data analytical capabilities (specifically treating storage/system management as a big data problem for a service provider) using cloud delivery models are defined as Analytics as a Service or Software as a Service. This service simplifies obtaining useful insights from an operational enterprise data center, leading to cost and performance optimizations. Software defined environments decouple the control planes from the data planes that were often vertically integrated in traditional networking or storage systems. The decoupling between the control planes and the data planes enables opportunities for improved security, resiliency and IT optimization in general. This talk describes our novel approach in hosting the systems management platform (a.k.a. control plane) in the cloud, offered to enterprises in a Software as a Service (SaaS) model. Specifically, this presentation focuses on the analytics layer, with the SaaS paradigm enabling data centers to visualize, optimize and forecast infrastructure via a simple capture, analyze and govern framework. At the core, it uses big data analytics to extract actionable insights from system management metrics data. Our system is developed in research and deployed across customers, where the core focus is on agility, elasticity and scalability of the analytics framework. We demonstrate a few system/storage management analytics case studies to demonstrate cost and performance optimization for both the cloud consumer as well as the service provider. Actionable insights generated from the analytics platform are implemented in an automated fashion via an OpenStack based platform.
Keywords: cloud computing; data analysis; optimisation; Analytics as a Service; OpenStack based platform; SaaS model; Software as a Service; cloud computing; cloud delivery models; cloud storage infrastructure optimization analytics; data analytical capabilities; data analytics; data planes; management metric data system; management platform system; operational enterprise data center; performance optimizations; software defined environments; value proposition; Big data; Cloud computing; Computer science; Conferences; Optimization; Software as a service; Storage management (ID#: 15-5431)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092904&isnumber=7092808

 

Strizhov, M.; Ray, I., "Substring Position Search over Encrypted Cloud Data Using Tree-Based Index," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 165, 174, 9-13 March 2015. doi: 10.1109/IC2E.2015.33
Abstract: Existing Searchable Encryption (SE) solutions are able to handle simple boolean search queries, such as single or multi-keyword queries, but cannot handle substring search queries over encrypted data that also involve identifying the position of the substring within the document. These types of queries are relevant in areas such as searching DNA data. In this paper, we propose a tree-based Substring Position Searchable Symmetric Encryption (SSP-SSE) to overcome the existing gap. Our solution efficiently finds occurrences of a substring over encrypted cloud data. We formally define the leakage functions and security properties of SSP-SSE. Then, we prove that the proposed scheme is secure against chosen-keyword attacks that involve an adaptive adversary. Our analysis demonstrates that SSP-SSE introduces very low overhead on computation and storage.
Keywords: cloud computing; cryptography; query processing; trees (mathematics); DNA data; SSP-SSE; adaptive adversary; boolean search queries; chosen-keyword attacks; cloud data; leakage functions; multikeyword queries; security properties; single keyword queries; substring position search; substring position searchable symmetric encryption; tree-based index; Cloud computing; Encryption; Indexes; Keyword search; Probabilistic logic; cloud computing; position heap tree; searchable symmetric encryption; substring position search (ID#: 15-5432)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092914&isnumber=7092808
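The paper's SSP-SSE construction is tree-based with formally analysed leakage; as a rough illustration of the general idea of position search over data the server cannot read, here is a toy index that stores only keyed HMAC tokens of fixed-length k-grams. This is not the authors' scheme: the k-gram size, helper names, and the (leaky) plaintext position lists are illustrative assumptions.

```python
import hashlib
import hmac
import os


def build_index(doc: str, key: bytes, k: int = 3) -> dict:
    """Map HMAC(k-gram) -> positions; the server stores only opaque tokens."""
    index = {}
    for i in range(len(doc) - k + 1):
        token = hmac.new(key, doc[i:i + k].encode(), hashlib.sha256).hexdigest()
        index.setdefault(token, []).append(i)
    return index


def query(index: dict, key: bytes, pattern: str) -> list:
    """Look up a pattern of exactly k characters and return its positions."""
    token = hmac.new(key, pattern.encode(), hashlib.sha256).hexdigest()
    return index.get(token, [])


key = os.urandom(32)
index = build_index("GATTACAGATT", key)
print(query(index, key, "GAT"))  # positions 0 and 7
```

Unlike this sketch, which handles only fixed-length patterns and leaks positions and access patterns freely, SSP-SSE supports arbitrary substrings and bounds its leakage via explicitly defined leakage functions.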

 

Qingji Zheng; Shouhuai Xu, "Verifiable Delegated Set Intersection Operations on Outsourced Encrypted Data," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 175, 184, 9-13 March 2015. doi: 10.1109/IC2E.2015.38
Abstract: We initiate the study of the following problem: Suppose Alice and Bob would like to outsource their encrypted private data sets to the cloud, and they also want to conduct the set intersection operation on their plaintext data sets. The straightforward solution for them is to download their outsourced cipher texts, decrypt the cipher texts locally, and then execute a commodity two-party set intersection protocol. Unfortunately, this solution is not practical. We therefore motivate and introduce the novel notion of Verifiable Delegated Set Intersection on outsourced encrypted data (VDSI). The basic idea is to delegate the set intersection operation to the cloud, while (i) not giving the decryption capability to the cloud, and (ii) being able to hold the misbehaving cloud accountable. We formalize security properties of VDSI and present a construction. In our solution, the computational and communication costs on the users are linear to the size of the intersection set, meaning that the efficiency is optimal up to a constant factor.
Keywords: cryptographic protocols; set theory; VDSI; encrypted private data sets; intersection protocol; outsourced cipher texts; outsourced encrypted data; plaintext data sets; set intersection operation; verifiable delegated set intersection operations; Cloud computing; Encryption; Gold; Polynomials; Protocols; outsourced encrypted data; verifiable outsourced computing; verifiable set intersection (ID#: 15-5433)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092915&isnumber=7092808

 

Berger, S.; Goldman, K.; Pendarakis, D.; Safford, D.; Valdez, E.; Zohar, M., "Scalable Attestation: A Step Toward Secure and Trusted Clouds," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 185, 194, 9-13 March 2015. doi: 10.1109/IC2E.2015.32
Abstract: In this work we present Scalable Attestation, a method which combines both secure boot and trusted boot technologies, and extends them up into the host, its programs, and up into the guest's operating system and workloads, to both detect and prevent integrity attacks. Anchored in hardware, this integrity appraisal and attestation protects persistent data (files) from remote attack, even if the attack is root privileged. As an added benefit of a hardware rooted attestation, we gain a simple hardware based geolocation attestation to help enforce regulatory requirements. This design is implemented in multiple cloud test beds based on the QEMU/KVM hypervisor, Open Stack, and Open Attestation, and is shown to provide significant additional integrity protection at negligible cost.
Keywords: cloud computing; operating systems (computers);security of data; trusted computing; Open Attestation; Open Stack; QEMU/KVM hypervisor; cloud test beds; guest operating system; hardware based geolocation attestation; hardware rooted attestation; integrity attack detection; integrity attack prevention; integrity protection; regulatory requirements; scalable attestation; secure boot; secure clouds; trusted boot technologies; trusted clouds; Appraisal; Hardware; Kernel; Linux; Public key; Semiconductor device measurement; Attestation; Integrity; Security (ID#: 15-5434)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092916&isnumber=7092808

 

Kanstren, T.; Lehtonen, S.; Savola, R.; Kukkohovi, H.; Hatonen, K., "Architecture for High Confidence Cloud Security Monitoring," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 195, 200, 9-13 March 2015. doi: 10.1109/IC2E.2015.21
Abstract: Operational security assurance of a networked system requires providing constant and up-to-date evidence of its operational state. In a cloud-based environment we deploy our services as virtual guests running on external hosts. As this environment is not under our full control, we have to find ways to provide assurance that the security information provided from this environment is accurate, and our software is running in the expected environment. In this paper, we present an architecture for providing increased confidence in measurements of such cloud-based deployments. The architecture is based on a set of deployed measurement probes and trusted platform modules (TPM) across both the host infrastructure and guest virtual machines. The TPM are used to verify the integrity of the probes and measurements they provide. This allows us to ensure that the system is running in the expected environment, the monitoring probes have not been tampered with, and the integrity of measurement data provided is maintained. Overall this gives us a basis for increased confidence in the security of running parts of our system in an external cloud-based environment.
Keywords: cloud computing; security of data; virtual machines; TPM; external cloud-based environment; external hosts; guest virtual machines; high confidence cloud security monitoring; host infrastructure; measurement probes; networked system; operational security assurance; operational state; trusted platform modules; Computer architecture; Cryptography; Monitoring; Probes; Servers; Virtual machining; TPM; cloud; monitoring; secure element; security assurance (ID#: 15-5435)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092917&isnumber=7092808
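The TPM-backed integrity verification described above rests on the extend operation: each measured component is folded into a platform configuration register (PCR) as a hash chain, so tampering with any monitoring probe changes the final value. A simplified model of the idea (real TPMs use fixed PCR banks and defined measurement formats; the component names here are made up):

```python
import hashlib


def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new PCR = H(old PCR || H(measured component))
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()


pcr = bytes(32)  # PCRs start zeroed at boot
for component in (b"bootloader", b"kernel", b"monitoring-probe"):
    pcr = pcr_extend(pcr, component)
print(pcr.hex())
```

Because the chain is order-dependent and one-way, a remote verifier that compares the final PCR value against a known-good value detects any modified, missing, or reordered component.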

 

Calyam, P.; Seetharam, S.; Homchaudhuri, B.; Kumar, M., "Resource Defragmentation Using Market-Driven Allocation in Virtual Desktop Clouds," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 246, 255, 9-13 March 2015. doi: 10.1109/IC2E.2015.37
Abstract: Similar to memory or disk fragmentation in personal computers, emerging "virtual desktop cloud" (VDC) services experience the problem of data center resource fragmentation which occurs due to on-the-fly provisioning of virtual desktop (VD) resources. Irregular resource holes due to fragmentation lead to sub-optimal VD resource allocations, and cause: (a)decreased user quality of experience (QoE), and (b) increased operational costs for VDC service providers. In this paper, we address this problem by developing a novel, optimal "Market-Driven Provisioning and Placement" (MDPP) scheme that is based upon distributed optimization principles. The MDPP scheme channelizes inherent distributed nature of the resource allocation problem by capturing VD resource bids via a virtual market to explore soft spots in the problem space, and consequently defragments a VDC through cost-aware utility-maximal VD re-allocations or migrations. Through extensive simulations of VD request allocations to multiple data centers for diverse VD application and user QoE profiles, we demonstrate that our MDPP scheme outperforms existing schemes that are largely based on centralized optimization principles. Moreover, MDPP scheme can achieve high VDC performance and scalability, measurable in terms of a 'Net Utility' metric, even when VD resource location constraints are imposed to meet orthogonal security objectives.
Keywords: cloud computing; computer centres; microcomputers; quality of experience; resource allocation; MDPP scheme; VD request allocation simulations; VD resource on-the-fly provisioning; VDC service providers; centralized optimization principles; cost-aware utility-maximal VD re-allocations; data center resource fragmentation; disk fragmentation; distributed optimization principles; irregular resource holes; market-driven allocation; market-driven provisioning and placement scheme; memory fragmentation; multiple data centers; net utility metric; operational costs; orthogonal security; personal computers; sub-optimal VD resource allocation; user QoE profiles; user quality of experience; virtual desktop clouds services; Bandwidth; Joints; Measurement; Optimization; Resource management; Scalability; Virtual machining (ID#: 15-5436)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092926&isnumber=7092808

 

Pasquier, T.F.J.-M.; Singh, J.; Bacon, J., "Information Flow Control for Strong Protection with Flexible Sharing in PaaS," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 279, 282, 9-13 March 2015. doi: 10.1109/IC2E.2015.64
Abstract: The need to share data across applications is becoming increasingly evident. Current cloud isolation mechanisms focus solely on protection, such as containers that isolate at the OS-level, and virtual machines that isolate through the hypervisor. However, by focusing rigidly on protection, these approaches do not provide for controlled sharing. This paper presents how Information Flow Control (IFC) offers a flexible alternative. As a data-centric mechanism it enables strong isolation when required, while providing continuous, fine grained control of the data being shared. An IFC-enabled cloud platform would ensure that policies are enforced as data flows across all applications, without requiring any special sharing mechanisms.
Keywords: cloud computing; data protection; operating systems (computers); virtual machines; IFC-enabled cloud platform; OS-level; PaaS; cloud isolation mechanisms; data-centric mechanism; fine grained data control; flexible data sharing mechanism; hypervisor; information flow control; virtual machines; Cloud computing; Computers; Containers; Context; Kernel; Security (ID#: 15-5437)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092930&isnumber=7092808
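At its core, IFC attaches labels to data and checks a simple ordering on every flow. A minimal secrecy check in the style of tag-based IFC models (the labels and rule below are an illustrative sketch, not the authors' formalism):

```python
def can_flow(source: set, target: set) -> bool:
    # Secrecy rule: data may flow only to a context that carries
    # at least all of the source's secrecy tags.
    return source <= target


patient_record = {"medical", "alice"}
analytics_app = {"medical", "alice", "audited"}
billing_app = {"billing"}

print(can_flow(patient_record, analytics_app))  # True: target dominates source
print(can_flow(patient_record, billing_app))    # False: flow is blocked
```

Enforcing this check on every inter-application flow is what lets an IFC-enabled platform combine strong isolation with controlled, fine-grained sharing.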

 

Tawalbeh, L.; Haddad, Y.; Khamis, O.; Aldosari, F.; Benkhelifa, E., "Efficient Software-Based Mobile Cloud Computing Framework," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 317, 322, 9-13 March 2015. doi: 10.1109/IC2E.2015.48
Abstract: This paper proposes an efficient software based data possession mobile cloud computing framework. The proposed design utilizes the characteristics of two frameworks. The first one is the provable data possession design built for resource-constrained mobile devices and it uses the advantage of trusted computing technology, and the second framework is a lightweight resilient storage outsourcing design for mobile cloud computing systems. Our software based framework utilizes the strength aspects in both mentioned frameworks to gain better performance and security. The evaluation and comparison results showed that our design has better flexibility and efficiency than other related frameworks.
Keywords: cloud computing; data handling; mobile computing; outsourcing; resource constrained mobile devices; software based data possession mobile cloud computing framework; software based framework; storage outsourcing design; trusted computing technology; Cloud computing; Computational modeling; Encryption; Mobile communication; Mobile handsets; Servers; Mobile Cloud Computing; Security; Software Defined Storage; Software Defined Systems; Trusted Cloud Computing (ID#: 15-5438)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092937&isnumber=7092808

 

Slominski, A.; Muthusamy, V.; Khalaf, R., "Building a Multi-tenant Cloud Service from Legacy Code with Docker Containers," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 394, 396, 9-13 March 2015. doi: 10.1109/IC2E.2015.66
Abstract: In this paper we address the problem of migrating a legacy Web application to a cloud service. We develop a reusable architectural pattern to do so and validate it with a case study of the Beta release of the IBM Bluemix Workflow Service [1] (herein referred to as the Beta Workflow service). It uses Docker [2] containers and a Cloudant [3] persistence layer to deliver a multi-tenant cloud service by re-using a legacy codebase. We are not aware of any literature that addresses this problem by using containers. The Beta Workflow service provides a scalable, stateful, highly available engine to compose services with REST APIs. The composition is modeled as a graph but authored in a Javascript-based domain specific language that specifies a set of activities and control flow links among these activities. The primitive activities in the language can be used to respond to HTTP REST requests, invoke services with REST APIs, and execute Javascript code to, among other uses, extract and construct the data inputs and outputs to external services, and make calls to these services. Examples of workflows that have been built using the service include distributing surveys and coupons to customers of a retail store [1], the management of sales requests between a salesperson and their regional managers, managing the staged deployment of different versions of an application, and the coordinated transfer of jobs among case workers.
Keywords: Java; application program interfaces; cloud computing; specification languages; Beta Workflow service; Cloudant persistence layer; HTTP REST requests;IBM Bluemix Workflow Service; Javascript code; Javascript-based domain specific language; REST API; docker containers; legacy Web application; legacy codebase; multitenant cloud service; reusable architectural pattern; Browsers; Cloud computing; Containers; Engines; Memory management; Organizations; Security (ID#: 15-5439)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092950&isnumber=7092808

 

Paul, M.; Collberg, C.; Bambauer, D., "A Possible Solution for Privacy Preserving Cloud Data Storage," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 397, 403, 9-13 March 2015. doi: 10.1109/IC2E.2015.103
Abstract: Despite the economic advantages of cloud data storage, many corporations have not yet migrated to this technology. While corporations in the financial sector cite data security as a reason, corporations in other sectors cite privacy concerns for this reluctance. In this paper, we propose a possible solution for this problem inspired by the HIPAA safe harbor methodology for data anonymization. The proposed technique involves using a hash function that uniquely identifies the data and then splitting data across multiple cloud providers. We propose that such a "Good Enough" approach to privacy-preserving cloud data storage is both technologically feasible and financially advantageous. Following this approach addresses concerns about privacy harms resulting from accidental or deliberate data spills from cloud providers. The "Good Enough" method will enable firms to move their data into the cloud without incurring privacy risks, enabling them to realize the economic advantages provided by the pay-per-use model of cloud data storage.
Keywords: cloud computing; data privacy; security of data; HIPAA safe harbor methodology; data anonymization; data security; data splitting; financial sector; good enough approach; multiple cloud providers; pay-per-use model; privacy concerns; privacy preserving cloud data storage; Cloud computing; Data privacy; Indexes; Memory; Privacy; Security; Data Privacy; Cloud; Obfuscation (ID#: 15-5440)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092951&isnumber=7092808
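The "Good Enough" approach pairs a unique hash identifier with splitting data across multiple providers. A naive sketch of those two ingredients, assuming even chunking (function names are illustrative; a real deployment would additionally encrypt or secret-share each shard):

```python
import hashlib


def split_record(record: bytes, n_providers: int):
    """Identify a record by its hash and split it so no single provider holds it all."""
    record_id = hashlib.sha256(record).hexdigest()
    chunk = (len(record) + n_providers - 1) // n_providers  # ceiling division
    shards = [record[i * chunk:(i + 1) * chunk] for i in range(n_providers)]
    return record_id, shards


def reassemble(shards) -> bytes:
    return b"".join(shards)


rid, shards = split_record(b"ssn=123-45-6789;dob=1970-01-01", 3)
assert reassemble(shards) == b"ssn=123-45-6789;dob=1970-01-01"
```

The hash gives a stable lookup key without storing identifying plaintext in the index, while splitting limits what any one provider's accidental or deliberate data spill can reveal.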

 

Mutkoski, S., "National Cloud Computing Principles: Guidance for Public Sector Authorities Moving to the Cloud," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 404, 409, 9-13 March 2015. doi: 10.1109/IC2E.2015.104
Abstract: Governments around the world are actively seeking to leverage the many benefits of cloud computing while also ensuring that they manage risks that deployment of the new technologies can raise. While laws and regulations related to the privacy and security of government data may already exist, many were drafted in the "pre-cloud" era and could therefore benefit from an update and revision. This paper explores some of the concepts that should be incorporated into new or amended laws that seek to guide public sector entities as they move their data and workloads to the cloud.
Keywords: cloud computing; legislation; government data; national cloud computing legislation principles; precloud era; public sector authorities; Certification; Cloud computing; Computational modeling; Data privacy; Government; Legislation; Security; Cloud Computing; Public Sector; Regulation and Legislation; Risk Management; Security (ID#: 15-5441)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092952&isnumber=7092808

 

Pasquier, T.F.J.-M.; Powles, J.E., "Expressing and Enforcing Location Requirements in the Cloud Using Information Flow Control," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 410, 415, 9-13 March 2015. doi: 10.1109/IC2E.2015.71
Abstract: The adoption of cloud computing is increasing and its use is becoming widespread in many sectors. As cloud service provision increases, legal and regulatory issues become more significant. In particular, the international nature of cloud provision raises concerns over the location of data and the laws to which they are subject. In this paper we investigate Information Flow Control (IFC) as a possible technical solution to expressing, enforcing and demonstrating compliance of cloud computing systems with policy requirements inspired by data protection and other laws. We focus on geographic location of data, since this is the paradigmatic concern of legal/regulatory requirements on cloud computing and, to date, has not been met with robust technical solutions and verifiable data flow audit trails.
Keywords: cloud computing; data protection; geography; law; IFC; cloud computing; cloud service provision; data protection; geographic data location; information flow control; legal issues; legal/regulatory requirements; location requirement enforcement; location requirement expression; policy requirements; regulatory issues; verifiable data flow audit trails; Cloud computing; Companies; Context; Europe; Law; Security (ID#: 15-5442)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092953&isnumber=7092808

 

D'Errico, M.; Pearson, S., "Towards a Formalised Representation for the Technical Enforcement of Privacy Level Agreements," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 422, 427, 9-13 March 2015. doi: 10.1109/IC2E.2015.72
Abstract: Privacy Level Agreements (PLAs) are likely to be increasingly adopted as a standardized way for cloud providers to describe their data protection practices. In this paper we propose an ontology-based model to represent the information disclosed in the agreement to turn it into a means that allows software tools to use and further process that information for different purposes, including automated service offering discovery and comparison. A specific usage of the PLA ontology is presented, showing how to link high level policies to operational policies that are then enforced and monitored. Through this established link, cloud users gain greater assurance that what is expressed in such agreements is actually being met, and thereby can take this information into account when choosing cloud service providers. Furthermore, the created link can be used to enable policy enforcement tools to add semantics to the evidence they produce; this mainly takes the form of logs that are associated with the specific policy of which execution they provide evidence. Finally, the use of the ontology model allows a means of enabling interoperability among tools that are in charge of the enforcement and monitoring of possible violations to the terms of the agreement.
Keywords: data protection; ontologies (artificial intelligence); open systems; software tools; PLA ontology; cloud providers; data protection practices; formalised representation; high level policies; interoperability; ontology-based model; operational policies; policy enforcement tools; privacy level agreements; software tools; technical enforcement; Data models; Data privacy; Engines; Monitoring; Ontologies; Privacy; Programmable logic arrays; privacy policy; assurance; policy enforcement; Privacy Level Agreement (ID#: 15-5443)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092955&isnumber=7092808

 

Adelyar, S.H., "Towards Secure Agile Agent-Oriented System Design," Cloud Engineering (IC2E), 2015 IEEE International Conference on, pp. 499, 501, 9-13 March 2015. doi: 10.1109/IC2E.2015.95
Abstract: Agile methods are criticized as inadequate for developing secure digital services. Currently, the software research community only partially studies security for agile practices. Our more holistic approach identifies the security challenges / benefits of agile practices that relate to the core "embrace-changes" principle. For this case-study based research, we consider eXtreme Programming (XP) for a holistic security integration into agile practices.
Keywords: object-oriented programming; security of data; software agents; software prototyping; XP; embrace-change principle; extreme programming; holistic security integration; secure agile agent-oriented system design; secure digital services; software research community; Agile software development; Cloud computing; Context; Planning; Programming; Security; Agile; Embrace-changes; Security; Challenges; Benefits (ID#: 15-5444)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7092968&isnumber=7092808



International Conferences: Cryptography and Security in Computing Systems, 2015, Amsterdam

 

 

The Second Workshop on Cryptography and Security in Computing Systems (CS2) was held in Amsterdam 19 January 2015. The workshop describes itself as “a venue for security and cryptography experts to interact with the computer architecture and compilers community, aiming at cross-fertilization and multi-disciplinary approaches to security in computing systems.”  Conference details are available on its web page at: http://www.cs2.deib.polimi.it/   


 

Apostolos P. Fournaris, Nicolaos Klaoudatos, Nicolas Sklavos, Christos Koulamas; "Fault and Power Analysis Attack Resistant RNS based Edwards Curve Point Multiplication;" CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 43. doi: 10.1145/2694805.2694814
Abstract: In this paper, a road-map toward Fault Attack (FA) and Power Analysis Attack (PA) resistance is proposed that combines the Edwards curves' innate PA resistance and a base point randomization Montgomery Power Ladder point multiplication (PM) algorithm, capable of providing broad FA and PA resistance, with the Residue Number System (RNS) representation for all GF(p) operations, in an effort to enhance the FA-PA resistance of point multiplication algorithms and additionally provide performance efficiency in terms of speed and hardware resources. The proposed methodology's security is analyzed and its efficiency is verified by designing a PM hardware architecture and an FPGA implementation.
Keywords:  (not provided) (ID#: 15-5445)
URL: http://doi.acm.org/10.1145/2694805.2694814
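The Montgomery ladder at the heart of this countermeasure performs the same operation sequence for every key bit, which is what frustrates simple power analysis. Transposed here from Edwards-curve points in RNS to plain modular exponentiation purely to show the ladder's structure (the paper's actual setting is elliptic-curve point multiplication):

```python
def montgomery_ladder_pow(base: int, exponent: int, modulus: int) -> int:
    # Invariant: R1 == R0 * base (mod modulus). Each bit triggers exactly
    # one multiplication and one squaring, regardless of its value, so the
    # operation trace does not depend on the secret exponent's bit pattern.
    R0, R1 = 1, base % modulus
    for bit in bin(exponent)[2:]:
        if bit == "0":
            R1 = (R0 * R1) % modulus
            R0 = (R0 * R0) % modulus
        else:
            R0 = (R0 * R1) % modulus
            R1 = (R1 * R1) % modulus
    return R0


assert montgomery_ladder_pow(3, 13, 1000) == pow(3, 13, 1000)
```

The base point randomization and RNS representation in the paper add fault and DPA resistance on top of this uniform-trace structure.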

 

Mathieu Carbone, Yannick Teglia, Philippe Maurine, Gilles R. Ducharme; “Interest of MIA in Frequency Domain?;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 35. doi: 10.1145/2694805.2694812
Abstract: Mutual Information Analysis (MIA) has a main advantage over Correlation Power Analysis (CPA): its ability to detect any kind of leakage within traces. However, it remains rarely used and less popular than CPA, probably for two reasons. The first one is related to the appropriate choice of the hyperparameters involved in MIA, a choice that determines its efficiency and genericity. The second one is surely the high computational burden associated with MIA. The interest of applying MIA in the frequency domain rather than in the time domain is discussed. It is shown that MIA running in the frequency domain is really effective and fast when combined with the use of an accurate frequency leakage model.
Keywords: (not provided) (ID#: 15-5446)
URL: http://doi.acm.org/10.1145/2694805.2694812

 

Alexander Herrmann, Marc Stöttinger; “Evaluation Tools for Multivariate Side-Channel Analysis;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 1. Doi: 10.1145/2694805.2694806
Abstract: The goal of side-channel evaluation is to estimate the vulnerability of an implementation against the most powerful attacks. In this paper, we present a closed equation for the success rate computation in a profiling-based side-channel analysis scenario. From this equation, we derive a metric that can be used for optimizing the attack scenario by finding the best set of considered points in time. Practical experiments demonstrate the advantages of this new method against other previously used feature selection algorithms.
Keywords: Feature Selection, Multivariate Side-Channel Analysis (ID#: 15-5447)
URL: http://doi.acm.org/10.1145/2694805.2694806

 

Harris E. Michail, Lenos Ioannou, Artemios G. Voyiatzis; “Pipelined SHA-3 Implementations on FPGA: Architecture and Performance Analysis;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 13. Doi: 10.1145/2694805.2694808
Abstract: Efficient and high-throughput designs of hash functions will be in great demand in the next few years, given that every IPv6 data packet is expected to be handled with some kind of security features. In this paper, pipelined implementations of the new SHA-3 hash standard on FPGAs are presented and compared aiming to map the design space and the choice of the number of pipeline stages. The proposed designs support all the four SHA-3 modes of operation. They also support processing of multiple messages each comprising multiple blocks. Designs for up to a four-stage pipeline are presented for three generations of FPGAs and the performance of the implementations is analyzed and compared in terms of the throughput/area metric.  Several pipeline designs are explored in order to determine the one that achieves the best throughput/area performance. The results indicate that the FPGA technology characteristics must also be considered when choosing an efficient pipeline depth. Our designs perform better compared to the existing literature due to the extended optimization effort on the synthesis tool and the efficient design of multi-block message processing.
Keywords: Cryptography, FPGA, Hash function, Pipeline, Security (ID#: 15-5448)
URL: http://doi.acm.org/10.1145/2694805.2694808
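The four SHA-3 modes of operation the designs support differ in capacity/rate and digest length. They are exposed directly in Python's hashlib, which is one convenient way to generate software reference digests against which a hardware pipeline can be checked:

```python
import hashlib

message = b"IPv6 packet payload"
for mode in ("sha3_224", "sha3_256", "sha3_384", "sha3_512"):
    digest = hashlib.new(mode, message).hexdigest()
    print(f"{mode}: {len(digest) * 4}-bit digest")
```

The throughput/area trade-off studied in the paper comes from pipelining the Keccak permutation rounds in hardware; the output lengths and padding are identical to what this software reference produces.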

 

Wei He, Alexander Herrmann; “Placement Security Analysis for Side-Channel Resistant Dual-Rail Scheme in FPGA;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 39. Doi: 10.1145/2694805.2694813
Abstract: Physical implementations have significant impacts on the security level of hardware cryptography, mainly due to the fact that the bottom-layer logic fundamentals typically act as the exploitable SCA leakage sources. As a widely studied countermeasure category, dual-rail precharged logic theoretically withstands side-channel analysis by compensating the data-dependent variations between two rails. In this paper, different placement schemes for a dual-rail framework in Xilinx FPGAs are investigated with respect to silicon process variations. The presented work is based on the practical implementation of a light-weight crypto coprocessor. Stochastic Approach [9] based SNR estimation is used as a metric to quantify the measurable leakage, over a series of EM traces acquired by surface scanning of a decapsulated Virtex-5 device. Experimental results show that by employing a highly interleaved and identical dual-rail style in the diagonal direction, the routing symmetry can be further optimized. This improvement results in less influence from process variation between the dual rails, which in turn yields a higher security grade in terms of signal-to-noise ratio.
Keywords: Dual-rail Precharge Logic, EM Surface Scan, FPGA, Side-Channel Analysis, Signal-to-Noise Ratio (SNR), Stochastic Approach (ID#: 15-5449)
URL: http://doi.acm.org/10.1145/2694805.2694813

 

Mohsen Toorani; “On Continuous After-the-Fact Leakage-Resilient Key Exchange;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 31. doi: 10.1145/2694805.2694811
Abstract: Recently, the Continuous After-the-Fact Leakage (CAFL) security model has been introduced for two-party authenticated key exchange (AKE) protocols. In the CAFL model, an adversary can adaptively request arbitrary leakage of long-term secrets even after the test session is activated. It supports continuous leakage even when the adversary learns certain ephemeral secrets or session keys. The amount of leakage is limited per query, but there is no bound on the total leakage. A generic leakage-resilient key exchange protocol π has also been introduced that is formally proved to be secure in the CAFL model. In this paper, we comment on the CAFL model, and show that it does not capture its claimed security. We also present an attack and counterproofs for the security of protocol π which invalidates the formal security proofs of protocol π in the CAFL model.
Keywords: Cryptographic protocols, Key exchange, Leakage-resilient cryptography, Security models (ID#: 15-5450)
URL: http://doi.acm.org/10.1145/2694805.2694811

 

Rainer Plaga, Dominik Merli;  “A New Definition and Classification of Physical Unclonable Functions;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 7.  doi: 10.1145/2694805.2694807
Abstract: A new definition of "Physical Unclonable Functions" (PUFs), the first one that fully captures its intuitive idea among experts, is presented. A PUF is an information-storage system with a security mechanism that is 1. meant to impede the duplication of a precisely described storage-functionality in another, separate system and 2. remains effective against an attacker with temporary access to the whole original system.  A novel classification scheme of the security objectives and mechanisms of PUFs is proposed and its usefulness to aid future research and security evaluation is demonstrated. One class of PUF security mechanisms that prevents an attacker to apply all addresses at which secrets are stored in the information-storage system, is shown to be closely analogous to cryptographic encryption. Its development marks the dawn of a new fundamental primitive of hardware-security engineering: cryptostorage. These results firmly establish PUFs as a fundamental concept of hardware security.
Keywords: Physical Unclonable Functions (ID#: 15-5451)
URL: http://doi.acm.org/10.1145/2694805.2694807

 

Loïc Zussa, Ingrid Exurville, Jean-Max Dutertre, Jean-Baptiste Rigaud, Bruno Robisson, Assia Tria, Jessy Clédière; “Evidence of an Information Leakage Between Logically Independent Blocks;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 25.  doi: 10.1145/2694805.2694810
Abstract: In this paper we study the information leakage that may exist, due to electrical coupling, between logically independent blocks of a secure circuit as a new attack path to retrieve secret information. First, an AES-128 has been implemented on an FPGA board. Then, this AES implementation has been secured with a delay-based countermeasure against fault injection related to timing constraints violations. The countermeasure's detection threshold was supposed to be logically independent from the data handled by the cryptographic algorithm. Thus, it theoretically does not leak any information related to sensitive values. However, experiments point out an existing correlation between the fault detection threshold of the countermeasure and the AES's calculations. As a result, we were able to retrieve the secret key of the AES using this correlation. Finally, different strategies were tested in order to minimize the number of triggered alarms needed to retrieve the secret key.
Keywords: 'DPA-like' analysis, Delay-based countermeasure, information leakage, side effects (ID#: 15-5452)
URL: http://doi.acm.org/10.1145/2694805.2694810

 

Paulo Martins, Leonel Sousa;  “Stretching the Limits of Programmable Embedded Devices for Public-key Cryptography;” CS2 '15 Proceedings of the Second Workshop on Cryptography and Security in Computing Systems, January 2015, Pages 19.  doi: 10.1145/2694805.2694809
Abstract: In this work, the efficiency of embedded devices when operating as cryptographic accelerators is assessed, exploiting both multithreading and Single Instruction Multiple Data (SIMD) parallelism. The latency of a single modular multiplication is reduced, by splitting computation across multiple cores, and the technique is applied to the Rivest-Shamir-Adleman (RSA) cryptosystem, reducing its central operation execution time by up to 2.2 times, on an ARM A15 4-core processor. Also, algorithms are proposed to simultaneously perform multiple modular multiplications. The parallel algorithms are used to enhance the RSA and Elliptic Curve (EC) cryptosystems, obtaining speedups of up to 7.2 and 3.9 on the ARM processor, respectively. Whereas the first approach is most beneficial when a single RSA exponentiation is required, the latter provides a better performance when multiple RSA exponentiations have to be computed.
Keywords: Embedded Systems, Parallel Algorithms, Public-key Cryptography, Single Instruction Multiple Data (ID#: 15-5453)
URL: http://doi.acm.org/10.1145/2694805.2694809
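The central RSA operation that Martins and Sousa accelerate is modular exponentiation, which reduces to a chain of modular multiplications. The toy sketch below shows only that core square-and-multiply chain with textbook-sized parameters; it is an illustration, not the paper's SIMD or multicore implementation.

```python
# Toy square-and-multiply modular exponentiation: the chain of modular
# multiplications that the paper parallelises across cores and SIMD lanes.
# Illustrative sketch only -- not the authors' optimised implementation.

def mod_exp(base: int, exp: int, mod: int) -> int:
    """Compute base**exp % mod, one or two modular multiplications per bit."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                 # multiply step (data-dependent)
            result = (result * base) % mod
        base = (base * base) % mod  # square step (always performed)
        exp >>= 1
    return result

# RSA encrypt/decrypt round trip with tiny textbook primes (insecure, demo only)
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)                 # private exponent (Python 3.8+)
m = 42
c = mod_exp(m, e, n)
assert mod_exp(c, d, n) == m        # decryption recovers the message
```

Batching several such exponentiations, as the paper's second approach does, pays off because the independent multiplication chains can proceed in parallel lanes.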


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

International Conferences: Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 Singapore

 

 

The Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP) was held on 7-9 April 2015 in Singapore.  ISSNIP is a network of researchers created in 2004 to address fundamental cross-disciplinary issues of sensor networks and information processing in large, complex, distributed interacting systems with direct applications in health, environment and security. It brings together distinguished Australian and international researchers from mathematics, statistics, computing, biology, electrical engineering and mechanical engineering. The program seeks to advance knowledge; deliver generic models, algorithms and implementations; develop directly deployable end-product intellectual property; and create human resources for future research and employment in multiple domains. It is an Australian Research Council initiative.  The conference home page is available at: http://www.issnip.org/. Articles cited here are deemed of particular interest to the Cyber-Physical Systems Science of Security virtual organization.


 

Nigussie, Ethiopia; Xu, Teng; Potkonjak, Miodrag, "Securing Wireless Body Sensor Networks Using Bijective Function-Based Hardware Primitive," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106907
Abstract: We present a novel lightweight hardware security primitive for wireless body sensor networks (WBSNs). Security of WBSNs is crucial and the security solution must be lightweight due to resource constraints in the body sensor nodes. The presented security primitive is based on digital implementation of a bidirectional bijective function. The one-to-one input-output mapping of the function is realized using a network of lookup tables (LUTs). The bidirectionality of the function enables implementation of security protocols with lower overheads. The configuration of the interstage interconnection between the LUTs serves as the shared secret key. Authentication, encryption/decryption and message integrity protocols are formulated using the proposed security primitive. The NIST randomness benchmark suite is applied to this security primitive and it passes all the tests. It also achieves higher throughput and requires less area than AES-CCM.
Keywords: Authentication; Encryption; Protocols; Radiation detectors; Receivers; Table lookup (ID#: 15-5419)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106907&isnumber=7106892
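The primitive's core idea, a keyed one-to-one mapping built from a network of lookup tables that is invertible so the same shared key serves both directions, can be illustrated with a toy sketch. The stage count, LUT sizes and seeding below are invented for illustration and are not the authors' design.

```python
import random

# Toy keyed bijection built from a chain of lookup tables (LUTs).
# Each stage is a random permutation of byte values; the seed plays the role
# of the shared secret configuring the network. Sizes are assumptions.

def make_lut_network(seed: int, stages: int = 4, size: int = 256):
    rng = random.Random(seed)
    luts = []
    for _ in range(stages):
        perm = list(range(size))
        rng.shuffle(perm)          # each LUT is a bijection on 0..size-1
        luts.append(perm)
    return luts

def forward(luts, x: int) -> int:
    for lut in luts:               # apply stages in order
        x = lut[x]
    return x

def inverse(luts, y: int) -> int:
    for lut in reversed(luts):     # undo stages in reverse order
        y = lut.index(y)
    return y

key = 0xC0FFEE                     # hypothetical shared secret
net = make_lut_network(key)
ct = forward(net, 0x5A)
assert inverse(net, ct) == 0x5A    # bidirectionality: same key inverts
```

Because every stage is a permutation, the composition is itself a bijection, which is what lets the same keyed structure serve both encryption-like and decryption-like directions with low overhead.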

 

Hoang Giang Do; Wee Keong Ng, "Privacy-Preserving Approach For Sharing And Processing Intrusion Alert Data," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106911
Abstract: Amplified and disrupting cyber-attacks might lead to severe security incidents with drastic consequences such as large property damage, sensitive information breach, or even disruption of the national economy. While traditional intrusion detection and prevention systems might successfully detect low or moderate levels of attack, cooperation among different organizations is necessary to defend against multi-stage and large-scale cyber-attacks. Correlating intrusion alerts from a shared database of multiple sources provides security analysts with succinct and high-level patterns of cyber-attacks - a powerful tool to combat sophisticated attacks. However, sharing intrusion alert data raises a significant privacy concern among data holders, since publishing this information risks exposing other sensitive information such as intranet topology, network services, and the security infrastructure. This paper discusses possible cryptographic approaches to tackle this issue. Organizations can encrypt their intrusion alert data to protect data confidentiality and outsource them to a shared server to reduce the cost of storage and maintenance, while, at the same time, benefiting from a larger source of information for the alert correlation process. Two privacy-preserving alert correlation techniques are proposed under a semi-honest model. These methods are based on attribute similarity and prerequisite/consequence conditions of cyber-attacks.
Keywords: Encryption; Sensors (ID#: 15-5420)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106911&isnumber=7106892

 

Silva, Ricardo; Sa Silva, Jorge; Boavida, Fernando, "A Symbiotic Resources Sharing IoT Platform In The Smart Cities Context," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106922
Abstract: Large urban areas are nowadays covered by millions of wireless devices, including not only cellular equipment carried by their inhabitants, but also several ubiquitous and pervasive platforms used to monitor and/or actuate on a variety of phenomena in the city area. Whereas the former are increasingly powerful devices equipped with advanced processors, large memory capacity, high bandwidth, and several wireless interfaces, the latter are typically resource constrained systems. Despite their differences, both kinds of systems share the same ecosystem, and therefore, it is possible to build symbiotic relationships between them. Our research aims at creating a resource-sharing platform to support such relationships, in the perspective that resource unconstrained devices can assist constrained ones, while the latter can extend the features of the former. Resource sharing between heterogeneous networks in an urban area poses several challenges, not only from a technical point of view, but also from a social perspective. In this paper we present our symbiotic resource-sharing proposal while discussing its impact on networks and citizens.
Keywords: Cities and towns; Mobile communication; Mobile handsets; Security; Symbiosis; Wireless communication; Wireless sensor networks (ID#: 15-5421)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106922&isnumber=7106892

 

Alohali, Bashar Ahmed; Vassialkis, Vassilios G., "Secure And Energy-Efficient Multicast Routing In Smart Grids," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106929
Abstract: A smart grid is a power system that uses information and communication technology to operate, monitor, and control data flows between the power generating source and the end user. It aims at high efficiency, reliability, and sustainability of the electricity supply process that is provided by the utility centre and is distributed from generation stations to clients. To this end, energy-efficient multicast communication is an important requirement to serve a group of residents in a neighbourhood. However, the multicast routing introduces new challenges in terms of secure operation of the smart grid and user privacy. In this paper, after having analysed the security threats for multicast-enabled smart grids, we propose a novel multicast routing protocol that is both sufficiently secure and energy efficient. We also evaluate the performance of the proposed protocol by means of computer simulations, in terms of its energy-efficient operation.
Keywords: Authentication; Protocols; Public key; Routing; Smart meters; Multicast; Secure Routing; Smart Grid (ID#: 15-5422)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106929&isnumber=7106892

 

Saleh, Mohamed; El-Meniawy, Nagwa; Sourour, Essam, "Routing-guided Authentication in Wireless Sensor Networks," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106939
Abstract: Entity authentication is a crucial security objective since it enables network nodes to verify the identity of each other. Wireless Sensor Networks (WSNs) are composed of a large number of possibly mobile nodes, which are limited in computational, storage and energy resources. These characteristics pose a challenge to entity authentication protocols and security in general. We propose an authentication protocol whose execution is integrated within routing. This is in contrast to currently proposed protocols, in which a node tries to authenticate itself to other nodes without an explicit tie to the underlying routing protocol. In our protocol, nodes discover shared keys, authenticate themselves to each other and build routing paths all in a synergistic way.
Keywords: Ad hoc networks; Cryptography; Media Access Protocol; Mobile computing; Wireless sensor networks (ID#: 15-5423)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106939&isnumber=7106892

 

Bose, Tulika; Bandyopadhyay, Soma; Ukil, Arijit; Bhattacharyya, Abhijan; Pal, Arpan, "Why Not Keep Your Personal Data Secure Yet Private In IoT?: Our Lightweight Approach," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106942
Abstract: IoT (Internet of Things) systems are resource-constrained and primarily depend on sensors for contextual, physiological and behavioral information. Sensitive nature of sensor data incurs high probability of privacy breaching risk due to intended or malicious disclosure. Uncertainty about privacy cost while sharing sensitive sensor data through Internet would mostly result in overprovisioning of security mechanisms and it is detrimental for IoT scalability. In this paper, we propose a novel method of optimizing the need for IoT security enablement, which is based on the estimated privacy risk of shareable sensor data. Particularly, our scheme serves two objectives, viz. privacy risk assessment and optimizing the secure transmission based on that assessment. The challenges are, firstly, to determine the degree of privacy, and evaluate a privacy score from the fine-grained sensor data and, secondly, to preserve the privacy content through secure transfer of the data, adapted based on the measured privacy score. We further meet this objective by introducing and adapting a lightweight scheme for secure channel establishment between the sensing device and the data collection unit/ backend application embedded within CoAP (Constrained Application Protocol), a candidate IoT application protocol and using UDP as a transport. We consider smart energy management, a killer IoT application, as the use-case where smart energy meter data contains private information about the residents. Our results with real household smart meter data demonstrate the efficacy of our scheme.
Keywords: Encryption; IP networks; Optimization; Physiology; Privacy; Sensitivity; CoAP; IoT; Lightweight; Privacy; Security; Smart meter (ID#: 15-5424)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106942&isnumber=7106892
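The scheme's two steps, first scoring the privacy risk of a window of sensor readings and then adapting the transmission protection to that score, can be sketched abstractly. The variance-based scoring formula and the thresholds below are invented for illustration and are not those of Bose et al.

```python
from statistics import pvariance

# Toy adaptive-security sketch: estimate a privacy score for a window of
# smart-meter readings, then choose a protection level from it.
# The score formula and thresholds are illustrative assumptions only.

def privacy_score(readings) -> float:
    """Higher variance => richer behavioural detail => higher risk (0..1)."""
    if len(readings) < 2:
        return 0.0
    return min(1.0, pvariance(readings) / 100.0)

def protection_level(score: float) -> str:
    if score < 0.2:
        return "integrity-only"      # e.g. a cheap MAC, no encryption
    if score < 0.7:
        return "lightweight-cipher"
    return "full-encryption"

flat = [1.0, 1.1, 0.9, 1.0]          # idle household: little to learn
busy = [0.5, 30.0, 2.0, 25.0]        # rich activity pattern: high risk
assert protection_level(privacy_score(flat)) == "integrity-only"
assert protection_level(privacy_score(busy)) == "full-encryption"
```

The point of the design, as the abstract argues, is to avoid overprovisioning: low-risk windows travel with cheap protection, and the heavy machinery is reserved for data that actually leaks private information.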

 

Unger, Sebastian; Timmermann, Dirk, "Dpwsec: Devices Profile For Web Services Security," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106961
Abstract: As cyber-physical systems (CPS) build a foundation for visions such as the Internet of Things (IoT) or Ambient Assisted Living (AAL), their communication security is crucial so they cannot be abused for invading our privacy and endangering our safety. In the past years many communication technologies have been introduced for critically resource-constrained devices such as simple sensors and actuators as found in CPS. However, many do not consider security at all or in a way that is not suitable for CPS. Also, the proposed solutions are not interoperable although this is considered a key factor for market acceptance. Instead of proposing yet another security scheme, we looked for an existing, time-proven solution that is widely accepted in a closely related domain as an interoperable security framework for resource-constrained devices. The candidate of our choice is the Web Services Security specification suite. We analysed its core concepts and isolated the parts suitable and necessary for embedded systems. In this paper we describe the methodology we developed and applied to derive the Devices Profile for Web Services Security (DPWSec). We discuss our findings by presenting the resulting architecture for message level security, authentication and authorization and the profile we developed as a subset of the original specifications. We demonstrate the feasibility of our results by discussing the proof-of-concept implementation of the developed profile and the security architecture.
Keywords: Authentication; Authorization; Cryptography; Interoperability; Web services; Applied Cryptography; Authentication; Cyber-Physical Systems (CPS); DPWS; Intelligent Environments; Internet of Things (IoT); Usability (ID#: 15-5425)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106961&isnumber=7106892

 

Van den Abeele, Floris; Vandewinckele, Tom; Hoebeke, Jeroen; Moerman, Ingrid; Demeester, Piet, "Secure Communication In IP-Based Wireless Sensor Networks Via A Trusted Gateway," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 6, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106963
Abstract: As the IP-integration of wireless sensor networks enables end-to-end interactions, solutions to appropriately secure these interactions with hosts on the Internet are necessary. At the same time, burdening wireless sensors with heavy security protocols should be avoided. While Datagram TLS (DTLS) strikes a good balance between these requirements, it entails a high cost for setting up communication sessions. Furthermore, not all types of communication have the same security requirements: e.g. some interactions might only require authorization and do not need confidentiality. In this paper we propose and evaluate an approach that relies on a trusted gateway to mitigate the high cost of the DTLS handshake in the WSN and to provide the flexibility necessary to support a variety of security requirements. The evaluation shows that our approach leads to considerable energy savings and latency reduction when compared to a standard DTLS use case, while requiring no changes to the end hosts themselves.
Keywords: Bismuth; Cryptography; Logic gates; Random access memory; Read only memory; Servers; Wireless sensor networks; 6LoWPAN; CoAP; DTLS; Gateway; IP; IoT; Wireless sensor networks (ID#: 15-5426)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106963&isnumber=7106892

 

Kurniawan, Agus; Kyas, Marcel, "A Trust Model-Based Bayesian Decision Theory In Large Scale Internet Of Things," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 5, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106964
Abstract: In addressing the growing problem of security in the Internet of Things, we present, from a statistical decision point of view, a novel approach for trust-based access control using Bayesian decision theory. We build a trust model, TrustBayes, which represents a trust level for identity management in IoT. The TrustBayes model is applied to address access control in uncertain environments where identities are not known in advance. The model consists of EX (Experience), KN (Knowledge) and RC (Recommendation) values, which are obtained by measurement while an IoT device requests access to a resource. A decision is taken based on the model parameters and computed using Bayesian decision rules. To evaluate our trust model, we perform a statistical analysis and simulate it using OMNeT++ to investigate battery usage. The simulation result shows that the Bayesian decision theory approach for trust-based access control guarantees scalability and is energy efficient as the number of devices increases, without affecting functioning and performance.
Keywords: Batteries; Communication system security; Scalability; Wireless communication; Wireless sensor networks; Access Control; Decision making; Decision theory; Internet of Things; Trust Management (ID#: 15-5427)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106964&isnumber=7106892
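The decision step of such a model, combining EX, KN and RC evidence into a trust estimate and then granting or denying access by comparing expected losses, can be sketched as follows. The likelihood treatment, prior and loss values are illustrative assumptions, not the paper's parameters.

```python
# Toy Bayesian access decision in the spirit of TrustBayes: combine
# Experience (EX), Knowledge (KN) and Recommendation (RC) evidence into
# P(trustworthy | evidence), then act to minimise expected loss.
# Priors, likelihoods and losses below are invented for illustration.

def posterior_trust(ex: float, kn: float, rc: float, prior: float = 0.5) -> float:
    # Treat each score in (0,1) as P(observation | trustworthy) and
    # 1 - score as P(observation | untrustworthy); naive-Bayes combination.
    p_t = prior * ex * kn * rc
    p_u = (1 - prior) * (1 - ex) * (1 - kn) * (1 - rc)
    return p_t / (p_t + p_u)

def decide(ex, kn, rc, loss_false_grant=10.0, loss_false_deny=1.0) -> str:
    p = posterior_trust(ex, kn, rc)
    # Grant iff the expected loss of granting is below that of denying.
    return "grant" if (1 - p) * loss_false_grant < p * loss_false_deny else "deny"

assert decide(0.9, 0.95, 0.9) == "grant"   # strong evidence on all three
assert decide(0.4, 0.5, 0.3) == "deny"     # weak or mixed evidence
```

Making the false-grant loss larger than the false-deny loss, as above, biases the rule toward denial under uncertainty, which is the usual conservative choice for access control.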

 

Ozvural, Gorkem; Kurt, Gunes Karabulut, "Advanced Approaches For Wireless Sensor Network Applications And Cloud Analytics," Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, pp. 1, 5, 7-9 April 2015. doi: 10.1109/ISSNIP.2015.7106979
Abstract: Although wireless sensor network applications are still at early stages of development in the industry, it is clear that they will become pervasive, and billions of embedded microcomputers will come online for the purpose of remote sensing, actuation and information sharing. According to estimations, there will be 50 billion connected sensors or things by the year 2020. As we develop first-to-market wireless sensor-actuator network devices, we have the chance to identify design parameters, define technical infrastructure and make an effort to meet scalable system requirements. In this manner, required research and development activities must involve several research directions such as massive scaling, creating information and big data, robustness, security, privacy and human-in-the-loop operation. In this study, wireless sensor network and Internet of Things concepts are not only investigated theoretically, but the proposed system is also designed and implemented end-to-end. Low-rate wireless personal area network sensor nodes with random network coding capability are used for remote sensing and actuation. A low-throughput embedded IP gateway node is developed utilizing both random network coding on the low-rate wireless personal area network side and the low-overhead WebSocket protocol on the cloud communications side. A service-oriented design pattern is proposed for wireless sensor network cloud data analytics.
Keywords: IP networks; Logic gates; Network coding; Protocols; Relays; Wireless sensor networks; Zigbee (ID#: 15-5428)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7106979&isnumber=7106892



International Conferences: Software Testing, Verification and Validation Workshops (ICSTW), Graz, Austria

 

 

The 2015 IEEE Eighth International Conference on Software Testing, Verification and Validation Workshops (ICSTW) was held April 13-17, 2015 in Graz, Austria.  The conference focused on model-based testing, software quality, test architecture, combinatorial testing, mutation analysis, security testing and research techniques. Conference details are available at: http://icst2015.ist.tu-graz.ac.at. These bibliographies focus on articles deemed by the editors to be of most relevance to the Science of Security.


 

Kieseberg, Peter; Fruhwirt, Peter; Schrittwieser, Sebastian; Weippl, Edgar, "Security Tests For Mobile Applications — Why Using TLS/SSL Is Not Enough," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1, 2, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107416
Abstract: Security testing is a fundamental aspect in many common practices in the field of software testing. Still, the used standard security protocols are typically not questioned and not further analyzed in the testing scenarios. In this work we show that due to this practice, essential potential threats are not detected throughout the testing phase and the quality assurance process. We put our focus mainly on two fundamental problems in the area of security: The definition of the correct attacker model, as well as trusting the client when applying cryptographic algorithms.
Keywords: Security; TLS/SSL; Testing (ID#: 15-5403)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107416&isnumber=7107396

 

Bozic, Josip; Garn, Bernhard; Simos, Dimitris E.; Wotawa, Franz, "Evaluation Of The IPO-Family Algorithms For Test Case Generation In Web Security Testing," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1,10, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107436
Abstract: Security testing of web applications remains a major problem of software engineering. In order to reveal vulnerabilities, testing approaches use different strategies for detection of certain kinds of inputs that might lead to a security breach. Such approaches depend on the corresponding test case generation techniques that are executed against the system under test. In this work we examine how two of the most popular algorithms for combinatorial test case generation, namely the IPOG and IPOG-F algorithms, perform in web security testing. For generating comprehensive and sophisticated testing inputs we have used input parameter modelling which also includes constraints between the different parameter values. To handle the test execution, we make use of a recently introduced methodology which is based on model-based testing. Our evaluation indicates that both algorithms generate test inputs that succeed in revealing security leaks in web applications, with IPOG-F giving overall slightly better results w.r.t. the test quality of the generated inputs. In addition, using constraints during the modelling of the attack grammars results in an increase in the number of test inputs that cause security breaches. Last but not least, a detailed analysis of our evaluation results confirms that combinatorial testing is an efficient test case generation method for web security testing, as the security leaks are mainly due to the interaction of a few parameters. This statement is further supported by some combinatorial coverage measurement experiments on the successful test inputs.
Keywords: Combinatorial testing; IPO-Family algorithms; attack patterns; constraints; injection attacks; model-based testing; web security testing (ID#: 15-5404)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107436&isnumber=7107396
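The paper evaluates the IPOG family of algorithms; a much simpler greedy 2-way (pairwise) generator illustrates what combinatorial test generation produces, namely a suite in which every value pair of every two parameters appears at least once. This sketch is not IPOG itself, and the web-form parameters are hypothetical.

```python
from itertools import combinations, product

# Greedy pairwise (2-way) covering-array sketch. NOT the IPOG algorithm
# evaluated in the paper -- just a minimal illustration of combinatorial
# test-input generation: every value pair of every two parameters is
# covered by at least one generated test.

def pairwise(params):
    uncovered = set()
    for (i, vi), (j, vj) in combinations(enumerate(params), 2):
        uncovered |= {(i, a, j, b) for a in vi for b in vj}
    tests = []
    while uncovered:
        # Pick the full-factorial candidate covering the most remaining pairs.
        best = max(
            product(*params),
            key=lambda t: sum((i, t[i], j, t[j]) in uncovered
                              for i, j in combinations(range(len(t)), 2)),
        )
        tests.append(best)
        uncovered -= {(i, best[i], j, best[j])
                      for i, j in combinations(range(len(best)), 2)}
    return tests

# Three parameters of a hypothetical web form under security test
params = [["GET", "POST"], ["'", "<", "a"], ["utf-8", "latin-1"]]
suite = pairwise(params)
assert len(suite) < 2 * 3 * 2   # far fewer tests than the full factorial
```

Even on this tiny model the suite is roughly half the full factorial; the gap widens rapidly with more parameters, which is why IPOG-style generation scales where exhaustive testing cannot.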

 

Henard, Christopher; Papadakis, Mike; Le Traon, Yves, "Flattening Or Not Of The Combinatorial Interaction Testing Models?," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1,4, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107443
Abstract: Combinatorial Interaction Testing (CIT) requires the use of models that represent the interactions between the features of the system under test. In most cases, CIT models involve Boolean or integer options and constraints among them. Thus, applying CIT requires solving the involved constraints, which can be directly performed using Satisfiability Modulo Theory (SMT) solvers. An alternative practice is to flatten the CIT model into a Boolean model and use Satisfiability (SAT) solvers. However, the flattening process artificially increases the size of the employed models, raising the question of whether it is profitable or not in the CIT context. This paper investigates this question and demonstrates that flattened models, despite being much larger, are processed faster with SAT solvers than the smaller original ones with SMT solvers. These results suggest that flattening is worthwhile in the CIT context.
Keywords:  (not provided) (ID#: 15-5405)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107443&isnumber=7107396

 

Lindstrom, Birgitta; Andler, Sten F.; Offutt, Jeff; Pettersson, Paul; Sundmark, Daniel, "Mutating Aspect-Oriented Models To Test Cross-Cutting Concerns," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1, 10, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107456
Abstract: Aspect-oriented (AO) modeling is used to separate normal behaviors of software from specific behaviors that affect many parts of the software. These are called “cross-cutting concerns,” and include things such as interrupt events, exception handling, and security protocols. AO modeling allows developers to model the behaviors of cross-cutting concerns independently of the normal behavior. Aspect-oriented models (AOM) are then transformed into code by “weaving” the aspects (modeling the cross-cutting concerns) into all locations in the code where they are needed. Testing at this level is unnecessarily complicated because the concerns are often repeated in many locations and because the concerns are muddled with the normal code. This paper presents a method to design robustness tests at the abstract, or model, level. The models are mutated with novel operators that specifically target the features of AOM, and tests are designed to kill those mutants. The tests are then run on the implementation level to evaluate the behavior of the woven cross-cutting concerns.
Keywords: Mutation analysis; aspect-oriented modeling robustness testing (ID#: 15-5406)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107456&isnumber=7107396

 

Knorr, Konstantin; Aspinall, David, "Security Testing For Android Mhealth Apps," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1, 8, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107459
Abstract: Mobile health (mHealth) apps are an ideal tool for monitoring and tracking long-term health conditions; they are becoming incredibly popular despite posing risks to personal data privacy and security. In this paper, we propose a testing method for Android mHealth apps which is designed using a threat analysis, considering possible attack scenarios and vulnerabilities specific to the domain. To demonstrate the method, we have applied it to apps for managing hypertension and diabetes, discovering a number of serious vulnerabilities in the most popular applications. Here we summarise the results of that case study, and discuss the experience of using a testing method dedicated to the domain, rather than out-of-the-box Android security testing methods. We hope that details presented here will help design further, more automated, mHealth security testing tools and methods.
Keywords:  (not provided) (ID#: 15-5407)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107459&isnumber=7107396

 

Riviere, Lionel; Bringer, Julien; Le, Thanh-Ha; Chabanne, Herve, "A Novel Simulation Approach For Fault Injection Resistance Evaluation On Smart Cards," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1, 8, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107460
Abstract: Physical perturbations are performed against embedded systems that can contain valuable data. Such devices, and in particular smart cards, are targeted because potential attackers hold them. The embedded system security must hold against intentional hardware failures that can result in software errors. With malicious intent, an attacker could exploit such errors to find out secret data or disrupt a transaction. Simulation techniques help to point out fault injection vulnerabilities and come at an early stage in the development process. This paper proposes a generic fault injection simulation tool that has the particularity of embedding the injection mechanism into the smart card source code. By its embedded nature, the Embedded Fault Simulator (EFS) allows us to perform fault injection simulations and side-channel analyses simultaneously. It makes it possible to achieve combined attacks, multiple fault attacks and to perform backward analyses. We appraise our approach on real, modern and complex smart card systems under data and control flow fault models. We illustrate the EFS capacities by performing a practical combined attack on an Advanced Encryption Standard (AES) implementation.
Keywords: Fault injection; Physical attack; combined attack; data modification; embedded systems; fault simulation; instruction skip; side-channel attack; smart card (ID#: 15-5408)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107460&isnumber=7107396
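The instruction-skip fault model evaluated in this paper can be illustrated with a toy simulator (a hypothetical sketch, not the EFS tool itself): in the tiny accumulator machine below, skipping a single comparison instruction is enough to bypass a guard, which is exactly the kind of software error an attacker exploits.

```python
def run(program, fault_skip=None):
    """Execute a toy accumulator machine. If fault_skip is set, the
    instruction at that index is silently skipped, mimicking an
    instruction-skip fault injected into the control flow."""
    acc = 0
    for i, (op, arg) in enumerate(program):
        if i == fault_skip:
            continue  # the injected glitch: this instruction never runs
        if op == "add":
            acc += arg
        elif op == "check_pin":
            if acc != arg:
                return "reject"
    return "accept"

# A PIN check that normally rejects a wrong guess...
wrong_guess = [("add", 3), ("check_pin", 4)]
```

Running `run(wrong_guess)` rejects as expected, while `run(wrong_guess, fault_skip=1)` accepts: the skipped guard never executes, illustrating why control-flow fault models matter.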

 

Afzal, Zeeshan; Lindskog, Stefan, "Automated Testing of IDS Rules," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1, 2, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107461
Abstract: As technology becomes ubiquitous, new vulnerabilities are being discovered at a rapid rate. Security experts continuously find ways to detect attempts to exploit those vulnerabilities. The outcome is an extremely large and complex rule set used by Intrusion Detection Systems (IDSs) to detect and prevent the vulnerabilities. The rule sets have become so large that it seems infeasible to verify their precision or identify overlapping rules. This work proposes a methodology consisting of a set of tools that will make rule management easier.
Keywords:  (not provided) (ID#: 15-5409)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107461&isnumber=7107396
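The rule-overlap problem the authors target can be made concrete with a drastically simplified rule model (hypothetical; real Snort rules have far richer matching semantics): two rules overlap if some packet could match both.

```python
def rules_overlap(rule_a, rule_b):
    """Each toy rule is (protocol, port_lo, port_hi). The rules overlap
    when the protocols match and the destination-port ranges intersect,
    i.e., some packet would trigger both rules."""
    (proto_a, lo_a, hi_a) = rule_a
    (proto_b, lo_b, hi_b) = rule_b
    return proto_a == proto_b and lo_a <= hi_b and lo_b <= hi_a
```

Even this toy check hints at why verifying an extremely large rule set is hard: naive pairwise comparison grows quadratically with the number of rules.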

 

Henard, Christopher; Papadakis, Mike; Le Traon, Yves, "Flattening Or Not Of The Combinatorial Interaction Testing Models?," Software Testing, Verification and Validation Workshops (ICSTW), 2015 IEEE Eighth International Conference on, pp. 1, 4, 13-17 April 2015. doi: 10.1109/ICSTW.2015.7107443
Abstract: Combinatorial Interaction Testing (CIT) requires the use of models that represent the interactions between the features of the system under test. In most cases, CIT models involve Boolean or integer options and constraints among them. Thus, applying CIT requires solving the involved constraints, which can be directly performed using Satisfiability Modulo Theory (SMT) solvers. An alternative practice is to flatten the CIT model into a Boolean model and use Satisfiability (SAT) solvers. However, the flattening process artificially increases the size of the employed models, raising the question of whether it is profitable or not in the CIT context. This paper investigates this question and demonstrates that flattened models, despite being much larger, are processed faster with SAT solvers than the smaller original ones with SMT solvers. These results suggest that flattening is worthwhile in the CIT context.
Keywords:  (not provided) (ID#: 15-5410)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7107443&isnumber=7107396
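The flattening step discussed in the abstract can be sketched for a single integer option: one-hot encode each value as a Boolean variable and add clauses forcing exactly one of them to be true (an illustrative sketch; the variable-naming scheme is ours, not the paper's).

```python
from itertools import combinations

def flatten_option(name, values):
    """One-hot encode an integer option into Boolean variables and
    return CNF clauses enforcing that exactly one of them is true."""
    bool_vars = [f"{name}={v}" for v in values]
    clauses = [list(bool_vars)]                 # at-least-one clause
    for a, b in combinations(bool_vars, 2):     # at-most-one clauses
        clauses.append([f"!{a}", f"!{b}"])
    return bool_vars, clauses

bool_vars, cnf = flatten_option("timeout", [0, 1, 2, 3])
```

This also shows why flattened models are larger: an n-valued option becomes n Boolean variables plus O(n^2) pairwise clauses, which is the size increase the paper weighs against SAT-solver speed.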



International Conferences: Workshop on Security and Privacy Analytics (IWSPA) ’15, San Antonio, Texas

 

 

The 2015 ACM International Workshop on Security and Privacy Analytics (IWSPA '15) was held in conjunction with CODASPY in San Antonio, Texas, on March 2-4, 2015. According to the organizers, techniques from data analytics fields are increasingly being applied to security challenges, and some interesting questions arise: which techniques from these fields are most appropriate for the security domain, and which of those are essential knowledge for security practitioners and students? Applications of such techniques also have interesting implications for privacy. The mission of the workshop was to create a forum for interaction between data analytics and security experts and to examine the questions above. The conference web page is available at: http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=40911&copyownerid=70160  


 

George Cybenko; “Deep Learning of Behaviors for Security;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 1-1. Doi: 10.1145/2713579.2713592
Abstract: Deep learning has generated much research and commercialization interest recently. In a way, it is the third incarnation of neural networks as pattern classifiers, using insightful algorithms and architectures that act as unsupervised auto-encoders which learn hierarchies of features in a dataset. After a short review of that work, we will discuss computational approaches for deep learning of behaviors as opposed to just static patterns. Our approach is based on structured non-negative matrix factorizations of matrices that encode observation frequencies of behaviors. Example security applications and covert channel detection and coding will be presented.
Keywords: behaviors, machine learning, security (ID#: 15-5560)
URL: http://doi.acm.org/10.1145/2713579.2713592

 

Nasir Memon; “Photo Forensics: There is More to a Picture Than Meets the Eye;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 35-35. Doi: 10.1145/2713579.2713594
Abstract: Given an image or a video clip can you tell which camera it was taken from? Can you tell if it was manipulated? Given a camera or even a picture, can you find from the Internet all other pictures taken from the same camera? Forensics professionals all over the world are increasingly encountering such questions. Given the ease by which digital images can be created, altered, and manipulated with no obvious traces, digital image forensics has emerged as a research field with important implications for ensuring digital image credibility. This talk will provide an overview of recent developments in the field, focusing on three problems and list challenges and problems that still need to be addressed. First, collecting image evidence and reconstructing them from fragments, with or without missing pieces. This involves sophisticated file carving technology. Second, attributing the image to a source, be it a camera, a scanner, or a graphically generated picture. The process entails associating the image with a class of sources with common characteristics (device model) or matching the image to an individual source device, for example a specific camera. Third, attesting to the integrity of image data. This involves image forgery detection to determine whether an image has undergone modification or processing after being initially captured.
Keywords: digital forensics, image forensics (ID#: 15-5561)
URL: http://doi.acm.org/10.1145/2713579.2713594

 

Hassan Alizadeh, Samaeh Khoshrou, André Zúquete; “Application-Specific Traffic Anomaly Detection Using Universal Background Model;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 11-17. Doi: 10.1145/2713579.2713586
Abstract: This paper presents an application-specific intrusion detection framework in order to address the problem of detecting intrusions in individual applications when their traffic exhibits anomalies. The system is based on the assumption that authorized traffic analyzers have access to a trustworthy binding between network traffic and the source application responsible for it. Given traffic flows generated by individual genuine application, we exploit the GMM-UBM (Gaussian Mixture Model-Universal Background Model) method to build models for genuine applications, and thereby form our detection system. The system was evaluated on a public dataset collected from a real network. Favorable results indicate the success of the framework.
Keywords: gaussian mixture models, intrusion detection, malware, network anomaly, traffic flows, universal background model, web applications (ID#: 15-5562)
URL: http://doi.acm.org/10.1145/2713579.2713586

 

Shobhit Shakya, Jian Zhang; “Towards Better Semi-Supervised Classification of Malicious Software;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 27-33. Doi: 10.1145/2713579.2713587
Abstract: Due to the large number of malicious software (malware) and the large variety among them, automated detection and analysis using machine learning techniques have become more and more important for network and computer security. An often encountered scenario in these security applications is that training examples are scarce but unlabeled data are abundant. Semi-supervised learning, where both labeled and unlabeled data are used to learn a good model quickly, is a natural choice under such conditions. We investigate semi-supervised classification for malware categorization. We observed that malware data have specific characteristics and that they are noisy. Off-the-shelf semi-supervised learning may not work well in this case. We proposed a semi-supervised approach that addresses the problems with malware data and can provide better classification. We conducted a set of experiments to test and compare our method to others. The experimental results show that semi-supervised classification is a promising direction for malware classification. Our method achieved more than 90% accuracy when there was only a small number of training examples. The results also indicate that modifications are needed to make semi-supervised learning work with malware data. Otherwise, semi-supervised classification may perform worse than classifiers trained on only the labeled data.
Keywords: graph spectral, graph-based semi-supervised learning, machine learning, malware classification (ID#: 15-5563)
URL: http://doi.acm.org/10.1145/2713579.2713587

 

Kyle Caudle, Christer Karlsson, Larry D. Pyeatt; “Using Density Estimation to Detect Computer Intrusions;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 43-48. Doi: 10.1145/2713579.2713584
Abstract: Density estimation can be used to make sense of data collected by large scale systems. An estimate of the underlying probability density function can be used to characterize normal network operating conditions. In this paper, we present a recursive method for constructing and updating an estimate of the non-stationary high dimensional probability density function using parallel programming. Once we have characterized standard operating conditions we perform real time checks for changes. We demonstrate the effectiveness of the approach via the use of simulated data as well as data from Internet header packets.
Keywords: data streams, density estimation, parallel programming, wavelets (ID#: 15-5564)
URL: http://doi.acm.org/10.1145/2713579.2713584
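The recursive estimate can be caricatured in one dimension with an exponentially forgetting histogram (our simplification; the paper uses wavelet-based estimators on high-dimensional streams with parallel programming): old observations decay so the estimate tracks non-stationary traffic, and a low density at a new observation flags a possible change.

```python
class StreamingHistogram:
    """1-D recursive density estimate with exponential forgetting,
    a much-simplified stand-in for the paper's wavelet-based,
    high-dimensional estimator."""
    def __init__(self, lo, hi, bins=10, decay=0.99):
        self.lo, self.hi, self.bins, self.decay = lo, hi, bins, decay
        self.counts = [0.0] * bins

    def _bin(self, x):
        idx = int((x - self.lo) / (self.hi - self.lo) * self.bins)
        return max(0, min(idx, self.bins - 1))

    def update(self, x):
        # forget old data, then credit the bin containing x
        self.counts = [c * self.decay for c in self.counts]
        self.counts[self._bin(x)] += 1.0

    def density(self, x):
        total = sum(self.counts)
        width = (self.hi - self.lo) / self.bins
        return self.counts[self._bin(x)] / (total * width) if total else 0.0
```

After training on "normal" values, an observation falling in an empty bin has density zero, which is the kind of real-time check for changes the abstract describes.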

 

Alaa Darabseh, Akbar Siami Namin; “Keystroke Active Authentications Based on Most Frequently Used Words;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 49-54. Doi: 10.1145/2713579.2713589
Abstract: The aim of this research is to advance the user active authentication technology using keystroke dynamics. Through this research, we assess the performance and influence of various keystroke features on keystroke dynamics authentication systems. In particular, we investigate the performance of keystroke features on a subset of most frequently used English words. The performance of four features including key duration, flight time latency, diagraph time latency, and word total time duration are analyzed. Experiments are performed to measure the performance of each feature individually and the results from the different subsets of these features. The results of the experiments are evaluated using 28 users. The experimental results show that diagraph time offers the best performance result among all four keystroke features, followed by flight time. Furthermore, the paper introduces new feature which can be effectively used in the keystroke dynamics domain.
Keywords: authentication, biometrics, keystroke dynamics, keystroke feature, security (ID#: 15-5565)
URL: http://doi.acm.org/10.1145/2713579.2713589
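The four timing features analyzed in the paper can be computed directly from press/release timestamps. The sketch below is illustrative only; the `(key, press_time, release_time)` event format is our assumption, not the paper's.

```python
def keystroke_features(events):
    """events: list of (key, press_time, release_time) in typing order.
    Returns key durations (hold times), flight times (release to next
    press), digraph latencies (press to next press), and the word's
    total time (first press to last release)."""
    durations = [r - p for _, p, r in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    digraphs = [events[i + 1][1] - events[i][1] for i in range(len(events) - 1)]
    total = events[-1][2] - events[0][1]
    return durations, flights, digraphs, total
```

A profile of these features per frequently used word, compared against the live typist's values, is the basis of the active-authentication decision.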

 

Zhentan Feng, Shuguang Xiong, Deqiang Cao, Xiaolu Deng, Xin Wang, Yang Yang, Xiaobo Zhou, Yan Huang, Guangzhu Wu; “HRS: A Hybrid Framework for Malware Detection;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 19-26. Doi: 10.1145/2713579.2713585
Abstract: Traditional signature-based detection methods fail to detect unknown malwares, while data mining methods for detection have proved useful against new malwares but suffer from a high false positive rate. In this paper, we provide a novel hybrid framework called HRS based on the analysis of 50 million malware samples across 20,000 malware classes from our antivirus platform. The distribution of the samples is elaborated and a hybrid framework, HRS, is proposed, which consists of hash-based, rule-based and SVM-based models trained from different classes of malwares according to the distribution. The rule-based model is the core component of the hybrid framework. It is convenient to control false positives by adjusting the factor of a Boolean expression in the rule-based method, while it still has the ability to detect unknown malwares. The SVM-based method is enhanced by examining the critical sections of the malwares, which can significantly shorten the scanning and training time. Rigorous experiments have been performed to evaluate the HRS approach based on the massive dataset, and the results demonstrate that HRS achieves a true positive rate of 99.84% with an error rate of 0.17%. The HRS method has already been deployed in our security platform.
Keywords: antivirus engine, data mining, machine learning, malware class distribution, malware detection (ID#: 15-5566)
URL: http://doi.acm.org/10.1145/2713579.2713585

 

Hao Zhang, Maoyuan Sun, Danfeng (Daphne) Yao, Chris North; “Visualizing Traffic Causality for Analyzing Network Anomalies;”  IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 37-42. Doi: 10.1145/2713579.2713583
Abstract: Monitoring network traffic and detecting anomalies are essential tasks that are carried out routinely by security analysts. The sheer volume of network requests often makes it difficult to detect attacks and pinpoint their causes. We design and develop a tool to visually represent the causal relations for network requests. The traffic causality information enables one to reason about the legitimacy and normalcy of observed network events. Our tool with a special visual locality property supports different levels of visual-based querying and reasoning required for the sensemaking process on complex network data. Leveraging the domain knowledge, security analysts can use our tool to identify abnormal network activities and patterns due to attacks or stealthy malware. We conduct a user study that confirms our tool can enhance the readability and perceptibility of the dependency for host-based network traffic.
Keywords: anomaly detection, information visualization, network traffic analysis, usable security, visual locality (ID#: 15-5567)
URL: http://doi.acm.org/10.1145/2713579.2713583

 

Yang Liu, Jing Zhang, Armin Sarabi, Mingyan Liu, Manish Karir, Michael Bailey; “Predicting Cyber Security Incidents Using Feature-Based Characterization of Network-Level Malicious Activities;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 3-9. Doi: 10.1145/2713579.2713582
Abstract: This study offers a first step toward understanding the extent to which we may be able to predict cyber security incidents (which can be of one of many types) by applying machine learning techniques and using externally observed malicious activities associated with network entities, including spamming, phishing, and scanning, each of which may or may not have direct bearing on a specific attack mechanism or incident type. Our hypothesis is that when viewed collectively, malicious activities originating from a network are indicative of the general cleanness of a network and how well it is run, and that furthermore, collectively they exhibit fairly stable and thus predictive behavior over time. To test this hypothesis, we utilize two datasets in this study: (1) a collection of commonly used IP address-based/host reputation blacklists (RBLs) collected over more than a year, and (2) a set of security incident reports collected over roughly the same period. Specifically, we first aggregate the RBL data at a prefix level and then introduce a set of features that capture the dynamics of this aggregated temporal process. A comparison between the distribution of these feature values taken from the incident dataset and from the general population of prefixes shows distinct differences, suggesting their value in distinguishing between the two while also highlighting the importance of capturing dynamic behavior (second order statistics) in the malicious activities. These features are then used to train a support vector machine (SVM) for prediction. Our preliminary results show that we can achieve reasonably good prediction performance over a forecasting window of a few months.
Keywords: network reputation, network security, prediction, temporal pattern, time-series data (ID#: 15-5568)
URL: http://doi.acm.org/10.1145/2713579.2713582
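The idea of pairing magnitude with dynamics (second-order statistics) can be sketched on a toy daily blacklist-count series. The features below are illustrative only; the paper's feature set and the subsequent SVM training are considerably richer.

```python
from statistics import mean

def temporal_features(series):
    """Summarize a prefix's daily blacklisted-IP counts: average
    magnitude, day-to-day volatility (a second-order statistic
    capturing dynamic behavior), and the peak value."""
    avg = mean(series)
    deltas = [abs(b - a) for a, b in zip(series, series[1:])]
    volatility = mean(deltas) if deltas else 0.0
    return [avg, volatility, max(series)]
```

Two networks with the same average count can differ sharply in volatility, which is why the abstract highlights capturing dynamic behavior rather than magnitude alone.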

 

Wenyaw Chan, George Cybenko, Murat Kantarcioglu, Ernst Leiss, Thamar Solorio, Bhavani Thuraisingham, Rakesh Verma; “Panel: Essential Data Analytics Knowledge for Cyber-security Professionals and Students;” IWSPA '15 Proceedings of the 2015 ACM International Workshop on International Workshop on Security and Privacy Analytics, March 2015, Pages 55-57. Doi: 10.1145/2713579.2713590
Abstract: Increasingly, techniques from data analytics fields of statistics, machine learning, data mining, and natural language processing are being employed for challenges in cyber-security and privacy. This panel examines which techniques from these fields are essential for current and future cyber-security practitioners and what are the related considerations involved in successfully solving security and privacy challenges of the future.
Keywords: curriculum, data analytics, privacy, security (ID#: 15-5569)
URL: http://doi.acm.org/10.1145/2713579.2713590



Publications of Interest

 

The Publications of Interest section contains bibliographical citations, abstracts if available and links on specific topics and research problems of interest to the Science of Security community.

How recent are these publications?

These bibliographies include recent scholarly research on topics which have been presented or published within the past year. Some represent updates from work presented in previous years, others are new topics.

How are topics selected?

The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness for current researchers.

How can I submit or suggest a publication?

Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.

Submissions and suggestions may be sent to: news@scienceofsecurity.net

(ID#:15-5613)




Cryptology and Data Security, 2014

 

 

This bibliographical collection lists articles about cryptology and data security presented at various conferences in 2014. This body of research work was reported in the IEEE digital library; most of the work was performed outside the United States.  


 

Karakiş, R.; Güler, I., "An Application of Fuzzy Logic-Based Image Steganography," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp. 156, 159, 23-25 April 2014. doi: 10.1109/SIU.2014.6830189 Abstract: Today, the security of data in digital environments (such as text, image and video files) is challenged by developing technology. Steganography and cryptology are very important for protecting and hiding data: cryptology protects the message contents, and steganography hides the message's presence. In this study, an application of fuzzy logic (FL)-based image steganography was performed. First, the hidden messages were encrypted with the XOR (eXclusive OR) algorithm. Second, an FL algorithm was used to select the least significant bits (LSB) of the image pixels. Then, the LSBs of the selected image pixels were replaced with the bits of the hidden messages. The FL-based LSB algorithm makes the LSB method more robust and safer against steganalysis.
Keywords: cryptography; fuzzy logic; image coding; steganography; FL-based LSB algorithm; XOR algorithm; cryptology; data security; eXclusive OR algorithm; fuzzy logic; image steganography; least significant bits; Conferences; Cryptography; Fuzzy logic; Internet; PSNR; Signal processing algorithms (ID#: 15-4797)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830189&isnumber=6830164
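The XOR-then-embed pipeline from the abstract can be sketched as follows. Note that the sketch embeds into sequential pixels rather than using the paper's fuzzy-logic pixel selection, which is the step that provides the robustness against steganalysis; this is our simplification.

```python
def xor_encrypt(message: bytes, key: bytes) -> bytes:
    """XOR stream cipher; applying it twice with the same key decrypts."""
    return bytes(m ^ key[i % len(key)] for i, m in enumerate(message))

def embed_lsb(pixels, payload: bytes):
    """Write the payload bits into the least significant bits of pixels."""
    bits = [(byte >> (7 - i)) & 1 for byte in payload for i in range(8)]
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit
    return out

def extract_lsb(pixels, n_bytes):
    """Read n_bytes back out of the pixels' least significant bits."""
    bits = [p & 1 for p in pixels[:n_bytes * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[k:k + 8]))
        for k in range(0, len(bits), 8)
    )
```

Each cover pixel changes by at most one intensity level, which is why LSB embedding is visually imperceptible yet detectable by statistical steganalysis when the embedding positions are predictable.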

 

Porzio, A., "Quantum Cryptography: Approaching Communication Security from a Quantum Perspective," Photonics Technologies, 2014 Fotonica AEIT Italian Conference on, pp. 1, 4, 12-14 May 2014. doi: 10.1109/Fotonica.2014.6843831 Abstract: Quantum cryptography aims at solving the everlasting problem of unconditional security in private communication. Every time we send personal information over a telecom channel, a sophisticated algorithm protects our privacy, making our data unintelligible to unauthorized receivers. These protocols resulted from the long history of cryptography. The security of modern cryptographic systems is guaranteed by complexity: the computational power that would be needed to gain information on the code key largely exceeds what is available. The security of actual crypto systems is not "by principle" but "practical". On the contrary, quantum technology promises to make it possible to realize provably secure protocols. Quantum cryptology exploits paradigmatic aspects of quantum mechanics, like the superposition principle and uncertainty relations. In this contribution, after a brief historical introduction, we aim at giving a survey of the physical principles underlying the quantum approach to cryptography. Then, we analyze a possible continuous variable protocol.
Keywords: cryptographic protocols; data privacy; quantum cryptography; quantum theory; telecommunication security; code key; computational power; continuous variable protocol; privacy protection; quantum cryptography; quantum cryptology; quantum mechanics; quantum technology; superposition principle; uncertainty relations; unconditional private communication security; Cryptography; History; Switches; TV; Continuous Variable; Quantum cryptography (ID#: 15-4798)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843831&isnumber=6843815

 

Boorghany, A.; Sarmadi, S.B.; Yousefi, P.; Gorji, P.; Jalili, R., "Random Data and Key Generation Evaluation of Some Commercial Tokens and Smart Cards," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 49, 54, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994021 Abstract: In this paper, we report our evaluation of the strength of the random number generators and RSA key-pair generators of some commercially available constrained hardware modules, i.e., tokens and smart cards. This work was motivated by recent attacks on RSA public keys which were generated by constrained network devices and smart cards and turned out to be insecure due to low-quality randomness. Those attacks mostly compute pairwise GCDs between the moduli in public keys, and have resulted in breaking several thousand of these keys. Our results show that most of the tested hardware modules behave well. However, some have abnormal or weak random generators which seem to be unsuitable for cryptographic purposes. Moreover, another hardware module, in some rare circumstances, unexpectedly generates moduli which are divisible by very small prime factors.
Keywords: public key cryptography; smart cards; RSA key-pair generator; RSA public keys; commercial tokens; commercially available constrained hardware modules; constrained network devices; cryptographic purposes; key generation evaluation; low-quality randomness; pair-wise GCD; random data evaluation; random number generator; smart cards; weak random generators; Generators; Hardware; Java; Public key; Smart cards; Cryptography; GCD Attack; Hardware Security Module; RSA Common Prime; Random Generator Evaluation (ID#: 15-4799)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994021&isnumber=6994006
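The pairwise-GCD attack the authors cite is simple to state: if two RSA moduli share a prime factor because of low-quality randomness, a single GCD computation factors both. A minimal sketch with toy primes (real attacks use batch-GCD to avoid this naive O(n^2) loop):

```python
from math import gcd

def pairwise_gcd_attack(moduli):
    """Return factorizations recovered by computing GCDs between all
    pairs of RSA moduli; any nontrivial GCD breaks both keys."""
    broken = {}
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if 1 < g < moduli[i]:
                broken[moduli[i]] = (g, moduli[i] // g)
                broken[moduli[j]] = (g, moduli[j] // g)
    return broken
```

A modulus sharing no prime with any other key is untouched, which is why the attack only breaks keys produced by correlated, weak randomness.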

 

Yongjun Ren; Yaping Chen; Jin Wang; Liming Fang, "Leakage Resilient Provable Data Possession in Public Cloud Storage," Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), 2014 Tenth International Conference on, pp. 706, 709, 27-29 Aug. 2014. doi: 10.1109/IIH-MSP.2014.182 Abstract: Cloud storage is now an important development trend in information technology. To ensure the integrity of data stored in the cloud, researchers have presented several provable data possession (PDP) schemes. However, these schemes cannot resist side-channel attacks, and securing cryptographic implementations against side-channel attacks is one of the most important challenges in modern cryptography. In this paper, we propose the first leakage-resilient provable data possession (LR PDP) scheme, which utilizes a leakage-resilient signature to construct the homomorphic authenticator. In the scheme, the homomorphic authenticator is based on the probabilistic Boneh-Lynn-Shacham (BLS) short signature. Moreover, the leakage-resilient provable data possession can tolerate leakage of (1-O(1)/2) of the secret key at every tag invocation. The security of the proposed scheme is proved in the generic bilinear group model.
Keywords: cloud computing; cryptography; digital signatures; storage management; BLS short signature; LR PDP scheme; cryptographic implementations; generic bilinear group model; homomorphic authenticator; information technology; leakage resilient provable data possession; leakage-resilient signature; probabilistic Boneh-Lynn-Shacham short signature; public cloud storage; secret key; side-channel attacks; tag invocation; Cascading style sheets; Cloud computing; Computational modeling; Cryptography; Data models; Servers; Cloud computing; leakage-resilient cryptology; provable data possession (ID#: 15-4800)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6998427&isnumber=6998244

 

Mortazavi, R.; Jalili, S., "Iterative Constraint Satisfaction Method for Microaggregation Problem," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp.204,209, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994048 Abstract: In this paper, we propose a novel microaggregation algorithm to produce useful data in privacy preserving data publishing. Microaggregation is a clustering problem with known minimum and maximum group size constraints. We propose a local search algorithm that iteratively satisfies necessary constraints of an optimal solution of the problem. The algorithm solves the problem in O(n2) operations. Experimental results on real and synthetic data sets with different distributions confirm the effectiveness of the method.
Keywords: computational complexity; constraint satisfaction problems; data privacy; iterative methods; optimisation; pattern clustering; search problems; O(n2) operations; clustering problem; iterative constraint satisfaction method; local search algorithm; maximum group size constraints; microaggregation algorithm; microaggregation problem; minimum group size constraints; optimal solution; privacy preserving data publishing; Algorithm design and analysis; Clustering algorithms; Data privacy; Equations; Mathematical model; Partitioning algorithms; Time complexity; Clustering; Microaggregation; Privacy Preserving Data Publishing; k-anonymity (ID#: 15-4801)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994048&isnumber=6994006
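The group-size constraint that microaggregation must satisfy (every cluster holds between k and 2k-1 records) can be seen in a simple univariate sketch. This is illustrative only; the paper's iterative constraint-satisfaction algorithm handles multivariate data and searches for better partitions.

```python
def microaggregate(values, k=3):
    """Univariate k-member microaggregation: sort, split into groups
    of at least k records, and replace each value with its group
    centroid, so every published value is shared by >= k records."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    out = [0.0] * len(values)
    i = 0
    while i < len(order):
        group = order[i:i + k]
        if len(order) - i - k < k:   # fold a too-small remainder in
            group = order[i:]
        centroid = sum(values[j] for j in group) / len(group)
        for j in group:
            out[j] = centroid
        i += len(group)
    return out
```

Because each published value is a centroid shared by at least k records, no individual record can be singled out, which is the k-anonymity style guarantee behind privacy-preserving data publishing.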

 

Azimi, S.A.; Ahmadian, Z.; Mohajeri, J.; Aref, M.R., "Impossible Differential Cryptanalysis of Piccolo Lightweight Block Cipher," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 89, 94, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994028 Abstract: This paper analyzes the Piccolo family of lightweight block ciphers against the impossible differential cryptanalysis. A combination of some ploys such as decreasing the S-box computations, finding an appropriate propagation of differentials, utilizing hash tables and using the linearity of the key-schedule as well as disregarding subkeys of two rounds lead to 12-round and 13-round impossible differential attack on Piccolo-80 and 15-round attack on Piccolo-128. The time and data complexity of the attack against Piccolo-80 is 255.18 and 236.34 for 12-round and 269.7 and 243.25 for 13-round, respectively. Moreover, the time and data complexity for 15 rounds cryptanalysis of Piccolo-128 are 2125.4 and 258.7, respectively.
Keywords: cryptography; 12-round impossible differential attack; 13-round impossible differential attack; 15-round attack; Piccolo lightweight block cipher;Piccolo-128 cipher;Piccolo-80 cipher; S-box computation; differentials propagation; hash tables; impossible differential cryptanalysis; Ciphers; Data collection; Encryption; Memory management; Time complexity; Block cipher; Cryptanalysis; Impossible differential; Piccolo (ID#: 15-4802)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994028&isnumber=6994006

 

MirShahJafari, M.; Ghavamnia, H., "Classifying IDS Alerts Automatically for Use in Correlation Systems," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 126, 130, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994035 Abstract: The large increase in computer network usage, and the huge amount of sensitive data being stored and transferred through them, has escalated the attacks and invasions on these networks. Intrusion detection systems help in detecting these attacks, but the large amount of false positives has decreased their usability. Different methods have been proposed to reduce the amount of these false positives, which consist of different classification methods. Aggregation of similar alerts is a method proposed to reduce false positives and the large number of alerts, but the problem is assigning similar alerts the same classification parameters. Rules have been created which correlate alerts based on three parameters, but the alerts must be labeled with these parameters. Labeling these alerts is a time-consuming task, because deep knowledge of each alert is required to correctly identify the parameters. This time-consuming job has been done on 13,000 Emerging Threats Snort signatures, which have been used as a knowledge base to label other alerts. In this paper a method is proposed to label similar signatures automatically. This method uses word extraction from signatures to identify the words which can specify these labels automatically. To test the method, around 1,000 signatures which had been classified manually were classified by this method, and the precision and recall were computed. The results show that a large number of signatures can be classified using this method.
Keywords: computer network security; digital signatures; pattern classification; Emerging Threats Snort signatures; alert aggregation; alert assignment; alert correlation; alert labelling; attack detection; automatic IDS alert classification method; automatic signature labelling; classification parameters; computer network usage; correlation systems; false-positive reduction; intrusion detection systems; knowledge base; precision value; recall value; sensitive data storage; sensitive data transfer; signature classification; word extraction; Correlation; Data mining; Grippers; Intrusion detection; Knowledge based systems; Servers; Trojan horses; Alert labeling; Classification; Correlation (ID#: 15-4803)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994035&isnumber=6994006
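
The word-extraction idea described above can be illustrated with a short sketch. The labels and keyword lists here are invented for illustration; the paper's actual label set and Emerging Threats knowledge base are not reproduced.

```python
# Hypothetical keyword-driven labeling of IDS signatures: a signature
# receives every label whose keyword set overlaps the words extracted
# from its message text.
def label_signature(msg, keyword_map):
    words = set(msg.lower().replace("-", " ").split())
    return {label for label, kws in keyword_map.items() if words & kws}

# Invented label/keyword table (not the paper's knowledge base).
KEYWORDS = {
    "scan":    {"scan", "probe", "sweep"},
    "exploit": {"overflow", "exploit", "shellcode"},
    "trojan":  {"trojan", "backdoor", "botnet"},
}

assert label_signature("ET TROJAN Possible Backdoor Checkin", KEYWORDS) == {"trojan"}
```

In the paper's setting, the keyword table would be derived from the roughly 13000 manually labeled Snort signatures rather than written by hand.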

 

Sadeghi, A.-A.; Aminmansour, F.; Shahriari, H.-R., "Tazhi: A Novel Technique for Hunting Trampoline Gadgets of Jump Oriented Programming (a Class of Code Reuse Attacks)," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 21, 26, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994016 Abstract: Code reuse attacks enable attackers to manipulate the memory and execute their own code on a target system without the need to inject any operating code into the memory space. Jump Oriented Programming is known as a class of this type which has two different kinds of implementation. The main idea is to chain different sequences of instructions, each terminated by an indirect jump, by using controller gadgets called dispatchers or trampolines. This paper focuses on the second type of implementation, which uses trampoline gadgets. Finding useful trampolines in different libraries is the issue considered here. This paper shows useful intended and unintended trampolines available in some well-known versions of libraries on Windows and Linux platforms. Additionally, our searching algorithm and a comparison between the resulting trampolines are presented.
Keywords: Linux; object-oriented programming; security of data; Linux platforms; Tazhi; Windows platforms; code reuse attacks; controller gadgets; dispatchers; jump oriented programming; trampoline gadgets; Filtering algorithms; Libraries; Loading; Malware; Programming; Registers; Writing; Code Reuse Attacks; Jump Oriented Programming; Trampoline gadget (ID#: 15-4804)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994016&isnumber=6994006

 

Tajiki, M.M.; Akhaee, M.A., "Secure and Privacy Preserving Keyword Searching Cryptography," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 226, 230, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994052 Abstract: Using storage systems outside a company endangers data security. This leads users to encrypt their information for risk mitigation. Although encryption improves confidentiality, it causes inefficiencies; for example, the encrypted data is not searchable. In this paper, data is stored at a cloud storage provider (CSP) in a way that is simultaneously secure and searchable. To this end, one of the state-of-the-art encryption schemes, secure and privacy-preserving keyword searching (SPKS), has been employed. The encryption scheme employs the CSP for partial decryption of the ciphertexts. Consequently, the client's computational and communication overhead in decryption is reduced. Although the CSP participates in the deciphering process, it cannot learn any information about the plaintext. In this paper we show that, due to the lack of a client signature in SPKS, a forging attack is applicable to it. An improved version of SPKS is introduced and the security of the proposed scheme is analyzed.
Keywords: cloud computing; cryptography; data privacy; CSP; SPKS; cipher text partial decryption; cloud storage provider; communication overhead; computational overhead; deciphering process; encryption algorithm; forging attack; secure and privacy preserving keyword searching cryptography; Cloud computing; Encryption; Generators; Keyword search; Servers; Cloud storage data security; Searchable encryption; asymmetric encryption (ID#: 15-4805)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994052&isnumber=6994006

 

Kurt, M.; Duru, N., "Steganography Over Video Files Using Menezes Vanstone Elliptic Curve Cryptography Algorithm," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp. 1195, 1198, 23-25 April 2014. doi: 10.1109/SIU.2014.6830449 Abstract: In recent years, information security and information privacy have become more important with advances in technology. Different techniques of steganography and cryptography are used for sending information to a recipient over a safe communication channel, and many algorithms have been developed from these techniques. In this work, the message to be sent is divided into two consecutive main parts, called coordinate data and stego data. The data representing coordinate points are encrypted with the Modified Menezes-Vanstone Elliptic Curve Cryptography (MMV-ECC) algorithm to obtain the coordinate points. These coordinate points are located on the related frame of a video file in AVI format, and then the pixel values at those coordinate points are replaced with the decimal values of the stego data.
Keywords: data privacy; public key cryptography; security of data; steganography; telecommunication channels; video coding; AVI format; MMV-ECC algorithm; coordinate data; coordinate point pixel value; decimal value; information privacy; information security; modified Menezes-Vanstone elliptic curve cryptography; safety communication channel; steganography; stego data; video files; Conferences; Elliptic curve cryptography; PSNR; Reactive power; Signal processing algorithms; Image Processing; Cryptology; Steganography; Video Processing (ID#: 15-4806)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830449&isnumber=6830164

 

Xinyi Huang; Xiaofeng Chen; Jin Li; Yang Xiang; Li Xu, "Further Observations on Smart-Card-Based Password-Authenticated Key Agreement in Distributed Systems," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 7, pp. 1767, 1775, July 2014. doi: 10.1109/TPDS.2013.230 Abstract: This paper initiates the study of two specific security threats on smart-card-based password authentication in distributed systems. Smart-card-based password authentication is one of the most commonly used security mechanisms to determine the identity of a remote client, who must hold a valid smart card and the corresponding password to carry out a successful authentication with the server. The authentication is usually integrated with a key establishment protocol and yields smart-card-based password-authenticated key agreement. Using two recently proposed protocols as case studies, we demonstrate two new types of adversaries with smart cards: 1) adversaries with pre-computed data stored in the smart card, and 2) adversaries with different data (with respect to different time slots) stored in the smart card. These threats, though realistic in distributed systems, have never been studied in the literature. In addition to pointing out the vulnerabilities, we propose countermeasures to thwart the security threats and secure the protocols.
Keywords: cryptographic protocols; distributed processing; message authentication; smart cards; distributed systems; key establishment protocol; security threats; smart-card-based password-authenticated key agreement; Authentication; Dictionaries; Educational institutions; Protocols; Servers; Smart cards; Authentication; key exchange; offline-dictionary attack; online-dictionary attack; smart card (ID#: 15-4807)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6594742&isnumber=6828815

 

Bidokhti, A.; Ghaemmaghami, S., "A Generalized Multi-Layer Information Hiding Scheme Using Wet Paper Coding," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 210, 213, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994049 Abstract: Multi-layer schemes have been proposed for steganography. Also some authors have combined these methods with the idea of wet paper codes and gained higher embedding efficiency. This paper proposes a generalized multi-layer method for wet paper embedding. First, the cover bits are divided into blocks and, by combining these bits in groups of 3, a pyramid is formed. Next, the secret message is embedded through a layer-by-layer procedure. The proposed method has higher embedding efficiency in some cases and provides more flexibility for choosing the embedding payload, especially in lower payload conditions.
Keywords: steganography; generalized multilayer information hiding scheme; layer-by-layer procedure; payload conditions; steganography; wet paper coding; wet paper embedding; Data mining; Educational institutions; Electrical engineering; Encoding; Payloads; Security; Vectors; Embedding efficiency; Embedding payload; Steganography; Wet Paper Codes (ID#: 15-4808)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994049&isnumber=6994006

 

Ahmadi, S.; Delavar, M.; Mohajeri, J.; Aref, M.R., "Security Analysis of CLEFIA-128," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 84, 88, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994027 Abstract: The biclique attack is one of the most recent methods for cryptanalysis of block ciphers. In this paper, we present a new biclique attack on the full round of the lightweight block cipher CLEFIA-128. We obtained a computational complexity of 2^127.44, while the data complexity is 2^64 and the memory complexity is 2^7. To the best of our knowledge, it is the first biclique attack on the full CLEFIA-128 lightweight block cipher. Also, we show that an MITM attack using partial matching with the precomputation and recomputation technique can reduce the data complexity of the attack to only 2 known plaintext-ciphertext pairs.
Keywords: computational complexity; cryptography; pattern matching; CLEFIA-128; biclique attack; block ciphers; computational complexity; data complexity; partial matching; security analysis; Ciphers; Computational complexity; Encryption; Schedules; CLEFIA block cipher; MITM attack; biclique attack; lightweight cryptography; partial matching (ID#: 15-4809)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994027&isnumber=6994006

 

Rastegari, P.; Berenjkoub, M., "A Multi-Signer Convertible Limited Multi-Verifier Signature Scheme in the Standard Model," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 143, 148, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994038 Abstract: In a multi-signer convertible limited multi-verifier signature (MSCLMVS) scheme, a set of multi signers (co-signers) cooperatively creates a signature that can only be verified by limited verifiers. In this scheme, the conflicts between the authenticity and the privacy of the co-signers can be solved by controlling the verifiability. Moreover, the limited verifiers can designate the signature to a trusted third party such as a judge to convince him about the validity of the signature. Furthermore, both the co-signers and the limited verifiers can convert the signature to a traditional publicly verifiable signature if necessary. In this paper we present a multi-signer convertible limited multi-verifier signature scheme based on Waters' signature which is constructed by bilinear pairings. The proposed scheme is proved to be secure in the standard model by the assumption of the hardness of the Weak Gap Bilinear Diffie-Hellman problem. To the best of our knowledge, this is the first multi-signer convertible limited multi-verifier signature scheme with provable security without random oracles.
Keywords: data privacy; digital signatures; MSCLMVS; Waters signature; bilinear pairings; limited co-signers authenticity; multisigner convertible limited multiverifier signature scheme; privacy; signature verifiability control; standard model; weak gap bilinear Diffie-Hellman problem; Algorithm design and analysis; Polynomials; Probabilistic logic; Public key; Voltage control; Waters' signature; bilinear pairing; multi-signer convertible limited multi-verifier signature; multi-signer universal designated multi-verifier signature; random oracle; standard model (ID#: 15-4810)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994038&isnumber=6994006

 

Yajam, H.A.; Mousavi, A.S.; Amirmazlaghani, M., "A New Linguistic Steganography Scheme Based on Lexical Substitution," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 155, 160, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994040 Abstract: Recent studies in the field of text steganography show a promising future for linguistically driven stegosystems. One of the most common techniques in this field is known as lexical substitution, which provides the requirements for security and payload capacity. However, the existing lexical substitution schemes need an enormous amount of shared data between sender and receiver, which acts as the stego key. In this paper, we propose a novel encoding method to overcome this problem. Our proposed approach preserves the good properties of lexical substitution schemes while providing short stego keys and significant robustness against active adversary attacks. We demonstrate the high efficiency of the proposed scheme through theoretical and experimental results.
Keywords: linguistics; natural language processing; steganography; text analysis; active adversary attacks; encoding method; lexical substitution; linguistic driven stegosystems; linguistic steganography scheme; natural language processing; payload capacity; receiver data; security requirements; sender data; stego key; text-steganography; Encoding; Natural languages; Pragmatics; Resistance; Robustness; Watermarking; Text; lexical substitution; natural language processing; steganography (ID#: 15-4811)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994040&isnumber=6994006
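
The core mechanism behind lexical-substitution stegosystems like the one above can be shown with a toy sketch: each word that has a synonym pair carries one hidden bit, chosen by which variant appears. The synonym table below is invented for illustration; real schemes rank substitutes with language models and share only a short stego key.

```python
# Toy lexical-substitution embedding: bit 0 selects the first variant
# of a synonym pair, bit 1 the second. Table is purely illustrative.
SYNS = {"big": ("big", "large"), "fast": ("fast", "quick")}
# Reverse lookup: variant word -> (canonical word, bit it encodes).
LOOKUP = {w: (k, i) for k, pair in SYNS.items() for i, w in enumerate(pair)}

def embed(words, bits):
    bits = iter(bits)
    return [SYNS[w][next(bits)] if w in SYNS else w for w in words]

def extract(words):
    return [LOOKUP[w][1] for w in words if w in LOOKUP]

cover = ["a", "big", "and", "fast", "car"]
stego = embed(cover, [1, 0])
assert stego == ["a", "large", "and", "fast", "car"]
assert extract(stego) == [1, 0]
```

The capacity is one bit per substitutable word; the paper's contribution is doing this without the sender and receiver sharing the entire substitution table.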

 

Najafi, A.; Sepahi, A.; Jalili, R., "Web Driven Alert Verification," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp.180,185, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994044 Abstract: A web attack is an attack against a web server through the HTTP Protocol. By analyzing known web attacks, we find out that each one has its own behavior. Vestiges of their behavior could be detected in non-body parts of the HTTP Protocol. Such information can be used to verify web alerts generated by Web Application Firewalls (WAFs) and Web Intrusion Detection Systems (Web IDSs). In this paper, we propose a method to verify web alerts generated by mentioned sensors. The goal of the alert verification component is to eliminate or tag alerts that do not represent successful attacks. Our approach is based on analyzing HTTP Transaction metadata, including Request method, Request Headers, Status Code, and Response Headers. We implemented an alert verification module, reconfigured ModSecurity, modified a subset of the OWASP ModSecurity Core Rule Set, and developed knowledge-base of web attack vectors to evaluate our method. We show that our approach significantly reduces false and non-relevant alerts with quite low processing overhead, thus enhances the quality of the results.
Keywords: Internet; computer network security; hypermedia; meta data; transport protocols; HTTP protocol; HTTP transaction metadata analysis; OWASP ModSecurity Core Rule Set; WAF; Web IDS; Web application firewalls; Web attack; Web attack vector knowledge-base; Web driven alert verification; Web intrusion detection systems; Web server; alert verification module; reconfigured ModSecurity; request headers; request method; status code; Accuracy; Firewalls (computing); Intrusion detection; Knowledge based systems; Protocols; Web servers; HTTP Protocol; Intrusion Detection System; Web Application Firewall; alert verification; web attack (ID#: 15-4812)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994044&isnumber=6994006
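
The verification idea above — eliminating or tagging alerts using HTTP response metadata — can be sketched in miniature. The field names and status-code policy below are illustrative assumptions, not the paper's actual rules.

```python
# Minimal sketch of alert verification from HTTP transaction metadata:
# tag alerts whose responses indicate the attack could not have
# succeeded. Thresholds and categories are invented for illustration.
def verify_alert(alert):
    status = alert["status_code"]
    if status in (400, 403, 404, 501):   # request rejected or target absent
        return "non-relevant"
    if status >= 500:                    # server error: inconclusive
        return "needs-analysis"
    return "possible-success"            # 2xx/3xx: keep the alert

assert verify_alert({"status_code": 404}) == "non-relevant"
assert verify_alert({"status_code": 200}) == "possible-success"
```

A real implementation, as the paper describes, would also inspect the request method and request/response headers before deciding.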

 

Fathabadi, Z.F.; Nogoorani, S.D.; Hemmatyar, A.M., "CR-SMTC: Privacy Preserving Collusion-Resistant Multi-Party Trust Computation," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 167, 172, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994042 Abstract: The ever-increasing use of trust and reputation models has posed new challenges in distributed environments. One of these challenges is the computation of trust while preserving the privacy of feedback providers. This is due to the fact that some people may report a dishonest value because of social pressure or fear of the consequences. In this paper, we propose a privacy-preserving collusion-resistant multi-party trust computation scheme which uses data perturbation and homomorphic encryption to preserve the privacy of feedback. Our scheme consists of two protocols, for private summation (S-protocol) and inner product (P-protocol). Our protocols are resistant to collusion of up to m+1 and m+2 agents, respectively, where m is a configurable parameter. In addition, their computational complexities are O(nm) and O(n(m+h)), respectively, where n is the number of agents and h is the homomorphic encryption algorithm complexity. We compare our protocols with related works and show their superiority in terms of collusion-resilience probability as well as complexity.
Keywords: computational complexity; cryptographic protocols; data privacy; trusted computing; CR-SMTC; O(n(m+h)) computational complexity; O(nm) computational complexity; P-protocol; S-protocol; collusion resistant protocols; collusion-resilience probability; configurable parameter; data perturbation; dishonest value; distributed environments; feedback provider privacy preservation; homomorphic encryption; homomorphic encryption algorithm complexity; inner product protocols; privacy-preserving collusion-resistant multiparty trust computation scheme; private summation protocols; reputation model; social pressure; trust computation; trust model; Complexity theory; Computational modeling; Encryption; Privacy; Protocols; Resistance; collusion attack; computational trust; data perturbation; homomorphic encryption; privacy preservation (ID#: 15-4813)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994042&isnumber=6994006
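
The private-summation building block above can be illustrated with a generic additive-blinding construction. This is a standard secret-sharing sketch, not the paper's exact S-protocol (which adds homomorphic encryption and tolerates collusion of up to m+1 agents).

```python
# Sketch of private summation via additive secret sharing: each agent
# splits its private feedback value into n random shares that sum to
# the value mod m, and hands the j-th share to agent j. Summing the
# partial sums recovers the total without exposing individual values.
import random

def private_sum(values, modulus=2**32):
    n = len(values)
    shares = []
    for v in values:
        parts = [random.randrange(modulus) for _ in range(n - 1)]
        parts.append((v - sum(parts)) % modulus)   # shares sum to v mod m
        shares.append(parts)
    # Agent j only ever sees the j-th share from every other agent.
    partials = [sum(shares[i][j] for i in range(n)) % modulus for j in range(n)]
    return sum(partials) % modulus

assert private_sum([3, 5, 9]) == 17
```

The privacy argument is that any proper subset of the shares of a value is uniformly random, so colluding agents learn nothing about an honest agent's feedback.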

 

Orojloo, H.; Azgomi, M.A., "A Method for Modeling and Evaluation of the Security of Cyber-Physical Systems," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 131, 136, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994036 Abstract: Quantitative evaluation of security has always been one of the challenges in the field of computer security. The integration of computing and communication technologies with physical components has introduced a variety of new security risks, which threaten cyber-physical components. It is possible for an attacker to damage a physical component with a cyber attack. In this paper, we propose a new approach for modeling and quantitative evaluation of the security of cyber-physical systems (CPS). The proposed method considers those cyber attacks that can lead to physical damage. The factors impacting an attacker's decision-making in the process of a cyber attack on a cyber-physical system are also taken into account. Furthermore, to describe the attacker's and the system's behaviors over time, uniform probability distributions are used in a state-based semi-Markov chain (SMC) model. The security analysis is carried out for mean time to security failure (MTTSF), steady-state security, and steady-state physical availability.
Keywords: Markov processes; decision making; security of data; statistical distributions; CPS security; MTTSF; communication technology integration; computer security; computing technology integration; cyber attack; cyber-physical components; cyber-physical system security; decision-making; mean time-to-security failure; quantitative evaluation; security analysis; security risks; state-based SMC model; state-based semi-Markov chain model; steady-state physical availability; steady-state security; uniform probability distributions; Analytical models; Availability; Computational modeling; Mathematical model; Random variables; Security; Steady-state; Cyber-physical systems; physical damage; quantitative security evaluation; security modelling (ID#: 15-4814)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994036&isnumber=6994006
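
The MTTSF metric above is, in essence, the mean time to absorption of the attacker/system state model. A minimal numeric sketch, using an invented two-state chain rather than the paper's model:

```python
# Mean time to absorption for an absorbing (semi-)Markov model: solve
# (I - P) t = h, where P holds transition probabilities among the
# transient states and h their mean holding (sojourn) times.
def mttsf(P, hold, start=0):
    n = len(P)
    # Augmented matrix for Gaussian elimination on (I - P) t = hold.
    A = [[(1.0 if i == j else 0.0) - P[i][j] for j in range(n)] + [hold[i]]
         for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col:
                f = A[r][col] / A[col][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[col])]
    return A[start][n] / A[start][start]

# Toy chain: from "good" the system fails directly (prob 0.5) or first
# "degrades" (prob 0.5) before failing; sojourn times 2 and 4 units,
# so MTTSF = 2 + 0.5 * 4 = 4.
P = [[0.0, 0.5],
     [0.0, 0.0]]
assert abs(mttsf(P, hold=[2.0, 4.0]) - 4.0) < 1e-9
```

The paper's model additionally draws holding times from uniform distributions and distinguishes security failure from physical unavailability.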

 

Nasr, P.M.; Varjani, A.Y., "Petri Net Model of Insider Attacks in SCADA System," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 55, 60, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994022 Abstract: This paper investigates the use of Petri nets for modeling insider attacks on the Supervisory Control and Data Acquisition (SCADA) system. Insider attacks are one of the most dangerous threats to Critical Infrastructures (CIs). An insider attacker, by sending legitimate control commands, can bring catastrophic damage to CIs at the national level. Therefore, it is important to develop new models to study the sequence of operator actions in CIs. Many CIs are monitored and controlled by SCADA systems. This paper proposes a new modeling approach for operator behavior, for resolving alarms and insider attacks, in electric power SCADA. In order to study operator behavior, several attack scenarios have been examined to evaluate the offered model. The proposed model is based on Colored Petri Nets (CPNs).
Keywords: Petri nets; SCADA systems; power engineering computing; security of data; CPN; Petri net model; colored Petri nets; electric power SCADA; insider attacks; operator behavior; supervisory control and data acquisition system; Analytical models; Computational modeling; Monitoring; Petri nets; SCADA systems; Servers; Substations; Insider attack; SCADA; colored petri net (ID#: 15-4815)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994022&isnumber=6994006

 

Hajiabadi, M.H.; Saidi, H.; Behdadfar, M., "Scalable, High-Throughput and Modular Hardware-Based String Matching Algorithm," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 192, 198, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994046 Abstract: String matching is the primary function of signature-based intrusion detection systems. In this paper, a novel string matching algorithm is proposed based on the idea of searching for words in a dictionary. We also present a scalable, high-throughput, memory-efficient and modular architecture for large-scale string matching based on the proposed algorithm. The words of the dictionary have been extracted from malicious patterns in the Snort NIDS (2013) database. The memory efficiency of the proposed algorithm is directly proportional to the dissimilarity of the patterns. In a large dictionary, it is feasible to create several groups in such a way that the members of each group satisfy a desired condition. The presented architecture is designed for implementation on a Field Programmable Gate Array and benefits from pipelining, a modular structure and suitable utilization of distributed memory resources. Due to the routing limitations of FPGAs, the maximum pattern length has been bounded, and a further solution is suggested for tackling this obstacle. The post place & route implementation results for a set of 11895 patterns (117832 bytes) with lengths in the range from 2 to 20 characters show an efficiency of 1.47 Byte/Char or 0.28 (6-input LUT/char) and a maximum throughput of 2.38 Gbps. Other results for a set of 3471 patterns (104399 bytes) with lengths between 21 and 40 characters show an efficiency of 1.87 Byte/Char or 0.42 (6-input LUT/char) and a maximum throughput of 1.97 Gbps. Adding a new string to the dictionary is feasible by placing extra modules in the architecture.
Keywords: field programmable gate arrays; pipeline processing; security of data; string matching; Snort NIDS database; dictionary; distributed memory resources; field programmable gate array; high-throughput string matching algorithm; large scale string matching; malicious patterns; modular architecture; modular hardware-based string matching algorithm; modular structure; pattern dissimilarity; pipeline; scalable string matching algorithm; signature based intrusion detection systems; Algorithm design and analysis; Dictionaries; Indexes; Memory management; Pattern matching; Throughput; Vectors; FPGA; Field programmable gate array; String matching; String matching algorithm; hardware based; intrusion detection system (ID#: 15-4816)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994046&isnumber=6994006
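
The dictionary-grouping idea behind the matcher above — partitioning patterns into groups so that each text position is compared against only a small group — can be mimicked in software. The patterns below are illustrative, not Snort rules, and the FPGA pipeline itself is of course not modeled.

```python
# Sketch of dictionary-grouped multi-pattern matching: patterns are
# bucketed by a 2-character prefix, and each text position only probes
# the bucket selected by the current 2-character window.
from collections import defaultdict

def build_groups(patterns):
    groups = defaultdict(list)
    for p in patterns:
        groups[p[:2]].append(p)
    return groups

def match(text, groups):
    hits = set()
    for i in range(len(text) - 1):
        for p in groups.get(text[i:i + 2], ()):
            if text.startswith(p, i):
                hits.add(p)
    return hits

assert match("GET /cmd.exe HTTP", build_groups(["cmd.exe", "/etc/passwd"])) == {"cmd.exe"}
```

In the hardware design, each group maps to a module with its own local memory, which is why dissimilar patterns (many small groups) give better memory efficiency.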

 

Hasanifard, M.; Ladani, B.T., "DoS and Port Scan Attack Detection in High Speed Networks," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 61, 66, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994023 Abstract: One of the necessities of high-speed Internet highways is the use of intrusion detection systems (IDSs). To this end, an IDS should be able to process a high volume of traffic with limited resources. IDSs have improved significantly in recent years and have shown acceptable outcomes. However, there is no appropriate solution for high-speed networks. This paper proposes a solution for diagnosing denial of service (DoS) and port scan attacks as a layer of defense. The proposed method attains a high processing rate by using a parallel data structure to filter DoS and port scan attacks out of network traffic before it enters the intrusion detection system. Attack filtering is based on statistical anomaly detection. The experimental results from implementing and evaluating the proposed method show acceptable figures for both error rate and speed.
Keywords: Internet; computer network security; data structures; parallel processing; statistical analysis; telecommunication traffic; DoS; IDS; attack filtering; denial of service attack; high speed networks; high-speed Internet highways; intrusion detection systems; network traffic; parallel data structure; port scan attack detection; statistical anomaly detection; Computer crime; Data structures; Feature extraction; High-speed networks; IP networks; Ports (Computers); Servers; Data stream computing; Denial of service attack; Intrusion detection system; Port scan attack; Statistical anomaly detection (ID#: 15-4817)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994023&isnumber=6994006
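
The statistical-anomaly filtering described above can be sketched for the port-scan case: a source touching an unusually large number of distinct destination ports in a window is flagged before the traffic reaches the IDS. The threshold and data shapes are illustrative assumptions, not the paper's tuned parameters or parallel data structure.

```python
# Sketch of a port-scan pre-filter: count distinct destination ports
# per source IP over a traffic window; flag sources above a threshold.
from collections import defaultdict

def scan_sources(packets, threshold=10):
    ports = defaultdict(set)
    for src, dport in packets:
        ports[src].add(dport)
    return {src for src, seen in ports.items() if len(seen) > threshold}

probe = [("10.0.0.9", p) for p in range(20, 40)]   # 20 distinct ports
normal = [("10.0.0.7", 443)] * 50                  # one port, many hits
assert scan_sources(probe + normal) == {"10.0.0.9"}
```

A DoS filter would apply the inverse statistic (many packets concentrated on one destination), which is why the two attacks can share one counting structure.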

 

Khosravi-Farmad, M.; Rezaee, R.; Bafghi, A.G., "Considering Temporal and Environmental Characteristics of Vulnerabilities in Network Security Risk Assessment," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp.186, 191, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994045 Abstract: Assessing the overall security of a network requires a thorough understanding of interconnections between host vulnerabilities. In this paper, Bayesian attack graphs are used to model interconnections between vulnerabilities that enable the attacker to achieve a particular goal. In order to estimate the success probability of vulnerability exploitation, in addition to inherent characteristics of vulnerabilities, their temporal characteristics are also used to have more accurate estimation for current time of risk assessment. Since impacts of vulnerability exploitations in different environments varies from one organization to the other, environmental factors that affect the security goals such as confidentiality, integrity and availability are also considered which leads to a more precise assessment. Finally, the risk of each asset compromise is calculated by multiplying the unconditional probability of penetrating each asset in its resulted impact. The experimental results show that the proposed method effectively reduces the security risk in a test network in comparison to similar works.
Keywords: Bayes methods; graph theory; risk management; security of data; Bayesian attack graphs; environmental characteristics; network security; risk assessment; temporal characteristics; vulnerability exploitation; Availability; Bayes methods; Measurement; Organizations; Risk management; Security; Attack graph; Bayesian networks; CVSS framework; Security risk assessment; Vulnerability (ID#: 15-4818)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994045&isnumber=6994006
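
The probability-propagation step in a Bayesian attack graph like the one above can be sketched as follows. The exploit probabilities and the two-node graph are invented (in the spirit of CVSS-derived scores), and independence between parents is assumed for simplicity.

```python
# Sketch of unconditional compromise probability in a Bayesian attack
# graph: an AND node needs all parent states compromised, an OR node
# at least one; each node then succeeds with its own exploit probability.
def node_prob(node, graph, cache=None):
    """graph[node] = (kind, parents, exploit_prob)."""
    cache = {} if cache is None else cache
    if node not in cache:
        kind, parents, p_exploit = graph[node]
        if not parents:
            reach = 1.0                          # attacker's entry point
        elif kind == "AND":                      # all preconditions needed
            reach = 1.0
            for par in parents:
                reach *= node_prob(par, graph, cache)
        else:                                    # "OR": any precondition
            miss = 1.0
            for par in parents:
                miss *= 1.0 - node_prob(par, graph, cache)
            reach = 1.0 - miss
        cache[node] = reach * p_exploit
    return cache[node]

# Two-step path: compromise the web server, then pivot to the database.
graph = {
    "web": ("OR",  [],      0.8),   # illustrative exploit probability
    "db":  ("AND", ["web"], 0.5),
}
assert abs(node_prob("db", graph) - 0.4) < 1e-9
```

The paper's refinement is adjusting each node's exploit probability with CVSS temporal and environmental metrics before this propagation, and multiplying the result by the asset's impact to obtain risk.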

 

Razian, M.R.; Sangchi, H.M., "A Threatened-Based Software Security Evaluation Method," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp. 120, 125, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994034 Abstract: Nowadays, security evaluation of software is a substantial matter in the software world. The security level of software is determined by the wealth of data and operations it provides. The security level is usually evaluated by a third party, called a Software Security Certification Issuance Center. It is important for software security evaluators to perform a sound and complete evaluation, which is a complicated process considering the increasing number of emerging threats. In this paper we propose a Threatened-Based Software Security Evaluation method to improve the security evaluation process of software. In this method, we focus on the threatened entities of software, which in turn give rise to software threats and their corresponding controls and countermeasures. We also demonstrate a Security Evaluation Assistant (SEA) tool to show the effectiveness of our evaluation method in practice.
Keywords: security of data; software performance evaluation; software tools; SEA; security evaluation assistant tool; software security certification issuance centers; software threats; threatened-based software security evaluation method; Certification; Feature extraction; Organizations; Security; Software; Standards; Vectors; Assessment; Control; Evaluation; Security; Security Certification; Software; Software Security; Threat; Threatened (ID#: 15-4819)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994034&isnumber=6994006

 

Junliang Shu; Juanru Li; Yuanyuan Zhang; Dawu Gu, "Android App Protection via Interpretation Obfuscation," Dependable, Autonomic and Secure Computing (DASC), 2014 IEEE 12th International Conference on, pp. 63, 68, 24-27 Aug. 2014. doi: 10.1109/DASC.2014.20 Abstract: To protect Android apps from malicious reproduction or tampering, code obfuscation techniques are introduced to increase the difficulty of reverse engineering and program understanding. Current obfuscation schemes focus more on the protection of meta information than on the executable code, which contains valuable or patented algorithms. Therefore, a more sophisticated obfuscator is needed to improve the protection of the executable code. In this paper we propose SMOG, a comprehensive executable code obfuscation system to protect Android apps. SMOG is composed of two parts, an obfuscation engine and an execution environment. The obfuscation engine, at the software vendor's side, obfuscates the app's executable code; the obfuscated app is then released to the end user along with an execution token. The execution environment is set up by integrating the received execution token, which endows the Android Dalvik VM with the capability to execute the obfuscated app. SMOG is an easily deployed system which provides fine-grained protection. The obfuscated apps generated by SMOG can resist static and dynamic reverse engineering. Moreover, the benchmark results show that SMOG costs only about 5% more in performance when dispatching the incoming bytecode to the proper interpreter.
Keywords: Android (operating system); computer crime; data protection; reverse engineering; source code (software); Android Dalvik VM; Android app protection; SMOG; code obfuscation techniques; dynamic reverse engineering; executable code obfuscation system; executable code protection; execution environment; execution token; fine-grained level protection; interpretation obfuscation; malicious reproduction; meta information protection; obfuscated app; obfuscation engine; obfuscator; program understanding; software vendor; static reverse engineering; tampering; Conferences; Android App; Execution Token; Interpretation Obfuscation; Reverse Engineering; Static Disassembly (ID#: 15-4820)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6945305&isnumber=6945641



Deterrence

 

 

Finding ways, both technical and behavioral, to provide disincentives to threats is a promising area of research. Since most cybersecurity is “bolt on” rather than embedded, and since detection, response, and forensics are expensive, time-consuming processes, discouraging attacks can be a cost-effective approach. The research works cited here were presented and published in 2014.


 

Fahl, Sascha; Dechand, Sergej; Perl, Henning; Fischer, Felix; Smrcek, Jaromir; Smith, Matthew; “Hey, NSA: Stay Away from My Market! Future Proofing App Markets Against Powerful Attackers;” CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1143-1155. Doi: 10.1145/2660267.2660311
Abstract: Mobile devices are evolving into the dominant computing platform and consequently application repositories and app markets are becoming the prevalent paradigm for deploying software. Due to their central and trusted position in the software ecosystem, coerced, hacked or malicious app markets pose a serious threat to user security. Currently, there is little that hinders a nation state adversary (NSA) or other powerful attackers from using such central and trusted points of software distribution to deploy customized (malicious) versions of apps to specific users. Due to intransparencies in the current app installation paradigm, this kind of attack is extremely hard to detect. In this paper, we evaluate the risks and drawbacks of current app deployment in the face of powerful attackers. We assess the app signing practices of 97% of all free Google Play apps and find that the current practices make targeted attacks unnecessarily easy and almost impossible to detect for users and app developers alike. We show that high profile Android apps employ intransparent and unaccountable strategies when they publish apps to (multiple) alternative markets. We then present and evaluate Application Transparency (AT), a new framework that can defend against "targeted-and-stealthy" attacks mounted by malicious markets. We deployed AT in the wild and conducted an extensive field study in which we analyzed app installations on 253,819 real world Android devices that participate in a popular anti-virus app's telemetry program. We find that AT can effectively protect users against malicious targeted attack apps and furthermore adds transparency and accountability to the current intransparent signing and packaging strategies employed by many app developers.
Keywords:  android, apps, market, nsa, security, transparency (ID#: 15-5051)
URL: http://doi.acm.org/10.1145/2660267.2660311

 

Almeshekah, Mohammed H.; Spafford, Eugene H.; “Planning and Integrating Deception into Computer Security Defenses;” NSPW '14 Proceedings of the 2014 Workshop on New Security Paradigms, September 2014, Pages 127-138. Doi: 10.1145/2683467.2683482 Abstract: Deceptive techniques played a prominent role in many human conflicts throughout history. Digital conflicts are no different, as the use of deception has found its way into computing since at least the 1980s. However, many computer defenses that use deception were ad-hoc attempts to incorporate deceptive elements. In this paper, we present a model that can be used to plan and integrate deception in computer security defenses. We present an overview of the fundamental reasons why deception works and the essential principles involved in using such techniques. We investigate the unique advantages deception-based mechanisms bring to traditional computer security defenses. Furthermore, we show how our model can be used to incorporate deception in many parts of computer systems and discuss how we can use such techniques effectively. A successful deception should present plausible alternative(s) to the truth, and these should be designed to exploit specific adversaries' biases. We investigate these biases and discuss how they can be used by presenting a number of examples.
Keywords: biases, computer security, deception (ID#: 15-5052)
URL: http://doi.acm.org/10.1145/2683467.2683482

 

Dubey, N.K.; Kumar, S., "A Review of Watermarking Application in Digital Cinema for Piracy Deterrence," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp.626,630, 7-9 April 2014. doi: 10.1109/CSNT.2014.131 Abstract: Many pirated digital movies captured by camcorder are found on the Internet or on the street market before their official release. During piracy of cinema footage, composite geometric distortions commonly occur due to the angle of the camcorder relative to the screen. Various research efforts have exploited the geometric distortions that occur during in-theater piracy to estimate the pirate's position in the theater via a watermarking scheme followed by a local auto-correlation function (LACF). This paper presents the notion of watermarking and the features required to design a watermarked video for piracy deterrence. We review several methods and introduce frequently used key techniques. The aim of this paper is to focus on watermarking techniques that are well suited to piracy deterrence. The majority of the reviewed watermarking methods emphasize secure spread-spectrum watermarking followed by the LACF to estimate the position of the pirate.
Keywords: copy protection; video cameras; video watermarking; LACF; camcorder capture; cinema footage; digital cinema; geometric distortions; local auto correlation function; piracy deterrence; secure spread spectrum; watermarking application; Acoustics; Correlation; Estimation; Internet; Motion pictures; Video equipment; Watermarking; Digital cinema; audio watermarking; local auto-correlation function (LACF); video watermarking (ID#: 15-5053)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821473&isnumber=6821334

 

Shepherd, M.; Mejias, R.; Klein, G., "A Longitudinal Study to Determine Non-technical Deterrence Effects of Severity and Communication of Internet Use Policy for Reducing Employee Internet Abuse," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 3159,3168, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.392 Abstract: This is the second part of a longitudinal study that examines how employee Internet abuse may be reduced by non-technical deterrence methods, specifically via IT acceptable use policies (AUP). Both studies used actual usage and audit logs (not self-reporting measures) to monitor the web activity of employees. In the earlier study, a mild AUP reminder to company employees resulted in a 12 percent decrease in non-work Internet usage. The current study utilized a more severe AUP communication and resulted in a 33 percent decrease in non-work Internet usage. For both studies, the AUP reminder resulted in an immediate decrease in non-work Internet usage. Results indicate that while non-work traffic under both treatments returned over time, the longevity effect of the severe AUP message was greater than the mild AUP message and non-work traffic did not return to its previous pre-treatment level by the end of the study.
Keywords: Internet; authorisation; industrial property; personnel; social aspects of automation; AUP; IT acceptable use policy; Internet use policy; Web activity; employee Internet abuse; longevity effect; longitudinal study; nontechnical deterrence effect; nonwork Internet usage; nonwork traffic; Companies; Employment; Information security; Internet; Monitoring; AUP; Internet abuse mitigation (ID#: 15-5054)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758994&isnumber=6758592

 

Axelrod, C.W., "Reducing Software Assurance Risks for Security-Critical and Safety-Critical Systems," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island pp.1,6, 2-2 May 2014. doi: 10.1109/LISAT.2014.6845212 Abstract: According to the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)), the US Department of Defense (DoD) recognizes that there is a “persistent lack of a consistent approach ... for the certification of software assurance tools, testing and methodologies” [1]. As a result, the ASD(R&E) is seeking “to address vulnerabilities and weaknesses to cyber threats of the software that operates ... routine applications and critical kinetic systems ...” The mitigation of these risks has been recognized as a significant issue to be addressed in both the public and private sectors. In this paper we examine deficiencies in various software-assurance approaches and suggest ways in which they can be improved. We take a broad look at current approaches, identify their inherent weaknesses and propose approaches that serve to reduce risks. Some technical, economic and governance issues are: (1) Development of software-assurance technical standards (2) Management of software-assurance standards (3) Evaluation of tools, techniques, and metrics (4) Determination of update frequency for tools, techniques (5) Focus on most pressing threats to software systems (6) Suggestions as to risk-reducing research areas (7) Establishment of models of the economics of software-assurance solutions, and testing and certifying software We show that, in order to improve current software assurance policy and practices, particularly with respect to security, there has to be a major overhaul in how software is developed, especially with respect to the requirements and testing phases of the SDLC (Software Development Lifecycle). 
We also suggest that the current preventative approaches are inadequate and that greater reliance should be placed upon avoidance and deterrence. We further recommend that those developing and operating security-critical and safety-critical systems exchange best-of-breed software assurance methods to prevent vulnerabilities in components from compromising entire systems of systems. The recent catastrophic loss of a Malaysia Airlines airplane is then presented as an example of possible compromises of the physical and logical security of on-board communications, management, and control systems.
Keywords: program testing; safety-critical software; software development management; software metrics; ASD(R&E); Assistant Secretary of Defense for Research and Engineering; Malaysia Airlines airplane; SDLC; US Department of Defense; US DoD; component vulnerability prevention; control systems; critical kinetic systems; cyber threats; economic issues; governance issues; logical security; management systems; on-board communications; physical security; private sectors; public sectors; risk mitigation; safety-critical systems; security-critical systems; software assurance risk reduction; software assurance tool certification; software development; software development lifecycle; software methodologies; software metric evaluation; software requirements; software system threats; software technique evaluation; software testing; software tool evaluation; software-assurance standard management; software-assurance technical standard development; technical issues; update frequency determination; Measurement; Organizations; Security; Software systems; Standards; Testing; cyber threats; cyber-physical systems; governance; risk; safety-critical systems; security-critical systems; software assurance; technical standards; vulnerabilities; weaknesses (ID#: 15-5055)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845212&isnumber=6845183

 

Lodhi, F.K.; Hasan, S.R.; Hasan, O.; Awwad, F., "Low Power Soft Error Tolerant Macro Synchronous Micro Asynchronous (MSMA) Pipeline," VLSI (ISVLSI), 2014 IEEE Computer Society Annual Symposium on, pp. 601, 606, 9-11 July 2014. doi: 10.1109/ISVLSI.2014.59 Abstract: Advancement in deep submicron (DSM) technologies led to miniaturization. However, it also increased vulnerability to some electrical and device non-idealities, including soft errors. These errors are a significant threat to the reliable functionality of digital circuits. Several techniques for the detection and deterrence of soft errors (to improve reliability) have been proposed, in both the synchronous and asynchronous domains. In this paper we propose a low power, soft error tolerant solution for synchronous systems that leverages an asynchronous pipeline within a synchronous framework. We name our technique the macro synchronous micro asynchronous (MSMA) pipeline. We provide a framework along with timing analysis of the MSMA technique. MSMA is implemented using a macro synchronous system and a soft error tolerant, low power version of a null convention logic (NCL) asynchronous circuit. We find that this solution can easily replace the intermediate stages of synchronous and asynchronous pipelines without changing their interface protocol. Such NCL asynchronous circuits can be used as standard cells in the synchronous ASIC design flow. Power and performance analysis is done using electrical simulations, which shows that this technique consumes at least 22% less power and 45% less energy-delay product (EDP) compared to state-of-the-art solutions.
Keywords: asynchronous circuits; circuit simulation; integrated circuit design; integrated logic circuits; low-power electronics; radiation hardening (electronics); deep submicron technologies; electrical simulations; energy delay product; low power soft error tolerant MSMA pipeline; macrosynchronous; microasynchronous; null convention logic asynchronous circuit; synchronous ASIC design flow; Adders; Asynchronous circuits; Delays; Logic gates; Pipelines; Rails; Registers; Low power Asynchronous circuits; NCL pipeline; SE tolerant circuits; Soft Error (ID#: 15-5056)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903430&isnumber=6903314

 

Jahyun Goo; Myung-Seong Yim; Kim, D.J., "A Path to Successful Management of Employee Security Compliance: An Empirical Study of Information Security Climate," Professional Communication, IEEE Transactions on, vol.57, no.4, pp.286,308, Dec. 2014. doi: 10.1109/TPC.2014.2374011 Abstract: Research problem: Although organizations have been exerting significant effort to leverage policies and procedures to improve information security, their impact and effectiveness are under scrutiny as employees' compliance with information security procedures remains problematic. Research questions: (1) What is the role of information security climate (ISC) in cultivating individuals' compliance with security policy? (2) Do individual affective and normative states mediate the effect of ISC to increase security policy compliance intention while thwarting employees' security avoidance? Literature review: Drawing upon Griffin and Neal's safety climate model, which relates safety climate to individual safety behaviors that lead to specific performance outcomes, we develop an ISC model to empirically examine the efficacy of security climate in governing employees' policy compliance. The literature suggests that there could be practical reasons for employees not to observe security policies and procedures. These go beyond simple lack of use or negligence, and include rationalizing security violations, particularly in light of the fact that employees are under pressure to get things done without delay in daily work. To empirically address such employee behavior, we employ the term security avoidance in this study: an employee's deliberate intention to avoid security policies or procedures in daily work despite the need and opportunity to do so. Methodology: We surveyed IT users in South Korea about individuals' perceptions of various organizational and managerial information security practices in the work environment.
Results and discussion: The results from 581 participants strongly support the fundamental proposition that the information security climate has a significant positive impact on employees' conformity with the security policy. The study also reveals that the security climate nurtures employees' affective and cognitive states through affective commitment and normative commitment. These, in turn, mediate the influence of security climate on employee policy compliance by facilitating rule adherence among employees while, at the same time, inspiring self-adjusted behaviors to neutralize deliberate intents of negligence. Overall, the findings support our view that the creation of a strong security climate is an adequate alternative to sanction-based deterrence for employees' security policy compliance, which limits the presence of security avoidance. The implications for theory are the multidimensional nature of the ISC construct and its linkage to a systematic view of individual-level information security activities. The implications for practice are the ISC's favorable role in discouraging employees' security avoidance while inducing security policy compliance intention at the same time, given the limits of sanctions.
Keywords: personnel; security of data; ISC; employee security policy compliance; information security climate; security avoidance; Employment; Information security; Organizations; Personnel; Security; Employee security behavior; partial least squares; security avoidance; security climate; security policy compliance (ID#: 15-5057)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6977993&isnumber=6979776

 

Jafarian, Jafar Haadi H.; Al-Shaer, Ehab; Duan, Qi; “Spatio-temporal Address Mutation for Proactive Cyber Agility Against Sophisticated Attackers;” MTD ’14 Proceedings of the First ACM Workshop on Moving Target Defense; November 2014, Pages 69-78. Doi: 10.1145/2663474.2663483 Abstract: The static one-to-one binding of hosts to IP addresses allows adversaries to conduct thorough reconnaissance in order to discover and enumerate network assets. Specifically, this fixed address mapping allows distributed network scanners to aggregate information gathered at multiple locations over different times in order to construct an accurate and persistent view of the network. The unvarying nature of this view enables adversaries to collaboratively share and reuse their collected reconnaissance information in various stages of attack planning and execution. This paper presents a novel moving target defense (MTD) technique which enables the host-to-IP binding of each destination host to vary randomly across the network based on the source identity (spatial randomization) as well as time (temporal randomization). This spatio-temporal randomization distorts attackers' view of the network by causing the collected reconnaissance information to expire as adversaries transition from one host to another or if they stay long enough in one location. Consequently, adversaries are forced to re-scan the network frequently at each location or over different time intervals. These recurring probes significantly raise the bar for adversaries by slowing down the attack's progress while improving its detectability. We introduce three novel metrics for quantifying the effectiveness of MTD defense techniques: deterrence, deception, and detectability. Using these metrics, we perform rigorous theoretical and experimental analysis to evaluate the efficacy of this approach.
These analyses show that our approach is effective in countering a significant number of sophisticated threat models including collaborative reconnaissance, worm propagation, and advanced persistent threat (APT), in an evasion-free manner.
Keywords: adversary-awareness; ip address randomization; moving target defense (mtd); reconnaissance (ID#: 15-5058)
URL: http://doi.acm.org/10.1145/2663474.2663483
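The spatio-temporal randomization described in this abstract can be illustrated with a toy sketch (purely illustrative, not the authors' implementation; the key, function names, and address pool are hypothetical): each source's view of a destination host's address is derived from a keyed hash of the source identity and the current time epoch, so the binding varies across both space (per source) and time (per epoch).

```python
import hashlib
import hmac
import ipaddress

SECRET = b"controller-key"  # hypothetical key held by the MTD controller

def ephemeral_ip(dest_host: str, source_id: str, epoch: int,
                 pool: ipaddress.IPv4Network) -> ipaddress.IPv4Address:
    """Map (destination, source, time epoch) to a short-lived address
    drawn from `pool`, so each source sees its own expiring binding."""
    digest = hmac.new(SECRET, f"{dest_host}|{source_id}|{epoch}".encode(),
                      hashlib.sha256).digest()
    offset = int.from_bytes(digest[:4], "big") % pool.num_addresses
    return pool[offset]

pool = ipaddress.ip_network("10.20.0.0/16")
# Different sources see (with high probability) different addresses for
# the same host in the same epoch (spatial randomization)...
a = ephemeral_ip("web01", "alice", 42, pool)
b = ephemeral_ip("web01", "bob", 42, pool)
# ...and the same source sees a fresh binding in the next epoch (temporal).
c = ephemeral_ip("web01", "alice", 43, pool)
```

Because the mapping is deterministic given the key, the defender's gateway can translate addresses consistently for legitimate flows, while reconnaissance data collected by one scanner at one time is useless to another scanner or at a later time.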

 

Crossler, Robert; Bélanger, France; “An Extended Perspective on Individual Security Behaviors: Protection Motivation Theory and a Unified Security Practices (USP) Instrument;” ACM SIGMIS Database, Volume 45, Issue 4, November 2014, Pages 51-71. Doi: 10.1145/2691517.2691521 Abstract: Security threats regularly affect users of home computers. As such, it is important to understand the practices of users for protecting their computers and networks, and to identify determinants of these practices. Several recent studies utilize Protection Motivation Theory (PMT) to explore these practices. However, these studies focus on one specific security protection behavior or on intentions to use a generic measure of security protection tools or techniques (practices). In contrast, this study empirically tests the effectiveness of PMT to explain a newly developed measure for collectively capturing several individual security practices. The results show that PMT explains an important portion of the variance in the unified security practices measure, and demonstrates the importance of explaining individual security practices as a whole as opposed to one particular behavior individually. Implications of the study for research and practice are discussed.
Keywords: home user, information security, protection motivation theory, security practices (ID#: 15-5059)
URL: http://doi.acm.org/10.1145/2691517.2691521

 

Feigenbaum, Joan; Jaggard, Aaron D.; Wright, Rebecca N.; “Open vs. Closed Systems for Accountability;”  HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 4. Doi: 10.1145/2600176.2600179 Abstract: The relationship between accountability and identity in online life presents many interesting questions. Here, we first systematically survey the various (directed) relationships among principals, system identities (nyms) used by principals, and actions carried out by principals using those nyms. We also map these relationships to corresponding accountability-related properties from the literature.  Because punishment is fundamental to accountability, we then focus on the relationship between punishment and the strength of the connection between principals and nyms. To study this particular relationship, we formulate a utility-theoretic framework that distinguishes between principals and the identities they may use to commit violations. In doing so, we argue that the analogue applicable to our setting of the well known concept of quasilinear utility is insufficiently rich to capture important properties such as reputation. We propose more general utilities with linear transfer that do seem suitable for this model.  In our use of this framework, we define notions of "open" and "closed" systems. This distinction captures the degree to which system participants are required to be bound to their system identities as a condition of participating in the system. This allows us to study the relationship between the strength of identity binding and the accountability properties of a system.
Keywords: accountability, identity, utility (ID#: 15-5060)
URL: http://doi.acm.org/10.1145/2600176.2600179

 

Ferdous, Md. Sadek; Norman, Gethin; Poet, Ron; “Mathematical Modelling of Identity, Identity Management and Other Related Topics;” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 9. Doi:  10.1145/2659651.2659729 Abstract: There exist disparate sets of definitions with different semantics on different topics of Identity Management which often lead to misunderstanding. A few efforts can be found compiling several related vocabularies into a single place to build up a set of definitions based on a common semantic. However, these efforts are not comprehensive and are only textual in nature. In essence, a mathematical model of identity and identity management covering all its aspects is still missing. In this paper we build up a mathematical model of different core topics covering a wide range of vocabularies related to Identity Management. At first we build up a mathematical model of Digital Identity. Then we use the model to analyse different aspects of Identity Management. Finally, we discuss three applications to illustrate the applicability of our approach. Being based on mathematical foundations, the approach can be used to build up a solid understanding on different topics of Identity Management.
Keywords: Identity, Identity Management, Mathematical Modelling (ID#: 15-5061)
URL: http://doi.acm.org/10.1145/2659651.2659729

 

Mora, Antonio M.; De las Cuevas, Paloma; Merelo, Juan Julian; Zamarripa, Sergio; Esparcia-Alcazar, Anna I.; “Enforcing Corporate Security Policies via Computational Intelligence Techniques;” GECCO Comp '14 Proceedings of the 2014 Conference Companion On Genetic And Evolutionary Computation Companion, July 2014, Pages 1245-1252. Doi: 10.1145/2598394.2605438 Abstract: This paper presents an approach, based on a project in development, which combines data mining, machine learning and computational intelligence techniques in order to create a user-centric and adaptable corporate security system. The system, named MUSES, will be able to analyse users' behaviour (modelled as events) when interacting with the company's server, for instance when accessing corporate assets. As a result of this analysis, and after the application of the aforementioned techniques, the corporate security policies, and specifically the corporate security rules, will be adapted to deal with new anomalous situations or to better manage users' behaviour. The work reviews the current state of the art in resolving security issues by means of this kind of method. It then describes the MUSES features in this respect and compares them with existing approaches.
Keywords: computational intelligence, corporate security policies, evolutionary computation, security rules (ID#: 15-5062)
URL: http://doi.acm.org/10.1145/2598394.2605438

 

Syta, Ewa; Corrigan-Gibbs, Henry; Weng, Shu-Chun; Wolinsky, David; Ford, Bryan; Johnson, Aaron; “Security Analysis of Accountable Anonymity in Dissent;” ACM Transactions on Information and System Security (TISSEC), Volume 17, Issue 1, August 2014, Article No. 4.  Doi: 10.1145/2629621 Abstract: Users often wish to communicate anonymously on the Internet, for example, in group discussion or instant messaging forums. Existing solutions are vulnerable to misbehaving users, however, who may abuse their anonymity to disrupt communication. Dining Cryptographers Networks (DC-nets) leave groups vulnerable to denial-of-service and Sybil attacks; mix networks are difficult to protect against traffic analysis; and accountable voting schemes are unsuited to general anonymous messaging.  Dissent is the first general protocol offering provable anonymity and accountability for moderate-size groups, while efficiently handling unbalanced communication demands among users. We present an improved and hardened dissent protocol, define its precise security properties, and offer rigorous proofs of these properties. The improved protocol systematically addresses the delicate balance between provably hiding the identities of well-behaved users, while provably revealing the identities of disruptive users, a challenging task because many forms of misbehavior are inherently undetectable. The new protocol also addresses several nontrivial attacks on the original dissent protocol stemming from subtle design flaws.
Keywords: Anonymous communication, accountable anonymity, provable security (ID#: 15-5063)
URL: http://doi.acm.org/10.1145/2629621

 

Ren, Chuangang; Chen, Kai; Liu, Peng; “Droidmarking: Resilient Software Watermarking for Impeding Android Application Repackaging;” ASE '14 Proceedings of the 29th ACM/IEEE International Conference on Automated Software Engineering, September 2014, Pages 635-646. Doi: 10.1145/2642937.2642977 Abstract: Software plagiarism in Android markets (app repackaging) is raising serious concerns about the health of the Android ecosystem. Existing app repackaging detection techniques fall short in detection efficiency and in resilience to circumventing attacks; this allows repackaged apps to be widely propagated and causes extensive damages before being detected. To overcome these difficulties and instantly thwart app repackaging threats, we devise a new dynamic software watermarking technique - Droidmarking - for Android apps that combines the efforts of all stakeholders and achieves the following three goals: (1) copyright ownership assertion for developers, (2) real-time app repackaging detection on user devices, and (3) resilience to evading attacks. Distinct from existing watermarking techniques, the watermarks in Droidmarking are non-stealthy, which means that watermark locations are not intentionally concealed, yet still are impervious to evading attacks. This property effectively enables normal users to recover and verify watermark copyright information without requiring a confidential watermark recognizer. Droidmarking is based on a primitive called self-decrypting code (SDC). Our evaluations show that Droidmarking is a feasible and robust technique to effectively impede app repackaging with relatively small performance overhead.
Keywords: android, app repackaging, software watermarking (ID#: 15-5064)
URL: http://doi.acm.org/10.1145/2642937.2642977
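The idea of a non-stealthy watermark that any user can verify, yet that breaks if the app is repackaged, can be shown in a drastically simplified sketch (hypothetical, not Droidmarking's actual self-decrypting-code mechanism; all names and data here are illustrative): the copyright string is XOR-masked with a keystream derived from the app's own code bytes, so recovery requires no confidential recognizer, while any modification to the code corrupts the recovered watermark.

```python
import hashlib

def _stream(app_code: bytes, length: int) -> bytes:
    """Derive a keystream from the code itself, repeated to `length`."""
    key = hashlib.sha256(app_code).digest()
    return (key * (length // len(key) + 1))[:length]

def embed_watermark(app_code: bytes, copyright_info: bytes) -> bytes:
    """Bind the watermark to the code: XOR it with the code-derived stream."""
    return bytes(a ^ b for a, b in
                 zip(copyright_info, _stream(app_code, len(copyright_info))))

def recover_watermark(app_code: bytes, blob: bytes) -> bytes:
    """Anyone holding the app can recover the watermark; no secret needed."""
    return bytes(a ^ b for a, b in zip(blob, _stream(app_code, len(blob))))

code = b"\x12\x34...dex bytecode..."   # stand-in for the app's executable
blob = embed_watermark(code, b"(c) 2014 Example Dev")
# Recovery succeeds on the unmodified app, but repackaging (any change
# to the code) yields a different keystream and garbles the watermark.
original = recover_watermark(code, blob)
tampered = recover_watermark(code + b"\x00", blob)
```

The point of the sketch is the design choice the abstract highlights: verification is public (non-stealthy), and integrity comes from binding the watermark to the code rather than from hiding its location.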

 

Zhou, Wu; Wang, Zhi; Zhou, Yajin; Jiang, Xuxian; “DIVILAR: Diversifying Intermediate Language for Anti-repackaging on Android Platform;” CODASPY '14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 199-210. Doi: 10.1145/2557547.2557558 Abstract: App repackaging remains a serious threat to the emerging mobile app ecosystem. Previous solutions have mostly focused on the postmortem detection of repackaged apps by measuring similarity among apps. In this paper, we propose DIVILAR, a virtualization-based protection scheme to enable self-defense of Android apps against app repackaging. Specifically, it re-encodes an Android app in a diversified virtual instruction set and uses a specialized execute engine for these virtual instructions to run the protected app. However, this extra layer of execution may cause significant performance overhead, rendering the solution unacceptable for daily use. To address this challenge, we leverage a light-weight hooking mechanism to hook into Dalvik VM, the execution engine for Dalvik bytecode, and piggy-back the decoding of virtual instructions to that of Dalvik bytecode. By compositing virtual and Dalvik instruction execution, we can effectively eliminate this extra layer of execution and significantly reduce the performance overhead. We have implemented a prototype of DIVILAR. Our evaluation shows that DIVILAR is resilient against existing static and dynamic analysis, including those specific to VM-based protection. Further performance evaluation demonstrates its efficiency for daily use (an average increase of 16.2 and 8.9 to the start time and run time, respectively).
Keywords: android, anti-repackaging, virtual machine (ID#: 15-5065)
URL: http://doi.acm.org/10.1145/2557547.2557558
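The core idea of re-encoding an app in a diversified, per-app virtual instruction set can be shown in miniature (purely illustrative, not DIVILAR's actual encoding; the toy instruction set and function names are invented): a seeded random permutation of opcodes serves as the app-specific encoding, and the execution engine inverts that permutation before dispatching each instruction, so a disassembler without the permutation sees meaningless opcodes.

```python
import random

# Toy stack-machine instruction set; real Dalvik bytecode is far richer.
OPS = ["PUSH", "ADD", "MUL", "PRINT"]

def make_codec(seed: int):
    """Derive an app-specific opcode permutation (the diversified encoding)."""
    rng = random.Random(seed)
    shuffled = OPS[:]
    rng.shuffle(shuffled)
    encode = dict(zip(OPS, shuffled))          # real opcode -> virtual opcode
    decode = {v: k for k, v in encode.items()}  # inverse permutation
    return encode, decode

def run(program, decode):
    """Execution engine: decode each virtual opcode, then dispatch normally."""
    stack, out = [], []
    for vop, arg in program:
        op = decode[vop]  # invert the per-app permutation before dispatch
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif op == "MUL":
            stack.append(stack.pop() * stack.pop())
        elif op == "PRINT":
            out.append(stack[-1])
    return out

encode, decode = make_codec(seed=2014)
# (2 + 3) * 4, written in the diversified (virtual) opcode names:
prog = [(encode["PUSH"], 2), (encode["PUSH"], 3), (encode["ADD"], None),
        (encode["PUSH"], 4), (encode["MUL"], None), (encode["PRINT"], None)]
result = run(prog, decode)  # [20]
```

In DIVILAR the decode step is piggy-backed onto the Dalvik VM's own bytecode decoding via hooking, which is what removes the extra interpretation layer this toy version adds.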

 

Sun, Mengtao; Tan, Gang; “NativeGuard: Protecting Android Applications from Third-party Native Libraries;” WiSec '14 Proceedings of the 2014 ACM Conference on Security and Privacy in Wireless & Mobile Networks,  July 2014, pages 165-176.  Doi:  10.1145/2627393.2627396 Abstract: Android applications often include third-party libraries written in native code. However, current native components are not well managed by Android's security architecture. We present NativeGuard, a security framework that isolates native libraries from other components in Android applications. Leveraging the process-based protection in Android, NativeGuard isolates native libraries of an Android application into a second application where unnecessary privileges are eliminated. NativeGuard requires neither modifications to Android nor access to the source code of an application. It addresses multiple technical issues to support various interfaces that Android provides to the native world. Experimental results demonstrate that our framework works well with a set of real-world applications, and incurs only modest overhead on benchmark programs.
Keywords: android, java native interface, privilege isolation (ID#: 15-5066)
URL: http://doi.acm.org/10.1145/2627393.2627396

 

Peng, Chunyi; Li, Chi-Yu; Wang, Hongyi; Tu, Guan-Hua; Lu, Songwu; “Real Threats to Your Data Bills: Security Loopholes and Defenses in Mobile Data Charging;” CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 727-738. Doi: 10.1145/2660267.2660346 Abstract: Secure mobile data charging (MDC) is critical to cellular network operations. It must charge the right user for the right volume that (s)he authorizes to consume (i.e., the requirements of authentication, authorization, and accounting (AAA)). In this work, we conduct a security analysis of the MDC system in cellular networks. We find that all three AAA requirements can be breached in both design and practice, and identify three concrete vulnerabilities: authentication bypass, authorization fraud, and accounting volume inaccuracy. The root causes lie in the technology fundamentals of cellular networks and the Internet IP design, as well as in imprudent implementations. We devise three showcase attacks to demonstrate that even simple attacks can easily penetrate operational 3G/4G cellular networks. We further propose and evaluate defense solutions.
Keywords: aaa, accounting, attack, authentication, authorization, cellular networks, defense, mobile data services (ID#: 15-5067)
URL: http://doi.acm.org/10.1145/2660267.2660346

 

Nostro, Nicola; Ceccarelli, Andrea; Bondavalli, Andrea; Brancati, Francesco; “Insider Threat Assessment: A Model-Based Methodology;” ACM SIGOPS Operating Systems Review, Volume 48, Issue 2, July 2014, Pages 3-12. Doi: 10.1145/2694737.2694740 Abstract: Security is a major challenge for today's companies, especially ICT companies which manage large-scale cyber-critical systems. Amongst the multitude of attacks and threats to which a system is potentially exposed are insider attackers, i.e., users with legitimate access who abuse or misuse their power, thus leading to unexpected security violations (e.g., acquiring and disseminating sensitive information). These attacks are very difficult to detect and mitigate, both due to the nature of the attackers, who are often company employees motivated by socio-economic reasons, and due to the fact that attackers operate within their granted restrictions. As a consequence, insider attackers constitute an actual threat for ICT organizations. In this paper we present our methodology, together with the application of existing supporting libraries and tools from the state of the art, for insider threat assessment and mitigation. The ultimate objective is to define the motivations and the target of an insider, investigate the likelihood and severity of potential violations, and finally identify appropriate countermeasures. The methodology also includes a maintenance phase during which the assessment can be updated to reflect system changes. As a case study, we apply our methodology to the crisis management system Secure!, which includes different kinds of users and is consequently exposed to a large set of insider threats.
Keywords: attack path, insider threats, risk assessment, security (ID#: 15-5068)
URL: http://doi.acm.org/10.1145/2694737.2694740

 

Uta, Adina; Ivan, Ion; Popa, Marius; Ciurea, Cristian; Doinea, Mihai; “Security of Virtual Entities;” CompSysTech '14 Proceedings of the 15th International Conference on Computer Systems and Technologies, June 2014, Pages 278-285. Doi: 10.1145/2659532.2659634 Abstract: The concepts of basic virtual entity and derived virtual entity are presented. Their quality characteristics are defined in the context of multiple access by heterogeneous target group members. The development conditions of derived entities are established. For collections of basic virtual entities and derived entities, algorithms are constructed and implemented to ensure and increase the level of security in the virtual environment. To implement a complete set of virtual entities, measurements of the security level are performed using a specially built metric.
Keywords: basic entities, derived entities, multi-access, security, security metric, target group, virtual environment (ID#: 15-5069)
URL: http://doi.acm.org/10.1145/2659532.2659634

 

Okada, Kazuya; Hazeyama, Hiroaki; Kadobayashi, Youki; “Oblivious DDoS Mitigation with Locator/ID Separation Protocol;” CFI '14 Proceedings of The Ninth International Conference on Future Internet Technologies, June 2014, Article No. 8. Doi: 10.1145/2619287.2619291 Abstract: The need to keep an attacker oblivious of an attack mitigation effort is a very important component of a defense against denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks, because it helps to dissuade attackers from changing their attack patterns. Conceptually, DDoS mitigation can be achieved by two components. The first is a decoy server that provides a service function or receives attack traffic as a substitute for a legitimate server. The second is a decoy network that restricts attack traffic to the peripheries of a network, or which reroutes attack traffic to decoy servers. In this paper, we propose the use of a two-stage map table extension to the Locator/ID Separation Protocol (LISP) to realize a decoy network. We also describe and demonstrate how LISP can be used to implement an oblivious DDoS mitigation mechanism by adding a simple extension to the LISP MapServer. Together with decoy servers, this method can terminate DDoS traffic on the ingress end of a LISP-enabled network. We verified the effectiveness of our proposed mechanism through simulated DDoS attacks on a simple network topology. Our evaluation results indicate that the mechanism could be activated within a few seconds, and that the attack traffic can be terminated without incurring overhead on the MapServer.
Keywords: DoS/DDoS, LISP, mitigation, routing (ID#: 15-5070)
URL: http://doi.acm.org/10.1145/2619287.2619291
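The decoy-network behavior can be pictured as a map lookup with a mitigation extension: flagged sources receive a decoy locator, so their traffic terminates at a decoy server while the reply still looks like a normal map answer. The sketch below is our own hedged illustration of the concept, not the paper's MapServer code; the table layout and names are invented.

```python
def resolve_locator(map_table: dict, suspects: dict, src: str, dst_eid: str) -> str:
    """Toy LISP-style map lookup with an oblivious-mitigation extension.

    map_table maps an endpoint ID (EID) to a real and a decoy locator;
    suspects maps an EID to the set of sources flagged as attackers.
    Flagged sources are silently answered with the decoy locator, keeping
    the attacker oblivious while sparing the legitimate server.
    """
    entry = map_table[dst_eid]
    if src in suspects.get(dst_eid, set()):
        return entry["decoy"]
    return entry["real"]

MAP = {"web.example": {"real": "203.0.113.10", "decoy": "198.51.100.99"}}
SUSPECTS = {"web.example": {"192.0.2.66"}}

# A normal client reaches the real server; a flagged scanner gets the decoy.
assert resolve_locator(MAP, SUSPECTS, "192.0.2.1", "web.example") == "203.0.113.10"
assert resolve_locator(MAP, SUSPECTS, "192.0.2.66", "web.example") == "198.51.100.99"
```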


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Deterrence, 2014 (ACM Publications)

 

 

Finding ways, both technical and behavioral, to provide disincentives to threats is a promising area of research. Since most cybersecurity is “bolt on” rather than embedded, and since detection, response, and forensics are expensive, time-consuming processes, discouraging attacks can be a cost-effective cybersecurity approach. The research works cited here were presented and published in 2014 in various publications of the ACM.


 

Mohammed H. Almeshekah, Eugene H. Spafford; “Planning and Integrating Deception into Computer Security Defenses;” NSPW '14 Proceedings of the 2014 New Security Paradigms Workshop, September 2014, Pages 127-138. Doi: 10.1145/2683467.2683482 Abstract: Deceptive techniques played a prominent role in many human conflicts throughout history. Digital conflicts are no different, as the use of deception has found its way to computing since at least the 1980s. However, many computer defenses that use deception were ad-hoc attempts to incorporate deceptive elements. In this paper, we present a model that can be used to plan and integrate deception in computer security defenses. We present an overview of the fundamental reasons why deception works and the essential principles involved in using such techniques. We investigate the unique advantages deception-based mechanisms bring to traditional computer security defenses. Furthermore, we show how our model can be used to incorporate deception in many parts of computer systems and discuss how we can use such techniques effectively. A successful deception should present plausible alternative(s) to the truth, and these should be designed to exploit specific adversaries' biases. We investigate these biases and discuss how they can be used, presenting a number of examples.
Keywords: biases, computer security, deception (ID#: 15-5071)
URL: http://doi.acm.org/10.1145/2683467.2683482

 

Kazuya Okada, Hiroaki Hazeyama, Youki Kadobayashi; “Oblivious DDoS Mitigation With Locator/ID Separation Protocol;” CFI '14 Proceedings of The Ninth International Conference on Future Internet Technologies, June 2014, Article No. 8. Doi: 10.1145/2619287.2619291 Abstract: The need to keep an attacker oblivious of an attack mitigation effort is a very important component of a defense against denial-of-service (DoS) and distributed denial-of-service (DDoS) attacks, because it helps to dissuade attackers from changing their attack patterns. Conceptually, DDoS mitigation can be achieved by two components. The first is a decoy server that provides a service function or receives attack traffic as a substitute for a legitimate server. The second is a decoy network that restricts attack traffic to the peripheries of a network, or which reroutes attack traffic to decoy servers. In this paper, we propose the use of a two-stage map table extension to the Locator/ID Separation Protocol (LISP) to realize a decoy network. We also describe and demonstrate how LISP can be used to implement an oblivious DDoS mitigation mechanism by adding a simple extension to the LISP MapServer. Together with decoy servers, this method can terminate DDoS traffic on the ingress end of a LISP-enabled network. We verified the effectiveness of our proposed mechanism through simulated DDoS attacks on a simple network topology. Our evaluation results indicate that the mechanism could be activated within a few seconds, and that the attack traffic can be terminated without incurring overhead on the MapServer.
Keywords:  DoS/DDoS, LISP, mitigation, routing (ID#: 15-5072)
URL: http://doi.acm.org/10.1145/2619287.2619291

 

Nicola Nostro, Andrea Ceccarelli, Andrea Bondavalli, Francesco Brancati; “Insider Threat Assessment: A Model-Based Methodology;” ACM SIGOPS Operating Systems Review, Volume 48 Issue 2, July 2014, Pages 3-12. Doi: 10.1145/2694737.2694740 Abstract: Security is a major challenge for today's companies, especially ICT companies which manage large-scale cyber-critical systems. Amongst the multitude of attacks and threats to which a system is potentially exposed are insider attackers, i.e., users with legitimate access who abuse or misuse their power, thus leading to unexpected security violations (e.g., acquiring and disseminating sensitive information). These attacks are very difficult to detect and mitigate, both due to the nature of the attackers, who are often company employees motivated by socio-economic reasons, and due to the fact that attackers operate within their granted restrictions. As a consequence, insider attackers constitute an actual threat for ICT organizations. In this paper we present our methodology, together with the application of existing supporting libraries and tools from the state of the art, for insider threat assessment and mitigation. The ultimate objective is to define the motivations and the target of an insider, investigate the likelihood and severity of potential violations, and finally identify appropriate countermeasures. The methodology also includes a maintenance phase during which the assessment can be updated to reflect system changes. As a case study, we apply our methodology to the crisis management system Secure!, which includes different kinds of users and is consequently exposed to a large set of insider threats.
Keywords: attack path, insider threats, risk assessment, security (ID#: 15-5073)
URL: http://doi.acm.org/10.1145/2694737.2694740

 

Adina Uta, Ion Ivan, Marius Popa, Cristian Ciurea, Mihai Doinea; “Security of Virtual Entities;” CompSysTech '14 Proceedings of the 15th International Conference on Computer Systems and Technologies, June 2014, Pages 278-285. Doi: 10.1145/2659532.2659634 Abstract: The concepts of basic virtual entity and derived virtual entity are presented. Their quality characteristics are defined in the context of multiple access by heterogeneous target group members. The development conditions of derived entities are established. For collections of basic virtual entities and derived entities, algorithms are constructed and implemented to ensure and increase the level of security in the virtual environment. To implement a complete set of virtual entities, measurements of the security level are performed using a specially built metric.
Keywords: basic entities, derived entities, multi-access, security, security metric, target group, virtual environment (ID#: 15-5074)
URL: http://doi.acm.org/10.1145/2659532.2659634

 

Jafar Haadi H. Jafarian, Ehab Al-Shaer, Qi Duan; “Spatio-temporal Address Mutation for Proactive Cyber Agility against Sophisticated Attackers;” MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, December 2014, Pages 69-78. Doi: 10.1145/2663474.2663483 Abstract: The static one-to-one binding of hosts to IP addresses allows adversaries to conduct thorough reconnaissance in order to discover and enumerate network assets. Specifically, this fixed address mapping allows distributed network scanners to aggregate information gathered at multiple locations over different times in order to construct an accurate and persistent view of the network. The unvarying nature of this view enables adversaries to collaboratively share and reuse their collected reconnaissance information in various stages of attack planning and execution. This paper presents a novel moving target defense (MTD) technique which enables the host-to-IP binding of each destination host to vary randomly across the network based on the source identity (spatial randomization) as well as time (temporal randomization). This spatio-temporal randomization will distort attackers' view of the network by causing the collected reconnaissance information to expire as adversaries transition from one host to another or if they stay long enough in one location. Consequently, adversaries are forced to re-scan the network frequently at each location or over different time intervals. These recurring probings significantly raise the bar for the adversaries by slowing down the attack progress, while improving its detectability. We introduce three novel metrics for quantifying the effectiveness of MTD defense techniques: deterrence, deception, and detectability. Using these metrics, we perform rigorous theoretical and experimental analysis to evaluate the efficacy of this approach. These analyses show that our approach is effective in countering a significant number of sophisticated threat models, including collaborative reconnaissance, worm propagation, and advanced persistent threats (APT), in an evasion-free manner.
Keywords: adversary-awareness, ip address randomization, moving target defense (mtd), reconnaissance (ID#: 15-5075)
URL: http://doi.acm.org/10.1145/2663474.2663483
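The spatio-temporal binding described above can be approximated with a keyed hash over (source, destination, time epoch): different sources see different addresses for the same host (spatial randomization), and each mapping expires with the epoch (temporal randomization). The Python sketch below is our illustration under assumed names (SECRET, POOL), not the authors' implementation.

```python
import hashlib
import hmac
import ipaddress

# Hypothetical shared key held by the moving-target-defense controller.
SECRET = b"mtd-controller-key"
POOL = ipaddress.ip_network("10.8.0.0/16")  # pool of ephemeral addresses

def ephemeral_ip(src_id: str, dst_host: str, epoch: int) -> str:
    """Map (source, destination, time epoch) to a virtual IP in POOL.

    Spatial randomization: different sources see different mappings for the
    same host. Temporal randomization: each source's mapping expires when
    the epoch advances, invalidating previously gathered reconnaissance.
    """
    digest = hmac.new(SECRET, f"{src_id}|{dst_host}|{epoch}".encode(),
                      hashlib.sha256).digest()
    offset = int.from_bytes(digest[:4], "big") % POOL.num_addresses
    return str(POOL.network_address + offset)

# Each (scanner, epoch) pair almost surely yields a distinct view of "web01";
# the mapping is deterministic for the gateway, so legitimate flows still route.
print(ephemeral_ip("scanner-A", "web01", epoch=1))
print(ephemeral_ip("scanner-B", "web01", epoch=1))
print(ephemeral_ip("scanner-A", "web01", epoch=2))
```

Because the mapping is a pure function of the key and inputs, a trusted gateway can recompute it for legitimate clients, while a scanner's aggregated view goes stale every epoch.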

 

Aron Laszka, Benjamin Johnson, Pascal Schöttle, Jens Grossklags, Rainer Böhme; “Secure Team Composition to Thwart Insider Threats and Cyber-Espionage;” ACM Transactions on Internet Technology (TOIT) - Special Issue on Pricing and Incentives in Networks and Systems and Regular Papers, Volume 14 Issue 2-3, October 2014,  Article No. 19. Doi: 10.1145/2663499 Abstract: We develop a formal nondeterministic game model for secure team composition to counter cyber-espionage and to protect organizational secrets against an attacker who tries to sidestep technical security mechanisms by offering a bribe to a project team member. The game captures the adversarial interaction between the attacker and the project manager who has a secret she wants to protect but must share with a team of individuals selected from within her organization. Our interdisciplinary work is important in the face of the multipronged approaches utilized by well-motivated attackers to circumvent the fortifications of otherwise well-defended targets.
Keywords: Insider threat, access control, cyber-espionage, game theory, human factor, management of information security (ID#: 15-5076)
URL: http://doi.acm.org/10.1145/2663499

 

Robert Crossler, France Bélanger; “An Extended Perspective on Individual Security Behaviors: Protection Motivation Theory and a Unified Security Practices (USP) Instrument;” ACM SIGMIS, Volume 45 Issue 4, November 2014, Pages 51-71. Doi: 10.1145/2691517.2691521 Abstract: Security threats regularly affect users of home computers. As such, it is important to understand the practices of users for protecting their computers and networks, and to identify determinants of these practices. Several recent studies utilize Protection Motivation Theory (PMT) to explore these practices. However, these studies focus on one specific security protection behavior or on intentions to use a generic measure of security protection tools or techniques (practices). In contrast, this study empirically tests the effectiveness of PMT to explain a newly developed measure for collectively capturing several individual security practices. The results show that PMT explains an important portion of the variance in the unified security practices measure, and demonstrates the importance of explaining individual security practices as a whole as opposed to one particular behavior individually. Implications of the study for research and practice are discussed.
Keywords: home user, information security, protection motivation theory, security practices (ID#: 15-5077)
URL:   http://doi.acm.org/10.1145/2691517.2691521

 

Joan Feigenbaum, Aaron D. Jaggard, Rebecca N. Wright; “Open vs. Closed Systems for Accountability;” HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 4.  Doi: 10.1145/2600176.2600179 Abstract: The relationship between accountability and identity in online life presents many interesting questions. Here, we first systematically survey the various (directed) relationships among principals, system identities (nyms) used by principals, and actions carried out by principals using those nyms. We also map these relationships to corresponding accountability-related properties from the literature. Because punishment is fundamental to accountability, we then focus on the relationship between punishment and the strength of the connection between principals and nyms. To study this particular relationship, we formulate a utility-theoretic framework that distinguishes between principals and the identities they may use to commit violations. In doing so, we argue that the analogue applicable to our setting of the well known concept of quasilinear utility is insufficiently rich to capture important properties such as reputation. We propose more general utilities with linear transfer that do seem suitable for this model.  In our use of this framework, we define notions of "open" and "closed" systems. This distinction captures the degree to which system participants are required to be bound to their system identities as a condition of participating in the system. This allows us to study the relationship between the strength of identity binding and the accountability properties of a system.
Keywords: accountability, identity, utility (ID#: 15-5078)
URL: http://doi.acm.org/10.1145/2600176.2600179

 

Md. Sadek Ferdous, Gethin Norman, Ron Poet; “Mathematical Modelling of Identity, Identity Management and Other Related Topics;” SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 9. Doi: 10.1145/2659651.2659729 Abstract: There exist disparate sets of definitions with different semantics on different topics of Identity Management which often lead to misunderstanding. A few efforts can be found compiling several related vocabularies into a single place to build up a set of definitions based on a common semantic. However, these efforts are not comprehensive and are only textual in nature. In essence, a mathematical model of identity and identity management covering all its aspects is still missing. In this paper we build up a mathematical model of different core topics covering a wide range of vocabularies related to Identity Management. At first we build up a mathematical model of Digital Identity. Then we use the model to analyse different aspects of Identity Management. Finally, we discuss three applications to illustrate the applicability of our approach. Being based on mathematical foundations, the approach can be used to build up a solid understanding on different topics of Identity Management.
Keywords: Identity, Identity Management, Mathematical Modelling (ID#: 15-5079)
URL: http://doi.acm.org/10.1145/2659651.2659729

 

Enric Junqué de Fortuny, Marija Stankova, Julie Moeyersoms, Bart Minnaert, Foster Provost, David Martens; “Corporate Residence Fraud Detection;”  KDD '14 Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, August 2014, Pages 1650-1659. Doi: 10.1145/2623330.2623333 Abstract: With the globalisation of the world's economies and ever-evolving financial structures, fraud has become one of the main dissipaters of government wealth and perhaps even a major contributor in the slowing down of economies in general. Although corporate residence fraud is known to be a major factor, data availability and high sensitivity have caused this domain to be largely untouched by academia. The current Belgian government has pledged to tackle this issue at large by using a variety of in-house approaches and cooperations with institutions such as academia, the ultimate goal being a fair and efficient taxation system. This is the first data mining application specifically aimed at finding corporate residence fraud, where we show the predictive value of using both structured and fine-grained invoicing data. We further describe the problems involved in building such a fraud detection system, which are mainly data-related (e.g. data asymmetry, quality, volume, variety and velocity) and deployment-related (e.g. the need for explanations of the predictions made).
Keywords: corporate residence fraud, fraud detection, structured data, transactional data (ID#: 15-5080)
URL: http://doi.acm.org/10.1145/2623330.2623333

 

Antonio M. Mora, Paloma De las Cuevas, Juan Julián Merelo, Sergio Zamarripa, Anna I. Esparcia-Alcázar; “Enforcing Corporate Security Policies via Computational Intelligence Techniques;” GECCO Comp '14 Proceedings of the 2014 Conference Companion on Genetic and Evolutionary Computation Companion, July 2014, Pages 1245-1252. Doi: 10.1145/2598394.2605438 Abstract: This paper presents an approach, based on a project in development, which combines Data Mining, Machine Learning and Computational Intelligence techniques in order to create a user-centric and adaptable corporate security system. Thus, the system, named MUSES, will be able to analyse the user's behaviour (modelled as events) when interacting with the company's server, for instance when accessing corporate assets. As a result of this analysis, and after the application of the aforementioned techniques, the Corporate Security Policies, and specifically the Corporate Security Rules, will be adapted to deal with new anomalous situations, or to better manage the user's behaviour. The work reviews the current state of the art in resolving security issues by means of these kinds of methods. Then it describes the MUSES features in this respect and compares them with the existing approaches.
Keywords: computational intelligence, corporate security policies, evolutionary computation, security rules (ID#: 15-5081)
URL: http://doi.acm.org/10.1145/2598394.2605438

 

Chuangang Ren, Kai Chen, Peng Liu; “Droidmarking: Resilient Software Watermarking for Impeding Android Application Repackaging;” ASE '14 Proceedings of the 29th ACM/IEEE International Conference on Automated Software Engineering, September 2014, Pages 635-646. Doi: 10.1145/2642937.2642977 Abstract: Software plagiarism in Android markets (app repackaging) is raising serious concerns about the health of the Android ecosystem. Existing app repackaging detection techniques fall short in detection efficiency and in resilience to circumventing attacks; this allows repackaged apps to be widely propagated and causes extensive damages before being detected. To overcome these difficulties and instantly thwart app repackaging threats, we devise a new dynamic software watermarking technique - Droidmarking - for Android apps that combines the efforts of all stakeholders and achieves the following three goals: (1) copyright ownership assertion for developers, (2) real-time app repackaging detection on user devices, and (3) resilience to evading attacks. Distinct from existing watermarking techniques, the watermarks in Droidmarking are non-stealthy, which means that watermark locations are not intentionally concealed, yet still are impervious to evading attacks. This property effectively enables normal users to recover and verify watermark copyright information without requiring a confidential watermark recognizer. Droidmarking is based on a primitive called self-decrypting code (SDC). Our evaluations show that Droidmarking is a feasible and robust technique to effectively impede app repackaging with relatively small performance overhead.
Keywords: android, app repackaging, software watermarking (ID#: 15-5082)
URL: http://doi.acm.org/10.1145/2642937.2642977
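The self-decrypting code (SDC) primitive at the heart of Droidmarking can be caricatured in a few lines: the watermark travels in encrypted form, and only the executing stub reveals it, so anyone can verify the recovered value against the developer's published string without a confidential recognizer. This is a hedged, platform-independent sketch; the real scheme operates on Dalvik code, and the XOR "cipher" and key below are placeholders.

```python
import base64

KEY = 0x5A  # placeholder per-app key; a real scheme derives this from the app

def embed_watermark(plaintext: str) -> bytes:
    """Obfuscate the watermark payload (XOR stands in for real encryption)."""
    return base64.b64encode(bytes(b ^ KEY for b in plaintext.encode()))

def self_decrypting_stub(blob: bytes) -> str:
    """Run-time stub: decrypt and return the embedded watermark.

    A repackager who alters the app must alter the blob or the stub, which
    corrupts the recovered value; any user can then compare it against the
    developer's published copyright string."""
    return bytes(b ^ KEY for b in base64.b64decode(blob)).decode()

blob = embed_watermark("owner=dev.example;app=demo;v=1")
assert self_decrypting_stub(blob) == "owner=dev.example;app=demo;v=1"
```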

 

Wu Zhou, Zhi Wang, Yajin Zhou, Xuxian Jiang; “DIVILAR: Diversifying Intermediate Language for Anti-Repackaging on Android Platform;” CODASPY '14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 199-210. Doi: 10.1145/2557547.2557558 Abstract: App repackaging remains a serious threat to the emerging mobile app ecosystem. Previous solutions have mostly focused on the postmortem detection of repackaged apps by measuring similarity among apps. In this paper, we propose DIVILAR, a virtualization-based protection scheme to enable self-defense of Android apps against app repackaging. Specifically, it re-encodes an Android app in a diversified virtual instruction set and uses a specialized execution engine for these virtual instructions to run the protected app. However, this extra layer of execution may cause significant performance overhead, rendering the solution unacceptable for daily use. To address this challenge, we leverage a light-weight hooking mechanism to hook into the Dalvik VM, the execution engine for Dalvik bytecode, and piggy-back the decoding of virtual instructions on that of Dalvik bytecode. By compositing virtual and Dalvik instruction execution, we can effectively eliminate this extra layer of execution and significantly reduce the performance overhead. We have implemented a prototype of DIVILAR. Our evaluation shows that DIVILAR is resilient against existing static and dynamic analysis, including those specific to VM-based protection. Further performance evaluation demonstrates its efficiency for daily use (an average increase of 16.2% and 8.9% in start time and run time, respectively).
Keywords: android, anti-repackaging, virtual machine (ID#: 15-5083)
URL: http://doi.acm.org/10.1145/2557547.2557558

 

Ewa Syta, Henry Corrigan-Gibbs, Shu-Chun Weng, David Wolinsky, Bryan Ford, Aaron Johnson; “Security Analysis of Accountable Anonymity in Dissent;” ACM Transactions on Information and System Security (TISSEC), Volume 17 Issue 1, August 2014, Article No. 4. Doi: 10.1145/2629621 Abstract: Users often wish to communicate anonymously on the Internet, for example, in group discussion or instant messaging forums. Existing solutions are vulnerable to misbehaving users, however, who may abuse their anonymity to disrupt communication. Dining Cryptographers Networks (DC-nets) leave groups vulnerable to denial-of-service and Sybil attacks; mix networks are difficult to protect against traffic analysis; and accountable voting schemes are unsuited to general anonymous messaging. Dissent is the first general protocol offering provable anonymity and accountability for moderate-size groups, while efficiently handling unbalanced communication demands among users. We present an improved and hardened Dissent protocol, define its precise security properties, and offer rigorous proofs of these properties. The improved protocol systematically addresses the delicate balance between provably hiding the identities of well-behaved users and provably revealing the identities of disruptive users, a challenging task because many forms of misbehavior are inherently undetectable. The new protocol also addresses several nontrivial attacks on the original Dissent protocol stemming from subtle design flaws.
Keywords: Anonymous communication, accountable anonymity, provable security (ID#: 15-5084)
URL:   http://doi.acm.org/10.1145/2629621
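Dissent builds on DC-nets, whose anonymity guarantee is easy to demonstrate in miniature: with pairwise shared pads, XOR-ing everyone's announcement cancels all pads and reveals the message without revealing the sender. The toy round below is our own sketch of that primitive, not the Dissent protocol itself, which adds accountability on top.

```python
import secrets
from functools import reduce

def dc_net_round(n_members: int, sender: int, message: int, bits: int = 32) -> int:
    """One XOR-based Dining Cryptographers round.

    Every pair of members shares a random pad. Each member announces the
    XOR of all its shared pads; the sender additionally XORs in its message.
    XOR-ing all announcements cancels every pad (each appears twice) and
    reveals the message, while no single announcement identifies the sender.
    """
    # pairwise shared pads: pads[i][j] == pads[j][i]
    pads = [[0] * n_members for _ in range(n_members)]
    for i in range(n_members):
        for j in range(i + 1, n_members):
            pads[i][j] = pads[j][i] = secrets.randbits(bits)

    announcements = []
    for i in range(n_members):
        share = reduce(lambda acc, j: acc ^ pads[i][j], range(n_members), 0)
        if i == sender:
            share ^= message
        announcements.append(share)

    # Any observer can compute the round result, but not who sent it.
    return reduce(lambda a, b: a ^ b, announcements, 0)

assert dc_net_round(5, sender=2, message=0xC0FFEE) == 0xC0FFEE
```

The abstract's point is that this elegant primitive is fragile in practice (jamming, Sybil members), which is exactly the gap Dissent's accountability mechanisms close.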

 

Mengtao Sun, Gang Tan; “NativeGuard: Protecting Android Applications from Third-Party Native Libraries;” WiSec '14 Proceedings of the 2014 ACM Conference on Security and Privacy in Wireless & Mobile Networks, July 2014, Pages 165-176. Doi: 10.1145/2627393.2627396 Abstract: Android applications often include third-party libraries written in native code. However, current native components are not well managed by Android's security architecture. We present NativeGuard, a security framework that isolates native libraries from other components in Android applications. Leveraging the process-based protection in Android, NativeGuard isolates native libraries of an Android application into a second application where unnecessary privileges are eliminated. NativeGuard requires neither modifications to Android nor access to the source code of an application. It addresses multiple technical issues to support various interfaces that Android provides to the native world. Experimental results demonstrate that our framework works well with a set of real-world applications, and incurs only modest overhead on benchmark programs.
Keywords: android, java native interface, privilege isolation (ID#: 15-5085)
URL: http://doi.acm.org/10.1145/2627393.2627396

 

Christopher Smowton, Jacob R. Lorch, David Molnar, Stefan Saroiu, Alec Wolman; “Zero-effort Payments: Design, Deployment, and Lessons;” UbiComp '14 Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, September 2014, Pages 763-774. Doi: 10.1145/2632048.2632067 Abstract: This paper presents Zero-Effort Payments (ZEP), a seamless mobile computing system designed to accept payments with no effort on the customer's part beyond a one-time opt-in. With ZEP, customers need not present cards nor operate smartphones to convey their identities. ZEP uses three complementary identification technologies: face recognition, proximate device detection, and human assistance. We demonstrate that the combination of these technologies enables ZEP to scale to the level needed by our deployments.  We designed and built ZEP, and demonstrated its usefulness across two real-world deployments lasting five months of continuous deployment, and serving 274 customers. The different nature of our deployments stressed different aspects of our system. These challenges led to several system design changes to improve scalability and fault-tolerance.
Keywords: BLE, biometrics, Bluetooth, face recognition, fault tolerance, indoor localization, latency, mobile payments, scalability (ID#: 15-5086)
URL:   http://doi.acm.org/10.1145/2632048.2632067

 

Chunyi Peng, Chi-Yu Li, Hongyi Wang, Guan-Hua Tu, Songwu Lu; “Real Threats to Your Data Bills: Security Loopholes and Defenses in Mobile Data Charging;” CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 727-738. Doi: 10.1145/2660267.2660346 Abstract: Secure mobile data charging (MDC) is critical to cellular network operations. It must charge the right user for the right volume that (s)he authorizes to consume (i.e., the requirements of authentication, authorization, and accounting (AAA)). In this work, we conduct a security analysis of the MDC system in cellular networks. We find that all three AAA requirements can be breached in both design and practice, and identify three concrete vulnerabilities: authentication bypass, authorization fraud, and accounting volume inaccuracy. The root causes lie in the technology fundamentals of cellular networks and the Internet IP design, as well as in imprudent implementations. We devise three showcase attacks to demonstrate that even simple attacks can easily penetrate operational 3G/4G cellular networks. We further propose and evaluate defense solutions.
Keywords:  aaa, accounting, attack, authentication, authorization, cellular networks, defense, mobile data services (ID#: 15-5087)
URL: http://doi.acm.org/10.1145/2660267.2660346

 

Agostino Bruzzone, Marina Massei, Francesco Longo, Simonluca Poggi, Matteo Agresta, Christian Bartolucci, Letizia Nicoletti; “Human Behavior Simulation for Complex Scenarios Based on Intelligent Agents;” ANSS '14 Proceedings of the 2014 Annual Simulation Symposium, April 2014, Article No. 10. Doi: 10.1145/2664292.2664302 Abstract: The focus of this paper is to develop a scenario and realistic case study to be applied in the human behavior simulation for complex scenarios involving coalition operations; for this purpose the intelligent agents will be used in order to reproduce the interactions among forces, local population and interest groups as well as the consequences of different COAs (Courses of Actions). The proposed modeling approach considers the complex interactions among many variables and resulting as effects of the Commander decisions in a comprehensive scenario involving multiple layers (i.e. Political, Military, Economic, Diplomatic, Social, Media and Infrastructure); the authors propose here a realistic scenario for using Interoperable Simulation on Crisis Management related to a NEO (Non-combatant Evacuation Operation).
Keywords: computer generated forces, crisis management, human behavior models, intelligent agents, interoperable simulation, non combatant evacuation operations (ID#: 15-5088)
URL: http://dl.acm.org/citation.cfm?id=2664292.2664302

 

R. Cohen, D. Y. Lam, N. Agarwal, M. Cormier, J. Jagdev, T. Jin, M. Kukreti, J. Liu, K. Rahim, R. Rawat, W. Sun, D. Wang, M. Wexler; “Using Computer Technology to Address the Problem of Cyberbullying;” ACM SIGCAS Computers and Society, Volume 44 Issue 2, July 2014, Pages 52-61. Doi: 10.1145/2656870.2656876 Abstract: The issue of cyberbullying is a social concern that has arisen due to the prevalent use of computer technology today. In this paper, we present a multi-faceted solution to mitigate the effects of cyberbullying, one that uses computer technology in order to combat the problem. We propose to provide assistance for various groups affected by cyberbullying (the bullied and the bully, both). Our solution was developed through a series of group projects and includes i) technology to detect the occurrence of cyberbullying ii) technology to enable reporting of cyberbullying iii) proposals to integrate third-party assistance when cyberbullying is detected iv) facilities for those with authority to manage online social networks or to take actions against detected bullies. In all, we demonstrate how this important social problem which arises due to computer technology can also leverage computer technology in order to take steps to better cope with the undesirable effects that have arisen.
Keywords: cyberbullying, education, online safety, social network (ID#: 15-5089)
URL: http://doi.acm.org/10.1145/2656870.2656876



Elliptic Curve Cryptography from ACM, 2014, Part 1

 
SoS Logo

Elliptic Curve Cryptography

ACM (2014)

Part 1

 

In Issue Number 4 of the 2015 Newsletter, the editors offered publications of interest about Elliptic Curve Cryptography from IEEE sources in five parts.  This bibliography adds research work published by the Association for Computing Machinery (ACM) in 2014.


Andrea Höller, Norbert Druml, Christian Kreiner, Christian Steger, Tomaz Felicijan; Hardware/Software Co-Design of Elliptic-Curve Cryptography for Resource-Constrained Applications; DAC '14 Proceedings of the 51st Annual Design Automation Conference, June 2014, Pages 1-6. Doi: 10.1145/2593069.2593148 Abstract: ECC is an asymmetric encryption providing a comparably high cryptographic strength in relation to the key sizes employed. This makes ECC attractive for resource-constrained systems. While pure hardware solutions usually offer a good performance and a low power consumption, they are inflexible and typically lead to a high area.  Here, we show a flexible design approach using a 163-bit GF(2m) elliptic curve and an 8-bit processor. We propose improvements to state-of-the-art software algorithms and present innovative hardware/software codesign variants. The proposed implementation offers highly competitive performance in terms of performance and area.
Keywords: Elliptic Curve Cryptography, Embedded Devices, RFID (ID#: 15-4504)
URL: http://doi.acm.org/10.1145/2593069.2593148
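As background for the entry above: arithmetic in GF(2^m) treats field elements as binary polynomials, with XOR as addition and carry-less multiplication followed by modular reduction. The sketch below is an illustrative toy, not the paper's hardware/software design; the B-163 reduction polynomial x^163 + x^7 + x^6 + x^3 + 1 is the standard NIST choice for 163-bit binary fields and is assumed here for concreteness.

```python
M = 163
# NIST B-163 reduction polynomial: x^163 + x^7 + x^6 + x^3 + 1
R = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1

def gf2m_mul(a: int, b: int) -> int:
    """Multiply two GF(2^163) elements: carry-less product, then reduce."""
    p = 0
    while b:
        if b & 1:
            p ^= a          # "addition" in GF(2^m) is XOR
        a <<= 1
        b >>= 1
    # Reduce bits at positions >= M, working from the top down.
    for i in range(p.bit_length() - 1, M - 1, -1):
        if (p >> i) & 1:
            p ^= R << (i - M)
    return p

# x * x = x^2, and (x + 1)^2 = x^2 + 1 in characteristic 2.
assert gf2m_mul(2, 2) == 4
assert gf2m_mul(3, 3) == 5
# x^162 * x = x^163 = x^7 + x^6 + x^3 + 1 (mod R).
assert gf2m_mul(1 << 162, 2) == 0b11001001
```

The absence of carries is what makes this operation cheap in hardware relative to integer multiplication of the same width.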

 

Yu-Fang Chen, Chang-Hong Hsu, Hsin-Hung Lin, Peter Schwabe, Ming-Hsien Tsai, Bow-Yaw Wang, Bo-Yin Yang, Shang-Yi Yang; Verifying Curve25519 Software; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 299-309. Doi:  10.1145/2660267.2660370 Abstract: This paper presents results on formal verification of high-speed cryptographic software. We consider speed-record-setting hand-optimized assembly software for Curve25519 elliptic-curve key exchange presented by Bernstein et al. at CHES 2011. Two versions for different microarchitectures are available. We successfully verify the core part of the computation, and reproduce detection of a bug in a previously published edition. An SMT solver supporting array and bit-vector theories is used to establish almost all properties. Remaining properties are verified in a proof assistant with simple rewrite tactics. We also exploit the compositionality of Hoare logic to address the scalability issue. Essential differences between both versions of the software are discussed from a formal-verification perspective.
Keywords: boolector, coq, elliptic-curve cryptography, hoare logic, optimized assembly, smt solver (ID#: 15-4505)
URL: http://doi.acm.org/10.1145/2660267.2660370
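The software verified in the paper above implements X25519 scalar multiplication. As a point of reference (and emphatically not the verified assembly itself), the Montgomery ladder from RFC 7748 can be sketched in pure Python:

```python
P = 2**255 - 19          # field prime for curve25519
A24 = 121665             # (486662 - 2) / 4

def x25519(k: int, u: int) -> int:
    # Clamp the scalar as RFC 7748 prescribes.
    k &= ~7
    k &= (1 << 254) - 1
    k |= 1 << 254
    x1 = u % P
    x2, z2, x3, z3 = 1, 0, x1, 1
    swap = 0
    for t in range(254, -1, -1):      # constant-time-style ladder
        bit = (k >> t) & 1
        swap ^= bit
        if swap:
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = bit
        a = (x2 + z2) % P;  aa = a * a % P
        b = (x2 - z2) % P;  bb = b * b % P
        e = (aa - bb) % P
        c = (x3 + z3) % P;  d = (x3 - z3) % P
        da = d * a % P;     cb = c * b % P
        x3 = (da + cb) % P; x3 = x3 * x3 % P
        z3 = (da - cb) % P; z3 = z3 * z3 % P; z3 = z3 * x1 % P
        x2 = aa * bb % P
        z2 = e * (aa + A24 * e) % P
    if swap:
        x2, x3, z2, z3 = x3, x2, z3, z2
    return x2 * pow(z2, P - 2, P) % P   # projective -> affine

# Diffie-Hellman consistency: both parties derive the same shared value.
a_priv, b_priv = 0x1234, 0x5678        # toy scalars (clamping adjusts them)
shared_a = x25519(a_priv, x25519(b_priv, 9))
shared_b = x25519(b_priv, x25519(a_priv, 9))
assert shared_a == shared_b
```

The verified assembly computes the same function, but with hand-scheduled multi-limb field arithmetic; the formal-verification effort in the paper is about proving that those limb-level optimizations still implement this mathematics.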

 

Ruan de Clercq, Leif Uhsadel, Anthony Van Herrewege, Ingrid Verbauwhede; Ultra Low-Power Implementation of ECC on the ARM Cortex-M0+; DAC '14 Proceedings of the 51st Annual Design Automation Conference, June 2014, Pages 1-6. Doi: 10.1145/2593069.2593238 Abstract: In this work, elliptic curve cryptography (ECC) is used to make a fast, and very low-power software implementation of a public-key cryptography algorithm on the ARM Cortex-M0+. An optimization of the López-Dahab field multiplication method is proposed, which aims to reduce the number of memory accesses, as this is a slow operation on the target platform. A mixed C and assembly implementation was made; a random point multiplication requires 34.16 μJ, whereas our fixed point multiplication requires 20.63 μJ. Our implementation's energy consumption beats all other software implementations, on any platform, by a factor of at least 3.3.
Keywords: ECC, Embedded, Low-Power, Public-key cryptography (ID#: 15-4506)
URL: http://doi.acm.org/10.1145/2593069.2593238

 

Eun-Jun Yoon, Kee-Young Yoo; A Biometric-Based Authenticated Key Agreement Scheme Using ECC for Wireless Sensor Networks; SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 699-705. Doi: 10.1145/2554850.2555045 Abstract: Recently, various user authentication schemes have successfully drawn researchers' attention and been studied widely in order to guarantee secure communication for wireless sensor networks (WSNs). This paper proposes a new biometric-based authenticated key agreement scheme using Elliptic Curve Cryptosystem (ECC) for WSN to minimize the complexity of computational costs between the sensor node and the GW-node and fit low-power sensor network environments. Compared with previous schemes, the newly proposed scheme has the following more practical characteristics: (1) it provides secure session key agreement function by adopting elliptic curve cryptosystem, (2) it can reduce the total execution time and memory requirement due to the elliptic curve cryptography, (3) it is not only secure against well-known cryptographical attacks but also provides perfect forward secrecy, and (4) it does not require the user password and uses only hash function. Analysis results show that the proposed scheme is extremely suitable for use in WSNs since it provides security, reliability, and efficiency.
Keywords: authentication, biometrics, impersonation attack, key agreement, security, wireless sensor networks (ID#: 15-4507)
URL: http://doi.acm.org/10.1145/2554850.2555045
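The key agreement in the scheme above rests on elliptic-curve Diffie-Hellman. A toy illustration over the textbook curve y^2 = x^3 + 2x + 3 (mod 97) follows; the parameters are chosen only for readability, are far too small for real security, and are not taken from the paper.

```python
P, A = 97, 2        # toy prime and curve coefficient a (curve: y^2 = x^3 + 2x + 3)
G = (3, 6)          # base point: 6^2 = 36 = 3^3 + 2*3 + 3 (mod 97)

def ec_add(p1, p2):
    """Affine point addition; None represents the point at infinity."""
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                        # inverse points
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, P) % P   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, P) % P          # chord slope
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def ec_mul(k, pt):
    """Double-and-add scalar multiplication."""
    acc = None
    while k:
        if k & 1:
            acc = ec_add(acc, pt)
        pt = ec_add(pt, pt)
        k >>= 1
    return acc

# Each party keeps a secret scalar; exchanging the public points
# a*G and b*G lets both compute the same shared point a*b*G.
alice, bob = 13, 29
shared_a = ec_mul(alice, ec_mul(bob, G))
shared_b = ec_mul(bob, ec_mul(alice, G))
assert shared_a == shared_b
```

The appeal for sensor nodes, as the abstract notes, is that this construction reaches a given security level with far smaller keys (and hence less memory and computation) than RSA-style schemes.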

 

Binod Vaidya, Dimitrios Makrakis, Hussein Mouftah;  Effective Public Key Infrastructure for Vehicle-to-Grid Network; DIVANet '14 Proceedings of the Fourth ACM International Symposium on Development and Analysis Of Intelligent Vehicular Networks And Applications, September 2014, Pages 95-101. Doi: 10.1145/2656346.2656348 Abstract: A growth of electric vehicle (EV) technologies likely leads a fundamental shift not only in transportation sector but also in the existing electric power grid infrastructure. In Smart grid infrastructure, vehicle-to-grid (V2G) network can be formed such that participating EVs can be used to store energy and supply this energy back to the power grid when required. To realize proper deployment of V2G network, charging infrastructure having various entities such as charging facility, clearinghouse, and energy provider has to be constructed. So use of Public key infrastructure (PKI) is indispensable for provisioning security solution in V2G network. The ISO/IEC 15118 standard is ascribed that incorporates X.509 PKI solution for V2G network. However, as traditional X.509 based PKI for V2G network has several shortcomings, we have proposed an effectual PKI for a V2G network that is built on based on elliptic curve cryptography and self-certified public key technique having implicit certificate to reduce certificate size and certificate verification time. We show that the proposed solution outperforms the existing solution.
Keywords: ECC, ISO/IEC 15118, PKI, X.509, implicit certificate, smart grid, vehicle-to-grid network (ID#: 15-4508)
URL: http://doi.acm.org/10.1145/2656346.2656348

 

Jo Vliegen, Nele Mentens, Ingrid Verbauwhede;  Secure, Remote, Dynamic Reconfiguration of FPGAs; ACM Transactions on Reconfigurable Technology and Systems (TRETS), Volume 7 Issue 4, January 2015, Article No. 35.  Doi:  10.1145/2629423 Abstract: With the widespread availability of broadband Internet, Field-Programmable Gate Arrays (FPGAs) can get remote updates in the field. This provides hardware and software updates, and enables issue solving and upgrade ability without device modification. In order to prevent an attacker from eavesdropping or manipulating the configuration data, security is a necessity.  This work describes an architecture that allows the secure, remote reconfiguration of an FPGA. The architecture is partially dynamically reconfigurable and it consists of a static partition that handles the secure communication protocol and a single reconfigurable partition that holds the main application. Our solution distinguishes itself from existing work in two ways: it provides entity authentication and it avoids the use of a trusted third party. The former provides protection against active attackers on the communication channel, while the latter reduces the number of reliable entities. Additionally, this work provides basic countermeasures against simple power-oriented side-channel analysis attacks.  The result is an implementation that is optimized toward minimal resource occupation. Because configuration updates occur infrequently, configuration speed is of minor importance with respect to area. A prototype of the proposed design is implemented, using 5,702 slices and having minimal downtime.
Keywords: DPR, FPGA, partial reconfiguration, remote, secure (ID#: 15-4509)
URL: http://doi.acm.org/10.1145/2629423

 

Debapriya Basu Roy, Debdeep Mukhopadhyay, Masami Izumi, Junko Takahashi; Tile Before Multiplication: An Efficient Strategy to Optimize DSP Multiplier for Accelerating Prime Field ECC for NIST Curves; DAC '14 Proceedings of the 51st Annual Design Automation Conference, June 2014, Article 177, Pages 1-6. Doi: 10.1145/2593069.2593234 Abstract: High speed DSP blocks present in the modern FPGAs can be used to implement prime field multiplication to accelerate Elliptic Curve scalar multiplication in prime fields. However, compared to logic slices, DSP blocks are scarce resources, hence its usage needs to be optimized. The asymmetric 25 × 18 signed multipliers in FPGAs open a new paradigm for multiplier design, where operand decomposition becomes equivalent to a tiling problem. Previous literature has reported that for asymmetric multiplier, it is possible to generate a tiling (known as non-standard tiling) which requires less number of DSP blocks compared to standard tiling, generated by school book algorithm. In this paper, we propose a generic technique for such tiling generation and generate this tiling for field multiplication in NIST specified curves. We compare our technique with standard school book algorithm to highlight the improvement. The acceleration in ECC scalar multiplication due to the optimized field multiplier is experimentally validated for P-256. The impact of this accelerated scalar multiplication is shown for the key encapsulation algorithm PSEC-KEM (Provably Secure Key Encapsulation Mechanism).
Keywords: DSP Blocks, ECC, FPGA, NIST Curves (ID#: 15-4510)
URL: http://doi.acm.org/10.1145/2593069.2593234
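The decomposition the paper studies can be pictured in software: a wide multiplication is split into small asymmetric partial products, each of which would occupy one DSP tile. The limb widths below (24 and 17 bits, the unsigned capacity of a 25 × 18 signed multiplier) are an illustrative assumption; the sketch only checks that re-accumulating the tiles reproduces the full product, not the paper's tiling optimization itself.

```python
WA, WB = 24, 17   # asymmetric "tile" widths (assumed for illustration)

def limbs(x: int, w: int):
    """Split x into w-bit limbs, least significant first."""
    out = []
    while x:
        out.append(x & ((1 << w) - 1))
        x >>= w
    return out or [0]

def tiled_mul(a: int, b: int) -> int:
    """Accumulate one small partial product per (i, j) tile."""
    total = 0
    for i, ai in enumerate(limbs(a, WA)):
        for j, bj in enumerate(limbs(b, WB)):
            total += (ai * bj) << (i * WA + j * WB)   # one DSP-sized tile
    return total

import random
for _ in range(20):
    a, b = random.getrandbits(256), random.getrandbits(256)
    assert tiled_mul(a, b) == a * b
```

The paper's contribution is choosing a non-rectangular arrangement of such tiles so that fewer of them cover the same operand area, which is what reduces DSP-block usage.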

 

Gilles Barthe, François Dupressoir, Pierre-Alain Fouque, Benjamin Grégoire, Jean-Christophe Zapalowicz; Synthesis of Fault Attacks on Cryptographic Implementations; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1016-1027. Doi:  10.1145/2660267.2660304 Abstract: Fault attacks are attacks in which an adversary with physical access to a cryptographic device, say a smartcard, tampers with the execution of an algorithm to retrieve secret material. Since the seminal Bellcore attack on modular exponentiation, there has been extensive work to discover new fault attacks against cryptographic schemes and develop countermeasures against such attacks. Originally focused on high-level algorithmic descriptions, these efforts increasingly focus on concrete implementations. While lowering the abstraction level leads to new fault attacks, it also makes their discovery significantly more challenging. In order to face this trend, it is therefore desirable to develop principled, tool-supported approaches that allow a systematic analysis of the security of cryptographic implementations against fault attacks.  We propose, implement, and evaluate a new approach for finding fault attacks against cryptographic implementations. Our approach is based on identifying implementation-independent mathematical properties, or fault conditions. We choose fault conditions so that it is possible to recover secret data purely by computing on sufficiently many data points that satisfy them. Fault conditions capture the essence of a large number of attacks from the literature, including lattice-based attacks on RSA. Moreover, they provide a basis for discovering automatically new attacks: using fault conditions, we specify the problem of finding faulted implementations as a program synthesis problem. Using a specialized form of program synthesis, we discover multiple faulted attacks on RSA and ECDSA. 
Several of the attacks found by our tool are new, and of independent interest. 
Keywords: automated proofs, fault attacks, program synthesis, program verification (ID#: 15-4511)
URL: http://doi.acm.org/10.1145/2660267.2660304

 

Peter Chapin, Christian Skalka;  SpartanRPC: Remote Procedure Call Authorization in Wireless Sensor Networks; ACM Transactions on Information and System Security (TISSEC), Volume 17 Issue 2, November 2014, Article No. 5. Doi: 10.1145/2644809 Abstract: We describe SpartanRPC, a secure middleware technology that supports cooperation between distinct security domains in wireless sensor networks. SpartanRPC extends nesC to provide a link-layer remote procedure call (RPC) mechanism, along with an enhancement of configuration wirings that allow specification of remote, dynamic endpoints. RPC invocation is secured via an authorization logic that enables servers to specify access policies and requires clients to prove authorization. This mechanism is implemented using a combination of symmetric and public key cryptography. We report on benchmark testing of a prototype implementation and on an application of the framework that supports secure collaborative use and administration of an existing WSN data-gathering system.
Keywords: Remote procedure call, sensor networks, trust management (ID#: 15-4512)
URL: http://doi.acm.org/10.1145/2644809

 

Gary Anthes; French Team Invents Faster Code-Breaking Algorithm; Communications of the ACM, Volume 57 Issue 1, January 2014, Pages 21-23. Doi: 10.1145/2555807 Abstract: New method can crack certain cryptosystems far faster than earlier alternatives.
Keywords:  (not provided) (ID#: 15-4513)
URL: http://doi.acm.org/10.1145/2555807

 

Pawel Szalachowski, Stephanos Matsumoto, Adrian Perrig; PoliCert: Secure and Flexible TLS Certificate Management; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 406-417.  Doi: 10.1145/2660267.2660355  Abstract: The recently proposed concept of publicly verifiable logs is a promising approach for mitigating security issues and threats of the current Public-Key Infrastructure (PKI). Although much progress has been made towards a more secure infrastructure, the currently proposed approaches still suffer from security vulnerabilities, inefficiency, or incremental deployment challenges.  In this paper we propose PoliCert, a comprehensive log-based and domain-oriented architecture that enhances the security of PKI by offering: a) stronger authentication of a domain's public keys, b) comprehensive and clean mechanisms for certificate management, and c) an incentivised incremental deployment plan. Surprisingly, our approach has proved fruitful in addressing other seemingly unrelated problems such as TLS-related error handling and client/server misconfiguration.
Keywords: certificate validation, public log servers, public-key certificate, public-key infrastructure, security policy, ssl, tls (ID#: 15-4514)
URL: http://doi.acm.org/10.1145/2660267.2660355

 

S. Prayla Shyry; Novel Enhanced Encryption Algorithm for Shared Key Generation; ICONIAAC '14 Proceedings of the 2014 International Conference on Interdisciplinary Advances in Applied Computing, October 2014, Article No. 41. Doi:  10.1145/2660859.2660953 Abstract: The central theme in dynamic environments is secured transmission of packets to remote Cooperative group. In dynamic environments, a new encrypted shared key has to be generated for every join/leave event and forwarded to the key distribution centre (KDC) of the requester. Existing algorithms have used rekeying options for shared key generation. But it requires more bandwidth and time which ultimately degrades the performance of the network. In this paper, a novel Enhanced Encryption Algorithm (EEA) for generating a secured (encrypted) shared key is proposed for the transmission of packets in dynamic environments.
Keywords: Key Distribution Centre, Re-keying, Shared key (ID#: 15-4515)
URL: http://doi.acm.org/10.1145/2660859.2660953

 

Marco Tiloca; Efficient Protection of Response Messages in DTLS-Based Secure Multicast Communication; SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages  466. Doi: 10.1145/2659651.2659668 Abstract: DTLS is a standardized security protocol designed to provide end-to-end secure communication among two peers, and particularly considered for the emerging Internet of Things. In order to protect group communication, the IETF is currently working on a method to secure multicast messages through the same DTLS security services. However, such an approach relies on traditional DTLS sessions to protect unicast responses to multicast messages. This increases the amount of security material stored by group members and can have a relevant impact on network performance. In this paper we propose an extension to the IETF approach which allows to efficiently protect group responses by reusing the same group key material. Our proposal does not require to establish additional DTLS sessions, thus preserving high communication performance within the group and limiting storage overhead on group members. Furthermore, we discuss a suitable key management policy to provision and renew group key material.
Keywords: DTLS, Group communication, Multicast, Security (ID#: 15-4516)
URL: http://doi.acm.org/10.1145/2659651.2659668

 

Vireshwar Kumar, Jung-Min Park, Kaigui Bian; Blind Transmitter Authentication for Spectrum Security and Enforcement; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security,  November 2014, pages 787-798. Doi: 10.1145/2660267.2660318  Abstract: Recent advances in spectrum access technologies, such as cognitive radios, have made spectrum sharing a viable option for addressing the spectrum shortage problem. However, these advances have also contributed to the increased possibility of "hacked" or "rogue" radios causing harm to the spectrum sharing ecosystem by causing significant interference to other wireless devices. One approach for countering such threats is to employ a scheme that can be used by a regulatory entity (e.g., FCC) to uniquely identify a transmitter by authenticating its waveform. This enables the regulatory entity to collect solid evidence of rogue transmissions that can be used later during an adjudication process. We coin the term Blind Transmitter Authentication (BTA) to refer to this approach. Unlike in the existing techniques for PHY-layer authentication, in BTA, the entity that is authenticating the waveform is not the intended receiver. Hence, it has to extract and decode the authentication signal "blindly" with little or no knowledge of the transmission parameters. In this paper, we propose a novel BTA scheme called Frequency offset Embedding for Authenticating Transmitters (FEAT). FEAT embeds the authentication information into the transmitted waveform by inserting an intentional frequency offset. Our results indicate that FEAT is a practically viable approach and is very robust to harsh channel conditions. Our evaluation of FEAT is based on theoretical bounds, simulations, and indoor experiments using an actual implementation.
Keywords: cognitive radios, phy-layer authentication, spectrum sharing and management, transmitter identification (ID#: 15-4517)
URL: http://doi.acm.org/10.1145/2660267.2660318

 

David Basin, Cas Cremers, Tiffany Hyun-Jin Kim, Adrian Perrig, Ralf Sasse, Pawel Szalachowski; ARPKI: Attack Resilient Public-Key Infrastructure;  CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 382-393. Doi:  10.1145/2660267.2660298 Abstract: We present ARPKI, a public-key infrastructure that ensures that certificate-related operations, such as certificate issuance, update, revocation, and validation, are transparent and accountable. ARPKI is the first such infrastructure that systematically takes into account requirements identified by previous research. Moreover, ARPKI is co-designed with a formal model, and we verify its core security property using the Tamarin prover. We present a proof-of-concept implementation providing all features required for deployment. ARPKI efficiently handles the certification process with low overhead and without incurring additional latency to TLS. ARPKI offers extremely strong security guarantees, where compromising n-1 trusted signing and verifying entities is insufficient to launch an impersonation attack. Moreover, it deters misbehavior as all its operations are publicly visible.
Keywords: attack resilience, certificate validation, formal validation, public log servers, public-key infrastructure, tls (ID#: 15-4518)
URL: http://doi.acm.org/10.1145/2660267.2660298

 

Mario Cornejo, Sylvain Ruhault; Characterization of Real-Life PRNGs under Partial State Corruption; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1004-1015. Doi: 10.1145/2660267.2660377 Abstract: Pseudo-random number generators (PRNGs) are widely used as a randomness source in cryptographic applications. It is essential for their security that the internal state, in which the entropy is accumulated, is kept secret. However, this assumption is unrealistic for PRNGs that are implemented in software, as the internal state can be partially corrupted through memory corruption bugs such as buffer overflows or through faults attacks. The recent Heartbleed bug gives us a concrete illustration of this vulnerability. In this work we study several widely used PRNGs from different popular providers, including OpenSSL, OpenJDK, Android, IBM and Bouncy Castle and we characterize how they handle their internal states. We formalize a framework based on the most recent and strongest security model called robustness of PRNGs to analyze these PRNGs and their implementations. With this framework we capture the notion of how much of the internal state must be corrupted in order to generate a predictable output. Using this framework, we determine the number of bits of the internal state that an attacker needs to corrupt in order to produce a predictable output. We also show that two of the PRNGs do not require state compromise to generate a non-random output. To the best of our knowledge, we present the first thorough characterization of an IBM implementation of a PRNG.
Keywords: android, java, openssl, randomness, security models (ID#: 15-4519)
URL: http://doi.acm.org/10.1145/2660267.2660377
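To see why the amount of recoverable internal state matters, consider the degenerate case of a linear congruential generator, whose entire state is exposed by any single output. This toy (not one of the PRNGs analyzed in the paper; the constants are the familiar glibc-style LCG parameters) shows an observer predicting all subsequent outputs from one leak:

```python
MOD, MULT, INC = 2**31, 1103515245, 12345   # textbook LCG constants

def lcg_next(state: int) -> int:
    return (MULT * state + INC) % MOD

# The victim's generator produces a stream of values.
state = 123456789
victim_outputs = []
for _ in range(5):
    state = lcg_next(state)
    victim_outputs.append(state)

# An attacker who observes one output has, for an LCG, recovered the
# full internal state -- and can replay the generator indefinitely.
s = victim_outputs[1]
predicted = []
for _ in range(3):
    s = lcg_next(s)
    predicted.append(s)

assert predicted == victim_outputs[2:5]
```

Robust PRNG designs of the kind the paper's security model captures are meant to make such prediction fail unless a much larger fraction of the state is corrupted or disclosed.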

 

Kuan-Chung Huang, Yu-Chen Wu, Che-Wei Chang, Tei-Wei Kuo, Chi-Sheng Shih, Qingxu Deng; Real-time Process Synchronization for Systems with Accelerators; RACS '14 Proceedings of the 2014 Conference on Research in Adaptive and Convergent Systems, October 2014, Pages 350-355. Doi:  10.1145/2663761.2664220 Abstract: This work is motivated by the needs to manage the priority inversion problem without sacrificing the utilization of increasingly popular hardware accelerators. A new mechanism is developed to dedicate accelerators to selected higher-priority tasks. The floor and ceiling priorities of accelerators are thus proposed as an extension of the concept of semaphore priority ceiling to guarantee at most two priority inversions for any real-time task in a uniprocessor system with multiple accelerators. The properties of the proposed concept are explored with respect to blocking behaviors over the CPU and accelerators and are verified by a series of experiments, for which the insight of the simple but effective idea is evaluated and presented.
Keywords: blocking time analysis, dedicated accelerators, synchronization protocols (ID#: 15-4520)
URL: http://doi.acm.org/10.1145/2663761.2664220

 

Jun Tao, Jun Ma, Melissa Keranen, Jean Mayo, Ching-Kuang Shene, Chaoli Wang; RSAvisual: A Visualization Tool for the RSA Cipher; SIGCSE '14 Proceedings of the 45th ACM Technical Symposium on Computer Science Education, March 2014, Pages 635-640. Doi: 10.1145/2538862.2538891 Abstract: This paper describes a visualization tool RSAvisual that helps students learn and instructors teach the RSA cipher. This tool permits the user to visualize the steps of the RSA cipher, do encryption and decryption, learn simple factorization algorithms, and perform some elementary attacks. The demo mode of RSAvisual can be used for classroom presentation and self-study. With the practice mode, the user may go through steps in encryption, decryption, the Extended Euclidean algorithm, two simple factorization algorithms and three elementary attacks. The user may compute the output of each operation and check for correctness. This helps students learn the primitive operations and how they are used in the RSA cipher. The opportunity for self-study provides an instructor with greater flexibility in selecting a lecture pace for the detailed materials. Classroom evaluation was positive and very encouraging.
Keywords: cryptography, visualization (ID#: 15-4521)
URL: http://doi.acm.org/10.1145/2538862.2538891
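The operations RSAvisual animates (key generation, the extended Euclidean algorithm, encryption, and decryption) fit in a few lines of Python using the classic textbook toy parameters p = 61, q = 53, which are insecure by design and used here purely for illustration:

```python
def egcd(a: int, b: int):
    """Extended Euclid: returns (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

p, q = 61, 53                      # toy primes
n, phi = p * q, (p - 1) * (q - 1)  # n = 3233, phi = 3120
e = 17                             # public exponent

g, d, _ = egcd(e, phi)
assert g == 1                      # e must be coprime to phi(n)
d %= phi                           # private exponent: d * e = 1 (mod phi)

m = 65                             # plaintext, must be < n
c = pow(m, e, n)                   # encrypt: c = m^e mod n
assert pow(c, d, n) == m           # decrypt: m = c^d mod n
```

The tool's "practice mode" walks students through exactly these steps by hand, which is why having a runnable reference makes checking intermediate values easy.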

 

Raghav V. Sampangi, Srinivas Sampalli; HiveSign: Dynamic Message Signatures For Resource-Constrained Wireless Networks; Q2SWinet '14 Proceedings of the 10th ACM Symposium on QoS and Security for Wireless and Mobile Networks, September 2014, Pages 33-40. Doi: 10.1145/2642687.2642699 Abstract: Radio Frequency Identification (RFID) and Wireless body area network (WBAN) are two of the emerging wireless networks that are becoming increasingly popular, owing to their applicability in a variety of domains and longevity-based designs. However, the flexibility they offer and their reduced manufacturing cost come with a trade-off — they have severe implicit hardware restrictions. These restrictions limit their ability to store a large amount of data and/or perform sophisticated computation, thereby leading them to be classified as resource-constrained wireless networks. Their constraints further limit the security that can be implemented on these devices, necessitating design of optimized solutions for security. In our paper, we present a new approach that generates dynamic message signatures using simple logical operations, hashing and pseudorandom number generation (PRNG) to accomplish integrity and entity authentication. Our approach provides a means to verify the integrity of both the message as well as the key. We validate our proposal using security evaluation and complexity analysis.
Keywords: dynamic message signatures, entity authentication, message signatures, security in resource-constrained networks (ID#: 15-4522)
URL: http://doi.acm.org/10.1145/2642687.2642699
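HiveSign's signatures are built from simple logical operations, hashing, and PRNG output. The standard-library sketch below is not HiveSign itself, only an illustration of the underlying idea: a shared key, a hash, and a per-message counter yield a dynamic tag that authenticates both the message and the key.

```python
import hashlib
import hmac

def tag(key: bytes, counter: int, message: bytes) -> bytes:
    # Mixing a per-message counter into the keyed hash makes each
    # signature dynamic, so a replayed tag fails verification.
    return hmac.new(key, counter.to_bytes(8, "big") + message,
                    hashlib.sha256).digest()

key = b"shared-secret"
t = tag(key, 1, b"sensor reading: 42")

# The verifier, holding the same key and counter, recomputes the tag.
assert hmac.compare_digest(t, tag(key, 1, b"sensor reading: 42"))
# A stale counter or tampered message produces a different tag.
assert not hmac.compare_digest(t, tag(key, 2, b"sensor reading: 42"))
assert not hmac.compare_digest(t, tag(key, 1, b"sensor reading: 43"))
```

On genuinely resource-constrained tags, full HMAC-SHA256 may already be too heavy, which is why schemes like the one above substitute lighter hash and PRNG primitives.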

 

Muhammad Rizwan Asghar, Ashish Gehani, Bruno Crispo, Giovanni Russello; PIDGIN: Privacy-Preserving Interest and Content Sharing In Opportunistic Networks; ASIA CCS '14 Proceedings of the 9th ACM Symposium On Information, Computer And Communications Security, June 2014, Pages 135-146. Doi:  10.1145/2590296.2590303 Abstract: Opportunistic networks have recently received considerable attention from both industry and researchers. These networks can be used for many applications without the need for a dedicated IT infrastructure. In the context of opportunistic networks, content sharing in particular has attracted significant attention. To support content sharing, opportunistic networks often implement a publish-subscribe system in which users may publish their own content and indicate interest in other content through subscriptions. Using a smartphone, any user can act as a broker by opportunistically forwarding both published content and interests within the network. Unfortunately, opportunistic networks are faced with serious privacy and security issues. Untrusted brokers can not only compromise the privacy of subscribers by learning their interests but also can gain unauthorised access to the disseminated content. This paper addresses the research challenges inherent to the exchange of content and interests without: (i) compromising the privacy of subscribers, and (ii) providing unauthorised access to untrusted brokers. Specifically, this paper presents an interest and content sharing solution that addresses these security challenges and preserves privacy in opportunistic networks. We demonstrate the feasibility and efficiency of the solution by implementing a prototype and analysing its performance on smart phones.
Keywords: encrypted CP-ABE policies, privacy-preserving content sharing, secure haggle, secure opportunistic networks, sensitive policy enforcement (ID#: 15-4523)
URL: http://doi.acm.org/10.1145/2590296.2590303

 


Elliptic Curve Cryptography from ACM, 2014, Part 2

 

Elliptic Curve Cryptography

ACM (2014)

Part 2

 

In Issue Number 4 of the 2015 Newsletter, the editors offered publications of interest about Elliptic Curve Cryptography from IEEE sources in five parts.  This bibliography adds research work published by the Association for Computing Machinery (ACM) in 2014.


Tim Pruss, Priyank Kalla, Florian Enescu; Equivalence Verification of Large Galois Field Arithmetic Circuits using Word-Level Abstraction via Gröbner Bases; DAC '14 Proceedings of the 51st Annual Design Automation Conference, June 2014, Pages 1-6. Doi: 10.1145/2593069.2593134 Abstract: Custom arithmetic circuits designed over Galois fields GF(2^k) are prevalent in cryptography, where the field size k is very large (e.g. k = 571 bits). Equivalence checking of such large custom arithmetic circuits against baseline golden models is beyond the capabilities of contemporary techniques. This paper addresses the problem by deriving word-level canonical polynomial representations from gate-level circuits as Z = F(A) over GF(2^k), where Z and A represent the output and input bit-vectors of the circuit, respectively. Using algebraic geometry, we show that the canonical polynomial abstraction can be derived by computing a Gröbner basis of a set of polynomials extracted from the circuit, using a specific elimination (abstraction) term order. By efficiently applying these concepts, we can derive the canonical abstraction in hierarchically designed, custom arithmetic circuits with up to 571-bit datapaths, whereas contemporary techniques can verify only up to 163-bit circuits.
Keywords: Gröbner Bases, Hardware Verification, Word-Level Abstraction (ID#: 15-4524)
URL: http://doi.acm.org/10.1145/2593069.2593134

 

Lindsey N. Whitehurst, Todd R. Andel, J. Todd McDonald; Exploring Security in ZigBee Networks; CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 25-28. Doi: 10.1145/2602087.2602090 Abstract: ZigBee networks have become popular for their low cost, low power, and ease of implementation. The ZigBee protocol has particularly become prevalent for home automation and controlling devices such as door locks and garage door openers. Preventing attacks and reducing vulnerabilities is imperative in cases where there can be high financial losses due to poor security implementations. For systems where low power and cost are desirable, but security is a priority, the application developer must be extremely cautious in the design of their network. This paper surveys security issues and vulnerabilities in the ZigBee specification and current key management schemes proposed for these networks.
Keywords: 802.15.4, ZigBee, security, smart grid, wireless networks (ID#: 15-4525)
URL: http://doi.acm.org/10.1145/2602087.2602090

 

Julian Horsch, Konstantin Böttinger, Michael Weiß, Sascha Wessel, Frederic Stumpf; TrustID: Trustworthy Identities for Untrusted Mobile Devices; CODASPY '14 Proceedings of the 4th ACM Conference On Data And Application Security And Privacy, March 2014, Pages 281-288. Doi:  10.1145/2557547.2557593 Abstract: Identity theft has deep impacts in today's mobile ubiquitous environments. At the same time, digital identities are usually still protected by simple passwords or other insufficient security mechanisms. In this paper, we present the TrustID architecture and protocols to improve this situation. Our architecture utilizes a Secure Element (SE) to store multiple context-specific identities securely in a mobile device, e.g., a smartphone. We introduce protocols for securely deriving identities from a strong root identity into the SE inside the smartphone as well as for using the newly derived IDs. Both protocols do not require a trustworthy smartphone operating system or a Trusted Execution Environment. In order to achieve this, our concept includes a secure combined PIN entry mechanism for user authentication, which prevents attacks even on a malicious device. To show the feasibility of our approach, we implemented a prototype running on a Samsung Galaxy SIII smartphone utilizing a microSD card SE. The German identity card nPA is used as root identity to derive context-specific identities.
Keywords: android, combined pin entry, identity derivation, identity provider, mobile security, npa, secure element, smartphone (ID#: 15-4526)
URL: http://doi.acm.org/10.1145/2557547.2557593

 

Amir Herzberg, Haya Shulman, Bruno Crispo; Less is More: Cipher-Suite Negotiation for DNSSEC; ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 346-355. Doi: 10.1145/2664243.2664283 Abstract: We propose a transport layer cipher-suite negotiation mechanism for the DNSSEC standard, allowing name-servers to send responses containing only the keys and signatures that correspond to the cipher-suite option negotiated with the resolver, rather than sending all the signatures and keys (as is done currently). As we show, a lack of cipher-suite negotiation is one of the factors impeding deployment of DNSSEC, and it also results in the adoption of weak ciphers. Indeed, the vast majority of domains rely on RSA 1024-bit cryptography, which is already considered insecure. Furthermore, domains that want better security have to support a number of cryptographic ciphers. As a result, DNSSEC responses are large and often fragmented, harming DNS functionality and causing inefficiency and vulnerabilities. A cipher-suite negotiation mechanism reduces response sizes, and hence solves the interoperability problems with DNSSEC-signed responses and prevents reflection and cache poisoning attacks.
Keywords: DNS interoperability, DNS security, DNSSEC, cipher suite negotiation (ID#: 15-4527)
URL: http://doi.acm.org/10.1145/2664243.2664283

 

Raphael Spreitzer, Jörn-Marc Schmidt; Group-Signature Schemes on Constrained Devices: The Gap Between Theory and Practice; CS2 '14 Proceedings of the First Workshop on Cryptography and Security in Computing Systems, January 2014, Pages 31-36. Doi: 10.1145/2556315.2556321 Abstract: Group-signature schemes allow members within a predefined group to prove specific properties without revealing more information than necessary. Potential areas of application include electronic IDs (eIDs) and smartcards, i.e., resource-constrained environments. Though literature provides many theoretical proposals for group-signature schemes, practical evaluations regarding the applicability of such mechanisms in resource-constrained environments are missing. In this work, we investigate four different group-signature schemes in terms of mathematical operations, signature length, and the proposed revocation mechanisms. We also use the RELIC toolkit to implement the two most promising of the investigated group-signature schemes—one of which is going to be standardized in ISO/IEC 20008—for the AVR microcontroller. This allows us to give practical insights into the applicability of pairings on the AVR microcontroller in general and the applicability of group-signature schemes in particular on the very same. Contrary to the general recommendation of precomputing and storing pairing evaluations if possible, we observed that the evaluation of pairings might be faster than computations on cached pairings.
Keywords: ATmega128, AVR, group-signature schemes, pairing-based cryptography, type 1 pairings, type 3 pairings (ID#: 15-4528)
URL: http://doi.acm.org/10.1145/2556315.2556321

 

Lakshmi Kuppusamy, Jothi Rangasamy, Praveen Gauravaram; On Secure Outsourcing of Cryptographic Computations to Cloud; SCC '14 Proceedings of the 2nd International Workshop on Security in Cloud Computing, June 2014, Pages 63-68. Doi: 10.1145/2600075.2600085 Abstract: For the past few years, research on secure outsourcing of cryptographic computations has drawn significant attention from academics in the security and cryptology disciplines as well as from information security practitioners. One main reason for this interest is its application to resource-constrained devices such as RFID tags. While there has been significant progress in this domain since Hohenberger and Lysyanskaya provided formal security notions for secure computation delegation, some interesting challenges remain whose solution would support a wider deployment of cryptographic protocols that enable secure outsourcing of cryptographic computations. This position paper brings out these challenging problems with RFID technology as the use case, together with our ideas, where applicable, that can provide a direction towards solving the problems.
Keywords: cryptography, public-key protocols, secrecy (ID#: 15-4529)
URL: http://doi.acm.org/10.1145/2600075.2600085

 

Kim Ramchen, Brent Waters; Fully Secure and Fast Signing from Obfuscation; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 659-673. Doi: 10.1145/2660267.2660306 Abstract: In this work we explore new techniques for building short signatures from obfuscation. Our goals are twofold. First, we would like to achieve short signatures with adaptive security proofs. Second, we would like to build signatures with fast signing, ideally significantly faster than comparable signatures that are not based on obfuscation. The goal here is to create an "imbalanced" scheme where signing is fast at the expense of slower verification.  We develop new methods for achieving short and fully secure obfuscation-derived signatures. Our base signature scheme is built from punctured programming and makes a novel use of the "prefix technique" to guess a signature. Our initial scheme has slower performance than comparable algorithms (e.g., EC-DSA); the underlying reason is that the PRG is called approximately l^2 times for security parameter l. To address this issue we construct a more efficient scheme by adapting the Goldreich-Goldwasser-Micali [16] construction to form the basis for a new puncturable PRF. This puncturable PRF accepts variable-length inputs and has the property that evaluations on all prefixes of a message can be efficiently pipelined. Calls to the puncturable PRF by the signing algorithm therefore make fewer invocations of the underlying PRG, resulting in reduced signing costs.  We evaluate our puncturable PRF based signature schemes using a variety of cryptographic candidates for the underlying PRG. We show that the resulting performance on message signing is competitive with that of widely deployed signature schemes. 
Keywords: adaptive security, digital signature scheme, obfuscation, punctured programming (ID#: 15-4530)
URL: http://doi.acm.org/10.1145/2660267.2660306
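The Goldreich-Goldwasser-Micali construction the authors adapt builds a PRF by walking a binary tree: start from the key, apply a length-doubling PRG at each step, and keep the left or right half according to the next input bit. A minimal sketch follows (SHA-512 serves as a stand-in PRG; this is the plain GGM PRF, not the paper's puncturable variant):

```python
import hashlib

def prg(seed: bytes) -> tuple:
    # Length-doubling PRG stand-in: 32 bytes in, two 32-byte halves out.
    out = hashlib.sha512(seed).digest()
    return out[:32], out[32:]

def ggm_prf(key: bytes, bits: str) -> bytes:
    # GGM PRF: descend a binary tree rooted at the key, taking the left or
    # right PRG half for each input bit. Evaluations on all prefixes of an
    # input share the path, which is what the paper's signing pipeline exploits.
    seed = key
    for b in bits:
        left, right = prg(seed)
        seed = left if b == "0" else right
    return seed
```

Because the walk composes, evaluating on a prefix and then continuing from the intermediate seed gives the same result as a fresh walk over the full input.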

 

Melissa Chase, Sarah Meiklejohn, Greg Zaverucha; Algebraic MACs and Keyed-Verification Anonymous Credentials; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, pages 1205-1216. Doi: 10.1145/2660267.2660328 Abstract: We consider the problem of constructing anonymous credentials for use in a setting where the issuer of credentials is also the verifier, or more generally where the issuer and verifier have a shared key. In this setting we can use message authentication codes (MACs) instead of public key signatures as the basis for the credential system.  To this end, we construct two algebraic MACs in prime-order groups, along with efficient protocols for issuing credentials, asserting possession of a credential, and proving statements about hidden attributes (e.g., the age of the credential owner). We prove the security of the first scheme in the generic group model, and prove the security of the second scheme, using a dual-system-based approach, under decisional Diffie-Hellman (DDH). Our MACs are of independent interest, as they are the only uf-cmva-secure MACs with efficient proofs of knowledge.  Finally, we compare the efficiency of our new systems to two existing constructions of anonymous credentials: U-Prove and Idemix. We show that the performance of the new schemes is competitive with U-Prove (which does not have multi-show unlinkability), and many times faster than Idemix.
Keywords: anonymity, anonymous credentials, mac (ID#: 15-4531)
URL: http://doi.acm.org/10.1145/2660267.2660328
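The flavor of an algebraic MAC in a prime-order group can be conveyed with a toy MAC_GGM-style sketch, where the tag on message m is a pair (u, u^(x0 + m*x1)). The tiny safe-prime group below is purely illustrative (a real deployment uses a large elliptic-curve group), and the parameter choices are our own:

```python
import secrets

# Toy prime-order group: the squares modulo the safe prime p = 2q + 1,
# generated by g = 4. Illustrative parameters only.
p, q, g = 2039, 1019, 4

def keygen():
    # Secret MAC key (x0, x1): two exponents in Z_q.
    return secrets.randbelow(q), secrets.randbelow(q)

def mac(key, m):
    # Tag (u, u^(x0 + m*x1)) on a random non-identity base u.
    x0, x1 = key
    u = pow(g, secrets.randbelow(q - 1) + 1, p)
    return u, pow(u, (x0 + m * x1) % q, p)

def verify(key, m, tag):
    # Keyed verification: only the holder of (x0, x1) can check the tag.
    x0, x1 = key
    u, w = tag
    return u != 1 and pow(u, (x0 + m * x1) % q, p) == w
```

The algebraic structure (a group exponentiation linear in the key) is what makes efficient zero-knowledge proofs of possession possible in the paper's credential protocols.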

 

Gabriel Ghinita, Razvan Rughinis; An Efficient Privacy-Preserving System for Monitoring Mobile Users: Making Searchable Encryption Practical; CODASPY '14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 321-332. Doi: 10.1145/2557547.2557559 Abstract: Monitoring location updates from mobile users has important applications in several areas, ranging from public safety and national security to social networks and advertising. However, sensitive information can be derived from movement patterns, so protecting the privacy of mobile users is a major concern. Users may only be willing to disclose their locations when some condition is met, for instance in proximity of a disaster area, or when an event of interest occurs nearby. Currently, such functionality is achieved using searchable encryption. Such cryptographic primitives provide provable guarantees for privacy, and allow decryption only when the location satisfies some predicate. Nevertheless, they rely on expensive pairing-based cryptography (PBC), and direct application to the domain of location updates leads to impractical solutions.  We propose secure and efficient techniques for private processing of location updates that complement the use of PBC and lead to significant gains in performance by reducing the amount of required pairing operations. We also implement two optimizations that further improve performance: materialization of results to expensive mathematical operations, and parallelization. Extensive experimental results show that the proposed techniques significantly improve performance compared to the baseline, and reduce the searchable encryption overhead to a level that is practical in a computing environment with reasonable resources, such as the cloud.
Keywords: location privacy, pairing-based cryptography (ID#: 15-4532)
URL: http://doi.acm.org/10.1145/2557547.2557559

 

Rong Jin, Xianru Du, Zi Deng, Kai Zeng, Jing Xu; Practical Secret Key Agreement for Full-Duplex Near Field Communications; ASIA CCS '14 Proceedings of the 9th ACM Symposium On Information, Computer And Communications Security, June 2014, Pages 217-228.  Doi:  10.1145/2590296.2590327 Abstract: Near Field Communication (NFC) is a promising short distance radio communication technology for many useful applications. Although its communication range is short, NFC alone does not guarantee secure communication and is subject to security attacks, such as eavesdropping. Generating a shared key and using symmetric key cryptography to secure the communication between NFC devices is a feasible solution to prevent various attacks. However, the conventional Diffie-Hellman key agreement protocol is not preferable for resource-constrained NFC devices due to its extensive computational overhead and energy consumption. In this paper, we propose a practical, fast and energy-efficient key agreement scheme, called RIWA (Random bIts transmission with Waveform shAking), for NFC devices by exploiting their full-duplex capability. In RIWA, two devices send random bits to each other simultaneously without strict synchronization or perfect match of amplitude and phase. Instead, RIWA randomly introduces synchronization offset and mismatch of amplitude and phase for each bit transmission in order to prevent a passive attacker from determining the generated key. A shared bit can be established when the two devices send different bits. We conduct theoretical analysis on the correctness and security strength of RIWA, and extensive simulations to evaluate its effectiveness. We build a testbed based on USRP software defined radio and conduct proof-of-concept experiments to evaluate RIWA in a real-world environment. 
The results show that RIWA achieves a high key generation rate of about 26 kbps and is immune to eavesdropping even when the attacker is within several centimeters of the legitimate devices. RIWA is a practical, fast, energy-efficient, and secure key agreement scheme for resource-constrained NFC devices.
Keywords: USRP, energy efficient, near field communication, practical key agreement (ID#: 15-4533)
URL: http://doi.acm.org/10.1145/2590296.2590327
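The core idea, that a shared bit is established exactly when the two devices transmit different bits simultaneously, can be simulated in a few lines. This is a highly simplified logical model (no waveform shaking or RF effects), and the function names are our own:

```python
import secrets

def derive_keys(alice_bits, bob_bits):
    # Keep only the slots where the two devices sent different bits: an
    # eavesdropper observes a 0 and a 1 superimposed on the channel but
    # cannot tell which device sent which. Each side knows its own bit,
    # so Alice keeps her bit and Bob keeps the complement of his; the
    # resulting key material is identical on both sides.
    alice_key = [a for a, b in zip(alice_bits, bob_bits) if a != b]
    bob_key = [1 - b for a, b in zip(alice_bits, bob_bits) if a != b]
    return alice_key, bob_key

n = 64
alice = [secrets.randbelow(2) for _ in range(n)]
bob = [secrets.randbelow(2) for _ in range(n)]
ka, kb = derive_keys(alice, bob)  # ka == kb
```

On average half the transmitted slots disagree, so n raw transmissions yield roughly n/2 shared key bits.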

 

Benoît Libert, Marc Joye, Moti Yung; Born and Raised Distributively: Fully Distributed Non-Interactive Adaptively-Secure Threshold Signatures with Short Shares;  PODC '14 Proceedings of the 2014 ACM Symposium On Principles Of Distributed Computing, July 2014, Pages 303-312. Doi: 10.1145/2611462.2611498 Abstract: Threshold cryptography is a fundamental distributed computational paradigm for enhancing the availability and the security of cryptographic public-key schemes. It does so by dividing private keys into n shares handed out to distinct servers. In threshold signature schemes, a set of at least t+1 ≤ n servers is needed to produce a valid digital signature. Availability is assured by the fact that any subset of t+1 servers can produce a signature when authorized. At the same time, the scheme should remain robust (in the fault tolerance sense) and unforgeable (cryptographically) against up to t corrupted servers; i.e., it adds quorum control to traditional cryptographic services and introduces redundancy. Originally, most practical threshold signatures have a number of demerits: They have been analyzed in a static corruption model (where the set of corrupted servers is fixed at the very beginning of the attack), they require interaction, they assume a trusted dealer in the key generation phase (so that the system is not fully distributed), or they suffer from certain overheads in terms of storage (large share sizes). In this paper, we construct practical fully distributed (the private key is born distributed), non-interactive schemes---where the servers can compute their partial signatures without communication with other servers---with adaptive security (i.e., the adversary corrupts servers dynamically based on its full view of the history of the system). Our schemes are very efficient in terms of computation, communication, and scalable storage (with private key shares of size O(1), where certain solutions incur O(n) storage costs at each server). 
Unlike other adaptively secure schemes, our schemes are erasure-free (reliable erasure is a property that is hard to assure and hard to administer in actual systems).  To the best of our knowledge, such a fully distributed highly constrained scheme has been an open problem in the area. In particular, and of special interest, is the fact that Pedersen's traditional distributed key generation (DKG) protocol can be safely employed in the initial key generation phase when the system is born -- although it is well-known not to ensure uniformly distributed public keys. An advantage of this is that this protocol only takes one round optimistically (in the absence of faulty players).
Keywords: adaptive security, availability, distributed key generation, efficiency, erasure-free schemes, fault tolerance, fully distributed systems, non-interactivity, threshold signature schemes (ID#: 15-4534)
URL: http://doi.acm.org/10.1145/2611462.2611498
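The share-splitting primitive underlying all such threshold schemes is Shamir secret sharing: the private key becomes the constant term of a random degree-t polynomial, and each server holds one evaluation. A minimal sketch of the t-out-of-n recovery (illustrative only; the paper's DKG and signing protocols build far more on top of this):

```python
import random

P = 2**127 - 1  # prime modulus for the share field (a Mersenne prime)

def make_shares(secret: int, t: int, n: int):
    # Degree-t polynomial with constant term = secret; any t+1 of the n
    # shares (x, f(x)) suffice to recover it, while any t reveal nothing.
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 over the prime field.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

In a non-interactive threshold signature, each server applies its share to the message locally and the Lagrange coefficients are instead applied "in the exponent" when combining partial signatures.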

 

Tobias Oder, Thomas Pöppelmann, Tim Güneysu; Beyond ECDSA and RSA: Lattice-based Digital Signatures on Constrained Devices; DAC '14 Proceedings of the 51st Annual Design Automation Conference; June 2014, Pages 1-6. Doi: 10.1145/2593069.2593098 Abstract: All currently deployed asymmetric cryptography will be broken by the advent of powerful quantum computers. We thus have to consider alternative solutions for systems with long-term security requirements (e.g., for long-lasting vehicular and avionic communication infrastructures). In this work we present an efficient implementation of BLISS, a recently proposed, post-quantum secure, and formally analyzed novel lattice-based signature scheme. We achieve signing and verification times of 35.3 ms and 6 ms, respectively, at a 128-bit security level on an ARM Cortex-M4F microcontroller. This shows that lattice-based cryptography can be efficiently deployed on today's hardware and provides security solutions for many use cases that can even withstand future threats.
Keywords: (not provided) (ID#: 15-4535)
URL: http://doi.acm.org/10.1145/2593069.2593098

 

Florian Bergsma, Benjamin Dowling, Florian Kohlar, Jörg Schwenk, Douglas Stebila; Multi-Ciphersuite Security of the Secure Shell (SSH) Protocol; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 369-381. Doi: 10.1145/2660267.2660286 Abstract: The Secure Shell (SSH) protocol is widely used to provide secure remote access to servers, making it among the most important security protocols on the Internet. We show that the signed-Diffie--Hellman SSH ciphersuites of the SSH protocol are secure: each is a secure authenticated and confidential channel establishment (ACCE) protocol, the same security definition now used to describe the security of Transport Layer Security (TLS) ciphersuites. While the ACCE definition suffices to describe the security of individual ciphersuites, it does not cover the case where parties use the same long-term key with many different ciphersuites: it is common in practice for the server to use the same signing key with both finite field and elliptic curve Diffie--Hellman, for example. While TLS is vulnerable to attack in this case, we show that SSH is secure even when the same signing key is used across multiple ciphersuites. We introduce a new generic multi-ciphersuite composition framework to achieve this result in a black-box way.
Keywords: authenticated and confidential channel establishment, cross-protocol security, key agility, multi-ciphersuite, secure shell (SSH) (ID#: 15-4536)
URL: http://doi.acm.org/10.1145/2660267.2660286

 

Koen Claessen, Michał H. Pałka; Splittable Pseudorandom Number Generators using Cryptographic Hashing; Haskell '13 Proceedings of the 2013 ACM SIGPLAN symposium on Haskell, January 2014, Pages 47-58. Doi: 10.1145/2578854.2503784 Abstract: We propose a new splittable pseudorandom number generator (PRNG) based on a cryptographic hash function. Splittable PRNGs, in contrast to linear PRNGs, allow the creation of two (seemingly) independent generators from a given random number generator. Splittable PRNGs are very useful for structuring purely functional programs, as they avoid the need for threading around state. We show that the currently known and used splittable PRNGs are either not efficient enough, have inherent flaws, or lack formal arguments about their randomness. In contrast, our proposed generator can be implemented efficiently, and comes with formal statements and proofs that quantify how 'random' the generated results are. The provided proofs give strong randomness guarantees under assumptions commonly made in cryptography.
Keywords: haskell, provable security, splittable pseudorandom number generators (ID#: 15-4537)
URL: http://doi.acm.org/10.1145/2578854.2503784
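The basic idea of a hash-based splittable PRNG, identify each generator by its path of split decisions and hash that path to produce output, can be sketched briefly. This is a simplified model in Python rather than the authors' Haskell implementation, and the class and method names are our own:

```python
import hashlib

class HashSplittableGen:
    # Splittable PRNG sketch: the state is the path of split decisions taken
    # from the root seed; output blocks hash (seed, path, counter), so the
    # two children of a split act as independent generators.
    def __init__(self, seed: bytes, path: bytes = b""):
        self.seed, self.path, self.counter = seed, path, 0

    def split(self):
        # Two child generators whose paths diverge; no state is shared.
        return (HashSplittableGen(self.seed, self.path + b"L"),
                HashSplittableGen(self.seed, self.path + b"R"))

    def next_block(self) -> bytes:
        # 32 pseudorandom bytes; the counter advances this generator only.
        self.counter += 1
        material = self.seed + self.path + self.counter.to_bytes(8, "big")
        return hashlib.sha256(material).digest()
```

Because output depends only on (seed, path, counter), splitting is pure: the same path always regenerates the same stream, which is exactly what makes this style of PRNG fit purely functional programs.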

 

Junchao Wu, Qun Liu, Xiaofeng Liao; A Secure and Efficient Outsourceable Group Key Transfer Protocol in Cloud Computing; SCC '14 Proceedings of the 2nd International Workshop On Security In Cloud Computing, June 2014, Pages 43-50. Doi: 10.1145/2600075.2600079 Abstract: Cloud computing provides robust computational power, and customers can economically access large amounts of computing resources with a "pay-per-use" utility service. It also brings forth new challenges for security when customers want to securely outsource the computation of cryptographic operations to untrusted cloud servers. Though group key transfer is a quite common scientific and engineering task, it is difficult to implement the protocol among group members if they are computationally weak players. Cloud computing provides an avenue for such computationally weak players. A novel system model, with two public cloud servers and a trusted key generation center (KGC), is proposed to address the issue of group key transfer. In order to protect the sensitive information of the customers from being learned by the public cloud, we design a secure group key transfer protocol based on secret sharing in cloud computing, in which both the KGC and weak group members can delegate cloud servers to compute the interpolation polynomial, and the group members are able to arrive at the same key. Extensive theoretical analysis and experiment results are also given to validate the practicability of our protocol.
Keywords: cloud computing, group key transfer, interpolation polynomial, outsourcing, secret sharing (ID#: 15-4538)
URL: http://doi.acm.org/10.1145/2600075.2600079

 

Roland van Rijswijk-Deij, Anna Sperotto, Aiko Pras; DNSSEC and Its Potential for DDoS Attacks: A Comprehensive Measurement Study;  IMC '14 Proceedings of the 2014 Conference on Internet Measurement Conference, November 2014, Pages 449-460. Doi: 10.1145/2663716.2663731 Abstract: Over the past five years we have witnessed the introduction of DNSSEC, a security extension to the DNS that relies on digital signatures. DNSSEC strengthens DNS by preventing attacks such as cache poisoning. However, a common argument against the deployment of DNSSEC is its potential for abuse in Distributed Denial of Service (DDoS) attacks, in particular reflection and amplification attacks. DNS responses for a DNSSEC-signed domain are typically larger than those for an unsigned domain, thus, it may seem that DNSSEC could actually worsen the problem of DNS-based DDoS attacks. The potential for abuse in DNSSEC-signed domains has, however, never been assessed on a large scale.  In this paper we establish ground truth around this open question. We perform a detailed measurement on a large dataset of DNSSEC-signed domains, covering 70% (2.5 million) of all signed domains in operation today, and compare the potential for amplification attacks to a representative sample of domains without DNSSEC. At first glance, the outcome of these measurements confirms that DNSSEC indeed worsens the DDoS phenomenon. Closer examination, however, gives a more nuanced picture. DNSSEC really only makes the situation worse for one particular query type (ANY), for which responses may be over 50 times larger than the original query (and in rare cases up to 179x). We also discuss a number of mitigation strategies that can have immediate impact for operators and suggest future research directions with regards to these mitigation strategies. 
Keywords: DDoS, DNS, DNSSEC, amplification attack, attack, denial of service, denial-of-service, measurements, reflection attack (ID#: 15-4539)
URL: http://doi.acm.org/10.1145/2663716.2663731
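The amplification factors the paper reports are simple byte ratios; the sizes below are illustrative values chosen to match the paper's headline numbers, not measurements from the study itself:

```python
def amplification_factor(query_bytes: int, response_bytes: int) -> float:
    # Bandwidth amplification a reflection attacker gains: bytes delivered
    # to the victim per byte spent on the spoofed query.
    return response_bytes / query_bytes

# A ~64-byte ANY query eliciting a 3200-byte DNSSEC-signed response yields a
# 50x factor, in line with the paper's "over 50 times larger" finding.
factor = amplification_factor(64, 3200)
```

The same arithmetic gives the paper's worst case: a response 179 times the query size means 179x amplification, regardless of the absolute sizes.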

 

Jeffrey Robinson, Brandon Dixon, Jeffrey Galloway; Montgomery Multiplication Using CUDA; ACM SE '14 Proceedings of the 2014 ACM Southeast Regional Conference, March 2014, Article No. 23. Doi: 10.1145/2638404.2638485 Abstract:  Modular multiplication is useful in many areas of number theory; the most well-known being cryptography. Encrypting or decrypting a document with either the RSA or ECC encryption algorithms requires long chains of multiplications modulo N, and many algorithms for factoring large numbers likewise involve long multiplication chains modulo N. In our paper, we implement a highly optimized systolic Montgomery multiplication algorithm in order to provide high performance modular multiplications. We develop our algorithm using NVIDIA's general-purpose parallel programming model called CUDA (Compute Unified Device Architecture) for NVIDIA GPUs (Graphics Processing Units). Our implementation can perform up to 338.15 million multiplications per second using a GTX 660 and 475.66 million using a GTX 670 with 256-bit numbers. While using 1024-bit numbers the GTX 660 can perform 20.15 million multiplications per second and the GTX 670 can perform 27.89 million multiplications per second. When using 2048-bit numbers, the GTX 660 can perform 4.96 million multiplications per second and the GTX 670 can perform 6.78 million multiplications per second. We also show that our version is faster than previously implemented multiprecision Montgomery multiplication algorithms, while also providing an intuitive data representation.
Keywords: CUDA, algorithms, concurrent processes, numerical methods, parallel computing (ID#: 15-4540)
URL: http://doi.acm.org/10.1145/2638404.2638485
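Montgomery multiplication replaces the expensive division in modular reduction with shifts and masks by working in "Montgomery form" x*R mod n for R a power of two. A minimal single-word sketch of the textbook REDC algorithm (the paper's contribution is a systolic multiprecision GPU version; this sketch only shows the underlying arithmetic):

```python
def montgomery_setup(n: int, bits: int):
    # Precompute R = 2^bits (coprime to odd n) and n' with n*n' ≡ -1 (mod R).
    r = 1 << bits
    n_prime = (-pow(n, -1, r)) % r
    return r, n_prime

def redc(t: int, n: int, r: int, n_prime: int, bits: int) -> int:
    # Montgomery reduction: computes t * R^-1 mod n using only shifts,
    # masks, and multiplications -- no trial division by n.
    m = ((t & (r - 1)) * n_prime) & (r - 1)
    u = (t + m * n) >> bits   # exact: t + m*n ≡ 0 (mod R) by choice of m
    return u - n if u >= n else u

def mont_mul(x: int, y: int, n: int, bits: int = 64) -> int:
    # x*y mod n via Montgomery form: convert in, multiply-reduce, convert out.
    r, n_prime = montgomery_setup(n, bits)
    x_bar, y_bar = (x * r) % n, (y * r) % n
    prod_bar = redc(x_bar * y_bar, n, r, n_prime, bits)  # = x*y*R mod n
    return redc(prod_bar, n, r, n_prime, bits)
```

In a real exponentiation chain the conversions in and out of Montgomery form are done once, so each of the long chain's multiplications costs only one REDC.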

 

Hongbo Liu, Hui Wang, Yingying Chen, Dayong Jia; Defending against Frequency-Based Attacks on Distributed Data Storage in Wireless Networks; ACM Transactions on Sensor Networks (TOSN), Volume 10 Issue 3, April 2014, Article No. 49. Doi: 10.1145/2594774 Abstract: As wireless networks become more pervasive, the amount of the wireless data is rapidly increasing. One of the biggest challenges of wide adoption of distributed data storage is how to store these data securely. In this work, we study the frequency-based attack, a type of attack that is different from previously well-studied ones, that exploits additional adversary knowledge of domain values and/or their exact/approximate frequencies to crack the encrypted data. To cope with frequency-based attacks, the straightforward 1-to-1 substitution encryption functions are not sufficient. We propose a data encryption strategy based on 1-to-n substitution via dividing and emulating techniques to defend against the frequency-based attack, while enabling efficient query evaluation over encrypted data. We further develop two frameworks, incremental collection and clustered collection, which are used to defend against the global frequency-based attack when the knowledge of the global frequency in the network is not available. Built upon our basic encryption schemes, we derive two mechanisms, direct emulating and dual encryption, to handle updates on the data storage for energy-constrained sensor nodes and wireless devices. Our preliminary experiments with sensor nodes and extensive simulation results show that our data encryption strategy can achieve high security guarantee with low overhead.
Keywords: Frequency-based attack, secure distributed data storage, wireless networks (ID#: 15-4541)
URL: http://doi.acm.org/10.1145/2594774
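Why 1-to-1 substitution fails against a frequency attack, and how 1-to-n substitution flattens the histogram, can be shown with a small sketch. This illustrates the general idea only (not the paper's dividing-and-emulating scheme); the function names and example data are our own:

```python
import random

def build_mapping(frequencies: dict, bucket_size: int) -> dict:
    # 1-to-n substitution: each plaintext value receives a number of
    # ciphertext tokens proportional to its frequency, so every token is
    # expected to appear about bucket_size times in the encrypted storage
    # and token counts no longer reveal which value is which.
    mapping, next_token = {}, 0
    for value, count in frequencies.items():
        n_tokens = -(-count // bucket_size)  # ceiling division
        mapping[value] = list(range(next_token, next_token + n_tokens))
        next_token += n_tokens
    return mapping

def encrypt(value, mapping: dict) -> int:
    # Pick one of the value's tokens uniformly at random.
    return random.choice(mapping[value])

freqs = {"flu": 90, "rare_disease": 10}
mapping = build_mapping(freqs, bucket_size=10)
# "flu" gets 9 tokens, "rare_disease" gets 1: each token occurs ~10 times.
```

Queries remain efficient because an equality query for a value simply expands into the (small) set of its tokens.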

 

Fabienne Eigner, Aniket Kate, Matteo Maffei, Francesca Pampaloni, Ivan Pryvalov; Differentially Private Data Aggregation with Optimal Utility; ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 316-325. Doi: 10.1145/2664243.2664263 Abstract: Computing aggregate statistics about user data is of vital importance for a variety of services and systems, but this practice has been shown to seriously undermine the privacy of users. Differential privacy has proved to be an effective tool to sanitize queries over a database, and various cryptographic protocols have been recently proposed to enforce differential privacy in a distributed setting, e.g., statistical queries on sensitive data stored on the user's side. The widespread deployment of differential privacy techniques in real-life settings is, however, undermined by several limitations that existing constructions suffer from: they support only a limited class of queries, they pose a trade-off between privacy and utility of the query result, they are affected by the answer pollution problem, or they are inefficient. This paper presents PrivaDA, a novel design architecture for distributed differential privacy that leverages recent advances in secure multiparty computations on fixed and floating point arithmetic to overcome the previously mentioned limitations. In particular, PrivaDA supports a variety of perturbation mechanisms (e.g., the Laplace, discrete Laplace, and exponential mechanisms) and it constitutes the first generic technique to generate noise in a fully distributed manner while maintaining the optimal utility. Furthermore, PrivaDA does not suffer from the answer pollution problem. We demonstrate the efficiency of PrivaDA with a performance evaluation, and its expressiveness and flexibility by illustrating several application scenarios such as privacy-preserving web analytics.
Keywords:  (not provided) (ID#: 15-4542)
URL: http://doi.acm.org/10.1145/2664243.2664263
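PrivaDA's contribution is generating such noise in a fully distributed manner; as a plaintext point of reference, the centralized Laplace mechanism that such designs emulate can be sketched as follows (the counting query, database, and parameter values are illustrative, not from the paper):

```python
import random
import math

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Return true_value perturbed with Laplace(sensitivity/epsilon) noise.

    Noise is drawn by inverse-transform sampling: if U ~ Uniform(-1/2, 1/2),
    then -b * sign(U) * ln(1 - 2|U|) is Laplace(0, b).
    """
    b = sensitivity / epsilon
    u = rng.random() - 0.5
    noise = -b * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_value + noise

# Example: a counting query (sensitivity 1) over a toy bit-per-user database.
rng = random.Random(0)
db = [1, 0, 1, 1, 0, 1]
true_count = sum(db)
noisy_count = laplace_mechanism(true_count, sensitivity=1, epsilon=0.5, rng=rng)
```

The noise is zero-mean, so repeated queries average toward the true count; smaller epsilon means stronger privacy and larger noise.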

 

Haya Shulman; Pretty Bad Privacy: Pitfalls of DNS Encryption; WPES '14 Proceedings of the 13th Workshop on Privacy in the Electronic Society; November 2014, Pages 191-200. Doi: 10.1145/2665943.2665959 Abstract: As awareness for privacy of Domain Name System (DNS) is increasing, a number of mechanisms for encryption of DNS packets were proposed. We study the prominent defences, focusing on the privacy guarantees, interoperability with the DNS infrastructure, and the efficiency overhead. In particular:  •We explore dependencies in DNS and show techniques that utilise side channel leaks, due to transitive trust, allowing to infer information about the target domain in an encrypted DNS packet. •We examine common DNS servers configurations and show that the proposals are expected to encounter deployment obstacles with (at least) 38% of 50K-top Alexa domains and (at least) 12% of the top-level domains (TLDs), and will disrupt the DNS functionality and availability for clients. •We show that due to the non-interoperability with the caches, the proposals for end-to-end encryption may have a prohibitive traffic overhead on the name servers.  Our work indicates that further study may be required to adjust the proposals to stand up to their security guarantees, and to make them suitable for the common servers' configurations in the DNS infrastructure. Our study is based on collection and analysis of the DNS traffic of 50K-top Alexa domains and 568 TLDs.
Keywords: dns, dns caching, dns encryption, dns infrastructure, dns privacy, dns security, side channel attacks, transitive trust dependencies (ID#: 15-4543)
URL: http://doi.acm.org/10.1145/2665943.2665959

 

Boyang Wang, Yantian Hou, Ming Li, Haitao Wang, Hui Li; Maple: Scalable Multi-Dimensional Range Search over Encrypted Cloud Data with Tree-Based Index; ASIA CCS '14 Proceedings of the 9th ACM Symposium On Information, Computer And Communications Security, June 2014, Pages 111-122. Doi: 10.1145/2590296.2590305 Abstract: Cloud computing promises users massive scale outsourced data storage services with much lower costs than traditional methods. However, privacy concerns compel sensitive data to be stored on the cloud server in an encrypted form. This posts a great challenge for effectively utilizing cloud data, such as executing common SQL queries. A variety of searchable encryption techniques have been proposed to solve this issue; yet efficiency and scalability are still the two main obstacles for their adoptions in real-world datasets, which are multi-dimensional in general. In this paper, we propose a tree-based public-key Multi-Dimensional Range Searchable Encryption (MDRSE) to overcome the above limitations. Specifically, we first formally define the leakage function and security of a tree-based MDRSE. Then, by leveraging an existing predicate encryption in a novel way, our tree-based MDRSE efficiently indexes and searches over encrypted cloud data with multi-dimensional tree structures (i.e., R-trees). Moreover, our scheme is able to protect single-dimensional privacy while previous efficient solutions fail to achieve. Our scheme is selectively secure, and through extensive experimental evaluation on a large-scale real-world dataset, we show the efficiency and scalability of our scheme.
Keywords: encrypted cloud data, multiple dimension, range search, tree structures (ID#: 15-4544)
URL: http://doi.acm.org/10.1145/2590296.2590305
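The range queries Maple evaluates over encrypted R-tree indexes correspond, in plaintext, to an axis-aligned hyper-rectangle test; a naive linear-scan baseline (with invented example data) looks like this, where an R-tree answers the same query in sub-linear time:

```python
def in_range(point, lower, upper):
    """True if point lies inside the axis-aligned box [lower, upper]."""
    return all(lo <= x <= hi for x, lo, hi in zip(point, lower, upper))

def range_search(points, lower, upper):
    """Linear-scan baseline for multi-dimensional range search."""
    return [p for p in points if in_range(p, lower, upper)]

# Hypothetical (age, salary) records and a two-dimensional range query.
records = [(25, 50000), (34, 72000), (41, 91000), (29, 61000)]
hits = range_search(records, lower=(30, 60000), upper=(45, 95000))
```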

 

Keita Emura, Akira Kanaoka, Satoshi Ohta, Takeshi Takahashi; Building Secure and Anonymous Communication Channel: Formal Model and Its Prototype Implementation; SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 1641-1648. Doi: 10.1145/2554850.2554879 Abstract: Various techniques need to be combined to realize anonymously authenticated communication. Cryptographic tools enable anonymous user authentication while anonymous communication protocols hide users' IP addresses from service providers. One simple approach for realizing anonymously authenticated communication is their simple combination, but this gives rise to another issue: how to build a secure channel. The current public key infrastructure cannot be used since the user's public key identifies the user. To cope with this issue, we propose a protocol that uses identity-based encryption for packet encryption without sacrificing anonymity, and group signature for anonymous user authentication. Communications in the protocol take place through proxy entities that conceal users' IP addresses from service providers. The underlying group signature is customized to meet our objective and improve its efficiency. We also introduce a proof-of-concept implementation to demonstrate the protocol's feasibility. We compare its performance to SSL communication and demonstrate its practicality, and conclude that the protocol realizes secure, anonymous, and authenticated communication between users and service providers with practical performance.
Keywords: anonymous authentication, anonymous communication, group signature, identity-based encryption, secure channel (ID#: 15-4545)
URL: http://doi.acm.org/10.1145/2554850.2554879  

 


Hard Problems: Predictive Security Metrics (ACM)

 

 

Predictive security metrics are a hard problem in the Science of Security. A survey of the ACM Digital Library found nearly three hundred scholarly articles about research into security metrics published in 2014. This series of bibliographical citations includes those actually published by ACM. A separate listing of works researching these areas but not published by ACM, and therefore subject to intellectual property restrictions on the use of abstracts, is cited under the heading “Citations for Hard Problems.”


 

Arpan Chakraborty, Brent Harrison, Pu Yang, David Roberts, Robert St. Amant; Exploring Key-Level Analytics for Computational Modeling of Typing Behavior; HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April, 2014, Article No. 34. Doi: 10.1145/2600176.2600210  Abstract: Typing is a human activity that can be affected by a number of situational and task-specific factors. Changes in typing behavior resulting from the manipulation of such factors can be predictably observed through key-level input analytics. Here we present a study designed to explore these relationships. Participants play a typing game in which letter composition, word length and number of words appearing together are varied across levels. Inter-keystroke timings and other higher order statistics (such as bursts and pauses), as well as typing strategies, are analyzed from game logs to find the best set of metrics that quantify the effect that different experimental factors have on observable metrics.  Beyond task-specific factors, we also study the effects of habituation by recording changes in performance with practice. Currently a work in progress, this research aims at developing a predictive model of human typing. We believe this insight can lead to the development of novel security proofs for interactive systems that can be deployed on existing infrastructure with minimal overhead. Possible applications of such predictive capabilities include anomalous behavior detection, authentication using typing signatures, bot detection using word challenges etc.
Keywords: cognitive modeling, typing, user interfaces (ID#: 15-4403)
URL: http://doi.acm.org/10.1145/2600176.2600210
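The key-level input analytics described above begin with inter-keystroke timings; a minimal sketch of deriving latency, pause, and burst statistics from a key-down log (the 500 ms pause threshold is a hypothetical choice, not taken from the study):

```python
def keystroke_features(timestamps_ms, pause_threshold_ms=500):
    """Compute inter-keystroke latencies and simple burst/pause statistics.

    timestamps_ms: key-down times in milliseconds, in order of occurrence.
    A 'pause' is any latency above pause_threshold_ms; the keystrokes
    between pauses form a 'burst'.
    """
    latencies = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    pauses = [l for l in latencies if l > pause_threshold_ms]
    bursts = 1 + len(pauses) if timestamps_ms else 0
    mean_latency = sum(latencies) / len(latencies) if latencies else 0.0
    return {
        "mean_latency_ms": mean_latency,
        "num_pauses": len(pauses),
        "num_bursts": bursts,
    }

# A toy log: four quick keystrokes, a pause, then three more.
log = [0, 120, 260, 380, 1100, 1220, 1350]
features = keystroke_features(log)
```

Feature vectors like this, computed per game level, are the kind of observable metric the study correlates with task factors.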

 

Arild B. Torjusen, Habtamu Abie, Ebenezer Paintsil, Denis Trcek, Åsmund Skomedal; Towards Run-Time Verification of Adaptive Security for IoT in eHealth; ECSAW '14 Proceedings of the 2014 European Conference on Software Architecture Workshops, August 2014, Article No. 4.  Doi: 10.1145/2642803.2642807 Abstract: This paper integrates run-time verification enablers in the feedback adaptation loop of the ASSET adaptive security framework for Internet of Things (IoT) in the eHealth settings and instantiates the resulting framework with Colored Petri Nets. The run-time enablers make machine-readable formal models of a system state and context available at run-time. In addition, they make requirements that define the objectives of verification available at run-time as formal specifications and enable dynamic context monitoring and adaptation. Run-time adaptive behavior that deviates from the normal mode of operation of the system represents a major threat to the sustainability of critical eHealth services. Therefore, the integration of run-time enablers into the ASSET adaptive framework could lead to a sustainable security framework for IoT in eHealth.
Keywords: Adaptive Security, Formal Run-time Verification, IoT, eHealth (ID#: 15-4404)
URL: http://doi.acm.org/10.1145/2642803.2642807

 

William Herlands, Thomas Hobson, Paula J. Donovan; Effective Entropy: Security-Centric Metric for Memory Randomization Techniques; CSET'14 Proceedings of the 7th USENIX Conference on Cyber Security Experimentation and Test, April 2014, Pages 5-5. Doi: (none provided) Abstract: User space memory randomization techniques are an emerging field of cyber defensive technology which attempts to protect computing systems by randomizing the layout of memory. Quantitative metrics are needed to evaluate their effectiveness at securing systems against modern adversaries and to compare between randomization technologies. We introduce Effective Entropy, a measure of entropy in user space memory which quantitatively considers an adversary's ability to leverage low entropy regions of memory via absolute and dynamic intersection connections. Effective Entropy is indicative of adversary workload and enables comparison between different randomization techniques. Using Effective Entropy, we present a comparison of static Address Space Layout Randomization (ASLR), Position Independent Executable (PIE) ASLR, and a theoretical fine grain randomization technique.
Keywords:  (not provided) (ID#: 15-4405)
URL: http://dl.acm.org/citation.cfm?id=2671214.2671219
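Effective Entropy discounts low-entropy regions that an adversary can reach through absolute or dynamic connections; the baseline quantity it refines, the Shannon entropy (in bits) of observed base-address offsets, can be estimated from samples as follows (the toy offsets are illustrative):

```python
import math
from collections import Counter

def entropy_bits(samples):
    """Shannon entropy, in bits, of the empirical distribution of samples."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Toy comparison: a region randomized uniformly over 16 page offsets
# carries 4 bits of entropy; a fixed load address carries none.
randomized = [i % 16 for i in range(160)]
fixed = [0x400000] * 160
```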

 

Shouhuai Xu; Cybersecurity Dynamics; HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 14. Doi: 10.1145/2600176.2600190 Abstract: We explore the emerging field of Cybersecurity Dynamics, a candidate foundation for the Science of Cybersecurity.
Keywords: cybersecurity dynamics, security analysis, security model (ID#: 15-4406)
URL: http://doi.acm.org/10.1145/2600176.2600190

 

Michael Sherman, Gradeigh Clark, Yulong Yang, Shridatt Sugrim, Arttu Modig, Janne Lindqvist, Antti Oulasvirta, Teemu Roos; User-Generated Free-Form Gestures for Authentication: Security and Memorability; MobiSys '14 Proceedings of the 12th Annual International Conference On Mobile Systems, Applications, and Services, June 2014, Pages 176-189. Doi:  10.1145/2594368.2594375 Abstract: This paper studies the security and memorability of free-form multitouch gestures for mobile authentication. Towards this end, we collected a dataset with a generate-test-retest paradigm where participants (N=63) generated free-form gestures, repeated them, and were later retested for memory. Half of the participants decided to generate one-finger gestures, and the other half generated multi-finger gestures. Although there has been recent work on template-based gestures, there are yet no metrics to analyze security of either template or free-form gestures. For example, entropy-based metrics used for text-based passwords are not suitable for capturing the security and memorability of free-form gestures. Hence, we modify a recently proposed metric for analyzing information capacity of continuous full-body movements for this purpose. Our metric computed estimated mutual information in repeated sets of gestures. Surprisingly, one-finger gestures had higher average mutual information. Gestures with many hard angles and turns had the highest mutual information. The best-remembered gestures included signatures and simple angular shapes. We also implemented a multitouch recognizer to evaluate the practicality of free-form gestures in a real authentication system and how they perform against shoulder surfing attacks. We discuss strategies for generating secure and memorable free-form gestures. We conclude that free-form gestures present a robust method for mobile authentication.
Keywords: gestures, memorability, mutual information, security (ID#: 15-4407)
URL: http://doi.acm.org/10.1145/2594368.2594375
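The security metric above estimates mutual information between repeated sets of gestures; for discretized gesture features, a plug-in estimator of I(X;Y) in bits can be sketched as follows (the toy sequences stand in for the paper's actual feature extraction):

```python
import math
from collections import Counter

def mutual_information_bits(xs, ys):
    """Plug-in estimate of the mutual information I(X;Y), in bits,
    from paired samples of two discrete variables."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px = Counter(xs)
    py = Counter(ys)
    return sum(
        (c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

# A perfectly repeated feature sequence shares all of its information
# with the original; an unrelated sequence shares none.
first_attempt = [0, 1, 2, 3, 0, 1, 2, 3]
exact_repeat  = [0, 1, 2, 3, 0, 1, 2, 3]
unrelated     = [0, 0, 0, 0, 1, 1, 1, 1]
```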

 

A. M. Mora, P. De las Cuevas, J. J. Merelo, S. Zamarripa, M. Juan, A. I. Esparcia-Alcázar, M. Burvall, H. Arfwedson, Z. Hodaie; MUSES: A corporate user-centric system which applies computational intelligence methods; SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 1719-1723. Doi: 10.1145/2554850.2555059  Abstract: This work presents the description of the architecture of a novel enterprise security system, still in development, which can prevent and deal with the security flaws derived from the users in a company. Thus, the Multiplatform Usable Endpoint Security system (MUSES) considers diverse factors such as the information distribution, the type of accesses, the context where the users are, the category of users, or the mix between personal and private data, among others. This system includes an event correlator and a risk and trust analysis engine to perform the decision process. MUSES follows a set of defined security rules, according to the enterprise security policies, but it is able to self-adapt the decisions and even create new security rules depending on the user behaviour, the specific device, and the situation or context. To this aim MUSES applies machine learning and computational intelligence techniques which can also be used to predict potential unsafe or dangerous user's behaviour.
Keywords: BYOD, enterprise security, event correlation, multiplatform, risk and trust analysis, security policies, self-adaptation, user-centric system (ID#: 15-4408)
URL: http://doi.acm.org/10.1145/2554850.2555059

 

Cornel Barna, Mark Shtern, Michael Smit, Vassilios Tzerpos, Marin Litoiu; Mitigating DoS Attacks Using Performance Model-Driven Adaptive Algorithms; ACM Transactions on Autonomous and Adaptive Systems (TAAS), Volume 9, Issue 1, March 2014, Article No. 3. Doi: 10.1145/2567926 Abstract: Denial of Service (DoS) attacks overwhelm online services, preventing legitimate users from accessing a service, often with impact on revenue or consumer trust. Approaches exist to filter network-level attacks, but application-level attacks are harder to detect at the firewall. Filtering at this level can be computationally expensive and difficult to scale, while still producing false positives that block legitimate users.  This article presents a model-based adaptive architecture and algorithm for detecting DoS attacks at the web application level and mitigating them. Using a performance model to predict the impact of arriving requests, a decision engine adaptively generates rules for filtering traffic and sending suspicious traffic for further review, where the end user is given the opportunity to demonstrate they are a legitimate user. If no legitimate user responds to the challenge, the request is dropped. Experiments performed on a scalable implementation demonstrate effective mitigation of attacks launched using a real-world DoS attack tool.
Keywords: Denial of service, DoS attack mitigation, distributed denial of service, layered queuing network, model-based adaptation, performance model (ID#: 15-4409)
URL: http://doi.acm.org/10.1145/2567926

 

Julien Freudiger, Shantanu Rane, Alejandro E. Brito, Ersin Uzun; Privacy Preserving Data Quality Assessment for High-Fidelity Data Sharing; WISCS '14 Proceedings of the 2014 ACM Workshop on Information Sharing & Collaborative Security, November 2014, Pages 21-29. Doi:  10.1145/2663876.2663885  Abstract: In a data-driven economy that struggles to cope with the volume and diversity of information, data quality assessment has become a necessary precursor to data analytics. Real-world data often contains inconsistencies, conflicts and errors. Such dirty data increases processing costs and has a negative impact on analytics. Assessing the quality of a dataset is especially important when a party is considering acquisition of data held by an untrusted entity. In this scenario, it is necessary to consider privacy risks of the stakeholders.  This paper examines challenges in privacy-preserving data quality assessment. A two-party scenario is considered, consisting of a client that wishes to test data quality and a server that holds the dataset. Privacy-preserving protocols are presented for testing important data quality metrics: completeness, consistency, uniqueness, timeliness and validity. For semi-honest parties, the protocols ensure that the client does not discover any information about the data other than the value of the quality metric. The server does not discover the parameters of the client's query, the specific attributes being tested and the computed value of the data quality metric. The proposed protocols employ additively homomorphic encryption in conjunction with condensed data representations such as counting hash tables and histograms, serving as efficient alternatives to solutions based on private set intersection.
Keywords: cryptographic protocols, data quality assessment, privacy and confidentiality (ID#: 15-4410)
URL: http://doi.acm.org/10.1145/2663876.2663885

 

Shweta Subramani, Mladen Vouk, Laurie Williams; An Analysis of Fedora Security Profile; HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 35. Doi:  10.1145/2600176.2600211 Abstract: This paper examines security faults/vulnerabilities reported for Fedora. Results indicate that, at least in some situations, fault roughly constant may be used to guide estimation of residual vulnerabilities in an already released product, as well as possibly guide testing of the next version of the product.
Keywords: Fedora, detection, non-operational testing, prediction, security faults, vulnerabilities (ID#: 15-4411)
URL: http://doi.acm.org/10.1145/2600176.2600211

 

Omer Tripp, Julia Rubin; A Bayesian Approach to Privacy Enforcement in Smartphones; SEC'14 Proceedings of the 23rd USENIX Conference on Security Symposium, August 2014, Pages 175-190. Doi: (none provided) Abstract: Mobile apps often require access to private data, such as the device ID or location. At the same time, popular platforms like Android and iOS have limited support for user privacy. This frequently leads to unauthorized disclosure of private information by mobile apps, e.g. for advertising and analytics purposes. This paper addresses the problem of privacy enforcement in mobile systems, which we formulate as a classification problem: When arriving at a privacy sink (e.g., database update or outgoing web message), the runtime system must classify the sink's behavior as either legitimate or illegitimate. The traditional approach of information-flow (or taint) tracking applies "binary" classification, whereby information release is legitimate iff there is no data flow from a privacy source to sink arguments. While this is a useful heuristic, it also leads to false alarms. We propose to address privacy enforcement as a learning problem, relaxing binary judgments into a quantitative/ probabilistic mode of reasoning. Specifically, we propose a Bayesian notion of statistical classification, which conditions the judgment whether a release point is legitimate on the evidence arising at that point. In our concrete approach, implemented as the BAYESDROID system that is soon to be featured in a commercial product, the evidence refers to the similarity between the data values about to be released and the private data stored on the device. Compared to TaintDroid, a state-of-the-art taint-based tool for privacy enforcement, BAYESDROID is substantially more accurate. Applied to 54 top-popular Google Play apps, BAYESDROID is able to detect 27 privacy violations with only 1 false alarm.
Keywords:  (not provided) (ID#: 15-4412)
URL: http://dl.acm.org/citation.cfm?id=2671225.2671237

 

Reid Priedhorsky, Aron Culotta, Sara Y. Del Valle; Inferring the Origin Locations of Tweets with Quantitative Confidence; CSCW '14 Proceedings of the 17th ACM Conference On Computer Supported Cooperative Work & Social Computing, February 2014, Pages 1523-1536. Doi: 10.1145/2531602.2531607 Abstract: Social Internet content plays an increasingly critical role in many domains, including public health, disaster management, and politics. However, its utility is limited by missing geographic information; for example, fewer than 1.6% of Twitter messages (tweets) contain a geotag. We propose a scalable, content-based approach to estimate the location of tweets using a novel yet simple variant of gaussian mixture models. Further, because real-world applications depend on quantified uncertainty for such estimates, we propose novel metrics of accuracy, precision, and calibration, and we evaluate our approach accordingly. Experiments on 13 million global, comprehensively multi-lingual tweets show that our approach yields reliable, well-calibrated results competitive with previous computationally intensive methods. We also show that a relatively small number of training data are required for good estimates (roughly 30,000 tweets) and models are quite time-invariant (effective on tweets many weeks newer than the training set). Finally, we show that toponyms and languages with small geographic footprint provide the most useful location signals.
Keywords: gaussian mixture models, geo-location, location inference, metrics, twitter, uncertainty quantification (ID#: 15-4413)
URL: http://doi.acm.org/10.1145/2531602.2531607
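A sketch of the kind of Gaussian-mixture location model described above: evaluate a two-component, 2-D isotropic mixture and take the highest-density component mean as the point estimate (the cities, weights, and bandwidths are invented for illustration, not taken from the paper):

```python
import math

def gmm_density(point, components):
    """Density of a 2-D isotropic Gaussian mixture at `point`.

    components: list of (weight, (mu_x, mu_y), sigma) tuples.
    """
    x, y = point
    total = 0.0
    for w, (mx, my), s in components:
        sq = ((x - mx) ** 2 + (y - my) ** 2) / (2 * s * s)
        total += w * math.exp(-sq) / (2 * math.pi * s * s)
    return total

# Toy model: a user who mostly tweets from New York, sometimes Los Angeles.
mixture = [
    (0.8, (40.71, -74.01), 0.5),   # New York component (lat, lon)
    (0.2, (34.05, -118.24), 0.5),  # Los Angeles component
]
# Point estimate: the component mean with the highest mixture density.
estimate = max((c[1] for c in mixture), key=lambda m: gmm_density(m, mixture))
```

The paper's calibration metrics then ask how often the true origin falls inside regions of a given predicted density.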

 

Rinkaj Goyal, Pravin Chandra, Yogesh Singh; Why Interaction Between Metrics Should be Considered in the Development of Software Quality Models: A Preliminary Study; ACM SIGSOFT Software Engineering Notes, Volume 39 Issue 4, July 2014, Pages 1-4. Doi: 10.1145/2632434.2659853 Abstract: This study examines the need to consider interactions between the measurements (metrics) of different quality factors in the development of software quality models. Though the correlation between metrics has been explored to a considerable depth in the development of these models, consideration of interactions between predictors is comparatively new in software engineering. This preliminary study is supported by statistically-proven results, differentiating interactions with correlation analysis. The issues raised here will assist analysts to improve empirical analyses by incorporating interactions in software quality model development, where amalgamating effects between different characteristics or subcharacteristics are observed.
Keywords: empirical software engineering, interaction, metrics, quality models, regression analysis, software fault prediction models (ID#: 15-4414)
URL: http://doi.acm.org/10.1145/2632434.2659853

 

Richard J. Oentaryo, Ee-Peng Lim, Jia-Wei Low, David Lo, Michael Finegold; Predicting Response in Mobile Advertising with Hierarchical Importance-Aware Factorization Machine; WSDM '14 Proceedings of the 7th ACM International Conference On Web Search And Data Mining, February 2014, Pages 123-132. Doi: 10.1145/2556195.2556240 Abstract: Mobile advertising has recently seen dramatic growth, fueled by the global proliferation of mobile phones and devices. The task of predicting ad response is thus crucial for maximizing business revenue. However, ad response data change dynamically over time, and are subject to cold-start situations in which limited history hinders reliable prediction. There is also a need for a robust regression estimation for high prediction accuracy, and good ranking to distinguish the impacts of different ads. To this end, we develop a Hierarchical Importance-aware Factorization Machine (HIFM), which provides an effective generic latent factor framework that incorporates importance weights and hierarchical learning. Comprehensive empirical studies on a real-world mobile advertising dataset show that HIFM outperforms the contemporary temporal latent factor models. The results also demonstrate the efficacy of the HIFM's importance-aware and hierarchical learning in improving the overall prediction and prediction in cold-start scenarios, respectively.
Keywords: factorization machine, hierarchy, importance weight, mobile advertising, response prediction (ID#: 15-4415)
URL: http://doi.acm.org/10.1145/2556195.2556240

 

Thomas Fritz, Andrew Begel, Sebastian C. Müller, Serap Yigit-Elliott, Manuela Züger; Using Psycho-Physiological Measures to Assess Task Difficulty in Software Development; ICSE 2014 Proceedings of the 36th International Conference on Software Engineering, May 2014, Pages 402-413. Doi: 10.1145/2568225.2568266 Abstract: Software developers make programming mistakes that cause serious bugs for their customers. Existing work to detect problematic software focuses mainly on post hoc identification of correlations between bug fixes and code. We propose a new approach to address this problem --- detect when software developers are experiencing difficulty while they work on their programming tasks, and stop them before they can introduce bugs into the code. In this paper, we investigate a novel approach to classify the difficulty of code comprehension tasks using data from psycho-physiological sensors. We present the results of a study we conducted with 15 professional programmers to see how well an eye-tracker, an electrodermal activity sensor, and an electroencephalography sensor could be used to predict whether developers would find a task to be difficult. We can predict nominal task difficulty (easy/difficult) for a new developer with 64.99% precision and 64.58% recall, and for a new task with 84.38% precision and 69.79% recall. We can improve the Naive Bayes classifier's performance if we trained it on just the eye-tracking data over the entire dataset, or by using a sliding window data collection schema with a 55 second time window. Our work brings the community closer to a viable and reliable measure of task difficulty that could power the next generation of programming support tools.
Keywords: psycho-physiological, study, task difficulty (ID#: 15-4416)
URL: http://doi.acm.org/10.1145/2568225.2568266
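The precision and recall figures reported above follow the standard definitions over true/false positive and false negative counts; for reference (the counts below are illustrative, not the study's):

```python
def precision_recall(tp, fp, fn):
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# E.g. 8 tasks correctly flagged as difficult, 2 easy tasks mis-flagged,
# and 4 difficult tasks missed:
p, r = precision_recall(tp=8, fp=2, fn=4)   # p = 0.8, r = 2/3
```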

 

Elvis S. Liu, Georgios K. Theodoropoulos; Interest Management for Distributed Virtual Environments: A Survey; ACM Computing Surveys (CSUR), Volume 46 Issue 4, April 2014, Article No. 51.  Doi: 10.1145/2535417 Abstract: The past two decades have witnessed an explosion in the deployment of large-scale distributed simulations and distributed virtual environments in different domains, including military and academic simulation systems, social media, and commercial applications such as massively multiplayer online games. As these systems become larger, more data intensive, and more latency sensitive, the optimisation of the flow of data, a paradigm referred to as interest management, has become increasingly critical to address the scalability requirements and enable their successful deployment. Numerous interest management schemes have been proposed for different application scenarios. This article provides a comprehensive survey of the state of the art in the design of interest management algorithms and systems. The scope of the survey includes current and historical projects providing a taxonomy of the existing schemes and summarising their key features. Identifying the primary requirements of interest management, the article discusses the trade-offs involved in the design of existing approaches.
Keywords: Interest management, data distribution management, distributed virtual environments, high-level architecture, massively multiplayer online games (ID#: 15-4417)
URL: http://doi.acm.org/10.1145/2535417

 

Tony Ohmann, Michael Herzberg, Sebastian Fiss, Armand Halbert, Marc Palyart, Ivan Beschastnikh, Yuriy Brun; Behavioral Resource-Aware Model Inference; ASE '14 Proceedings of the 29th ACM/IEEE International Conference On Automated Software Engineering, September 2014, Pages 19-30. Doi: 10.1145/2642937.2642988 Abstract: Software bugs often arise because of differences between what developers think their system does and what the system actually does. These differences frustrate debugging and comprehension efforts. We describe Perfume, an automated approach for inferring behavioral, resource-aware models of software systems from logs of their executions. These finite state machine models ease understanding of system behavior and resource use.  Perfume improves on the state of the art in model inference by differentiating behaviorally similar executions that differ in resource consumption. For example, Perfume separates otherwise identical requests that hit a cache from those that miss it, which can aid understanding how the cache affects system behavior and removing cache-related bugs. A small user study demonstrates that using Perfume is more effective than using logs and another model inference tool for system comprehension. A case study on the TCP protocol demonstrates that Perfume models can help understand non-trivial protocol behavior. Perfume models capture key system properties and improve system comprehension, while being reasonably robust to noise likely to occur in real-world executions.
Keywords: comprehension, debugging, log analysis, model inference, performance, perfume, system understanding (ID#: 15-4418)
URL: http://doi.acm.org/10.1145/2642937.2642988

 

Moshe Lichman, Padhraic Smyth; Modeling Human Location Data with Mixtures of Kernel Densities; KDD '14 Proceedings of the 20th ACM SIGKDD International Conference On Knowledge Discovery And Data Mining, August 2014, Pages 35-44. Doi: 10.1145/2623330.2623681 Abstract: Location-based data is increasingly prevalent with the rapid increase and adoption of mobile devices. In this paper we address the problem of learning spatial density models, focusing specifically on individual-level data. Modeling and predicting a spatial distribution for an individual is a challenging problem given both (a) the typical sparsity of data at the individual level and (b) the heterogeneity of spatial mobility patterns across individuals. We investigate the application of kernel density estimation (KDE) to this problem using a mixture model approach that can interpolate between an individual's data and broader patterns in the population as a whole. The mixture-KDE approach is evaluated on two large geolocation/check-in data sets, from Twitter and Gowalla, with comparisons to non-KDE baselines, using both log-likelihood and detection of simulated identity theft as evaluation metrics. Our experimental results indicate that the mixture-KDE method provides a useful and accurate methodology for capturing and predicting individual-level spatial patterns in the presence of noisy and sparse data.
Keywords: anomaly/novelty detection, kernel density estimation, probabilistic methods, social media, spatial, user modeling (ID#: 15-4419)
URL: http://doi.acm.org/10.1145/2623330.2623681

 

Ramesh A., Anusha J., Clarence J.M. Tauro; A Novel, Generalized Recommender System for Social Media Using the Collaborative-Filtering Technique; ACM SIGSOFT Software Engineering Notes, Volume 39 Issue 3, May 2014, Pages 1-4. Doi: 10.1145/2597716.2597721 Abstract: Our goal in this paper is to discuss various methods available for Recommender Systems and describe an end-to-end approach for designing a Recommender System for social media using the collaborative-filtering approach. We will discuss the scope of contributions made in the recommender-system field, pros and cons for the collaborative-filtering approach, and current trends and challenges involved in the market with respect to the implementation of collaborative filtering.
Keywords: algorithms, collaborative filtering, recommendation, recommender systems, social media (ID#: 15-4420)
URL: http://doi.acm.org/10.1145/2597716.2597721
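As a concrete illustration of user-based collaborative filtering (a textbook-style sketch, not the specific system designed in the paper): a rating is predicted as a similarity-weighted average over other users' ratings of the same item.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity of two users' rating dicts over co-rated items."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def predict(user, item, ratings):
    """Similarity-weighted average of neighbours' ratings for `item`."""
    num = den = 0.0
    for other, r in ratings.items():
        if other == user or item not in r:
            continue
        s = cosine_sim(ratings[user], r)
        num += s * r[item]
        den += abs(s)
    return num / den if den else None
```

The pros and cons discussed in the paper (cold start, sparsity, scalability) all show up even in this minimal form: `predict` returns `None` when no neighbour has rated the item.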

 

Anna C. Squicciarini, Cornelia Caragea, Rahul Balakavi; Analyzing Images' Privacy for the Modern Web; HT '14 Proceedings of the 25th ACM Conference On Hypertext And Social Media, September 2014, pages 136-147.  Doi: 10.1145/2631775.2631803 Abstract: Images are now one of the most common form of content shared in online user-contributed sites and social Web 2.0 applications. In this paper, we present an extensive study exploring privacy and sharing needs of users' uploaded images. We develop learning models to estimate adequate privacy settings for newly uploaded images, based on carefully selected image-specific features. We focus on a set of visual-content features and on tags. We identify the smallest set of features, that by themselves or combined together with others, can perform well in properly predicting the degree of sensitivity of users' images. We consider both the case of binary privacy settings (i.e. public, private), as well as the case of more complex privacy options, characterized by multiple sharing options. Our results show that with few carefully selected features, one may achieve extremely high accuracy, especially when high-quality tags are available.
Keywords: privacy, image analysis (ID#: 15-4421)
URL: http://doi.acm.org/10.1145/2631775.2631803

 

Ramin Moazeni, Daniel Link, Barry Boehm; COCOMO II Parameters and IDPD: Bilateral Relevances; ICSSP 2014 Proceedings of the 2014 International Conference on Software and System Process, May 2014, Pages 20-24. Doi: 10.1145/2600821.2600847 Abstract: The phenomenon called Incremental Development Productivity Decline (IDPD) is presumed to be present in all incremental soft-ware projects to some extent. COCOMO II is a popular parametric cost estimation model that has not yet been adapted to account for the challenges that IDPD poses to cost estimation. Instead, its cost driver and scale factors stay constant throughout the increments of a project. While a simple response could be to make these parameters variable per increment, questions are raised as to whether the existing parameters are enough to predict the behavior of an incrementally developed project even in that case. Individual COCOMO II parameters are evaluated with regard to their development over the course of increments and how they influence IDPD. The reverse is also done. In light of data collected in recent experimental projects, additional new variable parameters that either extend COCOMO II or could stand on their own are proposed.
Keywords: IDPD, Parametric cost estimation, cost drivers, incremental development, scale factors (ID#: 15-4422)
URL: http://doi.acm.org/10.1145/2600821.2600847
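For reference, the COCOMO II post-architecture effort equation the paper builds on is PM = A · Size^E · ∏EM with E = B + 0.01 · ΣSF; the constants below are the published COCOMO II.2000 calibration. The per-increment IDPD inflation factor is a hypothetical illustration of letting parameters vary across increments, not the authors' proposed parameters.

```python
import math

def cocomo_effort(ksloc, scale_factors, effort_multipliers,
                  a=2.94, b=0.91):
    """COCOMO II post-architecture effort in person-months:
    PM = A * Size^E * prod(EM), with E = B + 0.01 * sum(SF)."""
    exponent = b + 0.01 * sum(scale_factors)
    return a * ksloc ** exponent * math.prod(effort_multipliers)

def effort_per_increment(increment_sizes_ksloc, idpd_rate, **kw):
    """Hypothetical per-increment variant: inflate effort by an assumed
    IDPD productivity-decline factor on each successive increment."""
    return [cocomo_effort(s, [], [1.0], **kw) * (1.0 + idpd_rate) ** i
            for i, s in enumerate(increment_sizes_ksloc)]
```

Even with constant increment sizes, a nonzero `idpd_rate` makes later increments cost more, which is the phenomenon the paper argues constant COCOMO II parameters fail to capture.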

 

Nicolás E. Bordenabe, Konstantinos Chatzikokolakis, Catuscia Palamidessi; Optimal Geo-Indistinguishable Mechanisms for Location Privacy; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November, 2014, Pages 251-262. Doi: 10.1145/2660267.2660345 Abstract: We consider the geo-indistinguishability approach to location privacy, and the trade-off with respect to utility. We show that, given a desired degree of geo-indistinguishability, it is possible to construct a mechanism that minimizes the service quality loss, using linear programming techniques. In addition we show that, under certain conditions, such mechanism also provides optimal privacy in the sense of Shokri et al. Furthermore, we propose a method to reduce the number of constraints of the linear program from cubic to quadratic, maintaining the privacy guarantees and without affecting significantly the utility of the generated mechanism. This reduces considerably the time required to solve the linear program, thus enlarging significantly the location sets for which the optimal mechanisms can be computed.
Keywords: differential privacy, geo-indistinguishability, linear optimization, location obfuscation, location privacy (ID#: 15-4423)
URL: http://doi.acm.org/10.1145/2660267.2660345
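The defining geo-indistinguishability constraint, P(z|x) ≤ e^(ε·d(x,x')) · P(z|x') for every pair of true locations, is easy to check directly. The sketch below only verifies the property for a discrete mechanism matrix; constructing the utility-optimal mechanism, as the paper does, additionally requires solving a linear program.

```python
import math

def is_geo_indistinguishable(mechanism, locations, eps):
    """mechanism[i][z] = P(report z | true location i). Returns True iff
    P(z|x) <= e^(eps * d(x, x')) * P(z|x') for all x, x', z."""
    n, m = len(locations), len(mechanism[0])
    for i in range(n):
        for j in range(n):
            d = math.hypot(locations[i][0] - locations[j][0],
                           locations[i][1] - locations[j][1])
            bound = math.exp(eps * d)
            for z in range(m):
                if mechanism[i][z] > bound * mechanism[j][z]:
                    return False
    return True
```

A uniform mechanism trivially satisfies the constraint (at zero utility); a mechanism that reports locations too faithfully violates it.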

 

Elizeu Santos-Neto, Tatiana Pontes, Jussara Almeida, Matei Ripeanu; On the Choice of Data Sources to Improve Content Discoverability via Textual Feature Optimization; HT '14 Proceedings of the 25th ACM Conference On Hypertext And Social Media, September 2014, Pages 273-278. Doi: 10.1145/2631775.2631815 Abstract: A large portion of the audience of video content items on the web currently comes from keyword-based search and/or tag-based navigation. Thus, the textual features of this content (e.g., the title, description, and tags) can directly impact the view count of a particular content item, and ultimately the advertisement generated revenue. More importantly, the textual features can generally be optimized to attract more search traffic. This study makes progress on the problem of automating tag selection for online video content with the goal of increasing viewership. It brings two key insights: first, based on evidence that existing tags for YouTube videos can be improved by an automated tag recommender, even for a sample of well curated movies, it explores the impact of using information mined from repositories created by different production modes (e.g., peer- and expert-produced); second, this study performs a preliminary characterization of the factors that impact the quality of the tag recommendation pipeline for different input data sources.
Keywords: peer-production, social tagging, video popularity (ID#: 15-4423)
URL: http://doi.acm.org/10.1145/2631775.2631815

 

Yu Zheng, Abhishek Basak, Swarup Bhunia; CACI: Dynamic Current Analysis Towards Robust Recycled Chip Identification; DAC '14 Proceedings of the 51st Annual Design Automation Conference, June 2014, Pages 1-6. Doi: 10.1145/2593069.2593102  Abstract: Rising incidences of counterfeit chips in the supply chain have posed a serious threat to the semiconductor industry. Recycling of used chips constitutes a major form of counterfeiting attacks. If undetected, they can lead to serious consequences including system performance/reliability issues during field operation and potential revenue/reputation loss for a trusted manufacturer. Existing validation approaches based on path delay analysis suffer from reduced robustness and sensitivity under large process variations. On the other hand, existing design solutions based on aging sensors require additional design/verification efforts and cannot be applied to legacy chips. In this paper, we present a novel recycled chip identification approach, CACI, that exploits differential aging in self-similar modules (e.g., different parts of an adder) to isolate aged chips under large inter- and intra-die process variations. It compares dynamic current (IDDT) signatures between two adjacent similar circuit structures in a chip. We derive an isolation metric based on multiple current comparisons to provide high level of confidence. CACI does not rely on any embedded structures for authentication, thus it comes at virtually zero design overhead and can be applied to chips already in the market. Through extensive simulations, we show that for 15% inter- and 10% intra-die variations in threshold voltage for a 45nm CMOS process, over 97% of recycled chips can be reliably identified.
Keywords: BTI, Counterfeiting attack, Hardware security, Recycled chip (ID#: 15-4424)
URL: http://doi.acm.org/10.1145/2593069.2593102

 

Ron Eyal, Avi Rosenfeld, Sigal Sina, Sarit Kraus; Predicting and Identifying Missing Node Information in Social Networks; ACM Transactions on Knowledge Discovery from Data (TKDD), Volume 8 Issue 3, June 2014, Article No. 14. Doi:  10.1145/2536775  Abstract: In recent years, social networks have surged in popularity. One key aspect of social network research is identifying important missing information that is not explicitly represented in the network, or is not visible to all. To date, this line of research typically focused on finding the connections that are missing between nodes, a challenge typically termed as the link prediction problem. This article introduces the missing node identification problem, where missing members in the social network structure must be identified. In this problem, indications of missing nodes are assumed to exist. Given these indications and a partial network, we must assess which indications originate from the same missing node and determine the full network structure. Toward solving this problem, we present the missing node identification by spectral clustering algorithm (MISC), an approach based on a spectral clustering algorithm, combined with nodes’ pairwise affinity measures that were adopted from link prediction research. We evaluate the performance of our approach in different problem settings and scenarios, using real-life data from Facebook. The results show that our approach has beneficial results and can be effective in solving the missing node identification problem. In addition, this article also presents R-MISC, which uses a sparse matrix representation, efficient algorithms for calculating the nodes’ pairwise affinity, and a proprietary dimension reduction technique to enable scaling the MISC algorithm to large networks of more than 100,000 nodes. Last, we consider problem settings where some of the indications are unknown. 
Two algorithms are suggested for this problem: speculative MISC, based on MISC, and missing link completion, based on classical link prediction literature. We show that speculative MISC outperforms missing link completion.
Keywords: Social networks, missing nodes, spectral clustering (ID#: 15-4425)
URL: http://doi.acm.org/10.1145/2536775
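The pairwise affinity measures adopted from link-prediction research are simple to compute; one common choice is the Adamic-Adar score, shown here as an illustrative example (the paper evaluates several such measures within its spectral-clustering pipeline).

```python
import math

def adamic_adar(graph, u, v):
    """Adamic-Adar affinity between nodes u and v: sum of 1/log(degree)
    over their common neighbours. `graph` maps a node to its neighbour set.
    Low-degree common neighbours count more than high-degree hubs."""
    common = graph[u] & graph[v]
    return sum(1.0 / math.log(len(graph[w]))
               for w in common if len(graph[w]) > 1)
```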

 

Tung Thanh Nguyen, Evelyn Duesterwald, Tim Klinger, P. Santhanam, Tien N. Nguyen; Characterizing Defect Trends in Software Support; ICSE Companion 2014 Companion Proceedings of the 36th International Conference on Software Engineering, May 2014, pages 508-511. Doi: 10.1145/2591062.2591112  Abstract: We present an empirical analysis of defect arrival data in the operational phase of multiple software products. We find that the shape of the defect curves is sufficiently determined by three external and readily available release cycle attributes: the product type, the license model, and the cycle time between releases. This finding provides new insights into the driving forces affecting the specifics of defect curves and opens up new opportunities for software support organizations to reduce the cost of maintaining defect arrival models for individual products. In addition, it allows the possibility of predicting the defect arrival rate of one product from another with similar known attributes.
Keywords: Empirical study, operational phase, post release defects modeling (ID#: 15-4426)
URL: http://doi.acm.org/10.1145/2591062.2591112

 

Emre Sarigol, David Garcia, Frank Schweitzer; Online Privacy as a Collective Phenomenon; COSN '14 Proceedings of the Second ACM Conference On Online Social Networks, October 2014, Pages 95-106. Doi: 10.1145/2660460.2660470 Abstract: The problem of online privacy is often reduced to individual decisions to hide or reveal personal information in online social networks (OSNs). However, with the increasing use of OSNs, it becomes more important to understand the role of the social network in disclosing personal information that a user has not revealed voluntarily: How much of our private information do our friends disclose about us, and how much of our privacy is lost simply because of online social interaction? Without strong technical effort, an OSN may be able to exploit the assortativity of human private features, this way constructing shadow profiles with information that users chose not to share. Furthermore, because many users share their phone and email contact lists, this allows an OSN to create full shadow profiles for people who do not even have an account for this OSN.  We empirically test the feasibility of constructing shadow profiles of sexual orientation for users and non-users, using data from more than 3 Million accounts of a single OSN. We quantify a lower bound for the predictive power derived from the social network of a user, to demonstrate how the predictability of sexual orientation increases with the size of this network and the tendency to share personal information. This allows us to define a privacy leak factor that links individual privacy loss with the decision of other individuals to disclose information. Our statistical analysis reveals that some individuals are at a higher risk of privacy loss, as prediction accuracy increases for users with a larger and more homogeneous first- and second-order neighborhood of their social network. 
While we do not provide evidence that shadow profiles exist at all, our results show that disclosing of private information is not restricted to an individual choice, but becomes a collective decision that has implications for policy and privacy regulation.
Keywords: prediction, privacy, shadow profiles (ID#: 15-4427)
URL: http://doi.acm.org/10.1145/2660460.2660470
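The core inference risk can be illustrated with a toy relational classifier: predict a user's hidden attribute by majority vote over friends who disclosed theirs. This is an assumption-laden sketch of the general mechanism; the paper's statistical analysis of predictive power is considerably more careful.

```python
from collections import Counter

def infer_attribute(user, friends_of, disclosed):
    """Predict an undisclosed attribute of `user` by majority vote over
    friends who disclosed it. Returns (value, confidence), or (None, 0.0)
    when no friend disclosed anything."""
    votes = Counter(disclosed[f] for f in friends_of[user] if f in disclosed)
    if not votes:
        return None, 0.0
    value, count = votes.most_common(1)[0]
    return value, count / sum(votes.values())
```

The confidence value grows with a larger and more homogeneous disclosing neighbourhood, mirroring the privacy-leak factor the paper defines.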

 

Vasileios Kagklis, Vassilios S. Verykios, Giannis Tzimas, Athanasios K. Tsakalidis; Knowledge Sanitization on the Web; WIMS '14 Proceedings of the 4th International Conference on Web Intelligence, Mining and Semantics (WIMS14), June 2014, Article No. 4. Doi: 10.1145/2611040.2611044 Abstract: The widespread use of the Internet caused the rapid growth of data on the Web. But as data on the Web grew larger in numbers, so did the perils due to the applications of data mining. Privacy preserving data mining (PPDM) is the field that investigates techniques to preserve the privacy of data and patterns. Knowledge Hiding, a subfield of PPDM, aims at preserving the sensitive patterns included in the data, which are going to be published. A wide variety of techniques fall under the umbrella of Knowledge Hiding, such as frequent pattern hiding, sequence hiding, classification rule hiding and so on.  In this tutorial we create a taxonomy for the frequent itemset hiding techniques. We also provide as examples for each category representative works that appeared recently and fall into each one of these categories. Then, we focus on the detailed overview of a specific category, the so called linear programming-based techniques. Finally, we make a quantitative and qualitative comparison among some of the existing techniques that are classified into this category.
Keywords: Frequent Itemset Hiding, Knowledge Hiding, LP-Based Hiding Approaches, Privacy Preserving Data Mining; (ID#: 15-4428)
URL: http://doi.acm.org/10.1145/2611040.2611044
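A minimal example of the knowledge-hiding idea, using naive greedy item deletion rather than the linear-programming-based techniques the tutorial surveys: remove items from supporting transactions until a sensitive itemset's support falls below the mining threshold.

```python
def support(transactions, itemset):
    """Number of transactions containing every item in `itemset`."""
    return sum(1 for t in transactions if itemset <= t)

def hide_itemset(transactions, sensitive, min_support):
    """Greedily delete one item of `sensitive` from supporting
    transactions (sets, mutated in place) until the itemset's support
    drops below `min_support`."""
    victim = next(iter(sensitive))
    for t in transactions:
        if support(transactions, sensitive) < min_support:
            break
        if sensitive <= t:
            t.discard(victim)
    return transactions
```

Greedy deletion illustrates the side-effect problem the tutorial discusses: deleting items can also hide non-sensitive patterns, which is exactly what the LP-based formulations try to minimise.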

 

Fei Xing, Haihang You; Workload Aware Utilization Optimization for a Petaflop Supercomputer: Evidence Based Assessment Using Statistical Methods; XSEDE '14 Proceedings of the 2014 Annual Conference on Extreme Science and Engineering Discovery Environment, July 2014, Article No. 50. Doi: 10.1145/2616498.2616536  Abstract: Nowadays, computing resources like supercomputers are shared by many users. Most systems are equipped with batch systems as their resource managers. From a user's perspective, the overall turnaround of each submitted job is measured by time-to-solution, which consists of the sum of batch queuing time and execution time. On a busy machine, most jobs spend more time waiting in the batch queue than in actual execution, yet this is rarely a topic of performance tuning and optimization in parallel computing. We propose a workload-aware method to systematically predict jobs' batch-queue waiting-time patterns. Consequently, it will help users optimize utilization and improve productivity. With workload data gathered from a supercomputer, we apply a Bayesian framework to predict the temporal trend of long-time batch-queue waiting probability. Thus, not only can the workload of the machine be predicted, but we are also able to provide users with a monthly updated reference chart suggesting job submissions with better-chosen CPU counts and running-time requests, which will avoid long waits in the batch queue. Our experiment shows that the model could make over 89% correct predictions for all cases we have tested.
Keywords: Kraken, near repeat, queuing time, workload (ID#: 15-4429)
URL: http://doi.acm.org/10.1145/2616498.2616536
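A stripped-down version of the Bayesian idea: model the probability of a long queue wait per submission configuration with a Beta-Binomial posterior, then tabulate it as a reference chart. The conjugate Beta prior and the (cpus, walltime) bucketing are assumptions for illustration, not the paper's exact model.

```python
def long_wait_posterior(long_waits, total_jobs, alpha=1.0, beta=1.0):
    """Posterior mean of the long-wait probability under a
    Beta(alpha, beta) prior with Binomial observations."""
    return (alpha + long_waits) / (alpha + beta + total_jobs)

def reference_chart(history, alpha=1.0, beta=1.0):
    """history maps (cpus, walltime_hours) -> (long_waits, total_jobs);
    returns the posterior long-wait probability per configuration."""
    return {cfg: long_wait_posterior(k, n, alpha, beta)
            for cfg, (k, n) in history.items()}
```

A user consulting such a chart would prefer the submission configurations with the lowest posterior long-wait probability.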

 

Stratis Ioannidis, Andrea Montanari, Udi Weinsberg, Smriti Bhagat, Nadia Fawaz, Nina Taft; Privacy Tradeoffs in Predictive Analytics; SIGMETRICS '14 The 2014 ACM International Conference On Measurement And Modeling Of Computer Systems, June 2014, Pages 57-69  Doi: 10.1145/2591971.2592011 Abstract: Online services routinely mine user data to predict user preferences, make recommendations, and place targeted ads. Recent research has demonstrated that several private user attributes (such as political affiliation, sexual orientation, and gender) can be inferred from such data. Can a privacy-conscious user benefit from personalization while simultaneously protecting her private attributes? We study this question in the context of a rating prediction service based on matrix factorization. We construct a protocol of interactions between the service and users that has remarkable optimality properties: it is privacy-preserving, in that no inference algorithm can succeed in inferring a user's private attribute with a probability better than random guessing; it has maximal accuracy, in that no other privacy-preserving protocol improves rating prediction; and, finally, it involves a minimal disclosure, as the prediction accuracy strictly decreases when the service reveals less information. We extensively evaluate our protocol using several rating datasets, demonstrating that it successfully blocks the inference of gender, age and political affiliation, while incurring less than 5% decrease in the accuracy of rating prediction.
Keywords: matrix factorization, privacy-preserving protocols (ID#: 15-4430)
URL: http://doi.acm.org/10.1145/2591971.2592011

 

Ramin Moazeni, Daniel Link, Celia Chen, Barry Boehm; Software Domains in Incremental Development Productivity Decline; ICSSP 2014 Proceedings of the 2014 International Conference on Software and System Process, May 2014, Pages 75-83. Doi: 10.1145/2600821.2600830  Abstract: This research paper expands on a previously introduced phenomenon called Incremental Development Productivity Decline (IDPD) that is presumed to be present in all incremental software projects to some extent. Incremental models are now being used by many organizations in order to reduce development risks. Incremental development has become the most common method of software development. Therefore its characteristics inevitably influence the productivity of projects. Based on their observed IDPD, incrementally developed projects are split into several major IDPD categories. Different ways of measuring productivity are presented and evaluated in order to come to a definition or set of definitions that is suitable to these categories of projects. Data has been collected and analyzed, indicating the degree of IDPD associated with each category. Several hypotheses have undergone preliminary evaluations regarding the existence, stability and category-dependence of IDPD with encouraging results. Further data collection and hypothesis testing is underway.
Keywords: Software engineering, incremental development, productivity decline, statistics (ID#: 15-4431)
URL: http://doi.acm.org/10.1145/2600821.2600830

 

Sangho Lee, Changhee Jung, Santosh Pande; Detecting Memory Leaks Through Introspective Dynamic Behavior Modelling Using Machine Learning; ICSE 2014 Proceedings of the 36th International Conference on Software Engineering, May 2014, Pages 814-824. Doi: 10.1145/2568225.2568307 Abstract: This paper expands staleness-based memory leak detection by presenting a machine learning-based framework. The proposed framework is based on an idea that object staleness can be better leveraged in regard to similarity of objects; i.e., an object is more likely to have leaked if it shows significantly high staleness not observed from other similar objects with the same allocation context.  A central part of the proposed framework is the modeling of heap objects. To this end, the framework observes the staleness of objects during a representative run of an application. From the observed data, the framework generates training examples, which also contain instances of hypothetical leaks. Via machine learning, the proposed framework replaces the error-prone user-definable staleness predicates used in previous research with a model-based prediction.  The framework was tested using both synthetic and real-world examples. Evaluation with synthetic leakage workloads of SPEC2006 benchmarks shows that the proposed method achieves the optimal accuracy permitted by staleness-based leak detection. Moreover, by incorporating allocation context into the model, the proposed method achieves higher accuracy than is possible with object staleness alone. Evaluation with real-world memory leaks demonstrates that the proposed method is effective for detecting previously reported bugs with high accuracy.
Keywords: Machine learning, Memory leak detection, Runtime analysis (ID#: 15-4432)
URL: http://doi.acm.org/10.1145/2568225.2568307
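The staleness-relative-to-similar-objects intuition can be sketched with a plain z-score per allocation site. This is a toy stand-in for the paper's learned model; the 2-standard-deviation threshold is an arbitrary assumption.

```python
import math
from collections import defaultdict

def leak_candidates(objects, threshold=2.0):
    """objects: list of (alloc_site, staleness) pairs. Flag objects whose
    staleness lies more than `threshold` standard deviations above the
    mean staleness of objects from the same allocation site."""
    by_site = defaultdict(list)
    for site, s in objects:
        by_site[site].append(s)
    flagged = []
    for site, s in objects:
        vals = by_site[site]
        mean = sum(vals) / len(vals)
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
        if std > 0 and (s - mean) / std > threshold:
            flagged.append((site, s))
    return flagged
```

Grouping by allocation site is the key move: an object is suspicious only relative to its peers, not in absolute terms.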

 

Mardé Helbig, Andries P. Engelbrecht; Benchmarks for Dynamic Multi-Objective Optimisation Algorithms; ACM Computing Surveys (CSUR), Volume 46 Issue 3, January 2014, Article No. 37. Doi: 10.1145/2517649 Abstract: Algorithms that solve Dynamic Multi-Objective Optimisation Problems (DMOOPs) should be tested on benchmark functions to determine whether the algorithm can overcome specific difficulties that can occur in real-world problems. However, for Dynamic Multi-Objective Optimisation (DMOO), no standard benchmark functions are used. A number of DMOOPs have been proposed in recent years. However, no comprehensive overview of DMOOPs exist in the literature. Therefore, choosing which benchmark functions to use is not a trivial task. This article seeks to address this gap in the DMOO literature by providing a comprehensive overview of proposed DMOOPs, and proposing characteristics that an ideal DMOO benchmark function suite should exhibit. In addition, DMOOPs are proposed for each characteristic. Shortcomings of current DMOOPs that do not address certain characteristics of an ideal benchmark suite are highlighted. These identified shortcomings are addressed by proposing new DMOO benchmark functions with complicated Pareto-Optimal Sets (POSs), and approaches to develop DMOOPs with either an isolated or deceptive Pareto-Optimal Front (POF). In addition, DMOO application areas and real-world DMOOPs are discussed.
Keywords: Dynamic multi-objective optimisation, benchmark functions, complex Pareto-optimal set, deceptive Pareto-optimal front, ideal benchmark function suite, isolated Pareto-optimal front (ID#: 15-4433)
URL: http://doi.acm.org/10.1145/2517649

 

Yu Zhang, Daby Sow, Deepak Turaga, Mihaela van der Schaar; A Fast Online Learning Algorithm for Distributed Mining of BigData; ACM SIGMETRICS Performance Evaluation Review, Volume 41 Issue 4, March 2014, Pages 90-93. Doi: 10.1145/2627534.2627562 Abstract: BigData analytics require that distributed mining of numerous data streams is performed in real-time. Unique challenges associated with designing such distributed mining systems are: online adaptation to incoming data characteristics, online processing of large amounts of heterogeneous data, limited data access and communication capabilities between distributed learners, etc. We propose a general framework for distributed data mining and develop an efficient online learning algorithm based on this. Our framework consists of an ensemble learner and multiple local learners, which can only access different parts of the incoming data. By exploiting the correlations of the learning models among local learners, our proposed learning algorithms can optimize the prediction accuracy while requiring significantly less information exchange and computational complexity than existing state-of-the-art learning solutions.
Keywords:  (not provided) (ID#: 15-4434)
URL: http://doi.acm.org/10.1145/2627534.2627562
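One classical way to combine distributed local learners with little communication is multiplicative-weights ensembling; the sketch below illustrates that general idea, not the specific algorithm proposed in the paper.

```python
def weighted_majority(predictions, weights):
    """Ensemble prediction over {-1, +1} local predictions:
    the sign of the weight-weighted vote."""
    score = sum(w * p for p, w in zip(predictions, weights))
    return 1 if score >= 0 else -1

def update_weights(predictions, weights, truth, eta=0.5):
    """Multiplicatively penalise local learners that erred on this
    round's true label; only the scalar predictions need be exchanged."""
    return [w * (eta if p != truth else 1.0)
            for p, w in zip(predictions, weights)]
```

The communication cost per round is one prediction per local learner, which is the kind of limited information exchange the paper targets.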

 

Sudarshan Srinivasan, Victor Hazlewood, Gregory D. Peterson; Descriptive Data Analysis of File Transfer Data; XSEDE '14 Proceedings of the 2014 Annual Conference on Extreme Science and Engineering Discovery Environment, July 2014, Article No. 37. Doi: 10.1145/2616498.2616550  Abstract: There are millions of files and multi-terabytes of data transferred to and from the University of Tennessee's National Institute for Computational Sciences each month. New capabilities available with GridFTP version 5.2.2 include additional transfer log information previously unavailable in prior versions implemented within XSEDE. The transfer log data now available includes identification of source and destination endpoints which unlocks a wealth of information that can be used to detail GridFTP activities across the Internet. This information can be used for a wide variety of reports of interest to individual XSEDE Service Providers and to XSEDE Operations. In this paper, we discuss the new capabilities available for transfer logs in GridFTP 5.2.2, our initial attempt to organize, analyze, and report on this file transfer data for NICS, and its applicability to XSEDE Service Providers. Analysis of this new information can provide insight into effective and efficient utilization of GridFTP resources including identification of potential areas of GridFTP file transfer improvement (e.g., network and server tuning) and potential predictive analysis to improve efficiency.
Keywords: Log transfer, data analysis, database loading (ID#: 15-4435)
URL: http://doi.acm.org/10.1145/2616498.2616550

 

Robert Lagerström, Mathias Ekstedt; Extending a General Theory of Software to Engineering; GTSE 2014 Proceedings of the 3rd SEMAT Workshop on General Theories of Software Engineering, June 2014, Pages 36-39. Doi: 10.1145/2593752.2593759 Abstract: In this paper, we briefly describe a general theory of software used in order to model and predict the current and future quality of software systems and their environment. The general theory is described using a class model containing classes such as application component, business service, and infrastructure function as well as attributes such as modifiability, cost, and availability. We also elaborate how this general theory of software can be extended into a general theory of software engineering by adding engineering activities, roles, and requirements.
Keywords: General theory, Software engineering, Software systems, and Software quality prediction (ID#: 15-4436)
URL: http://doi.acm.org/10.1145/2593752.2593759

 

Xiuchao Wu, Kenneth N. Brown, Cormac J. Sreenan; Data Pre-Forwarding for Opportunistic Data Collection in Wireless Sensor Networks; ACM Transactions on Sensor Networks (TOSN), Volume 11 Issue 1, November 2014,  Article No. 8. Doi: 10.1145/2629369  Abstract: Opportunistic data collection in wireless sensor networks uses passing smartphones to collect data from sensor nodes, thus avoiding the cost of multiple static sink nodes. Based on the observed mobility patterns of smartphone users, sensor data should be preforwarded to the nodes that are visited more frequently with the aim of improving network throughput. In this article, we construct a formal network model and an associated theoretical optimization problem to maximize the throughput subject to energy constraints of sensor nodes. Since a centralized controller is not available in opportunistic data collection, data pre-forwarding (DPF) must operate as a distributed mechanism in which each node decides when and where to forward data based on local information. Hence, we develop a simple distributed DPF mechanism with two heuristic algorithms, implement this proposal in Contiki-OS, and evaluate it thoroughly. We demonstrate empirically, in simulations, that our approach is close to the optimal solution obtained by a centralized algorithm. We also demonstrate that this approach performs well in scenarios based on real mobility traces of smartphone users. Finally, we evaluate our proposal on a small laboratory testbed, demonstrating that the distributed DPF mechanism with heuristic algorithms performs as predicted by simulations, and thus that it is a viable technique for opportunistic data collection through smartphones.
Keywords: Wireless sensor network, data pre-forwarding, human mobility, opportunistic data collection, routing, smartphone (ID#: 15-4437)
URL: http://doi.acm.org/10.1145/2629369

 

Qiang Fu, Jieming Zhu, Wenlu Hu, Jian-Guang Lou, Rui Ding, Qingwei Lin, Dongmei Zhang, Tao Xie; Where Do Developers Log? An Empirical Study on Logging Practices in Industry;  ICSE Companion 2014 Companion Proceedings of the 36th International Conference on Software Engineering, May 2014, Pages 24-33. Doi: 10.1145/2591062.2591175 Abstract: System logs are widely used in various tasks of software system management. It is crucial to avoid logging too little or too much. To achieve so, developers need to make informed decisions on where to log and what to log in their logging practices during development. However, there exists no work on studying such logging practices in industry or helping developers make informed decisions. To fill this significant gap, in this paper, we systematically study the logging practices of developers in industry, with focus on where developers log. We obtain six valuable findings by conducting source code analysis on two large industrial systems (2.5M and 10.4M LOC, respectively) at Microsoft. We further validate these findings via a questionnaire survey with 54 experienced developers in Microsoft. In addition, our study demonstrates the high accuracy of up to 90% F-Score in predicting where to log.
Keywords: Logging practice, automatic logging, developer survey (ID#: 15-4438)
URL: http://doi.acm.org/10.1145/2591062.2591175

 

Martina Maggio, Federico Terraneo, Alberto Leva; Task Scheduling: A Control-Theoretical Viewpoint for a General and Flexible Solution; ACM Transactions on Embedded Computing Systems (TECS) - Regular Papers, Volume 13 Issue 4, November 2014, Article No. 76. Doi: 10.1145/2560015  Abstract: This article presents a new approach to the design of task scheduling algorithms, where system-theoretical methodologies are used throughout. The proposal implies a significant perspective shift with respect to mainstream design practices, but yields large payoffs in terms of simplicity, flexibility, solution uniformity for different problems, and possibility to formally assess the results also in the presence of unpredictable run-time situations. A complete implementation example is illustrated, together with various comparative tests, and a methodological treatise of the matter.
Keywords: Task scheduling, control-based system design, discrete-time dynamic systems, feedback control, formal assessment (ID#: 15-4439)
URL: http://doi.acm.org/10.1145/2560015

 

Sebastian Zander, Lachlan L.H. Andrew, Grenville Armitage; Capturing Ghosts: Predicting the Used IPv4 Space by Inferring Unobserved Addresses; IMC '14 Proceedings of the 2014 Conference on Internet Measurement Conference, November 2014, Pages 319-332. Doi: 10.1145/2663716.2663718 Abstract: The pool of unused routable IPv4 prefixes is dwindling, with less than 4% remaining for allocation at the end of June 2014. Yet the adoption of IPv6 remains slow. We demonstrate a new capture-recapture technique for improved estimation of the size of "IPv4 reserves" (allocated yet unused IPv4 addresses or routable prefixes) from multiple incomplete data sources. A key contribution of our approach is the plausible estimation of both observed and unobserved-yet-active (ghost) IPv4 address space. This significantly improves our community's understanding of IPv4 address space exhaustion and likely pressure for IPv6 adoption. Using "ping scans", network traces and server logs we estimate that 6.3 million /24 subnets and 1.2 billion IPv4 addresses are currently in use (roughly 60% and 45% of the publicly routed space respectively). We also show how utilisation has changed over the last 2--3 years and provide an up-to-date estimate of potentially-usable remaining IPv4 space.
Keywords: capture-recapture, used ipv4 space (ID#: 15-4440)
URL: http://doi.acm.org/10.1145/2663716.2663718
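The core estimation idea in this paper, capture-recapture, can be illustrated with the classic two-sample Lincoln-Petersen estimator. This is a simplified sketch of the statistical principle, not the paper's multi-source method; the function name and the toy address sets are invented for illustration:

```python
def lincoln_petersen(n1, n2, m):
    """Two-sample capture-recapture estimate of a population size.

    n1: addresses seen by source 1 (e.g. ping scans)
    n2: addresses seen by source 2 (e.g. server logs)
    m:  addresses seen by both sources
    """
    if m == 0:
        raise ValueError("the two samples must overlap")
    return n1 * n2 / m

# Toy example: two partial views of the same used address space.
scan = {"192.0.2.1", "192.0.2.2", "192.0.2.5", "192.0.2.9"}
logs = {"192.0.2.2", "192.0.2.5", "192.0.2.7"}
estimate = lincoln_petersen(len(scan), len(logs), len(scan & logs))
print(estimate)  # 4 * 3 / 2 = 6.0 addresses estimated in use
```

The intuition: the smaller the overlap between two independent observations, the larger the unobserved ("ghost") remainder must be.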

 


 

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

 

 

Hard Problems: Predictive Security Metrics (IEEE)

 

 
SoS Logo

Hard Problems: Predictive Security Metrics (IEEE)

 

Predictive security metrics are a hard problem in the Science of Security.  A survey of the IEEE Digital Library found sixteen scholarly articles about research into security metrics that were published in 2014.  A separate listing of works published by ACM is referenced under the heading “Hard Problems: Predictive Security Metrics (ACM).”  Research works cited by, but not published by, ACM, and therefore subject to intellectual property restrictions on the use of abstracts, are cited under the heading “Citations for Hard Problems.”


 

Savola, R.M.; Kylanpaa, M., "Security Objectives, Controls and Metrics Development for an Android Smartphone Application," Information Security for South Africa (ISSA), 2014, pp.1,8, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950501 Abstract: Security in Android smartphone platforms deployed in public safety and security mobile networks is a remarkable challenge. We analyse the security objectives and controls for these systems based on an industrial risk analysis. The target system of the investigation is an Android platform utilized for a public safety and security mobile network. We analyse how security decision making regarding this target system can be supported by effective and efficient security metrics. In addition, we describe implementation details of security controls for authorization and integrity objectives of a demonstration of the target system.
Keywords: Android (operating system); authorisation; data integrity; decision making; risk analysis; safety;smart phones; Android smartphone application ;authorization objective; industrial risk analysis; integrity objective; metrics development; public safety; security controls; security decision making; security metrics; security mobile networks; security objectives; Authorization; Libraries; Monitoring; Android; risk analysis; security effectiveness; security metrics; security objectives (ID#: 15-4441)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950501&isnumber=6950479

 

Hatzivasilis, G.; Papaefstathiou, I.; Manifavas, C.; Papadakis, N., "A Reasoning System for Composition Verification and Security Validation," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,4, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814001 Abstract: The procedure to prove that a system-of-systems is composable and secure is a very difficult task. Formal methods are mathematically-based techniques used for the specification, development and verification of software and hardware systems. This paper presents a model-based framework for dynamic embedded system composition and security evaluation. Event Calculus is applied for modeling the security behavior of a dynamic system and calculating its security level with the progress in time. The framework includes two main functionalities: composition validation and derivation of security and performance metrics and properties. Starting from an initial system state and given a series of further composition events, the framework derives the final system state as well as its security and performance metrics and properties. We implement the proposed framework in an epistemic reasoner, the rule engine JESS with an extension of DECKT for the reasoning process and the JAVA programming language.
Keywords: Java; embedded systems; formal specification; formal verification; reasoning about programs; security of data; software metrics; temporal logic; DECKT; JAVA programming language; composition validation; composition verification; dynamic embedded system composition; epistemic reasoner; event calculus; formal methods; model-based framework; performance metrics; reasoning system; rule engine JESS; security evaluation; security validation; system specification; system-of-systems; Cognition; Computational modeling; Embedded systems; Measurement; Protocols; Security; Unified modeling language (ID#: 15-4442)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814001&isnumber=6813963

 

Axelrod, C.W., "Reducing Software Assurance Risks for Security-Critical and Safety-Critical Systems," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, vol., no., pp.1,6, 2-2 May 2014. doi: 10.1109/LISAT.2014.6845212 Abstract: According to the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)), the US Department of Defense (DoD) recognizes that there is a “persistent lack of a consistent approach ... for the certification of software assurance tools, testing and methodologies” [1]. As a result, the ASD(R&E) is seeking “to address vulnerabilities and weaknesses to cyber threats of the software that operates ... routine applications and critical kinetic systems ...” The mitigation of these risks has been recognized as a significant issue to be addressed in both the public and private sectors. In this paper we examine deficiencies in various software-assurance approaches and suggest ways in which they can be improved. We take a broad look at current approaches, identify their inherent weaknesses and propose approaches that serve to reduce risks. Some technical, economic and governance issues are: (1) Development of software-assurance technical standards (2) Management of software-assurance standards (3) Evaluation of tools, techniques, and metrics (4) Determination of update frequency for tools, techniques (5) Focus on most pressing threats to software systems (6) Suggestions as to risk-reducing research areas (7) Establishment of models of the economics of software-assurance solutions, and testing and certifying software We show that, in order to improve current software assurance policy and practices, particularly with respect to security, there has to be a major overhaul in how software is developed, especially with respect to the requirements and testing phases of the SDLC (Software Development Lifecycle). 
We also suggest that the current preventative approaches are inadequate and that greater reliance should be placed upon avoidance and deterrence. We also recommend that those developing and operating security-critical and safety-critical systems exchange best-of-breed software assurance methods to prevent the vulnerability of components leading to compromise of entire systems of systems. The recent catastrophic loss of a Malaysia Airlines airplane is then presented as an example of possible compromises of physical and logical security of on-board communications and management and control systems.
Keywords: program testing; safety-critical software; software development management; software metrics; ASD(R&E);Assistant Secretary of Defense for Research and Engineering; Malaysia Airlines airplane; SDLC;US Department of Defense; US DoD; component vulnerability prevention; control systems; critical kinetic systems; cyber threats; economic issues; governance issues; logical security; management systems; on-board communications; physical security; private sectors; public sectors; risk mitigation; safety-critical systems; security-critical systems; software assurance risk reduction; software assurance tool certification; software development; software development lifecycle; software methodologies; software metric evaluation; software requirements; software system threats; software technique evaluation; software testing; software tool evaluation; software-assurance standard management; software-assurance technical standard development; technical issues; update frequency determination; Measurement; Organizations; Security; Software systems; Standards; Testing; cyber threats; cyber-physical systems; governance; risk; safety-critical systems; security-critical systems; software assurance; technical standards; vulnerabilities; weaknesses (ID#: 15-4443)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845212&isnumber=6845183

 

Cain, A.A.; Schuster, D., "Measurement of Situation Awareness Among Diverse Agents in Cyber Security," Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2014 IEEE International Inter-Disciplinary Conference on, pp.124,129, 3-6 March 2014. doi: 10.1109/CogSIMA.2014.6816551 Abstract: Development of innovative algorithms, metrics, visualizations, and other forms of automation are needed to enable network analysts to build situation awareness (SA) from large amounts of dynamic, distributed, and interacting data in cyber security. Several models of cyber SA can be classified as taking an individual or a distributed approach to modeling SA within a computer network. While these models suggest ways to integrate the SA contributed by multiple actors, implementing more advanced data center automation will require consideration of the differences and similarities between human teaming and human-automation interaction. The purpose of this paper is to offer guidance for quantifying the shared cognition of diverse agents in cyber security. The recommendations presented can inform the development of automated aids to SA as well as illustrate paths for future empirical research.
Keywords: cognition; security of data; SA; cyber security; data center automation; diverse agents; shared cognition; situation awareness measurement; Automation; Autonomous agents; Cognition; Computer security; Data models; Sociotechnical systems; Situation awareness; cognition; cyber security; information security ;teamwork (ID#: 15-4444)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816551&isnumber=6816529

 

Elgendy, M.A.; Shawish, A.; Moussa, M.I., "MCACC: New Approach for Augmenting the Computing Capabilities of Mobile Devices With Cloud Computing," Science and Information Conference (SAI), 2014, pp.79,86, 27-29 Aug. 2014. doi: 10.1109/SAI.2014.6918175 Abstract: Smartphones are becoming increasingly popular with a wide range of capabilities for the purpose of handling heavy applications like gaming, video editing, and face recognition. These kinds of applications continuously require intensive computational power, memory, and battery. Many of the early techniques solve this problem by offloading these applications to run on the Cloud due to its well-known resource availability. Later, enhanced techniques chose to offload part of the applications while leaving the rest to be processed on the smartphone, based on one or two metrics like power and CPU consumption, without any consideration of the communication and network overhead. With the notable development of smartphone hardware, it becomes crucial to develop a smarter offloading framework that is able to efficiently utilize the available smartphone resources and only offload when necessary based on real-time decision metrics. This paper proposes such a framework, which we call Mobile Capabilities Augmentation using Cloud Computing (MCACC). In this framework, any mobile application is divided into a group of services, and then each of them is either executed locally on the mobile or remotely on the Cloud based on a novel dynamic offloading decision model. Here, the decision is based on five real-time metrics: total execution time, energy consumption, remaining battery, memory and security. Extensive simulation studies show that both heavy and light applications can benefit from our proposed model while saving energy and improving performance compared to previous techniques. The proposed MCACC makes smartphones smarter, as the offloading decision is taken without any user interaction.
Keywords: cloud computing; face recognition; smart phones; CPU consumption; MCACC; battery; cloud computing; dynamic offloading decision model; energy consumption; face recognition; gaming ;intensive computational power; memory; mobile capabilities augmentation; mobile devices; network overhead; notable development; offloading framework; real-time decision metrics; realtime metrics; smart phone hardware; smart phone resources ;total execution time; user interaction; video editing; Androids; Batteries; Humanoid robots; Java; Measurement; Mobile communication; Smart phones; Android; battery; mobile Cloud computing; offloading; security; smartphones (ID#: 15-4445)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6918175&isnumber=6918164

 

Hills, M.; Klint, P., "PHP AiR: Analyzing PHP Systems With Rascal," Software Maintenance, Reengineering and Reverse Engineering (CSMR-WCRE), 2014 Software Evolution Week - IEEE Conference on, pp.454,457, 3-6 Feb. 2014. doi: 10.1109/CSMR-WCRE.2014.6747217 Abstract: PHP is currently one of the most popular programming languages, widely used in both the open source community and in industry to build large web-focused applications and application frameworks. To provide a solid framework for working with large PHP systems in areas such as evaluating how language features are used, studying how PHP systems evolve, program analysis for refactoring and security validation, and software metrics, we have developed PHP AiR, a framework for PHP Analysis in Rascal. Here we briefly describe features available in PHP AiR, integration with the Eclipse PHP Development Tools, and usage scenarios in program analysis, metrics, and empirical software engineering.
Keywords: Internet; object-oriented languages; program diagnostics; public domain software; security of data; software maintenance; software metrics; Eclipse PHP development tools; PHP AiR; PHP system analysis; Rascal; Web-focused applications; application frameworks; empirical software engineering; open source community; program analysis; programming languages; refactoring; security validation; software metrics; Java; Libraries; Manuals; Performance analysis; Runtime; Software (ID#: 15-4446)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6747217&isnumber=6747152

 

Rostami, M.; Wendt, J.B.; Potkonjak, M.; Koushanfar, F., "Quo Vadis, PUF?: Trends and challenges of emerging physical-disorder based security," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp.1,6, 24-28 March 2014. doi: 10.7873/DATE.2014.365 Abstract: The physical unclonable function (PUF) has emerged as a popular and widely studied security primitive based on the randomness of the underlying physical medium. To date, most of the research emphasis has been placed on finding new ways to measure randomness, hardware realization and analysis of a few initially proposed structures, and conventional secret-key based protocols. In this work, we present our subjective analysis of the emerging and future trends in this area that aim to change the scope, widen the application domain, and make a lasting impact. We emphasize on the development of new PUF-based primitives and paradigms, robust protocols, public-key protocols, digital PUFs, new technologies, implementations, metrics and tests for evaluation/validation, as well as relevant attacks and countermeasures.
Keywords: cryptographic protocols; public key cryptography; PUF-based paradigms; PUF-based primitives; Quo Vadis; application domain; digital PUF; hardware realization; physical medium randomness measurement; physical unclonable function; physical-disorder-based security; public-key protocol; secret-key based protocols; security primitive; structure analysis; subjective analysis; Aging; Correlation; Hardware; NIST; Protocols; Public key (ID#: 15-4447)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800566&isnumber=6800201

 

Witkowski, M.; Igras, M.; Grzybowska, J.; Jaciow, P.; Galka, J.; Ziolko, M., "Caller Identification by Voice," Pacific Voice Conference (PVC), 2014 XXII Annual, pp.1,7, 11-13 April 2014. doi: 10.1109/PVC.2014.6845420 Abstract: The aim of our work is to develop software for caller identification, or to create a caller's characteristic profile, by analysis of the voice. Based on collected speech samples, our system aims to identify emergency callers both on-line and off-line. This homeland security project covers speaker recognition (when a speaker's speech sample is known), and detection of the speaker's gender, age and emotions. The proposed system is not limited to biometrics. The goal of this application is to provide an innovative supporting tool for rapid and accurate threat detection and threat neutralization. This complex system will include: speech signal analysis, automatic development of a speech pattern database, and appropriate classification methods.
Keywords: emotion recognition; national security; signal classification; speaker recognition; speech intelligibility; age detection; automatic development of; biometrics; caller identification; classification methods; emergency callers identification; emotion recognition; homeland security project; innovative supporting tool; software development; speaker gender; speaker recognition; speaker speech sample; speech patterns database; speech signal analysis; threat detection; threat neutralization; voice analysis; Acoustics; Databases; Feature extraction; Hidden Markov models; Psychoacoustic models; Spectrogram; Speech; Acoustic Background Detection; Age Detection; Emotion Detection; Speaker Identification; Speaker Recognition; Speaker Verification (ID#: 15-4448)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845420&isnumber=6845414

 

Parihar, J.S.; Rathore, J.S.; Burse, K., "Agent Based Intrusion Detection System to Find Layers Attacks," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp.685,689, 7-9 April 2014. doi: 10.1109/CSNT.2014.144 Abstract: With the development and advancement of communication technology and its related techniques, users have experienced the joy of the fast information technology era. Advancements in thin devices such as smartphones, like Windows Phone or Google Android phones, have been a key factor in the uptake of network access services. The most remarkable fact is that the conventional TCP/IP model has delivered all these services to the end user, with some valuable enrichment on top of it. Key metrics play an important role in keeping information intact: Confidentiality, Integrity and Availability (CIA). An intrusion detection system prevents unauthorized access to a computer, and detection helps us determine whether or not someone attempted to break into our system. In this paper we present an enhanced agent-based [1-2] security model to discover unknown attacks or intrusions. The proposed system works in dual mode, network and host. In network mode the real-time traffic behavior (flows/attributes) is captured from the network, while in host mode the user logs and user activity are checked and monitored. Attributes are collected from both modes, i.e. network as well as host traffic, with respect to time as well as protocol acknowledgment. The proposed "Agent Based Intrusion Detection System" (ABIDS) has five types of agents designed to shield from both sides (host and network). Agents work in a distributed manner and communicate with each other via ACL to check for abnormality (suspicious behavior) in incoming traffic or logs.
Keywords: computer network security; multi-agent systems; software agents; transport protocols; ABIDS;CIA metrics; Google Android phones ;Internet protocol;TCP/IP model; Windows phone; agent based intrusion detection system; agent based security model; communication technology; confidentiality-integrity-availability metrics; information technology; smart phone; traffic behavior; transport control protocol; user activity; user logs; Communication systems; ACL; Agent; DoS; IDPS; IDS; IPS; Intrusion Detection; JADE; MAS; Network security: Layers Attacks (ID#: 15-4449)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821486&isnumber=6821334

 

Chandrasekhar, A.M.; Raghuveer, K., "Confederation of FCM Clustering, ANN and SVM Techniques to Implement Hybrid NIDS Using Corrected KDD Cup 99 Dataset," Communications and Signal Processing (ICCSP), 2014 International Conference on, pp.672,676, 3-5 April 2014. doi: 10.1109/ICCSP.2014.6949927 Abstract: With the rapid advancement in network technologies, including higher bandwidths and ease of connectivity of wireless and mobile devices, intrusion detection and protection systems have become an essential addition to the security infrastructure of almost every organization. Data mining techniques nowadays play a vital role in the development of IDS. In this paper, an effort has been made to propose an efficient intrusion detection model by blending competent data mining techniques such as Fuzzy-C-means clustering, artificial neural networks (ANN) and support vector machines (SVM), which significantly improves the prediction of network intrusions. We implemented the proposed IDS in MATLAB version R2013a on a Windows PC having a 3.20 GHz CPU and 4 GB RAM. The experiments and evaluations of the proposed method were performed with the Corrected KDD Cup 99 intrusion detection dataset, and we used sensitivity, specificity and accuracy as the evaluation metrics. We attained detection accuracy of about 99.66% for DoS attacks, 98.55% for PROBE, 98.99% for R2L and 98.81% for U2R attacks. Results are compared with relevant existing techniques so as to prove the efficiency of our model.
Keywords: data mining; mathematics computing; neural nets; security of data; support vector machines; ANN techniques; FCM clustering; Matlab; SVM techniques; artificial neural network; corrected KDD cup 99 dataset; data mining ;fuzzy-C-means clustering; hybrid NIDS; intrusion detection; mobile devices; protection systems; support vector machine; wireless devices; Accuracy; Artificial neural networks; Databases; Measurement; Probes; Random access memory; Support vector machines; Artificial Neural Networks; Corrected KDD cup 99;Fuzzy-C-means Clustering; Intrusion Detection System; Support Vector Machine (ID#: 15-4450)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6949927&isnumber=6949766
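The evaluation metrics named in the abstract above are standard functions of the confusion-matrix counts. A minimal sketch (the counts below are made up, not taken from the paper's experiments):

```python
def ids_metrics(tp, tn, fp, fn):
    """Sensitivity (detection rate), specificity, and accuracy
    computed from true/false positive and negative counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Hypothetical confusion-matrix counts for one attack class.
sens, spec, acc = ids_metrics(tp=990, tn=970, fp=30, fn=10)
print(sens, spec, acc)  # 0.99 0.97 0.98
```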

 

Michaels, Alan J.; Lau, Chad, "Performance of Percent Gaussian Orthogonal Signaling Waveforms," Military Communications Conference (MILCOM), 2014 IEEE, pp.338,343, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.61 Abstract: Recent developments of secure digital chaotic spread spectrum communication systems have been based on the generalized ideals of maximum channel capacity and maximal entropy/security, which result in a Gaussian-distributed noise-like signal that is indistinguishable from naturally occurring (band-limited) thermal noise. An implementation challenge associated with these waveforms is that the signal peak-to-average power ratio (PAPR) is approximately that of an i.i.d. Gaussian-distributed random sequence; with infinite tails in the Gaussian distribution, modeled practically by a Gaussian distribution truncated to ±4.8σ, the peak excursions of the output can be 13-15 dB over the average signal power. To address this PAPR constraint, a series of "percent Gaussian" orthogonal signaling waveforms were developed, allowing parameterized waveform selection that compactly trades PAPR improvements against cyclostationary feature content. These waveforms are bounded by the Gaussian-distributed digital chaos signal and a constant amplitude zero autocorrelation (CAZAC) signal, all of which deliver security advantages over traditional direct sequence spread spectrum (DSSS) waveforms. This paper presents an underlying model for these "percent Gaussian" waveforms and derives a generalized set of symbol error rate metrics. A discussion of the performance bounds is also presented.
Keywords: Chaotic communication; Correlation; Noise; Peak to average power ratio; Receivers; Spread spectrum communication; CAZAC; digital chaos; percent Gaussian (ID#: 15-4451)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956782&isnumber=6956719
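The PAPR figures quoted in the abstract can be reproduced in spirit with a quick simulation. A sketch, assuming real-valued samples and the ±4.8σ truncation mentioned above (the sample count, seed, and function name are arbitrary choices for illustration):

```python
import math
import random

def papr_db(samples):
    """Peak-to-average power ratio of a sample sequence, in dB."""
    powers = [abs(s) ** 2 for s in samples]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

random.seed(1)
# Gaussian-distributed chaotic-style samples, truncated to +/- 4.8 sigma.
gauss = [max(-4.8, min(4.8, random.gauss(0, 1))) for _ in range(100_000)]
# Constant-amplitude (CAZAC-like) reference: |s| = 1 for every sample.
const = [1.0] * 100_000

print(papr_db(const))  # 0.0 dB: a constant envelope has no peaking
print(papr_db(gauss))  # typically in the 12-14 dB range for this many samples
```

The constant-amplitude end of the trade-off space has 0 dB PAPR by construction, while the fully Gaussian end approaches the 13-15 dB peaks the abstract describes.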

 

Almutairi, A.; Shawly, T.A.; Basalamah, S.M.; Ghafoor, A., "Policy-Driven High Assurance Cyber Infrastructure-Based Systems," High-Assurance Systems Engineering (HASE), 2014 IEEE 15th International Symposium on, pp.146,153, 9-11 Jan. 2014. doi: 10.1109/HASE.2014.28 Abstract: The objective of this paper is to present major challenges and a framework for modeling and managing context-aware policy-driven Cyber Infrastructure-Based Systems (CIBS). With the growing reliance on Cyber technology providing solutions for a broad range of CIBS applications, comes the high assurance challenges in terms of reliability, trustworthiness and vulnerabilities. The paper proposes a development framework to allow dynamic reconfigurability of CIBS components under various contexts to achieve a desired degree of assurance.
Keywords: cloud computing; software reliability; ubiquitous computing; CIBS component dynamic reconfigurability; cloud computing; context-aware policy-driven cyber infrastructure-based systems; cyber technology; policy-driven high assurance cyber infrastructure-based systems; reliability; trustworthiness; vulnerabilities; Availability; Complexity theory; Context; Linear programming; Measurement; Security; CIBS optimization; cloud computing; cyber-physical systems; high assurance metrics; policy composition (ID#: 15-4452)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6754599&isnumber=6754569

 

Pirinen, Rauno, "Studies of Integration Readiness Levels: Case Shared Maritime Situational Awareness System," Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint, pp.212,215, 24-26 Sept. 2014. doi: 10.1109/JISIC.2014.79 Abstract: The research question of this study is: how can Integration Readiness Level (IRL) metrics be understood and realized in the domain of border control information systems? The study addresses IRL metrics and their definition, criteria, references, and questionnaires for validation of border control information systems in the case of the shared maritime situational awareness system. The aim of the study is to improve ways of acceptance, operational validation, risk assessment, and development of sharing mechanisms and integration of information systems, as well as border control information interactions and collaboration concepts, in Finnish national and European border control domains.
Keywords: Buildings; Context; Control systems; Information systems; Interviews; Measurement; Systems engineering and theory; integration; integration readiness levels; maturity; pre-operational validation; situational awareness (ID#: 15-4453)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975575&isnumber=6975536

 

Chaudhary, R.; Chatterjee, R., "Reusability in AOSD - The Aptness, Assessment and Analysis," Optimization, Reliabilty, and Information Technology (ICROIT), 2014 International Conference on,  pp.34,39, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798291 Abstract: Aspect-Oriented Programming (AOP) is an emerging technique that has profound impact in the area of software development. AOP aims to ease maintenance and promotes reuse of software components by providing mechanism for implementing cross-cutting concerns. Examples of cross-cutting concerns are readability, security etc. Reusability is the cost of transferring a module or program to another application. It is the most important criteria for the evaluation of software system. A reusable component will help in better understandability and low maintenance efforts for the application. Therefore, it is necessary to estimate reusability of the component, before integrating it into the system. In the present study, our focus is on those AO languages that have features of Java and AO technology. In this category, we have selected the Aspect AOP language. The MATLAB and Fuzzy logic approach have been used for the assessment of reusability in Aspect-Oriented Systems.
Keywords: Java; aspect-oriented programming; fuzzy logic; fuzzy set theory; security of data; software maintenance; software metrics ;software reusability; AO technology; AOSD; Aspect AOP language; Java technology; MATLAB; analysis; aptness; aspect oriented metrics; aspect-oriented programming; assessment; cross-cutting concerns; fuzzy logic approach; software component maintenance; software component reusability; software development; software system evaluation; Measurement; Syntactics; Aspect Oriented Metrics; Aspect-Oriented Software development (AOSD); FuzzyLogic (ID#: 15-4454)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798291&isnumber=6798279

 

Yangsong Wu; Yibiao Yang; Yangyang Zhao; Hongmin Lu; Yuming Zhou; Baowen Xu, "The Influence of Developer Quality on Software Fault-Proneness Prediction," Software Security and Reliability (SERE), 2014 Eighth International Conference on, pp.11,19, June 30 2014-July 2 2014. doi: 10.1109/SERE.2014.14 Abstract: Previous studies have shown that process metrics are useful for building fault-proneness prediction models. In particular, it has been found that those process metrics incorporating developer experience (defined as the percentage of the code a developer contributes) exhibit a good ability to predict fault-proneness. However, developer quality, which we strongly believe should have a great influence on software quality, is surprisingly ignored. In this paper, we first quantify the quality of a developer via the percentage of historical bug-introducing commits over all of his/her commits during the development process. Then, we leverage developer quality information to develop eight file quality metrics. Finally, we empirically study the usefulness of these eight file quality metrics for fault-proneness prediction. Based on eight open source software systems, our experiment results show that: 1) these proposed file quality metrics capture additional information compared with existing process metrics, 2) almost all the proposed file quality metrics have a significant association with fault-proneness in an expected direction, and 3) the proposed file quality metrics can in general improve the effectiveness of fault-proneness prediction models when used together with existing process metrics. These results suggest that developer quality has a strong influence on software quality and should be taken into account when predicting software fault-proneness.
Keywords: public domain software; software fault tolerance; software metrics; software quality; file quality metrics; open source software systems; process metrics; software fault-proneness prediction; software quality; Security; Software; Software reliability; Developer quality ;faultproneness; prediction; process metrics (ID#: 15-4455)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895411&isnumber=6895396
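The developer-quality measure defined in the abstract (the share of a developer's commits that introduced a bug) is straightforward to compute from a labeled commit history. A sketch with invented data; the tuple format and function name are assumptions, not the authors' tooling:

```python
def developer_quality(commits):
    """commits: list of (author, introduced_bug) pairs.
    Returns {author: fraction of that author's commits that introduced a bug}."""
    totals, buggy = {}, {}
    for author, introduced_bug in commits:
        totals[author] = totals.get(author, 0) + 1
        if introduced_bug:
            buggy[author] = buggy.get(author, 0) + 1
    return {a: buggy.get(a, 0) / n for a, n in totals.items()}

# Hypothetical labeled history (e.g. from the SZZ bug-introducing heuristic).
history = [("alice", False), ("alice", True), ("alice", False), ("alice", False),
           ("bob", True), ("bob", True), ("bob", False)]
quality = developer_quality(history)
print(quality)  # alice: 1/4 buggy commits, bob: 2/3
```

A lower fraction means a higher-quality developer under this definition; the paper aggregates these per-developer values into per-file metrics.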

 

Sethi, M.; Antikainen, M.; Aura, T., "Commitment-Based Device Pairing With Synchronized Drawing," Pervasive Computing and Communications (PerCom), 2014 IEEE International Conference on, pp.181,189, 24-28 March 2014. doi: 10.1109/PerCom.2014.6813959 Abstract: Secure device pairing is a widely studied problem. Local wireless connections such as Bluetooth and WiFi typically rely on user-entered secret keys or manually verified authentication codes. Several recent proposals replace these with contextual or location-dependent sensor inputs, which are assumed to be secret from anyone not present at the location where the pairing takes place. These protocols have to cope with a fuzzy secret, i.e. noisy secret input that differs between the devices. In this paper, we overview such protocols and propose a new variation using time-based opening of commitments. Our protocol has the advantage of treating the fuzzy secret as one piece of data rather than requiring it to be partitioned into time intervals, and being more robust against variations in input entropy than those based on error correction codes. The protocol development is motivated by the discovery of a novel human source for the fuzzy secret: synchronized drawing with two fingers of the same hand on two touch screens or surfaces. Metrics for measuring the distance between the drawings are described and evaluated. We implement a prototype of this surprisingly simple and natural pairing mechanism and show that it accurately differentiates between true positives and man-in-the-middle attackers.
Keywords: fuzzy set theory; mobile computing; protocols; security of data; Bluetooth; WiFi; Wireless Fidelity; commitment-based device pairing; contextual-dependent sensor inputs; device pairing security; error correction codes; fuzzy secret; input entropy; location-dependent sensor inputs; man-in-the-middle attackers; manually verified authentication codes; synchronized drawing; time intervals; time-based commitment opening; user-entered secret keys; wireless connections; Authentication; Cryptography; Entropy; Noise measurement; Protocols; Synchronization (ID#: 15-4456)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6813959&isnumber=6813930



Hard Problems: Resilient Security Architectures (ACM)

 
SoS Logo

Hard Problems: Resilient Security Architectures (ACM)

 

Resilient security architectures are a hard problem in the Science of Security. A survey of the ACM Digital Library found these scholarly articles about research into resilient security architectures that were published in 2014. A separate listing of works published by IEEE is referenced under the heading “Resilient Security Architectures (IEEE).” A great deal of research useful to resilience is coming from the literature on control theory. In addition to the Science of Security community, much of this work is also relevant to the SURE project.


 

Stylianos Gisdakis, Thanassis Giannetsos, Panos Papadimitratos; SPPEAR: Security & Privacy-Preserving Architecture for Participatory-Sensing Applications; WiSec '14 Proceedings of the 2014 ACM Conference On Security And Privacy In Wireless & Mobile Networks, July 2014, Pages 39-50. Doi: 10.1145/2627393.2627402  Abstract: Recent advances in sensing, computing, and networking have paved the way for the emerging paradigm of participatory sensing (PS). The openness of such systems and the richness of user data they entail raise significant concerns for their security, privacy and resilience. Prior works addressed different aspects of the problem. But in order to reap the benefits of this new sensing paradigm, we need a comprehensive solution. That is, a secure and accountable PS system that preserves user privacy, and enables the provision of incentives to the participants. At the same time, we are after a PS system that is resilient to abusive users and guarantees privacy protection even against multiple misbehaving PS entities (servers). We address these seemingly contradicting requirements with our SPPEAR architecture. Our full blown implementation and experimental evaluation demonstrate that SPPEAR is efficient, practical, and scalable. Last but not least, we formally assess the achieved security and privacy properties. Overall, our system is a comprehensive solution that significantly extends the state-of-the-art and can catalyze the deployment of PS applications.
Keywords: anonymity, participatory sensing, privacy, security (ID#: 15-5487)
URL: http://doi.acm.org/10.1145/2627393.2627402

 

Balakrishnan Chandrasekaran, Theophilus Benson; Tolerating SDN Application Failures with LegoSDN; HotNets-XIII Proceedings of the 13th ACM Workshop on Hot Topics in Networks, October 2014, Page 22. Doi: 10.1145/2670518.2673880 Abstract: Despite Software Defined Networking's (SDN) proven benefits, there remains significant reluctance in adopting it. Among the issues that hamper SDN's adoption, two stand out: reliability and fault tolerance. At the heart of these issues is a set of fate-sharing relationships: the first between the SDN-Apps and controllers, wherein the crash of the former induces a crash of the latter, thereby affecting availability; and the second between the SDN-App and the network, wherein a byzantine failure, e.g., black-holes and network-loops, induces a failure in the network, thereby affecting network availability. The principal position of this paper is that availability is of utmost concern -- second only to security. To this end, we present a re-design of the controller architecture centering around a set of abstractions to eliminate these fate-sharing relationships, and make the controllers and network resilient to SDN-App failures. We illustrate how these abstractions can be used to improve the reliability of an SDN environment, thus eliminating one of the barriers to SDN's adoption.
Keywords: Fault Tolerance, Software-Defined Networking (ID#: 15-5488)
URL: http://doi.acm.org/10.1145/2670518.2673880

 

Daniel Migault, Daniel Palomares, Hendrik Hendrik, Maryline Laurent; Secure IPsec Based Offload Architectures for Mobile Data; Q2SWinet '14 Proceedings of the 10th ACM Symposium on QoS and Security For Wireless And Mobile Networks, September 2014, Pages 95-104. Doi: 10.1145/2642687.2642690 Abstract: Radio Access Networks (RANs) are likely to be overloaded, and some places will not be able to provide the necessary requested bandwidth. To meet the demand for bandwidth, overloaded RANs currently offload their traffic onto WLANs. WLAN Access Points (such as ISP-provided xDSL boxes) are untrusted, unreliable, and do not handle mobility. As a result, mobility, multihoming, and security can no longer be handled by the network and must be handled by the terminal. This paper positions offload architectures based on IPsec and shows that IPsec can provide end-to-end security as well as seamless connectivity across IP networks. The remainder of the paper then evaluates how mobility on these IPsec-based architectures impacts the Quality of Service (QoS) for real-time applications such as an audio streaming service. QoS is measured using network interruption time and POLQA. Measurements compare TCP/HLS and UDP/RTSP over various IPsec configurations.
Keywords: IPsec mobility, IPsec multiple interfaces, quality of service, terminal mobility, wlan offload architecture (ID#: 15-5489)
URL: http://doi.acm.org/10.1145/2642687.2642690

 

Teklemariam Tsegay Tesfay, Jean-Pierre Hubaux, Jean-Yves Le Boudec, Philippe Oechslin; Cyber-secure Communication Architecture for Active Power Distribution Networks; SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 545-552. Doi:  10.1145/2554850.2555082 Abstract: Active power distribution networks require sophisticated monitoring and control strategies for efficient energy management and automatic adaptive reconfiguration of the power infrastructure. Such requirements are realised by deploying a large number of various electronic automation and communication field devices, such as Phasor Measurement Units (PMUs) or Intelligent Electronic Devices (IEDs), and a reliable two-way communication infrastructure that facilitates transfer of sensor data and control signals. In this paper, we perform a detailed threat analysis in a typical active distribution network's automation system. We also propose mechanisms by which we can design a secure and reliable communication network for an active distribution network that is resilient to insider and outsider malicious attacks, natural disasters, and other unintended failure. The proposed security solution also guarantees that an attacker is not able to install a rogue field device by exploiting an emergency situation during islanding.
Keywords: PKI, active distribution network, authentication, islanding, smart grid, smart grid security, unauthorised access (ID#: 15-5490)
URL: http://doi.acm.org/10.1145/2554850.2555082

 

Rui Zhuang, Scott A. DeLoach, Xinming Ou; Towards a Theory of Moving Target Defense; MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 31-40. Doi: 10.1145/2663474.2663479 Abstract: The static nature of cyber systems gives attackers the advantage of time. Fortunately, a new approach, called the Moving Target Defense (MTD) has emerged as a potential solution to this problem. While promising, there is currently little research to show that MTD systems can work effectively in real systems. In fact, there is no standard definition of what an MTD is, what is meant by attack surface, or metrics to define the effectiveness of such systems. In this paper, we propose an initial theory that will begin to answer some of those questions. The paper defines the key concepts required to formally talk about MTD systems and their basic properties. It also discusses three essential problems of MTD systems, which include the MTD Problem (or how to select the next system configuration), the Adaptation Selection Problem, and the Timing Problem. We then formalize the MTD Entropy Hypothesis, which states that the greater the entropy of the system's configuration, the more effective the MTD system.
Keywords: computer security, moving target defense, network security, science of security (ID#: 15-5491)
URL: http://doi.acm.org/10.1145/2663474.2663479
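The MTD Entropy Hypothesis stated in the Zhuang et al. abstract above relates defense effectiveness to the entropy of the system's configuration. As an illustration (ours, not the paper's formalism), the quantity in question can be read as the Shannon entropy of the distribution over possible system configurations:

```python
import math

def config_entropy(probs):
    """Shannon entropy (in bits) of a distribution over configurations.

    Under the hypothesis, higher entropy means an attacker is less
    certain which configuration is currently live, so the moving
    target defense is more effective.
    """
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A static system (one certain configuration) carries no entropy,
# while four equally likely configurations carry 2 bits.
static = config_entropy([1.0])
moving = config_entropy([0.25, 0.25, 0.25, 0.25])
assert static == 0 and moving == 2.0
```

The hypothesis itself, of course, concerns how this entropy translates into attacker effort, which the paper leaves as a question for empirical study.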

 

S. T. Choden Konigsmark, Leslie K. Hwang, Deming Chen, Martin D. F. Wong; System-of-PUFs: Multilevel Security for Embedded Systems; CODES '14 Proceedings of the 2014 International Conference on Hardware/Software Codesign and System Synthesis, October 2014, Article No. 27. Doi: 10.1145/2656075.2656099 Abstract: Embedded systems continue to provide the core for a wide range of applications, from smart-cards for mobile payment to smart-meters for power-grids. The resource and power dependency of embedded systems continues to be a challenge for state-of-the-art security practices. Moreover, even theoretically secure algorithms are often vulnerable in their implementation. With decreasing cost and complexity, physical attacks are an increasingly important threat. This threat led to the development of Physically Unclonable Functions (PUFs), which are disordered physical systems with various applications in hardware security. However, consistent security-oriented design of embedded systems remains a challenge, as most formalizations and security models are concerned with isolated physical components or high-level concepts. We provide four unique contributions: (i) We propose a system-level security model to overcome the chasm between secure components and requirements of high-level protocols; this enables synergy between component-oriented security formalizations and theoretically proven protocols. (ii) An analysis of current practices in PUF protocols using the proposed system-level security model; we identify significant issues and expose assumptions that require costly security techniques. (iii) A System-of-PUF (SoP) that utilizes the large PUF design-space to achieve security requirements with minimal resource utilization; SoP requires 64% less gate-equivalent units than recently published schemes. (iv) A multilevel authentication protocol based on SoP which is validated using our system-level security model and which overcomes current vulnerabilities. Furthermore, this protocol offers breach recognition and recovery.
Keywords: hardware authentication, physically unclonable functions (ID#: 15-5492)
URL: http://doi.acm.org/10.1145/2656075.2656099

 

Markus Kammerstetter, Lucie Langer, Florian Skopik, Wolfgang Kastner; Architecture-Driven Smart Grid Security Management;  IH&MMSec '14 Proceedings of the 2nd ACM Workshop On Information Hiding And Multimedia Security, June 2014, Pages 153-158. Doi: 10.1145/2600918.2600937 Abstract: The introduction of smart grids goes along with an extensive use of ICT technologies in order to support the integration of renewable energy sources. However, the use of ICT technologies bears risks in terms of cyber security attacks which could negatively affect the electrical power grid. These risks need to be assessed, mitigated and managed in a proper way to ensure the security of both current and future energy networks. Existing approaches have been either restricted to very specific components of the smart grid (e.g., smart meters), or provide a high-level view only. We therefore propose an architecture-driven security management approach for smart grids which goes beyond a mere abstract view without focusing too much on technical details. Our approach covers architecture modeling, risk identification and assessment as well as risk mitigation and compliance checking. We have proven the practical usability of this process together with leading manufacturers and utilities.
Keywords: risks, security, security management, smart grid (ID#: 15-5493)
URL: http://doi.acm.org/10.1145/2600918.2600937

 

Bradley Schmerl, Javier Cámara, Jeffrey Gennari, David Garlan, Paulo Casanova, Gabriel A. Moreno, Thomas J. Glazier, Jeffrey M. Barnes; Architecture-Based Self-Protection: Composing and Reasoning About Denial-of-Service Mitigations; HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 2. Doi: 10.1145/2600176.2600181 Abstract: Security features are often hardwired into software applications, making it difficult to adapt security responses to reflect changes in runtime context and new attacks. In prior work, we proposed the idea of architecture-based self-protection as a way of separating adaptation logic from application logic and providing a global perspective for reasoning about security adaptations in the context of other business goals. In this paper, we present an approach, based on this idea, for combating denial-of-service (DoS) attacks. Our approach allows DoS-related tactics to be composed into more sophisticated mitigation strategies that encapsulate possible responses to a security problem. Then, utility-based reasoning can be used to consider different business contexts and qualities. We describe how this approach forms the underpinnings of a scientific approach to self-protection, allowing us to reason about how to make the best choice of mitigation at runtime. Moreover, we also show how formal analysis can be used to determine whether the mitigations cover the range of conditions the system is likely to encounter, and the effect of mitigations on other quality attributes of the system. We evaluate the approach using the Rainbow self-adaptive framework and show how Rainbow chooses DoS mitigation tactics that are sensitive to different business contexts.
Keywords: denial-of-service, probabilistic model checking, self-adaptation (ID#: 15-5494)
URL: http://doi.acm.org/10.1145/2600176.2600181

 

David Basin, Cas Cremers, Tiffany Hyun-Jin Kim, Adrian Perrig, Ralf Sasse, Pawel Szalachowski; ARPKI: Attack Resilient Public-Key Infrastructure; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 382-393. Doi:  10.1145/2660267.2660298 Abstract: We present ARPKI, a public-key infrastructure that ensures that certificate-related operations, such as certificate issuance, update, revocation, and validation, are transparent and accountable. ARPKI is the first such infrastructure that systematically takes into account requirements identified by previous research. Moreover, ARPKI is co-designed with a formal model, and we verify its core security property using the Tamarin prover. We present a proof-of-concept implementation providing all features required for deployment. ARPKI efficiently handles the certification process with low overhead and without incurring additional latency to TLS. ARPKI offers extremely strong security guarantees, where compromising n-1 trusted signing and verifying entities is insufficient to launch an impersonation attack. Moreover, it deters misbehavior as all its operations are publicly visible.
Keywords: attack resilience, certificate validation, formal validation, public log servers, public-key infrastructure, tls (ID#: 15-5495)
URL: http://doi.acm.org/10.1145/2660267.2660298

 

Teng Xu, James Bradley Wendt, Miodrag Potkonjak; Secure Remote Sensing and Communication Using Digital PUFs; ANCS '14 Proceedings of the Tenth ACM/IEEE Symposium on Architectures for Networking and Communications Systems, October 2014, Pages 173-184. Doi: 10.1145/2658260.2658279 Abstract: Small form, mobile, and remote sensor network systems require secure and ultralow power data collection and communication solutions due to their energy constraints. The physical unclonable function (PUF) has emerged as a popular modern low power security primitive. However, current designs are analog in nature, susceptible to instability, and difficult to integrate into existing circuitry. In this paper, we present the digital PUF which is stable in the same sense that digital logic is stable, has a very small footprint and very small timing overhead, and can be easily integrated into existing designs. We demonstrate the use of the digital PUF on two applications that are crucial for sensor networks: trusted remote sensing and logic obfuscation. We present our security analysis using standard randomness tests and confusion and diffusion analysis, and apply our new obfuscation approach on a set of standard design benchmarks.
Keywords: security (ID#: 15-5496)
URL: http://doi.acm.org/10.1145/2658260.2658279

 

Gilles Barthe, Gustavo Betarte, Juan Campo, Carlos Luna, David Pichardie; System-level Non-interference for Constant-time Cryptography; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1267-1279. Doi: 10.1145/2660267.2660283 Abstract: Cache-based attacks are a class of side-channel attacks that are particularly effective in virtualized or cloud-based environments, where they have been used to recover secret keys from cryptographic implementations. One common approach to thwart cache-based attacks is to use constant-time implementations, i.e., which do not branch on secrets and do not perform memory accesses that depend on secrets. However, there is no rigorous proof that constant-time implementations are protected against concurrent cache-attacks in virtualization platforms with shared cache; moreover, many prominent implementations are not constant-time. An alternative approach is to rely on system-level mechanisms. One recent such mechanism is stealth memory, which provisions a small amount of private cache for programs to carry potentially leaking computations securely. Stealth memory induces a weak form of constant-time, called S-constant-time, which encompasses some widely used cryptographic implementations. However, there is no rigorous analysis of stealth memory and S-constant-time, and no tool support for checking if applications are S-constant-time.  We propose a new information-flow analysis that checks if an x86 application executes in constant-time, or in S-constant-time. Moreover, we prove that constant-time (resp. S-constant-time) programs do not leak confidential information through the cache to other operating systems executing concurrently on virtualization platforms (resp. platforms supporting stealth memory). The soundness proofs are based on new theorems of independent interest, including isolation theorems for virtualization platforms (resp. platforms supporting stealth memory), and proofs that constant-time implementations (resp. S-constant-time implementations) are non-interfering with respect to a strict information flow policy which disallows that control flow and memory accesses depend on secrets. We formalize our results using the Coq proof assistant and we demonstrate the effectiveness of our analyses on cryptographic implementations, including PolarSSL AES, DES and RC4, SHA256 and Salsa20.
Keywords: cache-based attacks, constant-time cryptography, coq, non-interference, stealth memory (ID#: 15-5497)
URL: http://doi.acm.org/10.1145/2660267.2660283
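The constant-time discipline discussed in the Barthe et al. abstract above forbids branching on secrets. A minimal sketch of a branch-free secret comparison follows (our illustration; note that Python gives no real timing guarantees, which is exactly why machine-level analyses like the paper's target compiled x86 code):

```python
def constant_time_eq(a: bytes, b: bytes) -> bool:
    """Compare two byte strings without exiting early on a mismatch.

    A naive `a == b` may stop at the first differing byte, leaking the
    mismatch position through timing; here every byte is always
    examined and differences are accumulated without a secret-dependent
    branch.
    """
    if len(a) != len(b):          # lengths are assumed public
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y             # OR-accumulate differences
    return diff == 0

assert constant_time_eq(b"secret", b"secret")
assert not constant_time_eq(b"secret", b"sEcret")
```

In practice, Python code should use the standard library's `hmac.compare_digest`; the hand-rolled version above only illustrates the control-flow structure the paper's analysis checks for.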

 

Fangzhou Yao, Read Sprabery, Roy H. Campbell; CryptVMI: A Flexible and Encrypted Virtual Machine Introspection System in the Cloud;  SCC '14 Proceedings of the 2nd International Workshop on Security in Cloud Computing, June 2014, Pages 11-18. Doi: 10.1145/2600075.2600078 Abstract: Virtualization has demonstrated its importance in both public and private cloud computing solutions. In such environments, multiple virtual instances run on the same physical machine concurrently. Thus, the isolation in the system is not guaranteed by the physical infrastructure anymore. Reliance on logical isolation makes a system vulnerable to attacks. Thus, Virtual Machine Introspection techniques become essential, since they simplify the process to acquire evidence for further analysis in this complex system. However, Virtual Machine Introspection tools for the cloud are usually written specifically for a single system and do not provide a standard interface to work with other security monitoring systems. Moreover, this technique breaks down the borders of the segregation between multiple tenants, which should be avoided in a public cloud computing environment. In this paper, we focus on building a flexible and encrypted Virtual Machine Introspection system, CryptVMI, to address the above concerns. Our approach maintains a client application on the user end to send queries to the cloud, as well as parse the results returned in a standard form. We also have a handler that cooperates with an introspection application in the cloud infrastructure to process queries and return encrypted results. This work shows our design and implementation of this system, and the benchmark results prove that it does not incur much performance overhead.
Keywords: cloud computing, confidentiality, virtual machine introspection, virtualization (ID#: 15-5498)
URL: http://doi.acm.org/10.1145/2600075.2600078

 

Arto Juhola, Titta Ahola, Kimmo Ahola; Adaptive Risk Management with Ontology Linked Evidential Statistics and SDN; ECSAW '14 Proceedings of the 2014 European Conference on Software Architecture Workshops, August 2014, Article No. 2. Doi: 10.1145/2642803.2642805 Abstract: New technologies have increased the dynamism of distributed systems; advances such as Software Defined Networking (SDN) and cloud computing enable unprecedented service flexibility and scalability. By their nature, they are in a constant state of flux, presenting tough challenges for system security. Here, an adaptive, real-time risk management system capable of keeping abreast of these developments is considered. This paper presents ongoing work on combining a hierarchical threat ontology, real-time risk analysis, and SDN into an efficient whole. The main contribution of this paper lies in finding suitable architectures, components, necessary requirements, and favorable modifications to the systems and system modelling (including the models involved in the security analysis) to reach this goal.
Keywords: Adaptive security, Dempster-Shafer, Dezert-Smarandache, Neural Network inspired Fuzzy C-means, SDN, Threat ontology (ID#: 15-5499)
URL: http://doi.acm.org/10.1145/2642803.2642805

 

Tomas Bures, Petr Hnetynka, Frantisek Plasil; Strengthening Architectures of Smart CPS by Modeling Them as Runtime Product-Lines; CBSE '14 Proceedings of the 17th International ACM Sigsoft Symposium On Component-Based Software Engineering, June 2014, Pages 91-96. Doi: 10.1145/2602458.2602478 Abstract: Smart Cyber-Physical Systems (CPS) are complex distributed decentralized systems of cooperating mobile and stationary devices which closely interact with the physical environment. Although Component-Based Development (CBD) might seem a viable solution to target the complexity of smart CPS, existing component models scarcely cope with the open-ended and very dynamic nature of smart CPS. This is especially true for design-time modeling using hierarchical explicit architectures, which traditionally provide an excellent means of coping with complexity by providing multiple levels of abstractions and explicitly specifying communication links between component instances. In this paper we propose a modeling method (materialized in the SOFA NG component model) which conveys the benefits of explicit architectures of hierarchical components to the design of smart CPS. Specifically, we base our method on modeling systems as reference architectures of Software Product Lines (SPL). Contrary to traditional SPL, which is a fully design-time approach, we create SPL configurations at runtime. We do so in a decentralized way by translating the configuration process to the process of establishing component ensembles (i.e. dynamic cooperation groups of components) of our DEECo component model.
Keywords: component model, component-based development, cyber-physical systems, software architecture, software components (ID#: 15-5500)
URL: http://doi.acm.org/10.1145/2602458.2602478

 

Benoît Libert, Marc Joye, Moti Yung; Born and Raised Distributively: Fully Distributed Non-Interactive Adaptively-Secure Threshold Signatures with Short Shares; PODC '14 Proceedings of the 2014 ACM Symposium On Principles Of Distributed Computing, July 2014, Pages 303-312. Doi: 10.1145/2611462.2611498 Abstract: Threshold cryptography is a fundamental distributed computational paradigm for enhancing the availability and the security of cryptographic public-key schemes. It does so by dividing private keys into n shares handed out to distinct servers. In threshold signature schemes, a set of at least t+1 ≤ n servers is needed to produce a valid digital signature. Availability is assured by the fact that any subset of t+1 servers can produce a signature when authorized. At the same time, the scheme should remain robust (in the fault tolerance sense) and unforgeable (cryptographically) against up to t corrupted servers; i.e., it adds quorum control to traditional cryptographic services and introduces redundancy. Originally, most practical threshold signatures have a number of demerits: They have been analyzed in a static corruption model (where the set of corrupted servers is fixed at the very beginning of the attack), they require interaction, they assume a trusted dealer in the key generation phase (so that the system is not fully distributed), or they suffer from certain overheads in terms of storage (large share sizes). In this paper, we construct practical fully distributed (the private key is born distributed), non-interactive schemes --- where the servers can compute their partial signatures without communication with other servers --- with adaptive security (i.e., the adversary corrupts servers dynamically based on its full view of the history of the system). Our schemes are very efficient in terms of computation, communication, and scalable storage (with private key shares of size O(1), where certain solutions incur O(n) storage costs at each server). Unlike other adaptively secure schemes, our schemes are erasure-free (reliable erasure is a hard-to-assure and hard-to-administer property in actual systems). To the best of our knowledge, such a fully distributed highly constrained scheme has been an open problem in the area. In particular, and of special interest, is the fact that Pedersen's traditional distributed key generation (DKG) protocol can be safely employed in the initial key generation phase when the system is born -- although it is well-known not to ensure uniformly distributed public keys. An advantage of this is that this protocol only takes one round optimistically (in the absence of faulty players).
Keywords: adaptive security, availability, distributed key generation, efficiency, erasure-free schemes, fault tolerance, fully distributed systems, non-interactivity, threshold signature schemes (ID#: 15-5501)
URL: http://doi.acm.org/10.1145/2611462.2611498
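The t+1-of-n reconstruction at the core of the threshold schemes in the Libert et al. abstract above can be illustrated with a toy Shamir-style secret sharing over a prime field. The field size and API below are our assumptions for illustration, and this is not the paper's (far more sophisticated, dealer-free) scheme:

```python
import random

P = 2**127 - 1  # a Mersenne prime; the field choice is illustrative

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 shares reconstruct it.

    Evaluates a random degree-t polynomial with constant term `secret`
    at the points x = 1..n.
    """
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate the shared polynomial at x = 0."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

shares = share(123456789, t=2, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of 5 shares suffice
assert reconstruct(shares[2:]) == 123456789
```

Real threshold signature schemes never reassemble the key like this; each server signs with its share, and replacing the trusted dealer implicit in `share` with distributed key generation is precisely the "born distributed" property the paper achieves.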

 

Javier Cámara, Pedro Correia, Rogério de Lemos, Marco Vieira; Empirical Resilience Evaluation of an Architecture-Based Self-Adaptive Software System; QoSA '14 Proceedings of the 10th International ACM Sigsoft Conference on Quality of Software Architectures, June 2014, Pages 63-72. Doi:  10.1145/2602576.2602577 Abstract: Architecture-based self-adaptation is considered as a promising approach to drive down the development and operation costs of complex software systems operating in ever changing environments. However, there is still a lack of evidence supporting the arguments for the beneficial impact of architecture-based self-adaptation on resilience with respect to other customary approaches, such as embedded code-based adaptation. In this paper, we report on an empirical study about the impact on resilience of incorporating architecture-based self-adaptation in an industrial middleware used to collect data in highly populated networks of devices. To this end, we compare the results of resilience evaluation between the original version of the middleware, in which adaptation mechanisms are embedded at the code-level, and a modified version of that middleware in which the adaptation mechanisms are implemented using Rainbow, a framework for architecture-based self-adaptation. Our results show improved levels of resilience in architecture-based compared to embedded code-based self-adaptation.
Keywords: architecture-based self-adaptation, probabilistic model checking, rainbow, resilience evaluation (ID#: 15-5502)
URL: http://doi.acm.org/10.1145/2602576.2602577

 

Ebrahim Tarameshloo, Philip W.L. Fong, Payman Mohassel; On Protection in Federated Social Computing Systems; CODASPY '14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 75-86. Doi: 10.1145/2557547.2557555  Abstract: Nowadays, a user may belong to multiple social computing systems (SCSs) in order to benefit from a variety of services that each SCS may provide. To facilitate the sharing of contents across the system boundary, some SCSs provide a mechanism by which a user may "connect" his accounts on two SCSs. The effect is that contents from one SCS can now be shared to another SCS. Although such a connection feature delivers clear usability advantages for users, it also generates a host of privacy challenges. A notable challenge is that the access control policy of the SCS from which the content originates may not be honoured by the SCS to which the content migrates, because the latter fails to faithfully replicate the protection model of the former. In this paper we formulate a protection model for a federation of SCSs that support content sharing via account connection. A core feature of the model is that sharable contents are protected by access control policies that transcend system boundary - they are enforced even after contents are migrated from one SCS to another. To ensure faithful interpretation of access control policies, their evaluation involves querying the protection states of various SCSs, using Secure Multiparty Computation (SMC). An important contribution of this work is that we carefully formulate the conditions under which policy evaluation using SMC does not lead to the leakage of information about the protection states of the SCSs. We also study the computational problem of statically checking if an access control policy can be evaluated without information leakage. Lastly, we identify useful policy idioms.
Keywords: account connection, composite policy, federated social computing systems, policy language, protection model, safe function evaluation, secure content sharing, secure multiparty computation (ID#: 15-5503)
URL: http://doi.acm.org/10.1145/2557547.2557555

 

Sebastian Mödersheim, Luca Viganò; Sufficient Conditions for Vertical Composition of Security Protocols; ASIA CCS '14 Proceedings of the 9th ACM Symposium On Information, Computer And Communications Security, June 2014, Pages 435-446.   Doi: 10.1145/2590296.2590330 Abstract: Vertical composition of security protocols means that an application protocol (e.g., a banking service) runs over a channel established by another protocol (e.g., a secure channel provided by TLS). This naturally gives rise to a compositionality question: given a secure protocol P1 that provides a certain kind of channel as a goal and another secure protocol P2 that assumes this kind of channel, can we then derive that their vertical composition P2[P1] is secure? It is well known that protocol composition can lead to attacks even when the individual protocols are all secure in isolation. In this paper, we formalize seven easy-to-check static conditions that support a large class of channels and applications and that we prove to be sufficient for vertical security protocol composition.
Keywords: model checking, protocol composition, security protocols, static analysis, verification (ID#: 15-5504)
URL: http://doi.acm.org/10.1145/2590296.2590330

 

Chuangang Ren, Kai Chen, Peng Liu; Droidmarking: Resilient Software Watermarking for Impeding Android Application Repackaging; ASE '14 Proceedings of the 29th ACM/IEEE International Conference On Automated Software Engineering, September 2014, Pages 635-646. Doi: 10.1145/2642937.2642977  Abstract: Software plagiarism in Android markets (app repackaging) is raising serious concerns about the health of the Android ecosystem. Existing app repackaging detection techniques fall short in detection efficiency and in resilience to circumventing attacks; this allows repackaged apps to be widely propagated and causes extensive damages before being detected. To overcome these difficulties and instantly thwart app repackaging threats, we devise a new dynamic software watermarking technique - Droidmarking - for Android apps that combines the efforts of all stakeholders and achieves the following three goals: (1) copyright ownership assertion for developers, (2) real-time app repackaging detection on user devices, and (3) resilience to evading attacks. Distinct from existing watermarking techniques, the watermarks in Droidmarking are non-stealthy, which means that watermark locations are not intentionally concealed, yet still are impervious to evading attacks. This property effectively enables normal users to recover and verify watermark copyright information without requiring a confidential watermark recognizer. Droidmarking is based on a primitive called self-decrypting code (SDC). Our evaluations show that Droidmarking is a feasible and robust technique to effectively impede app repackaging with relatively small performance overhead.
Keywords: android, app repackaging, software watermarking (ID#: 15-5505)
URL: http://doi.acm.org/10.1145/2642937.2642977
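The self-decrypting code (SDC) primitive behind Droidmarking can be illustrated with a toy sketch: a code fragment is XOR-encrypted under a device-specific secret (standing in here for a PUF-like per-device property), so it decrypts and runs correctly only on the intended device. All names below are hypothetical, and the real SDC operates on app bytecode rather than Python source.

```python
import hashlib

def derive_device_key(device_id: bytes, length: int) -> bytes:
    """Toy stand-in for a device-unique secret; this hash chain is purely
    illustrative."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(device_id + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_sdc(code: str, device_id: bytes) -> bytes:
    """Bind a code fragment to one device by XOR-ing it with the device key."""
    raw = code.encode()
    key = derive_device_key(device_id, len(raw))
    return bytes(x ^ k for x, k in zip(raw, key))

def run_sdc(blob: bytes, device_id: bytes) -> dict:
    """Self-decrypting stub: recover the source at run time and execute it."""
    key = derive_device_key(device_id, len(blob))
    source = bytes(x ^ k for x, k in zip(blob, key)).decode()
    namespace: dict = {}
    exec(source, namespace)  # the embedded watermark check runs only on the right device
    return namespace

blob = encrypt_sdc("WATERMARK = 'owner:alice'", b"device-42")
assert run_sdc(blob, b"device-42")["WATERMARK"] == "owner:alice"
```

On any other device the XOR yields bytes that almost certainly fail to decode or parse, which is the binding property the watermark relies on.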

 

Sampsa Rauti, Johannes Holvitie, Ville Leppänen; Towards a Diversification Framework for Operating System Protection; CompSysTech '14 Proceedings of the 15th International Conference on Computer Systems and Technologies, June 2014, Pages 286-293. Doi: 10.1145/2659532.2659642  Abstract: In order to use resources of a computer, malware has to know the interfaces provided by the operating system. If we make these critical interfaces unique by diversifying the operating system and user applications, a piece of malware can no longer successfully interact with its environment. Diversification can be considered as a computer-specific secret. This paper discusses how this API diversification could be performed. We also study how much work would be needed to diversify the Linux kernel in order to hide the system call interface from malware.
Keywords: code diversification, malware protection, operating system security (ID#: 15-5506)
URL: http://doi.acm.org/10.1145/2659532.2659642
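The core idea, making the system-call interface unique per machine, can be sketched as a secret permutation of syscall numbers. This is an assumed toy illustration (names hypothetical), not the Linux-kernel mechanism the authors study:

```python
import random

def diversify_syscall_table(syscalls, seed):
    """Derive a machine-specific permutation of syscall numbers from a secret
    seed shared by the kernel and legitimately rewritten binaries."""
    rng = random.Random(seed)
    numbers = list(range(len(syscalls)))
    rng.shuffle(numbers)
    return dict(zip(syscalls, numbers))

# Malware built against the standard numbering now invokes the wrong entries.
table = diversify_syscall_table(["read", "write", "open", "close"], "machine-secret")
assert sorted(table.values()) == [0, 1, 2, 3]  # still a complete, valid mapping
assert table == diversify_syscall_table(["read", "write", "open", "close"], "machine-secret")
```

The mapping is deterministic given the seed, so the kernel and trusted binaries agree on it, while it remains a computer-specific secret to everything else.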

 

Ren-Shuo Liu, De-Yu Shen, Chia-Lin Yang, Shun-Chih Yu, Cheng-Yuan Michael Wang; NVM Duet: Unified Working Memory and Persistent Store Architecture; ACM SIGPLAN Notices - ASPLOS '14, Volume 49 Issue 4, April 2014, Pages 455-470. Doi: 10.1145/2644865.2541957  Abstract: Emerging non-volatile memory (NVM) technologies have gained a lot of attention recently. The byte-addressability and high density of NVM enable computer architects to build large-scale main memory systems. NVM has also been shown to be a promising alternative to conventional persistent store. With NVM, programmers can persistently retain in-memory data structures without writing them to disk. Therefore, one can envision that in the future, NVM will play the role of both working memory and persistent store at the same time.  Persistent store demands consistency and durability guarantees, thereby imposing new design constraints on the memory system. Consistency is achieved at the expense of serializing multiple write operations. Durability requires memory cells to guarantee non-volatility and thus reduces the write speed. Therefore, a unified architecture oblivious to these two use cases would lead to suboptimal design. In this paper, we propose a novel unified working memory and persistent store architecture, NVM Duet, which provides the required consistency and durability guarantees for persistent store while relaxing these constraints if accesses to NVM are for working memory. A cross-layer design approach is adopted to achieve the design goal. Overall, simulation results demonstrate that NVM Duet achieves up to 1.68x (1.32x on average) speedup compared with the baseline design.
Keywords:  consistency, durability, memory management, memory scheduler, non-volatile memory, phase-change memory, refresh, resistance drift, resistance distribution, storage-class memory (ID#: 15-5507)
URL: http://doi.acm.org/10.1145/2644865.2541957

 

Keita Teranishi, Michael A. Heroux; Toward Local Failure Local Recovery Resilience Model using MPI-ULFM; EuroMPI/ASIA '14 Proceedings of the 21st European MPI Users' Group Meeting, September 2014, Pages 51ff.   Doi: 10.1145/2642769.2642774  Abstract: The current system reaction to the loss of a single MPI process is to kill all the remaining processes and restart the application from the most recent checkpoint. This approach will become unfeasible for future extreme scale systems. We address this issue using an emerging resilient computing model called Local Failure Local Recovery (LFLR) that provides application developers with the ability to recover locally and continue application execution when a process is lost. We discuss the design of our software framework to enable the LFLR model using MPI-ULFM and demonstrate the resilient version of MiniFE that achieves a scalable recovery from process failures.
Keywords: Fault Tolerance, MPI, PDE solvers, Scientific Computing, User Level Fault Mitigation (ID#: 15-5508)
URL: http://doi.acm.org/10.1145/2642769.2642774

 

Fangfang Zhang, Heqing Huang, Sencun Zhu, Dinghao Wu, Peng Liu; ViewDroid: Towards Obfuscation-Resilient Mobile Application Repackaging Detection; WiSec '14 Proceedings of the 2014 ACM Conference On Security And Privacy In Wireless & Mobile Networks, July 2014, Pages 25-36.  Doi: 10.1145/2627393.2627395  Abstract: In recent years, as mobile smart device sales grow quickly, the development of mobile applications (apps) keeps accelerating, so does mobile app repackaging. Attackers can easily repackage an app under their own names or embed advertisements to earn pecuniary profits. They can also modify a popular app by inserting malicious payloads into the original app and leverage its popularity to accelerate malware propagation. In this paper, we propose ViewDroid, a user interface based approach to mobile app repackaging detection. Android apps are user interaction intensive and event dominated, and the interactions between users and apps are performed through user interface, or views. This observation inspires the design of our new birthmark for Android apps, namely, feature view graph, which captures users' navigation behavior across app views. Our experimental results demonstrate that this birthmark can characterize Android apps from a higher level abstraction, making it resilient to code obfuscation. ViewDroid can detect repackaged apps at a large scale, both effectively and efficiently. Our experiments also show that the false positive and false negative rates of ViewDroid are both very low.
Keywords: mobile application, obfuscation resilient, repackaging, user interface (ID#: 15-5509)
URL: http://doi.acm.org/10.1145/2627393.2627395
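The feature-view-graph birthmark can be approximated, very roughly, by comparing the sets of view-to-view transitions two apps exhibit. The paper's actual birthmark is a graph with features attached, matched structurally; the minimal set-based sketch below (all names invented) only conveys why UI navigation structure survives code obfuscation:

```python
def birthmark_similarity(views_a, views_b):
    """Jaccard similarity of two apps' view-navigation edge sets: a crude
    stand-in for the paper's feature view graph matching."""
    a, b = set(views_a), set(views_b)
    return len(a & b) / len(a | b) if a | b else 1.0

original  = [("login", "home"), ("home", "settings"), ("home", "detail")]
repack    = [("login", "home"), ("home", "settings"), ("home", "detail")]  # UI kept intact
unrelated = [("splash", "feed"), ("feed", "profile")]
assert birthmark_similarity(original, repack) == 1.0
assert birthmark_similarity(original, unrelated) == 0.0
```

Code obfuscation renames classes and reshapes methods, but a repackaged app that keeps the original user experience keeps essentially the same navigation edges, so the similarity stays high.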

 

Kevin M. Carter, James F. Riordan, Hamed Okhravi; A Game Theoretic Approach to Strategy Determination for Dynamic Platform Defenses; MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 21-30.  Doi: 10.1145/2663474.2663478   Abstract: Moving target defenses based on dynamic platforms have been proposed as a way to make systems more resistant to attacks by changing the properties of the deployed platforms. Unfortunately, little work has been done on discerning effective strategies for the utilization of these systems, instead relying on two generally false premises: simple randomization leads to diversity and platforms are independent. In this paper, we study the strategic considerations of deploying a dynamic platform system by specifying a relevant threat model and applying game theory and statistical analysis to discover optimal usage strategies. We show that preferential selection of platforms based on optimizing platform diversity approaches the statistically optimal solution and significantly outperforms simple randomization strategies. Counter to popular belief, this deterministic strategy leverages fewer platforms than may be generally available, which increases system security.
Keywords: game theory, moving target, system diversity (ID#: 15-5510)
URL: http://doi.acm.org/10.1145/2663474.2663478
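The paper's finding, that preferring diverse platforms beats uniform randomization, can be sketched with a greedy max-min selection over a toy attribute distance. This is an assumed illustration, not the authors' game-theoretic computation:

```python
def select_diverse_platforms(platforms, distance, k):
    """Greedy max-min selection: pick k platforms that maximize pairwise
    diversity instead of sampling uniformly at random."""
    chosen = [max(platforms,
                  key=lambda p: sum(distance(p, q) for q in platforms))]
    while len(chosen) < k:
        rest = [p for p in platforms if p not in chosen]
        chosen.append(max(rest, key=lambda p: min(distance(p, q) for q in chosen)))
    return chosen

def attribute_distance(p, q):
    """Number of attributes (here OS and CPU architecture) in which two
    platforms differ."""
    return sum(1 for a, b in zip(p, q) if a != b)

plats = [("linux", "x86"), ("linux", "arm"), ("bsd", "x86"), ("bsd", "arm")]
picked = select_diverse_platforms(plats, attribute_distance, 2)
assert attribute_distance(picked[0], picked[1]) == 2  # maximally different pair
```

As in the paper's observation, the deterministic diversity-driven choice uses fewer platforms than are available while avoiding pairs that share exploitable properties.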

 

Giovanni Toso, Daniele Munaretto, Mauro Conti, Michele Zorzi; Attack Resilient Underwater Networks Through Software Defined Networking; WUWNET '14 Proceedings of the International Conference on Underwater Networks & Systems, November 2014, Article No. 44.  Doi: 10.1145/2671490.2674589  Abstract: In this paper we discuss how security of Underwater Acoustic Networks (UANs) could be improved by leveraging the concept of Software Defined Networking (SDN). In particular, we consider a set of realistic network deployment scenarios and security threats. We propose possible approaches towards security countermeasures that employ the SDN paradigm, and that could significantly mitigate the impact of attacks. Furthermore, we discuss those approaches with respect to deployment issues such as routing configuration, nodes trajectory optimization, and management of the node buffers. We believe that the proposed approaches could pave the way to further research in the design of UANs that are more resilient to both attacks and failures.  
Keywords: Software Defined Networking, Underwater Acoustic Networks (ID#: 15-5511)
URL: http://doi.acm.org/10.1145/2671490.2674589

 

Young-Jin Kim, Keqiang He, Marina Thottan, Jayant G. Deshpande; Self-Configurable and Scalable Utility Communications Enabled by Software-Defined Networks; e-Energy '14 Proceedings of the 5th International Conference On Future Energy Systems, June 2014, Pages 217-218. Doi: 10.1145/2602044.2602074  Abstract: Utility communications are increasingly required to support machine-to-machine communications for thousands to millions of end devices ranging from meters and PMUs to tiny sensors and electric vehicles. The Software Defined Network (SDN) concept provides inherent features to support in a scalable and self-configurable manner the deployment and management of existing and envisioned utility end devices and applications. Using the SDN technology, we can create dynamically adaptable virtual network slices to cost-effectively and securely meet the utility communication needs. The programmability of SDN allows the elastic, fast, and scalable deployment of present and future utility applications with varying requirements on security and time criticality. In this work, we design an SDN-enabled utility communication architecture to support scalable deployment of applications that leverage many utility end devices. The feasibility of the architecture over an SDN network is discussed.
Keywords: machine-to-machine (M2M), performance, scalability, self-configurability (ID#: 15-5512)
URL: http://doi.acm.org/10.1145/2602044.2602074

 

Euijin Choo, Jianchun Jiang, Ting Yu; COMPARS: Toward an Empirical Approach for Comparing the Resilience of Reputation Systems; CODASPY '14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 87-98. Doi: 10.1145/2557547.2557565  Abstract: Reputation is a primary mechanism for trust management in decentralized systems. Many reputation-based trust functions have been proposed in the literature. However, picking the right trust function for a given decentralized system is a non-trivial task. One has to consider and balance a variety of factors, including computation and communication costs, scalability and resilience to manipulations by attackers. Although the former two are relatively easy to evaluate, the evaluation of resilience of trust functions is challenging. Most existing work bases evaluation on static attack models, which is unrealistic as it fails to reflect the adaptive nature of adversaries (who are often real human users rather than simple computing agents). In this paper, we highlight the importance of the modeling of adaptive attackers when evaluating reputation-based trust functions, and propose an adaptive framework - called COMPARS - for the evaluation of resilience of reputation systems. Given the complexity of reputation systems, it is often difficult, if not impossible, to exactly derive the optimal strategy of an attacker. Therefore, COMPARS takes a practical approach that attempts to capture the reasoning process of an attacker as it decides its next action in a reputation system. Specifically, given a trust function and an attack goal, COMPARS generates an attack tree to estimate the possible outcomes of an attacker's action sequences up to certain points in the future. Through attack trees, COMPARS simulates the optimal attack strategy for a specific reputation function f, which will be used to evaluate the resilience of f. By doing so, COMPARS allows one to conduct a fair and consistent comparison of different reputation functions.
Keywords: evaluation framework, reputation system, resilience, trust functions (ID#: 15-5513)
URL: http://doi.acm.org/10.1145/2557547.2557565
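COMPARS's attack-tree expansion, estimating the outcomes of an attacker's action sequences a few steps into the future, can be sketched as a bounded-depth search. The reputation dynamics below are invented purely for illustration:

```python
def best_attack(state, depth, actions, transition, payoff):
    """Bounded-depth expansion of the attack tree: return the best payoff
    reachable within `depth` attacker actions, and the action sequence."""
    best_val, best_plan = payoff(state), []
    if depth == 0:
        return best_val, best_plan
    for act in actions(state):
        val, plan = best_attack(transition(state, act), depth - 1,
                                actions, transition, payoff)
        if val > best_val:
            best_val, best_plan = val, [act] + plan
    return best_val, best_plan

# Toy reputation system (hypothetical): acting honestly builds reputation;
# cheating converts the current reputation into illicit gain.
def actions(state):
    return ["honest", "cheat"]

def transition(state, act):
    rep, gain = state
    return (rep + 1, gain) if act == "honest" else (rep // 2, gain + rep)

value, plan = best_attack((2, 0), 3, actions, transition, lambda s: s[1])
assert value == 4 and plan == ["honest", "honest", "cheat"]
```

Even this toy captures the adaptive behavior the paper argues static attack models miss: the best three-step plan banks reputation first and cheats last.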

 

Fan Long, Stelios Sidiroglou-Douskos, Martin Rinard; Automatic Runtime Error Repair and Containment via Recovery Shepherding; PLDI '14 Proceedings of the 35th ACM SIGPLAN Conference on Programming Language Design and Implementation, June 2014, Pages 227-238. Doi: 10.1145/2666356.2594337  Abstract: We present a system, RCV, for enabling software applications to survive divide-by-zero and null-dereference errors. RCV operates directly on off-the-shelf, production, stripped x86 binary executables. RCV implements recovery shepherding, which attaches to the application process when an error occurs, repairs the execution, tracks the repair effects as the execution continues, contains the repair effects within the application process, and detaches from the process after all repair effects are flushed from the process state. RCV therefore incurs negligible overhead during the normal execution of the application.  We evaluate RCV on all divide-by-zero and null-dereference errors available in the CVE database [2] from January 2011 to March 2013 that 1) provide publicly-available inputs that trigger the error which 2) we were able to use to trigger the reported error in our experimental environment. We collected a total of 18 errors in seven real world applications, Wireshark, the FreeType library, Claws Mail, LibreOffice, GIMP, the PHP interpreter, and Chromium. For 17 of the 18 errors, RCV enables the application to continue to execute to provide acceptable output and service to its users on the error-triggering inputs. For 13 of the 18 errors, the continued RCV execution eventually flushes all of the repair effects and RCV detaches to restore the application to full clean functionality. We perform a manual analysis of the source code relevant to our benchmark errors, which indicates that for 11 of the 18 errors the RCV and later patched versions produce identical or equivalent results on all inputs.
Keywords: divide-by-zero, error recovery, null-dereference (ID#: 15-5514)
URL: http://doi.acm.org/10.1145/2666356.2594337

 

Jason Xin Zheng, Miodrag Potkonjak; A Digital PUF-Based IP Protection Architecture for Network Embedded Systems; ANCS '14 Proceedings of the tenth ACM/IEEE Symposium on Architectures for Networking and Communications Systems, October 2014, Pages 255-256. Doi: 10.1145/2658260.2661776   Abstract: In this paper we present an architecture for a secure embedded system that is resilient to tampering and code injection attacks and offers anti-piracy protection for the software and hardware Intellectual Property (IP). We incorporate digital Physical Unclonable Functions (PUFs) in an authentication mechanism at the machine code level. The digital PUFs are used to de-obfuscate, at run time, a firmware that is issued by a central authority with very little performance and resource overhead. Each PUF is unique to the hosting device, and at the same time can be reconfigured with new seeds. The reconfigurable digital PUFs (drPUFs) have much lower risk of side-channel attacks and a vastly higher number of usable challenge-response pairs, while retaining the speed and ease of implementation of digital PUFs.
Keywords: embedded systems, ip protection, obfuscation, puf (ID#: 15-5515)
URL: http://doi.acm.org/10.1145/2658260.2661776

 

Xing Chen, Wei Yu, David Griffith, Nada Golmie, Guobin Xu; On Cascading Failures and Countermeasures Based on Energy Storage in the Smart Grid; RACS '14 Proceedings of the 2014 Conference on Research in Adaptive and Convergent Systems, October 2014, Pages 291-296. Doi:  10.1145/2663761.2663770  Abstract: Recently, there have been growing concerns about electric power grid security and resilience. The performance of the power grid may suffer from component failures or targeted attacks. A sophisticated adversary may target critical components in the grid, leading to cascading failures and large blackouts. To this end, this paper begins with identifying the most critical components that lead to cascading failures in the grid and then presents a defensive mechanism using energy storage to defend against cascading failures. Based on the optimal power flow control on the standard IEEE power system test cases, we systematically assess component significance, simulate attacks against power grid components, and evaluate the consequences. We also conduct extensive simulations to investigate the effectiveness of deploying Energy Storage Systems (ESSs), in terms of storage capacity and deployment locations, to mitigate cascading failures. Through extensive simulations, our data shows that integrating energy storage systems into the smart grid can efficiently mitigate cascading failures.
Keywords: cascading failure, cascading mitigation, energy storage, smart grid (ID#: 15-5516)
URL: http://doi.acm.org/10.1145/2663761.2663770
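The overload-cascade mechanism the paper defends against can be sketched with a uniform load-redistribution toy. This is an assumed simplification: the authors use optimal power flow on IEEE test cases, not even redistribution.

```python
def simulate_cascade(capacity, load, failed):
    """Toy overload cascade: remove the `failed` lines, spread their load
    evenly over surviving lines, and iterate until no line is overloaded."""
    load = dict(load)
    alive = set(capacity) - set(failed)
    shed = sum(load.pop(f) for f in failed)
    while alive and shed > 0:
        extra = shed / len(alive)
        shed = 0
        for line in sorted(alive):
            load[line] += extra
            if load[line] > capacity[line]:
                alive.remove(line)
                shed += load.pop(line)
    return alive

cap  = {"a": 10, "b": 10, "c": 4}
base = {"a": 6,  "b": 6,  "c": 3}
assert simulate_cascade(cap, base, ["c"]) == {"a", "b"}  # loss of "c" is absorbed
assert simulate_cascade(cap, base, ["a"]) == set()       # loss of "a" cascades to blackout
```

The contrast between the two failures is exactly the notion of component criticality the paper assesses, and added storage capacity on surviving lines is what breaks the second cascade.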

 

Min Suk Kang, Virgil D. Gligor;  Routing Bottlenecks in the Internet: Causes, Exploits, and Countermeasures; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 321-333. Doi: 10.1145/2660267.2660299  Abstract: How pervasive is the vulnerability to link-flooding attacks that degrade connectivity of thousands of Internet hosts? Are some geographic regions more vulnerable than others? Do practical countermeasures exist? To answer these questions, we introduce the notion of the routing bottlenecks and show that it is a fundamental property of Internet design; i.e., it is a consequence of route-cost minimizations. We illustrate the pervasiveness of routing bottlenecks in an experiment comprising 15 countries and 15 cities distributed around the world, and measure their susceptibility to scalable link-flooding attacks. We present the key characteristics of routing bottlenecks, including size, link type, and distance from host destinations, and suggest specific structural and operational countermeasures to link-flooding attacks. These countermeasures can be deployed by network operators without needing major Internet redesign.
Keywords: ddos attack, link-flooding attack, power law, routing bottleneck (ID#: 15-5517)
URL: http://doi.acm.org/10.1145/2660267.2660299

 

Der-Yeuan Yu, Aanjhan Ranganathan, Thomas Locher, Srdjan Capkun, David Basin; Short Paper: Detection of GPS Spoofing Attacks in Power Grids; WiSec '14 Proceedings of the 2014 ACM Conference On Security And Privacy In Wireless & Mobile Networks, July 2014, Pages 99-104.  Doi: 10.1145/2627393.2627398  Abstract: Power companies are deploying a multitude of sensors to monitor the energy grid. Measurements at different locations should be aligned in time to obtain the global state of the grid, and the industry therefore uses GPS as a common clock source. However, these sensors are exposed to GPS time spoofing attacks that cause misaligned aggregated measurements, leading to inaccurate monitoring that affects power stability and line fault contingencies. In this paper, we analyze the resilience of phasor measurement sensors, which record voltages and currents, to GPS spoofing performed by an adversary external to the system. We propose a solution that leverages the characteristics of multiple sensors in the power grid to limit the feasibility of such attacks. In order to increase the robustness of wide-area power grid monitoring, we evaluate mechanisms that allow collaboration among GPS receivers to detect spoofing attacks. We apply multilateration techniques to allow a set of GPS receivers to locate a false GPS signal source. Using simulations, we show that receivers sharing a local clock can locate nearby spoofing adversaries with sufficient confidence.
Keywords: clock synchronization, gps spoofing, power grids (ID#: 15-5518)
URL: http://doi.acm.org/10.1145/2627393.2627398

 

Camille Fayollas, Philippe Palanque, Jean-Charles Fabre, David Navarre, Eric Barboni, Martin Cronel, Yannick Deleris; A Fault-Tolerant Architecture for Resilient Interactive Systems; IHM '14 Proceedings of the 26th Conference on l'Interaction Homme-Machine, October 2014, Pages 80-90. Doi:  10.1145/2670444.2670462  Abstract: Research contributions to improve interactive systems reliability have, for now, mainly focused on preventing fault occurrence by removing software bugs at development time. However, interactive systems' complexity is so high that, whatever efforts are deployed at development time, faults and failures occur at operation time. Root causes of such failures may be transient hardware faults or (when systems are used in high atmosphere) so-called "natural faults" triggered by alpha particles in processors or neutrons from cosmic radiation. This paper proposes an exhaustive identification of the faults to be handled in order to improve interactive systems reliability. As no research has so far been carried out in the field of interactive systems to detect and remove natural faults, this paper proposes a software architecture providing fault-tolerance mechanisms dedicated to interactive systems. More precisely, the paper describes how such an architecture addresses the various components of interactive applications, namely widgets, the user application, and the window manager. These concepts are demonstrated through a case study from the domain of interactive cockpits of large civil aircraft.
Keywords: critical interactive systems, fault-tolerance, resilience, software architecture (ID#: 15-5519)
URL: http://doi.acm.org/10.1145/2670444.2670462



Journal: IEEE Transactions on Information Forensics and Security, March 2015

 

 

The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, surveillance and systems applications that incorporate these features.  It is published by the IEEE Signal Processing Society.


 

Veugen, T.; de Haan, R.; Cramer, R.; Muller, F., "A Framework for Secure Computations With Two Non-Colluding Servers and Multiple Clients, Applied to Recommendations," Information Forensics and Security, IEEE Transactions on, vol.10, no.3, pp. 445-457, March 2015. doi: 10.1109/TIFS.2014.2370255  Abstract: We provide a generic framework that, with the help of a preprocessing phase that is independent of the inputs of the users, allows an arbitrary number of users to securely outsource a computation to two non-colluding external servers. Our approach is shown to be provably secure in an adversarial model where one of the servers may arbitrarily deviate from the protocol specification, as well as employ an arbitrary number of dummy users. We use these techniques to implement a secure recommender system based on collaborative filtering that becomes more secure, and significantly more efficient than previously known implementations of such systems, when the preprocessing efforts are excluded. We suggest different alternatives for preprocessing, and discuss their merits and demerits.
Keywords: Authentication; Computational modeling; Cryptography; Protocols; Recommender systems; Servers; Secure multi-party computation; client-server systems; malicious model; preprocessing; recommender systems; secret sharing (ID#: 15-4776)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6955802&isnumber=7019030
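The starting point of such two-server frameworks, splitting each client input so that neither server alone learns anything, can be sketched with plain additive secret sharing. The paper's construction adds a preprocessing phase and security against an actively deviating server, both of which this toy omits:

```python
import random

P = 2**61 - 1  # public prime modulus for the share arithmetic

def share(x):
    """Split x into two additive shares; each server alone sees a random value."""
    r = random.randrange(P)
    return r, (x - r) % P

def reconstruct(s1, s2):
    return (s1 + s2) % P

# Each server sums the shares it holds locally; only combining both partial
# sums reveals the aggregate, never an individual rating.
ratings = [4, 5, 3]
shares1, shares2 = zip(*(share(v) for v in ratings))
total = reconstruct(sum(shares1) % P, sum(shares2) % P)
assert total == sum(ratings)
```

Linear operations like this aggregate come essentially for free on shares, which is why collaborative-filtering recommendations are a natural fit for the framework.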

 

Ma, S.; Huang, Q.; Zhang, M.; Yang, B., "Efficient Public Key Encryption With Equality Test Supporting Flexible Authorization," Information Forensics and Security, IEEE Transactions on, vol.10, no.3, pp. 458-470, March 2015. doi: 10.1109/TIFS.2014.2378592  Abstract: We reformalize and recast the notion of public key encryption with equality test (PKEET), which was proposed in CT-RSA 2010 and supports to check whether two ciphertexts encrypted under different public keys contain the same message. PKEET has many interesting applications, for example, in constructing searchable encryption and partitioning encrypted data. However, the original PKEET scheme lacks an authorization mechanism for a user to control the comparison of its ciphertexts with others’. In this paper, we study the authorization mechanism for PKEET, and propose four types of authorization policies to enhance the privacy of users’ data. We give the definitions of the policies, propose a PKEET scheme supporting these four types of authorization at the same time, and prove its security based on the computational Diffie–Hellman assumption in the random oracle model. To the best of our knowledge, it is the only PKEET scheme supporting flexible authorization.
Keywords: Authorization; Electronic mail; Encryption; Monitoring; Public key; Searchable encryption; flexible authorization; public key encryption with equality test; searchable encryption (ID#: 15-4777)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975231&isnumber=7019030

 

Yao, G.; Bi, J.; Vasilakos, A.V., "Passive IP Traceback: Disclosing the Locations of IP Spoofers From Path Backscatter," Information Forensics and Security, IEEE Transactions on, vol.10, no.3, pp. 471-484, March 2015. doi: 10.1109/TIFS.2014.2381873 Abstract: It has long been known that attackers may use forged source IP addresses to conceal their real locations. To capture the spoofers, a number of IP traceback mechanisms have been proposed. However, due to the challenges of deployment, there has not been a widely adopted IP traceback solution, at least at the Internet level. As a result, the mist on the locations of spoofers has never been dissipated till now. This paper proposes passive IP traceback (PIT), which bypasses the deployment difficulties of IP traceback techniques. PIT investigates Internet Control Message Protocol error messages (named path backscatter) triggered by spoofing traffic, and tracks the spoofers based on publicly available information (e.g., topology). In this way, PIT can find the spoofers without any deployment requirement. This paper illustrates the causes, collection, and statistical results on path backscatter, demonstrates the processes and effectiveness of PIT, and shows the captured locations of spoofers through applying PIT on the path backscatter data set. These results can help further reveal IP spoofing, which has been studied for long but never well understood. Though PIT cannot work in all spoofing attacks, it may be the most useful mechanism to trace spoofers before an Internet-level traceback system is deployed for real.
Keywords: Backscatter; Computer crime; IP networks; Internet; Logic gates; Telescopes; Topology; Computer network management; IP traceback; computer network security; denial of service (DoS) (ID#: 15-4778)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6987335&isnumber=7019030

 

Barsoum, A.F.; Hasan, M.A., "Provable Multicopy Dynamic Data Possession in Cloud Computing Systems," Information Forensics and Security, IEEE Transactions on, vol.10, no.3, pp. 485-497, March 2015. doi: 10.1109/TIFS.2014.2384391  Abstract: More and more organizations are opting for outsourcing data to remote cloud service providers (CSPs). Customers can rent the CSP's storage infrastructure to store and retrieve almost unlimited amounts of data by paying fees metered in gigabyte/month. For an increased level of scalability, availability, and durability, some customers may want their data to be replicated on multiple servers across multiple data centers. The more copies the CSP is asked to store, the more fees the customers are charged. Therefore, customers need to have a strong guarantee that the CSP is storing all data copies that are agreed upon in the service contract, and all these copies are consistent with the most recent modifications issued by the customers. In this paper, we propose a map-based provable multicopy dynamic data possession (MB-PMDDP) scheme that has the following features: 1) it provides an evidence to the customers that the CSP is not cheating by storing fewer copies; 2) it supports outsourcing of dynamic data, i.e., it supports block-level operations, such as block modification, insertion, deletion, and append; and 3) it allows authorized users to seamlessly access the file copies stored by the CSP. We give a comparative analysis of the proposed MB-PMDDP scheme with a reference model obtained by extending existing provable possession of dynamic single-copy schemes. The theoretical analysis is validated through experimental results on a commercial cloud platform. In addition, we show the security against colluding servers, and discuss how to identify corrupted copies by slightly modifying the proposed scheme.
Keywords: Computational modeling; Cryptography; Indexes; Organizations; Outsourcing; Servers; Tin; Cloud computing; data replication; dynamic environment; outsourcing data storage (ID#: 15-4779)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6991539&isnumber=7019030

 

Li, J.; Li, X.; Yang, B.; Sun, X., "Segmentation-Based Image Copy-Move Forgery Detection Scheme," Information Forensics and Security, IEEE Transactions on, vol.10, no.3, pp. 507-518, March 2015. doi: 10.1109/TIFS.2014.2381872  Abstract: In this paper, we propose a scheme to detect the copy-move forgery in an image, mainly by extracting the keypoints for comparison. The main difference to the traditional methods is that the proposed scheme first segments the test image into semantically independent patches prior to keypoint extraction. As a result, the copy-move regions can be detected by matching between these patches. The matching process consists of two stages. In the first stage, we find the suspicious pairs of patches that may contain copy-move forgery regions, and we roughly estimate an affine transform matrix. In the second stage, an Expectation-Maximization-based algorithm is designed to refine the estimated matrix and to confirm the existence of copy-move forgery. Experimental results prove the good performance of the proposed scheme via comparing it with the state-of-the-art schemes on the public databases.
Keywords: Accuracy; Educational institutions; Estimation; Forgery; Image segmentation; Robustness; Transforms; Copy-move forgery detection; image forensics; segmentation (ID#: 15-4780)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6987281&isnumber=7019030
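The first matching stage the abstract describes, roughly estimating an affine transform from matched keypoint pairs, can be sketched with a plain least-squares fit. This is a generic illustration, not the paper's segmentation or EM-based refinement; the function names and the assumption that matched keypoint pairs have already been extracted are ours.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares estimate of the 2x3 affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of matched keypoint coordinates (N >= 3).
    """
    n = src.shape[0]
    # Design matrix [x y 1] so that dst ~= X @ A.T
    X = np.hstack([src, np.ones((n, 1))])
    A, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return A.T  # 2x3 affine matrix

def match_residual(src, dst, A):
    """Mean reprojection error of the matches under affine transform A."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])
    return float(np.mean(np.linalg.norm(X @ A.T - dst, axis=1)))
```

A patch pair whose estimated transform yields a low residual over many matched keypoints would be flagged as a candidate copy-move region.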

 

Veugen, T., "Linear Round Bit-Decomposition of Secret-Shared Values," Information Forensics and Security, IEEE Transactions on, vol.10, no.3, pp.498, 506, March 2015. doi: 10.1109/TIFS.2014.2373811 Abstract: In the field of signal processing in the encrypted domain, linear operations are usually easy to perform, whereas multiplications, and bitwise operations like comparison, are more costly in terms of computation and communication. These bitwise operations frequently require a decomposition of the secret value into bits. To minimize the communication complexity, previous studies have focused on solutions that require a constant number of communication rounds, often at the cost of a large number of multiplications. We develop a bit-decomposition protocol within a linear secret sharing system, where sharings of the bits are computed from an integer that is secret-shared among multiple parties. We consider new solutions that require fewer multiplications, but where the number of communication rounds is linear in the input size. Although our basic solution requires m communication rounds to extract the m least significant bits, we present a way of reducing it by an arbitrary factor, using additional precomputations. Given that the best constant round solutions need at least 23 communication rounds, our solution is preferable for integers up to 165 bits, leading to fewer rounds and a smaller number of secure multiplications. In one variant, it is even possible to compute all l bits through only one opening and one additional communication round containing l multiplications, when a precomputation phase of 2 + log2 l rounds and 2l - 1 secure multiplications has been performed.
Keywords: Complexity theory; Cryptography; Equations; Logic gates; Materials; Protocols; Linear secret sharing; bit-decomposition; linear secret sharing; secure multi-party computations (ID#: 15-4781)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6963482&isnumber=7019030
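For readers unfamiliar with the setting, the toy sketch below shows additive secret sharing of an integer and its decomposition into bits. It only illustrates the problem statement; the actual protocol computes sharings of the bits without ever opening the secret, which this sketch makes no attempt to do.

```python
import random

def share(x, n_parties, modulus, rng):
    """Additively secret-share integer x mod `modulus` among n parties."""
    shares = [rng.randrange(modulus) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % modulus)  # last share fixes the sum
    return shares

def reconstruct(shares, modulus):
    """Open the secret by summing all shares mod the modulus."""
    return sum(shares) % modulus

def bits(x, m):
    """The m least-significant bits of x, LSB first."""
    return [(x >> i) & 1 for i in range(m)]
```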

 

Taha, M.; Schaumont, P., "Key Updating for Leakage Resiliency With Application to AES Modes of Operation," Information Forensics and Security, IEEE Transactions on, vol.10, no.3, pp.519, 528, March 2015. doi: 10.1109/TIFS.2014.2383359  Abstract: Side-channel analysis (SCA) exploits the information leaked through unintentional outputs (e.g., power consumption) to reveal the secret key of cryptographic modules. The real threat of SCA lies in the ability to mount attacks over small parts of the key and to aggregate information over different encryptions. The threat of SCA can be thwarted by changing the secret key at every run. Indeed, many contributions in the domain of leakage resilient cryptography tried to achieve this goal. However, the proposed solutions were computationally intensive and were not designed to solve the problem of the current cryptographic schemes. In this paper, we propose a generic framework of lightweight key updating that can protect the current cryptographic standards and evaluate the minimum requirements for heuristic SCA-security. Then, we propose a complete solution to protect the implementation of any standard mode of Advanced Encryption Standard. Our solution maintains the same level of SCA-security (and sometimes better) as the state of the art, at a negligible area overhead while doubling the throughput of the best previous work.
Keywords: Ciphers; Hardware; Radiation detectors; Random variables; Standards; HWS-SIDE; Hardware Security (Side Channels); Hardware security (side channels) (ID#: 15-4782)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6987331&isnumber=7019030
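The core idea of changing the secret key at every run can be illustrated with a one-way key-update chain: because each session key is derived irreversibly from the previous one, side-channel leakage measured on one key cannot be aggregated across encryptions. This is a generic hash-chain sketch, not the paper's framework, and the domain-separation label is our own choice.

```python
import hashlib

def update_key(key: bytes) -> bytes:
    """Derive the next session key with a one-way function, so leakage on
    one key cannot be combined with leakage on the next."""
    # "key-update" is an arbitrary domain-separation label for this sketch.
    return hashlib.sha256(b"key-update" + key).digest()

# Walking the chain: each encryption run uses a fresh key.
k = b"\x00" * 16
session_keys = []
for _ in range(3):
    k = update_key(k)
    session_keys.append(k)
```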

 

Karachontzitis, S.; Timotheou, S.; Krikidis, I.; Berberidis, K., "Security-Aware Max–Min Resource Allocation in Multiuser OFDMA Downlink," Information Forensics and Security, IEEE Transactions on, vol.10, no.3, pp.529, 542, March 2015. doi: 10.1109/TIFS.2014.2384392  Abstract: In this paper, we study the problem of resource allocation for a multiuser orthogonal frequency-division multiple access (OFDMA) downlink with eavesdropping. The considered setup consists of a base station, several users, and a single eavesdropper that intends to wiretap the transmitted message within each OFDMA subchannel. By taking into consideration the existence of the eavesdropper, the base station aims to assign subchannels and allocate the available power in order to optimize the max–min fairness criterion over the users’ secrecy rate. The considered problem is a mixed integer nonlinear program. For a fixed subchannel assignment, the optimal power allocation is obtained by developing an algorithm of polynomial computational complexity. In the general case, the problem is investigated from two different perspectives due to its combinatorial nature. In the first, the number of users is equal or higher than the number of subchannels, whereas in the second, the number of users is less than the number of subchannels. In the first case, we provide the optimal solution in polynomial time by transforming the original problem into an assignment one for which there are polynomial time algorithms. In the second case, the secrecy rate formula is linearly approximated and the problem is transformed to a mixed integer linear program, which is solved by a branch-and-bound algorithm. Moreover, optimality is discussed for two particular cases where the available power tends to infinity and zero, respectively. Based on the resulting insights, three heuristic schemes of polynomial complexity are proposed, offering a better balance between performance and complexity. 
Simulation results demonstrate that each one of these schemes achieves its highest performance at a different power regime of the system.
Keywords: Downlink; OFDM; Physical layer; Polynomials; Power demand; Resource management; Security; Resource allocation; integer programming; linear approximation; linear sum assignment problem; mixed linear; mixed linear integer programming; physical layer security (ID#: 15-4783)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6991587&isnumber=7019030

 

Lian, B.; Chen, G.; Ma, M.; Li, J., "Periodic K-Times Anonymous Authentication With Efficient Revocation of Violator’s Credential," Information Forensics and Security, IEEE Transactions on, vol.10, no.3, pp.543, 557, March 2015. doi: 10.1109/TIFS.2014.2386658  Abstract: In a periodic K-times anonymous authentication system, a user can anonymously show a credential at most K times in one time period. In the next time period, the user automatically gets another K-times authentication permission. If a user tries to show the credential more than K times in one time period, anyone can identify the dishonest user (the violator). But identifying violators is not enough for some systems, where it is also desirable to revoke violators’ credentials to prevent them from abusing the anonymity property again. However, the problem of revoking a credential without a trusted third party has not been solved efficiently and practically. To solve it, we present a scheme with efficient revocation of a violator’s credential. In fact, our method also solves an interesting problem: leaking information in a statistical zero-knowledge way, so our solution to the revocation problem outperforms all prior solutions. To achieve this, we use a special zero-knowledge proof with a deliberate information leak for revoking the violator’s credential, while the proof can still be proven perfect statistical zero knowledge to guarantee the honest user’s anonymity. Compared with existing schemes, our scheme is efficient, and moreover, our method of revoking a violator’s credential is more practical, with the least additional cost.
Keywords: Authentication; Cloning; Educational institutions; Games; Protocols; Public key; K-times anonymous authentication; K-times; anonymous authentication; provably secure; revocation of credential; truly anonymous; zero-knowledge (ID#: 15-4784)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6999947&isnumber=7019030

 

Li, B.; Ng, T.; Li, X.; Tan, S.; Huang, J., "Revealing the Trace of High-Quality JPEG Compression Through Quantization Noise Analysis," Information Forensics and Security, IEEE Transactions on, vol.10, no.3, pp.558, 573, March 2015. doi: 10.1109/TIFS.2015.2389148  Abstract: To identify whether an image has been JPEG compressed is an important issue in forensic practice. The state-of-the-art methods fail to identify high-quality compressed images, which are common on the Internet. In this paper, we provide a novel quantization noise-based solution to reveal the traces of JPEG compression. Based on the analysis of noises in multiple-cycle JPEG compression, we define a quantity called forward quantization noise. We analytically derive that a decompressed JPEG image has a lower variance of forward quantization noise than its uncompressed counterpart. With the conclusion, we develop a simple yet very effective detection algorithm to identify decompressed JPEG images. We show that our method outperforms the state-of-the-art methods by a large margin especially for high-quality compressed images through extensive experiments on various sources of images. We also demonstrate that the proposed method is robust to small image size and chroma subsampling. The proposed algorithm can be applied in some practical applications, such as Internet image classification and forgery detection.
Keywords: Discrete cosine transforms; Forensics; Image coding; Noise; Quantization (signal); Transform coding; Upper bound; Discrete cosine transform (DCT); compression identification; forgery detection; forward quantization noise (ID#: 15-4785)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7001657&isnumber=7019030
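The detection statistic described, the variance of forward quantization noise, can be reproduced in a few lines: take blockwise 8x8 DCTs and measure how far the coefficients sit from integers. This sketch follows the abstract's definition only; the paper's thresholds, robustness measures, and chroma handling are omitted.

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def forward_quantization_noise_variance(img):
    """Variance of the round-off noise of blockwise 8x8 DCT coefficients.

    img: 2D array with dimensions divisible by 8. A decompressed JPEG
    tends to give a LOWER variance than an uncompressed image, which is
    the detection statistic.
    """
    C = dct_matrix(8)
    h, w = img.shape
    noise = []
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            block = img[i:i+8, j:j+8].astype(float)
            coeffs = C @ block @ C.T                 # 2-D DCT-II
            noise.append(coeffs - np.round(coeffs))  # forward quantization noise
    return float(np.var(np.concatenate([blk.ravel() for blk in noise])))
```

For an uncompressed natural image the round-off noise is roughly uniform on (-0.5, 0.5), giving a variance near 1/12; coefficients of a decompressed JPEG sit near quantized values, driving the variance down.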

 

Chen, G.; Gong, Y.; Xiao, P.; Chambers, J.A., "Physical Layer Network Security in the Full-Duplex Relay System," Information Forensics and Security, IEEE Transactions on, vol.10, no.3, pp.574, 583, March 2015. doi: 10.1109/TIFS.2015.2390136  Abstract: This paper investigates the secrecy performance of full-duplex relay (FDR) networks. The resulting analysis shows that FDR networks have better secrecy performance than half-duplex relay networks, if the self-interference can be well suppressed. We also propose a full-duplex jamming relay network, in which the relay node transmits jamming signals while receiving the data from the source. While the full-duplex jamming scheme has the same data rate as the half-duplex scheme, the secrecy performance can be significantly improved, making it an attractive scheme when network secrecy is a primary concern. A mathematical model is developed to analyze the secrecy outage probabilities for the half-duplex, full-duplex, and full-duplex jamming schemes, and simulation results are presented to verify the analysis.
Keywords: Approximation methods; Data communication; Jamming; Physical layer; Relay networks (telecommunications); Security; Physical layer secrecy; cooperative relay networks; full duplex relay; secrecy outage probability (ID#: 15-4786)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004893&isnumber=7019030
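Secrecy outage probability, the metric analyzed for the three schemes, is easy to estimate by Monte Carlo for a single Rayleigh-faded link. The model below is a textbook wiretap-channel sketch with assumed unit-mean channel gains, not the paper's relay model with self-interference.

```python
import math
import random

def secrecy_outage_prob(snr_d, snr_e, rate_s, trials=20000, seed=0):
    """Monte Carlo secrecy outage probability over Rayleigh fading.

    Outage occurs when the instantaneous secrecy capacity
    log2(1 + g_d*snr_d) - log2(1 + g_e*snr_e) falls below target rate_s.
    """
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        g_d = rng.expovariate(1.0)  # |h|^2 is exponential under Rayleigh fading
        g_e = rng.expovariate(1.0)
        c_s = max(0.0, math.log2(1 + g_d * snr_d) - math.log2(1 + g_e * snr_e))
        outages += c_s < rate_s
    return outages / trials
```

Strengthening the legitimate link (or jamming the eavesdropper, as the full-duplex jamming scheme does) lowers the estimated outage probability.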



Malware Analysis 2014, Part 1 (ACM)

 

 

The ACM published nearly 500 articles about malware analysis in 2014, making the topic one of the most studied. The bibliographical citations presented here, broken into several parts, should be of interest to the Science of Security community.


 

Tamas K. Lengyel, Steve Maresca, Bryan D. Payne, George D. Webster, Sebastian Vogl, Aggelos Kiayias; Scalability, Fidelity and Stealth in the DRAKVUF Dynamic Malware Analysis System; ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 386-395. Doi: 10.1145/2664243.2664252  Abstract: Malware is one of the biggest security threats on the Internet today, and deploying effective defensive solutions requires the rapid analysis of a continuously increasing number of malware samples. With the proliferation of metamorphic malware, the analysis is further complicated as the efficacy of signature-based static analysis systems is greatly reduced. While dynamic malware analysis is an effective alternative, the approach faces significant challenges as the ever increasing number of samples requiring analysis places a burden on hardware resources. At the same time, modern malware can both detect the monitoring environment and hide in unmonitored corners of the system.  In this paper we present DRAKVUF, a novel dynamic malware analysis system designed to address these challenges by building on the latest hardware virtualization extensions and the Xen hypervisor. We present a technique for improving stealth by initiating the execution of malware samples without leaving any trace in the analysis machine. We also present novel techniques to eliminate blind spots created by kernel-mode rootkits by extending the scope of monitoring to include kernel internal functions, and to monitor file-system accesses through the kernel's heap allocations. With extensive tests performed on recent malware samples, we show that DRAKVUF achieves significant improvements in conserving hardware resources while providing a stealthy, in-depth view into the behavior of modern malware.
Keywords: dynamic malware analysis, virtual machine introspection (ID#: 15-4661)
URL: http://doi.acm.org/10.1145/2664243.2664252

 

Shahid Alam, Ibrahim Sogukpinar, Issa Traore, Yvonne Coady; In-Cloud Malware Analysis and Detection: State of the Art; SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 473.  Doi: 10.1145/2659651.2659730  Abstract: With the advent of the Internet of Things, we are facing another wave of malware attacks that encompass intelligent embedded devices. Because of the limited energy resources, running a complete malware detector on these devices is quite challenging. There is a need to devise new techniques to detect malware on these devices. Malware detection is one of the services that can be provided as an in-cloud service. This paper reviews current such systems, discusses their pros and cons, and recommends an improved in-cloud malware analysis and detection system. We introduce a new three-layered hybrid system with a lightweight antimalware engine. These features can provide faster malware detection response time, shield the client from malware, and reduce the bandwidth between the client and the cloud, compared to other such systems. The paper serves as a motivation for improving current techniques and developing new ones for in-cloud malware analysis and detection.
Keywords: Cloud computing, In-cloud services, Malware analysis, Malware detection (ID#: 15-4662)
URL: http://doi.acm.org/10.1145/2659651.2659730

 

Markus Wagner, Wolfgang Aigner, Alexander Rind, Hermann Dornhackl, Konstantin Kadletz, Robert Luh, Paul Tavolato; Problem Characterization and Abstraction for Visual Analytics in Behavior-Based Malware Pattern Analysis;  VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 9-16. Doi: 10.1145/2671491.2671498 Abstract: Behavior-based analysis of emerging malware families involves finding suspicious patterns in large collections of execution traces. This activity cannot be automated for previously unknown malware families and thus malware analysts would benefit greatly from integrating visual analytics methods in their process. However existing approaches are limited to fairly static representations of data and there is no systematic characterization and abstraction of this problem domain. Therefore we performed a systematic literature study, conducted a focus group as well as semi-structured interviews with 10 malware analysts to elicit a problem abstraction along the lines of data, users, and tasks. The requirements emerging from this work can serve as basis for future design proposals to visual analytics-supported malware pattern analysis.
Keywords: evaluation, malicious software, malware analysis, problem characterization and abstraction, visual analytics (ID#: 15-4663)
URL: http://doi.acm.org/10.1145/2671491.2671498

 

Jae-wook Jang, Jiyoung Woo, Jaesung Yun, Huy Kang Kim;  Mal-Netminer: Malware Classification Based on Social Network Analysis of Call Graph; WWW Companion '14 Proceedings of the Companion Publication of The 23rd International Conference on World Wide Web Companion, April 2014, Pages 731-734. Doi: 10.1145/2567948.2579364  Abstract: In this work, we aim to classify malware using automatic classifiers by employing graph metrics commonly used in social network analysis. First, we make a malicious system call dictionary that consists of system calls found in malware. To analyze the general structural information of malware and measure the influence of system calls found in malware, we adopt social network analysis. Thus, we use social network metrics such as the degree distribution, degree centrality, and average distance, which are implicitly equivalent to distinct behavioral characteristics. Our experiments demonstrate that the proposed system performs well in classifying malware families within each malware class with accuracy greater than 98%. By exploiting the social network properties of system calls found in malware, our proposed method not only classifies malware with fewer features than previous methods adopting graph features, but also enables us to build a quick and simple detection system against malware.
Keywords: degree distribution, dynamic analysis, malware, social network analysis (SNA), system call graph (ID#: 15-4664)
URL: http://dl.acm.org/citation.cfm?id=2579364
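The social-network metrics used as features here, such as degree centrality and average distance over the system-call graph, are straightforward to compute. A minimal sketch on an adjacency-set graph, with function names of our own choosing:

```python
from collections import deque

def degree_centrality(graph):
    """Normalized degree centrality for an undirected graph given as
    {node: set(neighbors)}, a common social-network-analysis metric."""
    n = len(graph)
    return {v: len(nbrs) / (n - 1) for v, nbrs in graph.items()}

def average_distance(graph, source):
    """Mean shortest-path length (in hops) from `source` to all reachable
    nodes, via breadth-first search."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    reachable = [d for node, d in dist.items() if node != source]
    return sum(reachable) / len(reachable) if reachable else 0.0
```

In the paper's setting the nodes would be system calls observed in malware and the metrics become per-sample behavioral features for a classifier.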

 

Tobias Wüchner, Martín Ochoa, Alexander Pretschner;   Malware Detection With Quantitative Data Flow Graphs;  ASIA CCS '14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 271-282. Doi: 10.1145/2590296.2590319  Abstract: We propose a novel behavioral malware detection approach based on a generic system-wide quantitative data flow model. We base our data flow analysis on the incremental construction of aggregated quantitative data flow graphs. These graphs represent communication between different system entities such as processes, sockets, files or system registries. We demonstrate the feasibility of our approach through a prototypical instantiation and implementation for the Windows operating system. Our experiments yield encouraging results: in our data set of samples from common malware families and popular non-malicious applications, our approach has a detection rate of 96% and a false positive rate of less than 1.6%. In comparison with closely related data flow based approaches, we achieve similar detection effectiveness with considerably better performance: an average full system analysis takes less than one second.
Keywords: behavioral malware analysis, data flow tracking, intrusion detection, malware detection, quantitative data flows (ID#: 15-4665)
URL: http://doi.acm.org/10.1145/2590296.2590319
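The incremental construction of an aggregated quantitative data flow graph can be sketched as a weighted edge map keyed by (source entity, destination entity); the class and method names here are illustrative, not taken from the paper's prototype.

```python
from collections import defaultdict

class QDFGraph:
    """Aggregated quantitative data flow graph: nodes are system entities
    (processes, files, sockets, registries); edge weights accumulate the
    amount of data transferred between them."""

    def __init__(self):
        self.edges = defaultdict(int)

    def record_flow(self, src, dst, nbytes):
        """Incrementally add an observed data flow, e.g. from a write syscall."""
        self.edges[(src, dst)] += nbytes

    def out_volume(self, node):
        """Total bytes flowing out of a node; unusually high file-to-socket
        volume, for instance, could feed a behavioral detector."""
        return sum(w for (s, _), w in self.edges.items() if s == node)
```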

 

Ashish Saini, Ekta Gandotra, Divya Bansal, Sanjeev Sofat; Classification of PE Files using Static Analysis; SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 429. Doi: 10.1145/2659651.2659679  Abstract: Malware is one of the most terrible and major security threats facing the Internet today. Anti-malware vendors are challenged to identify, classify and counter new malwares due to the obfuscation techniques being used by malware authors. In this paper, we present a simple, fast and scalable method of differentiating malwares from cleanwares on the basis of features extracted from Windows PE files. The features used in this work are Suspicious Section Count and Function Call Frequency. After automatically extracting features of executables, we use machine learning algorithms available in the WEKA library to classify them into malwares and cleanwares. Our experimental results provide an accuracy of over 98% for a data set of 3,087 executable files including 2,460 malwares and 627 cleanwares. Based on the results obtained, we conclude that the Function Call Frequency feature derived from the static analysis method plays a significant role in distinguishing malware files from benign ones.
Keywords: Classification, Machine Learning, Static Malware Analysis (ID#: 15-4666)
URL: http://doi.acm.org/10.1145/2659651.2659679

 

Bernhard Grill, Christian Platzer, Jürgen Eckel; A Practical Approach for Generic Bootkit Detection and Prevention;  EuroSec '14 Proceedings of the Seventh European Workshop on System Security, April 2014, Article No. 4. Doi: 10.1145/2592791.2592795  Abstract: Bootkits are still the most powerful tool for attackers to stealthily infiltrate computer systems. In this paper we present a novel approach to detect and prevent bootkit attacks during the infection phase. Our approach relies on emulation and monitoring of the system's boot process. We present results of a preliminary evaluation on our approach using a Windows system and the leaked Carberp bootkit.
Keywords: bootkit detection and prevention, dynamic malware analysis, x86 emulation (ID#: 15-4667)
URL: http://doi.acm.org/10.1145/2592791.2592795

 

Dennis Gamayunov;  Falsifiability of Network Security Research: The Good, the Bad, and the Ugly; TRUST '14 Proceedings of the 1st ACM SIGPLAN Workshop on Reproducible Research Methodologies and New Publication Models in Computer Engineering, June 2014, Article No. 4. Doi: 10.1145/2618137.2618141  Abstract: A falsifiability criterion helps us to distinguish between scientific and non-scientific theories. One may ask whether this criterion is applicable to information security research, especially to the intrusion detection and malware research fields. In fact, these research fields seem to fail the falsifiability criterion, since they lack the practice of publishing the raw experimental data used to prove their theories. Existing public datasets like the KDD Cup'99 dataset and the VX Heavens virus dataset are outdated. Furthermore, most current scientific research projects tend to keep their datasets private. We suggest that the scientific community should pay more attention to creating and maintaining public open datasets of malware and all kinds of computer attack-related data. But how can we bring this into reality, taking into account legal and privacy concerns?
Keywords: intrusion detection, malware analysis, network security, research methodology (ID#: 15-4668)
URL: http://doi.acm.org/10.1145/2618137.2618141

 

TaeGuen Kim, Jung Bin Park, In Gyeom Cho, Boojoong Kang, Eul Gyu Im, SooYong Kang;  Similarity Calculation Method for User-Define Functions to Detect Malware Variants; RACS '14 Proceedings of the 2014 Conference on Research in Adaptive and Convergent Systems, October 2014, Pages 236-241. Doi: 10.1145/2663761.2664222  Abstract: The number of malware samples has sharply increased over the years, causing various damages to computing systems and data. In this paper, we propose techniques to detect malware variants. Malware authors usually reuse malware modules when they generate new malware or malware variants. Therefore, malware variants have common code for some functions in their binary files. We focused on this common code in this research, and propose techniques to detect malware variants through similarity calculation over user-defined functions. Since many malware variants evade malware detection systems by transforming their static signatures, to cope with this problem, we applied pattern matching algorithms for DNA variations in bioinformatics to the similarity calculation of malware binary files. Since the pattern matching algorithm we used provides a local alignment function, small modifications of functions can be overcome. Experimental results show that our proposed method can detect malware similarity and is more resilient than other methods.
Keywords: malware analysis, smith-waterman algorithm, static analysis (ID#: 15-4669)
URL: http://doi.acm.org/10.1145/2663761.2664222
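The local-alignment idea borrowed from bioinformatics is the classic Smith-Waterman algorithm; a compact scoring-only version over opcode sequences is shown below. The scoring parameters are our own choice, not the paper's.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score between two opcode sequences.

    A high score relative to sequence length suggests shared (reused)
    code even when small modifications or insertions are present.
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,   # match / mismatch
                          H[i - 1][j] + gap,     # gap in b
                          H[i][j - 1] + gap)     # gap in a
            best = max(best, H[i][j])
    return best
```

Because scores never drop below zero, the algorithm finds the best local region of similarity, which is exactly what survives when malware authors lightly modify a reused function.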

 

Timothy Vidas, Jiaqi Tan, Jay Nahata, Chaur Lih Tan, Nicolas Christin, Patrick Tague;  A5: Automated Analysis of Adversarial Android Applications; SPSM '14 Proceedings of the 4th ACM Workshop on Security and Privacy in Smartphones & Mobile Devices, November 2014, Pages 39-50. Doi: 10.1145/2666620.2666630  Abstract: Mobile malware is growing - both in overall volume and in number of existing variants - at a pace rapid enough that systematic manual, human analysis is becoming increasingly difficult. As a result, there is a pressing need for techniques and tools that provide automated analysis of mobile malware samples. We present A5, an open source automated system to process Android malware. A5 is a hybrid system combining static and dynamic malware analysis techniques. Android's architecture permits many different paths for malware to react to system events, any of which may result in malicious behavior. Key innovations in A5 consist of novel methods of interacting with mobile malware to better coerce malicious behavior, and in combining both virtual and physical pools of Android platforms to capture behavior that could otherwise be missed. The primary output of A5 is a set of network threat indicators and intrusion detection system signatures that can be used to detect and prevent malicious network activity. We detail A5's distributed design and demonstrate applicability of our interaction techniques using examples from real malware. Additionally, we compare A5 with other automated systems and provide performance measurements of an implementation, using a published dataset of 1,260 unique malware samples, showing that A5 can quickly process large amounts of malware. We provide a public web interface to our implementation of A5 that allows third parties to use A5 as a web service.
Keywords: dynamic analysis, malicious behavior, mobile malware, sandbox, static analysis, virtualization (ID#: 15-4670)
URL: http://doi.acm.org/10.1145/2666620.2666630 

 

M. Zubair Rafique, Ping Chen, Christophe Huygens, Wouter Joosen;  Evolutionary Algorithms for Classification of Malware Families Through Different Network Behaviors;  GECCO '14 Proceedings of the 2014 Conference on Genetic and Evolutionary Computation, July 2014, Pages 1167-1174.  Doi: 10.1145/2576768.2598238  Abstract: The staggering increase of malware families and their diversity poses a significant threat and creates a compelling need for automatic classification techniques. In this paper, we first analyze the role of network behavior as a powerful technique to automatically classify malware families and their polymorphic variants. Afterwards, we present a framework to efficiently classify malware families by modeling their different network behaviors (such as HTTP, SMTP, UDP, and TCP). We propose protocol-aware and state-space modeling schemes to extract features from malware network behaviors. We analyze the applicability of various evolutionary and non-evolutionary algorithms for our malware family classification framework. To evaluate our framework, we collected a real-world dataset of 6,000 unique and active malware samples belonging to 20 different malware families. We provide a detailed analysis of network behaviors exhibited by these prevalent malware families. The results of our experiments show that evolutionary algorithms, like the sUpervised Classifier System (UCS), can effectively classify malware families through different network behaviors in real-time. To the best of our knowledge, the current work is the first malware classification framework based on an evolutionary classifier that uses different network behaviors.
Keywords: machine learning, malware classification, network behaviors (ID#: 15-4671)
URL: http://doi.acm.org/10.1145/2576768.2598238

 

Mu Zhang, Yue Duan, Heng Yin, Zhiruo Zhao;  Semantics-Aware Android Malware Classification Using Weighted Contextual API Dependency Graphs;  CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1105-1116. Doi: 10.1145/2660267.2660359  Abstract: The drastic increase of Android malware has led to a strong interest in developing methods to automate the malware analysis process. Existing automated Android malware detection and classification methods fall into two general categories: 1) signature-based and 2) machine learning-based. Signature-based approaches can be easily evaded by bytecode-level transformation attacks. Prior learning-based works extract features from application syntax, rather than program semantics, and are also subject to evasion. In this paper, we propose a novel semantic-based approach that classifies Android malware via dependency graphs. To battle transformation attacks, we extract a weighted contextual API dependency graph as program semantics to construct feature sets. To fight against malware variants and zero-day malware, we introduce graph similarity metrics to uncover homogeneous application behaviors while tolerating minor implementation differences. We implement a prototype system, DroidSIFT, in 23 thousand lines of Java code. We evaluate our system using 2200 malware samples and 13500 benign samples. Experiments show that our signature detection can correctly label 93% of malware instances; our anomaly detector is capable of detecting zero-day malware with a low false negative rate (2%) and an acceptable false positive rate (5.15%) for a vetting purpose.
Keywords: android, anomaly detection, graph similarity, malware classification, semantics-aware, signature detection (ID#: 15-4672)
URL:   http://doi.acm.org/10.1145/2660267.2660359

 

Shahid Alam, Issa Traore, Ibrahim Sogukpinar; Current Trends and the Future of Metamorphic Malware Detection; SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 411. Doi: 10.1145/2659651.2659670  Abstract: Dynamic binary obfuscation or metamorphism is a technique where a malware never keeps the same sequence of opcodes in the memory. This stealthy mutation technique helps a malware evade detection by today's signature-based anti-malware programs. This paper analyzes the current trends, provides future directions and reasons about some of the basic characteristics of a system for providing real-time detection of metamorphic malware. Our emphasis is on the most recent advancements and the potentials available in metamorphic malware detection, so we only cover some of the major academic research efforts carried out in and after the year 2006. The paper not only serves as a collection of recent references and information for easy comparison and analysis, but also as a motivation for improving current techniques and developing new ones for metamorphic malware detection.
Keywords: End point security, Malware detection, Metamorphic malware, Obfuscations (ID#: 15-4673)
URL: http://doi.acm.org/10.1145/2659651.2659670

 

Zhaoyan Xu, Antonio Nappa, Robert Baykov, Guangliang Yang, Juan Caballero, Guofei Gu;  AUTOPROBE: Towards Automatic Active Malicious Server Probing Using Dynamic Binary Analysis; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 179-190.  Doi: 10.1145/2660267.2660352  Abstract: Malware continues to be one of the major threats to Internet security. In the battle against cybercriminals, accurately identifying the underlying malicious server infrastructure (e.g., C&C servers for botnet command and control) is of vital importance. Most existing passive monitoring approaches cannot keep up with the highly dynamic, ever-evolving malware server infrastructure. As an effective complementary technique, active probing has recently attracted attention due to its high accuracy, efficiency, and scalability (even to the Internet level). In this paper, we propose Autoprobe, a novel system to automatically generate effective and efficient fingerprints of remote malicious servers. Autoprobe addresses two fundamental limitations of existing active probing approaches: it supports pull-based C&C protocols, used by the majority of malware, and it generates fingerprints even in the common case when C&C servers are not alive during fingerprint generation. Using real-world malware samples we show that Autoprobe can successfully generate accurate C&C server fingerprints through novel applications of dynamic binary analysis techniques. By conducting Internet-scale active probing, we show that Autoprobe can successfully uncover hundreds of malicious servers on the Internet, many of them unknown to existing blacklists. We believe Autoprobe is a great complement to existing defenses, and can play a unique role in the battle against cybercriminals.
Keywords: active probing malware fingerprint generation c&c server (ID#: 15-4674)
URL:   http://doi.acm.org/10.1145/2660267.2660352

 

Ekta Gandotra, Divya Bansal, Sanjeev Sofat; Integrated Framework for Classification of Malwares; SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 417. Doi: 10.1145/2659651.2659738  Abstract: Malware is one of the most terrible and major security threats facing the Internet today. It is evolving, becoming more sophisticated and using new ways to target computers and mobile devices. The traditional defences like antivirus softwares typically rely on signature based methods and are unable to detect previously unseen malwares. Machine learning approaches have been adopted to classify malwares based on the features extracted using static or dynamic analysis. Both types of malware analysis have their pros and cons. In this paper, we propose a classification framework which uses integration of both static and dynamic features for distinguishing malwares from clean files. A real world corpus of recent malwares is used to validate the proposed approach. The experimental results, based on a dataset of 998 malwares and 428 cleanware files provide an accuracy of 99.58% indicating that the hybrid approach enhances the accuracy rate of malware detection and classification over the results obtained when these features are considered separately.
Keywords: Classification, Dynamic Analysis, Machine Learning, Malware, Static Analysis (ID#: 15-4675)
URL:   http://doi.acm.org/10.1145/2659651.2659738

 

Hao Zhang, Danfeng Daphne Yao, Naren Ramakrishnan;   Detection of Stealthy Malware Activities With Traffic Causality and Scalable Triggering Relation Discovery; ASIA CCS '14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 39-50.   Doi: 10.1145/2590296.2590309  Abstract: Studies show that a significant portion of networked computers are infected with stealthy malware. Infection allows remote attackers to control, utilize, or spy on victim machines. Conventional signature-scan or counting-based techniques are limited, as they are unable to stop new zero-day exploits. We describe a traffic analysis method that can effectively detect malware activities on a host. Our new approach efficiently discovers the underlying triggering relations of a massive amount of network events. We use these triggering relations to reason the occurrences of network events and to pinpoint stealthy malware activities. We define a new problem of triggering relation discovery of network events. Our solution is based on domain-knowledge guided advanced learning algorithms. Our extensive experimental evaluation involving 6+ GB traffic of various types shows promising results on the accuracy of our triggering relation discovery.
Keywords: anomaly detection, network security, stealthy malware (ID#: 15-4676)
URL: http://doi.acm.org/10.1145/2590296.2590309

 

Timothy Vidas, Nicolas Christin; Evading Android Runtime Analysis via Sandbox Detection;  ASIA CCS '14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 447-458. Doi: 10.1145/2590296.2590325  Abstract: The large amounts of malware, and its diversity, have made it necessary for the security community to use automated dynamic analysis systems. These systems often rely on virtualization or emulation, and have recently started to be available to process mobile malware. Conversely, malware authors seek to detect such systems and evade analysis. In this paper, we present techniques for detecting Android runtime analysis systems. Our techniques are classified into four broad classes showing the ability to detect systems based on differences in behavior, performance, hardware and software components, and those resulting from analysis system design choices. We also evaluate our techniques against current publicly accessible systems, all of which are easily identified and can therefore be hindered by a motivated adversary. Our results show some fundamental limitations in the viability of dynamic mobile malware analysis platforms purely based on virtualization.
Keywords: android, evasion, malware, sandbox (ID#: 15-4677)
URL: http://doi.acm.org/10.1145/2590296.2590325

 

Yiming Jing, Ziming Zhao, Gail-Joon Ahn, Hongxin Hu; Morpheus: Automatically Generating Heuristics to Detect Android Emulators; ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 216-225. Doi: 10.1145/2664243.2664250  Abstract: Emulator-based dynamic analysis has been widely deployed in Android application stores. While it has been proven effective in vetting applications on a large scale, it can be detected and evaded by recent Android malware strains that carry detection heuristics. Using such heuristics, an application can check the presence or contents of certain artifacts and infer the presence of emulators. However, there exists little work that systematically discovers those heuristics that would be eventually helpful to prevent malicious applications from bypassing emulator-based analysis. To cope with this challenge, we propose a framework called Morpheus that automatically generates such heuristics. Morpheus leverages our insight that an effective detection heuristic must exploit discrepancies observable by an application. To this end, Morpheus analyzes the application sandbox and retrieves observable artifacts from both Android emulators and real devices. Afterwards, Morpheus further analyzes the retrieved artifacts to extract and rank detection heuristics. The evaluation of our proof-of-concept implementation of Morpheus reveals more than 10,000 novel detection heuristics that can be utilized to detect existing emulator-based malware analysis tools. We also discuss the discrepancies in Android emulators and potential countermeasures.
Keywords: Android, emulator, malware (ID#: 15-4678)
URL:   http://doi.acm.org/10.1145/2664243.2664250

 

Hien Thi Thu Truong, Eemil Lagerspetz, Petteri Nurmi, Adam J. Oliner, Sasu Tarkoma, N. Asokan, Sourav Bhattacharya; The Company You Keep: Mobile Malware Infection Rates and Inexpensive Risk Indicators; WWW '14 Proceedings of the 23rd International Conference on World Wide Web, April 2014, Pages 39-50. Doi: 10.1145/2566486.2568046  Abstract: There is little information from independent sources in the public domain about mobile malware infection rates. The only previous independent estimate (0.0009%) [11], was based on indirect measurements obtained from domain-name resolution traces. In this paper, we present the first independent study of malware infection rates and associated risk factors using data collected directly from over 55,000 Android devices. We find that the malware infection rates in Android devices estimated using two malware datasets (0.28% and 0.26%), though small, are significantly higher than the previous independent estimate. Based on the hypothesis that some application stores have a greater density of malicious applications and that advertising within applications and cross-promotional deals may act as infection vectors, we investigate whether the set of applications used on a device can serve as an indicator for infection of that device. Our analysis indicates that, while not an accurate indicator of infection by itself, the application set does serve as an inexpensive method for identifying the pool of devices on which more expensive monitoring and analysis mechanisms should be deployed. Using our two malware datasets we show that this indicator performs up to about five times better at identifying infected devices than the baseline of random checks. Such indicators can be used, for example, in the search for new or previously undetected malware. It is therefore a technique that can complement standard malware scanning. Our analysis also demonstrates a marginally significant difference in battery use between infected and clean devices.
Keywords: android, infection rate, malware detection, mobile malware (ID#: 15-4679)
URL: http://doi.acm.org/10.1145/2566486.2568046

 

Qian Feng, Aravind Prakash, Heng Yin, Zhiqiang Lin; MACE: High-Coverage and Robust Memory Analysis for Commodity Operating Systems; ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 196-205. Doi: 10.1145/2664243.2664248  Abstract: Memory forensic analysis collects evidence for digital crimes and malware attacks from the memory of a live system. It is increasingly valuable, especially in cloud computing. However, memory analysis on commodity operating systems (such as Microsoft Windows) faces the following key challenges: (1) a partial knowledge of kernel data structures; (2) difficulty in handling ambiguous pointers; and (3) lack of robustness by relying on soft constraints that can be easily violated by kernel attacks. To address these challenges, we present MACE, a memory analysis system that can extract a more complete view of the kernel data structures for closed-source operating systems and significantly improve the robustness by only leveraging pointer constraints (which are hard to manipulate) and evaluating these constraints globally (to even tolerate a certain amount of pointer attacks). We have evaluated MACE on 100 memory images for Windows XP SP3 and Windows 7 SP0. Overall, MACE can construct a kernel object graph from a memory image in just a few minutes, and achieves over 95% recall and over 96% precision. Our experiments on real-world rootkit samples and synthetic attacks further demonstrate that MACE outperforms other external memory analysis tools with respect to wider coverage and better robustness.
Keywords: memory analysis, random surfer, rootkit detection (ID#: 15-4680)
URL:   http://doi.acm.org/10.1145/2664243.2664248

 

Ali Zand, Giovanni Vigna, Xifeng Yan, Christopher Kruegel; Extracting Probable Command and Control Signatures for Detecting Botnets; SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 1657-1662.  Doi: 10.1145/2554850.2554896  Abstract: Botnets, which are networks of compromised machines under the control of a single malicious entity, are a serious threat to online security. The fact that botnets, by definition, receive their commands from a single entity can be leveraged to fight them. To this end, one requires techniques that can detect command and control (C&C) traffic, as well as the servers that host C&C services. Given the knowledge of a C&C server's IP address, one can use this information to detect all hosts that attempt to contact such a server, and subsequently disinfect, disable, or block the infected machines. This information can also be used by law enforcement to take down the C&C server. In this paper, we present a new botnet C&C signature extraction approach that can be used to find C&C communication in traffic generated by executing malware samples in a dynamic analysis system. This approach works in two steps. First, we extract all frequent strings seen in the network traffic. Second, we use a function that assigns a score to each string. This score represents the likelihood that the string is indicative of C&C traffic. This function allows us to rank strings and focus our attention on those that likely represent good C&C signatures. We apply our technique to almost 2.6 million network connections produced by running more than 1.4 million malware samples. Using our technique, we were able to automatically extract a set of signatures that are able to identify C&C traffic. Furthermore, we compared our signatures with those used by existing tools, such as Snort and BotHunter.
Keywords:  (not provided) (ID#: 15-4681)
URL: http://doi.acm.org/10.1145/2554850.2554896
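The two-step approach in the abstract above (extract frequent strings from malware traffic, then rank them by a score indicating likely C&C traffic) can be sketched in a few lines. This is an illustrative toy only, assuming whitespace tokenization and a made-up scoring function with an invented benign-traffic penalty; the paper's actual feature extraction and scoring differ.

```python
from collections import Counter

def frequent_strings(connections, min_count=2, min_len=4):
    """Step 1: collect strings (here: whitespace tokens) that recur
    across network connections produced by many malware executions."""
    counts = Counter(tok for payload in connections for tok in set(payload.split()))
    return {tok: n for tok, n in counts.items() if n >= min_count and len(tok) >= min_len}

def cc_score(token, count, total, benign_tokens):
    """Step 2: score a candidate by how widely it recurs in malware
    traffic, heavily penalizing strings also common in benign traffic."""
    prevalence = count / total
    penalty = 0.1 if token in benign_tokens else 1.0
    return prevalence * penalty

# Hypothetical payloads from sandboxed malware runs.
malware_traffic = [
    "GET /gate.php?cmd=ping HTTP/1.1",
    "GET /gate.php?cmd=ping HTTP/1.1",
    "POST /upload HTTP/1.1",
]
benign = {"HTTP/1.1", "GET", "POST"}  # strings seen everywhere, useless as signatures
cands = frequent_strings(malware_traffic)
ranked = sorted(cands, key=lambda t: cc_score(t, cands[t], len(malware_traffic), benign),
                reverse=True)
```

The top-ranked strings (here the recurring `/gate.php?cmd=ping` path, not the ubiquitous `HTTP/1.1`) are the candidates a human analyst or an IDS rule generator would turn into C&C signatures.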

 

Tom Deering, Suresh Kothari, Jeremias Sauceda, Jon Mathews; Atlas: A New Way to Explore Software, Build Analysis Tools;  ICSE Companion 2014 Companion Proceedings of the 36th International Conference on Software Engineering, May 2014, Pages 588-591.  Doi: 10.1145/2591062.2591065  Abstract: Atlas is a new software analysis platform from EnSoft Corp. Atlas decouples the domain-specific analysis goal from its underlying mechanism by splitting analysis into two distinct phases. In the first phase, polynomial-time static analyzers index the software AST, building a rich graph database. In the second phase, users can explore the graph directly or run custom analysis scripts written using a convenient API. These features make Atlas ideal for both interaction and automation. In this paper, we describe the motivation, design, and use of Atlas. We present validation case studies, including the verification of safe synchronization of the Linux kernel, and the detection of malware in Android applications. Our ICSE 2014 demo explores the comprehension and malware detection use cases. Video: http://youtu.be/cZOWlJ-IO0k
Keywords: Analysis platform, Human-in-the-loop, Static analysis (ID#: 15-4682)
URL: http://doi.acm.org/10.1145/2591062.2591065

 

Christopher Kruegel; Fighting Malicious Code: An Eternal Struggle;  ASIA CCS '14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 1-1. Doi: 10.1145/2590296.2590348 Abstract: Despite many years of research and significant commercial investment, the malware problem is far from being solved (or even reasonably well contained). Every week, the mainstream press publishes articles that describe yet another incident where millions of credit cards were leaked, a large company discloses that adversaries had remote access to its corporate secrets for years, and we discover a new botnet with tens of thousands of compromised machines. Clearly, this situation is not acceptable, but why isn't it getting any better?  In this talk, I will discuss some of the reasons why the malware problem is fundamentally hard, and why existing defenses in industry are no longer working. I will then outline progress that researchers and industry have made over the last years, and highlight a few milestones in our struggle to keep malicious code off our computer systems. This part will not focus on advances related to the analysis of malicious code alone, but take a broader perspective. How can we prevent malicious code from getting onto our machines in the first place? How can we detect network communication between malware programs and remote control nodes? And how can we lower the benefits that attackers obtain from their compromised machines? Finally, I will point out a few areas in which I believe that we should make progress to have the most impact in our fight against malicious code.
Keywords: intrusion/anomaly detection and malware mitigation (ID#: 15-4683)
URL: http://doi.acm.org/10.1145/2590296.2590348

 

Sebastián García, Vojtěch Uhlíř, Martin Rehak; Identifying and Modeling Botnet C&C Behaviors;  ACySE '14 Proceedings of the 1st International Workshop on Agents and CyberSecurity, May 2014, Article No. 1. Doi: 10.1145/2602945.2602949  Abstract: Through the analysis of a long-term botnet capture, we identified and modeled the behaviors of its C&C channels. They were found and characterized by periodicity analyses and statistical representations. The relationships found between the behaviors of the UDP, TCP and HTTP C&C channels allowed us to unify them in a general model of the botnet behavior. Our behavioral analysis of the C&C channels gives a new perspective on the modeling of malware behavior, helping to better understand botnets.
Keywords: botnet, malware, network behavior, network security (ID#: 15-4684)
URL: http://doi.acm.org/10.1145/2602945.2602949

 

Youn-sik Jeong, Hwan-taek Lee, Seong-je Cho, Sangchul Han, Minkyu Park; A Kernel-Based Monitoring Approach for Analyzing Malicious Behavior on Android;  SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 1737-1738. Doi: 10.1145/2554850.2559915  Abstract: This paper proposes a new technique that monitors important events at the kernel level of Android and analyzes malicious behavior systematically. The proposed technique is designed in two ways. First, in order to analyze malicious behavior that might happen inside one application, it monitors file operations by hooking the system calls to create, read from, and write to a file. Secondly, in order to analyze malicious behavior that might happen in the communication between colluding applications, it monitors IPC messages (Intents) by hooking the binder driver. Our technique can detect even the behavior of obfuscated malware using a run-time monitoring method. In addition, it can reduce the possibility of false detection by providing more specific analysis results compared to the existing methods on Android. Experimental results show that our technique is effective to analyze malicious behavior on Android and helpful to detect malware.
Keywords: Android malware, kernel-based monitoring, malware detection, monitoring, signature based detection (ID#: 15-4685)
URL: http://doi.acm.org/10.1145/2554850.2559915


 


Malware Analysis, 2014, Part 2 (ACM)

 
SoS Logo

Malware Analysis, 2014 (ACM)

Part 2

 

The ACM published nearly 500 articles about malware analysis in 2014, making the topic one of the most studied. The bibliographical citations presented here, broken into several parts, should be of interest to the Science of Security community.


Ting-Fang Yen, Victor Heorhiadi, Alina Oprea, Michael K. Reiter, Ari Juels;  An Epidemiological Study of Malware Encounters in a Large Enterprise; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1117-1130. Doi: 10.1145/2660267.2660330  Abstract: We present an epidemiological study of malware encounters in a large, multi-national enterprise. Our data sets allow us to observe or infer not only malware presence on enterprise computers, but also malware entry points, network locations of the computers (i.e., inside the enterprise network or outside) when the malware were encountered, and for some web-based malware encounters, web activities that gave rise to them. By coupling this data with demographic information for each host's primary user, such as his or her job title and level in the management hierarchy, we are able to paint a reasonably comprehensive picture of malware encounters for this enterprise. We use this analysis to build a logistic regression model for inferring the risk of hosts encountering malware; those ranked highly by our model have a >3x higher rate of encountering malware than the base rate. We also discuss where our study confirms or refutes other studies and guidance that our results suggest.
Keywords: enterprise security, logistic regression, malware encounters, measurement (ID#: 15-4686)
URL: http://doi.acm.org/10.1145/2660267.2660330
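The study above trains a logistic regression model on host features to rank machines by their risk of encountering malware. As a minimal sketch of that modeling step, here is a plain gradient-descent logistic regression on two invented per-host features; the feature names, data, and hyperparameters are illustrative assumptions, not the authors' model or dataset.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Fit logistic regression (weights + bias) by per-example gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the linear output
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Hypothetical per-host features: [fraction of time outside the enterprise
# network, rate of visits to risky site categories]; label 1 = encountered malware.
X = [[0.9, 0.8], [0.8, 0.7], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w, b = train_logreg(X, y)
risk = lambda x: sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)
```

Hosts are then sorted by `risk(x)`; as in the study, the value of such a model lies in the top-ranked tier encountering malware at a multiple of the base rate, which tells defenders where to concentrate monitoring.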

 

Patrick Cousot, Radhia Cousot; Abstract Interpretation: Past, Present and Future; CSL-LICS '14 Proceedings of the Joint Meeting of the Twenty-Third EACSL Annual Conference on Computer Science Logic (CSL) and the Twenty-Ninth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), July 2014, Article No. 2. Doi: 10.1145/2603088.2603165  Abstract:  Abstract Interpretation is a theory of abstraction and constructive approximation of the mathematical structures used in the formal description of complex or infinite systems and the inference or verification of their combinatorial or undecidable properties. Developed in the late seventies, it has been since then used, implicitly or explicitly, to many aspects of computer science (such as static analysis and verification, contract inference, type inference, termination inference, model-checking, abstraction/refinement, program transformation (including watermarking, obfuscation, etc), combination of decision procedures, security, malware detection, database queries, etc) and more recently, to system biology and SAT/SMT solvers. Production-quality verification tools based on abstract interpretation are available and used in the advanced software, hardware, transportation, communication, and medical industries.  The talk will consist in an introduction to the basic notions of abstract interpretation and the induced methodology for the systematic development of sound abstract interpretation-based tools. Examples of abstractions will be provided, from semantics to typing, grammars to safety, reachability to potential/definite termination, numerical to protein-protein abstractions, as well as applications (including those in industrial use) to software, hardware and system biology.  This paper is a general discussion of abstract interpretation, with selected publications, which unfortunately are far from exhaustive both in the considered themes and the corresponding references.
Keywords: abstract interpretation, proof, semantics, static analysis, verification (ID#: 15-4687)
URL:   http://doi.acm.org/10.1145/2603088.2603165

 

Christopher Neasbitt, Roberto Perdisci, Kang Li, Terry Nelms; ClickMiner: Towards Forensic Reconstruction of User-Browser Interactions from Network Traces; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1244-1255. Doi: 10.1145/2660267.2660268  Abstract: Recent advances in network traffic capturing techniques have made it feasible to record full traffic traces, often for extended periods of time. Among the applications enabled by full traffic captures, being able to automatically reconstruct user-browser interactions from archived web traffic traces would be helpful in a number of scenarios, such as aiding the forensic analysis of network security incidents. Unfortunately, the modern web is becoming increasingly complex, serving highly dynamic pages that make heavy use of scripting languages, a variety of browser plugins, and asynchronous content requests. Consequently, the semantic gap between user-browser interactions and the network traces has grown significantly, making it challenging to analyze the web traffic produced by even a single user.  In this paper, we propose ClickMiner, a novel system that aims to automatically reconstruct user-browser interactions from network traces. Through a user study involving 21 participants, we collected real user browsing traces to evaluate our approach. We show that, on average, ClickMiner can correctly reconstruct between 82% and 90% of user-browser interactions with false positives between 0.74% and 1.16%, and that it outperforms reconstruction algorithms based solely on referrer-based approaches. We also present a number of case studies that aim to demonstrate how ClickMiner can aid the forensic analysis of malware downloads triggered by social engineering attacks.
Keywords: forensics, network traffic replay (ID#: 15-4688)
URL:   http://doi.acm.org/10.1145/2660267.2660268

 

Abdullah J. Alzahrani, Ali A. Ghorbani; SMS Mobile Botnet Detection Using a Multi-Agent System: Research in Progress;  ACySE '14 Proceedings of the 1st International Workshop on Agents and CyberSecurity, May 2014, Article No. 2. Doi: 10.1145/2602945.2602950  Abstract: With the enormous growth of Android mobile devices and the huge increase in the number of published applications (apps), Short Message Service (SMS) is becoming an important issue. SMS can be abused by attackers when they send SMS spam, transfer all command and control (C&C) instructions, launch denial-of-service (DoS) attacks to send premium-rate SMS messages without user permission, and propagate malware via URLs sent within SMS messages. Thus, SMS has to be reliable as well as secure. In this paper, we propose a SMS botnet detection framework that uses multi-agent technology based on observations of SMS and Android smartphone features. This system detects SMS botnets and identifies ways to block the attacks in order to prevent damage caused by these attacks. An adaptive hybrid model of SMS botnet detectors is being developed by using a combination of signature-based and anomaly-based methods. The model is designed to recognize malicious SMS messages by applying behavioural analysis to find the correlation between suspicious SMS messages and reported profiling. Behaviour profiles of Android smartphones are being created to carry out robust and efficient anomaly detection. A multi-agent system technology was selected to perform light-weight detection without exhausting smartphone resources such as battery and memory.
Keywords: SMS, botnet detection, multi-agent system, smartphone (ID#: 15-4689)
URL: http://doi.acm.org/10.1145/2602945.2602950

 

Zhenlong Yuan, Yongqiang Lu, Zhaoguo Wang, Yibo Xue;  Droid-Sec: Deep Learning in Android Malware Detection; SIGCOMM '14 Proceedings of the 2014 ACM Conference on SIGCOMM, August 2014, Pages 371-372. Doi: 10.1145/2619239.2631434  Abstract: As smartphones and mobile devices are rapidly becoming indispensable for many network users, mobile malware has become a serious threat in the network security and privacy. Especially on the popular Android platform, many malicious apps are hiding in a large number of normal apps, which makes the malware detection more challenging. In this paper, we propose a ML-based method that utilizes more than 200 features extracted from both static analysis and dynamic analysis of Android app for malware detection. The comparison of modeling results demonstrates that the deep learning technique is especially suitable for Android malware detection and can achieve a high level of 96% accuracy with real-world Android application sets.
Keywords: android malware, deep learning, detection (ID#: 15-4690)
URL: http://doi.acm.org/10.1145/2619239.2631434

 

 Yuru Shao, Xiapu Luo, Chenxiong Qian, Pengfei Zhu, Lei Zhang; Towards a Scalable Resource-Driven Approach for Detecting Repackaged Android Applications;  ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 56-65. Doi: 10.1145/2664243.2664275  Abstract: Repackaged Android applications (or simply apps) are one of the major sources of mobile malware and also an important cause of severe revenue loss to app developers. Although a number of solutions have been proposed to detect repackaged apps, the majority of them heavily rely on code analysis, thus suffering from two limitations: (1) poor scalability due to the billion opcode problem; (2) unreliability to code obfuscation/app hardening techniques. In this paper, we explore an alternative approach that exploits core resources, which have close relationships with codes, to detect repackaged apps. More precisely, we define new features for characterizing apps, investigate two kinds of algorithms for searching similar apps, and propose a two-stage methodology to speed up the detection. We realize our approach in a system named ResDroid and conduct large scale evaluation on it. The results show that ResDroid can identify repackaged apps efficiently and effectively even if they are protected by obfuscation or hardening systems.
Keywords:  (not provided) (ID#: 15-4691)
URL:   http://doi.acm.org/10.1145/2664243.2664275

 

Justin Hummel, Andrew McDonald, Vatsal Shah, Riju Singh, Bradford D. Boyle, Tingshan Huang, Nagarajan Kandasamy, Harish Sethu, Steven Weber;  A Modular Multi-Location Anonymized Traffic Monitoring Tool for a Wifi Network ; CODASPY '14 Proceedings of the 4th ACM Conference on Data and Application Security and Privacy, March 2014, Pages 135-138. Doi: 10.1145/2557547.2557580  Abstract: Network traffic anomaly detection is now considered a surer approach to early detection of malware than signature-based approaches and is best accomplished with traffic data collected from multiple locations. Existing open-source tools are primarily signature-based, or do not facilitate integration of traffic data from multiple locations for real-time analysis, or are insufficiently modular for incorporation of newly proposed approaches to anomaly detection. In this paper, we describe DataMap, a new modular open-source tool for the collection and real-time analysis of sampled, anonymized, and filtered traffic data from multiple WiFi locations in a network and an example of its use in anomaly detection.
Keywords: open source tool, real time analysis, traffic anomaly detection (ID#: 15-4692)
URL: http://doi.acm.org/10.1145/2557547.2557580

 

Battista Biggio; On Learning and Recognition of Secure Patterns;  AISec '14 Proceedings of the 2014 Workshop on Artificial Intelligence and Security Workshop, November 2014, Pages 1-2. Doi: 10.1145/2666652.2666653 Abstract: Learning and recognition of secure patterns is a well-known problem in nature. Mimicry and camouflage are widely-spread techniques in the arms race between predators and preys. All of the information acquired by our senses is therefore not necessarily secure or reliable. In machine learning and pattern recognition systems, we have started investigating these issues only recently, with the goal of learning to discriminate between secure and hostile patterns. This phenomenon has been especially observed in the context of adversarial settings like biometric recognition, malware detection and spam filtering, in which data can be adversely manipulated by humans to undermine the outcomes of an automatic analysis. As current pattern recognition methods are not natively designed to deal with the intrinsic, adversarial nature of these problems, they exhibit specific vulnerabilities that an adversary may exploit either to mislead learning or to avoid detection. Identifying these vulnerabilities and analyzing the impact of the corresponding attacks on pattern classifiers is one of the main open issues in the novel research field of adversarial machine learning.  In the first part of this talk, I introduce a general framework that encompasses and unifies previous work in the field, allowing one to systematically evaluate classifier security against different, potential attacks. As an example of application of this framework, in the second part of the talk, I discuss evasion attacks, where malicious samples are manipulated at test time to avoid detection. 
I then show how carefully-designed poisoning attacks can mislead learning of support vector machines by manipulating a small fraction of their training data, and how to poison adaptive biometric verification systems to compromise the biometric templates (face images) of the enrolled clients. Finally, I briefly discuss our ongoing work on attacks against clustering algorithms, and sketch some possible future research directions.
Keywords: adversarial machine learning, evasion attacks, poisoning attacks, secure pattern recognition (ID#: 15-4693)
URL: http://doi.acm.org/10.1145/2666652.2666653

 

Alexander Long, Joshua Saxe, Robert Gove; Detecting Malware Samples with Similar Image Sets;  VizSec '14 Proceedings of the Eleventh Workshop on Visualization for Cyber Security, November 2014, Pages 88-95. Doi: 10.1145/2671491.2671500 Abstract: This paper proposes a method for identifying and visualizing similarity relationships between malware samples based on their embedded graphical assets (such as desktop icons and button skins). We argue that analyzing such relationships has practical merit for a number of reasons. For example, we find that malware desktop icons are often used to trick users into running malware programs, so identifying groups of related malware samples based on these visual features can highlight themes in the social engineering tactics of today's malware authors. Also, when malware samples share rare images, these image sharing relationships may indicate that the samples were generated or deployed by the same adversaries.  To explore and evaluate this malware comparison method, the paper makes two contributions. First, we provide a scalable and intuitive method for computing similarity measurements between malware based on the visual similarity of their sets of images. Second, we give a visualization method that combines a force-directed graph layout with a set visualization technique so as to highlight visual similarity relationships in malware corpora. We evaluate the accuracy of our image set similarity comparison method against a hand curated malware relationship ground truth dataset, finding that our method performs well. We also evaluate our overall concept through a small qualitative study we conducted with three cyber security researchers. Feedback from the researchers confirmed our use cases and suggests that computer network defenders are interested in this capability.
Keywords: human computer interaction, malware, security, visualization (ID#: 15-4694)
URL: http://doi.acm.org/10.1145/2671491.2671500
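The paper's core comparison, similarity between malware samples' sets of embedded images, can be sketched as a toy Jaccard measure over hashed image assets. This is a minimal illustration under assumed names (`image_fingerprints`, `jaccard`) using exact SHA-256 digests, not the authors' scalable approximate comparison method.

```python
import hashlib

def image_fingerprints(images):
    """Digest each embedded image asset (raw bytes) so identical icons compare equal."""
    return {hashlib.sha256(img).hexdigest() for img in images}

def jaccard(a, b):
    """Jaccard similarity between two samples' image sets: |A and B| / |A or B|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0
```

Samples sharing rare icons would score high under this measure; a production system would use perceptual rather than exact hashing so that near-identical icons also match.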

 

Luke Deshotels, Vivek Notani, Arun Lakhotia; DroidLegacy: Automated Familial Classification of Android Malware; PPREW'14 Proceedings of ACM SIGPLAN on Program Protection and Reverse Engineering Workshop 2014, January 2014, Article No. 3. Doi: 10.1145/2556464.2556467 Abstract: We present an automated method for extracting familial signatures for Android malware, i.e., signatures that identify malware produced by piggybacking potentially different benign applications with the same (or similar) malicious code. The APK classes that constitute malware code in a repackaged application are separated from the benign code, and the Android API calls used by the malicious modules are extracted to create a signature. A piggybacked malicious app can be detected by first decomposing it into loosely coupled modules and then matching the Android API calls made by each of the modules against the signatures of the known malware families. Since the signatures are based on Android API calls, they are related to the core malware behavior, and thus are more resilient to obfuscations. In triage, AV companies need to automatically classify large numbers of samples so as to optimize assignment of human analysts. They need a system that gives low false negatives even if it is at the cost of higher false positives. Keeping this goal in mind, we fine-tuned our system and used standard 10-fold cross-validation over a dataset of 1,052 malicious APKs and 48 benign APKs to verify our algorithm. Results show that we have 94% accuracy, 97% precision, and 93% recall when separating benign from malware. We successfully classified our entire malware dataset into 11 families with 98% accuracy, 87% precision, and 94% recall.
Keywords: Android malware, class dependence graphs, familial classification, malware detection, module generation, piggybacked malware, signature generation, static analysis (ID#: 15-4695)
URL: http://doi.acm.org/10.1145/2556464.2556467
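The matching step DroidLegacy describes, comparing the Android API calls of each decomposed module against per-family signatures, can be sketched as a set-overlap scorer. The scoring rule and threshold below are illustrative assumptions, not the paper's actual algorithm.

```python
def classify_module(module_apis, family_signatures, threshold=0.6):
    """Return the malware family whose API-call signature best covers this
    module's calls, or None if no family clears the threshold."""
    best_family, best_score = None, 0.0
    for family, signature in family_signatures.items():
        if not signature:
            continue
        # Fraction of the family signature present in this module's API calls.
        score = len(module_apis & signature) / len(signature)
        if score > best_score:
            best_family, best_score = family, score
    return best_family if best_score >= threshold else None
```

Because the features are API calls rather than byte patterns, simple renaming or repacking of the benign carrier app leaves the score unchanged, which is the resilience property the abstract points to.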

 

Ting Wang, Shicong Meng, Wei Gao, Xin Hu; Rebuilding the Tower of Babel: Towards Cross-System Malware Information Sharing; CIKM '14 Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management, November 2014, Pages 1239-1248. Doi: 10.1145/2661829.2662086 Abstract: Anti-virus systems developed by different vendors often demonstrate strong discrepancies in how they name malware, which significantly hinders malware information sharing. While existing work has proposed a plethora of malware naming standards, most anti-virus vendors were reluctant to change their own naming conventions. In this paper we explore a new, more pragmatic alternative. We propose to exploit the correlation between malware naming of different anti-virus systems to create their consensus classification, through which these systems can share malware information without modifying their naming conventions. Specifically, we present Latin, a novel classification integration framework leveraging the correspondence between participating anti-virus systems as reflected in heterogeneous information sources at instance-instance, instance-name, and name-name levels. We provide results from extensive experimental studies using real malware datasets and concrete use cases to verify the efficacy of Latin in supporting cross-system malware information sharing.
Keywords: classification integration, consensus learning, malware naming (ID#: 15-4696)
URL: http://doi.acm.org/10.1145/2661829.2662086
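At the instance level, the naming correlation Latin exploits can be illustrated with a toy co-occurrence count: two vendors' names correspond when they repeatedly label the same samples. The function below is a hypothetical sketch, far simpler than the paper's consensus-learning framework.

```python
from collections import Counter, defaultdict

def name_correspondence(vendor_a_labels, vendor_b_labels):
    """For samples labeled by both vendors, map each vendor-A malware name to
    the vendor-B name it most often co-occurs with."""
    co_occurrence = defaultdict(Counter)
    for name_a, name_b in zip(vendor_a_labels, vendor_b_labels):
        co_occurrence[name_a][name_b] += 1
    # Pick the majority counterpart for each vendor-A name.
    return {a: counts.most_common(1)[0][0] for a, counts in co_occurrence.items()}
```

A real consensus classification would also weigh name-name lexical similarity and handle samples seen by only one vendor, which is where the paper's instance-name and name-name levels come in.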

 

Andrew Henderson, Aravind Prakash, Lok Kwong Yan, Xunchao Hu, Xujiewen Wang, Rundong Zhou, Heng Yin; Make It Work, Make It Right, Make It Fast: Building a Platform-Neutral Whole-System Dynamic Binary Analysis Platform; ISSTA 2014 Proceedings of the 2014 International Symposium on Software Testing and Analysis, July 2014, Pages 248-258. Doi: 10.1145/2610384.2610407  Abstract: Dynamic binary analysis is a prevalent and indispensable technique in program analysis. While several dynamic binary analysis tools and frameworks have been proposed, all suffer from one or more of: prohibitive performance degradation, semantic gap between the analysis code and the program being analyzed, architecture/OS specificity, being user-mode only, lacking APIs, etc. We present DECAF, a virtual machine based, multi-target, whole-system dynamic binary analysis framework built on top of QEMU. DECAF provides Just-In-Time Virtual Machine Introspection combined with a novel TCG instruction-level tainting at bit granularity, backed by a plugin based, simple-to-use event driven programming interface. DECAF exercises fine control over the TCG instructions to accomplish on-the-fly optimizations. We present 3 platform-neutral plugins - Instruction Tracer, Keylogger Detector, and API Tracer, to demonstrate the ease of use and effectiveness of DECAF in writing cross-platform and system-wide analysis tools. Implementation of DECAF consists of 9550 lines of C++ code and 10270 lines of C code and we evaluate DECAF using CPU2006 SPEC benchmarks and show average overhead of 605% for system wide tainting and 12% for VMI.
Keywords: Dynamic binary analysis, dynamic taint analysis, virtual machine introspection (ID#: 15-4697)
URL: http://doi.acm.org/10.1145/2610384.2610407

 

Battista Biggio, Konrad Rieck, Davide Ariu, Christian Wressnegger, Igino Corona, Giorgio Giacinto, Fabio Roli; Poisoning Behavioral Malware Clustering;  AISec '14 Proceedings of the 2014 Workshop on Artificial Intelligence and Security Workshop, November 2014, Pages 27-36. Doi: 10.1145/2666652.2666666 Abstract: Clustering algorithms have become a popular tool in computer security to analyze the behavior of malware variants, identify novel malware families, and generate signatures for antivirus systems. However, the suitability of clustering algorithms for security-sensitive settings has been recently questioned by showing that they can be significantly compromised if an attacker can exercise some control over the input data. In this paper, we revisit this problem by focusing on behavioral malware clustering approaches, and investigate whether and to what extent an attacker may be able to subvert these approaches through a careful injection of samples with poisoning behavior. To this end, we present a case study on Malheur, an open-source tool for behavioral malware clustering. Our experiments not only demonstrate that this tool is vulnerable to poisoning attacks, but also that it can be significantly compromised even if the attacker can only inject a very small percentage of attacks into the input data. As a remedy, we discuss possible countermeasures and highlight the need for more secure clustering algorithms.
Keywords: adversarial machine learning, clustering, computer security, malware detection, security evaluation, unsupervised learning (ID#: 15-4698)
URL: http://doi.acm.org/10.1145/2666652.2666666

 

Igino Corona, Davide Maiorca, Davide Ariu, Giorgio Giacinto; Lux0R: Detection of Malicious PDF-embedded JavaScript code through Discriminant Analysis of API References; AISec '14 Proceedings of the 2014 Workshop on Artificial Intelligence and Security Workshop, November 2014, Pages 47-57. Doi: 10.1145/2666652.2666657 Abstract: JavaScript is a dynamic programming language adopted in a variety of applications, including web pages, PDF readers, widget engines, network platforms, and office suites. Given its widespread presence throughout different software platforms, JavaScript is a primary tool for the development of novel, rapidly evolving malicious exploits. While the classical signature- and heuristic-based detection approaches are clearly inadequate to cope with this kind of threat, machine learning solutions proposed so far suffer from high false-alarm rates or require special instrumentation that makes them unsuitable for protecting end-user systems. In this paper we present Lux0R "Lux 0n discriminant References", a novel, lightweight approach to the detection of malicious JavaScript code. Our method is based on the characterization of JavaScript code through its API references, i.e., functions, constants, objects, methods, and keywords, as well as attributes natively recognized by a JavaScript Application Programming Interface (API). We exploit machine learning techniques to select a subset of API references that characterize malicious code, and then use them to detect JavaScript malware. The selection algorithm is designed to be "secure by design" against evasion by mimicry attacks. In this investigation, we focus on a relevant application domain, i.e., the detection of malicious JavaScript code within PDF documents. We show that our technique is able to achieve excellent malware detection accuracy, even on samples exploiting never-before-seen vulnerabilities, i.e., for which there are no examples in training data.
Finally, we experimentally assess the robustness of Lux0R against mimicry attacks based on feature addition.
Keywords: adversarial machine learning, javascript code, malware detection, mimicry attacks, pdf documents (ID#: 15-4699)
URL: http://doi.acm.org/10.1145/2666652.2666657

 

Markus Kammerstetter, Christian Platzer, Wolfgang Kastner; Prospect: Peripheral Proxying Supported Embedded Code Testing; ASIA CCS '14 Proceedings of the 9th ACM Symposium On Information, Computer And Communications Security; June 2014, pages 329-340. Doi: 10.1145/2590296.2590301  Abstract: Embedded systems are an integral part of almost every electronic product today. From consumer electronics to industrial components in SCADA systems, their possible fields of application are manifold. While especially in industrial and critical infrastructures the security requirements are high, recent publications have shown that embedded systems do not cope well with this demand. One of the reasons is that embedded systems are being less scrutinized as embedded security analysis is considered to be more time consuming and challenging in comparison to PC systems. One of the key challenges on proprietary, resource constrained embedded devices is dynamic code analysis. The devices typically do not have the capabilities for a full-scale dynamic security evaluation. Likewise, the analyst cannot execute the software implementation inside a virtual machine due to the missing peripheral hardware that is required by the software to run. In this paper, we present PROSPECT, a system that can overcome these shortcomings and enables dynamic code analysis of embedded binary code inside arbitrary analysis environments. By transparently forwarding peripheral hardware accesses from the original host system into a virtual machine, PROSPECT allows security analysts to run the embedded software implementation without the need to know which and how embedded peripheral hardware components are accessed. We evaluated PROSPECT with respect to the performance impact and conducted a case study by doing a full-scale security audit of a widely used commercial fire alarm system in the building automation domain. Our results show that PROSPECT is both practical and usable for real-world application.
Keywords: device tunneling, dynamic analysis, embedded system, fuzz testing, security (ID#: 15-4700)
URL: http://doi.acm.org/10.1145/2590296.2590301

 

Jyun-Yu Jiang, Chun-Liang Li, Chun-Pai Yang, Chung-Tsai Su; POSTER: Scanning-free Personalized Malware Warning System by Learning Implicit Feedback from Detection Logs; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1436-1438. Doi: 10.1145/2660267.2662359 Abstract: Nowadays, the World Wide Web connects people to each other ubiquitously in many ways. Along with this convenience and usability, millions of malware instances infect various devices of numerous users through the web every day. Traditional anti-malware systems detect such malware by scanning file systems and provide secure environments for users. However, some malware might not be detected by traditional scanning-based detection systems due to hackers' obfuscation techniques. Also, scanning-based approaches cannot warn users about high-risk malware that has not yet infected them. In this paper, we aim to build a personalized malware warning system. Different from traditional scanning-based approaches, we focus on discovering, for each user, the potential malware which has not yet been detected. If users and the system know about potential malware infections in advance, they can be alert to the corresponding risks. We propose a novel approach to learn implicit feedback from detection logs and give a personalized risk ranking of malware for each user. Finally, experiments on real-world detection datasets demonstrate that the proposed algorithm outperforms traditional popularity-based algorithms.
Keywords: computer security, malware detection, malware warning system, personalized collaborative filtering (ID#: 15-4701)
URL: http://doi.acm.org/10.1145/2660267.2662359

 

Olatunji Ruwase, Michael A. Kozuch, Phillip B. Gibbons, Todd C. Mowry;  Guardrail: A High Fidelity Approach to Protecting Hardware Devices from Buggy Drivers; ASPLOS '14 Proceedings of the 19th International Conference On Architectural Support For Programming Languages And Operating Systems, February 2014, Pages 655-670. Doi: 10.1145/2654822.2541970  Abstract: Device drivers are an Achilles' heel of modern commodity operating systems, accounting for far too many system failures. Previous work on driver reliability has focused on protecting the kernel from unsafe driver side-effects by interposing an invariant-checking layer at the driver interface, but otherwise treating the driver as a black box. In this paper, we propose and evaluate Guardrail, which is a more powerful framework for run-time driver analysis that performs decoupled instruction-grain dynamic correctness checking on arbitrary kernel-mode drivers as they execute, thereby enabling the system to detect and mitigate more challenging correctness bugs (e.g., data races, uninitialized memory accesses) that cannot be detected by today's fault isolation techniques. Our evaluation of Guardrail shows that it can find serious data races, memory faults, and DMA faults in native Linux drivers that required fixes, including previously unknown bugs. Also, with hardware logging support, Guardrail can be used for online protection of persistent device state from driver bugs with at most 10% overhead on the end-to-end performance of most standard I/O workloads.
Keywords: device drivers, dynamic analysis (ID#: 15-4702)
URL: http://doi.acm.org/10.1145/2654822.2541970

 

Mordechai Guri, Gabi Kedma, Buky Carmeli, Yuval Elovici; Limiting Access to Unintentionally Leaked Sensitive Documents Using Malware Signatures;  SACMAT '14 Proceedings of the 19th ACM Symposium On Access Control Models And Technologies, June 2014, Pages 129-140. Doi: 10.1145/2613087.2613103 Abstract: Organizations are repeatedly embarrassed when their sensitive digital documents go public or fall into the hands of adversaries, often as a result of unintentional or inadvertent leakage. Such leakage has been traditionally handled either by preventive means, which are evidently not hermetic, or by punitive measures taken after the main damage has already been done. Yet, the challenge of preventing a leaked file from spreading further among computers and over the Internet is not resolved by existing approaches. This paper presents a novel method, which aims at reducing and limiting the potential damage of a leakage that has already occurred. The main idea is to tag sensitive documents within the organization's boundaries by attaching a benign detectable malware signature (DMS). While the DMS is masked inside the organization, if a tagged document is somehow leaked out of the organization's boundaries, common security services such as Anti-Virus (AV) programs, firewalls or email gateways will detect the file as a real threat and will consequently delete or quarantine it, preventing it from spreading further. This paper discusses various aspects of the DMS, such as signature type and attachment techniques, along with proper design considerations and implementation issues. The proposed method was implemented and successfully tested on various file types including documents, spreadsheets, presentations, images, executable binaries and textual source code. The evaluation results have demonstrated its effectiveness in limiting the spread of leaked documents.
Keywords: anti-virus program, data leakage, detectable malware signature, sensitive document (ID#: 15-4703)
URL: http://doi.acm.org/10.1145/2613087.2613103
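As a concrete analogue of the paper's detectable malware signature (DMS), the standard EICAR test string, which most anti-virus engines deliberately flag, can be appended to a document's bytes. This sketch is illustrative only: the paper's own DMS design, its masking inside the organization, and its format-aware attachment techniques are not reproduced here, and the function names are assumptions.

```python
# The EICAR string is the industry-standard benign anti-virus test signature.
EICAR = (r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
         "EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*").encode("ascii")

def tag_document(data: bytes) -> bytes:
    """Append the benign detectable signature to a sensitive document's bytes."""
    return data + EICAR

def is_tagged(data: bytes) -> bool:
    """True if the document carries the detectable signature."""
    return EICAR in data
```

Naively appending bytes can break strict file formats; the paper evaluates per-format attachment techniques (documents, spreadsheets, images, binaries) precisely because the signature must survive inside valid files.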

 

Paul Pearce, Vacha Dave, Chris Grier, Kirill Levchenko, Saikat Guha, Damon McCoy, Vern Paxson, Stefan Savage, Geoffrey M. Voelker; Characterizing Large-Scale Click Fraud in ZeroAccess;  CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 141-152. Doi: 10.1145/2660267.2660369  Abstract: Click fraud is a scam that hits a criminal sweet spot by both tapping into the vast wealth of online advertising and exploiting that ecosystem's complex structure to obfuscate the flow of money to its perpetrators. In this work, we illuminate the intricate nature of this activity through the lens of ZeroAccess--one of the largest click fraud botnets in operation. Using a broad range of data sources, including peer-to-peer measurements, command-and-control telemetry, and contemporaneous click data from one of the top ad networks, we construct a view into the scale and complexity of modern click fraud operations. By leveraging the dynamics associated with Microsoft's attempted takedown of ZeroAccess in December 2013, we employ this coordinated view to identify "ad units" whose traffic (and hence revenue) primarily derived from ZeroAccess. While it proves highly challenging to extrapolate from our direct observations to a truly global view, by anchoring our analysis in the data for these ad units we estimate that the botnet's fraudulent activities plausibly induced advertising losses on the order of $100,000 per day.
Keywords: click fraud, cybercrime, malware, measurement, ZeroAccess (ID#: 15-4704)
URL: http://doi.acm.org/10.1145/2660267.2660369

 

Byeongho Kang, Eul Gyu Im; Analysis of Binary Code Topology for Dynamic Analysis; SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 1731-1732. Doi: 10.1145/2554850.2559912 Abstract: A better understanding of binary code topology is essential in designing an execution path exploration method. Execution path exploration is closely related to code coverage in binary code dynamic analysis. Since the number of execution paths in a program is astronomically high, an efficient exploration strategy is needed. In this paper, we analyze binary code topology from the viewpoint of basic blocks. We find that incoming edges show an unbalanced distribution that follows a power law, rather than a balanced distribution. This unbalanced distribution of incoming edges can aid understanding of binary code topology, and we propose using this finding to decide on an efficient execution path exploration strategy for dynamic binary code analysis.
Keywords: basic block topology, binary analysis, dynamic analysis (ID#: 15-4705)
URL: http://doi.acm.org/10.1145/2554850.2559912
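The in-degree measurement at the heart of this paper can be sketched over a toy control-flow graph: count incoming edges per basic block, then histogram the counts. Function and variable names are assumptions for illustration.

```python
from collections import Counter

def in_degree_distribution(cfg):
    """cfg maps each basic block to the blocks it can jump to.
    Returns a histogram: in-degree -> number of blocks with that in-degree."""
    in_degree = Counter()
    for block, successors in cfg.items():
        in_degree[block] += 0          # keep blocks with no incoming edges
        for successor in successors:
            in_degree[successor] += 1
    return Counter(in_degree.values())
```

On real binaries the authors observe that this histogram is heavily skewed, consistent with a power law, which suggests an exploration strategy can prioritize the few heavily targeted blocks.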

 

Josh Marston, Komminist Weldemariam, Mohammad Zulkernine; On Evaluating and Securing Firefox for Android Browser Extensions; MOBILESoft 2014 Proceedings of the 1st International Conference on Mobile Software Engineering and Systems, June 2014, Pages 27-36. Doi: 10.1145/2593902.2593909 Abstract: Unsafely or maliciously coded extensions allow an attacker to run their own code in the victim's browser with elevated privileges. This gives the attacker a large amount of control over not only the browser but the underlying machine as well. The topic of securing desktop browsers from such threats has been well studied, but mitigating the same danger on mobile devices has seen little attention. Meanwhile, mobile device use continues to grow worldwide at a rapid pace, along with devices' capability to perform sensitive actions. In an effort to mitigate the risks inherent in these actions, this paper details the dangers of JavaScript injection in the mobile browser. We further present a defense technique, extended from the desktop environment, that works in the mobile space. Our prototype implementation is a combination of extensions for Firefox for Android and a slightly modified Firefox for Android browser. When the user attempts to install a new extension or update an existing one, the modified browser is called a priori. The overall extension logic, code transformation, and static analyzer components were implemented in JavaScript with an SQLite database. Our preliminary evaluation shows that our prototype implementation can effectively prevent real-world attacks against extensions on Firefox for Android without affecting users' browsing experience.
Keywords: Browser Extensions, Firefox for Android, Information Flow, JavaScript, Mobile Security, Static Analysis (ID#: 15-4706)
URL: http://doi.acm.org/10.1145/2593902.2593909

 

Zhengyang Qu, Vaibhav Rastogi, Xinyi Zhang, Yan Chen, Tiantian Zhu, Zhong Chen; AutoCog: Measuring the Description-to-permission Fidelity in Android Applications; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1354-1365. Doi: 10.1145/2660267.2660287 Abstract: The booming popularity of smartphones is partly a result of application markets where users can easily download a wide range of third-party applications. However, due to the open nature of markets, especially on Android, there have been several privacy and security concerns with these applications. On Google Play, as with most other markets, users have direct access to natural-language descriptions of those applications, which give an intuitive idea of the functionality, including the security-related information, of those applications. Google Play also provides the permissions requested by applications to access security- and privacy-sensitive APIs on the devices. Users may use such a list to evaluate the risks of using these applications. To best assist the end users, the descriptions should reflect the need for permissions, which we term description-to-permission fidelity. In this paper, we present a system, AutoCog, to automatically assess the description-to-permission fidelity of applications. AutoCog employs state-of-the-art techniques in natural language processing and our own learning-based algorithm to relate descriptions with permissions. In our evaluation, AutoCog outperforms related work on both detection performance and the ability to generalize over various permissions by a large margin. On an evaluation of eleven permissions, we achieve an average precision of 92.6% and an average recall of 92.0%. Our large-scale measurements over 45,811 applications demonstrate the severity of the problem of low description-to-permission fidelity. AutoCog helps bridge the long-lasting usability gap between security techniques and average users.
Keywords: android, google play, machine learning, mobile, natural language processing, permissions (ID#: 15-4707)
URL: http://doi.acm.org/10.1145/2660267.2660287

 

Chuangang Ren, Kai Chen, Peng Liu; Droidmarking: Resilient Software Watermarking for Impeding Android Application Repackaging; ASE '14 Proceedings of the 29th ACM/IEEE International Conference On Automated Software Engineering, September 2014, Pages 635-646. Doi: 10.1145/2642937.2642977  Abstract: Software plagiarism in Android markets (app repackaging) is raising serious concerns about the health of the Android ecosystem. Existing app repackaging detection techniques fall short in detection efficiency and in resilience to circumventing attacks; this allows repackaged apps to be widely propagated and causes extensive damages before being detected. To overcome these difficulties and instantly thwart app repackaging threats, we devise a new dynamic software watermarking technique - Droidmarking - for Android apps that combines the efforts of all stakeholders and achieves the following three goals: (1) copyright ownership assertion for developers, (2) real-time app repackaging detection on user devices, and (3) resilience to evading attacks. Distinct from existing watermarking techniques, the watermarks in Droidmarking are non-stealthy, which means that watermark locations are not intentionally concealed, yet still are impervious to evading attacks. This property effectively enables normal users to recover and verify watermark copyright information without requiring a confidential watermark recognizer. Droidmarking is based on a primitive called self-decrypting code (SDC). Our evaluations show that Droidmarking is a feasible and robust technique to effectively impede app repackaging with relatively small performance overhead.
Keywords: android, app repackaging, software watermarking (ID#: 15-4708)
URL: http://doi.acm.org/10.1145/2642937.2642977



Malware Analysis, Part 1

 

 

Malware detection, analysis, and classification are perennial issues in cybersecurity. The research presented here advances malware analysis in some unique and interesting ways. The works cited were published or presented in 2014. Because of the volume of work, the bibliography is broken into multiple parts.


 

Alam, S.; Horspool, R.N.; Traore, I., "MARD: A Framework for Metamorphic Malware Analysis and Real-Time Detection," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp.480,489, 13-16 May 2014. doi: 10.1109/AINA.2014.59 Because of the financial and other gains attached to the growing malware industry, there is a need to automate the process of malware analysis and provide real-time malware detection. To hide malware, obfuscation techniques are used. One such technique is metamorphism encoding, which mutates the dynamic binary code and changes the opcode with every run to avoid detection. This makes malware difficult to detect in real time and generally requires a behavioral signature for detection. In this paper we present a new framework called MARD for Metamorphic Malware Analysis and Real-Time Detection, to protect the end points that are often the last defense against metamorphic malware. MARD provides: (1) automation, (2) platform independence, (3) optimizations for real-time performance, and (4) modularity. We also present a comparison of MARD with other such recent efforts. Experimental evaluation of MARD achieves a detection rate of 99.6% and a false positive rate of 4%.
Keywords: binary codes; digital signatures; encoding; invasive software; real-time systems; MARD; behavioral signature; dynamic binary code; malware analysis process automation; malware industry; metamorphic malware analysis and real-time detection; metamorphism encoding; obfuscation techniques; opcode; Malware; Optimization; Pattern matching; Postal services; Real-time systems; Runtime; Software; Automation; Control Flow Analysis; End Point Security; Malware Analysis and Detection; Metamorphism (ID#: 15-4638)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838703&isnumber=6838626

 

Miles, C.; Lakhotia, A.; LeDoux, C.; Newsom, A.; Notani, V., "VirusBattle: State-of-the-art malware analysis for better cyber threat intelligence," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp.1,6, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900103 Discovered interrelationships among instances of malware can be used to infer connections among seemingly unconnected objects, including actors, machines, and the malware itself. However, such malware interrelationships are currently underutilized in the cyber threat intelligence arena. To fill that gap, we are developing VirusBattle, a system employing state-of-the-art malware analyses to automatically discover interrelationships among instances of malware. VirusBattle analyses mine malware interrelationships over many types of malware artifacts, including the binary, code, code semantics, dynamic behaviors, malware metadata, distribution sites and e-mails. The result is a malware interrelationships graph which can be explored automatically or interactively to infer previously unknown connections.
Keywords: computer viruses; data mining; graph theory; VirusBattle; binary; code semantics; cyber threat intelligence; distribution sites; dynamic behaviors ;e-mails; malware analysis; malware artifacts; malware interrelationship mining; malware interrelationships graph; malware metadata; Computers; Data visualization; Electronic mail; Malware; Performance analysis; Semantics; Visualization (ID#: 15-4639)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900103&isnumber=6900080

 

Allix, K.; Jerome, Q.; Bissyande, T.F.; Klein, J.; State, R.; Le Traon, Y., "A Forensic Analysis of Android Malware -- How is Malware Written and How it Could Be Detected?," Computer Software and Applications Conference (COMPSAC), 2014 IEEE 38th Annual, pp.384,393, 21-25 July 2014. doi: 10.1109/COMPSAC.2014.61 We consider in this paper the analysis of a large set of malware and benign applications from the Android ecosystem. Although a large body of research work has dealt with Android malware over recent years, none has addressed it from a forensic point of view. After collecting over 500,000 applications from user markets and research repositories, we perform an analysis that yields valuable insights into the writing process of Android malware. This study also explores some strange artifacts in the datasets, and the divergent capabilities of state-of-the-art antivirus products to recognize/define malware. We further highlight major weaknesses in the criminal community's usage and understanding of Android security and show some patterns in their operational flow. Finally, using insights from this analysis, we build a naive malware detection scheme that could complement existing antivirus software.
Keywords: Android (operating system); digital forensics; invasive software; Android ecosystem; Android malware; Android security; antivirus software; criminal community; forensic analysis; malware detection; operational flow patterns; writing process; Androids; Bioinformatics; Genomics; Google; Humanoid robots; Malware; Software; Android Security; Digital Forensics; Malware Analysis; Malware development (ID#: 15-4640)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6899240&isnumber=6899181

 

Kruczkowski, M.; Szynkiewicz, E.N., "Support Vector Machine for Malware Analysis and Classification," Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on, vol.2, pp.415,420, 11-14 Aug. 2014. doi: 10.1109/WI-IAT.2014.127 Malware is widely used to disrupt computer operation, gain access to users' computer systems or gather sensitive information. Nowadays, malware is a serious threat on the Internet. Extensive analysis of data on the Web can significantly improve the results of malware detection. However, malware analysis has to be supported by methods capable of event correlation and cross-layer correlation detection, heterogeneous data classification, etc. Recently, a class of learning methods building on kernels has emerged as a powerful technique for combining diverse types of data. The Support Vector Machine (SVM) is a widely used kernel-based method for binary classification. SVM is theoretically well founded and has already been applied to many practical problems. In this paper, we evaluate the results of applying SVM to threat data analysis to increase the efficiency of malware detection. Our results suggest that SVM is a robust and efficient method that can be successfully used for heterogeneous Web dataset classification.
Keywords: Internet; data analysis; invasive software; pattern classification; support vector machines; Internet threat; SVM; Web data analysis; binary classification; computer operation; cross-layer correlation detection; heterogeneous Web dataset classification; heterogeneous data classification; kernel-based method; learning methods; malware analysis; malware classification; malware detection; support vector machine; threat data analysis; user computer system access; Computer networks; Correlation; Kernel; Malware; Support vector machines; Training; Vectors; Support Vector Machine; machine learning; malware classification (ID#: 15-4641)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6927654&isnumber=6927590
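The authors' SVM configuration is not given in the abstract, but the flavor of linear binary classification it describes can be sketched with a minimal SVM trained by the Pegasos subgradient method. The two "features" (counts of suspicious vs. benign API calls) and all data points below are invented for illustration; a real system would use a kernel SVM over rich threat-data features.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=7):
    """Minimal linear SVM trained with the Pegasos subgradient method.

    X: list of feature vectors; y: labels in {-1, +1}.
    A bias term is modelled by appending a constant 1.0 feature.
    """
    X = [list(x) + [1.0] for x in X]
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        order = list(range(len(X)))
        rng.shuffle(order)
        for i in order:
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * sum(wj * xj for wj, xj in zip(w, X[i]))
            if margin < 1.0:  # hinge loss is active: step toward the sample
                w = [(1.0 - eta * lam) * wj + eta * y[i] * xj
                     for wj, xj in zip(w, X[i])]
            else:             # only the regularizer shrinks the weights
                w = [(1.0 - eta * lam) * wj for wj in w]
    return w

def predict(w, x):
    x = list(x) + [1.0]
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0.0 else -1

# Hypothetical data: (suspicious API calls, benign GUI calls) per sample.
X = [[5, 0], [4, 1], [6, 1], [0, 5], [1, 4], [0, 6]]
y = [1, 1, 1, -1, -1, -1]   # +1 = malware, -1 = benign
w = train_linear_svm(X, y)
```

After training, `predict(w, [5, 0])` labels an API-heavy sample as malware and `predict(w, [0, 5])` labels a GUI-heavy sample as benign.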

 

Vasilescu, M.; Gheorghe, L.; Tapus, N., "Practical Malware Analysis Based On Sandboxing," RoEduNet Conference 13th Edition: Networking in Education and Research Joint Event RENAM 8th Conference, 2014, pp.1,6, 11-13 Sept. 2014. doi: 10.1109/RoEduNet-RENAM.2014.6955304 The past years have shown an increase in both the number and sophistication of cyber-attacks targeting Windows and Linux operating systems. Traditional network security solutions such as firewalls are incapable of detecting and stopping these attacks. In this paper, we describe our distributed firewall solution Distfw and its integration with a sandbox for malware analysis and detection. We demonstrate the effectiveness and shortcomings of such a solution. We use Cuckoo to perform automated analysis of malware samples and compare the results with those from manual analysis. We find that Cuckoo provides similar results in a considerable amount of time.
Keywords: Linux; invasive software; Cuckoo; Distfw solution; Linux operating system; Windows operating system; cyber-attacks; distributed firewall solution; malware analysis; malware detection; network security solutions; sandboxing; Firewalls (computing); IP networks; Malware; Manuals; Operating systems; Servers; malware; malware analysis; network security; sandbox (ID#: 15-4642)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6955304&isnumber=6955289

 

Pandey, S.K.; Mehtre, B.M., "A Lifecycle Based Approach for Malware Analysis," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp.767,771, 7-9 April 2014. doi: 10.1109/CSNT.2014.161 Most detection approaches, such as signature-based, anomaly-based and specification-based, are not able to analyze and detect all types of malware. The signature-based approach to malware detection has one major drawback: it cannot detect zero-day attacks. The fundamental limitation of the anomaly-based approach is its high false alarm rate, and specification-based detection often has difficulty specifying completely and accurately the entire set of valid behaviors a malware should exhibit. Modern malware developers try to avoid detection by using several techniques such as polymorphism, metamorphism and various hiding techniques. In order to overcome these issues, we propose a new approach for malware analysis and detection that consists of the following twelve stages: Inbound Scan, Inbound Attack, Spontaneous Attack, Client-Side Exploit, Egg Download, Device Infection, Local Reconnaissance, Network Surveillance & Communications, Peer Coordination, Attack Preparation, and Malicious Outbound Propagation. All these stages are integrated as an interrelated process in our proposed approach. This approach addresses the limitations of the three earlier approaches by monitoring the behavioral activity of malware at each and every stage of its life cycle, and finally it reports on the maliciousness of the files or software.
Keywords: invasive software; anomaly based approach; attack preparation; client-side exploit; device infection; egg download; hiding techniques; inbound attack; inbound scan; lifecycle based approach; local reconnaissance; malicious outbound propagation; malware analysis; network surveillance; peer coordination; signature-based approach; specification-based detection; spontaneous attack; Computers; Educational institutions; Malware; Monitoring; Reconnaissance; Malware; Metamorphic; Polymorphic; Reconnaissance; Signature based; Zero day attack (ID#: 15-4643)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821503&isnumber=6821334

 

Suarez-Tangil, G.; Tapiador, J.E.; Peris-Lopez, P.; Ribagorda, A., "Evolution, Detection and Analysis of Malware for Smart Devices," Communications Surveys & Tutorials, IEEE, vol.16, no.2, pp.961, 987, Second Quarter 2014. doi: 10.1109/SURV.2013.101613.00077 Smart devices equipped with powerful sensing, computing and networking capabilities have proliferated lately, ranging from popular smartphones and tablets to Internet appliances, smart TVs, and others that will soon appear (e.g., watches, glasses, and clothes). One key feature of such devices is their ability to incorporate third-party apps from a variety of markets. This poses serious security and privacy issues to users and infrastructure operators, particularly through software of malicious (or dubious) nature that can easily get access to the services provided by the device and collect sensory data and personal information. Malware in current smart devices -- mostly smartphones and tablets -- has rocketed in the last few years, in some cases supported by sophisticated techniques purposely designed to overcome security architectures currently in use by such devices. Even though important advances have been made on malware detection in traditional personal computers during the last decades, adopting and adapting those techniques to smart devices is a challenging problem. For example, power consumption is one major constraint that makes it unaffordable to run traditional detection engines on the device, while externalized (i.e., cloud-based) techniques raise many privacy concerns. This article examines the problem of malware in smart devices and recent progress made in detection techniques. We first present a detailed analysis on how malware has evolved over the last years for the most popular platforms. We identify exhibited behaviors, pursued goals, infection and distribution strategies, etc. and provide numerous examples through case studies of the most relevant specimens.
We next survey, classify and discuss efforts made on detecting both malware and other suspicious software (grayware), concentrating on the 20 most relevant techniques proposed between 2010 and 2013. Based on the conclusions extracted from this study, we finally provide constructive discussion on open research problems and areas where we believe that more work is needed.
Keywords: data privacy; invasive software; notebook computers; smart phones; telecommunication security; Internet appliances; cloud-based technique; computing capabilities; distribution strategies; exhibited behavior identification; externalized techniques; infection identification; malicious software; malware analysis; malware detection; malware evolution; networking capabilities; personal computers; privacy issues; pursued goal identification; security architectures; security issues; sensing capabilities; smart TV; smart devices; smartphones; tablets; third-party apps; Androids; Humanoid robots; Malware; Privacy; Smart phones; Software; grayware; malware; privacy; security; smart devices; smartphones (ID#: 15-4644)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6657497&isnumber=6811383

 

Park, Wonjoo; Lee, Kyong-Ha; Cho, Kee-Seong; Ryu, Won, "Analyzing and Detecting Method Of Android Malware Via Disassembling And Visualization," Information and Communication Technology Convergence (ICTC), 2014 International Conference on, pp.817,818, 22-24 Oct. 2014. doi: 10.1109/ICTC.2014.6983300 In light of the rapid growth of Android malware, there is a pressing need for solutions that analyze a sample and decide whether or not it is malicious. However, most protections rest on a limited understanding of this mobile malware and require sophisticated analysis. In this paper, we propose a method of analyzing and classifying malware on the basis of similarity with existing malware families on the popular Android platform. We focus on checking visual similarity among Android malware samples and determining the degree of similarity with known malware families, to help route samples to the appropriate inspector.
Keywords: Accuracy; Androids; Humanoid robots; Malware; Mobile communication; Smart phones; Visualization; Android malware; Smartphone security; malware analysis (ID#: 15-4645)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6983300&isnumber=6983064
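The paper's disassembly-and-visualization pipeline is not reproduced here; as a rough stand-in for its similarity check, the sketch below compares two binaries by the cosine similarity of their normalized byte histograms, a representation that underlies many malware-as-image techniques. All byte strings are synthetic.

```python
import math

def byte_histogram(data: bytes):
    """Normalized 256-bin histogram of byte values."""
    hist = [0] * 256
    for b in data:
        hist[b] += 1
    total = len(data) or 1
    return [count / total for count in hist]

def cosine_similarity(u, v):
    """Cosine of the angle between two histograms; 1.0 = identical shape."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Synthetic "binaries": a base sample, a near-identical variant, and
# an unrelated sample drawn from a disjoint byte range.
base = bytes([0, 1, 2, 3]) * 64
variant = base + bytes([4, 5])
unrelated = bytes([250, 251]) * 128
h_base, h_var, h_unrel = map(byte_histogram, (base, variant, unrelated))
```

Here the variant scores close to 1.0 against the base while the unrelated sample scores 0.0, which is the kind of signal a similarity-based family assignment would threshold on.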

 

Mendez-Garcia, V.; Jimenez-Ramirez, P.; Melendez-Ramirez, M.A.; Torres-Martinez, F.M.; Llamas-Contreras, R.; Gonzalez, H., "Comparative analysis of banking malware," Central America and Panama Convention (CONCAPAN XXXIV), 2014 IEEE, pp.1,5, 12-14 Nov. 2014. doi: 10.1109/CONCAPAN.2014.7000412 The research focused on the analysis of banking malware such as Zeus, Citadel, Carberp, SpyEye and Soraya, which infected personal computers between 2006 and 2014. This work briefly described each malware family, compared their major features and ranked them by impact. An experiment was performed running the samples and then analyzing the network traffic of each infected machine.
Keywords: Banking; Encyclopedias; IP networks; Internet; Malware; Silicon compounds; Software; banking malware; malware analysis (ID#: 15-4646)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7000412&isnumber=7000388

 

Raphael, R.; Vinod, P.; Omman, B., "X-ANOVA and X-Utest Features For Android Malware Analysis," Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, pp. 1643, 1649, 24-27 Sept. 2014. doi: 10.1109/ICACCI.2014.6968608 In this paper we propose a static analysis framework to classify Android malware. Three different features, namely (a) opcodes, (b) methods and (c) permissions, are extracted from each Android .apk file. The dominant attributes are aggregated by extending two feature-ranking methods: ANOVA to Extended ANOVA (X-ANOVA) and the Mann-Whitney U-test to the Extended U-Test (X-U-Test). These two statistical feature-ranking methods retrieve the significant features by removing irrelevant attributes based on their score. The accuracy of the proposed system is computed using three different classifiers (J48, AdaBoost and Random Forest) as well as a voted classification technique. The X-U-Test exhibits better accuracy than X-ANOVA. The highest accuracy of 89.36% is obtained with opcodes when applying the X-U-Test, while X-ANOVA shows a high accuracy of 87.81% with methods as the feature. The permission-based model acquired the highest accuracy in the independent (90.47%) and voted (90.63%) classification models.
Keywords: Android (operating system); invasive software; learning (artificial intelligence); program diagnostics; program testing; statistical analysis; AdaBoost; Android malware analysis; Mann-Whitney U-test; X-ANOVA; X-U-Test; X-Utest features; extended U-Test; opcode; random forest; static analysis; Accuracy; Analysis of variance; Equations; Malware; Mathematical model; Smart phones; Training; ANOVA; Android Malware; Classifiers; Feature Ranking; Mobile Malware; U-Test; Mann-Whitney Test (ID#: 15-4647)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6968608&isnumber=6968191
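X-ANOVA itself is the authors' extension and is not specified in the abstract; the sketch below shows the plain one-way ANOVA F statistic it builds on, used to rank features by how well they separate malware from benign samples. The two-column feature matrices are invented toy data.

```python
def f_score(values_a, values_b):
    """One-way ANOVA F statistic for one feature over two groups."""
    na, nb = len(values_a), len(values_b)
    mean_a = sum(values_a) / na
    mean_b = sum(values_b) / nb
    grand = (sum(values_a) + sum(values_b)) / (na + nb)
    # Between-group sum of squares (df = 1 for two groups).
    ss_between = na * (mean_a - grand) ** 2 + nb * (mean_b - grand) ** 2
    # Within-group sum of squares (df = na + nb - 2).
    ss_within = (sum((v - mean_a) ** 2 for v in values_a)
                 + sum((v - mean_b) ** 2 for v in values_b))
    if ss_within == 0.0:
        return float("inf")
    return ss_between / (ss_within / (na + nb - 2))

def rank_features(malware_rows, benign_rows):
    """Feature indices sorted by decreasing F score."""
    n_feats = len(malware_rows[0])
    scored = []
    for j in range(n_feats):
        fa = [row[j] for row in malware_rows]
        fb = [row[j] for row in benign_rows]
        scored.append((f_score(fa, fb), j))
    return [j for _, j in sorted(scored, reverse=True)]

# Toy data: feature 0 separates the classes, feature 1 does not.
malware = [[10, 3], [12, 5], [11, 4]]
benign = [[1, 4], [2, 3], [0, 5]]
```

`rank_features(malware, benign)` puts the discriminative feature first, which is the pruning step the extended methods refine.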

 

Aswini, A.M.; Vinod, P., "Droid Permission Miner: Mining Prominent Permissions For Android Malware Analysis," Applications of Digital Information and Web Technologies (ICADIWT), 2014 Fifth International Conference on the, pp.81,86, 17-19 Feb. 2014. doi: 10.1109/ICADIWT.2014.6814679 In this paper, we propose static analysis of Android malware files by mining prominent permissions. The proposed technique is implemented by extracting permissions from 436 .apk files. Feature pruning is carried out to investigate the impact of feature length on accuracy. The prominent features that lead to less misclassification are determined using the Bi-Normal Separation (BNS) and Mutual Information (MI) feature selection techniques. Results suggest that Droid Permission Miner can be used for preliminary classification of Android package files.
Keywords: Android (operating system); data mining; feature selection; invasive software; mobile computing; pattern classification; smart phones; Android package file classification; BNS; Droid permission miner; MI feature selection; android malware analysis; bi-normal separation; mutual information; permission extraction; prominent permission mining; static analysis; Accuracy; Androids; Feature extraction; Humanoid robots; Malware; Smart phones; Training; Androguard; Android malware; Feature extraction; Static analysis (ID#: 15-4648)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814679&isnumber=6814661
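As a sketch of the MI half of the feature selection described (BNS is analogous in spirit), the following computes the mutual information, in bits, between a binary permission indicator and the class label. The toy feature vectors are invented; in the paper each feature would be the presence of one Android permission across the 436 .apk files.

```python
import math

def mutual_information(feature, labels):
    """MI (bits) between a binary feature and a binary class label."""
    n = len(feature)
    mi = 0.0
    for fv in (0, 1):
        p_f = sum(1 for f in feature if f == fv) / n
        for lv in (0, 1):
            p_l = sum(1 for l in labels if l == lv) / n
            p_fl = sum(1 for f, l in zip(feature, labels)
                       if f == fv and l == lv) / n
            if p_fl > 0.0:  # 0 * log(0) contributes nothing
                mi += p_fl * math.log2(p_fl / (p_f * p_l))
    return mi
```

A permission that perfectly tracks the malware label carries one full bit of information about a balanced label, while a permission every app requests carries none; ranking permissions by this score and keeping the top ones is the pruning step the abstract describes.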

 

Vlad, M.; Reiser, H.P., "Towards a Flexible Virtualization-Based Architecture for Malware Detection and Analysis," Database and Expert Systems Applications (DEXA), 2014 25th International Workshop on, pp.303,307, 1-5 Sept. 2014. doi: 10.1109/DEXA.2014.67 The complexity and sophistication of malicious attacks against IT systems have steadily increased over the past decades. Tools used to detect and analyse such attacks need to evolve continuously as well in order to cope with them. In this paper, we identify some limitations of existing approaches and propose a novel architecture for an attack detection and analysis framework. This architecture is based on virtualization technology to execute target systems, supports a broad spectrum of low-level tracing modules and sophisticated, extensible virtual-machine introspection mechanisms, combined with an extensible plug-in interface for specialized detection and analysis mechanisms, and it offers support for deployment in cloud infrastructures.
Keywords: cloud computing; invasive software; virtual machines; virtualisation; IT systems; analysis mechanisms; cloud infrastructures; extensible plug-in interface; flexible virtualization-based architecture; low-level tracing modules; malicious attacks; malware analysis; malware detection; specialized detection mechanisms; virtual-machine introspection mechanisms; virtualization technology; Computer architecture; Computers; Hardware; Malware; Virtual machining; Virtualization; Intrusion Detection; Malware Analysis; attack detection; plug-in architecture (ID#: 15-4649)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6974866&isnumber=6974758

 

Aydogan, E.; Sen, S., "Analysis of Machine Learning Methods On Malware Detection," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp.2066,2069, 23-25 April 2014. doi: 10.1109/SIU.2014.6830667 Nowadays, one of the most important security threats is new, unseen malicious executables. Current anti-virus systems have been fairly successful against malicious software with known signatures, but they are very ineffective against new, unseen malicious software. In this paper, we aim to detect new, unseen malicious executables using machine learning techniques. We extract distinguishing structural features of software and employ machine learning techniques in order to detect malicious executables.
Keywords: invasive software; learning (artificial intelligence); anti-virus systems; machine learning methods; malicious executables detection; malicious softwares; malware detection; security threats; software structural features; Conferences; Internet; Malware; Niobium; Signal processing; Software; machine learning; malware analysis and detection (ID#: 15-4650)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830667&isnumber=6830164

 

Guanhua Yan, "Finding Common Ground Among Experts' Opinions On Data Clustering: With Applications In Malware Analysis," Data Engineering (ICDE), 2014 IEEE 30th International Conference on, pp.15,27, March 31 2014-April 4 2014. doi: 10.1109/ICDE.2014.6816636 Data clustering is a basic technique for knowledge discovery and data mining. As the volume of data grows significantly, data clustering becomes computationally prohibitive and resource demanding, and sometimes it is necessary to outsource these tasks to third party experts who specialize in data clustering. The goal of this work is to develop techniques that find common ground among experts' opinions on data clustering, which may be biased due to the features or algorithms used in clustering. Our work differs from the large body of existing approaches to consensus clustering, as we do not require all data objects be grouped into clusters. Rather, our work is motivated by real-world applications that demand high confidence in how data objects - if they are selected - are grouped together. We formulate the problem rigorously and show that it is NP-complete. We further develop a lightweight technique based on finding a maximum independent set in a 3-uniform hypergraph to select data objects that do not form conflicts among experts' opinions. We apply our proposed method to a real-world malware dataset with hundreds of thousands of instances to find malware clusters based on how multiple major AV (Anti-Virus) software classify these samples. Our work offers a new direction for consensus clustering by striking a balance between the clustering quality and the amount of data objects chosen to be clustered.
Keywords: computational complexity; computer viruses; data mining; graph theory; pattern clustering; 3-uniform hypergraph; AV software; NP-complete; antivirus software; clustering quality; common ground; consensus clustering; data clustering; data mining; data objects; expert opinions; knowledge discovery; malware analysis; malware clusters; Feature extraction (ID#: 15-4651)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816636&isnumber=6816620
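The paper's lightweight technique is not detailed in the abstract; as a simple illustration of the same NP-complete problem — selecting objects that hit no expert-conflict triple, i.e. an independent set in a 3-uniform hypergraph — a greedy heuristic looks like this. The object names and conflict triples below are synthetic, not from the paper's dataset.

```python
from collections import Counter

def conflict_free_subset(objects, conflicts):
    """Greedy heuristic: keep a subset of objects hitting no conflict triple.

    `conflicts` is a list of 3-tuples of objects on whose grouping the
    experts disagree; the largest such subset is a maximum independent
    set in a 3-uniform hypergraph, which is NP-complete to find exactly.
    """
    kept = set(objects)
    while True:
        live = [t for t in conflicts if all(o in kept for o in t)]
        if not live:
            return kept
        counts = Counter(o for t in live for o in t)
        # Drop the object in the most unresolved conflicts (ties: by name).
        worst = max(sorted(counts), key=lambda o: counts[o])
        kept.discard(worst)
```

Dropping the most conflict-laden object first resolves many triples at once, trading optimality for speed, which mirrors the "lightweight" spirit the abstract describes.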

 

Mead, N.R.; Morales, J.A., "Using Malware Analysis To Improve Security Requirements On Future Systems," Evolving Security and Privacy Requirements Engineering (ESPRE), 2014 IEEE 1st Workshop on, pp.37, 41, 25-25 Aug. 2014. doi: 10.1109/ESPRE.2014.6890526 In this position paper, we propose to enhance current software development lifecycle models by including use cases, based on previous cyberattacks and their associated malware, and to propose an open research question: Are specific types of systems prone to specific classes of malware exploits? If this is the case, developers can create future systems that are more secure, from inception, by including use cases that address previous attacks.
Keywords: invasive software; software engineering; cyberattacks; malware analysis; malware exploits; security requirement improvement; software development lifecycle models; use cases; Authentication; Computer crime; Malware; Software; Software engineering; Standards; SDLC; cyberattacks; malware; software security (ID#: 15-4652)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890526&isnumber=6890516

 

Adeel, M.; Tokarchuk, L.N.; Azam, M.A.; Khan, S.K.A.; Khalil, M.A., "Propagation Analysis of Malware Families in Mobile P2P Networks," Information Technology: New Generations (ITNG), 2014 11th International Conference on, pp.220,226, 7-9 April 2014. doi: 10.1109/ITNG.2014.123 Viral propagation modelling acts as a sandbox for testing the intensity of malware and understanding the patterns adopted for malware propagation, and consequently helps devise strategies for malware detection. The success of P2P networks has encouraged mobile vendors to offer P2P services on mobile networks. Handheld mobile devices, though constrained in memory, power and processing resources, are capable of using communication technologies like Bluetooth, MMS, SMS, Infrared and WLAN services. Such versatility has however exposed mobile devices to threats like mobile P2P malware. With the number of mobile phone malware specimens escalating to an alarming figure of more than one thousand, it has become ever more important to analyze the effects of the propagation of such malware in the wild, which could subsequently act as the baseline for protection against them. This paper initially presents a propagation analysis of generic mobile P2P malware categories and then provides a detailed analysis of the propagation of real-world malware from three malware families accommodating around 100 well-known mobile P2P malware specimens. The paper is aimed at providing a much needed insight into the propagation characteristics of mobile P2P malware, such as their propagation speed and battery depletion effect.
Keywords: invasive software; mobile computing; peer-to-peer computing; Viral propagation modelling; handheld mobile device; malware detection; malware propagation analysis; mobile P2P network; mobile phone malware; Batteries; Bluetooth; Grippers; Malware; Mathematical model; Mobile communication; Viruses (medical); Malware classification; Malware propagation; Mobile P2P; Mobile malware families (ID#: 15-4653)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822202&isnumber=6822158
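The authors' propagation models are not reproduced in the abstract; a minimal discrete-time susceptible-infected (SI) curve of the kind such analyses build on can be sketched as follows. The population size and infection rates are invented parameters, not figures from the paper.

```python
def si_spread(population, beta, initial=1, steps=30):
    """Discrete-time susceptible-infected (SI) epidemic curve.

    Each step, the infected count grows by beta * I * S / N, i.e. each
    infected device infects contacts at rate `beta`, scaled by the
    fraction of devices still susceptible.
    """
    infected = [float(initial)]
    for _ in range(steps):
        i = infected[-1]
        newly_infected = beta * i * (population - i) / population
        infected.append(min(float(population), i + newly_infected))
    return infected
```

Running `si_spread(1000, 0.8)` against `si_spread(1000, 0.2)` shows the faster-propagating "family" dominating early in the outbreak, with both curves saturating at the population size, which is the qualitative behavior a propagation-speed comparison measures.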

 

Yerima, S.Y.; Sezer, S.; McWilliams, G., "Analysis of Bayesian Classification-Based Approaches For Android Malware Detection," Information Security, IET, vol.8, no.1, pp.25, 36, Jan. 2014. doi: 10.1049/iet-ifs.2013.0095 Mobile malware has been growing in scale and complexity spurred by the unabated uptake of smartphones worldwide. Android is fast becoming the most popular mobile platform resulting in sharp increase in malware targeting the platform. Additionally, Android malware is evolving rapidly to evade detection by traditional signature-based scanning. Despite current detection measures in place, timely discovery of new malware is still a critical issue. This calls for novel approaches to mitigate the growing threat of zero-day Android malware. Hence, the authors develop and analyse proactive machine-learning approaches based on Bayesian classification aimed at uncovering unknown Android malware via static analysis. The study, which is based on a large malware sample set of majority of the existing families, demonstrates detection capabilities with high accuracy. Empirical results and comparative analysis are presented offering useful insight towards development of effective static-analytic Bayesian classification-based solutions for detecting unknown Android malware.
Keywords: invasive software; learning (artificial intelligence); operating system kernels; pattern classification; smart phones; Android malware detection; machine learning; mobile malware; signature based scanning; smartphones; static analysis; static analytic Bayesian classification (ID#: 15-4654)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6687155&isnumber=6687150
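The exact Bayesian classifier used is not given in the abstract; a minimal Bernoulli naive Bayes over binary app features, with Laplace smoothing, illustrates the approach. The three features below (hypothetical permission flags such as SEND_SMS) and the training rows are invented for illustration.

```python
import math

def train_bernoulli_nb(X, y):
    """Bernoulli naive Bayes with Laplace smoothing; labels y in {0, 1}."""
    model = {}
    for c in (0, 1):
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        log_prior = math.log(n / len(X))
        # Smoothed P(feature j = 1 | class c).
        probs = [(sum(r[j] for r in rows) + 1) / (n + 2)
                 for j in range(len(X[0]))]
        model[c] = (log_prior, probs)
    return model

def classify(model, x):
    """Return the class with the highest log posterior."""
    def log_post(c):
        log_prior, probs = model[c]
        return log_prior + sum(
            math.log(p if xi else 1.0 - p) for xi, p in zip(x, probs))
    return max(model, key=log_post)

# Hypothetical binary features per app: [SEND_SMS, READ_CONTACTS, INTERNET].
X = [[1, 1, 1], [1, 0, 1], [1, 1, 0],   # labelled malware (1)
     [0, 0, 1], [0, 1, 1], [0, 0, 0]]   # labelled benign (0)
y = [1, 1, 1, 0, 0, 0]
model = train_bernoulli_nb(X, y)
```

`classify(model, [1, 1, 1])` flags an SMS-heavy app as malware while `classify(model, [0, 0, 1])` stays benign; smoothing keeps unseen feature/class combinations from zeroing out a posterior.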

 

Naidu, V.; Narayanan, A., "Further Experiments In Biocomputational Structural Analysis Of Malware," Natural Computation (ICNC), 2014 10th International Conference on, pp.605,610, 19-21 Aug. 2014. doi: 10.1109/ICNC.2014.6975904 Initial work on structural analysis of malware using the nature-inspired technique of projecting malware signatures into the amino acid/protein domain was promising in a number of ways, including the demonstration of potential links with real-world pathogen proteins. That initial work was necessarily speculative and limited by a number of experimental factors. The aim of the research reported here is to address some of these limitations and to repeat, with malware code and signatures that can be assured as genuine, the experiments previously reported but with enhancements and improvements. Intriguingly, the outcome is the same: for some reason that is not yet known, matching artificial malware code consensuses after multiple alignment against protein databases returns a high proportion of naturally occurring viral proteins.
Keywords: digital signatures; invasive software; amino acid; artificial malware code consensuses; biocomputational structural analysis; malware signatures; nature-inspired technique; protein databases; real-world pathogen proteins; viral proteins; Amino acids; Biological information theory; Grippers; Malware; Matrices; Payloads; Proteins; Blaster worm; automatic signature generation; malware modelling; malware structural analysis (ID#: 15-4655)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975904&isnumber=6975799
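The authors' byte-to-residue mapping is not given in the abstract; purely as an illustration of projecting binary signatures into the amino-acid alphabet (so that standard multiple-alignment tools can be applied), one hypothetical mapping is:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residue letters

def bytes_to_residues(payload: bytes) -> str:
    """Hypothetical projection: each byte selects a residue modulo 20."""
    return "".join(AMINO_ACIDS[b % 20] for b in payload)
```

The resulting string can be fed to protein-alignment tooling; the real work presumably uses a more carefully designed mapping, so treat this only as a sketch of the projection step.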

 

Moghaddam, Samaneh Hosseini; Abbaspour, Maghsood, "Sensitivity Analysis Of Static Features For Android Malware Detection," Electrical Engineering (ICEE), 2014 22nd Iranian Conference on, pp.920,924, 20-22 May 2014. doi: 10.1109/IranianCEE.2014.6999667 The recent explosion in the number of mobile malware specimens in the wild significantly increases the importance of developing techniques to detect them. There is much published research in this area that employs traditional desktop malware detection approaches, such as dynamic and static analysis, to detect mobile malware, but none of it applies a thorough sensitivity analysis to the features used. In this paper we divide the static features of classification-based Android malware detection techniques proposed in different papers into related categories, and study the influence of each category of features on the efficiency of classification-based Android malware detection that uses all the static features.
Keywords: Androids; Feature extraction; Humanoid robots; Malware; Mobile communication; Sensitivity analysis; Smart phones; Android malware detection; mobile malware detection; sensitivity analysis; static analysis; static feature (ID#: 15-4656)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6999667&isnumber=6999486

 

Kuriakose, J.; Vinod, P., "Ranked Linear Discriminant Analysis Features For Metamorphic Malware Detection," Advance Computing Conference (IACC), 2014 IEEE International, pp.112,117, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779304 Metamorphic malware modifies the code of every new offspring by using code obfuscation techniques. Recent research has shown that metamorphic writers make use of benign dead code to thwart signature-based and Hidden Markov based detectors. Failure in detection is due to the fact that the malware code appears statistically similar to benign programs. In order to detect complex malware generated with the hacker-generated tool NGVCK, known to the research community, and the intricate metamorphic worm available as benchmark data, we propose a novel approach using Linear Discriminant Analysis (LDA) to rank and synthesize the most prominent opcode bi-gram features for identifying unseen malware and benign samples. Our investigation resulted in 99.7% accuracy, which reveals that the current method could be employed to improve the detection rate of existing publicly available malware scanners.
Keywords: hidden Markov models; security of data; benign dead code; code obfuscation technique; hidden Markov based detectors; intricate metamorphic worm; metamorphic malware detection; opcode bi-gram features; ranked linear discriminant analysis features; thwart signature; Conferences; Decision support systems; Handheld computers; Nickel; linear discriminant analysis; metamorphic malware; obfuscation; optimal features (ID#: 15-4657)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779304&isnumber=6779283

 

Zolotukhin, M.; Hamalainen, T., "Detection of Zero-Day Malware Based On The Analysis Of Opcode Sequences," Consumer Communications and Networking Conference (CCNC), 2014 IEEE 11th, pp.386,391, 10-13 Jan. 2014. doi: 10.1109/CCNC.2014.6866599 Today, rapid growth in the amount of malicious software is causing a serious global security threat. Unfortunately, widespread signature-based malware detection mechanisms are not able to deal with constantly appearing new types of malware and variants of existing ones until an instance of this malware has damaged several computers or networks. In this research, we apply an anomaly detection approach which can cope with the problem of new malware detection. First, executable files are analyzed in order to extract operation code sequences, and then n-gram models are employed to discover essential features from these sequences. A clustering algorithm based on the iterative usage of support vector machines and support vector data descriptions is applied to analyze the feature vectors obtained and to build a benign software behavior model. Finally, this model is used to detect malicious executables within new files. The scheme proposed allows one to detect previously unseen malware. The simulation results presented show that the method achieves a higher accuracy rate than existing analogues.
Keywords: invasive software; iterative methods; pattern clustering; support vector machines; anomaly detection approach; benign software behavior model; clustering algorithm; global security threat; iterative usage; malicious software; n-gram models; opcode sequences analysis; operation code sequences; support vector data descriptions; support vector machines; widespread signature-based malware detection mechanism; zero-day malware detection; Feature extraction; Malware; Software; Software algorithms; Support vector machines; Training; Vectors (ID#: 15-4658)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866599&isnumber=6866537
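The pipeline summarized above (opcode extraction, n-gram features, an SVM/SVDD-based benign model) is not spelled out in the abstract; as a minimal, hypothetical sketch of the n-gram feature step, with a naive cosine-similarity score standing in for the paper's clustering model:

```python
from collections import Counter
from math import sqrt

def opcode_ngrams(opcodes, n=2):
    """Count n-grams over a sequence of opcode mnemonics."""
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse n-gram count vectors."""
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Benign model: aggregate n-gram profile of known-clean executables (toy data).
benign = [["mov", "push", "call", "ret"], ["mov", "push", "call", "mov", "ret"]]
centroid = Counter()
for seq in benign:
    centroid.update(opcode_ngrams(seq))

# A new file whose profile is too dissimilar from the benign model is flagged.
suspect = ["xor", "xor", "jmp", "xor", "jmp"]
score = cosine(opcode_ngrams(suspect), centroid)
print(round(score, 3))  # → 0.0 (no benign n-grams shared: candidate malware)
```

The opcode sequences and the similarity threshold here are fabricated for illustration; the paper itself builds the benign boundary with support vector machines and support vector data descriptions rather than a centroid comparison.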

 

Rai, S., "Combining register value analysis with similarity based technique for metamorphic malware detection," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.720,725, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884974 Metamorphic malware is one of the most deceptive categories of malware, inspired by the natural phenomenon of camouflage. The variation occurs in appearance only, without altering the core elements or properties of the subject, and is implemented with simple code obfuscation techniques such as dead-code insertion and sequence reordering. Nevertheless, Anti-Virus (AV) companies struggle to counter this strategy of malware writers because syntactic signature-based detection is inadequate. This paper discusses the feasibility of malware evading detectors and presents a comparative study of detection methods for metamorphic malware, such as zero transform, Hidden Markov Models, and semantic analysis. I propose an approach combining register value analysis with other similarity-based techniques for an improved detection rate with fewer false negatives.
Keywords: hidden Markov models; invasive software; transforms; AV companies; antivirus companies; code obfuscation techniques; false negative reduction; hidden Markov model; malware evasion; malware writers; metamorphic malware detection; register value analysis; semantic analysis; similarity based technique; syntactic signature pattern-based detection; zero transform; Automata; Cryptography; Hidden Markov models; Malware; Reactive power; Registers; Transforms; Code obfuscation; Cyber Security; Detection techniques; Malware; Metamorphic malwares (ID#: 15-4659)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884974&isnumber=6884878

 

Koniaris, I.; Papadimitriou, G.; Nicopolitidis, P.; Obaidat, M., "Honeypots Deployment For The Analysis And Visualization Of Malware Activity And Malicious Connections," Communications (ICC), 2014 IEEE International Conference on, pp.1819,1824, 10-14 June 2014. doi: 10.1109/ICC.2014.6883587 Honeypots are systems aimed at deceiving threat agents. In most of the cases the latter are cyber attackers with financial motivations, and malicious software with the ability to launch automated attacks. Honeypots are usually deployed as either production systems or as research units to study the methods employed by attackers. In this paper we present the results of two distinct research honeypots. The first acted as a malware collector, a device usually deployed in order to capture self-propagating malware and monitor their activity. The second acted as a decoy server, dropping but logging every malicious connection attempt. Both of these systems have remained online for a lengthy period of time to study the aforementioned malicious activity. During this assessment it was shown that human attackers and malicious software are constantly attacking servers, trying to break into systems or spread across networks. It was also shown that the usage of honeypots for malware monitoring and attack logging can be very effective and provide valuable data. Lastly, we present an open source visualization tool which was developed to help security professionals and researchers during the analysis and conclusion drawing phases, for use with one of the systems fielded in our study.
Keywords: data visualisation; invasive software; public domain software; cyber attackers; financial motivations; honeypots deployment; malicious connections; malicious software; malware activity; open source visualization tool; threat agents; Data visualization; Grippers; IP networks; Malware; Ports (Computers); Servers; Software; data visualization; honeypot; intrusion detection; log file analysis; malware (ID#: 15-4660)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883587&isnumber=6883277


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

 

Malware Analysis, Part 2

 
SoS Logo

Malware Analysis, Part 2

 

Malware detection, analysis, and classification are perennial issues in cybersecurity.  The research presented here advances malware analysis in some unique and interesting ways.  The works cited were published or presented in 2014.  Because of the volume of work, the bibliography is broken into multiple parts.


 

Sheng Wen; Wei Zhou; Jun Zhang; Yang Xiang; Wanlei Zhou; Weijia Jia; Zou, C.C., "Modeling and Analysis on the Propagation Dynamics of Modern Email Malware," Dependable and Secure Computing, IEEE Transactions on, vol. 11, no.4, pp.361,374, July-Aug. 2014. doi: 10.1109/TDSC.2013.49 Due to the critical security threats imposed by email-based malware in recent years, modeling the propagation dynamics of email malware becomes a fundamental technique for predicting its potential damages and developing effective countermeasures. Compared to earlier versions of email malware, modern email malware exhibits two new features, reinfection and self-start. Reinfection refers to the malware behavior that modern email malware sends out malware copies whenever any healthy or infected recipients open the malicious attachment. Self-start refers to the behavior that malware starts to spread whenever compromised computers restart or certain files are visited. In the literature, several models are proposed for email malware propagation, but they did not take into account the above two features and cannot accurately model the propagation dynamics of modern email malware. To address this problem, we derive a novel difference equation based analytical model by introducing a new concept of virtual infected user. The proposed model can precisely present the repetitious spreading process caused by reinfection and self-start and effectively overcome the associated computational challenges. We perform comprehensive empirical and theoretical study to validate the proposed analytical model. The results show our model greatly outperforms previous models in terms of estimation accuracy.
Keywords: invasive software; electronic mail; email-based malware; malware countermeasures ;malware propagation dynamics; reinfection feature; repetitious spreading process; security threats; self-start feature; virtual infected user concept; Analytical models; Computational modeling; Computers; Electronic mail; Malware; Mathematical model; Topology; Network security; email malware; propagation modeling (ID#: 15-4904)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6671578&isnumber=6851971
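The paper's difference-equation model (built around its "virtual infected user" concept) is not reproduced in the abstract. The general flavor of discrete-time propagation modeling with reinfection can be sketched as follows; every parameter and the update rule here are illustrative assumptions, not the authors' model:

```python
# All parameters and the update rule are illustrative, not the authors' model.
N = 1000           # user population
p_open = 0.1       # probability a recipient opens the malicious attachment
contacts = 5       # recipients reached per infected user per time step

infected = 1.0
history = [infected]
for _ in range(20):
    healthy = N - infected
    # Reinfection: every infected user keeps sending copies each step, but
    # only the copies landing on healthy users add new infections.
    new = infected * contacts * p_open * (healthy / N)
    infected = min(N, infected + new)
    history.append(infected)
print(round(history[-1]))  # infections follow a logistic-style curve
```

Even this toy recurrence shows the qualitative behavior the paper models more precisely: early exponential growth that saturates as the healthy population is depleted.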

 

Suarez-Tangil, G.; Tapiador, J.E.; Lombardi, F.; Di Pietro, R., "Thwarting Obfuscated Malware via Differential Fault Analysis," Computer, vol.47, no.6, pp.24,31, June 2014. doi: 10.1109/MC.2014.169 Detecting malware in mobile applications has become increasingly complex as malware developers turn to advanced techniques to hide or obfuscate malicious components. Alterdroid is a dynamic-analysis tool that compares the behavioral differences between an original app and numerous automatically generated versions of it containing carefully injected modifications.
Keywords: invasive software; mobile computing; software fault tolerance; system monitoring; Alterdroid; differential fault analysis; dynamic-analysis tool; injected modifications; malicious components; malware detection; mobile applications; obfuscated malware; Computational modeling; Fault diagnosis; Feature extraction; Malware; Payloads; Smart phones; Alterdroid; Android; automatic testing; differential fault analysis; dynamic analysis; fuzzy testing; grayware; malware; privacy; security; smartphones (ID#: 15-4905)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838909&isnumber=6838865

 

Arora, Anshul; Garg, Shree; Peddoju, Sateesh K., "Malware Detection Using Network Traffic Analysis in Android Based Mobile Devices," Next Generation Mobile Apps, Services and Technologies (NGMAST), 2014 Eighth International Conference on, pp.66,71, 10-12 Sept. 2014. doi: 10.1109/NGMAST.2014.57 Smart phones, particularly Android-based ones, have attracted the user community with feature-rich apps for chatting, browsing, mailing, image editing, and video processing. However, the popularity of these devices has attracted malicious attackers as well. Statistics show that Android-based smart phones are more vulnerable to malware than other smart phones. None of the existing malware detection techniques has focused on network traffic features for detecting malicious activity; to the best of our knowledge, almost no work has been reported on detecting Android malware through network traffic analysis. This paper analyzes network traffic features and builds a rule-based classifier for detecting Android malware. Our experimental results suggest that the approach is remarkably accurate, detecting more than 90% of the traffic samples.
Keywords: Feature extraction; Malware; Mobile communication; Mobile computing; Servers; Smart phones; Telecommunication traffic; Analysis; Android; Detection; Malware; Mobile Devices; Network Traffic (ID#: 15-4906)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6982893&isnumber=6982871
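The abstract does not list the paper's actual traffic features or rules; the general shape of a rule-based flow classifier can be sketched with made-up features and thresholds:

```python
def classify_flow(flow):
    """Toy rule set flagging flows that look like malware beaconing.
    The features and thresholds are invented, not the paper's rules."""
    rules = [
        flow["avg_packet_size"] < 100,       # small, uniform packets
        flow["dest_port"] not in (80, 443),  # unusual service port
        flow["flows_per_minute"] > 30,       # high-frequency beaconing
    ]
    return "malicious" if sum(rules) >= 2 else "benign"

beacon = {"avg_packet_size": 64, "dest_port": 6667, "flows_per_minute": 50}
browse = {"avg_packet_size": 900, "dest_port": 443, "flows_per_minute": 2}
print(classify_flow(beacon), classify_flow(browse))  # → malicious benign
```

A rule-based scheme like this is attractive on mobile devices because it needs no model training on the handset and each rule is cheap to evaluate per flow.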

 

Dadlani, A.; Kumar, M.S.; Kiseon Kim; Sohraby, K., "Stability and Immunization Analysis of a Malware Spread Model Over Scale-Free Networks," Communications Letters, IEEE, vol.18, no.11, pp.1907, 1910, Nov. 2014. doi: 10.1109/LCOMM.2014.2361525 The spreading dynamics and control of infectious agents primarily depend on the connectivity properties of underlying networks. Here, we investigate the stability of a susceptible- infected-susceptible epidemic model incorporated with multiple infection stages and propagation vectors to mimic malware behavior over scale-free communication networks. In particular, we derive the basic reproductive ratio (R0) and provide results for stability analysis at infection-free and infection-chronic equilibrium points. Based on R0, the effectiveness of four prevailing immunization strategies as countermeasures is studied and compared. The outperformance of proportional and targeted immunization is justified via numerical results.
Keywords: computer crime; invasive software; R0; connectivity properties; immunization analysis; immunization strategies; infection stages; infection-chronic equilibrium points; infection-free equilibrium points; infectious agents control; malware behavior; malware spread model; propagation vectors; proportional immunization; reproductive ratio; scale-free communication networks; spreading dynamics; stability analysis; susceptible-infected-susceptible epidemic model; targeted immunization; Analytical models; Computational modeling; Malware; Mathematical model; Numerical models; Stability analysis; Vectors; Malware modeling; basic reproductive ratio; epidemiology; immunization; scale-free network; stability analysis (ID#: 15-4907)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6915859&isnumber=6949702
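The paper analyzes a multi-stage SIS model over scale-free topologies. As a much simpler illustration of the role the basic reproductive ratio R0 plays in such analyses, a homogeneous mean-field SIS iteration behaves as follows (beta, gamma, and the update rule are textbook simplifications, not the authors' model):

```python
def sis_fraction(beta, gamma, steps=5000, i0=0.01):
    """Iterate a homogeneous mean-field SIS model; return the final infected fraction."""
    i = i0
    for _ in range(steps):
        i = i + beta * i * (1 - i) - gamma * i
    return i

# In this simplified model the basic reproductive ratio is R0 = beta / gamma:
# the infection dies out when R0 < 1 and persists when R0 > 1.
dies_out = sis_fraction(beta=0.1, gamma=0.2)   # R0 = 0.5
endemic  = sis_fraction(beta=0.4, gamma=0.2)   # R0 = 2.0
print(round(dies_out, 4), round(endemic, 4))
```

The two regimes correspond to the infection-free and infection-chronic equilibria whose stability the paper establishes; on scale-free networks R0 additionally depends on the degree distribution, which is what makes targeted immunization effective there.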

 

Mas'ud, M.Z.; Sahib, S.; Abdollah, M.F.; Selamat, S.R.; Yusof, R., "Analysis of Features Selection and Machine Learning Classifier in Android Malware Detection," Information Science and Applications (ICISA), 2014 International Conference on, pp.1,5, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847364 The proliferation of Android-based mobile devices and mobile applications in the market has led malware authors to make mobile devices their next profitable target. With users now relying on mobile devices for web browsing, ubiquitous services, online banking, social networking, MMS, and more, ever more credential information is exposed to exploitation. Applying a security solution that works in the desktop environment to mobile devices may not be appropriate, as mobile devices have limited storage, memory, CPU, and power. Hence, there is a need to develop mobile malware detection that effectively defends mobile users from malicious threats while addressing the limitations of the mobile device environment. To that end, this research focused on evaluating the best feature selection to use with the best machine-learning classifiers. To find the best combination of both, five different feature-selection sets are applied to five different machine-learning classifiers. Classifier outcomes are evaluated using the True Positive Rate (TPR), False Positive Rate (FPR), and accuracy. The best combination of feature selection and classifier can reduce the feature set and at the same time classify infected Android applications accurately.
Keywords: Android (operating system); invasive software; learning (artificial intelligence); mobile computing; pattern classification; Android malware detection; Android-based mobile devices; FPR; TPR; accuracy; classifier outcome; false positive rate; features selection; information exploitation; machine learning classifier; mobile applications; mobile devices environment; mobile malware detection; true positive rate; Accuracy; Androids; Feature extraction; Humanoid robots; Malware; Mobile communication; Mobile handsets (ID#: 15-4908)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847364&isnumber=6847317
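The three evaluation measures named in the abstract are standard and computed from confusion-matrix counts; the counts below are hypothetical, not the paper's results:

```python
def classifier_metrics(tp, fp, tn, fn):
    """Compute the three measures used in the paper's evaluation."""
    tpr = tp / (tp + fn)                    # True Positive Rate (detection rate)
    fpr = fp / (fp + tn)                    # False Positive Rate
    acc = (tp + tn) / (tp + fp + tn + fn)   # overall accuracy
    return tpr, fpr, acc

# Hypothetical confusion-matrix counts for one feature-set/classifier pairing.
tpr, fpr, acc = classifier_metrics(tp=90, fp=5, tn=95, fn=10)
print(tpr, fpr, acc)  # → 0.9 0.05 0.925
```

Comparing five feature-selection sets against five classifiers, as the paper does, amounts to tabulating these three numbers for each of the 25 pairings and picking the best trade-off.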

 

Junghwan Rhee; Riley, R.; Zhiqiang Lin; Xuxian Jiang; Dongyan Xu, "Data-Centric OS Kernel Malware Characterization," Information Forensics and Security, IEEE Transactions on, vol. 9, no.1, pp.72, 87, Jan. 2014. doi: 10.1109/TIFS.2013.2291964 Traditional malware detection and analysis approaches have been focusing on code-centric aspects of malicious programs, such as detection of the injection of malicious code or matching malicious code sequences. However, modern malware has been employing advanced strategies, such as reusing legitimate code or obfuscating malware code to circumvent the detection. As a new perspective to complement code-centric approaches, we propose a data-centric OS kernel malware characterization architecture that detects and characterizes malware attacks based on the properties of data objects manipulated during the attacks. This framework consists of two system components with novel features: First, a runtime kernel object mapping system which has an un-tampered view of kernel data objects resistant to manipulation by malware. This view is effective at detecting a class of malware that hides dynamic data objects. Second, this framework consists of a new kernel malware detection approach that generates malware signatures based on the data access patterns specific to malware attacks. This approach has an extended coverage that detects not only the malware with the signatures, but also the malware variants that share the attack patterns by modeling the low level data access behaviors as signatures. Our experiments against a variety of real-world kernel rootkits demonstrate the effectiveness of data-centric malware signatures.
Keywords: data encapsulation; digital signatures; invasive software; operating system kernels; attack patterns; code-centric approach; data access patterns; data object manipulation; data-centric OS kernel malware characterization architecture; dynamic data object hiding; low level data access behavior modeling; malware attack characterization; malware signatures; real-world kernel rootkits; runtime kernel object mapping system; Data structures; Dynamic scheduling; Kernel; Malware; Monitoring; Resource management; Runtime; OS kernel malware characterization; data-centric malware analysis; virtual machine monitor (ID#: 15-4909)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6671356&isnumber=6684617

 

Farhadi, M.R.; Fung, B.C.M.; Charland, P.; Debbabi, M., "BinClone: Detecting Code Clones in Malware," Software Security and Reliability (SERE), 2014 Eighth International Conference on, pp.78,87, June 30 2014-July 2 2014. doi: 10.1109/SERE.2014.21 To gain an in-depth understanding of the behaviour of a malware, reverse engineers have to disassemble the malware, analyze the resulting assembly code, and then archive the commented assembly code in a malware repository for future reference. In this paper, we have developed an assembly code clone detection system called BinClone to identify the code clone fragments from a collection of malware binaries with the following major contributions. First, we introduce two deterministic clone detection methods with the goals of improving the recall rate and facilitating malware analysis. Second, our methods allow malware analysts to discover both exact and inexact clones at different token normalization levels. Third, we evaluate our proposed clone detection methods on real-life malware binaries. To the best of our knowledge, this is the first work that studies the problem of assembly code clone detection for malware analysis.
Keywords: invasive software; program diagnostics; reverse engineering; BinClone; assembly code analysis; assembly code clone detection system; code clone fragment identification; commented assembly code archiving; deterministic clone detection method; inexact clone discovery; malware analysis; malware behaviour understanding; malware binaries; malware disassembly; malware repository; recall rate; reverse engineers; token normalization level; Assembly; Cloning; Detectors; Feature extraction; Malware; Registers; Vectors; Assembly Code Clone Detection; Binary Analysis; Malware Analysis; Reverse Engineering (ID#: 15-4910)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895418&isnumber=6895396
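BinClone's exact normalization levels and matching methods are not given in the abstract; a minimal sketch of what token-normalized window matching between two assembly fragments can look like (the normalization rules and instruction sequences are invented for illustration):

```python
import re

def normalize(instr):
    """Token-normalize one assembly instruction (illustrative rules, not BinClone's)."""
    # Collapse common x86 registers to REG, then immediates/addresses to VAL.
    instr = re.sub(r"\be[a-d]x\b|\be[sd]i\b|\be[bs]p\b", "REG", instr)
    instr = re.sub(r"\b0x[0-9a-f]+\b|\b\d+\b", "VAL", instr)
    return instr

def clone_windows(code_a, code_b, w=3):
    """Report (offset_a, offset_b) pairs whose normalized w-instruction windows match."""
    norm_a = [normalize(i) for i in code_a]
    norm_b = [normalize(i) for i in code_b]
    windows_b = {tuple(norm_b[i:i + w]): i for i in range(len(norm_b) - w + 1)}
    return [(i, windows_b[tuple(norm_a[i:i + w])])
            for i in range(len(norm_a) - w + 1)
            if tuple(norm_a[i:i + w]) in windows_b]

a = ["mov eax, 5", "add eax, ebx", "push eax", "call 0x401000"]
b = ["mov ecx, 9", "add ecx, edx", "push ecx", "ret"]
print(clone_windows(a, b))  # → [(0, 0)]: first three instructions match once normalized
```

Normalizing registers and operands is what lets a detector find "inexact" clones: the two fragments above share no literal instruction, yet the same computation pattern surfaces at the normalized level.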

 

Yen-Ju Liu; Chong-Kuan Chen; Cho, M.C.Y.; Shiuhpyng Shieh, "Fast Discovery of VM-Sensitive Divergence Points with Basic Block Comparison," Software Security and Reliability (SERE), 2014 Eighth International Conference on, pp.196,205, June 30 2014-July 2 2014. doi: 10.1109/SERE.2014.33 To evade VM-based malware analysis systems, VM-aware malware equipped with the ability to detect the presence of virtual machine has appeared. To cope with the problem, detecting VM-aware malware and locating VM-sensitive divergence points of VM-aware malware is in urgent need. In this paper, we propose a novel block-based divergence locator. In contrast to the conventional instruction-based schemes, the block-based divergence locator divides malware program into basic blocks, instead of binary instructions, and uses them as the analysis unit. The block-based divergence locator significantly decrease the cost of behavior logging and trace comparison, as well as the size of behavior traces. As the evaluation showed, behavior logging is 23.87-39.49 times faster than the conventional schemes. The total number of analysis unit, which is highly related to the cost of trace comparisons, is 11.95%-16.00% of the conventional schemes. Consequently, VM-sensitive divergence points can be discovered more efficiently. The correctness of our divergence point discovery algorithm is also proved formally in this paper.
Keywords: invasive software; virtual machines; VM-based malware analysis systems; VM-sensitive divergence points; basic block comparison; binary instructions; block-based divergence locator; virtual machine; Emulation; Hardware; Indexes; Malware; Timing; Virtual machining; Virtualization; Malware Behavior Analysis; VM-Aware Malware; Virtual Machine (ID#: 15-4911)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895430&isnumber=6895396

 

Guri, M.; Kedma, G.; Kachlon, A.; Elovici, Y., "Resilience of Anti-malware Programs to Naïve Modifications of Malicious Binaries," Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint, pp.152,159, 24-26 Sept. 2014. doi: 10.1109/JISIC.2014.31 The massive amounts of malware variants which are released each day demand fast in-lab analysis, along with fast in-field detection. Traditional malware detection methodology depends on either static or dynamic in-lab analysis to identify a suspicious file as malicious. When a file is identified as malware, the analyst extracts a structural signature, which is dispatched to subscriber machines. The signature should enable fast scanning, and should also be flexible enough to detect simple variants. In this paper we discuss 'naïve' variants which can be produced by a modestly skilled individual with publicly accessible tools and knowhow which, if needed, can be found on the Internet. Furthermore, those variants can be derived directly from the malicious binary file, allowing anyone who has access to the binary file to modify it at his or her will. Modification can be automated, to produce large amounts of variants in a short time. We describe several naïve modifications. We also put them to the test against multiple antivirus products, resulting in a significant decline of the average detection rate, compared to the original (unmodified) detection rate. Since the aforementioned decline may be related, at least in some cases, to avoidance of probable false positives, we also discuss the acceptable rate of false positives in the context of malware detection.
Keywords: invasive software; Internet; anti-malware program resilience; antivirus products; average detection rate; malicious binary file; naive variants; Conferences; Informatics; Joints; Security; crafty malware; false positive; malware analysis; malware detection; malware variants (ID#: 15-4912)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975567&isnumber=6975536
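The paper's specific modifications are not enumerated in the abstract, but the basic point, that trivial, automatable byte-level changes defeat any naive full-file hash signature, can be demonstrated directly (the toy "binary" below is fabricated):

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# A toy "binary": flipping a single byte in a non-executed region (e.g. padding)
# is enough to break any full-file hash signature while preserving behavior.
original = bytes.fromhex("4d5a9000") + b"\x00" * 64   # fake MZ header + padding
variant = bytearray(original)
variant[-1] ^= 0xFF                                    # trivial, automatable change

print(sha256(original) == sha256(bytes(variant)))  # → False
```

This is why the signatures the paper discusses must be structural and tolerant of simple variants, and why their measured resilience against naive modifications matters.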

 

Belaoued, M.; Mazouzi, S., "Statistical Study of Imported APIs by PE Type Malware," Advanced Networking Distributed Systems and Applications (INDS), 2014 International Conference on, pp.82,86, 17-19 June 2014. doi: 10.1109/INDS.2014.22 In this paper we present a statistical study identifying which Windows APIs are most imported by malware code. To do so, we used a number of infected Portable Executable (PE) files and a number of uninfected ones, applying the chi-squared (Khi2) statistical test to determine whether an API is likely to be used by malware. We believe such work is necessary and important for behavior-based malware detection, especially for approaches that use API imports to analyze PE code. For experimentation purposes, we used a large set of PE files extracted from known databases to perform our analysis and establish our conclusions.
Keywords: application program interfaces; invasive software; operating systems (computers); statistical testing; API importations; PE type malware; Windows API; behavior-based malware detection; infected portable executable files; malware codes; statistical Khi2 test; statistical study; Computers; Data mining; Malware; Operating systems; Testing; Malware; Malware analysis; Statistical hypothesis testing; windows API (ID#: 15-4913)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6969062&isnumber=6969040
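The chi-squared test the authors apply reduces, for one API, to a statistic over a 2x2 contingency table of import counts; the API and the counts below are hypothetical:

```python
def chi_squared_2x2(a, b, c, d):
    """
    Chi-squared statistic for a 2x2 contingency table:
        a = malware files importing the API,  b = malware files not importing it
        c = benign  files importing the API,  d = benign  files not importing it
    """
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical counts for one API suspected to be malware-associated.
stat = chi_squared_2x2(a=80, b=20, c=10, d=90)
print(round(stat, 2))  # → 98.99
# With 1 degree of freedom, stat > 3.84 rejects independence at the 5% level,
# i.e. the API's presence is statistically associated with malware.
```

Running this test per API over a labeled corpus yields exactly the kind of ranking of "most malware-imported" Windows APIs the study sets out to produce.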

 

Yamamoto, T.; Kawauchi, K.; Sakurai, S., "Proposal of a Method Detecting Malicious Processes," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp.518,523, 13-16 May 2014. doi: 10.1109/WAINA.2014.164 Methods for detecting malware communication based on its characteristics have been proposed. However, as malware grows more sophisticated and legitimate software communication more diverse, it becomes harder to correctly tell the two apart. We therefore propose a method to check whether a process generating suspicious communication is malicious. The method focuses on malware that impersonates a legitimate process by injecting malicious code into it. It extracts two process images: one obtained from the process to be checked (the target process) that is generating suspicious communication, and one obtained by executing the same executable as the target process in a clean virtual machine. The two process images are then compared to extract the injected code, which is finally verified as malicious or not.
Keywords: invasive software; virtual machines; legitimate software communication; malicious codes; malicious process detection; malware communication detection methods; suspicious communication; virtual machine; Binary codes; Cryptography; Data mining; Malware; Organizations; Ports (Computers); Software; Malware; communication; process; code injection; memory analysis (ID#: 15-4914)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844689&isnumber=6844560
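A toy sketch of the image-comparison step: diffing the in-memory image of a target process against the image of the same executable run in a clean VM to isolate injected bytes. The byte strings below are fabricated, and a real implementation must also handle relocations and legitimately variable regions:

```python
def extract_injected(clean: bytes, target: bytes):
    """Return (offset, bytes) runs present in the target image but not the clean one."""
    runs, start = [], None
    for i, (c, t) in enumerate(zip(clean, target)):
        if c != t and start is None:
            start = i                                # a differing run begins
        elif c == t and start is not None:
            runs.append((start, target[start:i]))    # the run ended at i
            start = None
    if start is not None:
        runs.append((start, target[start:]))
    return runs

clean  = b"\x55\x8b\xec\x90\x90\x90\x90\xc3"   # image from the clean-VM execution
target = b"\x55\x8b\xec\xe8\xde\xad\xbe\xc3"   # same process on the live host

print(extract_injected(clean, target))  # → [(3, b'\xe8\xde\xad\xbe')]
```

The extracted runs are exactly the candidate injected code that the method then passes on for malicious/benign verification.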

 

Xin Li; Xinyuan Wang; Wentao Chang, "CipherXRay: Exposing Cryptographic Operations and Transient Secrets from Monitored Binary Execution," Dependable and Secure Computing, IEEE Transactions on, vol. 11, no. 2, pp.101,114, March-April 2014. doi: 10.1109/TDSC.2012.83 Malware is becoming increasingly stealthy; more and more malware uses cryptographic algorithms (e.g., packing, encrypting C&C communication) to protect itself from being analyzed. The use of cryptographic algorithms and truly transient cryptographic secrets inside the malware binary imposes a key obstacle to effective malware analysis and defense. To enable more effective malware analysis, forensics, and reverse engineering, we have developed CipherXRay - a novel binary analysis framework that can automatically identify and recover the cryptographic operations and transient secrets from the execution of potentially obfuscated binary executables. Based on the avalanche effect of cryptographic functions, CipherXRay is able to accurately pinpoint the boundary of cryptographic operations and recover truly transient cryptographic secrets that only exist in memory for one instant in between multiple nested cryptographic operations. CipherXRay can further identify certain operation modes (e.g., ECB, CBC, CFB) of the identified block cipher and tell whether the identified block cipher operation is encryption or decryption in certain cases. We have empirically validated CipherXRay with OpenSSL, the popular password safe KeePassX, the ciphers used by the malware Stuxnet, Kraken and Agobot, and a number of third-party software packages with built-in compression and checksums. CipherXRay is able to identify various cryptographic operations and recover cryptographic secrets that exist in memory for only a few microseconds. Our results demonstrate that current software implementations of cryptographic algorithms hardly achieve any secrecy if their execution can be monitored.
Keywords: cryptography; invasive software; reverse engineering; Agobot; CipherXRay; KeePassX; Kraken; OpenSSL; Stuxnet; avalanche effect; binary analysis framework; block cipher operation; cryptographic algorithms; cryptographic functions; cryptographic operations; forensics; malware analysis; monitored binary execution; reverse engineering; third party softwares; transient cryptographic secrets; transient secrets; Algorithm design and analysis; Encryption; Malware; Monitoring; Transient analysis; Binary analysis; avalanche effect; key recovery attack on cryptosystem; reverse engineering; secrecy of monitored execution; transient cryptographic secret recovery (ID#: 15-4915)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6311407&isnumber=6785951
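CipherXRay itself works on instrumented execution traces, which are beyond a short sketch, but the avalanche property it relies on is easy to demonstrate directly (using SHA-256 here purely as an example cryptographic function): flipping one input bit changes roughly half of the output bits, which is what makes the input/output boundary of a cryptographic operation stand out in a trace.

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count the bits that differ between two equal-length byte strings."""
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

msg = b"transient secret"
flipped = bytes([msg[0] ^ 0x01]) + msg[1:]   # flip a single input bit

h1 = hashlib.sha256(msg).digest()
h2 = hashlib.sha256(flipped).digest()
print(bit_diff(h1, h2))  # roughly half of the 256 output bits change
```

Non-cryptographic transformations (copies, arithmetic, encodings) show no such amplification, so a sharp jump in input-sensitivity localizes where a cipher or hash begins and ends, and with it, where the transient key material must live.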

 

Yuan Zhang; Min Yang; Zhemin Yang; Guofei Gu; Peng Ning; Binyu Zang, "Permission Use Analysis for Vetting Undesirable Behaviors in Android Apps," Information Forensics and Security, IEEE Transactions on, vol. 9, no.11, pp.1828,1842, Nov. 2014. doi: 10.1109/TIFS.2014.2347206 The android platform adopts permissions to protect sensitive resources from untrusted apps. However, after permissions are granted by users at install time, apps could use these permissions (sensitive resources) with no further restrictions. Thus, recent years have witnessed the explosion of undesirable behaviors in Android apps. An important part in the defense is the accurate analysis of Android apps. However, traditional syscall-based analysis techniques are not well-suited for Android, because they could not capture critical interactions between the application and the Android system. This paper presents VetDroid, a dynamic analysis platform for generally analyzing sensitive behaviors in Android apps from a novel permission use perspective. VetDroid proposes a systematic permission use analysis technique to effectively construct permission use behaviors, i.e., how applications use permissions to access (sensitive) system resources, and how these acquired permission-sensitive resources are further utilized by the application. With permission use behaviors, security analysts can easily examine the internal sensitive behaviors of an app. Using real-world Android malware, we show that VetDroid can clearly reconstruct fine-grained malicious behaviors to ease malware analysis. We further apply VetDroid to 1249 top free apps in Google Play. VetDroid can assist in finding more information leaks than TaintDroid, a state-of-the-art technique. In addition, we show how we can use VetDroid to analyze fine-grained causes of information leaks that TaintDroid cannot reveal. Finally, we show that VetDroid can help to identify subtle vulnerabilities in some (top free) applications otherwise hard to detect.
Keywords: Android (operating system); invasive software; mobile computing; Android system; Google Play; TaintDroid; VetDroid; analysis technique; android apps; android platform; critical interactions; dynamic analysis platform; internal sensitive behaviors; malicious behaviors; malware analysis; permission use analysis; real-world Android malware; security analysts; sensitive resource protection; systematic permission; vetting undesirable behaviors; Androids; Humanoid robots; Kernel; Linux; Malware; Smart phones; Android security; android behavior representation; permission use analysis; vetting undesirable behaviors (ID#: 15-4916)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876208&isnumber=6912034

 

Hirono, S.; Yamaguchi, Y.; Shimada, H.; Takakura, H., "Development of a Secure Traffic Analysis System to Trace Malicious Activities on Internal Networks," Computer Software and Applications Conference (COMPSAC), 2014 IEEE 38th Annual, pp.305,310, 21-25 July 2014. doi: 10.1109/COMPSAC.2014.41 In contrast to conventional cyberattacks such as mass infection malware, targeted attacks take a long time to complete their mission. By using a dedicated malware for evading detection at the initial attack, an attacker quietly succeeds in setting up a front-line base in the target organization. Communication between the attacker and the base adopts popular protocols to hide its existence. Because conventional countermeasures deployed on the boundary between the Internet and the internal network will not work adequately, monitoring on the internal network becomes indispensable. In this paper, we propose an integrated sandbox system that deploys a secure and transparent proxy to analyze internal malicious network traffic. The adoption of software defined networking technology makes it possible to redirect any internal traffic from/to a suspicious host to the system for an examination of its insidiousness. When our system finds malicious activity, the traffic is blocked. If the malicious traffic is regarded as mandatory, e.g., for controlled delivery, the system works as a transparent proxy to bypass it. For benign traffic, the system works as a transparent proxy, as well. If binary programs are found in traffic, they are automatically extracted and submitted to a malware analysis module of the sandbox. In this way, we can safely identify the intention of the attackers without making them aware of our surveillance.
Keywords: Internet; invasive software; telecommunication security; telecommunication traffic; Internet; cyberattacks; integrated sandbox system; internal malicious network traffic analysis; internal networks; malware analysis module; mass infection malware; secure proxy; secure traffic analysis system; software defined networking technology; transparent proxy; Electronic mail; Indexes; Internet; Malware; Protocols; Servers; dynamic analysis; malware; sandbox; targeted attack (ID#: 15-4917)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6899231&isnumber=6899181

 

Yutao Liu; Yubin Xia; Haibing Guan; Binyu Zang; Haibo Chen, "Concurrent and Consistent Virtual Machine Introspection With Hardware Transactional Memory," High Performance Computer Architecture (HPCA), 2014 IEEE 20th International Symposium on, pp. 416-427, 15-19 Feb. 2014. doi: 10.1109/HPCA.2014.6835951 Virtual machine introspection, which provides tamper-resistant, high-fidelity “out of the box” monitoring of virtual machines, has many prominent security applications including VM-based intrusion detection, malware analysis and memory forensic analysis. However, prior approaches are either intrusive in stopping the world to avoid race conditions between introspection tools and the guest VM, or provide no guarantee of getting a consistent state of the guest VM. Further, there is currently no effective means for timely examining the VM states in question. In this paper, we propose a novel approach, called TxIntro, which retrofits hardware transactional memory (HTM) for concurrent, timely and consistent introspection of guest VMs. Specifically, TxIntro leverages the strong atomicity of HTM to actively monitor updates to critical kernel data structures. Then TxIntro can mount introspection to timely detect malicious tampering. To avoid fetching inconsistent kernel states for introspection, TxIntro uses HTM to add related synchronization states into the read set of the monitoring core and thus can easily detect potential in-flight concurrent kernel updates. We have implemented and evaluated TxIntro based on Xen VMM on a commodity Intel Haswell machine that provides restricted transactional memory (RTM) support. To demonstrate the effectiveness of TxIntro, we implemented a set of kernel rootkit detectors using TxIntro. Evaluation results show that TxIntro is effective in detecting these rootkits, and adds negligible performance overhead.
Keywords: digital forensics; invasive software; virtual machines; HTM; TxIntro; VM-based intrusion detection; Xen VMM; commodity Intel Haswell machine; hardware transactional memory; kernel state; malicious tampering; malware analysis; memory forensic analysis; security application; virtual machine introspection; Abstracts; Continuous wavelet transforms; Educational institutions; Kernel; Monitoring; Single photon emission computed tomography; Virtual machine monitors (ID#: 15-4918)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6835951&isnumber=6835920

 

Zhao Lei; Ren Xiangyu; Liu Mengleng; Wang Lina; Zhang Hao; Zhang Huanguo, "Collaborative Reversing of Input Formats and Program Data Structures for Security Applications," Communications, China, vol. 11, no. 9, pp. 135-147, Sept. 2014. doi: 10.1109/CC.2014.6969778 Reversing the syntactic format of program inputs and data structures in binaries plays a vital role in understanding program behaviors in many security applications. In this paper, we propose a collaborative reversing technique that captures the mapping relationship between input fields and program data structures. The key insight behind our paper is that a program uses corresponding data structures as references to parse and access different input fields, and every field can be identified by reversing its corresponding data structure. In detail, we use a fine-grained dynamic taint analysis to monitor the propagation of inputs. By identifying base pointers for each input byte, we can reverse data structures and conversely identify fields based on their referencing data structures. We conduct several experiments to evaluate its effectiveness. Experimental results show that our approach can effectively reverse precise input formats, and provide unique benefits to two representative security applications, exploit diagnosis and malware analysis.
Keywords: data structures; groupware; security of data; collaborative reversing technique; exploit diagnosis; input formats; malware analysis; program behavior understanding; program data structures; security applications; Collaboration; Computer security; Data structures; Monitoring; Protocols; Syntactics; fine-grained dynamic tainting; reversing engineering; software security (ID#: 15-4919)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6969778&isnumber=6969702
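The paper's key insight, that each input field can be recovered by grouping the bytes a program accesses through one base pointer, can be sketched in a toy form. The trace format and addresses below are invented purely for illustration, not taken from the paper's implementation:

```python
def fields_from_trace(trace):
    """trace: iterable of (base_pointer, byte_offset) pairs observed while
    the (hypothetical) instrumented program parses its input.  Bytes that
    share a base pointer are grouped into one recovered field."""
    fields = {}
    for base, off in trace:
        fields.setdefault(base, set()).add(off)
    # Approximate each field by the byte range its base pointer touches.
    return {base: (min(o), max(o)) for base, o in fields.items()}

# Example: one struct pointer reads bytes 0-3 (say, a length field),
# another reads bytes 4-9 (say, a payload field).
trace = [(0x100, o) for o in range(4)] + [(0x200, o) for o in range(4, 10)]
layout = fields_from_trace(trace)
```

A real fine-grained taint tracker would derive such a trace from instrumented loads and stores; this sketch only shows the grouping step.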

 

Boukhtouta, A.; Lakhdari, N.-E.; Debbabi, M., "Inferring Malware Family through Application Protocol Sequences Signature," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp. 1-5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814026 The dazzling emergence of cyber-threats exerts pressure on today's cyberspace, which needs practical and efficient capabilities for malware traffic detection. In this paper, we propose an extension to an initial research effort, namely, towards fingerprinting malicious traffic, by putting an emphasis on the attribution of maliciousness to malware families. The technique proposed in the previous work establishes a synergy between automatic dynamic analysis of malware and machine learning to fingerprint badness in network traffic. Machine learning algorithms are used with features that exploit only high-level properties of traffic packets (e.g. packet headers). Besides the detection of malicious packets, we want to enhance the fingerprinting capability with the identification of the malware families responsible for the generation of malicious packets. The identification of the underlying malware family is derived from a sequence of application protocols, which is used as a signature for the family in question. Furthermore, our results show that our technique achieves a promising malware family identification rate with low false positives.
Keywords: computer network security; invasive software; learning (artificial intelligence); application protocol sequences signature; cyber-threats; machine learning algorithm; malicious packets detection; malware automatic dynamic analysis; malware traffic detection; network traffic; Cryptography; Databases; Engines; Feeds; Malware; Protocols (ID#: 15-4920)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814026&isnumber=6813963
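The family-attribution idea, matching an observed sequence of application protocols against per-family signatures, can be illustrated as follows. The signatures and family names are entirely made up; the paper derives its signatures from dynamic analysis, not a hand-written table:

```python
# Hypothetical family signatures: ordered application-protocol sequences.
SIGNATURES = {
    ("DNS", "HTTP", "IRC"): "family-A",
    ("DNS", "SMTP", "SMTP"): "family-B",
}

def is_subsequence(sig, observed):
    """True if sig appears in observed in order (not necessarily contiguously)."""
    it = iter(observed)
    return all(proto in it for proto in sig)

def identify_family(observed):
    """Return the first family whose protocol signature matches the trace."""
    for sig, family in SIGNATURES.items():
        if is_subsequence(sig, observed):
            return family
    return None
```

For example, a host whose traffic shows DNS, then TLS, then HTTP, then IRC would match the (hypothetical) family-A signature even with the unrelated TLS flow in between.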

 

Finickel, E.; Lahmadi, A.; Beck, F.; Festor, O., "Empirical Analysis of Android Logs Using Self-Organizing Maps," Communications (ICC), 2014 IEEE International Conference on, pp. 1802-1807, 10-14 June 2014. doi: 10.1109/ICC.2014.6883584 In this paper, we present an empirical analysis of the logs generated by the logging system available in Android environments. The logs are mainly related to the execution of the different components of applications and services running on an Android device. We have analysed the logs using self-organizing maps, where our goal is to establish behavioural fingerprints of Android applications. Each fingerprint is built using information available in logs related to the structure of an application and its interaction with the system. The developed methodology allows us to better understand Android Apps regarding their granted permissions and performed actions, and it proves to be promising for the analysis of malware applications with a minimal overhead and cost.
Keywords: invasive software; self-organising feature maps; smart phones; Android Apps; Android device; Android logs analysis; behavioural fingerprints; logging system; malware application analysis; self-organizing maps; Androids; Humanoid robots; Image color analysis; Malware; Smart phones; Software; Vectors (ID#: 15-4921)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883584&isnumber=6883277
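A minimal self-organizing map update of the kind used to cluster log-derived fingerprints can be sketched as below. This simplification updates only the best matching unit and omits the neighborhood function a full SOM uses; dimensions and learning rate are toy values:

```python
def bmu(weights, x):
    """Index of the best matching unit: the closest weight vector."""
    return min(range(len(weights)),
               key=lambda i: sum((w - v) ** 2 for w, v in zip(weights[i], x)))

def train_step(weights, x, lr=0.5):
    """Move the BMU's weight vector a fraction lr toward the input."""
    i = bmu(weights, x)
    weights[i] = [w + lr * (v - w) for w, v in zip(weights[i], x)]
    return i

# Two units in a toy 2-D feature space; a fingerprint near (1, 1) should
# pull the second unit toward it.
weights = [[0.0, 0.0], [1.0, 1.0]]
winner = train_step(weights, [0.9, 0.8])
```

After many such steps over a corpus of fingerprints, nearby units come to represent clusters of similarly behaving applications.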

 

Kellogg, Lee; Ruttenberg, Brian; O'Connor, Alison; Howard, Michael; Pfeffer, Avi, "Hierarchical Management of Large-Scale Malware Data," Big Data (Big Data), 2014 IEEE International Conference on, pp. 666-674, 27-30 Oct. 2014. doi: 10.1109/BigData.2014.7004290 As the pace of generation of new malware accelerates, clustering and classifying newly discovered malware requires new approaches to data management. We describe our Big Data approach to managing malware to support effective and efficient malware analysis on large and rapidly evolving sets of malware. The key element of our approach is a hierarchical organization of the malware, which organizes malware into families, maintains a rich description of the relationships between malware, and facilitates efficient online analysis of new malware as they are discovered. Using clustering evaluation metrics, we show that our system discovers malware families comparable to those produced by traditional hierarchical clustering algorithms, while scaling much better with the size of the data set. We also show the flexibility of our system as it relates to substituting various data representations, methods of comparing malware binaries, clustering algorithms, and other factors. Our approach will enable malware analysts and investigators to quickly understand and quantify changes in the global malware ecosystem.
Keywords:  (not provided) (ID#: 15-4922)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7004290&isnumber=7004197
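The online family organization described above can be approximated by a simple incremental scheme: assign a new sample to the nearest existing family if it is close enough, otherwise start a new family. The distance function, feature vectors, and threshold below are illustrative stand-ins for whatever representation the system actually uses:

```python
def hamming(a, b):
    """Toy distance between two binary feature vectors."""
    return sum(x != y for x, y in zip(a, b))

def assign(sample, families, threshold=2):
    """families: dict name -> representative feature vector.
    Join the nearest family within threshold, else create a new one."""
    best = min(families, key=lambda f: hamming(sample, families[f]), default=None)
    if best is not None and hamming(sample, families[best]) <= threshold:
        return best
    name = f"family-{len(families)}"
    families[name] = sample
    return name

families = {}
first = assign([0, 0, 0, 0], families)   # seeds a new family
second = assign([0, 0, 0, 1], families)  # close enough to join it
third = assign([1, 1, 1, 1], families)   # too far: new family
```

Unlike batch agglomerative clustering, this one-pass assignment scales linearly with the stream of incoming samples, which is the scaling property the paper emphasizes.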

 

Xiong Ping; Wang Xiaofeng; Niu Wenjia; Zhu Tianqing; Li Gang, "Android Malware Detection With Contrasting Permission Patterns," Communications, China, vol. 11, no. 8, pp. 1-14, Aug. 2014. doi: 10.1109/CC.2014.6911083 As the risk of malware is sharply increasing on the Android platform, Android malware detection has become an important research topic. Existing works have demonstrated that the required permissions of Android applications are valuable for malware analysis, but how to exploit those permission patterns for malware detection remains an open issue. In this paper, we introduce contrasting permission patterns to characterize the essential differences between malware and clean applications from the permission aspect. Then a framework based on contrasting permission patterns is presented for Android malware detection. According to the proposed framework, an ensemble classifier, Enclamald, is further developed to detect whether an application is potentially malicious. Every contrasting permission pattern acts as a weak classifier in Enclamald, and the weighted predictions of the involved weak classifiers are aggregated into the final result. Experiments on real-world applications validate that the proposed Enclamald classifier outperforms commonly used classifiers for Android malware detection.
Keywords: Android (operating system); invasive software; pattern classification; Android malware detection; Enclamald ensemble classifier; contrasting permission patterns; weak classifiers; weighted predictions; Androids; Educational institutions; Humanoid robots; Internet; Malware; Smart phones; Training; Android; classification; contrast set; malware detection; permission pattern (ID#: 15-4923)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911083&isnumber=6911078
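The aggregation step, weak pattern-based classifiers combined by weighted vote, can be sketched as follows. The permission patterns, weights, and threshold are invented for illustration; Enclamald learns its patterns and weights from data:

```python
# Each weak classifier fires when its permission pattern is a subset of the
# app's requested permissions.  Hypothetical weights: positive patterns
# indicate malware, negative patterns indicate clean apps.
PATTERNS = [
    ({"SEND_SMS", "READ_CONTACTS"}, +0.9),
    ({"INTERNET", "RECEIVE_BOOT_COMPLETED"}, +0.4),
    ({"INTERNET"}, -0.2),
]

def score(permissions):
    """Weighted sum of all weak classifiers that fire on this app."""
    return sum(w for pattern, w in PATTERNS if pattern <= permissions)

def is_malicious(permissions, threshold=0.5):
    return score(permissions) > threshold
```

An app requesting SEND_SMS, READ_CONTACTS, and INTERNET would trip the strong malware-indicating pattern and be flagged, while an app requesting only INTERNET would not.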

 

Ban Xiaofang; Chen Li; Hu Weihua; Wu Qu, "Malware Variant Detection Using Similarity Search Over Content Fingerprint," Control and Decision Conference (2014 CCDC), The 26th Chinese, pp. 5334-5339, May 31 2014-June 2 2014. doi: 10.1109/CCDC.2014.6852216 Detection of polymorphic malware variants plays an important role in improving information system security. Traditional static/dynamic analysis technologies have been shown to effectively characterize polymorphic malware instances. While these approaches demonstrate promise, they are themselves subject to a growing array of countermeasures that increase the cost of capturing these malware code features. Further, feature extraction requires a time investment per malware that does not scale well to the daily volume of malwares being reported by those who diligently collect malware. In this paper, we propose a similarity search of malware using novel distance (similarity) metrics of malware content fingerprints based on locality-sensitive hashing (LSH) schemes. We describe a malware by the binary content it contains; the next step is to compute a feature fingerprint for the malware binary image sample using the SURF algorithm, and then do fast fingerprint matching with LSH over the malware code corpus to return the most visually (structurally) similar variants. The LSH algorithm that captures malware similarity is based on image similarity. We implement the B2M (Binary mapping to image) algorithm, the SURF algorithm and the LSH algorithm in a complete malware variant detection system. The evaluation shows that our approach is highly effective in terms of response time and malware variant detection.
Keywords: cryptography; feature extraction; fingerprint identification; image coding; image matching; invasive software; B2M; LSH; SURF algorithm; binary mapping to image algorithm; content fingerprint; distance metrics; fast fingerprint matching; feature extraction; feature fingerprint; image similarity; information system security; locality-sensitive hashing schemes; malware binary image; malware code corpus; malware code features; malware variant detection; similarity search; Algorithm design and analysis; Data visualization; Feature extraction; Fingerprint recognition; Force; Malware; Vectors; Content Fingerprint; Locality-sensitive Hashing; Malware Variant Detection; Similarity Search (ID#: 15-4924)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852216&isnumber=6852105
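A minimal sketch of locality-sensitive hashing over binary content fingerprints, assuming a simple bit-sampling scheme (the paper's fingerprints come from SURF features; the table count, bits per table, and data here are arbitrary toy choices):

```python
import random

def lsh_index(fingerprints, n_tables=4, bits_per_table=3, dim=16, seed=7):
    """Index binary fingerprints: each table samples a few bit positions,
    so near-identical fingerprints collide in at least one bucket."""
    rng = random.Random(seed)
    projections = [rng.sample(range(dim), bits_per_table) for _ in range(n_tables)]
    tables = [{} for _ in projections]
    for name, fp in fingerprints.items():
        for table, pos in zip(tables, projections):
            table.setdefault(tuple(fp[i] for i in pos), set()).add(name)
    return projections, tables

def query(fp, projections, tables):
    """Union of all bucket collisions: candidate similar variants."""
    candidates = set()
    for table, pos in zip(tables, projections):
        candidates |= table.get(tuple(fp[i] for i in pos), set())
    return candidates

fps = {"variant-a": [0] * 16, "variant-b": [1] * 16}
proj, tables = lsh_index(fps)
result = query([0] * 16, proj, tables)
```

Only the coarse candidates come back from the hash tables; a real system would then rank them with the full distance metric.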

 

Wang, Ping; Chao, Wun Jie; Chao, Kuo-Ming; Lo, Chi-Chun, "Using Taint Analysis for Threat Risk of Cloud Applications," e-Business Engineering (ICEBE), 2014 IEEE 11th International Conference on, pp. 185-190, 5-7 Nov. 2014. doi: 10.1109/ICEBE.2014.40 Most existing approaches to developing cloud applications using threat analysis involve program vulnerability analyses for identifying the security holes associated with malware attacks. New malware attacks can bypass firewall-based detection by bypassing stack protection and by using Hypertext Transfer Protocol logging, kernel hacks, and library hack techniques to attack cloud applications. In performing threat analysis for unspecified malware attacks, software engineers can use a taint analysis technique for tracking information flows between attack sources (malware) and detecting vulnerabilities of targeted network applications. This paper proposes a threat risk analysis model incorporating an improved attack tree analysis scheme for solving the mobile security problem. In the model, Android programs perform taint checking to analyse the risks posed by suspicious applications. In probabilistic risk analysis, defence evaluation metrics are used for each attack path to help a defender simulate the attack results against malware attacks and estimate the impact losses. Finally, a case of threat analysis of a typical cyber security attack is presented to demonstrate the proposed approach.
Keywords: Analytical models; Malware; Measurement; Probabilistic logic; Risk analysis; Software; Attack defence tree; Cyber attacks; Taint checking; Threat; analysis (ID#: 15-4925)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6982078&isnumber=6982037
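The probabilistic evaluation over attack paths can be illustrated by multiplying step success probabilities along each path of the attack tree and weighting by the impact loss if the path succeeds. All probabilities, impacts, and path descriptions below are hypothetical:

```python
from math import prod

def path_risk(step_probs, impact):
    """Expected loss of one attack path: the product of the per-step
    success probabilities times the impact if the path succeeds."""
    return prod(step_probs) * impact

# Hypothetical attack tree with two paths to the attacker's goal.
paths = [
    ([0.8, 0.5], 10000.0),        # e.g. phishing -> privilege escalation
    ([0.3, 0.9, 0.7], 50000.0),   # e.g. repackaged app -> taint leak -> exfiltration
]
total_expected_loss = sum(path_risk(p, loss) for p, loss in paths)
```

A defender can then rank countermeasures by how much each one reduces this aggregate expected loss.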



Malware Analysis, Part 3

 

 

Malware detection, analysis, and classification are perennial issues in cybersecurity. The research presented here advances malware analysis in some unique and interesting ways. The works cited were published or presented in 2014.  Because of the volume of work, the bibliography is broken into multiple parts.


Maier, Dominik; Müller, Tilo; Protsenko, Mykola, "Divide-and-Conquer: Why Android Malware Cannot Be Stopped," Availability, Reliability and Security (ARES), 2014 Ninth International Conference on, pp. 30-39, 8-12 Sept. 2014. doi: 10.1109/ARES.2014.12 Abstract: In this paper, we demonstrate that Android malware can bypass all automated analysis systems, including AV solutions, mobile sandboxes, and the Google Bouncer. We propose a tool called Sand-Finger for the fingerprinting of Android-based analysis systems. By analyzing the fingerprints of ten unique analysis environments from different vendors, we were able to find characteristics in which all tested environments differ from actual hardware. Depending on the availability of an analysis system, malware can either behave benignly or load malicious code at runtime. We classify this group of malware as Divide-and-Conquer attacks that are efficiently obfuscated by a combination of fingerprinting and dynamic code loading. In this group, we aggregate attacks that work against dynamic as well as static analysis. To demonstrate our approach, we create proof-of-concept malware that surpasses up-to-date malware scanners for Android. We also prove that known malware samples can enter the Google Play Store by modifying them only slightly. Due to Android's lack of an API for malware scanning at runtime, it is impossible for AV solutions to secure Android devices against these attacks.
Keywords: Androids; Google; Hardware; Humanoid robots; Malware; Mobile communication; Smart phones; AV; Android Malware; Google Bouncer; Mobile Sandboxes; Obfuscation; Static and Dynamic Analysis (ID#: 15-4926)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6980261&isnumber=6980232
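The Divide-and-Conquer decision the authors describe, behave benignly inside an analysis system and load the payload only on real hardware, reduces to comparing observed device properties against known analysis-environment fingerprints. The property names and fingerprint values below are invented for illustration:

```python
# Hypothetical fingerprints of analysis environments, keyed by properties
# that (per the paper) differ between analysis systems and real hardware.
KNOWN_SANDBOXES = [
    {"build_model": "generic", "has_telephony": False},
    {"build_model": "sdk", "has_telephony": False},
]

def looks_like_sandbox(props):
    """True if the observed properties match any known fingerprint."""
    return any(all(props.get(k) == v for k, v in fp.items())
               for fp in KNOWN_SANDBOXES)

def run(props):
    # Divide-and-conquer: fetch and execute the payload only on real devices,
    # so dynamic analysis sees nothing malicious.
    return "benign-behavior" if looks_like_sandbox(props) else "load-payload"
```

The defensive corollary is the paper's point: as long as analysis environments are fingerprintable, this check lets malware evade both static analysis (the payload is absent) and dynamic analysis (the payload never runs).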

 

Xiaoguang Han; Jigang Sun; Wu Qu; Xuanxia Yao, "Distributed Malware Detection Based on Binary File Features in Cloud Computing Environment," Control and Decision Conference (2014 CCDC), The 26th Chinese, pp. 4083-4088, May 31 2014-June 2 2014. doi: 10.1109/CCDC.2014.6852896  Abstract: A number of techniques have been devised by researchers to counter malware attacks, and machine learning techniques play an important role in automated malware detection. Several machine learning approaches have been applied to malware detection, based on different features derived from dynamic analysis of the malware. While these methods demonstrate promise, they pose at least two major challenges. First, these approaches are subject to a growing array of countermeasures that increase the cost of capturing these malware binary executable file features. Further, feature extraction requires a time investment per binary file that does not scale well to the daily volume of malware instances being reported by those who diligently collect malware. To address the first challenge, this article proposes a binary-to-image projection algorithm based on a new type of feature extraction for malware, introduced in [2]. To address the second challenge, the technique's scalability is demonstrated through an implementation of the distributed (Key, Value) abstraction in a cloud computing environment. Both theoretical and empirical evidence demonstrate its effectiveness over other state-of-the-art malware detection techniques on a malware corpus, and the proposed method could be a useful and efficient complement to dynamic analysis.
Keywords: cloud computing; invasive software; learning (artificial intelligence); automated malware detection; binary-to-image projection algorithm; cloud computing environment; distributed malware detection; dynamic analysis; feature extraction; machine learning; malware attacks; malware binary executable file features; time investment; Arrays; Entropy; Feature extraction; Malware; Real-time systems; Vectors; Data Mining; Distributed Entropy LSH; Malware Detection; Malware Images (ID#: 15-4927)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852896&isnumber=6852105
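The binary-to-image step can be sketched under the common convention of reshaping a file's byte stream into a fixed-width grayscale matrix; the width chosen here is arbitrary, and the paper's actual projection may differ:

```python
def bytes_to_image(data, width=8):
    """Map a binary file's bytes to a 2-D grayscale matrix (one 0-255
    value per pixel), padding the last row with zeros."""
    rows = []
    for i in range(0, len(data), width):
        row = list(data[i:i + width])
        row += [0] * (width - len(row))  # pad the final partial row
        rows.append(row)
    return rows

# A 10-byte toy "binary" becomes a 3x4 image with two padding pixels.
img = bytes_to_image(bytes(range(10)), width=4)
```

Image-based features (texture, entropy, or locality-sensitive hashes of the matrix) can then be computed without executing the binary, which is what makes the approach cheap to distribute.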

 

Chia-Mei Chen; Je-Ming Lin; Gu-Hsin Lai, "Detecting Mobile Application Malicious Behaviors Based on Data Flow of Source Code," Trustworthy Systems and their Applications (TSA), 2014 International Conference on, pp. 1-6, 9-10 June 2014. doi: 10.1109/TSA.2014.10 Abstract: Mobile devices have become powerful and popular. Most Internet applications are ported to the mobile platform. Confidential personal information such as credit card numbers and passwords is stored in mobile devices for convenience. Therefore, mobile devices become attack targets due to financial gain. Mobile applications are published in many market platforms without verification; hence malicious mobile applications can be deployed in such marketplaces. Two approaches for detecting malware, dynamic and static analysis, are commonly used in the literature. Dynamic analysis requires that analysts run suspicious apps in a controlled environment and observe their behavior to determine whether an app is malicious or not. However, dynamic analysis is time consuming, as some mobile applications might be triggered only after a certain amount of time or a special input sequence. In this paper, static analysis is adopted to detect mobile malware, and sensitive information is tracked to check whether it has been released or used by malware. We present a mobile malware detection approach based on the data flow of the reversed source code of the application. The proposed system tracks the data flow to detect and identify malicious behavior of malware in the Android system. To validate the performance of the proposed system, 252 malware samples from 19 families and 50 free apps from Google Play are used. The results show that our method can successfully detect malicious behaviours of Android apps with a TPR of 91.6%.
Keywords: Android (operating system); data flow analysis; invasive software; mobile computing; source code (software); Android APP; Google Play; Internet applications; TPR; confidential personal information storage; controlled environment; data flow; dynamic analysis; malware malicious behavior detection; malware malicious behavior identification; market platforms; mobile application malicious behavior detection; mobile devices; mobile malware detection approach; mobile platform; performance evaluation; reversed source code; sensitive information tracking; source code; static analysis; Androids; Humanoid robots; Malware; Mobile communication; Smart phones; Software (ID#: 15-4928)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956704&isnumber=6956693

 

Wenjun Hu; Jing Tao; Xiaobo Ma; Wenyu Zhou; Shuang Zhao; Ting Han, "MIGDroid: Detecting APP-Repackaging Android Malware via Method Invocation Graph," Computer Communication and Networks (ICCCN), 2014 23rd International Conference on, pp. 1-7, 4-7 Aug. 2014. doi: 10.1109/ICCCN.2014.6911805 Abstract: With the increasing popularity of Android platform, Android malware, especially APP-Repackaging malware wherein the malicious code is injected into legitimate Android applications, is spreading rapidly. This paper proposes a new system named MIGDroid, which leverages method invocation graph based static analysis to detect APP-Repackaging Android malware. The method invocation graph reflects the “interaction” connections between different methods. Such graph can be naturally exploited to detect APP-Repackaging malware because the connections between injected malicious code and legitimate applications are expected to be weak. Specifically, MIGDroid first constructs method invocation graph on the smali code level, and then divides the method invocation graph into weakly connected sub-graphs. To determine which sub-graph corresponds to the injected malicious code, the threat score is calculated for each sub-graph based on the invoked sensitive APIs, and the sub-graphs with higher scores will be more likely to be malicious. Experiment results based on 1,260 Android malware samples in the real world demonstrate the specialty of our system in detecting APP-Repackaging Android malware, thereby well complementing existing static analysis systems (e.g., Androguard) that do not focus on APP-Repackaging Android malware.
Keywords: Android (operating system); graph theory; invasive software; Android applications; Android malware samples; Android platform; MIGDroid; connected subgraphs; detecting APP-Repackaging Android malware; injected malicious code; invocation graph method; threat score; Androids; Google; Humanoid robots; Receivers; Trojan horses; Android; malware; method invocation graph; static analysis (ID#: 15-4929)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911805&isnumber=6911704
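The graph-partitioning and scoring steps can be sketched with a plain adjacency list: find the weakly connected components of the method invocation graph, then score each component by its sensitive-API calls. The method names and sensitive-API list below are hypothetical:

```python
from collections import defaultdict

def components(edges, nodes):
    """Weakly connected components via DFS over an undirected view of the graph."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:
            m = stack.pop()
            if m in comp:
                continue
            comp.add(m)
            stack.extend(adj[m] - comp)
        seen |= comp
        comps.append(comp)
    return comps

def threat_score(comp, sensitive_calls):
    """Score a sub-graph by how many sensitive APIs its methods invoke."""
    return sum(len(sensitive_calls.get(m, ())) for m in comp)

# Toy repackaged app: the injected code is only weakly connected to the
# legitimate code, so it falls into its own component.
nodes = ["onCreate", "helper", "inject.run", "inject.send"]
edges = [("onCreate", "helper"), ("inject.run", "inject.send")]
sensitive = {"inject.send": ["sendTextMessage"]}
comps = components(edges, nodes)
```

In this toy graph, the component containing the injected methods scores highest and would be flagged for inspection.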

 

Min Zheng; Mingshen Sun; Lui, J.C.S., "DroidTrace: A Ptrace Based Android Dynamic Analysis System With Forward Execution Capability," Wireless Communications and Mobile Computing Conference (IWCMC), 2014 International, pp. 128-133, 4-8 Aug. 2014. doi: 10.1109/IWCMC.2014.6906344 Abstract: Android, being an open source smartphone operating system, enjoys a large community of developers who create new mobile services and applications. However, it also attracts malware writers to exploit Android devices in order to distribute malicious apps in the wild. In fact, Android malware are becoming more sophisticated and they use advanced “dynamic loading” techniques like Java reflection or native code execution to bypass security detection. To detect dynamic loading, one has to use dynamic analysis. Currently, there are only a handful of Android dynamic analysis tools available, and they all have shortcomings in detecting dynamic loading. The aim of this paper is to design and implement a dynamic analysis system which allows analysts to perform systematic analysis of dynamic payloads with malicious behaviors. We propose “DroidTrace”, a ptrace based dynamic analysis system with forward execution capability. Our system uses ptrace to monitor selected system calls of the target process which is running the dynamic payloads, and classifies the payloads behaviors through the system call sequence, e.g., behaviors such as file access, network connection, inter-process communication and even privilege escalation. Also, DroidTrace performs “physical modification” to trigger different dynamic loading behaviors within an app. Using DroidTrace, we carry out a large scale analysis on 36,170 dynamic payloads in 50,000 apps and 294 malware in 10 families (four of them are zero-day) with various dynamic loading behaviors.
Keywords: Android (operating system); Java; invasive software; mobile computing; program diagnostics; public domain software; Android malware; DroidTrace; Java reflection; dynamic loading detection; dynamic payload analysis; file access; forward execution capability; interprocess communication; malicious apps; malicious behaviors; mobile applications; mobile services; native code execution; network connection; open source smartphone operating system; physical modification; privilege escalation; ptrace based Android dynamic analysis system; security detection system call monitoring; Androids; Humanoid robots; Java; Loading; Malware; Monitoring; Payloads (ID#: 15-4930)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906344&isnumber=6906315
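Classifying a traced system-call sequence into behavior categories, as DroidTrace does, can be sketched with simple lookup rules. The syscall-to-category mapping below is illustrative, not the paper's actual rule set:

```python
# Illustrative mapping from traced system calls to behavior categories.
RULES = {
    "open": "file-access",
    "read": "file-access",
    "write": "file-access",
    "connect": "network",
    "sendto": "network",
    "fork": "process",
    "execve": "process",
}

def classify(syscalls):
    """Return the set of behavior categories exhibited by a syscall trace."""
    return {RULES[s] for s in syscalls if s in RULES}
```

A dynamically loaded payload whose trace opens files and then connects out would thus be tagged with both file-access and network behaviors, regardless of how the code was loaded.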

 

Kumar, S.; Rama Krishna, C.; Aggarwal, N.; Sehgal, R.; Chamotra, S., "Malicious Data Classification Using Structural Information and Behavioral Specifications in Executables," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in, pp. 1-6, 6-8 March 2014. doi: 10.1109/RAECS.2014.6799525  Abstract: With the rise of the underground Internet economy, automated malicious programs, popularly known as malware, have become a major threat to computers and information systems connected to the Internet. Properties such as self-healing, self-hiding, and the ability to deceive security devices make this software hard to detect and mitigate. Therefore, the detection and mitigation of such malicious software is a major challenge for researchers and security professionals. The conventional systems for the detection and mitigation of such threats are mostly signature based. A major drawback of such systems is their inability to detect malware samples for which no signature is available in their signature database. Such malware is known as zero-day malware. Moreover, more and more malware writers use obfuscation technologies such as polymorphism, metamorphism, packing, and encryption to avoid being detected by antivirus software. Therefore, traditional signature based detection is neither effective nor efficient for the detection of zero-day malware. Hence, to improve the effectiveness and efficiency of malware detection systems, we use a classification method based on structural information and behavioral specifications. In this paper we have used both static and dynamic analysis approaches. In static analysis we extract the features of an executable file, followed by classification. In dynamic analysis we take traces of executable files using NtTrace within a controlled environment. Experimental results obtained from our algorithm indicate that the proposed algorithm is effective in extracting malicious behavior of executables. Further, it can also be used to detect malware variants.
Keywords: Internet; invasive software; pattern classification; program diagnostics; NtTrace; antivirus; automated malicious programs; behavioral specifications; dynamic analysis; executable file; information systems; malicious behavior extraction; malicious data classification; malicious software detection; malicious software mitigation; malware detection system effectiveness improvement; malware detection system efficiency improvement; malwares; obfuscation technology; security devices; signature database; signature-based detection system; static analysis; structural information; threat detection; threat mitigation; underground Internet economy; zero-day malware detection; Algorithm design and analysis; Classification algorithms; Feature extraction; Internet; Malware; Software; Syntactics; behavioral specifications; classification algorithms; dynamic analysis; malware detection; static analysis; system call (ID#: 15-4931)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799525&isnumber=6799496

 

Cen, L.; Gates, C.; Si, L.; Li, N., "A Probabilistic Discriminative Model for Android Malware Detection with Decompiled Source Code," Dependable and Secure Computing, IEEE Transactions on, vol. 12, no. 4, pp. 400-412, July-August 2015. doi: 10.1109/TDSC.2014.2355839  Abstract: Mobile devices are an important part of our everyday lives, and the Android platform has become a market leader. In recent years a number of approaches for Android malware detection have been proposed, using permissions, source code analysis, or dynamic analysis. In this paper, we propose to use a probabilistic discriminative model based on regularized logistic regression for Android malware detection. Through extensive experimental evaluation, we demonstrate that it can generate probabilistic outputs with highly accurate classification results. In particular, we propose to use Android API calls as features extracted from decompiled source code, and analyze and explore issues in feature granularity, feature representation, feature selection, and regularization. We show that the probabilistic discriminative model also works well with permissions, and substantially outperforms the state-of-the-art methods for Android malware detection with application permissions. Furthermore, the discriminative learning model achieves the best detection results by combining both decompiled source code and application permissions. To the best of our knowledge, this is the first research that proposes probabilistic discriminative model for Android malware detection with a thorough study of desired representation of decompiled source code and is the first research work for Android malware detection task that combines both analysis of decompiled source code and application permissions.
Keywords: Androids; Feature extraction; Humanoid robots; Malware; Measurement; Probabilistic logic; Smart phones (ID#: 15-4932)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6894210&isnumber=4358699
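The core technique here, L2-regularized logistic regression over binary API-call features, can be sketched in a few lines of plain Python. The feature set and training data below are hypothetical illustrations, not drawn from the paper's dataset:

```python
import math

def train_logreg(X, y, l2=0.1, lr=0.5, epochs=200):
    """L2-regularized logistic regression via batch gradient descent."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - yi
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * (gj / n + l2 * wj) for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict_proba(w, b, x):
    z = b + sum(wj * xj for wj, xj in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: each row marks presence (1) or absence (0) of three
# hypothetical API calls; label 1 = malware, 0 = benign.
X = [[1, 1, 0], [1, 0, 0], [0, 1, 1], [0, 0, 1]]
y = [1, 1, 0, 0]
w, b = train_logreg(X, y)
p_mal = predict_proba(w, b, [1, 1, 0])  # malware-like feature pattern
p_ben = predict_proba(w, b, [0, 0, 1])  # benign-like feature pattern
```

The probabilistic output (rather than a hard label) is what the paper emphasizes; the regularization term keeps weights small when the feature space of API calls is large.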

 

Haltas, F.; Uzun, E.; Siseci, N.; Posul, A.; Emre, B., "An Automated Bot Detection System Through Honeypots for Large-Scale," Cyber Conflict (CyCon 2014), 2014 6th International Conference on, pp. 255-270, 3-6 June 2014. doi: 10.1109/CYCON.2014.6916407  Abstract: One of the purposes of active cyber defense systems is identifying infected machines in enterprise networks that are presumably the root cause and main agent of various cyber-attacks. To achieve this, researchers have suggested many detection systems that rely on host-monitoring techniques and require deep packet inspection, or which are trained on malware samples by applying machine learning and clustering techniques. To our knowledge, most approaches either cannot be easily deployed in real enterprise networks, because their training systems must be supplied with malware samples, or depend on host-based or deep-packet-inspection analysis, which requires a large amount of storage capacity for an enterprise. Besides this, honeypot systems are mostly used to collect malware samples for analysis purposes and to identify incoming attacks. Rather than keeping experimental results of bot detection techniques as theory and using honeypots for analysis purposes only, in this paper we present BFH (BotFinder through Honeypots), a novel automated bot-infected machine detection system, based on BotFinder, that identifies infected hosts in a real enterprise network by a learning approach. Our solution relies on NetFlow data and is capable of detecting bots infected by the most recent malware, whose samples are caught via 97 different honeypot systems. We train BFH with models created from malware samples provided and updated by the 97 honeypot systems. The BFH system automatically sends caught malware to a classification unit to construct family groups. Later, samples are automatically given to a training unit for modeling, and detection is performed over NetFlow data.
Results are double-checked using a month of full packet capture and through tools that identify rogue domains. Our results show that BFH is able to detect infected hosts with very low false-positive rates and handles the most recent malware families successfully, since it is fed by 97 honeypots, and it supports large networks with the scalability of its Hadoop infrastructure, as deployed in a large-scale enterprise network in Turkey.
Keywords: invasive software; learning (artificial intelligence); parallel processing; pattern clustering; BFH; Hadoop infrastructure; NetFlow data; active cyber defense systems; automated bot detection system; bot detection techniques; bot-infected machine detection system; botfinder through honeypots; clustering technique; cyber-attacks; deep packet inspection; enterprise networks; honeypot systems; host-monitoring techniques; learning approach; machine learning technique; malware; Data models; Feature extraction; Malware; Monitoring; Scalability; Training; Botnet; NetFlow analysis; honeypots; machine learning (ID#: 15-4933)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916407&isnumber=6916383

 

Saleh, M.; Ratazzi, E.P.; Shouhuai Xu, "Instructions-Based Detection of Sophisticated Obfuscation and Packing," Military Communications Conference (MILCOM), 2014 IEEE, pp. 1-6, 6-8 Oct. 2014. doi: 10.1109/MILCOM.2014.9 Abstract: Every day thousands of malware are released online. The vast majority of these malware employ some kind of obfuscation, ranging from simple XOR encryption to more sophisticated anti-analysis, packing and encryption techniques. Dynamic analysis methods can unpack the file and reveal its hidden code. However, these methods are very time consuming when compared to static analysis. Moreover, considering the large amount of new malware being produced daily, it is not practical to depend solely on dynamic analysis methods. Therefore, finding an effective way to filter the samples and delegate only obfuscated and suspicious ones to more rigorous tests would significantly improve the overall scanning process. Current techniques for identifying obfuscation rely mainly on signatures of known packers, file entropy score, or anomalies in the file header. However, these features are not only easily bypassable, but also do not cover all types of obfuscation. In this paper, we introduce a novel approach to identify obfuscated files based on anomalies in their instructions-based characteristics. We detect the presence of interleaving instructions, which are the result of the opaque-predicate anti-disassembly trick, and present distinguishing statistical properties based on the opcodes and control flow graphs of obfuscated files. Our detection system combines these features with other file structural features and leads to very good results in detecting obfuscated malware.
Keywords: invasive software; control flow graphs; dynamic analysis methods; encryption techniques; file entropy score; file header anomaly; instructions-based detection; malware detection; obfuscated file identification; obfuscation detection; opcodes; packing detection; simple XOR encryption; static analysis methods; Electronic mail; Encryption; Entropy; Feature extraction; Malware; Reverse engineering (ID#: 15-4934)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956729&isnumber=6956719
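The file-entropy score the authors cite as a conventional (and bypassable) packing indicator is simply the Shannon entropy of a file's byte distribution, which a minimal sketch can compute; packed or encrypted sections tend to score near the 8-bit maximum:

```python
import math
from collections import Counter

def byte_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0-8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

low = byte_entropy(b"\x00" * 1024)          # a uniform run scores 0 bits
high = byte_entropy(bytes(range(256)) * 4)  # all byte values equally likely scores 8 bits
```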

 

Prakash, A.; Venkataramani, E.; Yin, H.; Lin, Z., "On the Trustworthiness of Memory Analysis—An Empirical Study from the Perspective of Binary Execution," Dependable and Secure Computing, IEEE Transactions on, vol. 12, no. 5, pp. 557-570, Sept.-Oct. 2015. doi: 10.1109/TDSC.2014.2366464  Abstract: Memory analysis serves as a foundation for many security applications such as memory forensics, virtual machine introspection and malware investigation. However, malware, or more specifically a kernel rootkit, can often tamper with kernel memory data, putting the trustworthiness of memory analysis under question. With the rapid deployment of cloud computing and increase of cyberattacks, there is a pressing need to systematically study and understand the problem of memory analysis. In particular, without ground truth, the quality of the memory analysis tools widely used for analyzing closed-source operating systems (like Windows) has not been thoroughly studied. Moreover, while it is widely accepted that value manipulation attacks pose a threat to memory analysis, their severity has not been explored and well understood. To answer these questions, we have devised a number of novel analysis techniques including (1) binary level ground-truth collection, and (2) value equivalence set directed field mutation. Our experimental results demonstrate not only that the existing tools are inaccurate even under a non-malicious context, but also that value manipulation attacks are practical and severe. Finally, we show that exploiting information redundancy can be a viable direction to mitigate value manipulation attacks, but checking information equivalence alone is not an ultimate solution.
Keywords: Context; Data structures; Kernel; Robustness; Security; Semantics; Virtual machining; DKOM; Invasive Software; Kernel Rootkit; Memory Forensics; Operating Systems Security; Virtual Machine Introspection (ID#: 15-4935)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6942280&isnumber=4358699

 

O'Kane, P.; Sezer, S.; McLaughlin, K.; Eul Gyu Im, "Malware Detection: Program Run Length Against Detection Rate," Software, IET, vol. 8, no. 1, pp. 42-51, February 2014. doi: 10.1049/iet-sen.2013.0020  Abstract: N-gram analysis is an approach that investigates the structure of a program using bytes, characters or text strings. This research uses dynamic analysis to investigate malware detection using a classification approach based on N-gram analysis. A key issue with dynamic analysis is the length of time a program has to be run to ensure a correct classification. The motivation for this research is to find the optimum subset of operational codes (opcodes) that make the best indicators of malware and to determine how long a program has to be monitored to ensure an accurate support vector machine (SVM) classification of benign and malicious software. The experiments within this study represent programs as opcode density histograms gained through dynamic analysis for different program run periods. A SVM is used as the program classifier to determine the ability of different program run lengths to correctly determine the presence of malicious software. The findings show that malware can be detected with different program run lengths using a small number of opcodes.
Keywords: invasive software; pattern classification; runlength codes; support vector machines; system monitoring; N-gram analysis; SVM classification; benign software; detection rate; dynamic analysis; malicious software; malware detection; opcode density histograms; operational codes; program classifier; program monitoring time; program run length; support vector machine (ID#: 15-4936)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6720049&isnumber=6720044
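The opcode density histograms used as SVM inputs here can be illustrated with a short sketch; the opcode vocabulary and runtime trace below are hypothetical, not the paper's optimum subset:

```python
from collections import Counter

def opcode_density(trace, vocabulary):
    """Normalize opcode counts from a runtime trace into a density
    histogram over a fixed opcode vocabulary."""
    counts = Counter(trace)
    total = len(trace) or 1
    return [counts[op] / total for op in vocabulary]

# Hypothetical trace captured during a short program run.
vocab = ["mov", "push", "call", "jmp", "xor"]
trace = ["mov", "mov", "push", "call", "mov", "jmp"]
hist = opcode_density(trace, vocab)
```

Because the histogram is normalized, traces gathered over different run lengths remain comparable, which is exactly what the run-length experiments in the paper require.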

 

Euijin Choo; Younghee Park; Siyamwala, H., "Identifying Malicious Metering Data in Advanced Metering Infrastructure," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp. 490-495, 7-11 April 2014. doi: 10.1109/SOSE.2014.75  Abstract: Advanced Metering Infrastructure (AMI) has evolved to measure and control energy usage in communicating through metering devices. However, the development of the AMI network brings with it security issues, including the increasingly serious risk of malware in the new emerging network. Malware is often embedded in the data payloads of legitimate metering data. It is difficult to detect malware in metering devices, which are resource-constrained embedded systems, during time-critical communications. This paper describes a method to distinguish malware-bearing traffic from legitimate metering data using a disassembler and statistical analysis. Based on the discovered unique characteristics of each data type, the proposed method detects malicious metering data (i.e., malware-bearing data). The analysis of data payloads is performed statistically, investigating the distribution of instructions in traffic by using a disassembler. Doing so demonstrates that the distribution of instructions in metering data is significantly different from that in malware-bearing data. The proposed approach successfully identifies the two different types of data with complete accuracy, with 0% false positives and 0% false negatives.
Keywords: invasive software; metering; power system security; program assemblers; smart meters; statistical analysis; AMI network; advanced metering infrastructure; data payloads; disassembler; energy usage; malicious metering data; malware-bearing data; malware-bearing traffic; metering devices; resource constrained embedded systems; security issues; statistical analysis; time-critical communications; Malware; Registers; Statistical analysis; Testing; Training; ARM Instructions; Advanced Metering Infrastructure; Disassembler; Malware; Security; Smart Meters (ID#: 15-4937)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830954&isnumber=6825948

 

Hui Zhu; Cheng Huang; Hui Li, "MPPM: Malware Propagation and Prevention Model in Online SNS," Communications Workshops (ICC), 2014 IEEE International Conference on, pp. 682-687, 10-14 June 2014. doi: 10.1109/ICCW.2014.6881278 Abstract: With the pervasiveness of online social network services (SNS), many people express their views and share information through them, and the information propagation model of online SNS has attracted considerable interest recently. However, information propagation models for online SNS still face many challenges, especially given the growing propagation of malicious software in SNS. In this paper, we propose a malware propagation and prevention model for online SNS based on the propagation probability model, called MPPM. With this model, we can describe the relationships among malware propagation, user habits, and malware detection in online SNS. Specifically, based on the characteristics of online SNS, we define users' states and the rules of malware propagation using the dynamics of infectious disease; then, we introduce a detection factor that affects the propagation of malware, and express malware propagation and prevention in online SNS by dynamic evolution equations; finally, we analyze the factors that influence malware propagation in online SNS. Detailed analysis and simulation demonstrate that the MPPM model can precisely describe the process of malware propagation and prevention in online SNS.
Keywords: invasive software; probability; social networking (online); ubiquitous computing; MPPM model; dynamic evolution equations; infectious disease dynamics; information propagation model; malicious software propagation; malware detection; malware propagation and prevention model; online SNS pervasiveness; online social network service pervasiveness; propagation probability model; Analytical models; Computational modeling; Conferences; Malware; Mathematical model; Social network services; Social network service; dynamic evolution equations; dynamics of infectious disease; malware prevention (ID#: 15-4938)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881278&isnumber=6881162
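The abstract does not reproduce MPPM's dynamic evolution equations, but the flavor of such infectious-disease propagation models can be conveyed by a generic discrete-time susceptible-infected-recovered sketch, where a detection factor moves infected users into the recovered class (all parameters below are illustrative, not the paper's):

```python
def simulate(beta, delta, s0, i0, r0, steps):
    """Discrete-time SIR-style dynamics: beta is the per-step infection
    rate, delta the detection/recovery rate."""
    s, i, r = s0, i0, r0
    n = s0 + i0 + r0
    history = [(s, i, r)]
    for _ in range(steps):
        new_inf = beta * s * i / n  # contacts between susceptible and infected users
        new_rec = delta * i         # detection removes infected users
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        history.append((s, i, r))
    return history

hist = simulate(beta=0.4, delta=0.1, s0=9900, i0=100, r0=0, steps=50)
final_s, final_i, final_r = hist[-1]
```

Raising the detection factor delta relative to beta is what suppresses an outbreak, which is the qualitative effect the MPPM analysis studies.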

 

O'Kane, P.; Sezer, S.; McLaughlin, K., "N-gram Density Based Malware Detection," Computer Applications & Research (WSCAR), 2014 World Symposium on, pp. 1-6, 18-20 Jan. 2014. doi: 10.1109/WSCAR.2014.6916806  Abstract: N-gram analysis is an approach that investigates the structure of a program using bytes, characters or text strings. This research uses dynamic analysis to investigate malware detection using a classification approach based on N-gram analysis. The motivation for this research is to find a subset of N-gram features that makes a robust indicator of malware. The experiments within this paper represent programs as N-gram density histograms, gained through dynamic analysis. A Support Vector Machine (SVM) is used as the program classifier to determine the ability of N-grams to correctly determine the presence of malicious software. The preliminary findings show that N-gram sizes of N=3 and N=4 present the best avenues for further analysis.
Keywords: invasive software; pattern classification; support vector machines; N-gram analysis; N-gram density histograms; SVM; classification approach; malware detection; support vector machine; Information technology; Malware; Support vector machines; Three-dimensional displays; Malware; N-gram; Support Vector Machine (ID#: 15-4939)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916806&isnumber=6916766

 

Yerima, Suleiman Y.; Sezer, Sakir; Muttik, Igor, "Android Malware Detection Using Parallel Machine Learning Classifiers," Next Generation Mobile Apps, Services and Technologies (NGMAST), 2014 Eighth International Conference on, pp. 37-42, 10-12 Sept. 2014. doi: 10.1109/NGMAST.2014.23  Abstract: Mobile malware has continued to grow at an alarming rate despite on-going mitigation efforts. This has been much more prevalent on Android, an open platform that is rapidly overtaking competing platforms in the mobile smart device market. Recently, a new generation of Android malware families has emerged with advanced evasion capabilities which make them much more difficult to detect using conventional methods. This paper proposes and investigates a parallel machine learning based classification approach for early detection of Android malware. Using real malware samples and benign applications, a composite classification model is developed from a parallel combination of heterogeneous classifiers. The empirical evaluation of the model under different combination schemes demonstrates its efficacy and potential to improve detection accuracy. More importantly, by utilizing several classifiers with diverse characteristics, their strengths can be harnessed not only for enhanced Android malware detection but also quicker white box analysis by means of the more interpretable constituent classifiers.
Keywords: Accuracy; Androids; Classification algorithms; Feature extraction; Humanoid robots; Malware; Training; Android; data mining; machine learning; malware detection; mobile security; parallel classifiers; static analysis (ID#: 15-4940)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6982888&isnumber=6982871
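One simple way to combine heterogeneous classifiers is majority voting; the sketch below illustrates the idea with hypothetical per-classifier outputs (the paper evaluates several combination schemes beyond this one):

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier labels (1 = malware, 0 = benign) by simple
    majority; ties fall back to the cautious label, malware."""
    counts = Counter(predictions)
    return 1 if counts[1] >= counts[0] else 0

# Hypothetical outputs from three heterogeneous classifiers
# for one Android application under analysis.
votes = [1, 0, 1]
label = majority_vote(votes)
```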

 

Cobb, S.; Lee, A., "Malware is Called Malicious for a Reason: The Risks of Weaponizing Code," Cyber Conflict (CyCon 2014), 2014 6th International Conference on, pp. 71-84, 3-6 June 2014. doi: 10.1109/CYCON.2014.6916396 Abstract: The allure of malware, with its tremendous potential to infiltrate and disrupt digital systems, is understandable. Criminally motivated malware is now directed at all levels and corners of the cyber domain, from servers to endpoints, laptops, smartphones, tablets, and industrial control systems. A thriving underground industry today produces ever-increasing quantities of malware for a wide variety of platforms, which bad actors seem able to deploy with relative impunity. The urge to fight back with “good” malware is understandable. In this paper we review and assess the arguments for and against the use of malicious code for either active defense or direct offense. Our practical experiences analyzing and defending against malicious code suggest that the effect of deployment is hard to predict with accuracy. There is tremendous scope for unintended consequences and loss of control over the code itself. Criminals do not feel restrained by these factors and appear undeterred by moral dilemmas like collateral damage, but we argue that persons or entities considering the use of malware for “justifiable offense” or active defense need to fully understand the issues around scope, targeting, control, blowback, and arming the adversary. Using existing open source literature and commentary on this topic we review the arguments for and against the use of “malicious” code for “righteous” purposes, introducing the term “righteous malware”. We will cite select instances of prior malicious code deployment to reveal lessons learned for future missions.
In the process, we will refer to a range of techniques employed by criminally-motivated malware authors to evade detection, amplify infection, leverage investment, and execute objectives that range from denial of service to information stealing, fraudulent revenue generation, blackmail and surveillance. Examples of failure to retain control of criminally-motivated malicious code development will also be examined for what they may tell us about code persistence and life cycles. In closing, we will present our considered opinions on the risks of weaponizing code.
Keywords: computer crime; invasive software; public domain software; amplify infection; blackmail; criminal; cyber domain; disrupt digital system; evade detection; fraudulent; information stealing; leverage investment; open source literature; prior malicious code deployment; revenue generation; righteous malware; surveillance; weaponizing code risk; Computers; Malware; National security; Software; Viruses (medical); Weapons; active defense; cyber conflict; malicious code; malware; weaponize (ID#: 15-4941)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916396&isnumber=6916383

 

Bou-Harb, E.; Fachkha, C.; Debbabi, M.; Assi, C., "Inferring Internet-Scale Infections by Correlating Malware and Probing Activities," Communications (ICC), 2014 IEEE International Conference on, pp. 640-646, 10-14 June 2014. doi: 10.1109/ICC.2014.6883391  Abstract: This paper presents a new approach to infer malware-infected machines by solely analyzing their generated probing activities. Contrary to other adopted methods, the proposed approach does not rely on symptoms of infection to detect compromised machines. This allows the inference of malware infection at very early stages of contamination. The approach aims at detecting whether the machines are infected or not as well as pinpointing the exact malware type/family, if the machines were found to be compromised. The latter insights allow network security operators of diverse organizations, Internet service providers and backbone networks to promptly detect their clients' compromised machines in addition to effectively providing them with tailored anti-malware/patch solutions. To achieve the intended goals, the proposed approach exploits the darknet Internet space and employs statistical methods to infer large-scale probing activities. Subsequently, such activities are correlated with malware samples by leveraging fuzzy hashing and entropy based techniques. The proposed approach is empirically evaluated using 60 GB of real darknet traffic and 65 thousand real malware samples. The results confirm that the rationale of exploiting probing activities for worldwide early malware infection detection is indeed very promising. Further, the results demonstrate that the extracted inferences exhibit noteworthy accuracy and can generate significant cyber security insights that could be used for effective mitigation.
Keywords: Internet; computer network security; cryptography; entropy; fuzzy reasoning; invasive software; Internet scale infections; darknet Internet; entropy based techniques; fuzzy hashing; inference; malware infection detection; network security operators; probing activities; Correlation; Entropy; Internet; Malware; Unsolicited electronic mail (ID#: 15-4942)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883391&isnumber=6883277
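Fuzzy hashing correlates samples by similarity rather than by exact digests. As a simplified stand-in for a real fuzzy hash such as ssdeep, the sketch below scores byte-level similarity between payloads with n-gram Jaccard overlap (the inputs are invented illustrations, not darknet data):

```python
def ngrams(data: bytes, n: int = 4):
    """Set of overlapping byte n-grams in the input."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def jaccard(a: bytes, b: bytes, n: int = 4) -> float:
    """Jaccard similarity over byte n-grams: 1.0 for identical inputs,
    near 0.0 for unrelated ones."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 1.0
    return len(ga & gb) / len(ga | gb)

same = jaccard(b"malware sample one", b"malware sample one")
related = jaccard(b"malware sample one", b"malware sample two")
unrelated = jaccard(b"malware sample one", b"completely different")
```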

 

Musavi, S.A.; Kharrazi, M., "Back to Static Analysis for Kernel-Level Rootkit Detection," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 9, pp. 1465-1476, Sept. 2014. doi: 10.1109/TIFS.2014.2337256  Abstract: A rootkit's main goal is to hide itself and the other modules present in the malware. Their stealthy nature has made their detection difficult, especially in the case of kernel-level rootkits. Many dynamic analysis techniques have been proposed for detecting kernel-level rootkits, while static analysis, on the other hand, has not been popular. This is perhaps due to its poor performance in detecting malware in general, which could be attributed to the level of obfuscation employed in binaries, which makes static analysis difficult if not impossible. In this paper, we make two important observations: first, there is usually little obfuscation in legitimate kernel-level code, as opposed to malicious kernel-level code. Second, one of the main approaches to penetrating the Windows operating system is through kernel-level drivers. Therefore, by focusing on detecting the malicious kernel drivers employed by the rootkit, one can detect the rootkit while avoiding the issues with current detection techniques. Given these two observations, we propose a simple static analysis technique aimed at detecting malicious drivers. We first study the current trends in the implementation of kernel-level rootkits. Afterward, we propose a set of features to quantify the malicious behavior in kernel drivers. These features are then evaluated through a set of experiments on 4420 malicious and legitimate drivers, obtaining an accuracy of 98.15% in distinguishing between these drivers.
Keywords: device drivers; invasive software; operating system kernels; program diagnostics; Windows operating system; dynamic analysis techniques; kernel-level code; kernel-level drivers; kernel-level rootkit detection; malicious driver detection; malicious kernel-level code; malware; obfuscation level; static analysis; Feature extraction; Hardware; Kernel; Malware; Market research; Malware; kernel driver; rootkit; static analysis (ID#: 15-4943)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6850033&isnumber=6867417

 

Mohaisen, Aziz; West, Andrew G.; Mankin, Allison; Alrawi, Omar, "Chatter: Classifying Malware Families Using System Event Ordering," Communications and Network Security (CNS), 2014 IEEE Conference on, pp. 283-291, 29-31 Oct. 2014. doi: 10.1109/CNS.2014.6997496  Abstract: Using runtime execution artifacts to identify malware and its associated “family” is an established technique in the security domain. Many papers in the literature rely on explicit features derived from network, file system, or registry interaction. While effective, use of these fine-granularity data points makes these techniques computationally expensive. Moreover, the signatures and heuristics this analysis produces are often circumvented by subsequent malware authors. To this end we propose CHATTER, a system that is concerned only with the order in which high-level system events take place. Individual events are mapped onto an alphabet and execution traces are captured via terse concatenations of those letters. Then, leveraging an analyst labeled corpus of malware, n-gram document classification techniques are applied to produce a classifier predicting malware family. This paper describes that technique and its proof-of-concept evaluation. In its prototype form only network events are considered and three malware families are highlighted. We show the technique achieves roughly 80% accuracy in isolation and makes non-trivial performance improvements when integrated with a baseline classifier of non-ordered features (with an accuracy of roughly 95%).
Keywords: Accuracy; Decision trees; Feature extraction; Machine learning algorithms; Malware; Support vector machines (ID#: 15-4944)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6997496&isnumber=6997445
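CHATTER's event-to-alphabet encoding followed by n-gram extraction can be sketched directly; the event names and letter mapping below are hypothetical, not the system's actual alphabet:

```python
from collections import Counter

# Hypothetical mapping of high-level system events to letters.
ALPHABET = {"dns_query": "a", "tcp_connect": "b", "http_get": "c"}

def encode(events):
    """Concatenate one letter per observed event, preserving order."""
    return "".join(ALPHABET[e] for e in events)

def ngram_counts(doc: str, n: int = 2) -> Counter:
    """Character n-gram features for a document classifier."""
    return Counter(doc[i:i + n] for i in range(len(doc) - n + 1))

trace = ["dns_query", "tcp_connect", "http_get", "tcp_connect", "http_get"]
doc = encode(trace)            # "abcbc"
features = ngram_counts(doc)   # {"ab": 1, "bc": 2, "cb": 1}
```

Because only the ordering of coarse events is kept, the feature vectors stay tiny compared to full network, file-system, or registry traces, which is the efficiency argument the paper makes.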

 

Zongqu Zhao; Junfeng Wang; Jinrong Bai, "Malware Detection Method Based on the Control-Flow Construct Feature of Software," Information Security, IET, vol. 8, no. 1, pp. 18-24, Jan. 2014. doi: 10.1049/iet-ifs.2012.0289  Abstract: Existing anti-virus methods extract software signatures by manual analysis, which is inefficient when dealing with a large number of malware; they are also limited in detecting unknown malware. Through research on software structure, it has been found that the control flow of software can be divided into many basic blocks by interior cross-references, and a feature-selection approach based on this phenomenon is proposed. It extracts opcode sequences from the disassembled program and translates them into features using a vector space model. Data mining algorithms are employed to find classification rules from the software features, and these rules can then be applied to malware detection. Experimental results illustrate that the proposed method can achieve 97.0% malware detection accuracy with a 3.2% false positive rate using the Random Forest classifier. Furthermore, overall accuracy as high as 94.5% can be achieved when only 5% of the experimental data are used as training data.
Keywords: data mining; invasive software; learning (artificial intelligence);pattern classification; anti-virus methods; control-flow construct feature; data mining; disassembled program; feature-selection approach; interior cross-references; malware detection method; opcode sequences; random forest classifier; software structure; vector space model (ID#: 15-4945)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6687154&isnumber=6687150
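Translating opcode sequences into a vector space model can be illustrated with term-frequency vectors and cosine similarity; the opcode vocabulary and basic-block traces below are hypothetical, and the Random Forest classifier the paper uses on top of such vectors is omitted:

```python
import math
from collections import Counter

def tf_vector(opcodes, vocabulary):
    """Term-frequency vector over a fixed opcode vocabulary."""
    counts = Counter(opcodes)
    return [counts[op] for op in vocabulary]

def cosine(u, v):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

vocab = ["mov", "push", "call", "xor", "jmp"]
block_a = tf_vector(["mov", "push", "call", "mov"], vocab)
block_b = tf_vector(["mov", "push", "call"], vocab)
block_c = tf_vector(["xor", "jmp", "xor"], vocab)
sim_ab = cosine(block_a, block_b)  # similar opcode mixes
sim_ac = cosine(block_a, block_c)  # disjoint opcode mixes
```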


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Malware Analysis, Part 4

 
SoS Logo

Malware Analysis, Part 4

 

Malware detection, analysis, and classification are perennial issues in cybersecurity. The research presented here advances malware analysis in some unique and interesting ways. The works cited were published or presented in 2014.  Because of the volume of work, the bibliography is broken into multiple parts.


 

Zhao Xiaoyan; Fang Juan; Wang Xiujuan, "Android Malware Detection Based on Permissions," Information and Communications Technologies (ICT 2014), 2014 International Conference on, pp. 1-5, 15-17 May 2014. doi: 10.1049/cp.2014.0605 Abstract: In this paper, we propose a permission-based malware detection framework for the Android platform. The proposed framework uses the PCA (Principal Component Analysis) algorithm for feature selection after permissions are extracted, and applies SVM (support vector machine) methods to classify the collected data as benign or malicious during detection. The simulation results suggest that the proposed detection framework is effective in detecting unknown malware; compared with traditional antivirus software, it can detect unknown malware effectively and immediately, without requiring timely updates of the malware sample library. The results also illustrate that using permission features alone with machine learning methods can achieve good detection results.
Keywords: Android; Malware Detection; PCA; SVM (ID#: 15-4946)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6913658&isnumber=6913610
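The PCA step, reducing binary permission vectors to their dominant principal components before SVM classification, can be sketched with power iteration on the covariance matrix; the permission matrix below is a hypothetical toy example (rows are apps, columns are permissions):

```python
def pca_first_component(X, iters=100):
    """Dominant principal component of mean-centered data via power
    iteration on the covariance matrix (no external libraries)."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    C = [[row[j] - means[j] for j in range(d)] for row in X]
    # Covariance matrix (d x d) of the centered data.
    cov = [[sum(C[k][i] * C[k][j] for k in range(n)) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Hypothetical 0/1 permission vectors: two apps request the first two
# permissions, two apps request only the third.
X = [[1, 1, 0], [1, 1, 0], [0, 0, 1], [0, 0, 1]]
pc1 = pca_first_component(X)
```

Here the first component captures the correlated pair of permissions against the third, so projecting onto it preserves the variance that separates the two app groups.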

 

Seungyong Yoon; Jeongnyeo Kim; Hyunsook Cho, "Detection of SMS Mobile Malware," Electronics, Information and Communications (ICEIC), 2014 International Conference on, pp. 1-2, 15-18 Jan. 2014. doi: 10.1109/ELINFOCOM.2014.6914392 Abstract: This paper addresses mobile malware detection to prevent the financial charges caused by malicious behavior using SMS. We propose a method that combines malicious-behavior monitoring with various analysis techniques to detect such attacks. The method includes malware installation checks, analysis of SMS sending and receiving, and signature-based pattern matching. As a result, we can effectively respond to SMS mobile malware attacks.
Keywords: financial data processing; invasive software; mobile computing; pattern matching; SMS mobile malware detection; SMS sending; attack detection; financial charge; malicious behavior; malware installation check; receiving analysis; signature based pattern matching; Computer crime; Inspection; Malware; Mobile communication; Pattern matching; Smart phones; SMS; mobile malware (ID#: 15-4947)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6914392&isnumber=6914344
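Signature-based pattern matching on outgoing SMS can be sketched as follows; the short-code pattern and blacklist below are invented examples for illustration, not real malware signatures or the authors' rule set:

```python
import re

# Hypothetical signature patterns for premium-SMS abuse.
SIGNATURES = [
    re.compile(r"^7\d{3,4}$"),  # short codes of a form often abused for billing
]
PREMIUM_NUMBERS = {"79067", "80888"}  # example blacklist, not a real feed

def is_suspicious_sms(dest: str, sent_by_user: bool) -> bool:
    """Flag an outgoing SMS the user did not initiate whose destination
    matches a signature pattern or a blacklist entry."""
    if sent_by_user:
        return False
    if dest in PREMIUM_NUMBERS:
        return True
    return any(sig.match(dest) for sig in SIGNATURES)

flagged = is_suspicious_sms("79067", sent_by_user=False)
```

Combining the destination check with the "did the user actually send this?" signal mirrors the paper's pairing of sending/receiving analysis with signature matching.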

 

Yong Li; Pan Hui; Depeng Jin; Li Su; Lieguang Zeng, "Optimal Distributed Malware Defense in Mobile Networks with Heterogeneous Devices," Mobile Computing, IEEE Transactions on, vol. 13, no. 2, pp. 377-391, Feb. 2014. doi: 10.1109/TMC.2012.255 Abstract: As malware attacks become more frequent in mobile networks, deploying an efficient defense system to protect against infection and to help infected nodes recover is important to prevent serious spreading and outbreaks. The technical challenges are that mobile devices are heterogeneous in terms of operating systems, the malware infects the targeted system in an opportunistic fashion via local and global connectivity, while the to-be-deployed defense system, on the other hand, is usually resource limited. In this paper, we investigate the problem of how to optimally distribute the content-based signatures of malware, which help to detect the corresponding malware and disable further propagation, to minimize the number of infected nodes. We model the defense system with realistic assumptions addressing all of the above challenges, which have not been addressed in previous analytical work. Based on the framework of optimizing the system welfare utility, which is the weighted summation of individual utilities depending on the final number of infected nodes through the signature allocation, we propose an encounter-based distributed algorithm based on a Metropolis sampler. Through theoretical analysis and simulations with both synthetic and realistic mobility traces, we show that the distributed algorithm achieves the optimal solution and performs efficiently in realistic environments.
Keywords: invasive software; mobile radio; operating systems (computers); telecommunication security; Metropolis sampler; content-based signatures; encounter-based distributed algorithm; global connectivity; heterogeneous devices; infected node minimization; infection protection; local connectivity; malware attacks; mobile devices; mobile networks; operating systems; optimal distributed malware defense; realistic mobility trace; signature allocation; synthetic mobility trace; system welfare utility; theoretical analysis; to-be-deployed defense system; Distributed algorithms; Educational institutions; Malware; Mathematical model; Mobile communication; Mobile computing; Mobile handsets; Security threat; distributed algorithm; heterogeneous mobile networks; mobile malware (ID#: 15-4948)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6381416&isnumber=6689256
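The encounter-based algorithm above is built on the Metropolis sampler. As a hedged illustration of that core primitive only (not the paper's signature-allocation code), a minimal discrete Metropolis sampler whose long-run visit frequencies are proportional to arbitrary, unnormalised utility weights:

```python
import random

def metropolis_sample(weights, steps=10000, seed=0):
    """Generic Metropolis sampler over a discrete state space.

    A move from state i to proposed state j is accepted with probability
    min(1, w[j] / w[i]), so long-run visit frequencies converge to the
    normalised weights without ever computing the normalising constant.
    """
    rng = random.Random(seed)
    n = len(weights)
    state = 0
    counts = [0] * n
    for _ in range(steps):
        proposal = rng.randrange(n)  # symmetric (uniform) proposal
        accept = min(1.0, weights[proposal] / weights[state])
        if rng.random() < accept:
            state = proposal
        counts[state] += 1
    return [c / steps for c in counts]

# States weighted 1:2:4 -> empirical visit frequencies approach 1/7 : 2/7 : 4/7.
freqs = metropolis_sample([1.0, 2.0, 4.0])
```

In the paper's setting the "states" would be candidate signature allocations and the weights the system welfare utility; here both are invented toy values.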

 

Jun Li; Lidong Zhai; Xinyou Zhang; Daiyong Quan, "Research of Android Malware Detection Based On Network Traffic Monitoring," Industrial Electronics and Applications (ICIEA), 2014 IEEE 9th Conference on, pp. 1739, 1744, 9-11 June 2014. doi: 10.1109/ICIEA.2014.6931449 Abstract: As Android terminals have entered everyday life, the spread of Android malware has seriously affected users. Because of Android security flaws, attackers can easily collect users' private information, which can then be exploited in APT attacks. This is a threat not only to end users but also to industrial control systems and the mobile Internet. In this paper, we propose a network traffic monitoring system for the detection of Android malware. The system consists of four components: traffic monitoring, traffic anomaly recognition, response processing, and cloud storage. The system parses the protocol of data packets and extracts feature data, then uses an SVM classification algorithm to classify the data, determines whether the network traffic is abnormal, and locates the application that produced the anomaly through correlation analysis. The system can not only automatically respond to and process malicious software, but also generate new security policies from existing information and training data; when the training data reaches a certain amount, a new round of training is triggered to improve detection ability. Finally, we experiment on the system; the experimental results show that it can effectively detect Android malware and control the offending application.
Keywords: Android (operating system); cloud computing; invasive software; mobile computing; pattern classification; support vector machines; telecommunication traffic; APT attacks; Android malware detection; Android security flaws; Android terminal; SVM classification algorithm; cloud storage; correlation analysis; data packets protocol; feature data; industrial control systems; mobile Internet; network traffic; network traffic monitoring; private information; response processing; security policy; traffic anomaly recognition; Feature extraction; Malware; Monitoring; Smart phones; Software; Telecommunication traffic; Android; Malware; Network traffic monitoring; SVM (ID#: 15-4949)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6931449&isnumber=6931119
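The SVM classification step can be sketched in miniature. The following is an illustrative linear SVM trained by sub-gradient descent on the hinge loss, over invented two-dimensional flow features; it is a stand-in sketch, not the authors' system or feature set:

```python
def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Train a linear SVM by sub-gradient descent on the hinge loss.

    X: list of feature vectors; y: labels in {+1, -1};
    lam: L2 regularisation strength; lr: learning rate.
    """
    dim = len(X[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:  # inside the margin: hinge-loss gradient step
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:           # outside the margin: only regularisation shrinkage
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Toy flow features (e.g. packets/s, mean payload size); +1 = anomalous.
X = [[5.0, 1.0], [4.0, 2.0], [1.0, 5.0], [0.5, 4.0]]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
```

A production system would extract such feature vectors from parsed packet captures and use a tuned kernel SVM rather than this minimal linear variant.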

 

Criscione, C.; Bosatelli, F.; Zanero, S.; Maggi, F., "ZARATHUSTRA: Extracting Webinject Signatures from Banking Trojans," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp. 139, 148, 23-24 July 2014. doi: 10.1109/PST.2014.6890933 Abstract: Modern trojans are equipped with a functionality, called WebInject, that can be used to silently modify a web page on the infected end host. Given its flexibility, WebInject-based malware is becoming a popular information-stealing mechanism. In addition, the structured and well-organized malware-as-a-service model makes revenue out of customization kits, which in turn leads to high volumes of binary variants. Analysis approaches based on memory carving, which extract the decrypted webinject.txt and config.bin files at runtime, make the strong assumption that the malware will never change the way such files are handled internally, and are therefore not future-proof by design. In addition, developers of sensitive web applications (e.g., online banking) have no tools that they can use to even mitigate the effect of WebInjects.
Keywords: Web sites; banking; digital signatures; invasive software; Web page; WebInject-based malware; Webinject signature extraction; ZARATHUSTRA; banking trojans; binary variants; config.bin files extraction; customization kits; decrypted webinject.txt extraction; information-stealing mechanism; malware-as-a-service model; memory carving; sensitive Web applications; Cryptography; Engines; Fingerprint recognition; HTML; Monitoring; Servers; Surgery (ID#: 15-4950)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890933&isnumber=6890911

 

Derhab, A.; Saleem, K.; Youssef, A., "Third line of Defense Strategy to Fight Against SMS-Based Malware in Android Smartphones," Wireless Communications and Mobile Computing Conference (IWCMC), 2014 International, pp. 542, 547, 4-8 Aug. 2014. doi: 10.1109/IWCMC.2014.6906414 Abstract: In this paper, we draw inspiration from two analogies, the warfare kill zone and the airport check-in system, to design and deploy a new line in the defense-in-depth strategy, called the third line. This line is represented by a security framework, named the Intrusion Ambushing System, and is designed to tackle the issue of SMS-based malware in Android-based smartphones. The framework exploits the security features offered by the Android operating system to prevent malicious SMS from leaving the phone and to detect the corresponding SMS-based malware. We show that the proposed framework can ensure full security against SMS-based malware. In addition, an analytical study demonstrates that the framework offers optimal performance in terms of detection time and execution cost in comparison to intrusion detection systems based on static and dynamic analysis.
Keywords: Android (operating system); electronic messaging; invasive software; smart phones; Android-based smart phones; SMS-based malware; airport check-in system; analytical analysis; defense-in-depth strategy; detection time; execution cost; intrusion ambushing system; malicious SMS prevention; operating system; optimal performance; security features; security framework; third line-of-defense strategy; warfare kill zone; Airports; Cryptography; Intrusion detection; Malware; Operating systems; Smart phones; Malware; SMS; intrusion ambushing; intrusion detection; third line of defense (ID#: 15-4951)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6906414&isnumber=6906315

 

Raveendranath, Rahul; Rajamani, Venkiteswaran; Babu, Anoop Joseph; Datta, Soumya Kanti, "Android Malware Attacks and Countermeasures: Current and Future Directions," Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 2014 International Conference on, pp. 137, 143, 10-11 July 2014. doi: 10.1109/ICCICCT.2014.6992944 Abstract: Smartphones have been rising in popularity as well as becoming more sophisticated over recent years. This popularity, coupled with the fact that smartphones contain a lot of private user data, is causing a proportional rise in malware for the platform. In this paper, we analyze and classify state-of-the-art malware techniques and their countermeasures. The paper also reports a novel method for malware development and novel attack techniques such as mobile botnets, usage-pattern-based attacks, and repackaging attacks. Possible countermeasures are also proposed. A detailed analysis of one of the proposed novel malware methods is then presented. Finally, the paper concludes with a summary.
Keywords: Androids; Humanoid robots; Malware; Permission; Servers; Smart phones; Android; Countermeasures; Malware; Permissions; Security threats (ID#: 15-4952)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6992944&isnumber=6992918

 

Burgess, C.; Sezer, S.; McLaughlin, K.; Eul Gyu Im, "Feature Set Reduction for the Detection of Packed Executables," Irish Signals & Systems Conference 2014 and 2014 China-Ireland International Conference on Information and Communications Technologies (ISSC 2014/CIICT 2014). 25th IET, pp. 263, 268, 26-27 June 2014. doi: 10.1049/cp.2014.0696 Abstract: Emerging sophisticated malware utilises obfuscation to circumvent detection, using packers to disguise its malicious intent. In this paper a novel malware detection method for detecting packed executable files using entropy analysis is proposed. It utilises a reduced feature set of variables to calculate an entropy score from which classification can be performed. Comparative analysis with the state of the art reveals an increase in classification accuracy.
Keywords: invasive software; pattern classification; classification accuracy; entropy analysis; entropy score; feature set reduction; malware detection method; obfuscation; packed executable files detection; packed executables detection; Malware; Obfuscation; Packing; Security (ID#: 15-4953)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912767&isnumber=6912720
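The entropy score at the heart of such approaches can be illustrated with plain byte-level Shannon entropy; the paper's reduced feature set and classifier are not reproduced here:

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Byte-level Shannon entropy in bits per byte (range 0.0 .. 8.0)."""
    if not data:
        return 0.0
    counts = [0] * 256
    for byte in data:
        counts[byte] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

# Packed or encrypted sections look near-random (entropy close to 8 bits
# per byte), while plain text or sparse code scores much lower.
low = shannon_entropy(b"A" * 1024)             # single repeated symbol
high = shannon_entropy(bytes(range(256)) * 4)  # uniform byte distribution
```

A detector of this family would compute such scores per PE section and flag sections whose entropy exceeds an empirically chosen threshold.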

 

Josse, S., "Malware Dynamic Recompilation," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 5080, 5089, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.624 Abstract: Malware is becoming more and more difficult to analyze with conventional static and dynamic analysis tools, because it uses commercial off-the-shelf specialized tools to protect its code. We present in this paper the basis of a multi-target, generic, and automatic binary rewriting tool adapted to the analysis of protected and potentially hostile binary programs. It implements an emulator and several specialized analysis functions to first observe the target program and its execution environment, and then extract and simplify its representation. This simplification is done through a new and generic method of information extraction and de-obfuscation.
Keywords: invasive software; program diagnostics; binary program analysis; code protection; dynamic malware recompilation; emulators; execution environment; information deobfuscation; information extraction; multi-target-generic-automatic binary rewriting tool; off-the-shelf specialized tools; target program analysis functions; Computer architecture; Data mining; Engines; Instruments; Malware; Operating systems (ID#: 15-4954)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759227&isnumber=6758592

 

Bahador, Mohammad Bagher; Abadi, Mahdi; Tajoddin, Asghar, "HPCMalHunter: Behavioral Malware Detection Using Hardware Performance Counters and Singular Value Decomposition," Computer and Knowledge Engineering (ICCKE), 2014 4th International eConference on, pp. 703, 708, 29-30 Oct. 2014. doi: 10.1109/ICCKE.2014.6993402 Abstract: Malicious programs, also known as malware, often use code obfuscation techniques to make static analysis more difficult and to evade signature-based detection. To resolve this problem, various behavioral detection techniques have been proposed that focus on the run-time behaviors of programs in order to dynamically detect malicious ones. Most of these techniques describe the run-time behavior of a program on the basis of its data flow and/or its system call traces. Recent work in behavioral malware detection has shown promise in using hardware performance counters (HPCs), which are a set of special-purpose registers built into modern processors providing detailed information about hardware and software events. In this paper, we pursue this line of research by presenting HPCMalHunter, a novel approach for real-time behavioral malware detection. HPCMalHunter uses HPCs to collect a set of event vectors from the beginning of a program's execution. It also uses the singular value decomposition (SVD) to reduce these event vectors and generate a behavioral vector for the program. By applying support vector machines (SVMs) to the feature vectors of different programs, it is able to identify malicious programs in real-time. Our experimental results show that HPCMalHunter can detect malicious programs at the beginning of their execution with a high detection rate and a low false alarm rate.
Keywords: Hardware; Malware; Matrix decomposition; Radiation detectors; Real-time systems; Support vector machines; Vectors; behavioral malware detection; hardware performance counter; hardware-level detection; real-time detection; singular value decomposition (ID#: 15-4955)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6993402&isnumber=6993332
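The SVD-based reduction step can be sketched with power iteration on AᵀA, which approximates the first right singular vector, the direction a truncated SVD would project event vectors onto. The HPC event counts below are invented toy values, not data from the paper:

```python
def top_singular_vector(matrix, iters=100):
    """Power iteration on A^T A to approximate the first right singular
    vector of A -- the dominant direction for rank-1 SVD reduction."""
    rows, cols = len(matrix), len(matrix[0])
    v = [1.0] * cols
    for _ in range(iters):
        # One step of (A^T A) v, computed as u = A v then w = A^T u.
        u = [sum(matrix[i][j] * v[j] for j in range(cols)) for i in range(rows)]
        w = [sum(matrix[i][j] * u[i] for i in range(rows)) for j in range(cols)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

def project(row, v):
    """Reduce one event-count vector to a single coordinate."""
    return sum(r * x for r, x in zip(row, v))

# Toy HPC event-count vectors (e.g. branch-misses, cache-misses, ...).
events = [[10.0, 2.0, 1.0], [12.0, 3.0, 1.0], [11.0, 2.5, 0.5]]
v1 = top_singular_vector(events)
reduced = [project(row, v1) for row in events]
```

A full pipeline would keep several leading singular directions rather than one, then feed the reduced vectors to an SVM classifier as the paper describes.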

 

Hosseini, Soodeh; Azgomi, Mohammad Abdollahi; Rahmani, Adel Torkaman, "On the Global Dynamics of an SEIRS Epidemic Model of Malware Propagation," Telecommunications (IST), 2014 7th International Symposium on, pp. 646, 651, 9-11 Sept. 2014. doi: 10.1109/ISTEL.2014.7000784 Abstract: In this paper, we mathematically formulate a susceptible-exposed-infectious-recovered-susceptible (SEIRS) epidemic model to study the dynamical behaviors of malware propagation in scale-free networks (SFNs). In the proposed discrete-time epidemic model, we consider the defense mechanism of software diversity to limit epidemic spreading in SFNs. The dynamical behavior of the SEIRS epidemic model is determined by the basic reproductive ratio, which is often used as a threshold parameter. Also, the impact of the assignment of diverse software packages on the propagation process is examined. Theoretical results show that the basic reproductive ratio is significantly dependent on the diversity of software packages and the network topology. The installation of diverse software packages on nodes leads to a decrease in the reproductive ratio and in malware spreading. The results of numerical simulations are given to validate the theoretical analysis.
Keywords: Analytical models; Computational modeling; Malware; Mathematical model; Numerical models; Software packages; Scale-free network; basic reproductive ratio; malware propagation modeling; software diversity (ID#: 15-4956)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7000784&isnumber=7000650
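A minimal discrete-time SEIRS update illustrates the compartment bookkeeping behind such models. The rate parameters here are invented, and the paper's scale-free network structure and software-diversity mechanism are not modelled (this homogeneous sketch has R0 = beta / gamma):

```python
def seirs_step(s, e, i, r, beta, sigma, gamma, omega):
    """One discrete-time step of a homogeneous SEIRS compartment model.

    beta: infection rate, sigma: E->I transition rate, gamma: recovery
    rate, omega: loss-of-immunity (R->S) rate. Fractions sum to 1.
    """
    new_exposed     = beta * s * i    # S -> E
    new_infectious  = sigma * e       # E -> I
    new_recovered   = gamma * i       # I -> R
    new_susceptible = omega * r       # R -> S
    return (s - new_exposed + new_susceptible,
            e + new_exposed - new_infectious,
            i + new_infectious - new_recovered,
            r + new_recovered - new_susceptible)

beta, sigma, gamma, omega = 0.5, 0.25, 0.1, 0.05
state = (0.99, 0.0, 0.01, 0.0)  # 1% of nodes initially infectious
for _ in range(200):
    state = seirs_step(*state, beta, sigma, gamma, omega)
```

Since every outflow from one compartment is an equal inflow to another, the population fractions stay conserved at each step, which is a useful sanity check on any implementation.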

 

Zhen Ling; Junzhou Luo; Kui Wu; Wei Yu; Xinwen Fu, "TorWard: Discovery of Malicious Traffic Over Tor," INFOCOM, 2014 Proceedings IEEE, pp. 1402, 1410, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848074 Abstract: Tor is a popular low-latency anonymous communication system. However, it is currently abused in various ways. Tor exit routers are frequently troubled by administrative and legal complaints. To gain an insight into such abuse, we design and implement a novel system, TorWard, for the discovery and systematic study of malicious traffic over Tor. The system can avoid legal and administrative complaints and allows the investigation to be performed in a sensitive environment such as a university campus. An IDS (Intrusion Detection System) is used to discover and classify malicious traffic. We performed comprehensive analysis and extensive real-world experiments to validate the feasibility and effectiveness of TorWard. Our data shows that around 10% of Tor traffic can trigger IDS alerts. Malicious traffic includes P2P traffic, malware traffic (e.g., botnet traffic), DoS (Denial-of-Service) attack traffic, spam, and others. Around 200 known malware samples have been identified. To the best of our knowledge, we are the first to perform malicious traffic categorization over Tor.
Keywords: computer network security; peer-to-peer computing; telecommunication network routing; telecommunication traffic; DoS; IDS; IDS alerts; P2P traffic; Tor exit routers; denial-of-service attack traffic; intrusion detection system; low-latency anonymous communication system; malicious traffic categorization; malicious traffic discovery; spam; Bandwidth; Computers; Logic gates; Malware; Mobile handsets; Ports (Computers); Servers; Intrusion Detection System; Malicious Traffic; Tor (ID#: 15-4957)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848074&isnumber=6847911

 

Bond, M.; Choudary, O.; Murdoch, S.J.; Skorobogatov, S.; Anderson, R., "Chip and Skim: Cloning EMV Cards with the Pre-play Attack," Security and Privacy (SP), 2014 IEEE Symposium on, pp. 49, 64, 18-21 May 2014. doi: 10.1109/SP.2014.11 Abstract: EMV, also known as "Chip and PIN", is the leading system for card payments worldwide. It is used throughout Europe and much of Asia, and is starting to be introduced in North America too. Payment cards contain a chip so they can execute an authentication protocol. This protocol requires point-of-sale (POS) terminals or ATMs to generate a nonce, called the unpredictable number, for each transaction to ensure it is fresh. We have discovered two serious problems: a widespread implementation flaw and a deeper, more difficult to fix flaw with the EMV protocol itself. The first flaw is that some EMV implementers have merely used counters, timestamps or home-grown algorithms to supply this nonce. This exposes them to a "pre-play" attack which is indistinguishable from card cloning from the standpoint of the logs available to the card-issuing bank, and can be carried out even if it is impossible to clone a card physically. Card cloning is the very type of fraud that EMV was supposed to prevent. We describe how we detected the vulnerability, a survey methodology we developed to chart the scope of the weakness, evidence from ATM and terminal experiments in the field, and our implementation of proof-of-concept attacks. We found flaws in widely-used ATMs from the largest manufacturers. We can now explain at least some of the increasing number of frauds in which victims are refused refunds by banks which claim that EMV cards cannot be cloned and that a customer involved in a dispute must therefore be mistaken or complicit. The second problem was exposed by the above work. 
Independent of the random number quality, there is a protocol failure: the actual random number generated by the terminal can simply be replaced by one the attacker used earlier when capturing an authentication code from the card. This variant of the pre-play attack may be carried out by malware in an ATM or POS terminal, or by a man-in-the-middle between the terminal and the acquirer. We explore the design and implementation mistakes that enabled these flaws to evade detection until now: shortcomings of the EMV specification, of the EMV kernel certification process, of implementation testing, formal analysis, and monitoring customer complaints. Finally, we discuss countermeasures. More than a year after our initial responsible disclosure of these flaws to the banks, action has only been taken to mitigate the first of them, while we have seen a likely case of the second in the wild, and the spread of ATM and POS malware is making it ever more of a threat.
Keywords: financial data processing; invasive software; ATM malware; Asia; EMV card cloning; Europe; North America; POS malware; POS terminals; automated teller machines; card payments; counters; home-grown algorithms; man-in-the-middle attack; point-of-sale terminals; preplay attack; proof-of-concept attacks; timestamps; unpredictable number; Authentication; Authorization; Cloning; Cryptography; Online banking; Protocols; Radiation detectors (ID#: 15-4958)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956556&isnumber=6956545
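The first flaw, a predictable "unpredictable number", can be illustrated schematically by contrasting a counter-based nonce with one drawn from a CSPRNG. This is an invented sketch, not code from any terminal:

```python
import secrets

def weak_nonce(counter):
    """Counter-based 'unpredictable number' of the kind the paper found
    in deployed terminals: trivially predictable by an attacker."""
    return counter.to_bytes(4, "big")

def strong_nonce():
    """What the protocol intends: 4 bytes from a cryptographically
    secure random number generator."""
    return secrets.token_bytes(4)

# An attacker who can predict the terminal's nonces can harvest
# authentication codes in advance -- the essence of the pre-play attack.
predictable = [weak_nonce(c) for c in range(3)]
```

Note that even a strong per-terminal nonce does not stop the second, protocol-level variant described above, where the attacker substitutes the number after generation.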

 

Wrench, P.M.; Irwin, B.V.W., "Towards a Sandbox for the Deobfuscation and Dissection of PHP Malware," Information Security for South Africa (ISSA), 2014, pp. 1, 8, 13-14 Aug. 2014. doi: 10.1109/ISSA.2014.6950504 Abstract: The creation and proliferation of PHP-based Remote Access Trojans (or web shells) used in both the compromise and post exploitation of web platforms has fuelled research into automated methods of dissecting and analysing these shells. Current malware tools disguise themselves by making use of obfuscation techniques designed to frustrate any efforts to dissect or reverse engineer the code. Advanced code engineering can even cause malware to behave differently if it detects that it is not running on the system for which it was originally targeted. To combat these defensive techniques, this paper presents a sandbox-based environment that aims to accurately mimic a vulnerable host and is capable of semi-automatic semantic dissection and syntactic deobfuscation of PHP code.
Keywords: Internet; authoring languages; invasive software; PHP code; PHP malware; PHP-based remote access Trojans; Web platforms; Web shells; advanced code engineering; malware tools; sandbox-based environment; semi-automatic semantic dissection; syntactic deobfuscation; Arrays; Databases; Decoding; Malware; Process control; Semantics; Software; Code deobfuscation; Reverse engineering; Sandboxing (ID#: 15-4959)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6950504&isnumber=6950479

 

Shi Pu; Zhouguo Chen; Chen Huang; Yiming Liu; Bing Zen, "Threat Analysis of Smart Mobile Device," General Assembly and Scientific Symposium (URSI GASS), 2014 XXXIth URSI, pp. 1, 3, 16-23 Aug. 2014. doi: 10.1109/URSIGASS.2014.6929439 Abstract: With the development of telecommunication and network bands, there is a great increase in the number of services and applications available for smart mobile devices, while the population of malicious mobile software is growing rapidly. Most smart mobile devices do not run anti-malware programs to protect against threats such as viruses, trojans, DDoS, malware, and botnets, which gives hackers the chance to control the system. This paper mainly analyses the typical threats that smart mobile devices face.
Keywords: mobile computing; security of data; DDOS; anti-malware programs; botnet; malicious mobile software; malware; mobile security; network bands; smart mobile device; telecommunication network; threat analysis; trojan; virus; Market research; Mobile communication; Mobile handsets; Operating systems; Trojan horses (ID#: 15-4960)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6929439&isnumber=6928981

 

Yoon, Seungyong; Jeon, YongSung, "Security Threats Analysis for Android Based Mobile Device," Information and Communication Technology Convergence (ICTC), 2014 International Conference on, pp. 775, 776, 22-24 Oct. 2014. doi: 10.1109/ICTC.2014.6983285 Abstract: Recently, the number of mobile malware samples has been growing rapidly. To cope with mobile malware, methods for detecting and responding to rooting attacks are actively studied. However, damage such as information leakage and financial charges can occur without a rooting attack. In this paper, we show through experiments that it is possible to conduct DDoS attacks, leak private information, and incur illegal financial charges without rooting attacks, and we analyze the security vulnerabilities and threats in detail.
Keywords: Computer crime; Computer hacking; Malware; Mobile communication; Privacy; Smart phones; Android; Mobile Device; Mobile Malware; Rooting (ID#: 15-4961)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6983285&isnumber=6983064

 

Eslahi, Meisam; Rostami, Mohammad Reza; Hashim, H.; Tahir, N.M.; Naseri, Maryam Var, "A Data Collection Approach for Mobile Botnet Analysis and Detection," Wireless Technology and Applications (ISWTA), 2014 IEEE Symposium on, pp. 199, 204, Sept. 28 2014-Oct. 1 2014. doi: 10.1109/ISWTA.2014.6981187 Abstract: Recently, MoBots, or mobile botnets, have become one of the most critical challenges in mobile communication and cyber security. The integration of mobile devices with the Internet, along with enhanced features and capabilities, has made them an environment of interest for cyber criminals. Therefore, the spread of sophisticated malware such as botnets has significantly increased in mobile devices and networks. On the other hand, bots and botnets have only recently migrated to mobile devices and have not yet been fully explored. Thus, the efficiency of current security solutions is highly limited by the lack of available mobile botnet datasets and samples. As a result, providing a valid dataset to analyse and understand mobile botnets has become a crucial issue in mobile security and privacy. In this paper we present an overview of the currently available datasets and samples and discuss their advantages and disadvantages. We also propose a model to implement a mobile botnet test bed to collect data for further analysis.
Keywords: Command and control systems; Malware; Mobile communication; Mobile computing; Mobile handsets; Servers; Botnets; Dataset; Mobile malware; network traffic; smartphone security (ID#: 15-4962)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6981187&isnumber=6981155

 

Nordvik, R.; Yi-Ching Liao; Langweg, H., "AccountabilityFS: A File System Monitor for Forensic Readiness," Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint, pp. 308, 311, 24-26 Sept. 2014. doi: 10.1109/JISIC.2014.61 Abstract: We present a file system monitor, AccountabilityFS, which prepares an organization for forensic analysis and incident investigation in advance by ensuring that file system operation traces are readily available. We demonstrate the feasibility of AccountabilityFS in terms of performance and storage overheads, and prove its reliability against malware attacks.
Keywords: digital forensics; invasive software; AccountabilityFS file system monitor; file system operation; forensic analysis; forensic readiness; malware attacks; performance overhead; storage overhead; Educational institutions; Forensics; Kernel; Malware; Monitoring; Reliability (ID#: 15-4963)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975599&isnumber=6975536

 

Ming-Yang Su; Wen-Chuan Chang, "Permission-based Malware Detection Mechanisms For Smart Phones," Information Networking (ICOIN), 2014 International Conference on, pp. 449, 452, 10-12 Feb. 2014. doi: 10.1109/ICOIN.2014.6799722 Abstract: Smart phone users often neglect security issues and confirm pop-up windows without reading the permission requirements of the software. As a result, many smart phones have been implanted with viruses. In the Android market, malicious software is disguised as games for users to download, resulting in malicious consumption, phone resource consumption, assistance in crime, or information theft. This study focuses on preventing malware from being installed on Android smart phones, and analyzes whether an app is malware according to the announced permission combinations of the application.
Keywords: computer viruses; smart phones; Android market; crime assistance; information theft; malicious consumption; malicious software; permission requirement; permission-based malware detection mechanisms; phone resource consumption; security issues; smart phone users; Internet; Malware; Operating systems; Probability; Smart phones; Android; permission; security; smart phone (ID#: 15-4964)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799722&isnumber=6799467
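A toy version of permission-combination checking conveys the idea. The risky combinations below are illustrative inventions, not the paper's learned rules, which would be derived from labelled benign and malicious apps:

```python
# Hypothetical risky Android permission combinations for illustration.
RISKY_COMBOS = [
    {"RECEIVE_SMS", "SEND_SMS"},             # SMS interception / premium fraud
    {"READ_CONTACTS", "INTERNET"},           # contact-list exfiltration
    {"RECEIVE_BOOT_COMPLETED", "INTERNET"},  # silent background agent
]

def flag_app(requested_permissions):
    """Return the risky combinations fully covered by an app's manifest."""
    requested = set(requested_permissions)
    return [combo for combo in RISKY_COMBOS if combo <= requested]

hits = flag_app(["INTERNET", "READ_CONTACTS", "RECEIVE_SMS", "SEND_SMS"])
```

A practical detector would weight each combination by its empirical probability of appearing in malware rather than applying a binary rule as here.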

 

Bolton, A.; Heard, N., "Application of a Linear Time Method for Change Point Detection to the Classification of Software," Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint, pp. 292, 295, 24-26 Sept. 2014. doi: 10.1109/JISIC.2014.58 Abstract: A computer program's dynamic instruction trace is the sequence of instructions it generates during run-time. This article presents a method for analysing dynamic instruction traces, with an application in malware detection. Instruction traces can be modelled as piecewise homogeneous Markov chains, and an exact linear time method is used for detecting change points in the transition probability matrix. The change points divide the instruction trace into segments performing different functions. If segments performing malicious functions can be detected, then the software can be classified as malicious. The change point detection method is applied to both a simulated dynamic instruction trace and the dynamic instruction trace generated by a piece of malware.
Keywords: Markov processes; invasive software; matrix algebra; probability; change point detection method; computer program dynamic instruction trace analysis; exact linear time method; instruction sequence; instruction trace modelling; malicious functions; malware detection; piecewise homogeneous Markov chains; simulated dynamic instruction trace; software classification; transition probability matrix; Computational modeling; Computers; Educational institutions; Heuristic algorithms; Malware; Markov processes; Software; PELT algorithm; change point analysis; malware; piecewise homogeneous Markov chain (ID#: 15-4965)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975595&isnumber=6975536
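The linear-time PELT algorithm itself is not reproduced here; the following exhaustive single-change-point search over Markov-chain log-likelihoods shows the underlying idea on a toy instruction trace (PELT generalises this to many change points efficiently):

```python
import math
from collections import Counter

def segment_loglik(seq):
    """Maximised log-likelihood of a first-order Markov chain fitted to seq."""
    pairs = Counter(zip(seq, seq[1:]))       # transition counts
    outgoing = Counter(seq[:-1])             # per-state outgoing totals
    return sum(n * math.log(n / outgoing[a]) for (a, b), n in pairs.items())

def best_change_point(seq, min_len=5):
    """Exhaustive search for the single split that most improves the
    fit over a no-change model; returns None if no split helps."""
    best_t, best_gain = None, 0.0
    base = segment_loglik(seq)
    for t in range(min_len, len(seq) - min_len):
        gain = segment_loglik(seq[:t]) + segment_loglik(seq[t:]) - base
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t

# Toy trace: an ABAB... loop switching to a run of Cs at index 20.
trace = list("AB" * 10 + "C" * 20)
cp = best_change_point(trace)
```

The detected change point sits at the regime switch, where the transition probability matrix of the trace changes.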

 

Kharraz, A.; Kirda, E.; Robertson, W.; Balzarotti, D.; Francillon, A., "Optical Delusions: A Study of Malicious QR Codes in the Wild," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.192, 203, 23-26 June 2014. doi: 10.1109/DSN.2014.103 Abstract: QR codes, a form of 2D barcode, allow easy interaction between mobile devices and websites or printed material by removing the burden of manually typing a URL or contact information. QR codes are increasingly popular and are likely to be adopted by malware authors and cyber-criminals as well. In fact, while a link can "look" suspicious, malicious and benign QR codes cannot be distinguished by simply looking at them. However, despite public discussions about increasing use of QR codes for malicious purposes, the prevalence of malicious QR codes and the kinds of threats they pose are still unclear. In this paper, we examine attacks on the Internet that rely on QR codes. Using a crawler, we performed a large-scale experiment by analyzing QR codes across 14 million unique web pages over a ten-month period. Our results show that QR code technology is already used by attackers, for example to distribute malware or to lead users to phishing sites. However, the relatively few malicious QR codes we found in our experiments suggest that, on a global scale, the frequency of these attacks is not alarmingly high and users are rarely exposed to the threats distributed via QR codes while surfing the web.
Keywords: Internet; Web sites; computer crime; invasive software; telecommunication security; 2D barcode; Internet; URL; Web crawler; Web sites; contact information; malicious QR code; mobile device; optical delusion; phishing sites; Crawlers; Malware; Mobile communication; Servers; Smart phones; Web pages; Mobile devices; malicious QR codes; malware; phishing (ID#: 15-4966)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903579&isnumber=6903544

 

Gupta, Mukesh Kumar; Govil, Mahesh Chand; Singh, Girdhari, "A Context-Sensitive Approach for Precise Detection of Cross-Site Scripting Vulnerabilities," Innovations in Information Technology (INNOVATIONS), 2014 10th International Conference on, pp. 7, 12, 9-11 Nov. 2014. doi: 10.1109/INNOVATIONS.2014.6987553 Abstract: Currently, dependence on web applications is increasing rapidly for social communication, health services, financial transactions, and many other purposes. Unfortunately, the presence of cross-site scripting (XSS) vulnerabilities in these applications allows malicious users to steal sensitive information, install malware, and perform various malicious operations. Researchers have proposed various approaches and developed tools to detect XSS vulnerabilities in the source code of web applications. However, existing approaches and tools are not free from false positive and false negative results. In this paper, we propose an HTML context-sensitive approach, based on taint analysis and defensive programming, for precise detection of XSS vulnerabilities in the source code of PHP web applications. It also provides automatic suggestions to improve the vulnerable source code. Preliminary experiments and results on test subjects show that the proposed approach is more efficient than existing ones.
Keywords: Browsers; Context; HTML; Security; Servers; Software; Standards; Cross-Site Scripting; Software Development Life Cycle; Taint Analysis; Vulnerability Detection; XSS Attacks (ID#: 15-4967)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6987553&isnumber=6985764
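The taint-analysis idea behind such detectors can be sketched in miniature. The toy check below is purely illustrative (the regexes, the function name, and the assumption that any `echo` of raw request data on a single line is vulnerable are all simplifications, far cruder than the paper's HTML context-sensitive analysis):

```python
import re

# Hypothetical taint-source and sink patterns for a toy XSS check.
TAINT_SOURCE = r"\$_(GET|POST|REQUEST|COOKIE)"
SINK = re.compile(r"echo\s+[^;]*" + TAINT_SOURCE)
ESCAPED = re.compile(r"htmlspecialchars\s*\([^)]*" + TAINT_SOURCE)

def find_xss_candidates(php_source):
    """Return 1-based line numbers where request data reaches an echo
    statement without an obvious escaping call on the same line."""
    hits = []
    for lineno, line in enumerate(php_source.splitlines(), start=1):
        if SINK.search(line) and not ESCAPED.search(line):
            hits.append(lineno)
    return hits

hits = find_xss_candidates('echo $_GET["q"];\necho htmlspecialchars($_GET["q"]);')
```

A real detector must track data flow across statements and distinguish the HTML context (attribute, script, URL) into which the tainted value is emitted, which is precisely the gap the paper targets.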

 

Heard, N.; Rubin-Delanchy, P.; Lawson, D., "Filtering Automated Polling Traffic in Computer Network Flow Data," Intelligence and Security Informatics Conference (JISIC), 2014 IEEE Joint, pp. 268-271, 24-26 Sept. 2014. doi: 10.1109/JISIC.2014.52 Abstract: Detecting polling behaviour in a computer network has two important applications. First, the polling can be indicative of malware beaconing, where an undetected software virus sends regular communications to a controller. Second, the cause of the polling may not be malicious, since it may correspond to regular automated update requests permitted by the client. To build models of normal host behaviour for signature-free anomaly detection, this polling behaviour needs to be understood. This article presents a simple Fourier analysis technique for identifying regular polling, and focuses on the second application: modelling the normal behaviour of a host, using real data collected from the computer network of Imperial College London.
Keywords: Fourier analysis; computer network security; system monitoring; Fourier analysis technique; Imperial College London; automated polling traffic filtering; computer network flow data; regular automated update requests; signature-free anomaly detection; Computational modeling; Educational institutions; IP networks; Malware; Monitoring; Servers (ID#: 15-4968)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975589&isnumber=6975536
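The Fourier step at the heart of such a technique is easy to illustrate. The sketch below is not the authors' actual method; the binning scheme, function name, and power-ratio threshold are invented for the example. Event timestamps are binned into a counting series, and the lowest frequency whose spectral power clearly stands out is reported as the polling period:

```python
import numpy as np

def detect_polling_period(timestamps, bin_size=1.0, power_ratio=10.0):
    """Return the dominant polling period in seconds, or None when no
    frequency stands out from the mean spectral power by `power_ratio`.
    The fundamental is taken as the lowest strong frequency, so that
    harmonics of a regular poll are not mistaken for the period."""
    t = np.asarray(timestamps, dtype=float)
    t -= t.min()
    n_bins = int(np.ceil(t.max() / bin_size)) + 1
    counts, _ = np.histogram(t, bins=n_bins, range=(0, n_bins * bin_size))
    spectrum = np.abs(np.fft.rfft(counts - counts.mean())) ** 2
    freqs = np.fft.rfftfreq(n_bins, d=bin_size)
    strong = np.nonzero(spectrum[1:] > power_ratio * spectrum[1:].mean())[0] + 1
    if strong.size == 0:
        return None                      # no clear periodic component
    return 1.0 / freqs[strong[0]]        # lowest strong frequency

# A host polling a server every 60 s (with small jitter) over one hour:
rng = np.random.default_rng(0)
events = np.arange(0, 3600, 60) + rng.uniform(-0.5, 0.5, size=60)
period = detect_polling_period(events)
```

In practice the periodic component would then be filtered out of the flow data so that the residual traffic can be modelled for anomaly detection.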

 

Shulman, H.; Waidner, M., "Towards Forensic Analysis of Attacks with DNSSEC," Security and Privacy Workshops (SPW), 2014 IEEE, pp. 69-76, 17-18 May 2014. doi: 10.1109/SPW.2014.20 Abstract: DNS cache poisoning is a stepping stone towards advanced (cyber) attacks, and can be used to monitor users' activities, for censorship, to distribute malware and spam, and even to subvert correctness and availability of Internet networks and services. The DNS infrastructure relies on challenge-response defences, which are deemed effective for thwarting attacks by (the common) off-path adversaries. Such defences do not suffice against stronger adversaries, e.g., man-in-the-middle (MitM). However, there seems to be little willingness to adopt systematic, cryptographic mechanisms, since stronger adversaries are not believed to be common. In this work we validate this assumption and show that it is imprecise. In particular, we demonstrate that: (1) attackers can frequently obtain MitM capabilities, and (2) even weaker attackers can subvert DNS security. Indeed, as we show, despite wide adoption of challenge-response defences, cache-poisoning attacks against DNS infrastructure are highly prevalent. We evaluate security of domain registrars and name servers, experimentally, and find vulnerabilities, which expose DNS infrastructure to cache poisoning. We review DNSSEC, the defence against DNS cache poisoning, and argue that not only is it the most suitable mechanism for preventing cache poisoning attacks, but it is also the only proposed defence that enables a-posteriori forensic analysis of attacks. Specifically, DNSSEC provides cryptographic evidence, which can be presented to, and validated by, any third party and can be used in investigations and for detection of attacks even long after the attack took place.
Keywords: cache storage; computer crime; cryptographic protocols; digital forensics; digital signatures; invasive software; DNS cache poisoning attacks; DNS infrastructure; DNS security; DNSSEC; Internet networks; Internet services; MitM capabilities; a-posteriori forensic analysis; advanced cyber attacks; attack detection; censorship; challenge-response defences; cryptographic evidences; cryptographic mechanisms; digital signature; domain registrars; malware; man-in-the-middle; name servers; spam; thwarting attacks; users activities monitoring; Computer crime; Cryptography; Forensics; Internet; Routing; Servers; DNS cache-poisoning; DNSSEC; cryptographic evidences; cyber attacks; digital signatures; security (ID#: 15-4969)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957288&isnumber=6957265

 

Jermyn, J.; Jover, R.P.; Istomin, M.; Murynets, I., "Firecycle: A scalable test bed for large-scale LTE security research," Communications (ICC), 2014 IEEE International Conference on, pp. 907-913, 10-14 June 2014. doi: 10.1109/ICC.2014.6883435 Abstract: LTE (Long Term Evolution) is the latest cellular communications standard to provide advanced mobile services that go beyond traditional voice and short messaging traffic. Mobility networks are experiencing a drastic evolution with the advent of Machine to Machine (M2M) systems and the Internet of Things (IoT), which is expected to result in billions of connected devices in the near future. In parallel, the security threat landscape against communication networks has rapidly evolved over the last few years, with major Distributed Denial of Service (DDoS) attacks and the substantial spread of mobile malware. In this paper we introduce Firecycle, a new modeling and simulation platform for next-generation LTE mobility network security research. This standards compliant platform is suitable for large-scale security analysis of threats against a real LTE mobile network. It is designed with the ability to be distributed over the cloud, with an arbitrary number of virtual machines running different portions of the network, thus allowing simulation and testing of a full-scale LTE mobility network with millions of connected devices. Moreover, the mobile traffic generated by the platform is modeled from real data traffic observations from one of the major tier-1 operators in the US.
Keywords: Internet of Things; Long Term Evolution; cellular radio; computer network security; invasive software; DDoS attacks; Firecycle; Internet of Things; IoT; Long Term Evolution; M2M machine; cellular communications; distributed denial of service attacks; large-scale LTE security research; machine to machine system; mobile malware; next-generation LTE mobility network security research; Analytical models; IP networks; Long Term Evolution; Mobile communication; Mobile computing; Security; Smart phones (ID#: 15-4970)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883435&isnumber=6883277



Malware Analysis, Part 5

 
SoS Logo

Malware Analysis, Part 5

 

Malware detection, analysis, and classification are perennial issues in cybersecurity. The research presented here advances malware analysis in some unique and interesting ways. The works cited were published or presented in 2014.  Because of the volume of work, the bibliography is broken into multiple parts.


Dainotti, A.; King, A.; Claffy, K.; Papale, F.; Pescapé, A., "Analysis of a '/0' Stealth Scan from a Botnet," Networking, IEEE/ACM Transactions on, vol. 23, no. 2, pp. 341-354, April 2015. doi: 10.1109/TNET.2013.2297678  Abstract: Botnets are the most common vehicle of cyber-criminal activity. They are used for spamming, phishing, denial-of-service attacks, brute-force cracking, stealing private information, and cyber warfare. Botnets carry out network scans for several reasons, including searching for vulnerable machines to infect and recruit into the botnet, probing networks for enumeration or penetration, etc. We present the measurement and analysis of a horizontal scan of the entire IPv4 address space conducted by the Sality botnet in February 2011. This 12-day scan originated from approximately 3 million distinct IP addresses and used a heavily coordinated and unusually covert scanning strategy to try to discover and compromise VoIP-related (SIP server) infrastructure. We observed this event through the UCSD Network Telescope, a /8 darknet continuously receiving large amounts of unsolicited traffic, and we correlate this traffic data with other public sources of data to validate our inferences. Sality is one of the largest botnets ever identified by researchers. Its behavior represents ominous advances in the evolution of modern malware: the use of more sophisticated stealth scanning strategies by millions of coordinated bots, targeting critical voice communications infrastructure. This paper offers a detailed dissection of the botnet's scanning behavior, including general methods to correlate, visualize, and extrapolate botnet behavior across the global Internet.
Keywords: Animation; Geology; IP networks; Internet; Ports (Computers); Servers; Telescopes; Botnet; Internet background radiation; Internet telephony; Network Telescope; VoIP; communication system security; darknet; network probing; scanning (ID#: 15-4971)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6717049&isnumber=4359146

 

Vance, Andrew, "Flow based analysis of Advanced Persistent Threats Detecting Targeted Attacks in Cloud Computing," Infocommunications Science and Technology, 2014 First International Scientific-Practical Conference Problems of, pp. 173-176, 14-17 Oct. 2014. doi: 10.1109/INFOCOMMST.2014.6992342 Abstract: Cloud computing provides industry, government, and academic users convenient and cost-effective access to distributed services and shared data via the Internet. Due to its distribution of diverse users and aggregation of immense data, cloud computing has increasingly been the focus of targeted attacks. Meta-analysis of industry studies and retrospective research involving cloud service providers reveals that cloud computing is demonstrably vulnerable to a particular type of targeted attack, Advanced Persistent Threats (APTs). APTs have proven to be difficult to detect and defend against in cloud based infocommunication systems. The prevalent use of polymorphic malware and encrypted covert communication channels makes it difficult for existing packet inspecting and signature based security technologies such as firewalls, intrusion detection sensors, and anti-virus systems to detect APTs. In this paper, we examine the application of an alternative security approach which applies an algorithm derived from flow based monitoring to successfully detect APTs. Results indicate that statistical modeling of APT communications can successfully develop deterministic characteristics for detection, providing a more effective and efficient way to protect against APTs.
Keywords: Cloud computing; Computer security; Logic gates; Telecommunication traffic; Vectors; Advanced Persistent Threats ;Cloud Computing; Cyber Security; Flow Based Analysis; Threat Detection (ID#: 15-4972)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6992342&isnumber=6992271

 

Sang, F.L.; Nicomette, V.; Deswarte, Y., "A Tool to Analyze Potential I/O Attacks against PCs," Security & Privacy, IEEE, vol. 12, no. 2, pp. 60-66, Mar.-Apr. 2014. doi: 10.1109/MSP.2013.79 Abstract: Instead of making the CPU execute malware, I/O attacks exploit peripheral devices and, as such, can't be detected by traditional anti-malware techniques. The proposed multipurpose FPGA-based tool can help analyze such attacks and be programmed to mimic a malicious I/O controller, host a Trojan horse, and even apply fuzzing techniques to identify vulnerabilities that could be exploited from I/O controllers or peripheral devices.
Keywords: field programmable gate arrays; invasive software; microcomputers; peripheral interfaces; I/O attack; I/O controller; Trojan horse; antimalware technique; fuzzing technique; multipurpose FPGA-based tool; peripheral device; Central Processing Unit; Computer security; Field programmable gate arrays; Input variables; Malware; Memory management; I/O attacks; fuzzing; vulnerability analysis (ID#: 15-4973)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6567863&isnumber=6798534

 

Kara, A.M.; Binsalleeh, H.; Mannan, M.; Youssef, A.; Debbabi, M., "Detection of Malicious Payload Distribution Channels in DNS," Communications (ICC), 2014 IEEE International Conference on, pp. 853-858, 10-14 June 2014. doi: 10.1109/ICC.2014.6883426 Abstract: Botmasters are known to use different protocols to hide their activities. Throughout the past few years, several protocols have been abused, and recently Domain Name System (DNS) also became a target of such malicious activities. In this paper, we study the use of DNS as a malicious payload distribution channel. We present a system to analyze the resource record activities of domain names and build DNS zone profiles to detect payload distribution channels. Our work is based on an extensive analysis of malware datasets for one year, and a near real-time feed of passive DNS traffic. The experimental results reveal a few previously unreported long-running hidden domains used by the Morto worm for distributing malicious payloads. Our experiments on passive DNS traffic indicate that our system can detect these channels regardless of the payload format.
Keywords: computer network security; invasive software; protocols; telecommunication traffic; Botmasters; DNS traffic; Morto worm; domain name system; malicious activities; malicious payload distribution channel; malicious payload distribution channel detection; malware datasets; passive DNS traffic; protocols; resource record activities; Databases; Malware; Payloads; Protocols; Servers; Syntactics; Tunneling (ID#: 15-4974)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883426&isnumber=6883277
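A common lightweight heuristic in this space, separate from the zone-profile system the paper builds, is that payload-carrying DNS labels tend to look like encoded data: long and high in entropy. The sketch below is illustrative only; the 20-character and 3.5-bit thresholds and function names are invented for the example.

```python
import math
from collections import Counter

def label_entropy(name):
    """Shannon entropy (bits per character) of the leftmost DNS label."""
    label = name.split(".")[0].lower()
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_payload_channel(name, min_len=20, min_entropy=3.5):
    # Long, high-entropy first labels are a common tunnelling indicator.
    label = name.split(".")[0]
    return len(label) > min_len and label_entropy(name) > min_entropy

benign = looks_like_payload_channel("www.example.com")
suspect = looks_like_payload_channel("0123456789abcdef0123456789abcdef.u.example.com")
```

Such per-query heuristics produce false positives on legitimate content-hash labels (e.g. CDNs), which is why the paper profiles resource-record activity over time instead of judging single names.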

 

Crussell, J.; Gibler, C.; Chen, H., "AnDarwin: Scalable Detection of Android Application Clones Based on Semantics," Mobile Computing, IEEE Transactions on, vol. 14, no. 10, pp. 2007-2019, Oct. 1 2015. doi: 10.1109/TMC.2014.2381212 Abstract: Smartphones rely on their vibrant application markets; however, plagiarism threatens the long-term health of these markets. We present a scalable approach to detecting similar Android apps based on their semantic information. We implement our approach in a tool called AnDarwin and evaluate it on 265,359 apps collected from 17 markets including Google Play and numerous third-party markets. In contrast to earlier approaches, AnDarwin has four advantages: it avoids comparing apps pairwise, thus greatly improving its scalability; it analyzes only the app code and does not rely on other information — such as the app’s market, signature, or description — thus greatly increasing its reliability; it can detect both full and partial app similarity; and it can automatically detect library code and remove it from the similarity analysis. We present two use cases for AnDarwin: finding similar apps by different developers (“clones”) and similar apps from the same developer (“rebranded”). In ten hours, AnDarwin detected at least 4,295 apps that are the victims of cloning and 36,106 rebranded apps. Additionally, AnDarwin detects similar code that is injected into many apps, which may indicate the spread of malware. Our evaluation demonstrates AnDarwin’s ability to accurately detect similar apps on a large scale.
Keywords: Cloning; Feature extraction; Libraries; Malware; Semantics; Smart phones; Vectors (ID#: 15-4975)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6985631&isnumber=4358975

 

Daiping Liu; Haining Wang; Stavrou, A., "Detecting Malicious Javascript in PDF through Document Instrumentation," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp. 100-111, 23-26 June 2014. doi: 10.1109/DSN.2014.92 Abstract: An emerging threat vector, embedded malware inside popular document formats, has become rampant since 2008. Owing to its widespread use and Javascript support, PDF has been the primary vehicle for delivering embedded exploits. Unfortunately, existing defenses are limited in effectiveness, vulnerable to evasion, or too computationally expensive to be employed as an on-line protection system. In this paper, we propose a context-aware approach for detection and confinement of malicious Javascript in PDF. Our approach statically extracts a set of static features and inserts context monitoring code into a document. When an instrumented document is opened, the context monitoring code inside will cooperate with our runtime monitor to detect potential infection attempts in the context of Javascript execution. Thus, our detector can identify malicious documents by using both static and runtime features. To validate the effectiveness of our approach in a real world setting, we first conduct a security analysis, showing that our system is able to remain effective in detection and be robust against evasion attempts even in the presence of sophisticated adversaries. We implement a prototype of the proposed system, and perform extensive experiments using 18623 benign PDF samples and 7370 malicious samples. Our evaluation results demonstrate that our approach can accurately detect and confine malicious Javascript in PDF with minor performance overhead.
Keywords: Java; document handling; feature extraction; invasive software; ubiquitous computing; Javascript execution; Javascript support; PDF; context monitoring code; context-aware approach; document format; document instrumentation; embedded malware; emerging threat vector; evasion attempt; malicious Javascript confinement; malicious Javascript detection; malicious document identification; online protection system; potential infection attempt detection; runtime feature; runtime monitoring; security analysis; sophisticated adversaries; static feature extraction; Context; Feature extraction; Instruments; Malware; Monitoring; Portable document format; Runtime; Malcode bearing PDF; document instrumentation; malicious Javascript; malware detection and confinement (ID#: 15-4976)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903571&isnumber=6903544

 

Vargheese, R., "Dynamic Protection for Critical Health Care Systems Using Cisco CWS: Unleashing the Power of Big Data Analytics," Computing for Geospatial Research and Application (COM.Geo), 2014 Fifth International Conference on, pp. 77-81, 4-6 Aug. 2014. doi: 10.1109/COM.Geo.2014.28 Abstract: Critical Care IT systems such as life support devices, vitals monitoring systems, and information systems that provide point of care guidance to care teams are a key component of a lifesaving effort in Healthcare. The megatrends of mobility, social, and cloud, combined with the widespread increase and sophistication of malware, have created new challenges; the point-in-time detection methods at hospitals are no longer effective and pose a big threat to critical care systems. To maintain the availability and integrity of these critical care systems, new adaptive, learning security defense systems are required that not only learn from the traffic entering the hospital, but also proactively learn from the traffic worldwide. Cisco's Cloud Web Security (CWS) provides industry-leading security and control for the distributed enterprise by protecting users everywhere, anytime through Cisco worldwide threat intelligence, advanced threat defense capabilities, and roaming user protection. It leverages big data to perform behavioral analysis, anomaly detection, evasion resistance, and rapid detection services using flow based, signature based, behavior based and full packet capture models to identify threats. This tech talk looks at how Big Data analytics is used in combination with other security capabilities to proactively identify threats and prevent widespread damage to healthcare critical assets.
Keywords: Big Data; cloud computing; data analysis; data protection; health care; hospitals; medical information systems; security of data; Cisco CWS; Cisco Cloud Web security; Cisco worldwide threat intelligence; advanced threat defense capabilities; anomaly detection; behavioral analysis; big data analytics; care guidance; care teams; critical care IT systems; critical health care systems; dynamic protection; evasion resistance; healthcare critical assets; information systems; life support devices; lifesaving effort; monitoring systems; rapid detection services; roaming user protection; Big data; Industries; Malware; Medical services; Monitoring; Behavior Analysis; Big Data Analytics; Cloud; Cloud Web Security; Critical Care; Healthcare; Machine Learning; Malware; Security (ID#: 15-4977)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6910124&isnumber=6910097

 

Pek, G.; Buttyan, L., "Towards the Automated Detection of Unknown Malware on Live Systems," Communications (ICC), 2014 IEEE International Conference on, pp. 847-852, 10-14 June 2014. doi: 10.1109/ICC.2014.6883425 Abstract: In this paper, we propose a new system monitoring framework that can serve as an enabler for automated malware detection on live systems. Our approach takes advantage of the increased availability of hardware assisted virtualization capabilities of modern CPUs, and its basic novelty consists in launching a hypervisor layer on the live system without stopping and restarting it. This hypervisor runs at a higher privilege level than the OS itself, thus, it can be used to observe the behavior of the analyzed system in a transparent manner. For this purpose, we also propose a novel system call tracing method that is designed to be configurable in terms of transparency and granularity.
Keywords: computer network security; invasive software; virtualisation; CPU; automated malware detection; hardware assisted virtualization capability; hypervisor layer; live systems; system call tracing method; system monitoring framework; unknown malware; Data structures; Hardware; Malware; Monitoring; Program processors; Virtual machine monitors; Virtualization (ID#: 15-4978)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883425&isnumber=6883277

 

Shaw, A.L.; Bordbar, B.; Saxon, J.; Harrison, K.; Dalton, C.I., "Forensic Virtual Machines: Dynamic Defence in the Cloud via Introspection," Cloud Engineering (IC2E), 2014 IEEE International Conference on, pp. 303-310, 11-14 March 2014. doi: 10.1109/IC2E.2014.59 Abstract: The Cloud attempts to provide its users with automatically scalable platforms to host many applications and operating systems. To allow for quick deployment, they are often homogenised to a few images, restricting the variations used within the Cloud. An exploitable vulnerability stored within an image means that each instance will suffer from it and as a result, an attacker can be sure of a high pay-off for their time. This makes the Cloud a prime target for malicious activities. There is a clear requirement to develop an automated and computationally-inexpensive method of discovering malicious behaviour as soon as it starts, such that remedial action can be adopted before substantial damage is caused. In this paper we propose the use of Mini-OS, a virtualised operating system that uses minimal resources on the Xen virtualisation platform, for analysing the memory space of other guest virtual machines. These detectors, which we call Forensic Virtual Machines (FVMs), are lightweight such that they are inherently computationally cheap to run. Such a small footprint allows the physical host to run numerous instances to find symptoms of malicious behaviour whilst potentially limiting attack vectors. We describe our experience of developing FVMs and how they can be used to complement existing methods to combat malware. We also evaluate them in terms of performance and the resources that they require.
Keywords: cloud computing; digital forensics; invasive software; operating systems (computers); virtual machines; virtualisation; FVM; Mini-OS virtualised operating system; Xen virtualisation platform; cloud defence; forensic virtual machines; guest virtual machines; image vulnerability; malicious activities; malicious behaviour discovery; malware; Forensics; Kernel; Libraries; Malware; Monitoring; Virtual machining; Xen; cloud computing; forensics; introspection; intrusion detection; monitoring; security virtual machine; virtualization (ID#: 15-4979)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903487&isnumber=6903436

 

Bou-Harb, E.; Debbabi, M.; Assi, C., "Behavioral Analytics for Inferring Large-Scale Orchestrated Probing Events," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, pp. 506-511, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849283 Abstract: The significant dependence on cyberspace has indeed brought new risks that often compromise, exploit and damage invaluable data and systems. Thus, the capability to proactively infer malicious activities is of paramount importance. In this context, inferring probing events, which are commonly the first stage of any cyber attack, render a promising tactic to achieve that task. We have been receiving for the past three years 12 GB of daily malicious real darknet data (i.e., Internet traffic destined to half a million routable yet unallocated IP addresses) from more than 12 countries. This paper exploits such data to propose a novel approach that aims at capturing the behavior of the probing sources in an attempt to infer their orchestration (i.e., coordination) pattern. The latter defines a recently discovered characteristic of a new phenomenon of probing events that could be ominously leveraged to cause drastic Internet-wide and enterprise impacts as precursors of various cyber attacks. To accomplish its goals, the proposed approach leverages various signal and statistical techniques, information theoretical metrics, fuzzy approaches with real malware traffic and data mining methods. The approach is validated through one use case that arguably proves that a previously analyzed orchestrated probing event from last year is indeed still active, yet operating in a stealthy, very low rate mode.
We envision that the proposed approach, which is tailored towards darknet data that is frequently, abundantly and effectively used to generate cyber threat intelligence, could be used by network security analysts, emergency response teams and/or observers of cyber events to infer large-scale orchestrated probing events for early cyber attack warning and notification.
Keywords: IP networks; Internet; computer network security; data mining; fuzzy set theory; information theory; invasive software; statistical analysis; telecommunication traffic; Internet traffic; coordination pattern; cyber attack; cyber threat intelligence; cyberspace; data mining methods; early cyber attack notification; early cyber attack warning; emergency response teams; fuzzy approaches; information theoretical metrics; large-scale orchestrated probing events; malicious activities; malicious real darknet data; malware traffic; network security analysts; orchestration pattern; routable unallocated IP addresses; signal techniques; statistical techniques; Conferences; IP networks; Internet; Malware; Probes (ID#: 15-4980)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849283&isnumber=6849127

 

Maghrabi, L.A., "The Threats of Data Security over the Cloud as Perceived by Experts and University Students," Computer Applications & Research (WSCAR), 2014 World Symposium on, pp. 1-6, 18-20 Jan. 2014. doi: 10.1109/WSCAR.2014.6916842 Abstract: This research investigates the privacy, confidentiality and integrity of data over the Cloud. It explores different data security concerns over the Cloud as perceived by experts and university students. This topic is significant because of the increasing demand for Cloud services, which attracts many people to use them more frequently. Being aware of data security concerns will undoubtedly help users take precautions against threats ranging from unauthorized access to data theft. The comparison between the views of experts and users of data threats over the Cloud encourages investigators to conduct further research to increase awareness and maximize security measures. This study is based on the assumption that data over the Cloud are secure. This paper reviews the literature that focuses on the experts' findings and interpretations of data security issues and threats over the Cloud. The Cloud Security Alliance (CSA) [1] points out seven security threats: abuse and nefarious use of Cloud Computing, insecure Application Programming Interfaces (APIs), malicious insiders, shared technology vulnerabilities, data loss or leakage, account or service hijacking, and unknown risk profile. In addition, experts state different attacks that may occur at any time: DoS attacks, Cloud malware injection, side channel attacks, authentication attacks, and Man-In-The-Middle (MITM) cryptographic attack. In this study, completed questionnaires were collected from students of the University of the West of England to examine their perception and awareness of data threats over the Cloud. Both perceptions from experts and students were compared and analyzed to derive conclusions about data security over the Cloud. A number of findings are discovered.
As experts prove that data might be compromised over the Cloud, the outcome of this research reveals that users are unaware of these threats. Many users are unaware of the issues they face concerning their data's privacy, confidentiality, and integrity. However, the participants value their data privacy. The results also show that they utilize the Cloud for different purposes and various benefits. As for further research, many ideas are proposed with regard to research settings in terms of size of sample, type and background of population, and the choice of qualitative methodology.
Keywords: application program interfaces; authorisation; cloud computing; cryptography; data integrity; data privacy; invasive software; risk analysis; API; CSA; DoS attacks; MITM; University of the West of England; account hijacking; authentication attacks; cloud computing; cloud malware injection; cloud security alliance; cloud services; data confidentiality; data integrity; data leakage; data loss; data privacy; data security threats; data theft; insecure application programming interfaces; malicious insiders; man-in-the-middle cryptographic attack; qualitative methodology; service hijacking; shared technology vulnerabilities; side channels attack; unauthorized access; university students; unknown risk profile; Cryptography; Data privacy; Educational institutions; Cloud Computing; data security; data threats; information security; security threats (ID#: 15-4981)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6916842&isnumber=6916766

 

Manek, A.S.; Sumithra, V.; Shenoy, P.D.; Mohan, M.C.; Venugopal, K.R.; Patnaik, L.M., "DeMalFier: Detection of Malicious Web Pages Using an Effective Classifier," Data Science & Engineering (ICDSE), 2014 International Conference on, pp. 83-88, 26-28 Aug. 2014. doi: 10.1109/ICDSE.2014.6974616 Abstract: The web has become an indispensable global platform that glues together daily communication, sharing, trading, collaboration and service delivery. Web users often store and manage critical information that attracts cybercriminals who misuse the web and the internet to exploit vulnerabilities for illegitimate benefits. Malicious web pages are an increasingly threatening issue on the internet because of their notoriety and their capability to influence. Detecting and analyzing them is very costly because of their qualities and intricacies. The complexities of attacks are increasing day by day because attackers are using blended approaches of various existing attacking techniques. In this paper, a model DeMalFier (Detection of Malicious Web Pages using an Effective ClassiFier) has been developed to apply supervised learning approaches to identify malicious web pages relevant to malware distribution, phishing, drive-by-download and injection by extracting the content of web pages, URL-based features and features based on host information. Experimental evaluation of the DeMalFier model achieved 99.9% accuracy, recommending the impact of our approach for real-life deployment.
Keywords: Internet; computer crime; invasive software; learning (artificial intelligence); DeMalFier; Internet; URL-based features; Web security; cybercriminal attracts; malicious Web pages; malware distribution; phishing; supervised learning approaches; threatening issue; Accuracy; Crawlers; Data models; Feature extraction; HTML; Uniform resource locators; Web pages; DeMalFier; Malicious Web Pages; Pre-Processing Techniques; Supervised Learning; Web Security (ID#: 15-4982)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6974616&isnumber=6974596

 

Idrees, F.; Rajarajan, M., "Investigating the Android Intents and Permissions for Malware Detection," Wireless and Mobile Computing, Networking and Communications (WiMob), 2014 IEEE 10th International Conference on, pp.354,358, 8-10 Oct. 2014. doi: 10.1109/WiMOB.2014.6962194 Abstract: Mobile phones are mastering our day to day scheduling, entertainment, information and almost every aspect of life. With the increasing human dependence on smart phones, threats against these devices have also increased exponentially. Almost all the mobile apps are playing with the mobile user's privacy besides the targeted actions by the malicious apps. Android applications use permissions to use different features and resources of mobile device along with the intents to launch different activities. Various aspects of permission framework have been studied but sufficient attention has not been given to the intent framework. This work is first of its kind which is investigating the combined effects of permissions and intent filters to distinguish between the malware and benign apps. This paper proposes a novel approach to identify the malicious apps by analyzing the permission and intent patterns of android apps. This approach is supplemented with the machine learning algorithms for further classification of apps. Performance of proposed approach has been validated by applying the technique to the available malicious and benign samples collected from a number of sources.
Keywords: Android (operating system);data privacy; invasive software; learning (artificial intelligence);pattern classification; smart phones; Android applications; Android intents; Android permissions; benign apps; human dependence; machine learning algorithms; malicious apps; malware detection; mobile app classification; mobile device features; mobile device resources; mobile phones; mobile user privacy; permission framework; smart phones; Androids; Conferences; Humanoid robots; Malware; Mobile communication; Smart phones; classification; intents; malware detection; permission model (ID#: 15-4983)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6962194&isnumber=6962120

 

Kuriakose, J.; Vinod, P., "Discriminant Features for Metamorphic Malware Detection," Contemporary Computing (IC3), 2014 Seventh International Conference on, pp. 406, 411, 7-9 Aug. 2014. doi: 10.1109/IC3.2014.6897208 Abstract: To unfold a solution for the detection of metamorphic viruses (obfuscated malware), we propose a non signature based approach using feature selection techniques such as Categorical Proportional Difference (CPD), Weight of Evidence of Text (WET), Term Frequency-Inverse Document Frequency (TF-IDF) and Term Frequency-Inverse Document Frequency-Class Frequency (TF-IDF-CF). Feature selection methods are employed to rank and prune bi-gram features obtained from malware and benign files. Synthesized features are further evaluated for their prominence in either of the classes. Using our proposed methodology 100% accuracy is obtained with test samples. Hence, we argue that the statistical scanner proposed by us can identify future metamorphic variants and can assist antiviruses with high accuracy.
Keywords: computer viruses; feature extraction; statistical analysis; CPD; TF-IDF-CF; WET; antivirus; benign files; bigram feature pruning; bigram feature ranking; categorical proportional difference; discriminant features; feature selection technique; feature synthesis; metamorphic malware detection; metamorphic variant identification; metamorphic virus detection; nonsignature based approach; obfuscated malware; statistical scanner; term frequency-inverse document frequency-class frequency; weight of evidence of text; Accuracy; Detectors; Feature extraction; Hidden Markov models; Malware; Measurement; Viruses (medical);classifiers; discriminant; feature selection; metamorphic malware; obfuscation (ID#: 15-4984)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6897208&isnumber=6897132
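To make the bi-gram ranking idea concrete, the sketch below scores opcode bi-grams by summed TF-IDF across a corpus and returns them in rank order. This is only an illustrative sketch of the generic TF-IDF step, not the authors' implementation (which also covers CPD, WET, and TF-IDF-CF); the opcode sequences are made-up examples.

```python
import math
from collections import Counter

def bigrams(opcodes):
    """Adjacent opcode pairs extracted from one disassembled file."""
    return [(opcodes[i], opcodes[i + 1]) for i in range(len(opcodes) - 1)]

def tfidf_rank(samples):
    """Rank bi-gram features by summed TF-IDF over a corpus of samples;
    high-scoring bi-grams are kept and the rest pruned before classification."""
    docs = [Counter(bigrams(s)) for s in samples]
    n_docs = len(docs)
    df = Counter()                      # document frequency of each bi-gram
    for doc in docs:
        df.update(doc.keys())
    scores = Counter()
    for doc in docs:
        total = sum(doc.values())
        for gram, count in doc.items():
            tf = count / total          # term frequency within this sample
            idf = math.log(n_docs / df[gram])
            scores[gram] += tf * idf
    return [gram for gram, _ in scores.most_common()]
```

In practice the top-ranked bi-grams from malware and benign training files would become the feature vector fed to a classifier.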

 

Hasegawa, H.; Yamaguchi, Y.; Shimada, H.; Takakura, H., "A Countermeasure Recommendation System against Targeted Attacks with Preserving Continuity of Internal Networks," Computer Software and Applications Conference (COMPSAC), 2014 IEEE 38th Annual, pp. 400-405, 21-25 July 2014. doi: 10.1109/COMPSAC.2014.63 Abstract: Recently, the sophistication of targeted cyber attacks has made conventional countermeasures useless for defending our networks. Proper network design, i.e., moderate segmentation and adequate access control, is one of the most effective countermeasures to prevent stealth activities of the attacks inside the network. By paying attention to violations of the control, we can become aware of the existence of the attacks. In case suspicious activities are found, we should adopt a stricter design for further analysis and mitigation of damage. However, an organization must assume that its network administrators have full knowledge of its business and enough information about its network structure to select the most suitable design. This paper discusses a recommendation system to enhance the ability of a semi-automatic network design system previously proposed by us. Our new system evaluates candidates from the viewpoint of two criteria: the effectiveness against malicious activities and the impact on business. The former takes the infection probability and hazardousness of communication into account, and the latter considers the impact of the countermeasure on the organization's activities. By reviewing the candidate countermeasures with these criteria, the one most suitable to the organization can be selected.
Keywords: authorisation; probability; recommender systems; access control; countermeasure recommendation system; cyber attacks;hazardousness; infection probability; internal networks; network administrators; network design; targeted attacks; Access control; Malware; Organizations; Personnel; Servers; VLAN; access control; design evaluation; targeted attack (ID#: 15-4985)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6899242&isnumber=6899181

 

Wei Wang; Xing Wang; Dawei Feng; Jiqiang Liu; Zhen Han; Xiangliang Zhang, "Exploring Permission-Induced Risk in Android Applications for Malicious Application Detection," Information Forensics and Security, IEEE Transactions on, vol. 9, no. 11, pp. 1869-1882, Nov. 2014. doi: 10.1109/TIFS.2014.2353996 Abstract: Android has been a major target of malicious applications (malapps). How to detect and keep the malapps out of the app markets is an ongoing challenge. One of the central design points of the Android security mechanism is permission control, which restricts the access of apps to core facilities of devices. However, it imparts a significant responsibility to the app developers with regard to accurately specifying the requested permissions, and to the users with regard to fully understanding the risk of granting certain combinations of permissions. Android permissions requested by an app depict the app's behavioral patterns. In order to help understand Android permissions, in this paper, we explore the permission-induced risk in Android apps on three levels in a systematic manner. First, we thoroughly analyze the risk of an individual permission and the risk of a group of collaborative permissions. We employ three feature ranking methods, namely, mutual information, correlation coefficient, and T-test to rank Android individual permissions with respect to their risk. We then use sequential forward selection as well as principal component analysis to identify risky permission subsets. Second, we evaluate the usefulness of risky permissions for malapp detection with support vector machine, decision trees, as well as random forest. Third, we analyze the detection results in depth and discuss the feasibility as well as the limitations of malapp detection based on permission requests. We evaluate our methods on a very large official app set consisting of 310,926 benign apps and 4,868 real-world malapps and on third-party app sets.
The empirical results show that our malapp detectors built on risky permissions give satisfactory performance (a detection rate of 94.62% with a false positive rate of 0.6%), capture the malapps' essential patterns of violating permission access regulations, and are universally applicable to unknown malapps (detection rate of 74.03%).
Keywords: Android (operating system);invasive software; principal component analysis; smart phones; Android security mechanism; T-test; collaborative permissions; correlation coefficient; decision trees; malapp detection; malicious applications; mutual information; permission control; permission-induced risk; principal component analysis; random forest; sequential forward selection; support vector machine; third-party app sets; Androids; Correlation; Humanoid robots; Principal component analysis ;Security; Smart phones; Support vector machines; Android security; Android system; intrusion detection; malware detection; permission usage analysis (ID#: 15-4986)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6891250&isnumber=6912034
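As a toy illustration of one of the three feature-ranking methods named above, the sketch below ranks permissions by the mutual information each carries about the malicious/benign label (correlation coefficient and T-test would follow the same pattern). This is not the paper's implementation; the permission names, app sets, and labels are hypothetical.

```python
import math
from collections import Counter

def mutual_information(feature, labels):
    """Mutual information (in bits) between a binary feature vector
    (permission requested or not) and binary class labels (1 = malicious)."""
    n = len(labels)
    joint = Counter(zip(feature, labels))
    count_f = Counter(feature)
    count_l = Counter(labels)
    mi = 0.0
    for (f, l), c in joint.items():
        # p(f,l) * log2( p(f,l) / (p(f) p(l)) ); counts cancel to c*n / (cf*cl).
        mi += (c / n) * math.log2(c * n / (count_f[f] * count_l[l]))
    return mi

def rank_permissions(apps, labels, permissions):
    """Rank candidate permissions by how informative each is about
    maliciousness; `apps` is a list of permission sets, `labels` parallel 0/1."""
    scored = [(mutual_information([1 if p in app else 0 for app in apps],
                                  labels), p) for p in permissions]
    return [p for _, p in sorted(scored, reverse=True)]
```

The top-ranked permissions from such a scoring pass would then feed a classifier (e.g. a random forest) for the actual malapp detection step.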

 

Byungho Min; Varadharajan, V., "Design and Analysis of Security Attacks against Critical Smart Grid Infrastructures," Engineering of Complex Computer Systems (ICECCS), 2014 19th International Conference on, pp.59,68, 4-7 Aug. 2014. doi: 10.1109/ICECCS.2014.16 Abstract: Smart grid, the future power grid, is expected to provide better energy efficiency, more customer choices and improved reliability and security. As the smart grid is an integrated system that consists of multiple subsystems, understanding it as a whole system is required to fully understand the security risks it faces. In this paper, a sophisticated cyber-physical system (CPS) unique malware attack against the smart grid is proposed. The paper first outlines the architecture of the smart grid in general. Then we present the characteristics of recent malware attacks targeting the CPS such as Stuxnet and Shamoon. These lead to the design of our proposed attack that incorporates the key features from the smart grid architecture and the recent real attacks. One key aspect of the proposed attack is that it manipulates various physical field devices as well as cyber systems to illustrate how a blackout is possible even under the security-improved smart grid environment. Then, we explain the application of defensive techniques in the context of the suggested attack. Lastly, prototype implementation showing the effectiveness of the attack and the defensive measures is described.
Keywords: critical infrastructures; invasive  software; power engineering computing; smart power grids; CPS; Shamoon; Stuxnet; critical smart grid infrastructures; cyber-physical system; defensive techniques; malware attack; physical field devices; security attacks; smart grid architecture; Control systems; Malware; Payloads; Protocols; Smart grids; Software; cyber attack; cyber-physical system; deceptive attack; malware; security; smart grid (ID#: 15-4987)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6923118&isnumber=6923102

 

Irakiza, D.; Karim, M.E.; Phoha, V.V., "A Non-Interactive Dual Channel Continuous Traffic Authentication Protocol," Information Forensics and Security, IEEE Transactions on, vol.9, no.7, pp.1133, 1140, July 2014. doi: 10.1109/TIFS.2014.2323700 Abstract: We introduce a non-interactive dual-channel protocol for continuous traffic authentication and analyze its security properties. We realize the proposed protocol by facilitating dual channels at the keyboard with the assistance of a lightweight hardware module. The proposed protocol does not require users' explicit engagement in the authentication process. Empirical results show that, for a 30-day period, the maximum false reject rate for all legitimate requests on a day is 6% (with a 30 day daily average of 2.4%) and the false accept rate on any given day is 0%. The daily maximum false reject rate of the user requests falls to 0% if the users are forced to engage explicitly in the protocol operation for a maximum of 1.2% of users' non-typed requests.
Keywords: cryptographic protocols; keyboards; authentication process; continuous traffic authentication; keyboard; lightweight hardware module; noninteractive dual channel continuous traffic authentication protocol; security property; time 30 day; Authentication; Computers; Hardware; Malware; Protocols; Servers; information exfiltration; non-interactive dual channel protocol (ID#: 15-4988)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815645&isnumber=6819111

 

Smith, A.J.; Mills, R.F.; Bryant, A.R.; Peterson, G.L.; Grimaila, M.R., "REDIR: Automated Static Detection of Obfuscated Anti-Debugging Techniques," Collaboration Technologies and Systems (CTS), 2014 International Conference on, pp. 173-180, 19-23 May 2014. doi: 10.1109/CTS.2014.6867561 Abstract: Reverse Code Engineering (RCE) to detect anti-debugging techniques in software is a very difficult task. Code obfuscation is an anti-debugging technique that makes detection even more challenging. The Rule Engine Detection by Intermediate Representation (REDIR) system for automated static detection of obfuscated anti-debugging techniques is a prototype designed to help the RCE analyst improve performance on this tedious task. Three tenets form the REDIR foundation. First, Intermediate Representation (IR) improves the analyzability of binary programs by reducing a large instruction set down to a handful of semantically equivalent statements. Next, an Expert System (ES) rule-engine searches the IR and initiates a sense-making process for anti-debugging technique detection. Finally, an IR analysis process confirms the presence of an anti-debug technique. The REDIR system is implemented as a debugger plug-in. Within the debugger, REDIR interacts with a program in the disassembly view. Debugger users can instantly highlight anti-debugging techniques and determine if the presence of a debugger will cause a program to take a conditional jump or fall through to the next instruction.
Keywords: program debugging; program diagnostics; reverse engineering; ES; IR analysis process; REDIR system; automated static detection; binary program analysis; code obfuscation; expert system rule-engine; obfuscated anti-debugging techniques; reverse code engineering; rule engine detection by intermediate representation system; Debugging; Engines; Instruments; Malware; Registers; Testing; Timing; Anti-debugging; Expert systems; Reverse code engineering; Sensemaking (ID#: 15-4989)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6867561&isnumber=6867522

 

Hu Ge; Li Ting; Dong Hang; Yu Hewei; Zhang Miao, "Malicious Code Detection for Android Using Instruction Signatures," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp. 332-337, 7-11 April 2014. doi: 10.1109/SOSE.2014.48 Abstract: This paper provides an overview of current static analysis technology for Android malicious code, and a detailed analysis of the format of APK, the application package of the Android platform containing the executable file (dex). From the perspective of the binary sequence, the Dalvik VM file is segmented by method, and test samples are analyzed by automated DEX file parsing tools and the Levenshtein distance algorithm, which can effectively detect malicious Android applications that contain the same signatures. As demonstrated on a large number of samples, this static detection system based on signature sequences can not only detect malicious code quickly, but also has a very low rate of false positives and false negatives.
Keywords: Android (operating system); digital signatures; program compilers; program diagnostics; APK format; Android malicious code detection; Android platform executable file; Dalvik VM file; Levenshtein distance algorithm; automated DEX file parsing tools; binary sequence; instruction signatures; malicious Android applications detection; signature sequences; static analysis technology; static detection system; Libraries; Malware; Mobile communication; Smart phones; Software; Testing; Android; DEX; Static Analysis; malicious code (ID#: 15-4990)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830926&isnumber=6825948
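The matching step described above rests on the Levenshtein (edit) distance between instruction sequences. The following is a generic sketch of that distance plus a normalized similarity score that could be thresholded against known malicious signatures; the inputs and any threshold choice are illustrative assumptions, not details from the paper.

```python
def levenshtein(a, b):
    """Edit distance between two instruction sequences (or strings),
    computed with the classic dynamic program using a rolling row."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            cost = 0 if x == y else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def similarity(a, b):
    """Normalized similarity in [0, 1]; a sample scoring above a chosen
    threshold against a known malicious signature would be flagged."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))
```

The rolling-row formulation keeps memory linear in the shorter sequence, which matters when comparing many method bodies pairwise.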

 

Oprisa, C.; Checiches, M.; Nandrean, A., "Locality-Sensitive Hashing Optimizations for Fast Malware Clustering," Intelligent Computer Communication and Processing (ICCP), 2014 IEEE International Conference on, pp.97,104, 4-6 Sept. 2014. doi: 10.1109/ICCP.2014.6936960 Abstract: Large datasets, including malware collections are difficult to cluster. Although we are mainly dealing with polynomial algorithms, the long running times make them difficult to use in practice. The main issue consists in the fact that the classical hierarchical algorithms need to compute the distance between each pair of items. This paper will show a faster approach for clustering large collections of malware samples using a technique called locality-sensitive hashing. This approach performs single-linkage clustering faster than the state of the art methods, while producing clusters of a similar quality. Although our proposed algorithm is still quadratic in theory, the coefficient for the quadratic term is several orders of magnitude smaller. Our experiments show that we can reduce this coefficient to under 0.02% and still produce clusters 99.9% similar with the ones produced by the single linkage algorithm.
Keywords: cryptography; invasive software; optimisation; pattern clustering; polynomials; locality-sensitive hashing optimization; malware clustering; polynomial algorithm; single-linkage clustering; Algorithm design and analysis; Approximation algorithms; Arrays; Clustering algorithms; Dictionaries; Equations; Malware (ID#: 15-4991)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6936960&isnumber=6936959
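A common way to realize locality-sensitive hashing over set-valued malware features is MinHash with banding, sketched below. This is a generic illustration under assumed parameters (12 hash functions in 4 bands), not the optimizations from the paper; the feature sets and sample names are made up.

```python
import zlib
from collections import defaultdict

def _h(seed, feature):
    # Deterministic per-seed hash of a feature string.
    return zlib.crc32(f"{seed}:{feature}".encode())

def minhash_signature(features, seeds):
    """MinHash signature: for each seed, the minimum hash over the feature
    set. Similar sets agree on many positions with high probability."""
    return tuple(min(_h(seed, f) for f in features) for seed in seeds)

def lsh_buckets(samples, n_hashes=12, bands=4):
    """Group samples into candidate buckets. Samples sharing any band of
    their signature land in the same bucket, so exact (quadratic) distance
    computations are only needed inside buckets, not across the whole set."""
    seeds = range(n_hashes)
    rows = n_hashes // bands
    buckets = defaultdict(list)
    for name, features in samples.items():
        sig = minhash_signature(features, seeds)
        for b in range(bands):
            buckets[(b, sig[b * rows:(b + 1) * rows])].append(name)
    return [group for group in buckets.values() if len(group) > 1]
```

Single-linkage clustering would then only compare samples that co-occur in some bucket, which is where the quadratic coefficient shrinks.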

 

Xiangyu Ju, "Android Malware Detection Through Permission and Package," Wavelet Analysis and Pattern Recognition (ICWAPR), 2014 International Conference on, pp. 61-65, 13-16 July 2014. doi: 10.1109/ICWAPR.2014.6961291 Abstract: Malicious Android applications are a serious problem due to the large market share of the Android operating system and the flexibility of Android. An application should be checked before being installed on a phone to avoid privacy information leaks. This paper proposes a static Android malware detection method that uses not only the permissions but also the package of an Android application. The experimental results show the proposed method can detect malicious software effectively, suggesting that the information provided by the package is useful for detection.
Keywords: Android (operating system); invasive software; Android malware detection; Android operating system; malicious Android application; privacy information leak; Accuracy; Androids; Conferences; Feature extraction; Humanoid robots; Malware; Smart phones; APK; Android; DEX; Malware; Package; Permission (ID#: 15-4992)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6961291&isnumber=6961275


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

 

Resilient Security Architectures (IEEE)

 
SoS Logo

Resilient Security Architectures (IEEE)

 

Resilient security architectures are a hard problem in the Science of Security. A survey of the IEEE Digital Library found these scholarly articles about research into resilient security architectures that were published in 2014. A separate listing of works published by ACM is referenced under the heading “Hard Problems: Resilient Security Architectures (ACM).” A great deal of research useful to resilience is coming from the literature on control theory. In addition to the Science of Security community, much of this work is also relevant to the SURE project.


 

Enose, Nampuraja, "Implementing an Integrated Security Management Framework to Ensure a Secure Smart Grid," Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, pp. 778-784, 24-27 Sept. 2014. doi: 10.1109/ICACCI.2014.6968521 Abstract: The paradigm-shifting transition in today's ‘smart grid’ is the perfect convergence of IT and OT systems that build an intelligent electricity system to distribute electricity more effectively all the way from transmission to customer appliances. While this transformation promises immense operational benefits to the utilities, it brings along significant security concerns in terms of increasing the enterprise-class security risk. The challenge for the utilities, therefore, is to implement new approaches and tools in building a secure smart grid network that is reliable and resilient. This paper therefore introduces an ‘integrated security management framework’ that offers critical infrastructure-grade security to multiple utility technologies in establishing an enterprise-wide integrated security management system. This comprehensive security architecture offers improved interconnection of diverse systems, and establishes both physical security and cyber-security, integrated into all functional aspects of the grid.
Keywords: Computer architecture; Computer security; Control systems; Reliability; Smart grids; Standards; IT/OT convergence; critical infrastructure; integrated architecture; security management; smart grid (ID#: 15-5471)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6968521&isnumber=6968191

 

Atighetchi, M.; Adler, A., "A Framework for Resilient Remote Monitoring," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1-8, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900090 Abstract: Today's activities in cyber space are more connected than ever before, driven by the ability to dynamically interact and share information with a changing set of partners over a wide variety of networks. To support dynamic sharing, computer systems and networks are stood up on a continuous basis to support changing mission critical functionality. However, configuration of these systems remains a manual activity, with misconfigurations staying undetected for extended periods, unneeded systems remaining in place long after they are no longer needed, and systems not getting updated to include the latest protections against vulnerabilities. This provides a rich environment for targeted cyber attacks that remain undetected for weeks to months and pose a serious national security threat. To counter this threat, technologies have started to emerge to provide continuous monitoring across any network-attached device for the purpose of increasing resiliency by virtue of identifying and then mitigating targeted attacks. For these technologies to be effective, it is of utmost importance to avoid any inadvertent increase in the attack surface of the monitored system. This paper describes the security architecture of Gestalt, a next-generation cyber information management platform that aims to increase resiliency by providing ready and secure access to granular cyber event data available across a network. Gestalt's federated monitoring architecture is based on the principles of strong isolation, least-privilege policies, defense-in-depth, crypto-strong authentication and encryption, and self-regeneration.
Remote monitoring functionality is achieved through an orchestrated workflow across a distributed set of components, linked via a specialized secure communication protocol, that together enable unified access to cyber observables in a secure and resilient way.
Keywords: Web services; information management; security of data; Gestalt platform; attack identification; attack mitigation; communication protocol; computer networks; computer systems; cyber attacks;cyber observables; cyber space; granular cyber event data; mission critical functionality; national security threat; network-attached device; next-generation cyber information management platform; remote monitoring functionality; resilient remote monitoring; Bridges; Firewalls (computing); Monitoring; Protocols;Servers; XML; cyber security; federated access; middleware; semantic web (ID#: 15-5472)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900090&isnumber=6900080

 

Wei Zhang; Yue-Ji Wang; Xiao-Lei Wang, "A Survey of Defense Against P2P Botnets," Dependable, Autonomic and Secure Computing (DASC), 2014 IEEE 12th International Conference on, pp. 97-102, 24-27 Aug. 2014. doi: 10.1109/DASC.2014.26 Abstract: Botnet, a network of computers that are compromised and controlled by an attacker, is one of the most significant and serious threats to the Internet. Researchers have conducted extensive research and made significant progress. Owing to the extensive use and unique advantages of peer-to-peer (P2P) technology, a new advanced form of botnet with a P2P architecture has emerged and become more resilient to defense methods and countermeasures than traditional centralized botnets. Due to the underlying security limitations of current systems and the Internet architecture, and the complexity of P2P botnets themselves, how to effectively counter the global threat of P2P botnets is still a very challenging issue. In this paper, we present an overall overview and analysis of the current defense methods against P2P botnets. We also analyse in detail the challenges in botnet detection, measurement and mitigation introduced by this new form of P2P botnet, and propose our suggestions for the corresponding challenges.
Keywords: Internet; invasive software; peer-to-peer computing; Internet architecture; P2P architecture; P2P botnet complexity; P2P botnet threat; P2P technology; botnet detection; botnet measurement; botnet mitigation; countermeasures; defense method; peer-to-peer technology; security limitation; serious threat; Crawlers; Current measurement; Feature extraction; Monitoring; Peer-to-peer computing; Protocols; Topology; Botnets detection; Botnets measurement; Botnets mitigation; P2P botnet (ID#: 15-5473)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6945311&isnumber=6945641

 

Srivastava, M., "In Sensors We Trust—A Realistic Possibility?," Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, p. 1, 26-28 May 2014. doi: 10.1109/DCOSS.2014.65 Abstract: Sensors of diverse capabilities and modalities, carried by us or deeply embedded in the physical world, have invaded our personal, social, work, and urban spaces. Our relationship with these sensors is a complicated one. On the one hand, these sensors collect rich data that are shared and disseminated, often initiated by us, with a broad array of service providers, interest groups, friends, and family. Embedded in this data is information that can be used to algorithmically construct a virtual biography of our activities, revealing intimate behaviors and lifestyle patterns. On the other hand, we and the services we use increasingly depend directly and indirectly on information originating from these sensors for making a variety of decisions, both routine and critical, in our lives. The quality of these decisions and our confidence in them depend directly on the quality of the sensory information and our trust in the sources. Sophisticated adversaries, benefiting from the same technology advances as the sensing systems, can manipulate sensory sources and analyze data in subtle ways to extract sensitive knowledge, cause erroneous inferences, and subvert decisions. The consequences of these compromises will only amplify as our society increasingly depends on complex human-cyber-physical systems with increased reliance on sensory information and real-time decision cycles. Drawing upon examples of this two-faceted relationship with sensors in applications such as mobile health and sustainable buildings, this talk will discuss the challenges inherent in designing a sensor information flow and processing architecture that is sensitive to the concerns of both producers and consumers.
For the pervasive sensing infrastructure to be trusted by both, it must be robust to active adversaries who are deceptively extracting private information, manipulating beliefs and subverting decisions. While completely solving these challenges would require a new science of resilient, secure and trustworthy networked sensing and decision systems that would combine the hitherto separate disciplines of distributed embedded systems, network science, control theory, security, behavioral science, and game theory, this talk will provide some initial ideas. These include an approach to enabling privacy-utility trade-offs that balance the tension between the risk of information sharing to the producer and the value of information sharing to the consumer, and a method to secure systems against physical manipulation of sensed information.
Keywords: information dissemination; sensors; information sharing; processing architecture; secure systems; sensing infrastructure; sensor information flow; Architecture; Buildings; Computer architecture; Data mining; Information management; Security; Sensors (ID#: 15-5474)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846138&isnumber=6846129

 

 

Kannan, S.; Karimi, N.; Sinanoglu, O.; Karri, R., "Security Vulnerability of Emerging Non-volatile Main Memories and Countermeasures," Computer-Aided Design of Integrated Circuits and Systems, IEEE Transactions on, vol.34, no.1, pp.2-15, Jan. 2015. doi: 10.1109/TCAD.2014.2369741 Abstract: Emerging non-volatile memory devices such as phase change memories and memristors are replacing SRAM and DRAM. However, non-volatile main memories (NVMM) are susceptible to probing attacks even when powered down. This way they may compromise sensitive data such as passwords and keys that reside in the NVMM. To eliminate this vulnerability, we propose sneak-path encryption (SPE), a hardware intrinsic encryption technique for memristor-based NVMMs. SPE is instruction set architecture (ISA) independent and has minimal impact on performance. SPE exploits the physical parameters, such as sneak-paths in crossbar memories, to encrypt the data stored in a memristor-based NVMM. SPE is resilient to a number of attacks that may be performed on NVMMs. We use a cycle accurate simulator to evaluate the performance impact of SPE based NVMM and compare against other security techniques. SPE can secure an NVMM with a ~1.3% performance overhead.
Keywords: Ciphers; Encryption; Memristors; Nonvolatile memory; Random access memory; Encryption; Hardware Security; Memory Security; Memristor; RRAM (ID#: 15-5475)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6952995&isnumber=6917053

 

Borges Hink, R.C.; Beaver, J.M.; Buckner, M.A.; Morris, T.; Adhikari, U.; Shengyi Pan, "Machine Learning for Power System Disturbance and Cyber-Attack Discrimination," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp.1,8, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900095 Abstract: Power system disturbances are inherently complex and can be attributed to a wide range of sources, including both natural and man-made events. Currently, the power system operators are heavily relied on to make decisions regarding the causes of experienced disturbances and the appropriate course of action as a response. In the case of cyber-attacks against a power system, human judgment is less certain since there is an overt attempt to disguise the attack and deceive the operators as to the true state of the system. To enable the human decision maker, we explore the viability of machine learning as a means for discriminating types of power system disturbances, and focus specifically on detecting cyber-attacks where deception is a core tenet of the event. We evaluate various machine learning methods as disturbance discriminators and discuss the practical implications for deploying machine learning systems as an enhancement to existing power system architectures.
Keywords: learning (artificial intelligence); power engineering computing; power system faults; security of data; cyber-attack discrimination; machine learning; power system architectures; power system disturbance; power system operators; Accuracy; Classification algorithms; Learning systems; Protocols; Relays; Smart grids; SCADA; Smart grid; cyber-attack; machine learning (ID#: 15-5476)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900095&isnumber=6900080

 

Mozaffari-Kermani, M.; Kai Tian; Azarderakhsh, R.; Bayat-Sarmadi, S., "Fault-Resilient Lightweight Cryptographic Block Ciphers for Secure Embedded Systems," Embedded Systems Letters, IEEE, vol.6, no.4, pp. 89-92, Dec. 2014. doi: 10.1109/LES.2014.2365099 Abstract: The development of extremely-constrained embedded systems having sensitive nodes such as RFID tags and nanosensors necessitates the use of lightweight block ciphers. Nevertheless, providing the required security properties does not guarantee their reliability and hardware assurance when the architectures are prone to natural and malicious faults. In this letter, error detection schemes for lightweight block ciphers are proposed with the case study of XTEA (eXtended TEA). Lightweight block ciphers such as XTEA, PRESENT, SIMON, and the like might be better suited for low-resource deeply-embedded systems compared to the Advanced Encryption Standard. Three different error detection approaches are presented and according to our fault-injection simulations, high error coverage is achieved. Finally, field-programmable gate array (FPGA) implementations of these proposed error detection structures are presented to assess their efficiency and overhead. The schemes presented can also be applied to lightweight hash functions with similar structures, making the presented schemes suitable for providing reliability to their lightweight security-constrained hardware implementations.
Keywords: cryptography; embedded systems; error correction; fault tolerant computing; field programmable gate arrays; nanosensors; radiofrequency identification; telecommunication network reliability; FPGA; PRESENT; RFID tags; SIMON; XTEA; advanced encryption standard; error detection schemes; extended TEA; extremely-constrained secure embedded systems; fault-resilient lightweight cryptographic block ciphers; field-programmable gate array; lightweight hash functions; nanosensors; sensitive nodes; tiny encryption algorithm; Ciphers; Cryptography; Data security; Encryption; Fault diagnosis; Field programmable gate arrays; Cryptography; error detection; security (ID#: 15-5477)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6936334&isnumber=6954216
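The XTEA cipher studied in this entry is compact enough to sketch in software. The following Python version is a generic textbook rendering of the cipher itself (64-bit block, 128-bit key, 32 cycles), not the authors' error-detection hardware; all function names here are illustrative.

```python
# Generic XTEA sketch (assumed illustration, not the paper's FPGA design).
DELTA = 0x9E3779B9          # key schedule constant
MASK = 0xFFFFFFFF           # 32-bit word arithmetic

def xtea_encrypt(block, key, rounds=32):
    """Encrypt a 64-bit block (two 32-bit words) under a 128-bit key (four words)."""
    v0, v1 = block
    s = 0
    for _ in range(rounds):
        v0 = (v0 + ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
        s = (s + DELTA) & MASK
        v1 = (v1 + ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
    return v0, v1

def xtea_decrypt(block, key, rounds=32):
    """Invert the Feistel rounds by running the key schedule backwards."""
    v0, v1 = block
    s = (DELTA * rounds) & MASK
    for _ in range(rounds):
        v1 = (v1 - ((((v0 << 4) ^ (v0 >> 5)) + v0) ^ (s + key[(s >> 11) & 3]))) & MASK
        s = (s - DELTA) & MASK
        v0 = (v0 - ((((v1 << 4) ^ (v1 >> 5)) + v1) ^ (s + key[s & 3]))) & MASK
    return v0, v1
```

The lightweight structure (shifts, XORs, and modular additions only) is what makes XTEA amenable to the low-cost error-detection circuitry the paper evaluates.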

 

He Li; Peng Li; Song Guo; Shui Yu, "Byzantine-Resilient Secure Software-Defined Networks With Multiple Controllers," Communications (ICC), 2014 IEEE International Conference on, pp. 695-700, 10-14 June 2014. doi: 10.1109/ICC.2014.6883400 Abstract: Software-defined network (SDN) is the next generation of networking architecture that is dynamic, manageable, cost-effective, and adaptable, making it ideal for the high-bandwidth, dynamic nature of today's applications. In SDN, network management is facilitated through software rather than low-level device configurations. However, the centralized control plane introduced by SDN imposes a great challenge for the network security. In this paper, we present a secure SDN structure, in which each device is managed by multiple controllers rather than a single one as in a traditional manner. It can resist Byzantine attacks on controllers and the communication links between controllers and SDN switches. Furthermore, we design a cost-efficient controller assignment algorithm to minimize the number of required controllers for a given set of switches. Extensive simulations have been conducted to show that our proposed algorithm significantly outperforms random algorithms.
Keywords: fault tolerant control; telecommunication control; telecommunication network management; telecommunication security; Byzantine attacks; SDN switches; centralized control plane; communication links; cost-efficient controller assignment algorithm; multiple controllers; network management; network security; networking architecture; secure SDN structure; software-defined network; Bismuth; Control systems; Fault tolerance; Fault tolerant systems; Protocols; Resource management; Security (ID#: 15-5478)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883400&isnumber=6883277

 

Yang, Q.; Hu, X.; Qin, Z., "Secure Systolic Montgomery Modular Multiplier Over Prime Fields Resilient to Fault-Injection Attacks," Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, vol.23, no.9, pp.1889-1902, September 2015. doi: 10.1109/TVLSI.2014.2356015 Abstract: This paper focuses on the security architecture for Montgomery modular multiplication over prime fields (MMMopfs). We propose a class of noninterleaved systolic secure architectures for MMMopf. Each of the proposed secure architectures has two modules, in which one is a main function module (MFM) which computes MMMopf, the other is an error detection module (EDM) which detects faults either owing to natural causes or deliberate fault injection by an attacker. In our secure architectures, several computing types of systolic array structures are adopted to implement the MFMs, and two error-detecting styles based on linear arithmetic codes are employed to construct the EDMs. We explore various combinations of computing types and error-detecting styles to get some excellent secure architectures. The best implementation of our secure architecture of Style-I can detect 99.9985% of faults in processing elements (PEs), with an average delay of 8.56% of whole Montgomery modular multiplication (MMM) computing time, and about 26.73% overhead resources. Meanwhile, the throughput rate of its MFM is 34.44% higher than that of the best pure MMMopf implementation in literature, with almost the same hardware consumption. The error detection capability, overhead proportion, and the average error-reporting delay of our secure architectures are comparable with or better than Hariri and Reyhani-Masoleh's work on secure MMM over binary extension fields. Moreover, our secure architecture of Style-II can localize 90.63% of injected PEs faults, on condition that the number of affected PEs does not exceed 3. The property of our secure architectures that the injected faults could be localized and detected is novel and valuable.
Keywords: Arrays; Delays; Hardware; Prediction algorithms; Registers; Throughput; Concurrent error detection; Montgomery modular multiplication (MMM);systolic array (ID#: 15-5479)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909065&isnumber=4359553
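For readers unfamiliar with the arithmetic these systolic architectures accelerate, a plain-software sketch of Montgomery reduction (REDC) may help. This is a generic textbook rendering, not the paper's hardware design; the helper names and parameter choices are ours.

```python
# Generic Montgomery reduction sketch (illustrative names, assumed example).

def montgomery_setup(n, r_bits):
    """Precompute n' = -n^{-1} mod R for an odd modulus n, where R = 2^r_bits."""
    r = 1 << r_bits
    n_inv = pow(n, -1, r)        # modular inverse (Python 3.8+)
    return (r - n_inv) % r

def montgomery_mul(a, b, n, n_prime, r_bits):
    """Return a * b * R^{-1} mod n via REDC; requires a * b < n * R."""
    r_mask = (1 << r_bits) - 1
    t = a * b
    m = ((t & r_mask) * n_prime) & r_mask   # make t + m*n divisible by R
    u = (t + m * n) >> r_bits               # exact division by R
    return u - n if u >= n else u
```

The division by R becomes a cheap shift, which is why MMM dominates hardware implementations of modular exponentiation; operands are first mapped into the Montgomery domain (x → x·R mod n) and mapped back at the end.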

 

Binun, A.; Bloch, M.; Dolev, S.; Kahil, M.R.; Menuhin, B.; Yagel, R.; Coupaye, T.; Lacoste, M.; Wailly, A., "Self-Stabilizing Virtual Machine Hypervisor Architecture for Resilient Cloud," Services (SERVICES), 2014 IEEE World Congress on, pp. 200-207, June 27-July 2, 2014. doi: 10.1109/SERVICES.2014.44 Abstract: This paper presents the architecture for a self-stabilizing hypervisor able to recover itself in the presence of Byzantine faults regardless of the state it is currently in. Our architecture is applicable to wide variety of underlying hardware and software and does not require augmenting computers with special hardware. The actions representing defense and recovery strategies can be specified by a user. We describe our architecture in OS-independent terms, thus making it applicable to various virtualization infrastructures. We also provide a prototype extending the Linux-based hypervisor KVM with the self-stabilizing functionality. These features allow augmenting KVM with robustness functionality in the coming stages and moving to cloud management system architectures such as OpenStack to support more industrial scenarios.
Keywords: cloud computing; virtual machines; virtualisation; Byzantine faults; Linux-based hypervisor KVM;OS-independent terms; OpenStack; cloud management system architectures; resilient cloud; robustness functionality; self-stabilizing functionality; self-stabilizing virtual machine hypervisor architecture; virtualization infrastructures; Computer architecture; Context; Hardware; Kernel; Security; Virtual machine monitors; Virtual machining; IaaS; hypervisor; resilience; self-stabilization (ID#: 15-5480)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903266&isnumber=6903223

 

Hoefling, M.; Mill, C.G.; Menth, M., "Distributed Load Balancing for Resilient Information-Centric SeDAX Networks," Network Operations and Management Symposium (NOMS), 2014 IEEE, pp. 1-9, 5-9 May 2014. doi: 10.1109/NOMS.2014.6838254 Abstract: SeDAX is a publish/subscribe information-centric networking architecture where publishers send messages to the appropriate message broker over a Delaunay-triangulated overlay network. Resilient data forwarding and data redundancy enable a high level of reliability. Overlay nodes and topics are addressed via geo-coordinates. A topic is stored on primary and secondary nodes, those nodes closest and second-closest to the topic's coordinate, respectively. The overlay automatically reroutes a topic's messages to its secondary node should its primary node fail. Currently, SeDAX determines the coordinate of a topic by hashing its name. This kind of topic allocation is static, which can lead to unintended load imbalances. In this paper, we propose a topic delegation mechanism to make the assignment of topics to nodes dynamic. Our proposed mechanism is the only existing method to improve the flexibility and resource management of the SeDAX architecture so far. We define the load of SeDAX nodes and coordinates at different levels of resilience. On this basis, we develop distributed algorithms for load balancing. Simulations show that significant load imbalance can occur with static topic assignment and that the proposed algorithms achieve very good load balancing results.
Keywords: computer network security; distributed algorithms; overlay networks; resource allocation; telecommunication network reliability; Delaunay-triangulated overlay network; SeDAX; data redundancy; distributed algorithms; geocoordinates; load balancing; message broker; overlay nodes; primary nodes; publish-subscribe information-centric networking architecture; resilient data forwarding;resource management; secondary nodes; static topic assignment; topic allocation; topic delegation mechanism; Computer architecture; Load management; Load modeling; Measurement; Overlay networks; Resilience; Resource management (ID#: 15-5481)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838254&isnumber=6838210

 

Leontiadis, I.; Molva, R.; Onen, M., "A P2P Based Usage Control Enforcement Scheme Resilient to Re-Injection Attacks," A World of Wireless, Mobile and Multimedia Networks (WoWMoM), 2014 IEEE 15th International Symposium on, pp. 1-8, 19 June 2014. doi: 10.1109/WoWMoM.2014.6918974 Abstract: Existing privacy controls based on access control techniques do not prevent massive dissemination of private data by unauthorized users. We suggest a usage control enforcement scheme that allows users to gain control over their data during its entire lifetime. The scheme is based on a peer-to-peer architecture whereby a different set of peers is randomly selected for data assignment. Usage control is achieved based on the assumption that at least t out of any set of n peers will not behave maliciously. Such a system would still suffer from re-injection attacks whereby attackers can gain ownership of data and the usage policy thereof by simply re-storing data after slight modification of the content. In order to cope with re-injection attacks the scheme relies on a similarity detection mechanism. The robustness of the scheme has been evaluated in an experimental setting using a variety of re-injection attacks.
Keywords: authorisation; data privacy; peer-to-peer computing; P2P based usage control enforcement scheme; access control techniques; data assignment; peer-to-peer architecture; privacy control; re-injection attacks; similarity detection mechanism; Access control; Cryptography; Distributed databases; Peer-to-peer computing; Protocols; Resistance (ID#: 15-5482)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6918974&isnumber=6918912

 

Martins, G.; Bhattacharjee, A.; Dubey, A.; Koutsoukos, X.D., "Performance Evaluation of an Authentication Mechanism in Time-Triggered Networked Control Systems," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1-6, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900098 Abstract: An important challenge in networked control systems is to ensure the confidentiality and integrity of the message in order to secure the communication and prevent attackers or intruders from compromising the system. However, security mechanisms may jeopardize the temporal behavior of the network data communication because of the computation and communication overhead. In this paper, we study the effect of adding Hash Based Message Authentication (HMAC) to a time-triggered networked control system. Time Triggered Architectures (TTAs) provide a deterministic and predictable timing behavior that is used to ensure safety, reliability and fault tolerance properties. The paper analyzes the computation and communication overhead of adding HMAC and the impact on the performance of the time-triggered network. Experimental validation and performance evaluation results using a TTEthernet network are also presented.
Keywords: authorisation; computer network security; local area networks; networked control systems; HMAC; TTEthernet network; authentication mechanism; communication overhead; computation overhead; fault tolerance property; hash based message authentication; message confidentiality; message integrity; network data communication; reliability property; safety property; security mechanisms; time triggered architectures; time-triggered networked control systems; timing behavior; Cryptography; Message authentication; Receivers; Switches; Synchronization; HMAC; Performance Evaluation; Secure Messages; TTEthernet; Time-Trigger Architectures (ID#: 15-5483)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900098&isnumber=6900080
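The HMAC mechanism whose overhead this paper measures can be illustrated with Python's standard hmac module. This is a generic sketch of message tagging and constant-time verification, not the authors' TTEthernet implementation; the example payload is invented for illustration.

```python
# Generic HMAC-SHA256 sketch (assumed example, not the paper's code).
import hashlib
import hmac

def tag_message(key: bytes, payload: bytes) -> bytes:
    """Compute the HMAC-SHA256 tag appended to a control message."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_message(key: bytes, payload: bytes, tag: bytes) -> bool:
    """Recompute and compare in constant time to avoid timing side channels."""
    return hmac.compare_digest(tag_message(key, payload), tag)
```

The 32-byte tag plus the hash computation on sender and receiver is exactly the kind of per-message cost that must fit within the deterministic slot timing of a time-triggered network.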

 

Kounev, Velin; Tipper, David; Grainger, Brandon M.; Reed, Gregory, "Analysis of an Offshore Medium Voltage DC Microgrid Environment—Part II: Communication Network Architecture," T&D Conference and Exposition, 2014 IEEE PES, pp. 1-5, 14-17 April 2014. doi: 10.1109/TDC.2014.6863567 Abstract: The microgrid is a conceptual solution proposed as a plug-and-play interface for various types of renewable generation resources and loads. The high-level technical challenges associated with microgrids include (1) operation modes and transitions that comply with IEEE1547 and (2) control architecture and communication. In Part I, the emphasis is on the design of an electrical control architecture for an offshore oil drilling platform powered by wind generation. Engineering a distributed control system having safety critical features, requiring real-time performance is challenging. In this follow-up article we introduce the communication framework for the microgrid scenario under investigation. In all communication networks, stochastic delays and performance limits are inherent. The only feasible approach is to put bounds on the random processes, qualitatively define the worst cases, and build the distributed control system to be resilient enough to tolerate those behaviors. This is the approach taken by this paper. We propose a communication architecture, discuss performance requirements of the subsystems, and lay out network solutions meeting those specifications.
Keywords: Communication Network Performance and Availability; DC Microgrids; Distributed Control Architecture; Security (ID#: 15-5484)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6863567&isnumber=6863147

 

Rege, A.; Ferrese, F.; Biswas, S.; Li Bai, "Adversary Dynamics and Smart Grid Security: A Multiagent System Approach," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp. 1-7, 19-21 Aug. 2014. doi: 10.1109/ISRCS.2014.6900101 Abstract: Power grid is the backbone of infrastructures that drive the US economy and security, which makes it a prime target of cybercriminals or state-sponsored terrorists, and warrants special attention for its protection. Commonly used approaches to smart grid security are usually based on various mathematical tools, and ignore the human behavior component of cybercriminals. This paper introduces a new dimension to the cyberphysical system architecture, namely human behavior, and presents a modified CPS framework, consisting of a. cyber system: SCADA control system and related protocols, b. physical system: power grid infrastructure, c. the adversary: cybercriminals, and d. the defender: system operators and engineers. Based on interviews of ethical hackers, this paper presents an adversary-centric method that uses adversary's decision tree along with control theoretic tools to develop defense strategies against cyberattacks on power grid.
Keywords: SCADA systems; computer crime; decision trees; multi-agent systems; power engineering computing; power system control; power system protection; power system security; protocols; smart power grids; SCADA control system; Smart Grid protection; US economy; US security; adversary-centric method; cyberattack; cybercriminals; cyberphysical system architecture; decision tree; ethical hackers; human behavior; mathematical tools; modified CPS framework; multiagent system approach; power grid; power grid infrastructure; protocols; smart grid security; Computer crime; Control systems; Decision making; Mathematical model; Power grids; Power system dynamics; Grid security; cyber attackers; cyberphysical systems; ethical hackers; human behavior (ID#: 15-5485)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900101&isnumber=6900080

 

Tao Yin; Yongzheng Zhang; Shuhao Li, "DR-SNBot: A Social Network-Based Botnet with Strong Destroy-Resistance," Networking, Architecture, and Storage (NAS), 2014 9th IEEE International Conference on, pp. 191-199, 6-8 Aug. 2014. doi: 10.1109/NAS.2014.37 Abstract: Social network-based botnets have become an important research direction of botnets. To avoid the single-point failure of existing centralized botnets, we propose a Social Network-based Botnet with strong Destroy-Resistance (DR-SNBot). By enhancing the security of the Command and Control (C&C) channel and introducing a divide-and-conquer and automatic reconstruction mechanism, we greatly improve the destroy-resistance of DR-SNBot. Moreover, we design the pseudo code for nickname generation algorithm, botmaster and bot respectively. Then, we construct the DR-SNBot via a Sina blog and run simulated experiments to evaluate it. Furthermore, we make comparisons of controllability between botnets Mrrbot and DR-SNBot. The experimental results indicate that DR-SNBot is more resilient. It is not only available in real-world environment, but also resistant enough to varying degrees of C&C-server removals in simulated environment.
Keywords: command and control systems; divide and conquer methods; invasive software; social networking (online); C&C channel; C&C-server removals; Command and Control channel; DR-SNBot; botmaster; botnets Mrrbot; centralized botnets; divide-and-conquer and automatic reconstruction mechanism; nickname generation algorithm; single-point failure; social network-based botnet; strong destroy-resistance; Blogs; Computer architecture; Image reconstruction; Registers; Security; Servers; Social network services; botnet; command and control channel; network security; reconstruction mechanism; social networks (ID#: 15-5486)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6923180&isnumber=6923143



Smart Grid Security

 
SoS Logo

Smart Grid Security

 

The primary value of published research in smart grid technologies—the use of cyber-physical systems to coordinate the generation, transmission, and use of electrical power and its sources—lies in the grid's strategic importance and in the consequences of intrusion. The smart grid is of particular importance to the Science of Security, and its problems embrace several of the hard problems, notably resiliency and metrics. The work cited here was published in 2015 and was recovered from IEEE.


 

Law, Y.W.; Alpcan, T.; Palaniswami, M., "Security Games for Risk Minimization in Automatic Generation Control," Power Systems, IEEE Transactions on, vol. 30, no. 1, pp. 223-232, Jan. 2015. doi: 10.1109/TPWRS.2014.2326403 Abstract: The power grid is a critical infrastructure that must be protected against potential threats. While modern technologies at the center of the ongoing smart grid evolution increase its operational efficiency, they also make it more susceptible to malicious attacks such as false data injection to electronic monitoring systems. This paper presents a game-theoretic approach to smart grid security by combining quantitative risk management techniques with decision making on protective measures. The consequences of data injection attacks are quantified using a risk assessment process where the well-known conditional value-at-risk (CVaR) measure provides an estimate of the defender's loss due to load shed in simulated scenarios. The calculated risks are then incorporated into a stochastic security game model as input parameters. The decisions on defensive measures are obtained by solving the game using dynamic programming techniques which take into account resource constraints. Thus, the formulated security game provides an analytical framework for choosing the best response strategies against attackers and minimizing potential risks. The theoretical results obtained are demonstrated through numerical examples. Simulation results show that different risk measures lead to different defense strategies, but the CVaR measure prioritizes high-loss tail events.
Keywords: decision making; load shedding; power generation control; power system protection; smart power grids; stochastic games; automatic generation control; conditional value-at-risk measure; data injection attacks; decision making; defensive measures; dynamic programming techniques; electronic monitoring systems; false data injection; game-theoretic approach; high-loss tail events; load shed; malicious attacks; operational efficiency; power grid; protective measures; quantitative risk management techniques; resource constraints; response strategies; risk assessment process; risk minimization; security games; smart grid evolution; smart grid security; stochastic security game model; Automatic generation control; Frequency control; Game theory; Games; Risk management; Security; Smart grids; Automatic generation control; cyber-physical system security; security games; smart grid (ID#: 15-4821)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824274&isnumber=6991618
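The conditional value-at-risk measure this paper uses to quantify load-shed losses has a simple empirical form: the mean of the worst (1 − alpha) fraction of outcomes. The sketch below is a generic discrete-sample estimate, assuming losses arrive as a list of simulated values; it does not reproduce the paper's risk model or data.

```python
# Generic empirical CVaR sketch (assumed illustration, not the paper's model).

def cvar(losses, alpha=0.95):
    """Mean loss in the worst (1 - alpha) tail of the empirical distribution."""
    ordered = sorted(losses)
    var_index = max(int(len(ordered) * alpha) - 1, 0)  # empirical VaR position
    var = ordered[var_index]                           # value-at-risk threshold
    tail = [x for x in ordered if x >= var]            # losses at or beyond VaR
    return sum(tail) / len(tail)
```

Because CVaR averages over the tail rather than reporting a single quantile, a defender optimizing against it naturally prioritizes the rare high-loss events the abstract mentions.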

 

Zhuo Lu; Wenye Wang; Wang, C., "Camouflage Traffic: Minimizing Message Delay for Smart Grid Applications under Jamming," Dependable and Secure Computing, IEEE Transactions on, vol. 12, no. 1, pp. 31-44, Jan.-Feb. 2015. doi: 10.1109/TDSC.2014.2316795 Abstract: Smart grid is a cyber-physical system that integrates power infrastructures with information technologies. To facilitate efficient information exchange, wireless networks have been proposed to be widely used in the smart grid. However, the jamming attack that constantly broadcasts radio interference is a primary security threat to prevent the deployment of wireless networks in the smart grid. Hence, spread spectrum systems, which provide jamming resilience via multiple frequency and code channels, must be adapted to the smart grid for secure wireless communications, while at the same time providing latency guarantee for control messages. An open question is how to minimize message delay for timely smart grid communication under any potential jamming attack. To address this issue, we provide a paradigm shift from the case-by-case methodology, which is widely used in existing works to investigate well-adopted attack models, to the worst-case methodology, which offers delay performance guarantee for smart grid applications under any attack. We first define a generic jamming process that characterizes a wide range of existing attack models. Then, we show that in all strategies under the generic process, the worst-case message delay is a U-shaped function of network traffic load. This indicates that, interestingly, increasing a fair amount of traffic can in fact improve the worst-case delay performance. As a result, we demonstrate a lightweight yet promising system, transmitting adaptive camouflage traffic (TACT), to combat jamming attacks. TACT minimizes the message delay by generating extra traffic called camouflage to balance the network load at the optimum. Experiments show that TACT can decrease the probability that a message is not delivered on time by an order of magnitude.
Keywords: jamming; power system security; probability; radio networks; radiofrequency interference; smart power grids; telecommunication security; telecommunication traffic; TACT; U-shaped function; camouflage traffic; code channel; control messages; cyber-physical system; delay performance guarantee; existing attack model; generic jamming process; information exchange; information technologies; jamming attack; jamming resilience; latency guarantee; message delay minimization; multiple-frequency channel; network load balance; network traffic load; power infrastructures; primary security threat; probability; radio interference broadcast; smart grid application; smart grid communication; spread spectrum systems; transmitting adaptive camouflage traffic; well-adopted attack model; wireless communication security; wireless network deployment; worst-case message delay; Communication system security; Delays; Power distribution; Receivers; Smart grids; Wireless networks; Smart grid; jamming attacks; message delay; performance modeling; wireless applications; worst-case analysis (ID#: 15-4822)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786992&isnumber=7008601

 

Martínez, E.; De La O Serna, J., "Smart grids Part 1: Instrumentation Challenges," Instrumentation & Measurement Magazine, IEEE, vol. 18, no. 1, pp. 6-9, February 2015. doi: 10.1109/MIM.2015.7016673 Abstract: In general, a smart grid is a modernized electrical grid that uses digital technology for measurement, control, and protection functions to ensure a network security. It tries to solve the problem of weather-dependent fluctuations of renewable energy power supplies (e.g. wind turbines, or photo-voltaic systems) when they are connected to an actual power system. In two papers in this issue, we present some of the challenges raised by Smart Grids in instrumentation and measurement applications, putting emphasis on synchrophasor estimation. In this part 1 article, we describe the problem of identifying a normal condition from a fault condition and between a fault condition and an oscillation using phasor estimations in protective relays. In "Synchrophasor Measurement Challenges in Smart Grids," we discuss a novel synchrophasor-estimation algorithm that improves the accuracy of the estimates under oscillations conditions and serves to identify electromechanical modes in Smart Grids. This algorithm ameliorates protection as well as measurement applications in smart grids.
Keywords: phasor measurement; power supplies to apparatus; power system faults; power system protection; power system security; relay protection; renewable energy sources; smart power grids; ameliorates protection; digital technology; electrical grid; electromechanical modes identification; fault condition; instrumentation application; network security; oscillations conditions; protective relay; renewable energy power supply; smart grid; synchrophasor estimation; weather-dependent fluctuation; Circuit faults; Oscillators; Phasor measurement units; Power system stability; Power system transients; Protective relaying; Smart grids (ID#: 15-4823)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7016673&isnumber=7016663

 

Yu, W.; Xue, Y.; Luo, J.; Ni, M.; Tong, H.; Huang, T., "An UHV Grid Security and Stability Defense System: Considering the Risk of Power System Communication," Smart Grid, IEEE Transactions on, vol. PP, no. 99, pp. 1-1, 5 February 2015. doi: 10.1109/TSG.2015.2392100 Abstract: An ultra high voltage (UHV) ac and dc interconnection will become the foundation of China's future smart grid. Due to the wide spread of interconnected regions, the distance between control stations will increase dramatically. Therefore, the communication system's reliability and real-time performance will become increasingly crucial. However, failures of the communication system, such as interruptions, latency, and bit error, are inevitable. This paper uses the UHV grid security and stability defense system (SSDS) as an example to analyze its requirements for communication and the impact of communication failure on the system's performance. The effect of communication latency on the power system's stability is analyzed quantitatively and qualitatively. Based on this analysis, a framework of an UHV grid SSDS considering the risk of the communication system is proposed. A preliminary power system and communication system co-simulation tool is developed to perform a case study. The case study demonstrates that communication latency in the UHV grid changes the control strategy's effectiveness due to a delay in executing the control strategy. Furthermore, communication latency will negatively affect the power grid's stability.
Keywords: Electromagnetics; Generators; Power system stability; Real-time systems; Stability criteria; Synchronous digital hierarchy; Communication interruption; communication latency; power system and communication system co-simulation; security and stability defense system (SSDS);stability control (ID#: 15-4824)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7029711&isnumber=5446437

 

Feng Diao; Fangguo Zhang; Xiangguo Cheng, "A Privacy-Preserving Smart Metering Scheme Using Linkable Anonymous Credential," Smart Grid, IEEE Transactions on, vol. 6, no. 1, pp. 461, 467, Jan. 2015. doi: 10.1109/TSG.2014.2358225 Abstract: Smart grid, as the next power grid, can efficiently monitor, predicate, and control energy generation/consumption by using the real-time users' electricity information. However, the fine-grained user energy consumption information may reveal the private information of the user. In this paper, we construct a linkable anonymous credential protocol based on Camenisch-Lysyanskaya (CL) signature. Then, we propose a privacy preserving smart metering scheme based on the new linkable anonymous credential. In addition to providing privacy protection for the user, our protocol also has the security properties of message authentication and traceability of fault smart metering. And there are some other useful features in our protocol, such as no need of trust-third party, dynamic users' enrollment and revocation, and complex statistical analysis of the energy use information. The computation cost and communication overhead of our scheme is O(1), which is independent of the user number. The simulation results show that our scheme is efficient.
Keywords: protocols; smart meters; smart power grids; Camenisch-Lysyanskaya signature; energy consumption; energy generation; fault smart metering; linkable anonymous credential; linkable anonymous credential protocol; message authentication; power grid; privacy protection; privacy-preserving smart metering scheme; protocol; security properties; smart grid; traceability; Data privacy; Electricity; Privacy; Protocols; Security; Smart grids; Statistical analysis; Anonymous credential; authentication; privacy; smart metering; traceability (ID#: 15-4825)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6910301&isnumber=6991622

 

Chen, H.; Xuan, P.; Wang, Y.; Tan, K.; Jin, X., "Key Technologies for Integration of Multitype Renewable Energy Sources-Research on Multi-Timeframe Robust Scheduling/Dispatch," Smart Grid, IEEE Transactions on, vol. PP, no. 99, pp. 1, 1, 26 January 2015.  doi: 10.1109/TSG.2015.2388756 Abstract: Large-scale integration of multitype renewable energy (RE) sources (intermittent energy sources) has become an important feature in smart grid development all over the world. It is internationally recognized that the island (or weak-tie connected) power grids are the best platforms for intermittent energy integration test and demonstration because of their abundant RE resources, scarcity of conventional energy, and technical difficulty with accommodation of intermittent energy. The ongoing research on Hainan (the second biggest island in China) power grid will achieve a comprehensive breakthrough in power grid planning, analysis, scheduling, operation, relay protection, security control, disaster prevention, and other key areas in multitype RE source integration. To be specific, this paper focuses on the key part of the research project-optimal scheduling and complementary operation and a new framework of multitime-frame robust scheduling/dispatch system is first proposed, which is different from most other robust approaches and lays special emphasis on the engineering characteristics of power system operation. Simulation results based on the real data of Hainan power grid show that the approach presented is effective and will be put into online operation in the near future.
Keywords: Optimal scheduling; Power grids; Robustness; Uncertainty; Wind forecasting; Wind power generation; Intermittent energy source; island power grid; optimal scheduling/dispatch; robustness; smart grids (ID#: 15-4826)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7021935&isnumber=5446437

 

Amin, S.; Schwartz, G.A.; Cardenas, A.A.; Sastry, S.S., "Game-Theoretic Models of Electricity Theft Detection in Smart Utility Networks: Providing New Capabilities with Advanced Metering Infrastructure," Control Systems, IEEE, vol.35, no.1, pp.66, 81, Feb. 2015. doi: 10.1109/MCS.2014.2364711 Abstract: The smart grid refers to the modernization of the power grid infrastructure with new technologies, enabling a more intelligently networked automated system with the goal of improving efficiency, reliability, and security, while providing more transparency and choices to electricity customers. A key technology being widely deployed on the consumption side of the grid is advanced metering infrastructure (AMI).
Keywords: game theory; power meters; power system reliability; power system security; smart power grids; AMI; advanced metering infrastructure; electricity customers; electricity theft detection; game theoretic models; power grid infrastructure; smart grid; smart utility networks; Computer security; Electricity supply industry; Investment; Power distribution; Power grids; Power system reliability; Schedules; Smart meters (ID#: 15-4827)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011178&isnumber=7011167

 

Akkaya, K.; Rabieh, K.; Mahmoud, M.; Tonyali, S., "Customized Certificate Revocation Lists for IEEE 802.11s-Based Smart Grid AMI Networks," Smart Grid, IEEE Transactions on, vol.6, no.5, pp.2366-2374, September 2015. doi: 10.1109/TSG.2015.2390131 Abstract: Public-key cryptography (PKC) is widely used in smart grid (SG) communications to reduce the overhead of key management. However, PKC comes with its own problems in terms of certificate management. Specifically, certificate revocation lists (CRLs) need to be maintained and distributed to the smart meters (SMs) in order to ensure security of the communications. The size of CRLs may grow over time and eventually may introduce additional delay, bandwidth, and storage overhead when various applications are run on SG. In this paper, we propose novel algorithms for creating customized CRLs with reduced size for IEEE 802.11s-based advanced metering infrastructure (AMI) networks. Rather than maintaining a huge-size single CRL that introduces unnecessary search time and storage, the idea is to cluster/group SMs within the AMI network and create CRLs based on these groups. The grouping is mainly done in such a way that they bring together the SMs that will be very likely to communicate so that the CRLs will be kept local to that group. To this end, we propose two novel grouping algorithms. The first algorithm is a bottom-up approach, which is based on the existing routes from the SMs to the gateway. Since the SMs will be sending their data to the gateway through the nodes on the route, this forms a natural grouping. The second approach is a top-down recursive approach, which considers the minimum spanning tree of the network and then divides it into smaller subtrees. Via grouping, the length of the CRL for each SM and the corresponding distribution overhead can be reduced significantly. 
Simulation results have shown that our approach can maintain a balance between the size of the CRL and the number of signatures generated by CAs while guaranteeing security of the communications.
Keywords: IEEE 802.11 Standards; Logic gates; Relays; Security; Smart grids; Wireless communication; Certificate revocations; grouping schemes; public key cryptography; security; smart grid (ID#: 15-4828)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024936&isnumber=5446437
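
The bottom-up grouping idea described in the abstract above, clustering smart meters (SMs) along their existing routes to the gateway, can be illustrated with a toy sketch. The routing tree, node names, and helper function here are hypothetical illustrations of the general idea, not the authors' algorithm:

```python
# Toy illustration of route-based grouping: each smart meter (SM) reports
# through a tree of relays toward the gateway. Grouping SMs by the subtree
# rooted at each gateway-adjacent node keeps each CRL local to the meters
# most likely to communicate with one another.

def group_by_route(parent, gateway="GW"):
    """Map each node to the gateway-adjacent ancestor on its route."""
    groups = {}
    for node in parent:
        hop = node
        # Walk up the routing tree until the next hop is the gateway.
        while parent[hop] != gateway:
            hop = parent[hop]
        groups.setdefault(hop, []).append(node)
    return groups

# Hypothetical AMI routing tree: parent[x] is x's next hop to the gateway.
routes = {"sm1": "sm2", "sm2": "GW", "sm3": "sm2", "sm4": "sm5", "sm5": "GW"}
print(group_by_route(routes))
```

In a real IEEE 802.11s AMI network the routing tree would come from the mesh routing protocol, and a separate, smaller CRL would then be issued per group.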

 

Chim, T.W.; Siu-Ming Yiu; Li, V.O.K.; Hui, L.C.K.; Jin Zhong, "PRGA: Privacy-Preserving Recording & Gateway-Assisted Authentication of Power Usage Information for Smart Grid," Dependable and Secure Computing, IEEE Transactions on, vol.12, no.1, pp. 85, 97, Jan.-Feb. 1 2015.  doi: 10.1109/TDSC.2014.2313861 Abstract: Smart grid network facilitates reliable and efficient power generation and transmission. The power system can adjust the amount of electricity generated based on power usage information submitted by end users. Sender authentication and user privacy preservation are two important security issues on this information flow. In this paper, we propose a scheme such that even the control center (power operator) does not know which user makes the requests of using more power or agreements of using less power until the power is actually used. At the end of each billing period (i.e., after electricity usage), the end user can prove to the power operator that it has really requested to use more power or agreed to use less power earlier. To reduce the total traffic volume in the communications network, our scheme allows gateway smart meters to help aggregate power usage information, and the power generators to determine the total amount of power that needs to be generated at different times. To reduce the impact of attacking traffic, our scheme allows gateway smart meters to help filter messages before they reach the control center. Through analysis and experiments, we show that our scheme is both effective and efficient.
Keywords: data privacy; internetworking; message authentication; power engineering computing; smart meters; smart power grids; PRGA; billing period; communications network; gateway smart meters; power operator; power usage information; privacy-preserving recording & gateway-assisted authentication; smart grid; total traffic volume reduction; Electricity supply industry; Encryption; Logic gates; Power generation; Power transmission; Smart grids; Substations; Smart grid network; authentication; bloom filter;commitment; homomorphic encryption; privacy preserving (ID#: 15-4829)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778800&isnumber=7008601

 

Sun, H.; Zhao, F.; Wang, H.; Wang, K.; Jiang, W.; Guo, Q.; Zhang, B.; Wehenkel, L., "Automatic Learning of Fine Operating Rules for Online Power System Security Control," Neural Networks and Learning Systems, IEEE Transactions on, vol. PP, no. 99, pp. 1,1,  9 February 2015.  doi: 10.1109/TNNLS.2015.2390621 Abstract: Fine operating rules for security control and an automatic system for their online discovery were developed to adapt to the development of smart grids. The automatic system uses the real-time system state to determine critical flowgates, and then a continuation power flow-based security analysis is used to compute the initial transfer capability of critical flowgates. Next, the system applies the Monte Carlo simulations to expected short-term operating condition changes, feature selection, and a linear least squares fitting of the fine operating rules. The proposed system was validated both on an academic test system and on a provincial power system in China. The results indicated that the derived rules provide accuracy and good interpretability and are suitable for real-time power system security control. The use of high-performance computing systems enables these fine operating rules to be refreshed online every 15 min.
Keywords: Learning systems; Power system security; Power transmission lines; Real-time systems; Substations; Automatic learning; critical flowgate; knowledge discovery; online security analysis; smart grid; total transfer capability (ID#: 15-4830)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7036063&isnumber=6104215

 

Liu, X.; Bao, Z.; Lu, D.; Li, Z., "Modeling of Local False Data Injection Attacks With Reduced Network Information," Smart Grid, IEEE Transactions on, vol.6, no.4, pp.1686-1696, July 2015. doi: 10.1109/TSG.2015.2394358 Abstract: Modern power grids are becoming more prone to cyberattacks. Even worse, an attacker without the full topology and parameter information of a power grid can still execute a false data injection attack without being detected by the state estimator. This paper proposes an efficient strategy for determining the optimal attacking region that requires reduced network information. The effectiveness of the proposed algorithm is verified through extensive simulations. This paper introduces a new front in the study of smart grid cyber security: determination of a feasible attacking region by obtaining less network information. This paper is also essential and significant for finding effective protection strategies against false data injection attacks based on the deep understanding of the mechanisms and strategies of the attacks.
Keywords: Data models; Generators; Jacobian matrices; Network topology; Power grids; Topology; Vectors; False data injection attacks; incomplete information; local load redistribution; optimal attacking strategy; power systems (ID#: 15-4831)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7031948&isnumber=5446437
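
The detectability notion underlying false data injection can be sketched independently of the paper's reduced-information strategy: in DC state estimation, any attack vector in the column space of the measurement matrix H (i.e., a = Hc) leaves the bad-data residual unchanged, so the state estimator cannot flag it. The tiny three-measurement, two-state system below is purely illustrative:

```python
import math

def estimate(H, z):
    """Least-squares state estimate for a 2-state system via normal equations."""
    a11 = sum(h[0] * h[0] for h in H)
    a12 = sum(h[0] * h[1] for h in H)
    a22 = sum(h[1] * h[1] for h in H)
    b1 = sum(h[0] * zi for h, zi in zip(H, z))
    b2 = sum(h[1] * zi for h, zi in zip(H, z))
    det = a11 * a22 - a12 * a12
    return [(a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det]

def residual_norm(H, z):
    """Norm of the measurement residual r = z - H x_hat (bad-data statistic)."""
    x = estimate(H, z)
    return math.sqrt(sum((zi - (h[0] * x[0] + h[1] * x[1])) ** 2
                         for h, zi in zip(H, z)))

H = [[1, 0], [0, 1], [1, 1]]          # toy measurement matrix
x_true = [1.0, 2.0]
noise = [0.01, -0.02, 0.005]
z = [h[0] * x_true[0] + h[1] * x_true[1] + n for h, n in zip(H, noise)]

c = [5.0, -3.0]                        # attacker's chosen state shift
a = [h[0] * c[0] + h[1] * c[1] for h in H]   # undetectable attack a = Hc
z_att = [zi + ai for zi, ai in zip(z, a)]

# The residual norms are identical: the attack shifts the estimate by c
# exactly, so the residual-based detector sees nothing.
print(residual_norm(H, z), residual_norm(H, z_att))
```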

 

Jun Yan; Yufei Tang; Haibo He; Yan Sun, "Cascading Failure Analysis With DC Power Flow Model and Transient Stability Analysis," Power Systems, IEEE Transactions on, vol.30, no.1, pp.285, 297, Jan. 2015. doi: 10.1109/TPWRS.2014.2322082 Abstract: When the modern electrical infrastructure is undergoing a migration to the Smart Grid, vulnerability and security concerns have also been raised regarding the cascading failure threats in this interconnected transmission system with complex communication and control challenge. The DC power flow-based model has been a popular model to study the cascading failure problem due to its efficiency, simplicity and scalability in simulations of such failures. However, due to the complex nature of the power system and cascading failures, the underlying assumptions in DC power flow-based cascading failure simulators (CFS) may fail to hold during the development of cascading failures. This paper compares the validity of a typical DC power flow-based CFS in cascading failure analysis with a new numerical metric defined as the critical moment (CM). The adopted CFS is first implemented to simulate system behavior after initial contingencies and to evaluate the utility of DC-CFS in cascading failure analysis. Then the DC-CFS is compared against another classic, more precise power system stability methodology, i.e., the transient stability analysis (TSA). The CM is introduced with a case study to assess the utilization of these two models for cascading failure analysis. Comparative simulations on the IEEE 39-bus and 68-bus benchmark reveal important consistency and discrepancy between these two approaches. Some suggestions are provided for using these two models in the power grid cascading failure analysis.
Keywords: load flow; power system reliability; power system simulation; power system transient stability; DC power flow model; cascading failure analysis; critical moment; interconnected transmission system; power system stability; smart grid; transient stability analysis; Analytical models; Failure analysis; Mathematical model; Power system faults; Power system protection; Power system stability; Stability analysis; Cascading failure; DC power flow; contingency analysis; transient stability; vulnerability assessment (ID#: 15-4832)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6819069&isnumber=6991618

 

Nachabe, L.; Girod-Genet, M.; El Hassan, B., "Unified Data Model for Wireless Sensor Network," Sensors Journal, IEEE, vol.15, no.7, pp.3657-3667, July 2015. doi: 10.1109/JSEN.2015.2393951 Abstract: The constant evolution of technology in terms of inexpensive and embedded wireless interfaces and powerful chipsets has led to the massive usage and deployment of Wireless Sensor Networks (WSNs). These networks are made of a growing number of small sensing devices and are used in multiple use cases such as home automation (e.g. Smart Buildings), energy management and Smart Grids, crisis management and security, e-Health, entertainment... Sensor devices, generally self-organized in clusters and domain-dedicated, are provided by an increasing number of manufacturers, which leads to interoperability problems (e.g. heterogeneous interfaces and/or grounding, heterogeneous descriptions, profiles, models...). Furthermore, data provided by these WSNs are very heterogeneous because they are coming from sensing nodes with various abilities (e.g. different sensing ranges, formats, coding schemes, etc.). In this paper, we propose a solution for handling WSNs' heterogeneity, as well as easing interoperability management. The solution consists of a semantic open data model for sensor and sensor data generic description. This data model, designed for handling any kind of sensors/actuators and measured data (which is still not the case of existing WSNs data models), is fully detailed and formalized in an original ontology format called "MyOntoSens" and written using OWL 2 DL language. The proposed ontology has been implemented using Protégé 4.3, pre-validated with Pellet Reasoner, and is being standardized. In addition, this original ontology has been pre-qualified through a runner's exercise monitoring application, using in particular SPARQL query language, within a small WBAN platform comprising heartbeat, GPS sensors, and Android mobile phones.
Keywords: Data models; Ontologies; Security; Semantics; Sensor phenomena and characterization; Wireless sensor networks; BANs; OWL 2 DL; Pellet; Protege; WSNs; heterogeneity management; ontology; open data model; semantic; sensor (ID#: 15-4833)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7014284&isnumber=4427201

 

Ismail, Ziad; Leneutre, Jean; Bateman, David; Chen, Lin; "A Game-Theoretical Model for Security Risk Management of Interdependent ICT and Electrical Infrastructures," High Assurance Systems Engineering (HASE), 2015 IEEE 16th International Symposium on, pp.101,109, 8-10 Jan. 2015. doi: 10.1109/HASE.2015.24 Abstract: The communication infrastructure is a key element for management and control of the power system in the smart grid. The communication infrastructure, which can include equipment using off-the-shelf vulnerable operating systems, has the potential to increase the attack surface of the power system. The interdependency between the communication and the power system renders the management of the overall security risk a challenging task. In this paper, we address this issue by presenting a mathematical model for identifying and hardening the most critical communication equipment used in the power system. Using non-cooperative game theory, we model interactions between an attacker and a defender. We derive the minimum defense resources required and the optimal strategy of the defender that minimizes the risk on the power system. Finally, we evaluate the correctness and the efficiency of our model via a case study.
Keywords: Communication equipment; Games; Nash equilibrium; Power grids; Security; Substations; Cyber-physical System; Non-cooperative Game Theory; SCADA Security (ID#: 15-4834)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7027420&isnumber=7027398

 

Zhu, Y.; Yan, J.; Tang, Y.; Sun, Y.; He, H., "Joint Substation-Transmission Line Vulnerability Assessment Against the Smart Grid," Information Forensics and Security, IEEE Transactions on, vol. PP, no. 99, pp. 1, 1, 5 February 2015. doi: 10.1109/TIFS.2015.2394240 Abstract: Power grids are often run near the operational limits because of increasing electricity demand, where even small disturbances could possibly trigger major blackouts. The attacks are potential threats that could trigger large-scale cascading failures in the power grid. Specifically, the attacks aim to make substations/transmission lines lose functionality by either physical sabotages or cyber attacks. Previously, the attacks are investigated from node-only/link-only perspectives, assuming attacks can only occur on substations/transmission lines. In this paper, we introduce the joint-substation-transmission-line perspective, which assumes attacks can happen on substations, transmission lines, or both. The introduced perspective is a natural extension to substation-only and transmission-line-only perspectives. Such extension leads to discovering many joint-substation transmission line vulnerabilities. Furthermore, we investigate the joint-substation-transmission-line attack strategies. In particular, we design a new metric, the component interdependency graph (CIG), and propose the CIG-based attack strategy. In simulations, we adopt IEEE 30 bus system, IEEE 118 bus system and Bay Area power grid as test benchmarks, and use the extended degree-based and load attack strategies as comparison schemes. Simulation results show the CIG-based attack strategy has stronger attack performance.
Keywords: Load modeling; Measurement; Power system faults; Power system protection; Power transmission lines; Smart grids; Attack; Cascading Failures; Security; The Smart Grid; Vulnerability Analysis (ID#: 15-4835)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7015564&isnumber=4358835

 

Urquidez, O.A.; Le Xie, "Smart Targeted Planning of VSC-Based Embedded HVDC via Line Shadow Price Weighting," Smart Grid, IEEE Transactions on, vol. 6, no. 1, pp. 431, 440, Jan. 2015. doi: 10.1109/TSG.2014.2354296 Abstract: In this paper, a novel approach to incorporate voltage source converter-based embedded HVDC for improving power system economic dispatch efficiency is proposed. An analytical formulation is presented to quantify the economic benefits of embedded HVDC by modeling its flow control as an injection-extraction pair in the economic dispatch of the transmission grid. A computationally efficient algorithm is proposed to rank the potential locations of such embedded HVDC. The algorithm is based on expected economic dispatch cost reduction weighted by the historical line shadow prices. The use of a distribution of historical data as a means of weighting also allows for incorporation of diurnal and seasonal influences on congestion patterns. Numerical case studies using the proposed method of locating the embedded HVDC suggest promising results in choosing the location of improved flow control devices.
Keywords: HVDC power convertors; cost reduction; load dispatching; load flow control; power transmission control; power transmission economics; power transmission planning; VSC-based embedded HVDC; economic benefits; economic dispatch; expected economic dispatch cost reduction; flow control devices; historical line shadow prices; injection-extraction pair; line shadow price weighting; power system economic dispatch efficiency; smart targeted planning; transmission grid; voltage source converter-based embedded HVDC; Economics; Generators; HVDC transmission; Planning; Power conversion; Vectors; Mixed ac/dc; security-constrained economic dispatch (SCED);transmission planning; voltage source converter (VSC) HVDC; wind curtailment (ID#: 15-4836)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6914597&isnumber=6991622

 

Sun, Y.; Li, Z.; Shahidehpour, M.; Ai, B., "Battery-Based Energy Storage Transportation for Enhancing Power System Economics and Security," Smart Grid, IEEE Transactions on, vol.6, no.5, pp.2395-2402, Sept. 2015.  doi: 10.1109/TSG.2015.2390211 Abstract: This paper evaluates the effect of integrating battery-based energy storage transportation (BEST) by railway transportation network on power grid operation and control. A time-space network model is adopted to represent transportation constraints. The proposed model integrates the hourly security-constrained unit commitment with the vehicle routing problem. The BEST solution provides the locational and hourly charging/discharging schedule of the battery storage system. The mobility of BEST will be of particular interest for enhancing the power system resilience in disaster areas where the transmission grid is congested or on outage. Two cases are used to simulate the BEST including a six-bus power system linking with a three-station railway system, as well as the IEEE 118-bus system linking with an eight-station railway system. The results show that under certain conditions, the mobility of battery storage system can economically relieve the transmission congestion and lower the operation costs.
Keywords: Batteries; Mathematical model; Power grids; Rail transportation; Renewable energy sources; Battery-based energy storage transportation (BEST); mixed-integer programming (MIP); security-constraint unit commitment (SCUC); time-space network (TSN) (ID#: 15-4837)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024941&isnumber=5446437

 

Yamaguchi, Y.; Ogawa, A.; Takeda, A.; Iwata, S., "Cyber Security Analysis of Power Networks by Hypergraph Cut Algorithms," Smart Grid, IEEE Transactions on, vol.6, no.5, pp.2189-2199, Sept. 2015. doi: 10.1109/TSG.2015.2394791 Abstract: This paper presents exact solution methods for analyzing vulnerability of electric power networks to a certain kind of undetectable attacks known as false data injection attacks. We show that the problems of finding the minimum number of measurement points to be attacked undetectably reduce to minimum cut problems on hypergraphs, which admit efficient combinatorial algorithms. Experimental results indicate that our exact solution methods run as fast as the previous methods, most of which provide only approximate solutions. We also present an algorithm for enumerating all small cuts in a hypergraph, which can be used for finding vulnerable sets of measurement points.
Keywords: False data injection; hypergraph; minimum cut; power network; security index; state estimation (ID#: 15-4838)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7041192&isnumber=5446437
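
The object computed by the reduction above, a minimum hypergraph cut (a bipartition of the vertices minimizing the number of hyperedges with vertices on both sides), can be checked by brute force on toy instances. The instance below is hypothetical, and the exhaustive search only pins down the definition; it is not the paper's efficient combinatorial algorithm:

```python
from itertools import combinations

def min_hypergraph_cut(vertices, hyperedges):
    """Brute-force minimum cut: try every bipartition (S, V \\ S) with both
    sides nonempty and count the hyperedges that touch both sides."""
    best = None
    verts = list(vertices)
    for r in range(1, len(verts)):
        for side in combinations(verts, r):
            s = set(side)
            crossing = sum(1 for e in hyperedges
                           if s & set(e) and set(e) - s)
            if best is None or crossing < best:
                best = crossing
    return best

# Hypothetical measurement hypergraph: vertices are state variables, and
# each hyperedge is the set of variables one measurement covers.
V = {1, 2, 3, 4}
E = [{1, 2}, {2, 3}, {3, 4}, {1, 4}, {2, 3, 4}]
print(min_hypergraph_cut(V, E))  # → 2 (cut isolating vertex 1)
```

The exponential enumeration is only usable for small instances; the point of the paper's reduction is that the same quantity admits efficient exact algorithms.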

 

Lamadrid, A.J.; Shawhan, D.L.; Murillo-Sanchez, C.E.; Zimmerman, R.D.; Zhu, Y.; Tylavsky, D.J.; Kindle, A.G.; Dar, Z., "Stochastically Optimized, Carbon-Reducing Dispatch of Storage, Generation, and Loads," Power Systems, IEEE Transactions on, vol.30, no.2, pp.1064-1075, March 2015. doi: 10.1109/TPWRS.2014.2388214 Abstract: We present a new formulation of a hybrid stochastic-robust optimization and use it to calculate a look-ahead, security-constrained optimal power flow. It is designed to reduce carbon dioxide (CO2) emissions by efficiently accommodating renewable energy sources and by realistically evaluating system changes that could reduce emissions. It takes into account ramping costs, CO2 damages, demand functions, reserve needs, contingencies, and the temporally linked probability distributions of stochastic variables such as wind generation. The inter-temporal trade-offs and transversality of energy storage systems are a focus of our formulation. We use it as part of a new method to comprehensively estimate the operational net benefits of system changes. Aside from the optimization formulation, our method has four other innovations. First, it statistically estimates the cost and CO2 impacts of each generator's electricity output and ramping decisions. Second, it produces a comprehensive measure of net operating benefit, and disaggregates that into the effects on consumers, producers, system operators, government, and CO2 damage. Third and fourth, our method includes creating a novel, modified Ward reduction of the grid and a thorough generator dataset from publicly available information sources. We then apply this method to estimating the impacts of wind power, energy storage, and operational policies.
Keywords: Energy storage; Equations; Generators; Mathematical model; Optimization; Uncertainty; Vectors; Energy storage; environmental economics; optimization; power generation dispatch; power system economics; power system planning; power system simulation; renewable energy sources; smart grids; uncertainty; wind energy (ID#: 15-4839)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7029704&isnumber=4374138

 

Li, Z.; Wang, J.; Sun, H.; Guo, Q., "Transmission Contingency Analysis Based on Integrated Transmission and Distribution Power Flow in Smart Grid," Power Systems, IEEE Transactions on, vol.30, no.6, pp.3356-3367, November 2015. doi: 10.1109/TPWRS.2014.2381879 Abstract: In future smart grids, with distribution networks having loops more frequently, current transmission contingency analysis (TCA) which usually neglects the distribution power flow variations after a contingency may leave out severe outages. With more distribution management systems deployed on the distribution side, a new transmission CA method based on global power flow (GPF) analysis which integrates both the transmission and distribution power flow is proposed in this paper (named as GTCA) to address the problem. The definition and new features of GTCA are first introduced. Then, the necessity of GTCA is physically illustrated. Difference in the results of GTCA and TCA is mathematically analyzed. A GPF-embedded algorithm of performing GTCA is then provided. The data exchange process and the performance with communication interruptions are discussed. As multiple contingencies are considered in GTCA, several approaches are proposed and discussed to reduce communication burdens and improve the computational efficiency. Plenty of numerical tests are performed in several systems to verify the theoretical analysis. With theoretical analysis and numerical verification, it is suggested that GTCA should be performed instead of TCA to avoid potential false alarms, especially in the condition that DNs are more frequently looped in the future smart grids.
Keywords: Equations; Generators; Power system reliability; Reliability; Security; Smart grids; Contingency analysis; GPF-based transmission CA; distribution; global power flow; master-slave-splitting; transmission (ID#: 15-4840)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7001670&isnumber=4374138

 

Li, X.; Zhang, X.; Wu, L.; Lu, P.; Zhang, S., "Transmission Line Overload Risk Assessment for Power Systems With Wind and Load-Power Generation Correlation," Smart Grid, IEEE Transactions on, vol.6, no.3, pp.1233-1242, May 2015. doi: 10.1109/TSG.2014.2387281 Abstract: In the risk-based security assessment, probability and severity of events are the two main factors for measuring the security level of power systems. This paper presents a method for assessing line overload risk of wind-integrated power systems with the consideration of wind and load-power generation correlation. The established risk assessment model fully considers the probability and the consequence of wind uncertainties and line flow fluctuations. The point estimate method is employed to deal with the probability of line overload and the severity function is applied to quantify line flow fluctuations. Moreover, with the Cholesky decomposition, the correlation between loads and power generations are simulated by the spatial transformation of probability distributions of random variables. In addition, Nataf transformation is used to address wind resource correlation. Finally, the line overload risk index is obtained, which can be used as an indicator for quantifying power system security. Numerical results on the modified IEEE 30-bus system and the modified IEEE 118-bus system show that the types and the parameters of the wind speed distribution would affect the risk indices of line overload, and the risk indices obtained with the consideration of wind resource correlation and load correlation would reflect the system security more accurately.
Keywords: Correlation; Power generation; Power systems; Random variables; Risk management; Security; Wind speed; Load-power generation correlation; overload risk assessment; point estimate method (PEM);probabilistic load flow (PLF); severity function; wind resource correlation (ID#: 15-4841)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7015608&isnumber=5446437
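
The Cholesky device mentioned in the abstract above, transforming independent standard normals into correlated samples by multiplying by the Cholesky factor of the covariance matrix, is a standard technique that can be sketched in a few lines. The covariance values and variable names here are illustrative, not the paper's data:

```python
import math
import random

def cholesky(A):
    """Lower-triangular L with L L^T = A, for symmetric positive-definite A."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def correlated_normals(L, rng):
    """Transform independent N(0,1) draws z into correlated draws x = L z."""
    z = [rng.gauss(0.0, 1.0) for _ in range(len(L))]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(len(L))]

# Hypothetical covariance between a wind injection and a load (in MW^2):
# both variables have positive covariance, so high wind coincides with high load.
cov = [[4.0, 2.0],
       [2.0, 3.0]]
L = cholesky(cov)
sample = correlated_normals(L, random.Random(42))
```

The resulting samples have covariance matrix cov by construction, which is why the transformation is a common building block in probabilistic load flow and point estimate methods.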


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Software Security, 2014 (ACM), Part 1

 
SoS Logo

Software Security, 2014 (ACM), Part 1

 

This set of bibliographic references on software security research is drawn from conference publications posted in the ACM Digital Library. More than 2,500 conference papers were presented on this topic in 2014. The set presented here represents those likely to be of most interest to the Science of Security community; they address issues related to measurement, scalability, reliability, and other hard problems. IEEE papers will be presented in a separate series.


 

Todd R. Andel, Lindsey N. Whitehurst, Jeffrey T. McDonald; Software Security and Randomization through Program Partitioning and Circuit Variation; MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 79-86. doi: 10.1145/2663474.2663484 Abstract: The commodity status of Field Programmable Gate Arrays (FPGAs) has allowed computationally intensive algorithms, such as cryptographic protocols, to take advantage of faster hardware speed while simultaneously leveraging the reconfigurability and lower cost of software. Numerous security applications have been transitioned into FPGA implementations allowing security applications to operate at real-time speeds, such as firewall and packet scanning on high speed networks. However, the utilization of FPGAs to directly secure software vulnerabilities is seemingly non-existent. Protecting program integrity and confidentiality is crucial as malicious attacks through injected code are becoming increasingly prevalent. This paper lays the foundation of continuing research in how to protect software by partitioning critical sections using reconfigurable hardware. This approach is similar to a traditional coprocessor approach to scheduling opcodes for execution on specialized hardware as opposed to running on the native processor. However, the partitioned program model gives the programmer the ability to split portions of an application to reconfigurable hardware at compile time. The fundamental underlying hypothesis is that synthesizing portions of programs onto hardware can mitigate potential software vulnerabilities. Further, this approach provides an avenue for randomization or diversity for software layout and circuit variation.
Keywords: circuit variation, program protection, reconfigurable hardware, secure software, software partitioning (ID#: 15-4587)
URL: http://doi.acm.org/10.1145/2663474.2663484

 

Gary E. McGraw; Software Security: A Study in Technology Transfer; SPLASH '14 Proceedings of the Companion Publication of the 2014 ACM SIGPLAN Conference on Systems, Programming, and Applications: Software for Humanity, October 2014, Pages 1-1. Doi: 10.1145/2660252.2661745 Abstract: Where do security technologies come from? Academics propose research and government (sometimes) funds it. Startups move technologies across the "research valley of death" to early adopters. Global corporations make technology widely available by acquiring startups. At every step there are gaps and pitfalls. Adoption is the acid test of innovation. Idea-generation is perhaps ten per cent of innovation; most of the work is on technology transfer and adoption. Chance plays a big role in creating opportunities (e.g., R&D involves a lot of luck), but a company's success depends on its ability to make opportunities more likely to occur, and to capitalize on those opportunities when they arise. Passionate individuals drive technology transfer more than does process; indeed, some people believe that the original researchers need to be involved all the way along the chain. Prototyping is an important practice, often resulting in "researchware" that proves a concept but is not ready for wide use. Transforming a prototype from the lab to the real-world is a multi-stage, multi-year undertaking. This talk will use the decade-long evolution of static analysis in code review as a driver for discussion. We'll talk startups, big companies, venture capital, research agencies, and subject-matter expertise. In general, technologists don't appreciate business people enough and business people don't appreciate technology enough. Most successful companies are brilliant at one, but also need to be adequate at the other.
Keywords: code review, security, static analysis, technology adoption, technology transfer (ID#: 15-4588)
URL: http://doi.acm.org/10.1145/2660252.2661745

 

Tiffany Brooke Jordan, Brittany Johnson, Jim Witschey, Emerson Murphy-Hill; Designing Interventions to Persuade Software Developers to Adopt Security Tools; SIW '14 Proceedings of the 2014 ACM Workshop on Security Information Workers, November 2014, Pages 35-38. Doi: 10.1145/2663887.2663900 Abstract: The nature of security information workers' jobs requires a certain level of care and attention to detail. There exist tools that can assist these workers with their daily tasks; however, workers may not be using these tools. Research suggests persuasive techniques can positively affect a worker's outlook on a given technology. We attempt to develop an effective way to motivate security workers to adopt and use security tools by using persuasive design guidelines. We present a system that generates automated emails to inform software developers of FindBugs, a tool that detects potential vulnerabilities within a project. We discuss the decisions supporting our overall design of the automated emails.
Keywords: persuasive interventions, security, tool adoption (ID#: 15-4589)
URL: http://doi.acm.org/10.1145/2663887.2663900

 

Mark Murphy, Per Larsen, Stefan Brunthaler, Michael Franz; Software Profiling Options and Their Effects on Security Based Diversification; MTD '14 Proceedings of the First ACM Workshop on Moving Target Defense, November 2014, Pages 87-96. Doi: 10.1145/2663474.2663485  Abstract: Imparting diversity to binaries by inserting garbage instructions is an effective defense against code-reuse attacks. Relocating and breaking up code gadgets removes an attacker's ability to craft attacks by merely studying the existing code on their own computer. Unfortunately, inserting garbage instructions also slows down program execution. The use of profiling enables optimizations that alleviate much of this overhead, while still maintaining the high level of security needed to deter attacks. These optimizations are performed by varying the probability for the insertion of a garbage instruction at any particular location in the binary. The hottest regions of code get the smallest amount of diversification, while the coldest regions get the most diversification.  We show that static and dynamic profiling methods both reduce run-time overhead to under 2.5% while preventing over 95% of original gadgets from appearing in any diversified binary. We compare static and dynamic profiling and find that dynamic profiling has a slight performance advantage in a best-case scenario. But we also show that dynamic profiling results can suffer greatly from bad training input. Additionally, we find that static profiling creates smaller binary files than dynamic profiling, and that the two methods offer nearly identical security characteristics. 
Keywords: automated software diversity, code randomization, dynamic profiling, static profiling (ID#: 15-4590)
URL: http://doi.acm.org/10.1145/2663474.2663485

 

Kostantinos Stroggylos, Dimitris Mitropoulos, Zacharias Tzermias, Panagiotis Papadopoulos, Fotios Rafailidis, Diomidis Spinellis, Sotiris Ioannidis, Panagiotis Katsaros; Securing Legacy Code with the TRACER Platform; PCI '14 Proceedings of the 18th Panhellenic Conference on Informatics, October 2014, Article No. 26, Pages 1-6. Doi: 10.1145/2645791.2645796 Abstract: Software vulnerabilities can severely affect an organization's infrastructure and cause significant financial damage to it. A number of tools and techniques are available for performing vulnerability detection in software written in various programming platforms, in a pursuit to mitigate such defects. However, since the requirements for running such tools and the formats in which they store and present their results vary widely, it is difficult to utilize many of them in the scope of a project. Simplifying the process of running a variety of vulnerability detectors and collecting their results in an efficient, automated manner during development bolsters the task of tracking security defects throughout the evolution history of software projects. In this paper we present TRACER, a software framework and platform to support the development of more secure applications by constantly monitoring software projects for vulnerabilities. The platform allows the easy integration of existing tools that statically detect software vulnerabilities and promotes their use during software development and maintenance. To demonstrate the efficiency and usability of the platform, we integrated two popular static analysis tools, FindBugs and Frama-C, as sample implementations, and report on preliminary results from their use.
Keywords: Legacy software, Software Security, Static Analysis, Trusted Applications (ID#: 15-4591)
URL: http://doi.acm.org/10.1145/2645791.2645796

 

Mingwei Zhang, Rui Qiao, Niranjan Hasabnis, R. Sekar; A Platform for Secure Static Binary Instrumentation; VEE '14 Proceedings of the 10th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments, March 2014, Pages 129-140. Doi: 10.1145/2576195.2576208 Abstract: Program instrumentation techniques form the basis of many recent software security defenses, including defenses against common exploits and security policy enforcement. As compared to source-code instrumentation, binary instrumentation is easier to use and more broadly applicable due to the ready availability of binary code. Two key features needed for security instrumentation are (a) it should be applied to all application code, including code contained in various system and application libraries, and (b) it should be non-bypassable. So far, dynamic binary instrumentation (DBI) techniques have provided these features, whereas static binary instrumentation (SBI) techniques have lacked them. These features, combined with ease of use, have made DBI the de facto choice for security instrumentations. However, DBI techniques can incur high overheads in several common usage scenarios, such as application startups, system-calls, and many real-world applications. We therefore develop a new platform for secure static binary instrumentation (PSI) that overcomes these drawbacks of DBI techniques, while retaining the security, robustness and ease-of-use features. We illustrate the versatility of PSI by developing several instrumentation applications: basic block counting, shadow stack defense against control-flow hijack and return-oriented programming attacks, and system call and library policy enforcement. While being competitive with the best DBI tools on the CPU-intensive SPEC 2006 benchmarks, PSI provides an order of magnitude reduction in overheads on a collection of real-world applications.
Keywords: binary instrumentation, binary translation, control flow integrity, COTS binary hardening, security policy enforcement, software security (ID#: 15-4592)
URL: http://doi.acm.org/10.1145/2576195.2576208

 

Shundan Xiao, Jim Witschey, Emerson Murphy-Hill; Social Influences on Secure Development Tool Adoption: Why Security Tools Spread; CSCW '14 Proceedings of the 17th ACM Conference On Computer Supported Cooperative Work & Social Computing, February 2014, Pages 1095-1106. Doi: 10.1145/2531602.2531722 Abstract: Security tools can help developers build more secure software systems by helping developers detect or fix security vulnerabilities in source code. However, developers do not always use these tools. In this paper, we investigate a number of social factors that impact developers' adoption decisions, based on a multidisciplinary field of research called diffusion of innovations. We conducted 42 one-on-one interviews with professional software developers, and our results suggest a number of ways in which security tool adoption depends on developers' social environments and on the channels through which information about tools is communicated. For example, some participants trusted developers with strong reputations on the Internet as much as they trusted their colleagues for information about security tools.
Keywords: adoption, security tools, social factors (ID#: 15-4593)
URL: http://doi.acm.org/10.1145/2531602.2531722

 

Maria Riaz, John Slankas, Jason King, Laurie Williams; Using Templates to Elicit Implied Security Requirements from Functional Requirements - A Controlled Experiment;  ESEM '14 Proceedings of the 8th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement, September 2014, Article No. 22. Doi: 10.1145/2652524.2652532  Abstract: Context: Security requirements for software systems can be challenging to identify and are often overlooked during the requirements engineering process. Existing functional requirements of a system can imply the need for security requirements. Systems having similar security objectives (e.g., confidentiality) often also share security requirements that can be captured in the form of reusable templates and instantiated in the context of a system to specify security requirements.  Goal: We seek to improve the security requirements elicitation process by automatically suggesting appropriate security requirement templates implied by existing functional requirements.  Method: We conducted a controlled experiment involving 50 graduate students enrolled in a software security course to evaluate the use of automatically-suggested templates in eliciting implied security requirements. Participants were divided into treatment (automatically-suggested templates) and control groups (no templates provided).  Results: Participants using our templates identified 42% of all the implied security requirements in the oracle as compared to the control group, which identified only 16% of the implied security requirements. Template usage increased the efficiency of security requirements identified per unit of time.  Conclusion: Automatically-suggested templates helped participants (security non-experts) think about security implications for the software system and consider more security requirements than they would have otherwise. We found that participants need more incentive than just a participatory grade when completing the task. 
Further, to ensure task completeness, we recommend that participants be given either a step-driven (i.e., wizard) approach or progress indicators to identify remaining work.
Keywords: controlled experiment, security requirements, templates (ID#: 15-4594)
URL: http://doi.acm.org/10.1145/2652524.2652532

 

Benjamin D. Rodes, John C. Knight, Kimberly S. Wasson; A Security Metric Based on Security Arguments; WETSoM 2014 Proceedings of the 5th International Workshop on Emerging Trends in Software Metrics, June 2014, Pages 66-72. Doi: 10.1145/2593868.2593880 Abstract: Software security metrics that facilitate decision making at the enterprise design and operations levels are a topic of active research and debate. These metrics are desirable to support deployment decisions, upgrade decisions, and so on; however, no single metric or set of metrics is known to provide universally effective and appropriate measurements. Instead, engineers must choose, for each software system, what to measure, how and how much to measure, and must be able to justify the rationale for how these measurements are mapped to stakeholder security goals. An assurance argument for security (i.e., a security argument) provides comprehensive documentation of all evidence and rationales for justifying belief in a security claim about a software system. In this work, we motivate the need for security arguments to facilitate meaningful and comprehensive security metrics, and present a novel framework for assessing security arguments to generate and interpret security metrics.
Keywords: Assurance Case, Confidence, Security Metrics (ID#: 15-4595)
URL: http://doi.acm.org/10.1145/2593868.2593880

 

Yasser M. Hausawi, William H. Allen; Usability and Security Trade-Off: A Design Guideline; ACM SE '14 Proceedings of the 2014 ACM Southeast Regional Conference, March 2014, Article No. 21. Doi: 10.1145/2638404.2638483 Abstract: Requirements engineering and design are the first two phases of the Software Development Life-Cycle. Considerable research has addressed the requirements phase and a number of well-regarded tools exist to assist with that process. The design phase can also make use of a wide range of tools, including design principles, activities, best practices, techniques, and patterns, to improve the incorporation of requirements into the software design documents. However, the process of selecting the appropriate design tools to support each requirement is a complex task that requires considerable training and experience. It is also possible that design tools selected for different requirements can conflict with each other, reducing their effectiveness, increasing complexity, impacting usability or potentially causing security vulnerabilities. In this paper, we propose guidelines for selecting appropriate design tools to support the integration of usability and security requirements in the software design phase and to resolve conflicts between those tools. We demonstrate this approach with a case study that illustrates the design tool selection and analysis process. 
Keywords: best practices, patterns, security, software design, usability, usable-security (ID#: 15-4596)
URL: http://doi.acm.org/10.1145/2638404.2638483

 

Xiaohong Yuan, Emmanuel Borkor Nuakoh, Jodria S. Beal, Huiming Yu; Retrieving Relevant CAPEC Attack Patterns for Secure Software Development; CISR '14 Proceedings of the 9th Annual Cyber and Information Security Research Conference, April 2014, Pages 33-36. Doi: 10.1145/2602087.2602092 Abstract: To improve the security of computer systems, information, and the cyber space, it is critical to engineer more secure software. To develop secure and reliable software, software developers need to have the mindset of an attacker. Attack patterns such as CAPEC are valuable resources to help software developers to think like an attacker and have the potential to be used in each phase of the secure software development life cycle. However, systematic processes or methods for utilizing existing attack pattern resources are needed. As a first step, this paper describes our ongoing effort of developing a tool to retrieve relevant CAPEC attack patterns for software development. This tool can retrieve attack patterns most relevant to a particular STRIDE type, as well as most useful to the software being developed. It can be used in conjunction with the Microsoft SDL threat modeling tool. It also allows developers to search for CAPEC attack patterns using keywords.
Keywords: CAPEC, STRIDE, attack pattern, secure software development, secure software engineering (ID#: 15-4597)
URL: http://doi.acm.org/10.1145/2602087.2602092

 

Frederico Araujo, Kevin W. Hamlen, Sebastian Biedermann, Stefan Katzenbeisser; From Patches to Honey-Patches: Lightweight Attacker Misdirection, Deception, and Disinformation; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 942-953.  Doi: 10.1145/2660267.2660329 Abstract: Traditional software security patches often have the unfortunate side-effect of quickly alerting attackers that their attempts to exploit patched vulnerabilities have failed. Attackers greatly benefit from this information; it expedites their search for unpatched vulnerabilities, it allows them to reserve their ultimate attack payloads for successful attacks, and it increases attacker confidence in stolen secrets or expected sabotage resulting from attacks. To overcome this disadvantage, a methodology is proposed for reformulating a broad class of security patches into honey-patches - patches that offer equivalent security but that frustrate attackers' ability to determine whether their attacks have succeeded or failed. When an exploit attempt is detected, the honey-patch transparently and efficiently redirects the attacker to an unpatched decoy, where the attack is allowed to succeed. The decoy may host aggressive software monitors that collect important attack information, and deceptive files that disinform attackers. An implementation for three production-level web servers, including Apache HTTP, demonstrates that honey-patching can be realized for large-scale, performance-critical software applications with minimal overheads.
Keywords: honeypots, intrusion detection and prevention (ID#: 15-4598)
URL: http://doi.acm.org/10.1145/2660267.2660329

 

Andrew Meneely, Alberto C. Rodriguez Tejeda, Brian Spates, Shannon Trudeau, Danielle Neuberger, Katherine Whitlock, Christopher Ketant, Kayla Davis; An Empirical Investigation of Socio-Technical Code Review Metrics and Security Vulnerabilities; SSE 2014 Proceedings of the 6th International Workshop on Social Software Engineering, November 2014, Pages 37-44. Doi: 10.1145/2661685.2661687 Abstract: One of the guiding principles of open source software development is to use crowds of developers to keep a watchful eye on source code. Eric Raymond declared Linus' Law as "many eyes make all bugs shallow," with the socio-technical argument that high quality open source software emerges when developers combine together their collective experience and expertise to review code collaboratively. Vulnerabilities are a particularly nasty set of bugs that can be rare, difficult to reproduce, and require specialized skills to recognize. Does Linus' Law apply to vulnerabilities empirically? In this study, we analyzed 159,254 code reviews, 185,948 Git commits, and 667 post-release vulnerabilities in the Chromium browser project. We formulated, collected, and analyzed various metrics related to Linus' Law to explore the connection between collaborative reviews and vulnerabilities that were missed by the review process. Our statistical association results showed that source code files reviewed by more developers are, counter-intuitively, more likely to be vulnerable (even after accounting for file size). However, files are less likely to be vulnerable if they were reviewed by developers who had experience participating on prior vulnerability-fixing reviews. The results indicate that lack of security experience and lack of collaborator familiarity are key risk factors in considering Linus’ Law with vulnerabilities. 
Keywords: code review, socio-technical, vulnerability (ID#: 15-4599)
URL: http://doi.acm.org/10.1145/2661685.2661687

 

Amiangshu Bosu; Characteristics of the Vulnerable Code Changes Identified Through Peer Code Review; ICSE Companion 2014 Companion Proceedings of the 36th International Conference on Software Engineering, May 2014, Pages 736-738. Doi: 10.1145/2591062.2591200 Abstract: To effectively utilize the efforts of scarce security experts, this study aims to provide empirical evidence about the characteristics of security vulnerabilities. Using a three-stage, manual analysis of peer code review data from 10 popular Open Source Software (OSS) projects, this study identified 413 potentially vulnerable code changes (VCC). Some key results include: 1) the most experienced contributors authored the majority of the VCCs, 2) while less experienced authors wrote fewer VCCs, their code changes were 1.5 to 24 times more likely to be vulnerable, 3) employees of the organization sponsoring the OSS projects are more likely to write VCCs.
Keywords: code review, inspection, open source, security defects, vulnerability (ID#: 15-4600)
URL: http://doi.acm.org/10.1145/2591062.2591200

 

Marco Patrignani, Dave Clarke; Fully Abstract Trace Semantics for Low-Level Isolation Mechanisms; SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 1562-1569. Doi: 10.1145/2554850.2554865 Abstract: Many software systems adopt isolation mechanisms of modern processors as software security building blocks. Reasoning about these building blocks means reasoning about elaborate assembly code, which can be very complex due to the loose structure of the code. A way to overcome this complexity is giving the code a more structured semantics. This paper presents one such semantics, namely a fully abstract trace semantics, for an assembly language enhanced with protection mechanisms of modern processors. The trace semantics represents the behaviour of protected assembly code with simple abstractions, unburdened by low-level details, at the maximum degree of precision. Additionally, it captures the capabilities of attackers against protected software and simplifies providing a secure compiler targeting that language.
Keywords: (not provided) (ID#: 15-4601)
URL: http://doi.acm.org/10.1145/2554850.2554865

 

Steven D. Fraser, Djenana Campara, Michael C. Fanning, Gary McGraw, Kevin Sullivan; Privacy and Security in a Networked World; SPLASH '14 Proceedings of the Companion Publication of the 2014 ACM SIGPLAN Conference on Systems, Programming, and Applications: Software for Humanity, October 2014, Pages 43-45. Doi: 10.1145/2660252.2661294 Abstract: As news stories continue to demonstrate, ensuring adequate security and privacy in a networked "always on" world is a challenge; and while open source software can mitigate problems, it is not a panacea. This panel will bring together experts from industry and academia to debate, discuss, and offer opinions -- questions might include: What are the "costs" of "good enough" security and privacy on developers and customers? What is the appropriate trade-off between the price of providing security and the cost of poor security? How can the consequences of poor design and implementation be managed? Can systems be enabled to fail "security-safe"? What are the tradeoffs for increased adoption of privacy and security best practices? How can the "costs" of privacy and security -- both tangible and intangible -- be reduced?
Keywords: cost, design, privacy, security, soft issues (ID#: 15-4602)
URL: http://doi.acm.org/10.1145/2660252.2661294

 

Ali Reza Honarvar, Ashkan Sami; CBR Clone Based Software Flaw Detection Issues; SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Page 487. Doi: 10.1145/2659651.2659745 Abstract: The biggest problem in computer security is that most systems aren't constructed with security in mind. Being aware of common security weaknesses in programming might sound like a good way to avoid them, but awareness by itself often proves to be insufficient. Understanding security is one thing; applying your understanding in a complete and consistent fashion to meet your security goals is quite another. For this reason, static analysis is advocated as a technique for finding common security errors in source code. Manual security static analysis is tedious work, so automatic tools that can guide programmers in detecting security concerns are suggested. Good static analysis tools provide a fast way to get a detailed security-related evaluation of program code. In this paper a new architecture (CBRFD) for a software flaw detector, based on the concepts of clone detection and case-based reasoning, is proposed, and various issues concerning the detection of security weaknesses in code through a code clone detector are investigated.
Keywords: Software flaw detection, flaw clone detector, static analysis tools, static security analysis (ID#: 15-4603)
URL: http://doi.acm.org/10.1145/2659651.2659745

 

Anil Kurmus, Robby Zippel; A Tale of Two Kernels: Towards Ending Kernel Hardening Wars with Split Kernel; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1366-1377. Doi: 10.1145/2660267.2660331 Abstract: Software security practitioners are often torn between choosing performance or security. In particular, OS kernels are sensitive to the smallest performance regressions. This makes it difficult to develop innovative kernel hardening mechanisms: they may inevitably incur some run-time performance overhead. Here, we propose building each kernel function with and without hardening, within a single split kernel. In particular, this allows trusted processes to be run under unmodified kernel code, while system calls of untrusted processes are directed to the hardened kernel code. We show such trusted processes run with no overhead when compared to an unmodified kernel. This allows deferring the decision of making use of hardening to the run-time. This means kernel distributors, system administrators and users can selectively enable hardening according to their needs: we give examples of such cases. Although this approach cannot be directly applied to arbitrary kernel hardening mechanisms, we show cases where it can. Finally, our implementation in the Linux kernel requires few changes to the kernel sources and no application source changes. Thus, it is both maintainable and easy to use.  
Keywords: build system, kernel hardening, os security, performance (ID#: 15-4604)
URL: http://doi.acm.org/10.1145/2660267.2660331

 

Antti Evesti, Habtamu Abie, Reijo Savola; Security Measuring for Self-adaptive Security; ECSAW '14 Proceedings of the 2014 European Conference on Software Architecture Workshops; August 2014, Article No. 5. Doi: 10.1145/2642803.2642808 Abstract: Self-adaptive security is needed due to the vast number of changes in the execution environment and threat landscape, not all of which can be anticipated at software design time. Self-adaptive security requires means for monitoring a security level and decision-making capability to improve the current security level. In this paper, we describe how security metrics can support self-adaptive security. The paper analyses the benefits and challenges of security measuring from the self-adaptive security perspective; five benefits and three challenges of security metrics in self-adaptive security are described. Furthermore, the paper derives the requirements that measuring imposes on self-adaptive security. Based on the derived requirements, extension components for the MAPE (Monitor, Analyse, Plan and Execute) reference model are proposed.
Keywords: Self-adaptive, architecture, decision-making, security metric (ID#: 15-4605)
URL: http://doi.acm.org/10.1145/2642803.2642808

 

Amel Bennaceur, Arosha K. Bandara, Michael Jackson, Wei Liu, Lionel Montrieux, Thein Than Tun, Yijun Yu, Bashar Nuseibeh; Requirements-Driven Mediation for Collaborative Security; SEAMS 2014 Proceedings of the 9th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, June 2014, Pages 37-42. Doi: 10.1145/2593929.2593938 Abstract: Security is concerned with the protection of assets from intentional harm. Secure systems provide capabilities that enable such protection to satisfy some security requirements. In a world increasingly populated with mobile and ubiquitous computing technology, the scope and boundary of security systems can be uncertain and can change. A single functional component, or even multiple components individually, are often insufficient to satisfy complex security requirements on their own.  Adaptive security aims to enable systems to vary their protection in the face of changes in their operational environment. Collaborative security, which we propose in this paper, aims to exploit the selection and deployment of multiple, potentially heterogeneous, software-intensive components to collaborate in order to meet security requirements in the face of changes in the environment, changes in assets under protection and their values, and the discovery of new threats and vulnerabilities.   However, the components that need to collaborate may not have been designed and implemented to interact with one another collaboratively. To address this, we propose a novel framework for collaborative security that combines adaptive security, collaborative adaptation and an explicit representation of the capabilities of the software components that may be needed in order to achieve collaborative security. 
We elaborate on each of these framework elements, focusing in particular on the challenges and opportunities afforded by (1) the ability to capture, represent, and reason about the capabilities of different software components and their operational context, and (2) the ability of components to be selected and mediated at runtime in order to satisfy the security requirements. We illustrate our vision through a collaborative robotic implementation, and suggest some areas for future work.
Keywords: Security requirements, collaborative adaptation, mediation (ID#: 15-4606)
URL: http://doi.acm.org/10.1145/2593929.2593938

 

Kristian Beckers, Isabelle Côté, Ludger Goeke; A Catalog of Security Requirements Patterns for the Domain of Cloud Computing Systems; SAC '14 Proceedings of the 29th Annual ACM Symposium on Applied Computing, March 2014, Pages 337-342. Doi: 10.1145/2554850.2554921 Abstract: Security and privacy concerns are essential in cloud computing scenarios, because cloud customers and end customers have to trust the cloud provider with their critical business data and even their IT infrastructure. In projects, these concerns are often addressed late in the software development life-cycle because they are difficult to elicit in cloud scenarios, due to the large number of stakeholders and technologies involved. We contribute a catalog of security and privacy requirement patterns that supports software engineers in eliciting these requirements, as requirements patterns provide artifacts for re-using requirements. This paper shows how these requirements can be classified according to cloud security and privacy goals. Furthermore, we provide a structured method for eliciting the right requirements for a given scenario. We mined these requirements patterns from existing security analyses of public organizations such as ENISA and the Cloud Security Alliance, from our practical experience in the cloud domain, and from our previous research in cloud security. We validate our requirements patterns in co-operation with industrial partners of the ClouDAT project.
Keywords: ISO 27001, cloud computing, patterns, privacy, requirements elicitation, requirements patterns, security requirements, security standards (ID#: 15-4607)
URL: http://doi.acm.org/10.1145/2554850.2554921

 

Kami E. Vaniea, Emilee Rader, Rick Wash; Betrayed by Updates: How Negative Experiences Affect Future Security; CHI '14 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, April 2014, Pages 2671-2674. Doi: 10.1145/2556288.2557275 Abstract: Installing security-relevant software updates is one of the best computer protection mechanisms. However, users do not always choose to install updates. Through interviewing non-expert Windows users, we found that users frequently decide not to install future updates, regardless of whether they are important for security, after negative experiences with past updates. This means that even non-security updates (such as user interface changes) can impact the security of a computer. We discuss three themes impacting users' willingness to install updates: unexpected new features in an update, the difficulty of assessing whether an update is `worth it', and confusion about why an update is necessary.
Keywords: human factors, security, software updates (ID#: 15-4608)
URL: http://doi.acm.org/10.1145/2556288.2557275

 

Markus Kammerstetter, Christian Platzer, Wolfgang Kastner; Prospect: Peripheral Proxying Supported Embedded Code Testing; ASIA CCS '14 Proceedings of the 9th ACM Symposium On Information, Computer And Communications Security, June 2014, Pages 329-340. Doi: 10.1145/2590296.2590301 Abstract: Embedded systems are an integral part of almost every electronic product today. From consumer electronics to industrial components in SCADA systems, their possible fields of application are manifold. While especially in industrial and critical infrastructures the security requirements are high, recent publications have shown that embedded systems do not cope well with this demand. One of the reasons is that embedded systems are less scrutinized, as embedded security analysis is considered to be more time consuming and challenging in comparison to PC systems. One of the key challenges on proprietary, resource-constrained embedded devices is dynamic code analysis. The devices typically do not have the capabilities for a full-scale dynamic security evaluation. Likewise, the analyst cannot execute the software implementation inside a virtual machine due to the missing peripheral hardware that is required by the software to run. In this paper, we present PROSPECT, a system that can overcome these shortcomings and enables dynamic code analysis of embedded binary code inside arbitrary analysis environments. By transparently forwarding peripheral hardware accesses from the original host system into a virtual machine, PROSPECT allows security analysts to run the embedded software implementation without the need to know which and how embedded peripheral hardware components are accessed. We evaluated PROSPECT with respect to the performance impact and conducted a case study by doing a full-scale security audit of a widely used commercial fire alarm system in the building automation domain. Our results show that PROSPECT is both practical and usable for real-world application.
Keywords: device tunneling, dynamic analysis, embedded system, fuzz testing, security (ID#: 15-4609)
URL: http://doi.acm.org/10.1145/2590296.2590301

 

Adam Bates, Joe Pletcher, Tyler Nichols, Braden Hollembaek, Dave Tian, Kevin R.B. Butler, Abdulrahman Alkhelaifi; Securing SSL Certificate Verification through Dynamic Linking; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 394-405. Doi: 10.1145/2660267.2660338 Abstract: Recent discoveries of widespread vulnerabilities in the SSL/TLS protocol stack, particularly with regard to the verification of server certificates, have left the security of the Internet's communications in doubt. Newly proposed SSL trust enhancements address many of these vulnerabilities, but are slow to be deployed and do not solve the problem of securing existing software. In this work, we provide new mechanisms that offer immediate solutions for addressing vulnerabilities in legacy code. We introduce CertShim, a lightweight retrofit to SSL implementations that protects against SSL vulnerabilities, including those surveyed by Georgiev et al., in a manner that is transparent to the application. We demonstrate CertShim's extensibility by adapting it to work with Convergence, DANE, and Client-Based Key Pinning. CertShim imposes just 20 ms overhead for an SSL verification call, and hooks the SSL dependencies of 94% of Ubuntu's most popular packages with no changes necessary to existing applications. This work significantly increases system-wide security of SSL communications in non-browser software, while simultaneously reducing the barriers to evaluating and adopting the myriad alternative proposals to the certificate authority system.
Keywords: https, public-key certificates, SSL, TLS (ID#: 15-4610)
URL: http://doi.acm.org/10.1145/2660267.2660338

 

Jeremy Tate, T. Charles Clancy; Secure and Tamper Proof Code Management; SafeConfig '14 Proceedings of the 2014 Workshop on Cyber Security Analytics, Intelligence and Automation, November 2014, Pages 19-24. Doi: 10.1145/2665936.2665940 Abstract: In this paper, we present an additional layer of security for source code repositories by combining Keyless Signature Infrastructure (KSI) with Git to protect against insider threats and to provide security even in the event of a private key compromise. This work shows that the effort required to integrate these two technologies into software development efforts using Git is minimal compared to the security benefit gained. Additionally, we designed the solution to minimize the impact on the current Git workflow, requiring no additional commands when committing code and only one new command to verify past commits.
Keywords: Git, KSI (ID#: 15-4611)
URL: http://doi.acm.org/10.1145/2665936.2665940
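Editor's note: to give readers a feel for why tampering with a committed history is detectable, the toy sketch below models a linear, hash-linked history in the spirit of Git's parent hashes. It is an illustration only: the paper's actual contribution layers KSI's keyless, timestamped signatures on top of Git, which this sketch omits, and all function names here (`commit_digest`, `build_chain`, `verify_chain`) are invented for the example.

```python
import hashlib

def commit_digest(parent_digest: str, payload: bytes) -> str:
    """Bind a commit's content to its parent digest, as Git binds commits to parents."""
    return hashlib.sha256(parent_digest.encode() + payload).hexdigest()

def build_chain(payloads):
    """Compute the digest chain for a linear history starting from an empty root."""
    digests, parent = [], ""
    for p in payloads:
        parent = commit_digest(parent, p)
        digests.append(parent)
    return digests

def verify_chain(payloads, digests) -> bool:
    """Recompute every digest; altering any payload invalidates all later digests."""
    return build_chain(payloads) == digests

history = [b"initial import", b"fix bug", b"add feature"]
chain = build_chain(history)
assert verify_chain(history, chain)
# A single tampered commit breaks verification of the whole chain.
assert not verify_chain([b"initial import", b"FIX bug", b"add feature"], chain)
```

What the hash chain alone cannot prove is *when* a commit was made or whether a key holder rewrote history, which is the gap KSI's signatures are meant to close.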

 

Aniket Kulkarni, Ravindra Metta; A Code Obfuscation Framework Using Code Clones; ICPC 2014 Proceedings of the 22nd International Conference on Program Comprehension, June 2014, Pages 295-299. Doi: 10.1145/2597008.2597807 Abstract: The IT industry loses tens of billions of dollars annually from security attacks such as malicious reverse engineering. To protect sensitive parts of software from such attacks, we designed a code obfuscation scheme based on nontrivial code clones. While implementing this scheme, we realized that currently there is no framework to assist implementation of such advanced obfuscation techniques. Therefore, we have developed a framework to support code obfuscation using code clones. We could successfully implement our obfuscation technique using this framework in Java. In this paper, we present our framework and illustrate it with an example.
Keywords: Code Obfuscation, Framework, Reverse Engineering, Software Protection (ID#: 15-4612)
URL: http://doi.acm.org/10.1145/2597008.2597807
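Editor's note: the general idea behind clone-based obfuscation can be sketched in a few lines. The toy below (in Python rather than the paper's Java, and not the paper's actual scheme) dispatches between two semantically equivalent clones through an opaque predicate, so a reverse engineer must analyze both paths even though only one is ever taken; all names are invented for the example.

```python
def sum_original(xs):
    """The plain computation."""
    return sum(xs)

def sum_clone(xs):
    """A semantically equivalent clone with a different syntactic shape."""
    total = 0
    for x in reversed(xs):
        total += x
    return total

def opaque_predicate(n: int) -> bool:
    """Always true for any integer n, since n*n + n is always even,
    but non-obvious to a naive static analyzer."""
    return (n * n + n) % 2 == 0

def obfuscated_sum(xs):
    """Dispatch through the opaque predicate: both clones look reachable."""
    return sum_original(xs) if opaque_predicate(len(xs)) else sum_clone(xs)

assert obfuscated_sum([1, 2, 3]) == 6
```

Because the predicate is a tautology, behavior is unchanged, which is the invariant any obfuscating transformation must preserve.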


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Software Security, 2014 (ACM), Part 2

 
SoS Logo

Software Security, 2014 (ACM), Part 2

 

This set of bibliographical references about software security research papers is from conference publications posted in the ACM Digital Library. More than 2500 conference papers were presented on this topic in 2014. The set presented here represents those likely to be of most interest to the Science of Security community. They address issues related to measurement, scalability, reliability, and other hard problems. IEEE papers will be presented in a separate series.


 

David Lazar, Haogang Chen, Xi Wang, Nickolai Zeldovich; Why Does Cryptographic Software Fail?: A Case Study and Open Problems; APSys '14 Proceedings of the 5th Asia-Pacific Workshop on Systems, June 2014, Article No. 7. Doi: 10.1145/2637166.2637237 Abstract: Mistakes in cryptographic software implementations often undermine the strong security guarantees offered by cryptography. This paper presents a systematic study of cryptographic vulnerabilities in practice, an examination of state-of-the-art techniques to prevent such vulnerabilities, and a discussion of open problems and possible future research directions. Our study covers 269 cryptographic vulnerabilities reported in the CVE database from January 2011 to May 2014. The results show that just 17% of the bugs are in cryptographic libraries (which often have devastating consequences), and the remaining 83% are misuses of cryptographic libraries by individual applications. We observe that preventing bugs in different parts of a system requires different techniques, and that no effective techniques exist to deal with certain classes of mistakes, such as weak key generation.
Keywords:  (not provided) (ID#: 15-4613)
URL: http://doi.acm.org/10.1145/2637166.2637237
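Editor's note: weak key generation, one of the misuse classes the study flags, is easy to demonstrate. The sketch below (an illustration assembled for this newsletter, not drawn from the paper) contrasts Python's deterministic `random` module, which must never produce secrets, with the OS-backed CSPRNG exposed by `secrets`.

```python
import random
import secrets

def weak_token(nbytes: int = 16) -> str:
    """Misuse: random is a deterministic Mersenne Twister. With a known or
    guessable seed, every 'secret' it emits is fully reproducible."""
    rng = random.Random(1234)  # attacker who learns the seed learns the key
    return bytes(rng.randrange(256) for _ in range(nbytes)).hex()

def strong_token(nbytes: int = 16) -> str:
    """Correct: secrets draws from the operating system's CSPRNG."""
    return secrets.token_hex(nbytes)

# The weak generator yields the identical "secret" on every call.
assert weak_token() == weak_token()
# Two CSPRNG draws of 16 bytes will not collide in practice.
assert strong_token() != strong_token()
```

The bug pattern, which is application-level misuse of a correct library, is exactly the 83% category the study identifies.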

 

Sascha Fahl, Sergej Dechand, Henning Perl, Felix Fischer, Jaromir Smrcek, Matthew Smith; Hey, NSA: Stay Away from my Market! Future Proofing App Markets Against Powerful Attackers; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1143-1155. Doi: 10.1145/2660267.2660311 Abstract: Mobile devices are evolving as the dominant computing platform and consequently application repositories and app markets are becoming the prevalent paradigm for deploying software. Due to their central and trusted position in the software ecosystem, coerced, hacked or malicious app markets pose a serious threat to user security. Currently, there is little that hinders a nation state adversary (NSA) or other powerful attackers from using such central and trusted points of software distribution to deploy customized (malicious) versions of apps to specific users. Due to intransparencies in the current app installation paradigm, this kind of attack is extremely hard to detect. In this paper, we evaluate the risks and drawbacks of current app deployment in the face of powerful attackers. We assess the app signing practices of 97% of all free Google Play apps and find that the current practices make targeted attacks unnecessarily easy and almost impossible to detect for users and app developers alike. We show that high profile Android apps employ intransparent and unaccountable strategies when they publish apps to (multiple) alternative markets. We then present and evaluate Application Transparency (AT), a new framework that can defend against ``targeted-and-stealthy'' attacks mounted by malicious markets. We deployed AT in the wild and conducted an extensive field study in which we analyzed app installations on 253,819 real world Android devices that participate in a popular anti-virus app's telemetry program.
We find that AT can effectively protect users against malicious targeted attack apps and furthermore adds transparency and accountability to the current intransparent signing and packaging strategies employed by many app developers.
Keywords: android, apps, market, nsa, security, transparency (ID#: 15-4614)
URL: http://doi.acm.org/10.1145/2660267.2660311

 

Gábor Pék, Andrea Lanzi, Abhinav Srivastava, Davide Balzarotti, Aurélien Francillon, Christoph Neumann; On the Feasibility of Software Attacks on Commodity Virtual Machine Monitors via Direct Device Assignment; ASIA CCS '14 Proceedings of the 9th ACM Symposium on Information, Computer and Communications Security, June 2014, Pages 305-316. Doi: 10.1145/2590296.2590299 Abstract: The security of virtual machine monitors (VMMs) is a challenging and active field of research. In particular, due to the increasing significance of hardware virtualization in cloud solutions, it is important to clearly understand existing and arising VMM-related threats. Unfortunately, there is still a lot of confusion around this topic as many attacks presented in the past have never been implemented in practice or tested in a realistic scenario. In this paper, we shed light on VM related threats and defences by implementing, testing, and categorizing a wide range of known and unknown attacks based on directly assigned devices. We executed these attacks on an exhaustive set of VMM configurations to determine their potential impact. Our experiments suggest that most of the previously known attacks are ineffective in current VMM setups. We also developed an automatic tool, called PTFuzz, to discover hardware-level problems that affect current VMMs. By using PTFuzz, we found several cases of unexpected hardware behaviour, and a major vulnerability on Intel platforms that potentially impacts a large set of machines used in the wild. These vulnerabilities affect unprivileged virtual machines that use a directly assigned device (e.g., network card) and have all the existing hardware protection mechanisms enabled. Such vulnerabilities allow an attacker to generate host-side interrupts or hardware faults, violating expected isolation properties. These can cause the host software (e.g., the VMM) to halt, and might open the door to practical VMM exploitation.
We believe that our study can help cloud providers and researchers to better understand the limitations of their current architectures to provide secure hardware virtualization and prepare for future attacks. 
Keywords: DMA attack, I/O virtualization, MMIO, PIO, interrupt attack, passthrough, virtual machine monitor (ID#: 15-4615)
URL: http://doi.acm.org/10.1145/2590296.2590299

 

Marco Balduzzi, Alessandro Pasta, Kyle Wilhoit; A Security Evaluation of AIS Automated Identification System; ACSAC '14 Proceedings of the 30th Annual Computer Security Applications Conference, December 2014, Pages 436-445. Doi: 10.1145/2664243.2664257 Abstract: AIS, the Automatic Identification System, is an application of cyber-physical systems (CPS) to smart transportation at sea. Primarily used for collision avoidance and traffic monitoring by ship captains and maritime authorities, AIS has been a mandatory installation for over 300,000 vessels worldwide since 2002. Other promoted benefits are accident investigation, aids to navigation, and search and rescue (SAR) operations. In this paper, we present a unique security evaluation of AIS, introducing threats affecting both the implementations of online providers and the protocol specification. Using a novel software-based AIS transmitter that we designed, we show that our findings affect all transponders deployed globally on vessels and other maritime stations like lighthouses, buoys, AIS gateways, vessel traffic services and aircraft involved in SAR operations. Our concerns have been acknowledged by online providers and international standards organizations, and we are actively working with them to improve the overall security.
Keywords:  (not provided) (ID#: 15-4616)
URL: http://doi.acm.org/10.1145/2664243.2664257

 

Arto Juhola, Titta Ahola, Kimmo Ahola; Adaptive Risk Management with Ontology Linked Evidential Statistics and SDN; ECSAW '14 Proceedings of the 2014 European Conference on Software Architecture Workshops, August 2014, Article No. 2. Doi: 10.1145/2642803.2642805 Abstract: New technologies have increased the dynamism of distributed systems; advances such as Software Defined Networking (SDN) and cloud computing enable unprecedented service flexibility and scalability. By their nature, they are in a constant state of flux, presenting tough challenges for system security. Here, an adaptive, real-time risk management system capable of keeping abreast of these developments is considered. This paper presents on-going work on combining a hierarchical threat ontology, real-time risk analysis, and SDN into an efficient whole. The main contribution of this paper is in finding the suitable architectures, components, necessary requirements, and favorable modifications to the systems and system modelling (including the models involved in the security analysis) to reach this goal.
Keywords: Adaptive security, Dempster-Shafer, Dezert-Smarandache, Neural Network inspired Fuzzy C-means, SDN, Threat ontology (ID#: 15-4617)
URL: http://doi.acm.org/10.1145/2642803.2642805

 

Abdullah Khalili, Ashkan Sami, Mahboobeh Ghiasi, Sara Moshtari, Zahra Salehi, Mahdi Azimi; Software Engineering Issues Regarding Securing ICS: An Industrial Case Study; MoSEMInA 2014 Proceedings of the 1st International Workshop on Modern Software Engineering Methods for Industrial Automation, May 2014, Pages 1-6. Doi: 10.1145/2593783.2593789 Abstract: Industrial Control Systems (ICS) are a vital part of modern critical infrastructures. Recent attacks on ICS indicate that these systems have various types of vulnerabilities. A large number of vulnerabilities are due to insecure coding practices in industrial applications. Several international and national organizations such as NIST, DHS, and US-CERT have provided extensive documentation on securing ICS; however, proper details on securing software applications for industrial settings are not presented. What makes securing ICS particularly difficult is the contradiction between security priorities in ICS and IT systems. In addition, none of the guidelines highlights the implications of adapting general IT security solutions to industrial settings. Moreover, to the best of our knowledge, the steps to develop a successful real-world secure industrial application have not been reported. In this paper, the first attempt to apply secure coding best practices to a real-world industrial application, a Supervisory Control and Data Acquisition system called OpenSCADA, is presented. Experiments indicate that resolving the vulnerabilities of OpenSCADA improved its availability without jeopardizing other dimensions of security.
Keywords: Availability, Industrial Control System, Memory Leak, Secure coding, Time critical process (ID#: 15-4618)
URL: http://doi.acm.org/10.1145/2593783.2593789

 

Timothy Vidas, Nicolas Christin; Evading Android Runtime Analysis via Sandbox Detection; ASIA CCS '14 Proceedings of the 9th ACM Symposium On Information, Computer And Communications Security, June 2014, Pages 447-458. Doi: 10.1145/2590296.2590325 Abstract: The large amounts of malware, and its diversity, have made it necessary for the security community to use automated dynamic analysis systems. These systems often rely on virtualization or emulation, and have recently started to be available to process mobile malware. Conversely, malware authors seek to detect such systems and evade analysis. In this paper, we present techniques for detecting Android runtime analysis systems. Our techniques are classified into four broad classes showing the ability to detect systems based on differences in behavior, performance, hardware and software components, and those resulting from analysis system design choices. We also evaluate our techniques against current publicly accessible systems, all of which are easily identified and can therefore be hindered by a motivated adversary. Our results show some fundamental limitations in the viability of dynamic mobile malware analysis platforms purely based on virtualization. 
Keywords: android, evasion, malware, sandbox (ID#: 15-4619)
URL: http://doi.acm.org/10.1145/2590296.2590325

 

Jeff Wilson, Judith M. Brown, Robert Biddle; ACH Walkthrough: A Distributed Multi-Device Tool for Collaborative Security Analysis; SIW '14 Proceedings of the 2014 ACM Workshop on Security Information Workers, November 2014, Pages 9-16. Doi: 10.1145/2663887.2663902 Abstract: This paper presents ACH Walkthrough, a prototype client-server application that demonstrates the potential benefits of surface technologies in collaborative security intelligence analysis. The basis is the ACH (Analysis of Competing Hypotheses) technique, which requests factors relating to evidence and hypotheses, and builds a model that reduces cognitive bias, thereby helping decision-making. Our application supports development of this model using visualization techniques that allow collaboration and reflection. The technology we use is surface computing, where analysts work around a large multi-touch display, but also multi-device technology, where the model is consistent across various large and small displays. The software runs in standard web-browsers, leveraging HTML5 and JavaScript libraries on both client and server. This allows deployment without installation, and thus provides both security and flexibility.
Keywords: security intelligence analysis, surface computing, visual analytics (ID#: 15-4620)
URL: http://doi.acm.org/10.1145/2663887.2663902

 

Hassan Eldib, Chao Wang, Mostafa Taha, Patrick Schaumont; QMS: Evaluating the Side-Channel Resistance of Masked Software from Source Code;  DAC '14 Proceedings of the 51st Annual Design Automation Conference, June 2014, Article 209, pages 1-6. Doi: 10.1145/2593069.2593193 Abstract: Many commercial systems in the embedded space have shown weakness against power analysis based side-channel attacks in recent years. Designing countermeasures to defend against such attacks is both labor intensive and error prone. Furthermore, there is a lack of formal methods for quantifying the actual strength of a countermeasure implementation. Security design errors may therefore go undetected until the side-channel leakage is physically measured and evaluated. We show a better solution based on static analysis of C source code. We introduce the new notion of Quantitative Masking Strength (QMS) to estimate the amount of information leakage from software through side channels. The QMS can be automatically computed from the source code of a countermeasure implementation. Our experiments, based on side-channel measurement on real devices, show that the QMS accurately quantifies the side-channel resistance of the software implementation.
Keywords: SMT solver, Side channel attack, countermeasure, differential power analysis, quantitative masking strength (ID#: 15-4621)
URL: http://doi.acm.org/10.1145/2593069.2593193
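Editor's note: the countermeasure class that QMS quantifies is masking. The toy below (an editorial illustration, not code or notation from the paper) shows first-order Boolean masking: a secret byte is split into two shares whose XOR recovers it, so a side channel observing either share alone sees only uniform randomness; the function names are invented for the example.

```python
import secrets

def mask(secret_byte: int):
    """Split a secret byte into two shares. Each share, observed alone,
    is uniformly random; their XOR recovers the secret."""
    r = secrets.randbelow(256)
    return r, secret_byte ^ r

def unmask(share0: int, share1: int) -> int:
    """Recombine the two shares."""
    return share0 ^ share1

s = 0xA7
m0, m1 = mask(s)
assert unmask(m0, m1) == s
assert 0 <= m0 < 256 and 0 <= m1 < 256
```

A metric like QMS matters because implementations can leak through joint or transitional effects even when each share looks random in isolation, which is why the paper measures masking strength rather than assuming it.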

 

Lee W. Lerner, Zane R. Franklin, William T. Baumann, Cameron D. Patterson; Using High-Level Synthesis and Formal Analysis to Predict and Preempt Attacks on Industrial Control Systems; FPGA '14 Proceedings of the 2014 ACM/SIGDA International Symposium On Field-Programmable Gate Arrays, February 2014, Pages 209-212. Doi: 10.1145/2554688.2554759 Abstract: Industrial control systems (ICSes) have the conflicting requirements of security and network access. In the event of large-scale hostilities, factories and infrastructure would more likely be targeted by computer viruses than the bomber squadrons used in WWII. ICS zero-day exploits are now a commodity sold on brokerages to interested parties including nations. We mitigate these threats not by bolstering perimeter security, but rather by assuming that potentially all layers of ICS software have already been compromised and are capable of launching a latent attack while reporting normal system status to human operators. In our approach, application-specific configurable hardware is the final authority for scrutinizing controller commands and process sensors, and can monitor and override operations at the lowest (I/O pin) level of a configurable system-on-chip platform. The process specifications, stability-preserving backup controller, and switchover logic are specified and formally verified as C code, and synthesized into hardware to resist software reconfiguration attacks. To provide greater assurance that the backup controller can be invoked before the physical process becomes unstable, copies of the production controller task and plant model are accelerated to preview the controller's behavior in the near future.
Keywords: formal analysis, high-level synthesis, industrial control systems, reconfigurable platform, security (ID#: 15-4622)
URL: http://doi.acm.org/10.1145/2554688.2554759

 

Sandra R. Murillo, J. Alfredo Sánchez; Empowering Interfaces for System Administrators: Keeping the Command Line in Mind when Designing GUIs; Interacción '14 Proceedings of the XV International Conference on Human Computer Interaction, September 2014, Article No. 47. Doi: 10.1145/2662253.2662300 Abstract: In terms of usability, network management software based on command line interfaces (CLI) is efficient but error prone. With GUIs, a new generation of security tools emerged and were adopted by young system administrators. Though usability has improved, it has been argued that CLI-based software tends to support better user performance. Incorporating CLI advantages into graphical versions (or vice versa) remains a challenge. This paper presents a quantitative study of system administrators' practices and preferences regarding GUIs and CLIs, and reports on initial results of a usability evaluation performed on proposed interfaces informed by our study. Personalization features are particularly appreciated by network administrators, which suggests possible strategies for graphical interface designs that improve user experience while maintaining the positive aspects of CLI-based software.
Keywords: GUIs, Human factors, Security, Usability, command line interfaces (ID#: 15-4623)
URL: http://doi.acm.org/10.1145/2662253.2662300

 

Wolfgang Raschke, Massimiliano Zilli, Johannes Loinig, Reinhold Weiss, Christian Steger, Christian Kreiner; Embedding Research in the Industrial Field: A Case of a Transition to a Software Product Line; WISE '14 Proceedings of the 2014 International Workshop on Long-Term Industrial Collaboration on Software Engineering, September 2014, Pages 3-8. Doi: 10.1145/2647648.2647649 Abstract: Java Cards [4, 5] are small resource-constrained embedded systems that have to fulfill rigorous security requirements. Multiple application scenarios demand diverse product performance profiles which are targeted towards markets such as banking applications and mobile applications. In order to tailor the products to the customer's needs we implemented a Software Product Line (SPL). This paper reports on an industrial case of adopting an SPL during the development of a highly-secure software system. In order to provide a scientific method which allows the description of research in the field, we apply Action Research (AR). The rationale of AR is to foster the transition of knowledge from a mature research field to practical problems encountered in the daily routine. Thus, AR is capable of providing insights which might be overlooked in a traditional research approach. In this paper we follow the iterative AR process, and report on the successful transfer of knowledge from a research project to a real industrial application.
Keywords: action research, knowledge transfer, software reuse (ID#: 15-4624)
URL: http://doi.acm.org/10.1145/2647648.2647649

 

Hongxin Hu, Wonkyu Han, Gail-Joon Ahn, Ziming Zhao; FLOWGUARD: Building Robust Firewalls for Software-Defined Networks; HotSDN '14 Proceedings of the Third Workshop On Hot Topics In Software Defined Networking, August 2014, Pages 97-102. Doi: 10.1145/2620728.2620749 Abstract: Software-Defined Networking (SDN) introduces significant granularity, visibility and flexibility to networking, but at the same time brings forth new security challenges. One of the fundamental challenges is to build robust firewalls for protecting OpenFlow-based networks where network states and traffic are frequently changed. To address this challenge, we introduce FlowGuard, a comprehensive framework, to facilitate not only accurate detection but also effective resolution of firewall policy violations in dynamic OpenFlow-based networks. FlowGuard checks network flow path spaces to detect firewall policy violations when network states are updated. In addition, FlowGuard conducts automatic and real-time violation resolutions with the help of several innovative resolution strategies designed for diverse network update situations. We also implement our framework and demonstrate the efficacy and efficiency of the proposed detection and resolution approaches in FlowGuard through experiments with a real-world network topology.
Keywords: firewalls, openflow, security, software-defined networking (ID#: 15-4625)
URL: http://doi.acm.org/10.1145/2620728.2620749
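Editor's note: the core difficulty FlowGuard addresses, namely that header rewrites along a flow path can mask flows a firewall would deny, can be illustrated with a toy check. This sketch is an editorial simplification, not the paper's algorithm (which tracks entire flow path spaces), and all names and addresses in it are invented for the example.

```python
def find_violations(flow_rules, deny_pairs):
    """flow_rules: list of (src, rewritten_src, dst) tuples describing how the
    network rewrites a flow's source header on its path to dst.
    deny_pairs: set of (src, dst) pairs the firewall forbids.
    A rule is a violation when the ORIGINAL source is denied to dst, but the
    rewritten header would slip past a naive per-packet firewall check."""
    violations = []
    for src, rewritten, dst in flow_rules:
        if (src, dst) in deny_pairs and (rewritten, dst) not in deny_pairs:
            violations.append((src, rewritten, dst))
    return violations

deny = {("10.0.0.1", "10.0.0.9")}
rules = [
    ("10.0.0.1", "10.0.0.5", "10.0.0.9"),  # rewrite masks a denied flow
    ("10.0.0.2", "10.0.0.2", "10.0.0.9"),  # no rewrite, flow is permitted
]
assert find_violations(rules, deny) == [("10.0.0.1", "10.0.0.5", "10.0.0.9")]
```

The example shows why violation detection must reason about flow paths and rewrite history rather than inspecting each packet's current headers, which is the observation motivating FlowGuard's design.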

 

Yu Liang, Zhiqiao Li, Xiang Cui; POSTER: Study of Software Plugin-based Malware; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 1463-1465. Doi: 10.1145/2660267.2662381 Abstract: Security issues of software plugins are seldom studied in existing research. The plugin mechanism provides a convenient way to extend an application's functionality. However, it may also introduce susceptibility to new security issues. For example, attackers can create a malicious plugin to accomplish intended goals stealthily. In this poster, we propose a Software Plugin-based Malware (SPM) model and implement SPM prototypes for Microsoft Office, Adobe Reader and mainstream browsers, with the aim of studying the development feasibility of such malware and illustrating their potential threats.
Keywords: SPM, malware, software plugin (ID#: 15-4626)
URL: http://doi.acm.org/10.1145/2660267.2662381

 

Stefano Bianchi Mazzone, Mattia Pagnozzi, Aristide Fattori, Alessandro Reina, Andrea Lanzi, Danilo Bruschi; Improving Mac OS X Security Through Gray Box Fuzzing Technique; EuroSec '14 Proceedings of the Seventh European Workshop on System Security, April 2014, Article No. 2. Doi: 10.1145/2592791.2592793 Abstract: The kernel is the core of any operating system, and its security is of vital importance. A vulnerability in any of its parts compromises the whole system security model. Unprivileged users that find such vulnerabilities can easily crash the attacked system, or obtain administration privileges. In this paper we propose LynxFuzzer, a framework to test kernel extensions, i.e., the dynamically loadable components of the Mac OS X kernel. To overcome the challenges posed by interacting with kernel-level software, LynxFuzzer includes a bare-metal hardware-assisted hypervisor that allows seamless inspection of the state of a running kernel and its components. We implemented and evaluated LynxFuzzer on Mac OS X Mountain Lion and obtained unexpected results: we identified 6 bugs in the 17 kernel extensions we tested, thus proving the usefulness and effectiveness of our framework.
Keywords: (not provided) (ID#: 15-4627)
URL: http://doi.acm.org/10.1145/2592791.2592793

 

Ahmad-Reza Sadeghi, Lucas Davi; Beasty Memories: The Quest for Practical Defense Against Code Reuse Attacks; TrustED '14 Proceedings of the 4th International Workshop on Trustworthy Embedded Devices, November 2014, Pages 23-23. Doi: 10.1145/2666141.2668386 Abstract: Code reuse attacks such as return-oriented programming (ROP) are predominant attack techniques that are extensively used to exploit vulnerabilities in modern software programs. ROP maliciously combines short instruction sequences (gadgets) residing in shared libraries and the application's executable to bypass data execution prevention (DEP) and launch targeted exploits. ROP attacks apply to many processor architectures, from Intel x86 [1] to tiny embedded systems [2]. As a consequence, a variety of defenses have been proposed over the last few years, most prominently code randomization (ASLR) and control-flow integrity (CFI). In particular, constructing practical CFI schemes has become a hot topic of research recently. In this talk, we present the evolution of return-oriented programming (ROP) attacks and defenses. We first give an overview of ROP attacks and techniques. Second, we investigate the security of software diversity based approaches such as fine-grained code randomization [3]. Third, we dive deeper and focus on control-flow integrity (CFI) and show how to bypass all recent (coarse-grained) CFI solutions, including Microsoft's defense tool EMET [4]. Finally, we discuss new research directions to mitigate code reuse attacks, including our current work on hardware-assisted fine-grained control-flow integrity [5]. Part of this research [3-5] was conducted in collaboration with A. Dmitrienko, D. Lehmann, C. Liebchen, P. Koeberl, F. Monrose, and K. Z. Snow.
Keywords: aslr, control-flow integrity, embedded systems security, return-oriented programming, runtime attacks, software/hardware co-design (ID#: 15-4628)
URL: http://doi.acm.org/10.1145/2666141.2668386
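The gadget-chaining idea behind ROP can be illustrated without any real machine code. The sketch below is a toy simulation, not exploit code: the "gadgets" are tiny operations at made-up addresses, and the "payload" is just the sequence of addresses the CPU would successively return through, which is why DEP (no injected code) does not stop it.

```python
# Each "gadget" is a short operation ending in a return; a ROP payload is
# simply a sequence of their addresses placed on the corrupted stack.
GADGETS = {
    0x1000: lambda st: st.__setitem__("eax", st["stack"].pop()),       # pop eax; ret
    0x2000: lambda st: st.__setitem__("ebx", st["stack"].pop()),       # pop ebx; ret
    0x3000: lambda st: st.__setitem__("eax", st["eax"] + st["ebx"]),   # add eax, ebx; ret
}

def execute_chain(chain, stack):
    """Simulate the CPU repeatedly 'returning' into attacker-chosen gadgets."""
    state = {"eax": 0, "ebx": 0, "stack": list(reversed(stack))}
    for addr in chain:
        GADGETS[addr](state)
    return state["eax"]

# A chain that computes 2 + 3 entirely out of pre-existing code snippets.
result = execute_chain([0x1000, 0x2000, 0x3000], stack=[2, 3])
```

Defenses discussed in the talk attack exactly these ingredients: randomization makes the gadget addresses unpredictable, and CFI rejects returns to targets that are not legitimate control-flow successors.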

 

Giovanni Toso, Daniele Munaretto, Mauro Conti, Michele Zorzi; Attack Resilient Underwater Networks Through Software Defined Networking; WUWNET '14 Proceedings of the International Conference on Underwater Networks & Systems, November 2014, Article No. 44. Doi: 10.1145/2671490.2674589 Abstract: In this paper we discuss how security of Underwater Acoustic Networks (UANs) could be improved by leveraging the concept of Software Defined Networking (SDN). In particular, we consider a set of realistic network deployment scenarios and security threats. We propose possible approaches towards security countermeasures that employ the SDN paradigm, and that could significantly mitigate the impact of attacks. Furthermore, we discuss those approaches with respect to deployment issues such as routing configuration, nodes trajectory optimization, and management of the node buffers. We believe that the proposed approaches could pave the way to further research in the design of UANs that are more resilient to both attacks and failures.
Keywords: Software Defined Networking, Underwater Acoustic Networks (ID#: 15-4629)
URL: http://doi.acm.org/10.1145/2671490.2674589

 

Lisa J. K. Durbeck, Peter M. Athanas, Nicholas J. Macias; Secure-by-Construction Composable Componentry for Network Processing; HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 27. Doi: 10.1145/2600176.2600203 Abstract: Techniques commonly used for analyzing streaming video, audio, SIGINT, and network transmissions, at less-than-streaming rates, such as data decimation and ad-hoc sampling, can miss underlying structure, trends and specific events held in the data [3]. This work presents a secure-by-construction approach [7] for the upper-end data streams with rates from 10- to 100 Gigabits per second. The secure-by-construction approach strives to produce system security through the composition of individually secure hardware and software components. The proposed network processor can be used not only at data centers but also within networks and onboard embedded systems at the network periphery for a wide range of tasks, including preprocessing and data cleansing, signal encoding and compression, complex event processing, flow analysis, and other tasks related to collecting and analyzing streaming data. Our design employs a four-layer scalable hardware/software stack that can lead to inherently secure, easily constructed specialized high-speed stream processing.  This work addresses the following contemporary problems: (1) There is a lack of hardware/software systems providing stream processing and data stream analysis operating at the target data rates; for high-rate streams the implementation options are limited: all-software solutions can't attain the target rates [1]. 
GPUs and GPGPUs are also infeasible: they were not designed for I/O at 10-100 Gbps; they also have asymmetric resources for input and output and thus cannot be pipelined [4, 2], whereas custom chip-based solutions are costly and inflexible to changes, and FPGA-based solutions are historically hard to program [6]; (2) There is a distinct advantage to utilizing high-bandwidth or line-speed analytics to reduce time-to-discovery of information, particularly ones that can be pipelined together to conduct a series of processing tasks or data tests without impeding data rates; (3) There are potentially significant network infrastructure cost savings possible from compact and power-efficient analytic support deployed at the network periphery on the data source or one hop away; (4) There is a need for agile deployment in response to changing objectives; (5) There is an opportunity to constrain designs to use only secure components to achieve their specific objectives. We address these five problems in our stream processor design to provide secure, easily specified processing for low-latency, low-power 10-100 Gbps in-line processing on top of a commodity high-end FPGA-based hardware accelerator network processor. With a standard interface a user can snap together various filter blocks, like Legos™, to form a custom processing chain. The overall design is a four-layer solution in which the structurally lowest layer provides the vast computational power to process line-speed streaming packets, and the uppermost layer provides the agility to easily shape the system to the properties of a given application. Current work has focused on design of the two lowest layers, highlighted in the design detail in Figure 1. 
The two layers shown in Figure 1 are the embeddable portion of the design; these layers, operating at up to 100 Gbps, capture both the low- and high-frequency components of a signal or stream, analyze them directly, and pass the lower-frequency components (residues) to the all-software upper layers, Layers 3 and 4; they also optionally supply the data-reduced output up to Layers 3 and 4 for additional processing. Layer 1 is analogous to a systolic array of processors on which simple low-level functions or actions are chained in series [5]. Examples of tasks accomplished at the lowest layer are: (a) check to see if Field 3 of the packet is greater than 5, or (b) count the number of X.75 packets, or (c) select individual fields from data packets. Layer 1 provides the lowest latency, highest throughput processing, analysis and data reduction, formulating raw facts from the stream; Layer 2, also accelerated in hardware and running at full network line rate, combines selected facts from Layer 1, forming a first level of information kernels. Layer 2 comprises a number of combiners intended to integrate facts extracted from Layer 1 for presentation to Layer 3. Still resident in FPGA hardware and hardware-accelerated, a Layer 2 combiner comprises state logic and soft-core microprocessors. Layer 3 runs in software on a host machine, and is essentially the bridge to the embeddable hardware; this layer exposes an API for the consumption of information kernels to create events and manage state. The generated events and state are also made available to an additional software Layer 4, supplying an interface to traditional software-based systems. As shown in the design detail, network data transitions systolically through Layer 1, through a series of light-weight processing filters that extract and/or modify packet contents. All filters have a similar interface: streams enter from the left, exit to the right, and relevant facts are passed upward to Layer 2. 
The output of the end of the chain in Layer 1 shown in the Figure 1 can be (a) left unconnected (for purely monitoring activities), (b) redirected into the network (for bent pipe operations), or (c) passed to another identical processor, for extended processing on a given stream (scalability).
Keywords: 100 Gbps, embedded hardware, hardware-software co-design, line-speed processor, network processor, secure-by-construction, stream processing (ID#: 15-4630)
URL: http://doi.acm.org/10.1145/2600176.2600203
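The Lego-style composition of Layer-1 filters described in this abstract maps naturally onto a generator pipeline. A minimal software sketch (all field names and labels hypothetical): each filter passes the packet stream through unchanged, like the hardware blocks, while emitting "facts" upward, standing in for Layer 1 feeding Layer 2.

```python
from functools import reduce

def make_filter(predicate, facts, label):
    """Build a Layer-1-style filter: stream in from the left, out to the
    right, with matching packets reported upward as (label, packet) facts."""
    def filt(stream):
        for pkt in stream:
            if predicate(pkt):
                facts.append((label, pkt))
            yield pkt  # the stream always continues to the next filter
    return filt

def chain(filters, stream):
    """Snap filters together into a processing chain, Lego-style."""
    return reduce(lambda s, f: f(s), filters, stream)

facts = []  # stands in for the upward path to Layer 2
packets = [{"field3": 7}, {"field3": 2}, {"field3": 9}]
pipeline = chain(
    [make_filter(lambda p: p["field3"] > 5, facts, "field3>5")],
    iter(packets),
)
out = list(pipeline)  # packets exit unchanged; facts went "up"
```

The real design does this systolically in FPGA fabric at line rate; the generator version only mirrors the composition pattern, not the performance.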

 

Komminist Weldemariam, Hossain Shahriar, VamsheeKrishna Devendran; Dynamic Analysis of Web Objects; SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 423. Doi: 10.1145/2659651.2659671 Abstract: Various reports show that web browsers are known for being insecure, with a growing number of flaws that make them vulnerable to various attacks. Such attacks can be used to execute arbitrary procedures on the victims' computer and silently install malicious software, turning them into bots. In addition, browsers are complex and typically incorporate third-party libraries installed on-demand. This makes it very difficult for security experts to analyze the causes of such flaws or devise countermeasures. In this paper, we present an approach to detect and prevent attacks against a browser by intercepting the interactions between its core libraries and the underlying operating system. We then build mathematical models that capture the behavior of the browser during the rendering of web objects. Finally, we show that such models can be leveraged to automatically classify web objects as malicious or benign using real-world malicious websites.  
Keywords: Analysis, Browser, Detection, HMM, Malicious Webpages (ID#: 15-4631)
URL: http://doi.acm.org/10.1145/2659651.2659671
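The paper builds hidden Markov models over the interactions between browser libraries and the operating system. As a simplified illustration of the scoring idea only, the sketch below trains a visible-state Markov model on "benign" event traces and flags traces with low likelihood; the event names and the flooring constant are invented, and a real HMM adds hidden states and the forward algorithm.

```python
import math
from collections import defaultdict

def train_markov(sequences):
    """Estimate first-order transition probabilities from event traces."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    model = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        model[a] = {b: n / total for b, n in nxt.items()}
    return model

def log_likelihood(model, seq, floor=1e-6):
    """Score a trace; a low score under the benign model flags the object."""
    ll = 0.0
    for a, b in zip(seq, seq[1:]):
        ll += math.log(model.get(a, {}).get(b, floor))
    return ll

benign_traces = [["open", "read", "close"]] * 10
model = train_markov(benign_traces)
ok = log_likelihood(model, ["open", "read", "close"])
odd = log_likelihood(model, ["open", "write", "exec"])
```

A classifier would then threshold the score (or compare likelihoods under benign and malicious models) to label a rendered web object.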

 

SungGyeong Bae, Hyunghun Cho, Inho Lim, Sukyoung Ryu; SAFEWAPI: Web API Misuse Detector for Web Applications; FSE 2014 Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, November 2014, Pages 507-517. Doi: 10.1145/2635868.2635916 Abstract: The evolution of Web 2.0 technologies makes web applications prevalent in various platforms including mobile devices and smart TVs. While one of the driving technologies of web applications is JavaScript, the extremely dynamic features of JavaScript make it very difficult to define and detect errors in JavaScript applications. The problem becomes more important and complicated for JavaScript web applications which may lead to severe security vulnerabilities. To help developers write safe JavaScript web applications using vendor-specific Web APIs, vendors specify their APIs often in Web IDL, which enables both API writers and users to communicate better by understanding the expected behaviors of the Web APIs. In this paper, we present SAFEWAPI, a tool to analyze Web APIs and JavaScript web applications that use the Web APIs and to detect possible misuses of Web APIs by the web applications. Even though the JavaScript language semantics allows calling a function defined with some parameters without any arguments, platform developers may require application writers to provide the exact number of arguments. Because the library functions in Web APIs expose their intended semantics clearly to web application developers unlike pure JavaScript functions, we can detect wrong uses of Web APIs precisely. For representative misuses of Web APIs defined by software quality assurance engineers, our SAFEWAPI detects such misuses in real-world JavaScript web applications.
Keywords: JavaScript, bug detection, static analysis, web application (ID#: 15-4632)
URL: http://doi.acm.org/10.1145/2635868.2635916
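SAFEWAPI itself statically analyzes JavaScript against Web IDL declarations; the arity check at its core can be sketched in a few lines. Everything here is hypothetical: the API names, the signature table, and the call-site tuples that a real tool would extract from source code rather than receive as input.

```python
# Hypothetical Web-IDL-style declarations: required vs. permitted argument counts.
API_SIGNATURES = {
    "tv.setChannel": {"min_args": 1, "max_args": 2},
    "tv.getVolume": {"min_args": 0, "max_args": 0},
}

def check_calls(call_sites):
    """Report calls whose argument count violates the declared signature.

    `call_sites` is a list of (function name, argument count) pairs,
    standing in for what a static analysis would extract from JavaScript.
    """
    report = []
    for func, nargs in call_sites:
        sig = API_SIGNATURES.get(func)
        if sig is None:
            continue  # not a platform API; plain JavaScript semantics apply
        if not (sig["min_args"] <= nargs <= sig["max_args"]):
            report.append((func, nargs))
    return report

misuses = check_calls([
    ("tv.setChannel", 0),   # too few arguments: flagged
    ("tv.getVolume", 0),    # matches the declaration: fine
    ("tv.setChannel", 1),   # within [1, 2]: fine
])
```

The paper's point is that because Web IDL pins down intended semantics, such checks can be precise even though the same call would be legal under plain JavaScript semantics.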

 

Bernhard Grill, Christian Platzer, Jürgen Eckel; A Practical Approach for Generic Bootkit Detection and Prevention; EuroSec '14 Proceedings of the Seventh European Workshop on System Security, April 2014, Article No. 4. Doi: 10.1145/2592791.2592795 Abstract: Bootkits are still the most powerful tool for attackers to stealthily infiltrate computer systems. In this paper we present a novel approach to detect and prevent bootkit attacks during the infection phase. Our approach relies on emulation and monitoring of the system's boot process. We present results of a preliminary evaluation on our approach using a Windows system and the leaked Carberp bootkit.
Keywords: bootkit detection and prevention, dynamic malware analysis, x86 emulation (ID#: 15-4633)
URL: http://doi.acm.org/10.1145/2592791.2592795

 

Amiangshu Bosu, Jeffrey C. Carver, Munawar Hafiz, Patrick Hilley, Derek Janni; Identifying the Characteristics of Vulnerable Code Changes: An Empirical Study; FSE 2014 Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering, November 2014, Pages 257-268. Doi: 10.1145/2635868.2635880 Abstract: To focus the efforts of security experts, the goals of this empirical study are to analyze which security vulnerabilities can be discovered by code review, identify characteristics of vulnerable code changes, and identify characteristics of developers likely to introduce vulnerabilities. Using a three-stage manual and automated process, we analyzed 267,046 code review requests from 10 open source projects and identified 413 Vulnerable Code Changes (VCC). Some key results include: (1) code review can identify common types of vulnerabilities; (2) while more experienced contributors authored the majority of the VCCs, the less experienced contributors' changes were 1.8 to 24 times more likely to be vulnerable; (3) the likelihood of a vulnerability increases with the number of lines changed, and (4) modified files are more likely to contain vulnerabilities than new files. Knowing which code changes are more prone to contain vulnerabilities may allow a security expert to concentrate on a smaller subset of submitted code changes. Moreover, we recommend that projects should: (a) create or adapt secure coding guidelines, (b) create a dedicated security review team, (c) ensure detailed comments during review to help knowledge dissemination, and (d) encourage developers to make small, incremental changes rather than large changes.
Keywords: code review, inspection, open source, security defects, vulnerability (ID#: 15-4634)
URL: http://doi.acm.org/10.1145/2635868.2635880
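The "1.8 to 24 times more likely" finding is a relative-risk style ratio over change counts. A minimal sketch with illustrative numbers (not the study's data):

```python
def relative_risk(vuln_a, total_a, vuln_b, total_b):
    """Ratio of vulnerability rates: how much more likely group A's code
    changes are to be vulnerable than group B's."""
    rate_a = vuln_a / total_a
    rate_b = vuln_b / total_b
    return rate_a / rate_b

# Hypothetical counts: less experienced vs. experienced contributors.
rr = relative_risk(vuln_a=20, total_a=1000, vuln_b=100, total_b=10000)
```

Here the less experienced group's changes would be twice as likely to be vulnerable, the kind of signal the authors suggest using to prioritize review effort.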

 

Florian Bergsma, Benjamin Dowling, Florian Kohlar, Jörg Schwenk, Douglas Stebila; Multi-Ciphersuite Security of the Secure Shell (SSH) Protocol; CCS '14 Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, November 2014, Pages 369-381. Doi: 10.1145/2660267.2660286 Abstract: The Secure Shell (SSH) protocol is widely used to provide secure remote access to servers, making it among the most important security protocols on the Internet. We show that the signed-Diffie--Hellman SSH ciphersuites of the SSH protocol are secure: each is a secure authenticated and confidential channel establishment (ACCE) protocol, the same security definition now used to describe the security of Transport Layer Security (TLS) ciphersuites. While the ACCE definition suffices to describe the security of individual ciphersuites, it does not cover the case where parties use the same long-term key with many different ciphersuites: it is common in practice for the server to use the same signing key with both finite field and elliptic curve Diffie--Hellman, for example. While TLS is vulnerable to attack in this case, we show that SSH is secure even when the same signing key is used across multiple ciphersuites. We introduce a new generic multi-ciphersuite composition framework to achieve this result in a black-box way. 
Keywords: authenticated and confidential channel establishment, cross-protocol security, key agility, multi-ciphersuite, secure shell (SSH) (ID#: 15-4635)
URL: http://doi.acm.org/10.1145/2660267.2660286

 

Bob Duncan, Mark Whittington; Compliance with Standards, Assurance and Audit: Does this Equal Security?;  SIN '14 Proceedings of the 7th International Conference on Security of Information and Networks, September 2014, Pages 77ff. Doi: 10.1145/2659651.2659711  Abstract: Managing information security is a challenge. Traditional checklist approaches to meeting standards may well provide compliance, but do not guarantee to provide security assurance. The same might be said for audit. The complexity of IT relationships must be acknowledged and explicitly managed by recognising the implications of the self-interest of each party involved. We show how tensions between these parties can lead to a misalignment of the goals of security and what needs to be done to ensure this does not happen.
Keywords: Standards, assurance, audit, checklists, compliance, security (ID#: 15-4636)
URL: http://doi.acm.org/10.1145/2659651.2659711

 

Shweta Subramani, Mladen Vouk, Laurie Williams; An Analysis of Fedora Security Profile; HotSoS '14 Proceedings of the 2014 Symposium and Bootcamp on the Science of Security, April 2014, Article No. 35. Doi: 10.1145/2600176.2600211  Abstract: This paper examines security faults/vulnerabilities reported for Fedora. Results indicate that, at least in some situations, the fault discovery rate is roughly constant and may be used to guide estimation of residual vulnerabilities in an already released product, as well as possibly to guide testing of the next version of the product.
Keywords: Fedora, detection, non-operational testing, prediction, security faults, vulnerabilities (ID#: 15-4637)
URL: http://doi.acm.org/10.1145/2600176.2600211
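If the post-release discovery rate of security faults is roughly constant, as the paper suggests can happen, a first-order estimate of the faults still to surface over a support window is just rate times time. A tiny sketch with invented numbers, purely to illustrate the arithmetic:

```python
def residual_estimate(discovery_rate_per_month: float, months_remaining: float) -> float:
    """First-order projection of security faults still to be reported,
    assuming a roughly constant post-release discovery rate."""
    return discovery_rate_per_month * months_remaining

# Hypothetical: 2.5 security faults reported per month, 12 months of support left.
expected_remaining = residual_estimate(discovery_rate_per_month=2.5, months_remaining=12)
```

A constant rate is the degenerate case of the reliability-growth models discussed elsewhere in this issue; when the rate visibly decays, those models give better projections.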



Software Security, 2014 (IEEE), Part 1

 

 

This set of bibliographical references on software security is drawn from conference publications posted in the IEEE Digital Library. More than 1100 conference papers were presented on this topic in 2014. The set presented here represents those likely to be of most interest to the Science of Security community; they address issues related to measurement, scalability, reliability, and other hard problems. ACM papers are presented in a separate series.


 

Axelrod, C.W., "Reducing Software Assurance Risks for Security-Critical and Safety-Critical Systems," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, pp.1,6, 2-2 May 2014. doi: 10.1109/LISAT.2014.6845212 Abstract: According to the Office of the Assistant Secretary of Defense for Research and Engineering (ASD(R&E)), the US Department of Defense (DoD) recognizes that there is a “persistent lack of a consistent approach ... for the certification of software assurance tools, testing and methodologies” [1]. As a result, the ASD(R&E) is seeking “to address vulnerabilities and weaknesses to cyber threats of the software that operates ... routine applications and critical kinetic systems ...” The mitigation of these risks has been recognized as a significant issue to be addressed in both the public and private sectors. In this paper we examine deficiencies in various software-assurance approaches and suggest ways in which they can be improved. We take a broad look at current approaches, identify their inherent weaknesses and propose approaches that serve to reduce risks. Some technical, economic and governance issues are: (1) Development of software-assurance technical standards (2) Management of software-assurance standards (3) Evaluation of tools, techniques, and metrics (4) Determination of update frequency for tools, techniques (5) Focus on most pressing threats to software systems (6) Suggestions as to risk-reducing research areas (7) Establishment of models of the economics of software-assurance solutions, and testing and certifying software We show that, in order to improve current software assurance policy and practices, particularly with respect to security, there has to be a major overhaul in how software is developed, especially with respect to the requirements and testing phases of the SDLC (Software Development Lifecycle). 
We also suggest that the current preventative approaches are inadequate and that greater reliance should be placed upon avoidance and deterrence. We also recommend that those developing and operating security-critical and safety-critical systems exchange best-of-breed software assurance methods to prevent the vulnerability of components leading to compromise of entire systems of systems. The recent catastrophic loss of a Malaysia Airlines airplane is then presented as an example of possible compromises of physical and logical security of on-board communications and management and control systems.
Keywords: program testing; safety-critical software; software development management; software metrics; ASD(R&E); Assistant Secretary of Defense for Research and Engineering; Malaysia Airlines airplane; SDLC; US Department of Defense; US DoD; component vulnerability prevention; control systems; critical kinetic systems; cyber threats; economic issues; governance issues; logical security; management systems; on-board communications; physical security; private sectors; public sectors; risk mitigation; safety-critical systems; security-critical systems; software assurance risk reduction; software assurance tool certification; software development; software development lifecycle; software methodologies; software metric evaluation; software requirements; software system threats; software technique evaluation; software testing; software tool evaluation; software-assurance standard management; software-assurance technical standard development; technical issues; update frequency determination; Measurement; Organizations; Security; Software systems; Standards; Testing; cyber threats; cyber-physical systems; governance; risk; safety-critical systems; security-critical systems; software assurance; technical standards; vulnerabilities; weaknesses (ID#: 15-4546)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845212&isnumber=6845183

 

AlBreiki, Hamda Hasan; Mahmoud, Qusay H., "Evaluation of Static Analysis Tools for Software Security," Innovations in Information Technology (INNOVATIONS), 2014 10th International Conference on, pp.93, 98, 9-11 Nov. 2014. doi: 10.1109/INNOVATIONS.2014.6987569 Abstract: Security has always been treated as an add-on feature in the software development lifecycle, and addressed by security professionals using firewalls, proxies, intrusion prevention systems, antivirus and platform security. Software is at the root of all common computer security problems, and hence hackers don't create security holes, but rather exploit them. Security holes in software applications are the result of bad design and implementation of software systems and applications. To address this problem, several initiatives for integrating security in the software development lifecycle have been proposed, along with tools to support a security-centric software development lifecycle. This paper introduces a framework for evaluating security static analysis tools such as source code analyzers, and offers evaluation of non-commercial static analysis tools such as Yasca, CAT.NET, and FindBugs. In order to evaluate the effectiveness of such tools, common software weaknesses are defined based on CWE/SANS Top 25, OWASP Top Ten and NIST source code weaknesses. The evaluation methodology is based on the NIST Software Assurance Metrics And Tool Evaluation (SAMATE). Results show that security static analysis tools are, to some extent, effective in detecting security holes in source code; source code analyzers are able to detect more weaknesses than bytecode and binary code scanners; and while tools can assist the development team in security code review activities, they are not enough to uncover all common weaknesses in software. The new test cases developed for this research have been contributed to the NIST Software Assurance Reference Dataset (samate.nist.gov/SARD).
Keywords: Binary codes; Computer architecture; Industries; Java; NIST; Security; Software; OWASP; SAMATE; security metrics; software security; static analysis (ID#: 15-4547)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6987569&isnumber=6985764
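Static analyzers of the kind evaluated here work by pattern-matching a program representation against weakness signatures. A toy analyzer in that spirit, written over Python's `ast` module for self-containment (the evaluated tools target other languages and use far richer interprocedural rules; the CWE labels are illustrative):

```python
import ast

def scan_source(source: str):
    """Walk the AST and flag two well-known weakness patterns."""
    findings = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # Pattern 1: evaluation of dynamic code.
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            findings.append(("CWE-95: eval of dynamic code", node.lineno))
        # Pattern 2: string literal assigned to a password-like name.
        if isinstance(node, ast.Assign):
            for target in node.targets:
                if (isinstance(target, ast.Name)
                        and "password" in target.id.lower()
                        and isinstance(node.value, ast.Constant)
                        and isinstance(node.value.value, str)):
                    findings.append(("CWE-798: hardcoded credential", node.lineno))
    return findings

sample = 'admin_password = "hunter2"\nresult = eval(user_input)\n'
issues = scan_source(sample)
```

Such syntactic rules are exactly why the paper finds tools effective "to some extent": they catch the pattern, but miss weaknesses that only appear across data flows or in deployed configurations.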

 

Razian, Mohammad Reza; Sangchi, Hasan Mokhtari, "A Threatened-Based Software Security Evaluation Method," Information Security and Cryptology (ISCISC), 2014 11th International ISC Conference on, pp.120,125, 3-4 Sept. 2014. doi: 10.1109/ISCISC.2014.6994034 Abstract: Nowadays, security evaluation of software is a substantial matter in the software world. The security level of software is determined by the wealth of data and operations it provides for us. The security level is usually evaluated by a third party, named Software Security Certification Issuance Centers. It is important for software security evaluators to perform a sound and complete evaluation, which is a complicated process considering the increasing number of emerging threats. In this paper we propose a Threatened-based Software Security Evaluation method to improve the security evaluation process of software. In this method, we focus on existing threatened entities of software which in turn result in software threats and their corresponding controls and countermeasures. We also demonstrate a Security Evaluation Assistant (SEA) tool to practically show the effectiveness of our evaluation method.
Keywords: Certification; Feature extraction; Organizations; Security; Software; Standards; Vectors; Assessment; Control; Evaluation; Security; Security Certification; Software; Software Security; Threat; Threatened (ID#: 15-4548)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6994034&isnumber=6994006

 

Zhuobing Han; Xiaohong Li; Ruitao Feng; Jing Hu; Guangquan Xu; Zhiyong Feng, "A Three-Dimensional Model for Software Security Evaluation," Theoretical Aspects of Software Engineering Conference (TASE), 2014, pp.34,41, 1-3 Sept. 2014. doi: 10.1109/TASE.2014.31 Abstract: Software security evaluation is considered as a significant and indispensable activity in all phases of software development lifecycle, and there are also many factors that should be taken into account such as the environment, risks, and development documents. Despite the achievements of the past several decades, there is still a lack of methodology in evaluating software security systematically. In this paper, we propose a comprehensive model for evaluating the software security from three different but complementary points of view: technology, management and engineering. The technological dimension is 7 security levels based on Evaluation Assurance Levels (EALs) from ISO/IEC15408, the management dimension mainly concerns the management of software infrastructures, development documents and risks, and the engineering dimension focuses on 5 stages of software development lifecycle. Experts evaluate software security through the evidence items which are collected from these three dimensions and provide their assessments. Relying on Analytic Hierarchy Process (AHP) and Dempster-Shafer Evidence Theory, assessments obtained from the experts can be combined and merged to get a score which presents the security degree of software. A case study illustrates how the evaluators may use the proposed approach to evaluate security of their system.
Keywords: analytic hierarchy process; inference mechanisms; security of data; software engineering; uncertainty handling; AHP; Dempster-Shafer evidence theory; analytic hierarchy process; software development lifecycle; software infrastructure management; software security evaluation; Analytical models; Capability maturity model; Security; Software; Solid modeling; Testing; Uncertainty; Common Criteria; Evidence; Software Life Cycle; Software Security Evaluation; Three-Dimensional Model (ID#: 15-4549)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6976565&isnumber=6976553
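The paper merges expert assessments with AHP and Dempster-Shafer theory. Dempster's combination rule itself is compact enough to sketch; the two-expert example below is invented (a toy frame with "secure"/"insecure" outcomes), and a real evaluation would combine AHP-weighted evidence from all three dimensions.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal elements are
    frozensets over the frame of discernment."""
    combined = {}
    conflict = 0.0
    for a, mass_a in m1.items():
        for b, mass_b in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mass_a * mass_b
            else:
                conflict += mass_a * mass_b  # contradictory evidence
    if conflict >= 1.0:
        raise ValueError("total conflict; Dempster's rule is undefined")
    # Renormalize by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

SECURE = frozenset({"secure"})
INSECURE = frozenset({"insecure"})
FRAME = SECURE | INSECURE          # mass on the whole frame = ignorance

expert1 = {SECURE: 0.7, FRAME: 0.3}
expert2 = {SECURE: 0.6, FRAME: 0.4}
merged = dempster_combine(expert1, expert2)
```

Note how two partially confident experts reinforce each other: the combined belief in "secure" (0.88) exceeds either individual assessment, with the remaining mass still on ignorance.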

 

Daughtrey, T., "Prospects for Software Security Growth Modeling," Reliability and Maintainability Symposium (RAMS), 2014 Annual, pp.1,5, 27-30 Jan. 2014. doi: 10.1109/RAMS.2014.6798453 Abstract: Modern society depends on the continuing correct operation of software-based systems. Critical infrastructures -- including energy, communication, transportation, and finance -- all function within powerful and complex computing environments. The dependability of these systems is increasingly threatened by a wide range of adversaries, and increasing investments are being made to provide and assess sufficient security for these systems. Engineering and business decisions have to be made in response to questions such as: “How secure does this system have to be?” “What kinds and amounts of development and appraisal activities should be funded?” “Is the system ready to be placed into operation?” Software quality engineering has addressed similar issues for other product attributes. In particular, there is a considerable body of experience with techniques and tools for specifying and measuring software reliability. Much effort has gone into modeling the improvement in software reliability during development and testing. An analogous approach to security growth modeling would quantify how the projected security of a system increases with additional detection and removal of software vulnerabilities. Such insights could guide allocation of resources during development and ultimately assist in making the decision to release the product. This paper will first summarize software reliability engineering and its use of software reliability growth modeling before considering potential analogies in software security engineering and software security growth modeling. After describing several limitations in either type of modeling, the role of risk management will be considered.
Keywords: risk management; security of data; software reliability; business decision; communication infrastructure; computing environments; energy infrastructure; engineering decision; finance infrastructure; resource allocation; risk management; software quality engineering; software reliability engineering; software reliability growth modeling; software security engineering; software security growth modeling; software vulnerabilities; software-based systems; transportation infrastructure; Computational modeling; Data models; Security; Software; Software reliability; Testing; reliability growth; security; software quality; software reliability (ID#: 15-4550)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798453&isnumber=6798433
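The reliability-growth analogy the paper draws can be made concrete with a standard model. The sketch below uses the Goel-Okumoto NHPP form, m(t) = a(1 - e^(-bt)), for expected cumulative defects found by time t; a security-growth analogue would count discovered vulnerabilities instead. The parameters are invented for illustration, not fitted to any data.

```python
import math

def goel_okumoto(t: float, a: float, b: float) -> float:
    """Expected cumulative defects discovered by time t under the
    Goel-Okumoto reliability-growth model: m(t) = a * (1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def residual(t: float, a: float, b: float) -> float:
    """Defects (or vulnerabilities) projected to remain undiscovered at t."""
    return a - goel_okumoto(t, a, b)

# Toy parameters: 100 latent flaws in total, per-flaw detection rate 0.1/week.
found_10wk = goel_okumoto(10, a=100, b=0.1)
left_10wk = residual(10, a=100, b=0.1)
```

This is exactly the shape of insight the paper asks for: a projection of how much security remains to be gained from further detection effort, which can inform release decisions, subject to the limitations the author discusses.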

 

Zhioua, Z.; Short, S.; Roudier, Y., "Static Code Analysis for Software Security Verification: Problems and Approaches," Computer Software and Applications Conference Workshops (COMPSACW), 2014 IEEE 38th International, pp.102, 109, 21-25 July 2014. doi: 10.1109/COMPSACW.2014.22 Abstract: Developing and deploying secure software is a difficult task, one that is even harder when the developer has to be conscious of adhering to specific company security requirements. In order to facilitate this, different approaches have been elaborated over the years to varying degrees of success. To better understand the underlying issues, this paper describes and evaluates a number of static code analysis techniques and tools based on an example that illustrates prevalent software security challenges. The latter can be addressed by considering an approach that allows for the detection of security properties and their transformation into security policies that can be validated against security requirements. This would help the developer throughout the software development lifecycle and ensure compliance with security specifications.
Keywords: formal specification; formal verification; program diagnostics; security of data; security policies; security properties detection; security requirements; security specifications; software development lifecycle; software security verification; static code analysis techniques; Abstracts; Analytical models; Model checking; Programming; Security; Software; code analysis tools; program modeling; security properties; static analysis (ID#: 15-4551)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903113&isnumber=6903069

 

Balachandran, V.; Ng Wee Keong; Emmanuel, S., "Function Level Control Flow Obfuscation for Software Security," Complex, Intelligent and Software Intensive Systems (CISIS), 2014 Eighth International Conference on, pp.133,140, 2-4 July 2014. doi: 10.1109/CISIS.2014.20 Abstract: Software released to the user has the risk of reverse engineering attacks. Software control flow obfuscation is one of the techniques used to make the reverse engineering of software programs harder. Control flow obfuscation obscures the control flow of the program so that it is hard for an analyzer to decode the logic of the program. In this paper, we propose an obfuscation algorithm which obscures the control flow across functions. In our method, code fragments from each function are stripped from the original function and stored in another function. Each function thus holds code fragments from different functions, creating a function-level shuffled version of the original program. Control flow is obscured both between and within functions by this method. Experimental results indicate that the algorithm performs well against automated attacks.
Keywords: program control structures; reverse engineering; security of data; function level control flow obfuscation; function level shuffled version; reverse engineering attacks; software control flow obfuscation; software security; Algorithm design and analysis; Assembly; Heuristic algorithms; Reverse engineering; Security; Software; Software algorithms; code obfuscation; computer security; software security (ID#: 15-4552)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6915508&isnumber=6915447
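The fragment-shuffling idea above can be sketched in miniature. The following toy (a sketch in Python rather than the paper's binary-level algorithm; the fragment functions, the seeded shuffle, and the dispatch table are all invented for illustration) shows the core concept: fragments are stored out of logical order, and a hidden step-to-slot map restores the original control flow:

```python
import random

# Logical steps of an original function, each transforming the state.
fragments = [
    lambda x: x + 1,
    lambda x: x * 2,
    lambda x: x - 3,
]

def shuffle_fragments(frags, seed=42):
    """Scatter fragments into shuffled slots, keeping a map from logical
    step to slot so a dispatcher can still run them in order."""
    slots = list(range(len(frags)))
    random.Random(seed).shuffle(slots)
    store = {slot: frag for slot, frag in zip(slots, frags)}
    step_to_slot = dict(enumerate(slots))
    return store, step_to_slot

def run(store, step_to_slot, x):
    # The dispatcher follows the step-to-slot map, so the shuffled
    # storage layout does not change the computed result.
    for step in range(len(store)):
        x = store[step_to_slot[step]](x)
    return x

store, index = shuffle_fragments(fragments)
print(run(store, index, 5))  # ((5 + 1) * 2) - 3 = 9
```

A real implementation of this technique operates on assembly-level code fragments linked by jump instructions, so an analyzer reading any one function sees an interleaving of several functions' logic rather than a convenient lookup table.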

 

Hui Guan; Xuan Wang; Hongji Yang, "A Framework for Security Driven Software Evolution," Automation and Computing (ICAC), 2014 20th International Conference on, pp.194, 199, 12-13 Sept. 2014. doi: 10.1109/IConAC.2014.6935485 Abstract: Security has become a key non-functional requirement in modern software systems. The need to improve the security level of legacy systems is as important as it is for newly designed systems. However, integrating security engineering into a legacy system is sometimes very difficult. After examining the current literature on security improvement, this paper proposes a framework for enhancing the security of legacy systems from a software evolution perspective using a model driven approach. It starts from understanding and extracting models from legacy source code. Security requirements are elicited through analysing security risks and satisfied by integrating security patterns with the support of the proposed security ontology. The proposed framework provides a comprehensive approach that guides the designer through the process of security oriented evolution.
Keywords: ontologies (artificial intelligence); risk management; security of data; software maintenance; source code (software);comprehensive approach; legacy source code; legacy systems; model driven approach; nonfunctional requirement; security driven software evolution framework; security engineering integration; security level improvement; security ontology; security pattern integration; security requirements; security risk analysis; software system; Aging; Context; Object oriented modeling; Ontologies; Security; Software; Unified modeling language; model driven; ontology; security pattern; security requirement; software evolution (ID#: 15-4553)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6935485&isnumber=6935445

 

Gartner, S.; Ruhroth, T.; Burger, J.; Schneider, K.; Jurjens, J., "Maintaining Requirements for Long-Living Software Systems by Incorporating Security Knowledge," Requirements Engineering Conference (RE), 2014 IEEE 22nd International, pp.103, 112, 25-29 Aug. 2014. doi: 10.1109/RE.2014.6912252 Abstract: Security is an increasingly important quality facet in modern information systems and needs to be retained. Due to a constantly changing environment, long-living software systems “age” not by wearing out, but by failing to keep up-to-date with their environment. The problem is that requirements engineers usually do not have a complete overview of the security-related knowledge necessary to retain security of long-living software systems. This includes security standards, principles and guidelines as well as reported security incidents. In this paper, we focus on the identification of known vulnerabilities (and their variations) in natural-language requirements by leveraging security knowledge. For this purpose, we present an integrative security knowledge model and a heuristic method to detect vulnerabilities in requirements based on reported security incidents. To support knowledge evolution, we further propose a method based on natural language analysis to refine and to adapt security knowledge. Our evaluation indicates that the proposed assessment approach detects vulnerable requirements more reliably than other methods (Bayes, SVM, k-NN). Thus, requirements engineers can react faster and more effectively to a changing environment that has an impact on the desired security level of the information system.
Keywords: information systems; natural language processing; security of data; software maintenance; software quality; heuristic method; information needs; information systems; integrative security knowledge model; knowledge evolution; long-living software systems; natural-language requirements; quality facet; requirement engineering; requirement maintenance; security incidents; security standards; vulnerability identification; Analytical models; Information systems; Natural languages; Ontologies; Security; Taxonomy; Heuristics; Knowledge carrying software; Requirements analysis; Security requirements; Software evolution (ID#: 15-4554)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912252&isnumber=6912234

 

Tlili, S.; Fernandez, J.M.; Belghith, A.; Dridi, B.; Hidouri, S., "Scalable Security Verification of Software at Compile Time," Source Code Analysis and Manipulation (SCAM), 2014 IEEE 14th International Working Conference on, pp.115,124, 28-29 Sept. 2014. doi: 10.1109/SCAM.2014.20 Abstract: Automated verification tools are required to detect coding errors that may lead to severe software vulnerabilities. However, the usage of these tools is still not well integrated into the software development life cycle. In this paper, we present our approach that brings the software compilation process and security verification to a meeting point where both can be applied simultaneously in a user-friendly manner. Our security verification engine is implemented as a new GCC pass that can be enabled via the flag -fsecurity-check=checks.xml, where the input XML file contains a set of user-defined security checks. The verification operates on the GIMPLE intermediate representation of source code, which is language and platform independent. The conducted experiments demonstrate the scalability, efficiency and performance of our engine used to verify large scale software, in particular the entire Linux kernel source code.
Keywords: Linux; XML; formal verification; security of data; GCC pass; Linux kernel source code; automated verification tools; scalable security verification engine; software compilation process; software development life cycle; software vulnerabilities; Automata; Engines; Monitoring; Scalability; Security; Software; XML; Finite State Automata; GCC; Security Verification; Static Analysis (ID#: 15-4555)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6975645&isnumber=6975619

 

Khan, M.U.; Zulkernine, M., "A Hybrid Monitoring of Software Design-Level Security Specifications," Quality Software (QSIC), 2014 14th International Conference on, pp. 111, 116, 2-3 Oct. 2014. doi: 10.1109/QSIC.2014.14 Abstract: The behavior of the deployed software should be monitored against its security specifications to identify vulnerabilities introduced due to incorrect implementation of secure design decisions. Security specifications, including design-level ones, impose constraints on the behavior of the software. These constraints can be broadly categorized as non-time-critical and time-critical and have to be monitored in a manner that minimizes the monitoring overhead. In this paper, we suggest using a hybrid of event and time monitoring techniques to observe these constraints. The viability of the hybrid technique is assessed by comparing its effectiveness and performance with event and time monitoring techniques. The results indicate that the hybrid monitoring technique is more effective and efficient when compared separately with event or time monitoring.
Keywords: computerised monitoring; security of data; event monitoring techniques; hybrid monitoring technique; hybrid software design-level security specifications monitoring; monitoring overhead; secure design decisions; software behavior; time monitoring techniques; Authentication; Instruments; Monitoring; Software; Software algorithms; Time factors; design-level; monitoring; security specifications (ID#: 15-4556)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6958394&isnumber=6958368

 

Pedraza-Garcia, G.; Astudillo, H.; Correal, D., "A Methodological Approach to Apply Security Tactics in Software Architecture Design," Communications and Computing (COLCOM), 2014 IEEE Colombian Conference on, pp.1,8, 4-6 June 2014. doi: 10.1109/ColComCon.2014.6860432 Abstract: Architectural tactics are design decisions used to efficiently satisfy quality attributes in software architecture. Security is a complex quality property due to its strong dependence on the application domain. However, the selection of security tactics in the definition of a software architecture is guided informally and depends on the experience of the architect. This study presents a methodological approach to address and specify the quality attribute of security in architecture design by applying security tactics. The approach is illustrated with a case study of a Tsunami Early Warning System.
Keywords: security of data; software architecture; security quality attribute; security tactics; software architecture design; tsunami early warning system; Computer architecture; Decision making; Security; Software architecture; Software systems; Tsunami; Architectural tactics; secure architectures; secure software development; security tactics application; software architecture design (ID#: 15-4557)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6860432&isnumber=6860394

 

Tu, Hao; Li, Weiming; Li, Dong; Yu, Junqing, "A Scalable Flow Rule Translation Implementation for Software Defined Security," Network Operations and Management Symposium (APNOMS), 2014 16th Asia-Pacific, pp.1,5, 17-19 Sept. 2014. doi: 10.1109/APNOMS.2014.6996571 Abstract: Software defined networking brings many possibilities to network security; one of the most important security challenges it can help with is making network traffic pass through specific security devices, in other words, determining where to deploy these devices logically. However, most research focuses on high-level policy and interaction frameworks but ignores how to translate them into low-level OpenFlow rules scalably. We analyze the different actions used in common security scenarios and the resource constraints of physical switches. Based on these, we propose a rule translation implementation which can optimize resource consumption according to different actions by selecting the forwarding path dynamically.
Keywords: Bandwidth; Communication networks; Mirrors; Monitoring; Ports (Computers); Security; Switches; network security; software defined networking (ID#: 15-4558)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996571&isnumber=6996102

 

Skopik, F.; Settanni, G.; Fiedler, R.; Friedberg, I., "Semi-Synthetic Data Set Generation for Security Software Evaluation," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.156,163, 23-24 July 2014. doi: 10.1109/PST.2014.6890935 Abstract: Threats to modern ICT systems are rapidly changing these days. Organizations are not mainly concerned about virus infestation, but increasingly need to deal with targeted attacks. These kinds of attacks are specifically designed to stay below the radar of standard ICT security systems. As a consequence, vendors have begun to ship self-learning intrusion detection systems with sophisticated heuristic detection engines. While these approaches are promising to relax the serious security situation, one of the main challenges is the proper evaluation of such systems under realistic conditions during development and before roll-out. Especially the wide variety of configuration settings makes it hard to find the optimal setup for a specific infrastructure. However, extensive testing in a live environment is not only cumbersome but usually also impacts daily business. In this paper, we therefore introduce an approach of an evaluation setup that consists of virtual components, which imitate real systems and human user interactions as close as possible to produce system events, network flows and logging data of complex ICT service environments. This data is a key prerequisite for the evaluation of modern intrusion detection and prevention systems. With these generated data sets, a system's detection performance can be accurately rated and tuned for very specific settings.
Keywords: data handling; security of data; ICT security systems; ICT systems; heuristic detection engines; information and communication technology systems; intrusion detection and prevention systems; security software evaluation; self-learning intrusion detection systems; semisynthetic data set generation; virus infestation; Complexity theory; Data models; Databases; Intrusion detection; Testing; Virtual machining; anomaly detection evaluation; scalable system behavior model; synthetic data set generation (ID#: 15-4559)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890935&isnumber=6890911
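The kind of semi-synthetic data generation described above, replaying realistic event templates with simulated users, timing jitter, and known labels, can be sketched as follows. This is a rough illustration only, not the authors' testbed; the log templates, field names, and attack ratio are all invented for the example:

```python
import random

# Realistic-looking log templates; in a semi-synthetic setup these would
# be derived from real system logs (templates here are invented).
TEMPLATES = [
    "{ts} sshd[{pid}]: Accepted password for {user} from {ip}",
    "{ts} sshd[{pid}]: Failed password for {user} from {ip}",
]

def generate_events(n, attack_ratio=0.2, seed=1):
    """Produce n labeled log lines mixing benign and attack-like events,
    so a detector's output can be scored against ground truth."""
    rng = random.Random(seed)
    events = []
    for i in range(n):
        is_attack = rng.random() < attack_ratio
        line = TEMPLATES[1 if is_attack else 0].format(
            ts=1000 + i + rng.randrange(3),        # jittered timestamp
            pid=rng.randrange(1000, 9999),
            user=rng.choice(["alice", "bob", "mallory"]),
            ip="10.0.0.%d" % rng.randrange(1, 255),
        )
        events.append((line, is_attack))           # label kept for evaluation
    return events

events = generate_events(100)
print(len(events))  # 100
```

Because every generated line carries its ground-truth label, detection performance (true/false positive rates) can be computed exactly, which is the key benefit the paper attributes to generated data sets over live traffic.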

 

Younis, A.A.; Malaiya, Y.K.; Ray, I., "Using Attack Surface Entry Points and Reachability Analysis to Assess the Risk of Software Vulnerability Exploitability," High-Assurance Systems Engineering (HASE), 2014 IEEE 15th International Symposium on, pp. 1, 8, 9-11 Jan. 2014. doi: 10.1109/HASE.2014.10 Abstract: An unpatched vulnerability can lead to security breaches. When a new vulnerability is discovered, it needs to be assessed so that it can be prioritized. A major challenge in software security is the assessment of the potential risk due to vulnerability exploitability. CVSS metrics have become a de facto standard that is commonly used to assess the severity of a vulnerability. The CVSS Base Score measures severity based on exploitability and impact measures. CVSS exploitability is measured based on three metrics: Access Vector, Authentication, and Access Complexity. However, CVSS exploitability measures assign subjective numbers based on the views of experts. Two of its factors, Access Vector and Authentication, are the same for almost all vulnerabilities. CVSS does not specify how the third factor, Access Complexity, is measured, and hence we do not know if it considers software properties as a factor. In this paper, we propose an approach that assesses the risk of vulnerability exploitability based on two software properties: attack surface entry points and reachability analysis. A vulnerability is reachable if it is located in one of the entry points or in a function that is called either directly or indirectly by the entry points. The likelihood of an entry point being used in an attack can be assessed by using the damage potential-effort ratio in the attack surface metric and the presence of system calls deemed dangerous.
The results show that the proposed approach, which uses more detailed information, can yield a risk assessment that can be different from the CVSS Base Score.
Keywords: reachability analysis; risk management; security of data; software metrics; Apache HTTP server 1.3.0; CVSS base score; CVSS metrics; access complexity; access vector; attack surface entry point; attack surface metric; authentication; damage potential-effort ratio; reachability analysis; risk assessment; security breach; severity measurement; software security; software vulnerability exploitability; Authentication; Complexity theory; Measurement; Servers; Software; Vectors; Attack Surface; CVSS Metrics; Measurement; Risk assessment; Software Security Metrics; Software Vulnerability (ID#: 15-4560)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6754581&isnumber=6754569
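The damage potential-effort ratio mentioned in the abstract can be illustrated with a small sketch. This is not the authors' tool: the numeric privilege/access levels, the dangerous-call list, and the weighting factor are all assumptions chosen for illustration of the general attack surface metric idea:

```python
# Hypothetical numeric levels: privilege gained if exploited (damage
# potential) and access rights an attacker needs (effort).
DAMAGE = {"root": 5, "user": 3, "unauthenticated": 1}
EFFORT = {"anonymous": 1, "authenticated": 3, "admin": 5}

# Functions treated as dangerous sinks (an assumed, illustrative list).
DANGEROUS_CALLS = {"system", "exec", "popen", "strcpy"}

def entry_point_risk(privilege, access_rights, called_functions):
    """Damage potential-effort ratio for one entry point, weighted up
    if the entry point reaches a dangerous system call."""
    ratio = DAMAGE[privilege] / EFFORT[access_rights]
    if DANGEROUS_CALLS & set(called_functions):
        ratio *= 2  # assumed weighting for dangerous-call reachability
    return ratio

# An anonymous-facing handler that reaches system() scores far higher
# than an admin-only function that calls nothing dangerous.
print(entry_point_risk("root", "anonymous", ["parse", "system"]))  # 10.0
print(entry_point_risk("user", "admin", ["log"]))                  # 0.6
```

Ranking entry points by such a ratio is what lets this style of assessment diverge from the CVSS Base Score, since it uses per-program structural information rather than expert-assigned constants.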

 

Younis, A.A.; Malaiya, Y.K., "Using Software Structure to Predict Vulnerability Exploitation Potential," Software Security and Reliability-Companion (SERE-C), 2014 IEEE Eighth International Conference on, pp. 13, 18, June 30, 2014-July 2, 2014. doi: 10.1109/SERE-C.2014.17 Abstract: Most of the attacks on computer systems are due to the presence of vulnerabilities in software. Recent trends show that the number of newly discovered vulnerabilities continues to be significant. Studies have also shown that the time gap between the vulnerability public disclosure and the release of an automated exploit is getting smaller. Therefore, assessing the exploitability risk of vulnerabilities is critical, as it helps decision-makers prioritize among vulnerabilities, allocate resources, and choose between alternatives. Several methods have recently been proposed in the literature to deal with this challenge. However, these methods are either subjective, require human involvement in assessing exploitability, or do not scale. In this research, our aim is to first identify the vulnerability exploitation risk problem. Then, we introduce a novel vulnerability exploitability metric based on software structure properties, viz. attack entry points, vulnerability location, presence of dangerous system calls, and reachability. Based on our preliminary results, reachability and the presence of dangerous system calls appear to be good indicators of exploitability. Next, we propose using the suggested metric as a feature to construct a model, using machine learning techniques, for automatically predicting the risk of vulnerability exploitation. To build a vulnerability exploitation model, we propose using Support Vector Machines (SVMs). Once the predictor is built, given an unseen vulnerable function and its exploitability features, the model can predict whether the given function is exploitable or not.
Keywords: decision making; learning (artificial intelligence); reachability analysis; software metrics; support vector machines; SVM; attack entry points; computer systems; decision makers; machine learning; reachability; software structure; support vector machines; vulnerabilities exploitability risk; vulnerability exploitability metric; vulnerability exploitation model; vulnerability exploitation potential; vulnerability exploitation risk problem; vulnerability location; vulnerability public disclosure; Feature extraction; Predictive models; Security; Software; Software measurement; Support vector machines; Attack Surface; Machine Learning; Measurement; Risk Assessment; Software Security Metrics; Software Vulnerability (ID#: 15-4561)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901635&isnumber=6901618
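The prediction idea above, classifying functions as exploitable or not from structural features, can be sketched with a toy linear classifier. A simple perceptron stands in here for the SVM so the example stays dependency-free; the feature encoding and training labels are invented for illustration, not the paper's data:

```python
def train_perceptron(samples, labels, epochs=100, lr=0.1):
    """Tiny perceptron: learns a linear decision boundary over binary
    structural features (stand-in for the proposed SVM model)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Features per function: [is_entry_point, reachable_from_entry, dangerous_call]
X = [[1, 1, 1], [0, 1, 1], [0, 0, 1], [0, 0, 0], [1, 1, 0], [0, 1, 0]]
y = [1, 1, 0, 0, 1, 0]  # 1 = exploitable (hypothetical labels)

w, b = train_perceptron(X, y)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print(predict([1, 1, 1]))  # reachable entry point with a dangerous call -> 1
```

In the paper's setting the SVM would be trained on features extracted from real code (entry points, reachability, dangerous system calls), but the pipeline shape, features in, exploitable/not-exploitable out, is the same.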

 

Xun Zhan; Tao Zheng; Shixiang Gao, "Defending ROP Attacks Using Basic Block Level Randomization," Software Security and Reliability-Companion (SERE-C), 2014 IEEE Eighth International Conference on, pp.107,112, June 30 2014-July 2 2014. doi: 10.1109/SERE-C.2014.28 Abstract: Code reuse attacks such as return-oriented programming (ROP), one of the most powerful threats to software systems, rely on the absolute addresses of instructions. Therefore, address space randomization should be an effective defense. However, current randomization techniques either lack sufficient entropy or incur significant time or space overhead. In this paper, we present a novel fine-grained randomization technique at the basic block level. In contrast to previous work, our technique properly handles critical technical challenges, including indirect branches, callbacks and position independent code, at minimal cost. We implement an efficient prototype randomization system which supports the Linux ELF file format and the x86 architecture. Our evaluation demonstrates that it successfully defends against ROP attacks with small performance overhead (4% on average).
Keywords: Linux; security of data; software architecture; Linux ELF file format; address space randomization; basic block level randomization; critical technical challenge; defending ROP attacks; code reuse attacks; fine-grained randomization technique; performance overhead; position independent codes; prototype randomization system; randomization techniques; return-oriented programming; software system; x86 architecture; Binary codes; Engines; Entropy; Libraries; Programming; Security; Software; ASLR; randomization; return-oriented programming; software security (ID#: 15-4562)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901647&isnumber=6901618

 

Cooper, V.N.; Shahriar, H.; Haddad, H.M., "A Survey of Android Malware Characteristics and Mitigation Techniques," Information Technology: New Generations (ITNG), 2014 11th International Conference on, pp.327, 332, 7-9 April 2014. doi: 10.1109/ITNG.2014.71 Abstract: As mobile applications are being developed at a faster pace, their security is often neglected. A solid understanding of the characteristics of malware is the first step to preventing many unwanted consequences. This paper provides an overview of popular security threats posed by Android malware. In particular, we focus on the characteristics commonly found in malware applications and the code-level features that can enable detection techniques. We also discuss some common defense techniques to mitigate the impact of malware applications.
Keywords: Android (operating system); invasive software; mobile computing; smart phones; Android malware characteristics; code level features; defense technique; detection technique; malware mitigation technique; mobile applications; security threats; Kernel; Libraries; Malware; Mobile communication; Smart phones; Social network services; Android Malware; Mobile application; Mobile security; Software Security (ID#: 15-4563)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822218&isnumber=6822158

 

Mead, N.R.; Morales, J.A., "Using Malware Analysis to Improve Security Requirements on Future Systems," Evolving Security and Privacy Requirements Engineering (ESPRE), 2014 IEEE 1st Workshop on, pp.37, 41, 25-25 Aug. 2014. doi: 10.1109/ESPRE.2014.6890526 Abstract: In this position paper, we propose to enhance current software development lifecycle models by including use cases based on previous cyberattacks and their associated malware, and we pose an open research question: are specific types of systems prone to specific classes of malware exploits? If this is the case, developers can create future systems that are more secure, from inception, by including use cases that address previous attacks.
Keywords: invasive software; software engineering; cyberattacks; malware analysis; malware exploits; security requirement improvement; software development lifecycle models; use cases; Authentication; Computer crime; Malware; Software; Software engineering; Standards; SDLC; cyberattacks; malware; software security (ID#: 15-4564)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890526&isnumber=6890516

 

Busby Earle, C.C.R.; France, R.B.; Ray, I., "Analysing Requirements to Detect Latent Security Vulnerabilities," Software Security and Reliability-Companion (SERE-C), 2014 IEEE Eighth International Conference on, pp.168,175, June 30 2014-July 2 2014. doi: 10.1109/SERE-C.2014.35 Abstract: To fully embrace the challenge of securing software, security concerns must be considered at the earliest stages of software development. Studies have shown that this reduces the time, cost and effort required to integrate security features into software during development. In this paper we describe a technique for uncovering potential vulnerabilities through an analysis of software requirements, and illustrate its use with a small, motivating example.
Keywords: security of data; software engineering; latent security vulnerabilities detection; security features; software development; software requirements; software security; Context; Educational institutions; Natural languages; Object recognition; Ontologies; Security; Software; Loophole Analysis; Requirements; Security; Vulnerabilities (ID#: 15-4565)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901654&isnumber=6901618

 

Kundi, M.; Chitchyan, R., "Position on Metrics for Security in Requirements Engineering," Requirements Engineering and Testing (RET), 2014 IEEE 1st International Workshop on, pp.29, 31, 26-26 Aug. 2014. doi: 10.1109/RET.2014.6908676 Abstract: A number of well-established software quality metrics are in use in code testing. It is our position that, for many code-testing security metrics, equivalent requirements-level metrics should be defined. Such requirements-level security metrics should be used in evaluating the quality of software security early on, in order to ensure that the resultant software system possesses the required security characteristics and quality.
Keywords: formal specification; program testing; security of data; software metrics; software quality; code-testing metrics; requirements engineering; requirements-level security metrics; software quality metrics; software security; Conferences; Security; Software measurement; Software systems; Testing (ID#: 15-4566)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6908676&isnumber=6908666



Software Security, 2014 (IEEE), Part 2

 

 

This set of bibliographical references about software security research papers is from conference publications posted in the IEEE Digital Library. More than 1100 conference papers were presented on this topic in 2014. The set presented here represents those likely to be of most interest to the Science of Security community. They address issues related to measurement, scalability, reliability, and other hard problems. ACM papers are presented in a separate series.


 

Jun Cai; Shangfei Yang; Jinquan Men; Jun He, "Automatic Software Vulnerability Detection Based on Guided Deep Fuzzing," Software Engineering and Service Science (ICSESS), 2014 5th IEEE International Conference on , vol., no., pp.231,234, 27-29 June 2014. doi:10.1109/ICSESS.2014.6933551 Abstract: Software security has become a very important part of information security in recent years. Fuzzing has proven successful in finding software vulnerabilities, which are one major cause of information security incidents. However, the efficiency of traditional fuzz testing tools is usually very poor due to the blindness of test generation. In this paper, we present Sword, an automatic fuzzing system for software vulnerability detection, which combines fuzzing with symbolic execution and taint analysis techniques to tackle the above problem. Sword first uses symbolic execution to collect program execution paths and their corresponding constraints, then uses taint analysis to check these paths; the most dangerous paths, which most likely lead to vulnerabilities, will be further deep fuzzed. Thus, with the guidance of symbolic execution and taint analysis, Sword generates test cases most likely to trigger potential vulnerabilities lying deep in applications.
Keywords: program diagnostics; program testing; security of data; Sword; automatic fuzzing system; automatic software vulnerability detection; guided deep fuzzing; information security; software security; symbolic execution; taint analysis technique; Databases; Engines; Information security; Monitoring; Software; Software testing; fuzzing; software vulnerability detection; symbolic execution; taint analysis (ID#: 15-4567)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933551&isnumber=6933501
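The guidance idea above, spending mutation effort on inputs whose execution path reaches dangerous operations, can be shown with a toy fuzzing loop. This is not the Sword system: the target program, its "path" feedback (a stand-in for taint/symbolic analysis), and the crash condition are all invented for illustration:

```python
import random

def target(data):
    """Toy program under test. The returned path list stands in for
    taint/coverage feedback identifying dangerous operations."""
    path = []
    if data.startswith(b"MAGIC"):
        path.append("header_ok")
        if len(data) > 16:
            path.append("dangerous_sink")
            if b"\xff" in data:
                raise RuntimeError("crash")  # the bug we hope to trigger
    return path

def guided_fuzz(seed, rounds=20000):
    """Mutate inputs, keeping only those that reach the dangerous path
    as parents for further ('deeper') fuzzing."""
    rng = random.Random(0)
    queue = [seed]
    for _ in range(rounds):
        parent = rng.choice(queue)
        child = bytearray(parent)
        pos = rng.randrange(len(child) + 1)
        child[pos:pos] = bytes([rng.randrange(256)])  # insert a random byte
        child = bytes(child)
        try:
            path = target(child)
        except RuntimeError:
            return child  # crashing input found
        if "dangerous_sink" in path:
            queue.append(child)  # deepen fuzzing along the risky path
    return None

crash = guided_fuzz(b"MAGIC" + b"A" * 20)
print(crash is not None)
```

Mutants that break the header never reach the sink and are discarded, so the search concentrates on the dangerous path, which is the essence of guided deep fuzzing as opposed to blind test generation.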

 

Kulenovic, M.; Donko, D., "A Survey of Static Code Analysis Methods for Security Vulnerabilities Detection," Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2014 37th International Convention on, pp.1381,1386, 26-30 May 2014. doi: 10.1109/MIPRO.2014.6859783 Abstract: Software security is becoming highly important for the universal acceptance of applications for many kinds of transactions. Automated code analyzers can be utilized to detect security vulnerabilities during the development phase. This paper aims to provide a survey of static code analysis and how it can be used to detect security vulnerabilities. The most recent findings and publications are summarized and presented in this paper. The paper provides an overview of the gains, flaws and algorithms of static code analyzers. It can be considered a stepping stone for further research in this domain.
Keywords: program diagnostics; security of data; software engineering; development phase; software security vulnerabilities detection; static code analysis methods; Access control; Analytical models; Java; Privacy; Software; security; static code analysis; survey; vulnerability (ID#: 15-4568)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859783&isnumber=6859515
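A minimal flavor of what such static analyzers do can be shown with Python's own `ast` module: walk the syntax tree of a program without running it and flag calls to security-sensitive sinks. The sink list here is an assumption for the example; real analyzers add data-flow and taint tracking on top of this kind of traversal:

```python
import ast

# Functions commonly treated as dangerous sinks (illustrative list).
SINKS = {"eval", "exec", "system"}

def find_sinks(source):
    """Statically report (name, line) for every call to a known sink."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            # Handle both bare calls (eval(...)) and attribute
            # calls (os.system(...)).
            name = getattr(node.func, "id", getattr(node.func, "attr", None))
            if name in SINKS:
                findings.append((name, node.lineno))
    return sorted(findings, key=lambda f: f[1])

code = """
import os
def handler(data):
    os.system(data)   # command injection risk
    return eval(data) # code injection risk
"""
print(find_sinks(code))  # [('system', 4), ('eval', 5)]
```

Since the analysis never executes `handler`, it reports the risky calls even for unreachable or rarely exercised code, which is exactly the strength (and the false-positive source) of static analysis discussed in surveys like the one above.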

 

Sparrow, R.D.; Adekunle, A.A.; Berry, R.J.; Farnish, R.J., "Simulating and Modelling The Impact of Security Constructs on Latency for Open Loop Control," Computer Science and Electronic Engineering Conference (CEEC), 2014 6th, pp.85,90, 25-26 Sept. 2014. doi:10.1109/CEEC.2014.6958560 Abstract: Open loop control has commonly been used to conduct tasks for a range of Industrial Control Systems (ICS). ICS, however, are susceptible to security exploits. A possible countermeasure to the active and passive attacks on ICS is to apply cryptography, thwarting the attacker by providing confidentiality and integrity for data transmitted between nodes on the ICS network; however, a drawback of applying cryptographic algorithms to ICS is the additional communication latency that is generated. This paper presents a mathematical model suitable for predicting the latency and impact of software security constructs on ICS communications. The model has been tested and validated against a software-simulated open loop control scenario; the results obtained indicate, on average, a 1.3 percent difference between the model and the simulation.
Keywords: control engineering computing; cryptography; data integrity; open loop systems; ICS communication latency; active attack; cryptographic algorithm; data confidentiality; data integrity; industrial control system; open loop control; passive attack; software security; Crystals; Frequency conversion; Mathematical model; Microcontrollers; Real-time systems; Security; Time-frequency analysis; Impact modelling; Industrial Control Systems; Real-Time communication (ID#: 15-4569)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6958560&isnumber=6958541
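The shape of such a latency model, transmission time plus a per-block cryptographic processing cost, can be sketched as below. This is not the paper's model; the block size, per-block cost, link rate, and framing overhead are invented constants for illustration only:

```python
def predicted_latency_ms(payload_bytes, block_size=16,
                         per_block_crypto_us=12.0,
                         link_rate_kbps=250.0, overhead_bytes=8):
    """Predict end-to-end latency as wire time plus cipher time.
    All constants are hypothetical; a real model would be calibrated
    against measurements of the actual security construct."""
    blocks = -(-payload_bytes // block_size)      # ceiling division
    crypto_ms = blocks * per_block_crypto_us / 1000.0
    wire_bits = (payload_bytes + overhead_bytes) * 8
    tx_ms = wire_bits / link_rate_kbps            # kbit/s == bits per ms
    return tx_ms + crypto_ms

# A 64-byte command frame: 4 cipher blocks plus 576 bits on the wire.
print(round(predicted_latency_ms(64), 3))  # 2.352
```

The value of even a crude model like this is the one the abstract claims: it lets a designer estimate whether adding a security construct keeps control traffic within its latency budget before building the system.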

 

Priya, R.L.; Lifna, C.S.; Jagli, D.; Joy, A., "Rational Unified Treatment for Web application Vulnerability Assessment," Circuits, Systems, Communication and Information Technology Applications (CSCITA), 2014 International Conference on, pp.336,340, 4-5 April 2014. doi:10.1109/CSCITA.2014.6839283 Abstract: Web applications are increasingly used to offer e-services such as online banking, online searching, and social networking over the web. With the growth of web applications in the information society, Web application software security becomes more and more important. With this advancement, attacks on web applications have also multiplied. The root causes behind these vulnerabilities are lack of security awareness, design flaws and implementation bugs. Detecting and resolving vulnerabilities is an effective way to enhance Web security. Many vulnerability analysis techniques for web-based applications observe and report on different types of vulnerabilities. However, no particular technique provides a generic, technology-independent handling of Web-based vulnerabilities. In this paper, a new approach for Web application Vulnerability Assessment (WVA) is proposed and implemented, and its results analysed, based on the Rational Unified Process (RUP) framework, hereafter referred to as the Rational Unified WVA.
Keywords: Internet; security of data; RUP framework; WVA; Web application software security; Web application vulnerability assessment;  design flaws; e-services; information society; online banking; online searching; rational unified process framework; rational unified treatment; security awareness; social networking; vulnerability analysis techniques; DH-HEMTs; Educational institutions; Information technology; Organizations; Security; Web servers; Rational Unified Process; The Open Web Application Security Project; Web application Vulnerability Assessment (ID#: 15-4570)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6839283&isnumber=6839219

 

Agosta, G.; Barenghi, A.; Pelosi, G.; Scandale, M., "A Multiple Equivalent Execution Trace Approach to Secure Cryptographic Embedded Software," Design Automation Conference (DAC), 2014 51st ACM/EDAC/IEEE, pp. 1-6, 1-5 June 2014. doi:10.1145/2593069.2593073 Abstract: We propose an efficient and effective method to secure software implementations of cryptographic primitives on low-end embedded systems, against passive side-channel attacks relying on the observation of power consumption or electro-magnetic emissions. The proposed approach exploits a modified LLVM compiler toolchain to automatically generate a secure binary characterized by a randomized execution flow. Also, we provide a new method to refresh the random values employed in the share splitting approaches to lookup table protection, addressing a currently open issue. We improve the current state-of-the-art in dynamic executable code countermeasures removing the requirement of a writeable code segment, and reducing the countermeasure overhead.
Keywords: cryptography; embedded systems; program compilers; table lookup; LLVM compiler toolchain; countermeasure overhead reduction; cryptographic embedded software security; cryptographic primitives; dynamic executable code countermeasures; electromagnetic emissions; lookup table protection; low-end embedded systems; multiple equivalent execution trace approach; passive side-channel attacks; power consumption observation; random values; randomized execution flow; share splitting approach; writeable code segment; Ciphers; Optimization; Power demand; Registers; Software; Power Analysis Attacks; Software Countermeasures; Static Analysis (ID#: 15-4571)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881537&isnumber=6881325

 

Gupta, M.K.; Govil, M.C.; Singh, G., "Static Analysis Approaches to Detect SQL Injection and Cross Site Scripting Vulnerabilities in Web Applications: A survey," Recent Advances and Innovations in Engineering (ICRAIE), 2014, pp. 1-5, 9-11 May 2014. doi:10.1109/ICRAIE.2014.6909173 Abstract: Dependence on web applications has increased very rapidly in recent times for social communication, health care, financial transactions, and many other purposes. Unfortunately, the presence of security weaknesses in web applications allows malicious users to exploit various security vulnerabilities and become the reason for their failure. Currently, SQL Injection (SQLI) and Cross-Site Scripting (XSS) are the most dangerous security vulnerabilities exploited in popular web applications such as eBay, Google, Facebook, and Twitter. Research on defensive programming, vulnerability detection, and attack prevention techniques has been quite intensive in the past decade. Defensive programming is a set of coding guidelines for developing secure applications, but developers mostly do not follow security guidelines and repeat the same types of programming mistakes in their code. Attack prevention techniques protect applications from attack during their execution in the actual environment. Accurate detection of SQLI and XSS vulnerabilities in the coding phase of the software development life cycle remains difficult. This paper proposes a classification of software security approaches used to develop secure software in the various phases of the software development life cycle. It also presents a survey of static analysis based approaches to detect SQL injection and cross-site scripting vulnerabilities in the source code of web applications. The aim of these approaches is to identify weaknesses in source code before their exploitation in the actual environment. This paper should help researchers note future directions for securing legacy web applications in the early phases of the software development life cycle.
Keywords: Internet; SQL; program diagnostics; security of data; software maintenance; software reliability; source code (software); SQL injection; SQLI; Web applications; XSS; attack prevention; cross site scripting vulnerabilities; defensive programming; financial transaction; health problem; legacy Web applications; malicious users; programming mistakes; security vulnerabilities; security weaknesses; social communications; software development life cycle; source code; static analysis; vulnerability detection; Analytical models; Guidelines; Manuals; Programming; Servers; Software; Testing; SQL injection; cross site scripting; static analysis; vulnerabilities; web application (ID#: 15-4572)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6909173&isnumber=6909103
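To make the surveyed idea concrete, here is a deliberately simplified, pattern-based illustration of the kind of source-level check static SQLI analyses perform: flag lines that splice variables into a quoted SQL string via concatenation. Real tools use taint analysis over program data flow; this regex is only a toy stand-in, and the example source lines are invented.

```python
import re

# Heuristic: a quoted string containing a SQL keyword, immediately followed
# by a "+" concatenation, suggests user input spliced into a query.
SQLI_SUSPECT = re.compile(
    r'["\'].*(SELECT|INSERT|UPDATE|DELETE).*["\']\s*\+', re.IGNORECASE)

def flag_sqli_suspects(source_lines):
    """Return (line_number, line) pairs that look like concatenated SQL."""
    return [(n, line) for n, line in enumerate(source_lines, 1)
            if SQLI_SUSPECT.search(line)]

suspects = flag_sqli_suspects([
    'q = "SELECT * FROM users WHERE id=" + user_id',               # vulnerable style
    'cur.execute("SELECT * FROM users WHERE id=%s", (user_id,))',  # parameterized
])
print([n for n, _ in suspects])  # → [1]
```

The parameterized query on the second line is not flagged, mirroring the defensive-programming guideline the survey discusses.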

 

Mell, P.; Harang, R.E., "Using Network Tainting to Bound the Scope of Network Ingress Attacks," Software Security and Reliability (SERE), 2014 Eighth International Conference on, pp. 206-215, June 30-July 2, 2014. doi:10.1109/SERE.2014.34 Abstract: This research describes a novel security metric, network taint, which is related to software taint analysis. We use it here to bound the possible malicious influence of a known compromised node through monitoring and evaluating network flows. The result is a dynamically changing defense-in-depth map that shows threat level indicators gleaned from monotonically decreasing threat chains. We augment this analysis with concepts from the complex networks research area in forming dynamically changing security perimeters and measuring the cardinality of the set of threatened nodes within them. In providing this, we hope to advance network incident response activities by providing a rapid automated initial triage service that can guide and prioritize investigative activities.
Keywords: network theory (graphs);security of data; defense-in-depth map; network flow evaluation; network flow monitoring; network incident response activities; network ingress attacks; network tainting metric; security metric; security perimeters; software taint analysis; threat level indicators; Algorithm design and analysis; Complex networks; Digital signal processing; Measurement; Monitoring; Security; Software; complex networks; network tainting; scale-free; security (ID#: 15-4573)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895431&isnumber=6895396
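A minimal sketch of the core network-tainting idea above: from a known-compromised node, propagate a threat level that decreases by one per hop along observed flows, bounding the set of possibly affected nodes. The graph, starting threat level, and linear decay rule are illustrative assumptions, not the paper's exact metric.

```python
from collections import deque

def taint_scope(flows, compromised, initial_threat=3):
    """flows: iterable of (src, dst) edges; returns {node: threat_level}."""
    adj = {}
    for src, dst in flows:
        adj.setdefault(src, []).append(dst)
    threat = {compromised: initial_threat}
    queue = deque([compromised])
    while queue:
        node = queue.popleft()
        level = threat[node] - 1        # monotonically decreasing threat chain
        if level <= 0:
            continue
        for nxt in adj.get(node, []):
            if threat.get(nxt, -1) < level:
                threat[nxt] = level
                queue.append(nxt)
    return threat

flows = [("A", "B"), ("B", "C"), ("C", "D"), ("B", "E")]
print(taint_scope(flows, "A"))  # D lies beyond the threat horizon
```

Nodes whose threat level would reach zero are excluded, giving the bounded "scope" that triage can prioritize.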

 

Siyuan Jiang; Santelices, R.; Haipeng Cai; Grechanik, M., "How Accurate is Dynamic Program Slicing? An Empirical Approach to Compute Accuracy Bounds," Software Security and Reliability-Companion (SERE-C), 2014 IEEE Eighth International Conference on, pp. 3-4, June 30-July 2, 2014. doi:10.1109/SERE-C.2014.14 Abstract: Dynamic program slicing attempts to find runtime dependencies among statements to support security, reliability, and quality tasks such as information-flow analysis, testing, and debugging. However, it is not known how accurately dynamic slices identify statements that really affect each other. We propose a new approach to estimate the accuracy of dynamic slices. We use this approach to obtain bounds on the accuracy of multiple dynamic slices in Java software. Early results suggest that dynamic slices suffer from some imprecision and, more critically, can have a low recall whose upper bound we estimate to be 60% on average.
Keywords: Java; data flow analysis; program debugging; program slicing; program testing; Java software; dynamic program slicing; information-flow analysis; quality tasks; reliability; runtime dependencies; security; software debugging; software testing; Accuracy; Reliability; Runtime; Security; Semantics; Software; Upper bound; dynamic slicing; program slicing; semantic dependence; sensitivity analysis (ID#: 15-4574)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901632&isnumber=6901618

 

Riaz, M.; King, J.; Slankas, J.; Williams, L., "Hidden in Plain Sight: Automatically Identifying Security Requirements from Natural Language Artifacts," Requirements Engineering Conference (RE), 2014 IEEE 22nd International, pp. 183-192, 25-29 Aug. 2014. doi:10.1109/RE.2014.6912260 Abstract: Natural language artifacts, such as requirements specifications, often explicitly state the security requirements for software systems. However, these artifacts may also imply additional security requirements that developers may overlook but should consider to strengthen the overall security of the system. The goal of this research is to aid requirements engineers in producing a more comprehensive and classified set of security requirements by (1) automatically identifying security-relevant sentences in natural language requirements artifacts, and (2) providing context-specific security requirements templates to help translate the security-relevant sentences into functional security requirements. Using machine learning techniques, we have developed a tool-assisted process that takes as input a set of natural language artifacts. Our process automatically identifies security-relevant sentences in the artifacts and classifies them according to the security objectives, either explicitly stated or implied by the sentences. We classified 10,963 sentences in six different documents from healthcare domain and extracted corresponding security objectives. Our manual analysis showed that 46% of the sentences were security-relevant. Of these, 28% explicitly mention security while 72% of the sentences are functional requirements with security implications. Using our tool, we correctly predict and classify 82% of the security objectives for all the sentences (precision). We identify 79% of all security objectives implied by the sentences within the documents (recall). Based on our analysis, we develop context-specific templates that can be instantiated into a set of functional security requirements by filling in key information from security-relevant sentences.
Keywords: formal specification; learning (artificial intelligence); natural language processing; security of data; context-specific security requirements templates; context-specific templates; functional requirements; functional security requirements; healthcare domain; machine learning techniques; natural language artifacts; natural language requirements artifacts; requirements engineer; requirements specifications; security objectives; security-relevant sentences; software systems; tool-assisted process; Availability; Medical services; Natural languages; Object recognition; Security; Software systems; Text categorization; Security; access control; auditing; constraints; natural language parsing; objectives; requirements; templates; text classification (ID#: 15-4575)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912260&isnumber=6912234
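The 82% precision and 79% recall figures above come from comparing predicted security objectives against manual annotations; this sketch shows the underlying computation on toy data (the sentence IDs are invented).

```python
def precision_recall(predicted, actual):
    """predicted, actual: sets of items; returns (precision, recall)."""
    tp = len(predicted & actual)                      # true positives
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

predicted = {"s1", "s2", "s3", "s5"}      # sentences the tool flags
actual = {"s1", "s2", "s4", "s5", "s6"}   # annotator ground truth
p, r = precision_recall(predicted, actual)
print(p, r)  # 0.75 0.6
```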

 

Hesse, T.-M.; Gartner, S.; Roehm, T.; Paech, B.; Schneider, K.; Bruegge, B., "Semiautomatic Security Requirements Engineering and Evolution Using Decision Documentation, Heuristics, and User Monitoring," Evolving Security and Privacy Requirements Engineering (ESPRE), 2014 IEEE 1st Workshop on, pp. 1-6, 25 Aug. 2014. doi:10.1109/ESPRE.2014.6890520 Abstract: Security issues can have a significant negative impact on the business or reputation of an organization. In most cases they are not identified in requirements and are not continuously monitored during software evolution. Therefore, the inability of a system to conform to regulations or its endangerment by new vulnerabilities is not recognized. In consequence, decisions related to security might not be taken at all or become obsolete quickly. But to evaluate efficiently whether an issue is already addressed appropriately, software engineers need explicit decision documentation. Often, such documentation is not performed due to high overhead. To cope with this problem, we propose to document decisions made to address security requirements. To lower the manual effort, information from heuristic analysis and end user monitoring is incorporated. The heuristic assessment method is used to identify security issues in given requirements automatically. This helps to uncover security decisions needed to mitigate those issues. We describe how the corresponding security knowledge for each issue can be incorporated into the decision documentation semiautomatically. In addition, violations of security requirements at runtime are monitored. We show how decisions related to those security requirements can be identified through the documentation and updated manually. Overall, our approach improves the quality and completeness of security decision documentation to support the engineering and evolution of security requirements.
Keywords: formal specification; security of data; system documentation; end user monitoring; heuristic analysis; heuristic assessment method; organization; security decision documentation; security decisions; security issues; security knowledge; semiautomatic security requirements engineering; software engineers; software evolution; vulnerability; Context; Documentation; IEEE Potentials; Knowledge engineering; Monitoring; Security; Software; Security requirements engineering; decision documentation; decision knowledge; heuristic analysis; knowledge carrying software; software evolution; user monitoring (ID#: 15-4576)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890520&isnumber=6890516

 

Uzunov, A.V.; Falkner, K.; Fernandez, E.B., "A Comprehensive Pattern-Driven Security Methodology for Distributed Systems," Software Engineering Conference (ASWEC), 2014 23rd Australian, pp. 142-151, 7-10 April 2014. doi:10.1109/ASWEC.2014.14 Abstract: Incorporating security features is one of the most important and challenging tasks in designing distributed systems. Over the last decade, researchers and practitioners have come to recognize that the incorporation of security features should proceed by means of a systematic approach, combining principles from both software and security engineering. Such systematic approaches, particularly those implying some sort of process aligned with the development life-cycle, are termed security methodologies. One of the most important classes of such methodologies is based on the use of security patterns. While the literature presents a number of pattern-driven security methodologies, none of them are designed specifically for general distributed systems. Going further, there are also currently no methodologies with mixed specific applicability, e.g. for both general and peer-to-peer distributed systems. In this paper we aim to fill these gaps by presenting a comprehensive pattern-driven security methodology specifically designed for general distributed systems, which is also capable of taking into account the specifics of peer-to-peer systems. Our methodology takes the principle of encapsulation several steps further, by employing patterns not only for the incorporation of security features (via security solution frames), but also for the modeling of threats, and even as part of its process. We illustrate and evaluate the presented methodology via a realistic example -- the development of a distributed system for file sharing and collaborative editing. In both the presentation of the methodology and example our focus is on the early life-cycle phases (analysis and design).
Keywords: peer-to-peer computing; security of data; software engineering; collaborative editing; comprehensive pattern-driven security methodology; file sharing; life-cycle phase development; peer-to-peer distributed systems; security engineering; security patterns; software engineering; systematic approach; Analytical models; Computer architecture; Context; Object oriented modeling; Security; Software; Taxonomy; distributed systems security; secure software engineering; security methodologies; security patterns; security solution frames; threat patterns (ID#: 15-4577)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824119&isnumber=6824087

 

Azab, M., "Multidimensional Diversity Employment for Software Behavior Encryption," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp. 1-5, March 30-April 2, 2014. doi:10.1109/NTMS.2014.6814033 Abstract: Modern cyber systems and their integration with infrastructure have a profound effect on productivity and quality of life. Their involvement in our daily lives elevates the need for means to ensure their resilience against attacks and failure. One major threat is software monoculture. Recent research has demonstrated the danger of software monoculture and presented diversity as a way to reduce the attack surface. In this paper, we propose ChameleonSoft, a multidimensional software diversity employment that, in effect, induces spatiotemporal software behavior encryption and a moving target defense. ChameleonSoft introduces a loosely coupled, online programmable software-execution foundation separating logic, state, and physical resources. The elastic construction of the foundation enables ChameleonSoft to define running software as a set of behaviorally-mutated, functionally-equivalent code variants. ChameleonSoft intelligently shuffles these variants at runtime while changing their physical location, inducing enough untraceable confusion and diffusion to encrypt the execution behavior of the running software. ChameleonSoft is also equipped with an autonomic failure recovery mechanism for enhanced resilience. In order to test the applicability of the proposed approach, we present a prototype of the ChameleonSoft Behavior Encryption (CBE) and recovery mechanisms. Further, using analysis and simulation, we study the performance and security aspects of the proposed system. This study aims to assess the provisioned level of security by measuring the avalanche effect percentage and the induced confusion and diffusion levels to evaluate the strength of the CBE mechanism. Further, we compute the computational cost of security provisioning and of enhancing system resilience.
Keywords: computational complexity; cryptography; multidimensional systems; software fault tolerance; system recovery; CBE mechanism; ChameleonSoft Behavior Encryption; ChameleonSoft recovery mechanisms; autonomic failure recovery mechanism; avalanche effect percentage; behaviorally-mutated functionally-equivalent code variants; computational cost; confusion levels; diffusion levels; moving target defense; multidimensional software diversity employment; online programmable software-execution foundation separating logic; security level; security provisioning; software monoculture; spatiotemporal software behavior encryption; system resilience; Employment; Encryption; Resilience; Runtime; Software; Spatiotemporal phenomena (ID#: 15-4578)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814033&isnumber=6813963
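The paper scores CBE strength by the avalanche effect percentage: the share of output bits that flip when a single input bit changes. The sketch below measures that percentage, with SHA-256 standing in for the CBE transformation (which is not publicly available); a strong primitive should hover near 50%.

```python
import hashlib

def avalanche_percentage(data: bytes, bit_index: int) -> float:
    """Percentage of output bits that differ after flipping one input bit."""
    flipped = bytearray(data)
    flipped[bit_index // 8] ^= 1 << (bit_index % 8)   # flip a single input bit
    h1 = hashlib.sha256(data).digest()
    h2 = hashlib.sha256(bytes(flipped)).digest()
    diff_bits = sum(bin(a ^ b).count("1") for a, b in zip(h1, h2))
    return 100.0 * diff_bits / (len(h1) * 8)

print(round(avalanche_percentage(b"example control flow trace", 5), 1))
```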

 

Bansal, S.K.; Jolly, A., "An Encyclopedic Approach for Realization of Security Activities with Agile Methodologies," Confluence The Next Generation Information Technology Summit (Confluence), 2014 5th International Conference, pp. 767-772, 25-26 Sept. 2014. doi:10.1109/CONFLUENCE.2014.6949242 Abstract: Agility is a sought-after quality during the software development phase, as it boosts adaptive planning and incremental, evolutionary development, along with many other lightweight features. Security is one of the considerable concerns in today's highly agile software development industry. The emphasis is on producing protected software, so as to lessen the amount of risk and damage caused to the software. Developing protected software with highly agile characteristics is always a tough task because of the heavyweight nature of security activities. This paper presents an innovative approach by which security activities can be combined with agile activities, by calculating the mean agility value of both kinds of activities, keeping in mind aspects such as cost, time, recurrence, and benefits that affect the agility of each activity. Using a fuzzy value compatibility table (FVCT), the two sets of activities are reconciled with fuzzy values.
Keywords: security of data; software prototyping; adaptive planning; agile activities; agile methodologies; agile software development; encyclopedic approach; fuzzy value compatibility table; protected software; security activities; Encoding; Industries; Next generation networking; Planning; Security; Software; Testing; Agility Degree; Fuzzy Logics; Security Activities (ID#: 15-4579)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6949242&isnumber=6949036

 

Kaur, R.; Singh, R.P., "Enhanced Cloud Computing Security and Integrity Verification via Novel Encryption Techniques," Advances in Computing, Communications and Informatics (ICACCI), 2014 International Conference on, pp. 1227-1233, 24-27 Sept. 2014. doi:10.1109/ICACCI.2014.6968328 Abstract: Cloud computing is a revolutionary movement in the IT industry that provides storage, computing power, network, and software as an abstraction and as a service, on demand over the internet, enabling clients to access these services remotely from anywhere, at any time, via any terminal equipment. Since the cloud has shifted data storage from personal computers to huge data centers, security of data has become one of the major concerns for cloud developers. In this paper a security model is proposed and implemented in Cloud Analyst to tighten the level of cloud storage security, providing security based on different encryption algorithms with an integrity verification scheme. We begin with a storage section selection phase divided into three sections: Private, Public, and Hybrid. Various encryption techniques are implemented in all three sections based on the security factors of authentication, confidentiality, security, privacy, non-repudiation, and integrity. A unique token generation mechanism implemented in the Private section helps ensure the authenticity of the user, the Hybrid section provides an On Demand Two Tier security architecture, and the Public section provides faster data encryption and decryption. Overall, data is wrapped in two folds of encryption and integrity verification in all three sections. A user who wants to access data is required to enter a login and password before being granted access to the encrypted data stored in the Private, Public, or Hybrid section, making it difficult for a hacker to gain access to the authorized environment.
Keywords: cloud computing; cryptography; IT industry; authentication factor; cloud analyst; cloud computing integrity verification; cloud computing security; confidentiality factor; data decryption; data encryption; data security; data storage; encryption algorithms; encryption techniques; hybrid storage selection; information technology; integrity factor; nonrepudiation factor; privacy factor; private storage selection; public storage selection; security factor; token generation mechanism; Authentication; Cloud computing; Computational modeling; Data models; Encryption; AES;Blowfish; IDEA; SAES; SHA-1; Token (ID#: 15-4580)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6968328&isnumber=6968191
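A minimal sketch of the two-fold wrapping described above: encrypt, then attach an integrity tag that is verified on retrieval. A SHA-256 keystream XOR stands in for the AES/Blowfish/IDEA ciphers the paper benchmarks (the Python standard library ships no block cipher), and HMAC-SHA-256 plays the role of its SHA-1-based integrity check; a real scheme would also use a per-message nonce and separate keys per layer.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, n: int) -> bytes:
    """Counter-mode keystream from SHA-256; cipher stand-in for the sketch."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def wrap(key: bytes, plaintext: bytes) -> bytes:
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, len(plaintext))))
    tag = hmac.new(key, ct, hashlib.sha256).digest()   # integrity fold
    return tag + ct

def unwrap(key: bytes, blob: bytes) -> bytes:
    tag, ct = blob[:32], blob[32:]
    if not hmac.compare_digest(tag, hmac.new(key, ct, hashlib.sha256).digest()):
        raise ValueError("integrity check failed")     # tampering detected
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, len(ct))))

key = os.urandom(32)
blob = wrap(key, b"patient record 42")
print(unwrap(key, blob))  # → b'patient record 42'
```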

 

He Sun; Lin Liu; Letong Feng; Yuan Xiang Gu, "Introducing Code Assets of a New White-Box Security Modeling Language," Computer Software and Applications Conference Workshops (COMPSACW), 2014 IEEE 38th International, pp. 116-121, 21-25 July 2014. doi:10.1109/COMPSACW.2014.24 Abstract: This paper presents a new conceptual modeling language for White-Box (WB) security analysis. In the WB security domain, an attacker may have access to the inner structure of an application or even the entire binary code. It becomes relatively easy for attackers to inspect, reverse engineer, and tamper with the application using the information they steal. The basis of this paper is the 14 patterns developed by a leading provider of software protection technologies and solutions. We provide part of a new modeling language named i-WBS (White-Box Security) to better describe WB security problems. The essence of the white-box security problem is code security. We made the new modeling language focus on code more than ever before, so that developers who are not security experts can easily understand what they really need to protect.
Keywords: computer crime; data protection; source code (software); specification languages; WB security analysis; attacker; binary code; code assets; code security; i-WBS; reverse engineer; software protection solutions; software protection technologies; white-box security modeling language; Analytical models; Binary codes; Computational modeling; Conferences; Security; Software; Testing; Code security; Security modeling language; White-box security; i-WBS (ID#: 15-4581)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903115&isnumber=6903069

 

McCarthy, M.A.; Herger, L.M.; Khan, S.M., "A Compliance Aware Software Defined Infrastructure," Services Computing (SCC), 2014 IEEE International Conference on, pp. 560-567, June 27-July 2, 2014. doi:10.1109/SCC.2014.79 Abstract: With the cloud eclipsing the $100B mark, it is clear that the main driver is no longer strictly cost savings. The focus now is to exploit the cloud for innovation, utilizing its agility to expand resources and quickly build out new designs, products, simulations, and analyses. As the cloud lowers the unit cost of IT and improves agility, the time to market for applications improves significantly. Companies will use this agility and speed as a competitive advantage. An example of this agility is enterprise adoption of the software-defined datacenter (SDDC) [3] model, which allows for the rapid build-out of environments with composable infrastructures. With adoption of the SDDC model, intelligent and automated management of the SDDC is an immediate priority, required to support the changing workloads and dynamic patterns of the enterprise. Often, security and compliance become an 'afterthought', bolted on later when problems arise. In this paper, we discuss our experience developing and deploying a centralized management system for public clouds, as well as an OpenStack [4] based cloud platform in SoftLayer, with an innovative, analytics-driven 'security compliance as a service' that constantly adjusts to varying compliance requirements based on workload, security, and compliance needs. We also focus on techniques we have developed for capturing and replaying the previous state of a failing client virtual machine (VM) image, rolling back, and then re-executing to analyze failures related to security or compliance. This technique contributes to agility, since failing VMs with security issues can quickly be analyzed and brought back online; that is often not otherwise the case with security problems, where analysis and forensics can take several days or weeks.
Keywords: cloud computing; configuration management; security of data; Openstack; SDDC model; SoftLayer; centralized management system; cloud platform; compliance aware software defined infrastructure; security compliance; software-defined datacenter; virtual machine; Companies; Forensics; Monitoring; Process control; Security; Software; Compliance; Compliance Architecture; Compliance Remediation; Compliance as a Service (ID#: 15-4582)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6930580&isnumber=6930500

 

Behl, D.; Handa, S.; Arora, A., "A Bug Mining Tool to Identify and Analyze Security Bugs Using Naive Bayes and TF-IDF," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, pp. 294-299, 6-8 Feb. 2014. doi:10.1109/ICROIT.2014.6798341 Abstract: Bug reports play a vital role during software development; however, bug reports belong to different categories such as performance, usability, and security. This paper focuses on security bugs and presents a bug mining system for the identification of security and non-security bugs using term frequency-inverse document frequency (TF-IDF) weights and naïve Bayes. We performed experiments on bug report repositories of bug tracking systems such as Bugzilla and Debugger. In the proposed approach we apply a text mining methodology and TF-IDF to an existing historic bug report database, based on the bug's description, to predict the nature of each bug and to train a statistical model for manually mislabeled bug reports present in the database. The tool helps in deciding the priorities of incoming bugs depending on their category, i.e., whether a report is a security or a non-security bug report, using naïve Bayes. Our evaluation shows that our tool using TF-IDF gives better results than the naïve Bayes method.
Keywords: Bayes methods; data mining; security of data; statistical analysis; text analysis; Naive Bayes method; TF-IDF; bug mining tool; bug tracking systems; historic bug report database; nonsecurity bug identification; nonsecurity bug report; security bug report; security bugs identification; software development; statistical model; term frequency-inverse document frequency weights; text mining methodology; Computer bugs; Integrated circuit modeling; Vectors; Bug; Naïve Bayes; TF-IDF; mining; non-security bug report; security bug reports; text analysis (ID#: 15-4583)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798341&isnumber=6798279
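A sketch of the TF-IDF weighting the tool applies to bug-description terms before classification. The three-document corpus is invented; the paper's Bugzilla data is not reproduced here.

```python
import math
from collections import Counter

def tfidf(corpus):
    """corpus: list of token lists -> list of {term: weight} dicts."""
    n = len(corpus)
    df = Counter(term for doc in corpus for term in set(doc))  # document frequency
    weights = []
    for doc in corpus:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

docs = [["buffer", "overflow", "crash"],   # security-flavored report
        ["ui", "button", "crash"],         # usability report
        ["overflow", "exploit", "auth"]]   # security-flavored report
w = tfidf(docs)
# "crash" occurs in two of three reports, so it carries less weight than the
# security-indicative "exploit", which occurs in only one.
print(w[2]["exploit"] > w[0]["crash"])  # → True
```

A naive Bayes classifier would then consume these weighted terms to score each report as security or non-security.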

 

Bouaziz, R.; Kallel, S.; Coulette, B., "A Collaborative Process for Developing Secure Component Based Applications," WETICE Conference (WETICE), 2014 IEEE 23rd International, pp. 306-311, 23-25 June 2014. doi:10.1109/WETICE.2014.82 Abstract: Security patterns describe security solutions that can be used in a particular context for recurring problems in order to solve a security problem in a more structured and reusable way. Patterns in general, and security patterns in particular, have become important concepts in software engineering, and their integration is a widely accepted practice. In this paper, we propose a model-driven methodology for security pattern integration. This methodology consists of a collaborative engineering process, called collaborative security pattern Integration process (C-SCRIP), and a tool that supports the full life-cycle of the development of a secure system from modeling to code.
Keywords: groupware; object-oriented programming; security of data; C-SCRIP process; collaborative engineering process; collaborative security pattern integration process; component based application security; model-driven methodology; security pattern integration; security solutions; software engineering; system development lifecycle; Analytical models; Collaboration; Context; Prototypes; Security; Software; Unified modeling language; CMSPEM; Collaborative process; Security patterns; based systems (ID#: 15-4584)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6927071&isnumber=6926989

 

Yier Jin, "EDA Tools Trust Evaluation Through Security Property Proofs," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, pp. 1-4, 24-28 March 2014. doi:10.7873/DATE.2014.260 Abstract: The security concerns of EDA tools have long been ignored because IC designers and integrators only focus on their functionality and performance. This lack of trusted EDA tools hampers hardware security researchers' efforts to design trusted integrated circuits. To address this concern, a novel EDA tools trust evaluation framework has been proposed to ensure the trustworthiness of EDA tools through their functional operation, rather than by scrutinizing the software code. As a result, the newly proposed framework lowers the evaluation cost and is a better fit for hardware security researchers. To support the EDA tools evaluation framework, a new gate-level information assurance scheme is developed for security property checking on any gate-level netlist. Helped by the gate-level scheme, we expand the territory of proof-carrying based IP protection from RT-level designs to gate-level netlists, so that most commercially traded third-party IP cores are under the protection of proof-carrying based security properties. Using a sample AES encryption core, we successfully prove the trustworthiness of Synopsys Design Compiler in generating a synthesized netlist.
Keywords: cryptography; electronic design automation; integrated circuit design; AES encryption core; EDA tools trust evaluation; Synopsys Design Compiler; functional operation; gate-level information assurance scheme; gate-level netlist; hardware security researchers; proof-carrying based IP protection; security property proofs; software code; third-party IP cores; trusted integrated circuits; Hardware; IP networks; Integrated circuits; Logic gates; Sensitivity; Trojan horses (ID#: 15-4585)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800461&isnumber=6800201

 

Bozic, J.; Wotawa, F., "Security Testing Based on Attack Patterns," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, pp. 4-11, 31 March-4 April 2014. doi:10.1109/ICSTW.2014.58 Abstract: Testing for security related issues is an important task of growing interest due to the vast amount of applications and services available over the internet. In practice, security testing is often performed manually, with the consequences of higher costs and no integration of security testing into today's agile software development processes. In order to bring security testing into practice, many different approaches have been suggested, including fuzz testing and model-based testing. Most of these approaches rely on models of the system or the application domain. In this paper we propose formalizing attack patterns from which test cases can be generated and even executed automatically. Hence, testing for known attacks can be easily integrated into software development processes where automated testing, e.g., for daily builds, is a requirement. The approach makes use of UML state charts. Besides discussing the approach, we illustrate it using a case study.
Keywords: Internet; Unified Modeling Language; program testing; security of data; software prototyping; Internet; UML state charts; agile software development processes; attack patterns; security testing; Adaptation models; Databases; HTML; Security; Software; Testing; Unified modeling language; Attack pattern; SQL injection; UML state machine; cross-site scripting; model-based testing; security testing (ID#: 15-4586)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825631&isnumber=6825623
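The core idea of the paper above, deriving executable tests from formalized attack patterns, can be sketched in miniature: an attack pattern is encoded as a small state machine whose paths from the start state to an accepting state become abstract test sequences. The transition table and the SQL-injection pattern below are invented for illustration and are not the authors' actual models.

```python
# Illustrative sketch: an attack pattern encoded as a finite state machine;
# every path from the start state to an accepting state yields one abstract
# test sequence. The SQL-injection pattern below is a made-up example.

SQLI_PATTERN = {
    "start":       [("find_input_field", "input_found")],
    "input_found": [("inject_tautology", "response"),
                    ("inject_union_select", "response")],
    "response":    [("observe_error_message", "vulnerable"),
                    ("observe_normal_page", "not_vulnerable")],
}
ACCEPTING = {"vulnerable", "not_vulnerable"}

def generate_tests(pattern, state="start", path=()):
    """Enumerate all action sequences leading to an accepting state."""
    if state in ACCEPTING:
        yield list(path)
        return
    for action, nxt in pattern.get(state, []):
        yield from generate_tests(pattern, nxt, path + (action,))

tests = list(generate_tests(SQLI_PATTERN))
# Each test is a sequence of abstract actions a test driver would execute.
```

A real implementation would attach concrete payloads and an execution driver to each abstract action; here the enumeration alone conveys how a state-chart model turns into test cases.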



Trustworthy Systems, Part 1

 
SoS Logo

Trustworthy Systems, Part 1

 

In information security, trust is established to assure the identity of external parties; it is one of the field's core problems. The growth of large-scale distributed systems and outsourcing to the cloud increase both the need for, and the challenge of, building trustworthy systems. The works cited here are from 2014 conferences.


 

Waguespack, L.J.; Yates, D.J.; Schiano, W.T., "Towards a Design Theory for Trustworthy Information Systems," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp. 3707-3716, 6-9 Jan. 2014. doi:10.1109/HICSS.2014.461 Abstract: The lack of a competent design theory to shape information system security policy and implementation has exacerbated an already troubling lack of security. Information systems remain insecure and therefore untrustworthy even after more than half a century of technological evolution. The issues grow ever more severe as the volume of data grows exponentially and the cloud emerges as a preferred repository. We aspire to advance security design by expanding the mindsets of stakeholder and designer to include a more complete portfolio of factors. The goal of security design is to craft choices that resonate with stakeholders' sense of a trustworthy system. To engender trust, security must be intrinsic to any definition of IS design quality. Thriving Systems Theory (TST) is an information systems design theory focused on reconciling and harmonizing stakeholder intentions. Formulating security design through TST is a starting point for a quality-based security design theory for trustworthy information systems.
Keywords: information systems; investment; security of data; IS design quality; TST; advance security design; design theory; information system security policy; portfolio; quality-based security design theory; technological evolution; thriving systems theory; trustworthy information systems; Communities; Context; Encapsulation; Information systems; Internet; Security; Shape (ID#: 15-4734)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759063&isnumber=6758592

 

Haiying Shen; Guoxin Liu; Gemmill, J.; Ward, L., "A P2P-Based Infrastructure for Adaptive Trustworthy and Efficient Communication in Wide-Area Distributed Systems," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 9, pp. 2222-2233, Sept. 2014. doi:10.1109/TPDS.2013.159 Abstract: Tremendous advances in pervasive networking have enabled wide-area distributed systems to connect distributed resources or users such as corporate data centers and high-performance computing centers. These distributed pervasive systems take advantage of resources and enhance collaborations worldwide. However, due to lack of central management, they are severely threatened by a variety of malicious users in today's Internet. Current reputation- and anonymity-based technologies for node communication enhance system trustworthiness. However, most of these technologies gain trustworthiness at the cost of efficiency degradation. This paper presents a P2P-based infrastructure for trustworthy and efficient node communication in wide-area distributed systems. It jointly addresses trustworthiness and efficiency in its operation in order to meet the high-performance requirements of a diversified wealth of distributed pervasive applications. The infrastructure includes two policies: trust/efficiency-oriented request routing and trust-based adaptive anonymous response forwarding. This infrastructure not only offers a trustworthy environment with anonymous communication but also enhances overall system efficiency through harmonious trustworthiness and efficiency trade-offs. Experimental results from simulations and the real-world PlanetLab testbed show the superior performance of the P2P-based infrastructure in achieving both high trustworthiness and high efficiency in comparison to other related approaches.
Keywords: peer-to-peer computing; trusted computing; Internet; P2P-based infrastructure; PlanetLab; adaptive trustworthy; anonymity-based technologies; corporate data centers; distributed pervasive systems; efficiency-oriented request routing; high-performance computing centers; node communication; peer-to-peer infrastructure; pervasive networking; reputation-based technologies; system trustworthiness; trust-based adaptive anonymous response forwarding; wide-area distributed systems; Algorithm design and analysis; Overlay networks; Peer-to-peer computing; Radiation detectors; Routing; Servers; Tunneling; Wide-area distributed systems; anonymity; efficiency; peer to peer networks; reputation systems (ID#: 15-4735)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6542624&isnumber=6873370

 

Banga, G.; Crosby, S.; Pratt, I., "Trustworthy Computing for the Cloud-Mobile Era: A Leap Forward in Systems Architecture," Consumer Electronics Magazine, IEEE, vol. 3, no. 4, pp. 31-39, Oct. 2014. doi:10.1109/MCE.2014.2338591 Abstract: The past decade has transformed computing in astounding ways: Who could have predicted, back in 2004, that cloud computing was about to change so profoundly to democratize in many respects the availability of computing, storage, and networking? Who could have imagined the transformation of client computing that resulted from the combination of pay-as-you-go cloud infrastructure for application developers and affordable, powerful, touch-enabled mobile devices?
Keywords: cloud computing; software architecture; trusted computing; cloud infrastructure; cloud-mobile era; systems architecture; trustworthy computing; Cloud computing; Computer architecture; Computer security; Mobile communication; Systems analysis and design; Virtual machine monitors (ID#: 15-4736)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6914666&isnumber=6914657

 

Choochotkaew, S.; Piromsopa, K., "Development of a Trustworthy Authentication System in Mobile Ad-Hoc Networks for Disaster Area," Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2014 11th International Conference on, pp. 1-6, 14-17 May 2014. doi:10.1109/ECTICon.2014.6839739 Abstract: In this paper, we propose a MANET authentication model for communication between victims in disaster areas. Our model is as secure as the Self-Generated-Certificate Public Key without pairing scheme [1], but does not require a direct connection to a centralized CA. We achieve this by combining two adjusted protocols into two independent authentication modes: main mode and emergency mode. In our scenario, a disaster area is partitioned into two adjacent zones: a damage zone (where most infrastructure has been damaged by a severe disaster) and an infrastructure zone. This partition is based on our observation of many real-life disaster situations. A node, called a carrier (rescue node), moves between the two zones in order to relay between them. Our proposed hybrid approach has higher availability and efficiency than the traditional approaches. In our system, an encrypted message can be used to verify both senders and receivers as well as to preserve confidentiality and integrity of data. The key to the success of our model is the mobility of the rescue nodes. Our model is validated using the NS-3 simulator. We present a security and efficiency analysis in comparison with the traditional approaches.
Keywords: cryptographic protocols; data integrity; emergency management; mobile ad hoc networks; mobility management (mobile radio); public key cryptography; radio receivers; radio transmitters; telecommunication security; MANET authentication model; NS-3 simulator; adjusted protocols; centralized CA; damage zone; disaster area; disaster situations; efficiency analysis; emergency mode; encrypted message; hybrid approach; independent authentication modes; infrastructure zone; main mode; mobile ad hoc networks; rescue node; security analysis; self-generated-certificate public key; trustworthy authentication system; Authentication; Availability; Encryption; Mobile ad hoc networks; Protocols; Public key; Authentication model; Communication in disaster area; MANET; self-generated-certificate public key; self-organized public key (ID#: 15-4737)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6839739&isnumber=6839704

 

Kroll, J.A.; Stewart, G.; Appel, A.W., "Portable Software Fault Isolation," Computer Security Foundations Symposium (CSF), 2014 IEEE 27th, pp. 18-32, 19-22 July 2014. doi:10.1109/CSF.2014.10 Abstract: We present a new technique for architecture portable software fault isolation (SFI), together with a prototype implementation in the Coq proof assistant. Unlike traditional SFI, which relies on analysis of assembly-level programs, we analyze and rewrite programs in a compiler intermediate language, the Cminor language of the CompCert C compiler. But like traditional SFI, the compiler remains outside of the trusted computing base. By composing our program transformer with the verified back-end of CompCert and leveraging CompCert's formally proved preservation of the behavior of safe programs, we can obtain binary modules that satisfy the SFI memory safety policy for any of CompCert's supported architectures (currently: PowerPC, ARM, and x86-32). This allows the same SFI analysis to be used across multiple architectures, greatly simplifying the most difficult part of deploying trustworthy SFI systems.
Keywords: program compilers; software fault tolerance; theorem proving; trusted computing; Cminor language; CompCert C compiler; Coq proof assistant; SFI memory safety policy; architecture portable software fault isolation; assembly-level program analysis; compiler intermediate language; trusted computing base; trustworthy SFI systems; Abstracts; Assembly; Computer architecture; Program processors; Safety; Security; Semantics; Architecture Portability; Memory Safety; Software Fault Isolation; Verified Compilers (ID#: 15-4738)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6957100&isnumber=6957090

 

Aktas, Erdem; Afram, Furat; Ghose, Kanad, "Continuous, Low Overhead, Run-Time Validation of Program Executions," Microarchitecture (MICRO), 2014 47th Annual IEEE/ACM International Symposium on, pp. 229-241, 13-17 Dec. 2014. doi:10.1109/MICRO.2014.18 Abstract: The construction of trustworthy systems demands that the execution of every piece of code is validated as genuine, that is, the executed codes do exactly what they are supposed to do. Pre-execution validations of code integrity fail to detect run time compromises like code injection, return- and jump-oriented programming, and illegal dynamic linking of program modules. We propose and evaluate a generalized mechanism called REV (for Run-time Execution Validator) that can be easily integrated into a contemporary out-of-order processor to validate, as the program executes, the control flow path and instructions executed along the control flow path. To prevent memory from being tainted by compromised code, REV also prevents updates to the memory from a basic block until its execution has been authenticated. Although control flow signature based authentication of an execution has been suggested before for software testing and for restricted cases of embedded systems, their extensions to out-of-order cores are a non-incremental effort from a microarchitectural standpoint. Unlike REV, the existing solutions do not scale with binary sizes, require binaries to be altered or require new ISA support, and also fail to contain errors and, in general, impose a heavy performance penalty. We show, using a detailed cycle-accurate microarchitectural simulator for an out-of-order pipeline implementing the x86 ISA, that the performance overhead of REV is limited to 1.87% on average across the SPEC 2006 benchmarks.
Keywords: Authentication; Cryptography; Hardware; Kernel; Out of order; Pipelines; Computer Security; Control-Flow Integrity; Control-Flow Validation; Hardware Security; Secure Execution; Trusted Computing (ID#: 15-4739)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7011391&isnumber=7011360

 

Naeen, H.M.; Jalali, M.; Naeen, A.M., "A Trust-Aware Collaborative Filtering System Based on Weighted Items for Social Tagging Systems," Intelligent Systems (ICIS), 2014 Iranian Conference on, pp. 1-5, 4-6 Feb. 2014. doi:10.1109/IranianCIS.2014.6802565 Abstract: Collaborative filtering systems consider users' social environment to predict what each user may like to visit in a social network; i.e., they collect and analyze a large amount of information on users' behavior, activities or preferences and then make predictions or suggestions to users. These systems use the ranks or tags each user assigns to different resources to make predictions. Lately, social tagging systems, in which users can insert new content, and tag, organize, share and search for content, are becoming more popular. These social tagging systems hold a great deal of valuable information, but data expansion in them is very fast, which has led to the need for recommender systems that predict what each user may like or need and make these suggestions to them. One of the problems in these systems is how much we can rely on similar users: are they trustworthy? In this article we use a trust metric, derived from users' tagging behavior, alongside similarities to generate suggestions. Results show that considering trust in a collaborative system can lead to better performance in generating suggestions.
Keywords: behavioural sciences; collaborative filtering; recommender systems; social networking (online);trusted computing; data expansion; recommender systems; social network; social tagging systems; trust metric; trust-aware collaborative filtering system; user activities; user behavior; user preferences; user social environment; user tagging behavior; weighted items; Collaboration; Measurement; Motion pictures; Recommender systems; Social network services; Tagging; Collaborative filtering systems; Recommender systems; Tag; Trust (ID#: 15-4740)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6802565&isnumber=6798982
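The combination of a trust metric with rating similarity described in the abstract above can be illustrated with a minimal weighted prediction: neighbours' ratings are weighted by similarity and trust together. The formula and all numbers below are an illustrative sketch, not the authors' exact scheme (which derives trust from tagging behavior).

```python
# Minimal sketch of a trust-aware collaborative filtering prediction:
# a neighbour's rating is weighted by both its similarity to the target
# user and a trust score. Both weights are supplied directly here; all
# numbers are invented for illustration.

def predict(neighbours):
    """neighbours: list of (similarity, trust, rating) triples."""
    num = sum(sim * trust * rating for sim, trust, rating in neighbours)
    den = sum(abs(sim * trust) for sim, trust, _ in neighbours)
    return num / den if den else None

neighbours = [
    (0.9, 0.8, 4.0),   # similar and trusted user rated the item 4
    (0.7, 0.2, 1.0),   # similar but barely trusted user rated it 1
]
score = predict(neighbours)  # weighted towards the trusted user's rating
```

Without the trust weights a plain similarity-weighted average would be pulled noticeably lower by the untrusted neighbour; the trust factor damps that influence.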

 

Lopez, J.; Xiaoping Che; Maag, S.; Morales, G., "A Distributed Monitoring Approach for Trust Assessment Based on Formal Testing," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp. 702-707, 13-16 May 2014. doi:10.1109/WAINA.2014.114 Abstract: Communication systems are growing in use and in popularity. As their interactions become more numerous, trusting those interactions becomes a priority. In this paper, we focus on trust management systems based on observations of trustee behaviors. Based on a formal testing methodology, we propose a formal distributed network monitoring approach that analyzes the packets exchanged between the trustor, the trustee and other points of observation in order to prove that the trustee is acting in a trustworthy manner. Based on formal "trust properties", the monitored systems' behaviors provide a verdict of trust by analyzing and testing those properties. Finally, our methodology is applied to a real industrial DNS use case scenario.
Keywords: formal specification; program testing; trusted computing; communications systems; formal distributed network monitoring approach; formal testing methodology; formal trust properties; industrial DNS use case scenario; trust assessment; trust management systems; trustee behaviors; Monitoring; Protocols; Prototypes; Security; Servers; Syntactics; Testing; Communication systems; Formal method; Monitoring; Trust (ID#: 15-4741)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844721&isnumber=6844560
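Passive monitoring against a formal trust property, as described in the abstract above, can be sketched as a check over a trace of observed messages. The property ("every DNS-style response must match a previously seen query ID"), the trace format, and the verdict strings below are invented for illustration and are not the authors' formalism.

```python
# Illustrative sketch of passive trust monitoring: a "trust property" is
# evaluated against a trace of observed messages, yielding a verdict.
# Property, message format, and verdicts are made-up examples.

def check_response_matches_query(trace):
    """Return 'trust' if every response matches a prior query, else 'no-trust'."""
    pending = set()
    for msg in trace:
        if msg["type"] == "query":
            pending.add(msg["id"])
        elif msg["type"] == "response":
            if msg["id"] not in pending:
                return "no-trust"        # unsolicited response observed
            pending.discard(msg["id"])
    return "trust"

good = [{"type": "query", "id": 1}, {"type": "response", "id": 1}]
bad  = [{"type": "response", "id": 7}]   # response with no matching query
```

In a distributed deployment, each point of observation would run such checks on its local view of the exchanged packets and contribute to an overall verdict.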

 

Divya, S.V.; Shaji, R.S., "Security in Data Forwarding Through Elliptic Curve Cryptography in Cloud," Control, Instrumentation, Communication and Computational Technologies (ICCICCT), 2014 International Conference on, pp. 1083-1088, 10-11 July 2014. doi:10.1109/ICCICCT.2014.6993122 Abstract: Cloud is an emerging trend in IT which moves data, along with its computing, away from local systems into large remote data centers where the management of those resources is not trustworthy. Because of its attractive features, it is gaining popularity among IT practitioners and researchers. However, building a secure storage system with such functionality is still a challenging task in the cloud. Existing methods suffer from inefficiency and delay because data cannot be forwarded to a user without first being retrieved, and offline verification causes delay. This paper focuses on designing a secure cloud storage system that supports a data forwarding function using elliptic curve cryptography. The proposed work also incorporates an Online Alert methodology that notifies the data owner when any attacker tries to modify the data or any malpractice happens during data forwarding. Moreover, our method ensures multi-level security when compared to existing systems.
Keywords: cloud computing; public key cryptography; storage management; cloud computing; cloud storage system; data centers; data forwarding function; data forwarding security; data owner; elliptic curve cryptography; multilevel security; online alert methodology; resource management; Cloud computing; Elliptic curve cryptography; Encryption; Protocols; Servers; Data forwarding; Elliptic Curve Cryptography; Multi-level security; Online Alert Methodology (ID#: 15-4742)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6993122&isnumber=6992918

 

Anisetti, Marco; Ardagna, Claudio A.; Damiani, Ernesto, "A Certification-Based Trust Model for Autonomic Cloud Computing Systems," Cloud and Autonomic Computing (ICCAC), 2014 International Conference on, pp. 212-219, 8-12 Sept. 2014. doi:10.1109/ICCAC.2014.8 Abstract: Autonomic cloud computing systems react to events and context changes, preserving a stable quality of service for their tenants. Existing assurance techniques supporting trust relations between parties need to be adapted to scenarios where the assumption of responsibility on trust assertions and related information (e.g., in SLAs and certificates) cannot be done at a single point in time and by a single trusted third party. In this paper, we tackle this problem by proposing a new trust model grounded on a security certification scheme for the cloud. Our model is based on a multiple signatures process including dynamic delegation mechanisms. Our approach supports autonomic cloud computing systems in the management of dynamic content in security certificates, establishing a trustworthy cloud environment.
Keywords: Cloud computing; Clouds; Computational modeling; Context; Mechanical factors; Monitoring; Security; Autonomic Cloud Computing; Certification; Trust Model (ID#: 15-4743)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7024063&isnumber=7024029

 

Zongwei Zhou; Miao Yu; Gligor, V.D., "Dancing with Giants: Wimpy Kernels for On-Demand Isolated I/O," Security and Privacy (SP), 2014 IEEE Symposium on, pp. 308-323, 18-21 May 2014. doi:10.1109/SP.2014.27 Abstract: To be trustworthy, security-sensitive applications must be formally verified and hence small and simple, i.e., wimpy. Thus, they cannot include a variety of basic services available only in large and untrustworthy commodity systems, i.e., in giants. Hence, wimps must securely compose with giants to survive on commodity systems, i.e., rely on giants' services but only after efficiently verifying their results. This paper presents a security architecture based on a wimpy kernel that provides on-demand isolated I/O channels for wimp applications, without bloating the underlying trusted computing base. The size and complexity of the wimpy kernel are minimized by safely outsourcing I/O subsystem functions to an untrusted commodity operating system and exporting driver and I/O subsystem code to wimp applications. Using the USB subsystem as a case study, this paper illustrates the dramatic reduction of wimpy-kernel size and complexity, e.g., over 99% of the USB code base is removed. Performance measurements indicate that the wimpy-kernel architecture exhibits the desired execution efficiency.
Keywords: formal verification; operating systems (computers); peripheral interfaces; software architecture; trusted computing; I/O subsystem functions; USB code base; USB subsystem; commodity systems; formal verification; giants services; on-demand isolated I/O channels; security architecture; security-sensitive applications; trusted computing; trustworthy; untrusted commodity operating system; wimp applications; wimpy kernel complexity; wimpy kernel size; wimpy-kernel architecture; Complexity theory; Hardware; Kernel; Linux; Security; Universal Serial Bus (ID#: 15-4744)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956572&isnumber=6956545

 

Hussain, S.; Gustavsson, R.; Saleem, A.; Nordstrom, L., "SLA Conceptual Framework for Coordinating and Monitoring Information Flow in Smart Grid," Innovative Smart Grid Technologies Conference (ISGT), 2014 IEEE PES, pp. 1-5, 19-22 Feb. 2014. doi:10.1109/ISGT.2014.6816470 Abstract: The EU challenges for future energy systems will change the scene of energy systems in Europe. The transition from a centrally controlled power network to a customer-oriented Smart Grid operating in a distributed and deregulated energy market poses several regulatory, organizational and technical challenges. In such market scenarios, multiple stakeholders provide services to produce and deliver energy. Due to the inclusion of new stakeholders at multiple levels, there is a lack of purposeful monitoring based on pre-negotiated Service Level Agreements (SLAs). Hence, there is a gap in actively monitoring KPI values across all negotiated SLAs. This paper addresses SLA-based active monitoring of information flow. The proposed SLA framework provides monitoring based on negotiated KPIs in an automated and trustworthy way to coordinate information flow. Finally, a use case is presented to validate the SLA framework.
Keywords: contracts; power markets; smart power grids; SLA conceptual framework; deregulated energy market; distributed energy market; information flow coordination; information flow monitoring; service level agreement; smart grid; Availability; Business; Interoperability; Monitoring; Quality of service; Smart grids; Coordination; Monitoring; SCADA; SLA; Service Level Agreements; Smart Grid; Stakeholders (ID#: 15-4745)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816470&isnumber=6816367
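SLA-based KPI monitoring of the kind described above reduces, at its core, to comparing measured KPI values against negotiated bounds. The KPI names and thresholds below are invented for illustration; a real framework would also attribute breaches to the responsible stakeholder and record them auditably.

```python
# Minimal sketch of SLA-based KPI monitoring: measured KPI values are
# checked against bounds negotiated in an SLA. KPI names and thresholds
# are made-up examples.

SLA = {                              # negotiated KPI bounds: (min, max)
    "availability_pct": (99.0, 100.0),
    "report_latency_s": (0.0, 5.0),
}

def check_kpis(measurements):
    """Return the list of KPIs whose measured value violates the SLA."""
    breaches = []
    for kpi, value in measurements.items():
        lo, hi = SLA[kpi]
        if not (lo <= value <= hi):
            breaches.append(kpi)
    return breaches

ok = check_kpis({"availability_pct": 99.5, "report_latency_s": 2.0})
bad = check_kpis({"availability_pct": 97.0, "report_latency_s": 2.0})
```

Here `ok` comes back empty while `bad` reports the availability breach, the kind of event the paper's framework would surface to the affected stakeholders.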

 

Leke, C.; Twala, B.; Marwala, T., "Modeling of Missing Data Prediction: Computational Intelligence and Optimization Algorithms," Systems, Man and Cybernetics (SMC), 2014 IEEE International Conference on, pp. 1400-1404, 5-8 Oct. 2014. doi:10.1109/SMC.2014.6974111 Abstract: Four optimization algorithms (genetic algorithm, simulated annealing, particle swarm optimization and random forest) were applied with an MLP-based auto-associative neural network on two classification datasets and one prediction dataset. This work was undertaken to investigate the effectiveness of using auto-associative neural networks and optimization algorithms in missing data prediction and classification tasks. If performed appropriately, computational intelligence and optimization algorithm systems could lead to consistent, accurate and trustworthy predictions and classifications, resulting in more adequate decisions. The results reveal GA, SA and PSO to be more efficient than RF in predicting the forest area to be affected by fire. GA, SA, and PSO had the same accuracy of 93.3%, while RF showed 92.99% accuracy. For the classification problems, RF showed 93.66% and 92.11% accuracy on the German credit and Heart disease datasets respectively, outperforming GA, SA and PSO.
Keywords: data mining; genetic algorithms; linear programming; neural nets; particle swarm optimisation; pattern classification; simulated annealing; tree searching; German credit datasets; Heart disease datasets; MLP based auto associative neural network; computational intelligence; genetic algorithm; missing data prediction modeling; optimization algorithms; particle swarm optimization; random forest; simulated annealing; trustworthy predictions; Accuracy; Classification algorithms; Genetic algorithms; Neural networks; Optimization; Prediction algorithms; Radio frequency; auto-associative neural networks; classification; missing data; optimization algorithms; prediction (ID#: 15-4746)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6974111&isnumber=6973862
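The auto-associative imputation scheme this paper evaluates can be sketched in a toy form: candidate values for a missing entry are scored by the reconstruction error of an autoassociative model, and an optimizer keeps the best-scoring candidate. Everything below is an illustrative stand-in: the "model" is a hand-coded map encoding the relation x1 = 2*x0, and plain grid search substitutes for GA/SA/PSO.

```python
# Sketch of missing-data imputation with an autoassociative model plus an
# optimizer: candidates for the missing entry are scored by reconstruction
# error and the best one is kept. The toy model below assumes the learned
# structure x1 = 2*x0; grid search stands in for GA/SA/PSO.

def reconstruct(record):
    """Toy autoassociative model over a two-feature record (x0, x1)."""
    x0, x1 = record
    return (x1 / 2.0, 2.0 * x0)

def reconstruction_error(record):
    rec = reconstruct(record)
    return sum((a - b) ** 2 for a, b in zip(record, rec))

def impute(known_x0, candidates):
    """Pick the candidate x1 minimising the reconstruction error."""
    return min(candidates, key=lambda c: reconstruction_error((known_x0, c)))

candidates = [i / 10.0 for i in range(0, 101)]   # search 0.0 .. 10.0
best = impute(3.0, candidates)                    # x0 = 3, so x1 near 6 wins
```

Replacing the grid search with a genetic algorithm or particle swarm changes only how the candidate set is explored; the scoring by reconstruction error is the shared core of the approach.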

 

Ferdowsi, H.; Jagannathan, S.; Zawodniok, M., "An Online Outlier Identification and Removal Scheme for Improving Fault Detection Performance," Neural Networks and Learning Systems, IEEE Transactions on, vol. 25, no. 5, pp. 908-919, May 2014. doi:10.1109/TNNLS.2013.2283456 Abstract: Measured data or states for a nonlinear dynamic system are usually contaminated by outliers. Identifying and removing outliers will make the data (or system states) more trustworthy and reliable, since outliers in the measured data (or states) can cause missed or false alarms during fault diagnosis. In addition, faults can make the system states nonstationary, needing a novel analytical model-based fault detection (FD) framework. In this paper, an online outlier identification and removal (OIR) scheme is proposed for a nonlinear dynamic system. Since the dynamics of the system can experience unknown changes due to faults, traditional observer-based techniques cannot be used to remove the outliers. The OIR scheme uses a neural network (NN) to estimate the actual system states from measured system states involving outliers. With this method, the outlier detection is performed online at each time instant by finding the difference between the estimated and the measured states and comparing its median with its standard deviation over a moving time window. The NN weight update law in OIR is designed such that the detected outliers will have no effect on the state estimation, which is subsequently used for model-based fault diagnosis. In addition, since the OIR estimator cannot distinguish between the faulty or healthy operating conditions, a separate model-based observer is designed for fault diagnosis, which uses the OIR scheme as a preprocessing unit to improve the FD performance. The stability analysis of both the OIR and fault diagnosis schemes is presented.
Finally, a three-tank benchmarking system and a simple linear system are used to verify the proposed scheme in simulations, and then the scheme is applied to an axial piston pump testbed. The scheme can be applied to nonlinear systems whose dynamics and underlying distribution of states are subject to change due to both unknown faults and operating conditions.
Keywords: fault diagnosis; fault tolerant control; neurocontrollers; nonlinear dynamical systems; observers; statistical analysis; FD framework; NN weight update law; OIR scheme; analytical model-based fault detection; axial piston pump; fault detection performance; median; model-based fault diagnosis; model-based observer; moving time window; neural network; nonlinear dynamic system; observer-based techniques; online outlier identification-and-removal scheme; simple linear system; standard deviation; three-tank benchmarking system; Fault diagnosis; Noise; Noise measurement; Observers; Pollution measurement; Vectors; Data analysis; fault diagnosis; neural networks; nonlinear systems (ID#: 15-4747)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6658905&isnumber=6786374
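The online outlier test in the abstract above, comparing the residual between estimated and measured states against windowed statistics, can be sketched as follows. The window size and threshold factor k are illustrative choices, not the paper's tuned values, and a fixed estimate stands in for the paper's neural-network state estimator.

```python
# Sketch of online outlier detection over a moving window of residuals:
# a point is flagged when its residual deviates from the window median by
# more than k standard deviations. Window size and k are illustrative.
from collections import deque
from statistics import median, pstdev

class OutlierDetector:
    def __init__(self, window=20, k=3.0):
        self.residuals = deque(maxlen=window)
        self.k = k

    def is_outlier(self, estimated, measured):
        r = measured - estimated
        self.residuals.append(r)
        if len(self.residuals) < self.residuals.maxlen:
            return False                      # not enough history yet
        spread = pstdev(self.residuals)
        return abs(r - median(self.residuals)) > self.k * max(spread, 1e-9)

det = OutlierDetector()
# Small alternating noise, then one grossly corrupted measurement:
flags = [det.is_outlier(0.0, m) for m in [0.1, -0.1] * 10 + [50.0]]
```

Only the final, grossly deviant measurement is flagged; in the paper's scheme such flagged points would additionally be excluded from the estimator's weight update so they cannot corrupt the state estimate.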

 

Detken, K.-O.; Genzel, C.-H.; Rudolph, C.; Jahnke, M., "Integrity Protection in a Smart Grid Environment for Wireless Access of Smart Meters," Wireless Systems within the Conferences on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS-SWS), 2014 2nd International Symposium on, pp. 79-86, 11-12 Sept. 2014. doi:10.1109/IDAACS-SWS.2014.6954628 Abstract: To meet the future challenges of energy grids, secure communication between the involved control systems is necessary. The German Federal Office for Information Security (BSI) has therefore published security standards concerning a central communication unit for energy grids called the Smart Meter Gateway (SMGW). The security concept of the SPIDER project takes these standards into consideration but extends their level of information security by integrating elements from the Trusted Computing approach. Additionally, a tamper-resistant grid is integrated with chosen hardware modules and a trustworthy boot process is applied. To continually measure SMGW and smart meter (SM) integrity, the Trusted Network Connect (TNC) approach from the Trusted Computing Group (TCG) is used. Hereby a Trusted Core Network (TCN) can be established to protect the smart grid components against IT-based attacks, which is necessary especially given the use of wireless connections between the SMGW and the smart meter components.
Keywords: data protection; power engineering computing; security of data; smart meters; smart power grids; BSI; German Federal Office for Information Security; SMGW; TCG; TCN; TNC; central communication unit; energy grids; integrity protection; security standards; smart grid environment; smart meter gateway; smart meters; tamper resistant grid; trusted computing group; trusted core network; trusted network connect; trustworthy boot process; wireless access; Hardware; Monitoring; Security; Smart meters; Software; Standards; Wide area networks; Integrity; Smart Meter Gateway; Smart Meters; Trusted Computing; Trusted Core Network; Trusted Network Connect (ID#: 15-4748)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6954628&isnumber=6954602

 

Jain, R.; Prabhakar, S., "Guaranteed Authenticity and Integrity of Data from Untrusted Servers," Data Engineering (ICDE), 2014 IEEE 30th International Conference on, pp.1282,1285, March 31 2014-April 4 2014. doi:10.1109/ICDE.2014.6816761 Abstract: Data are often stored at untrusted database servers. The lack of trust arises naturally when the database server is owned by a third party, as in the case of cloud computing. It also arises if the server may have been compromised, or there is a malicious insider. Ensuring the trustworthiness of data retrieved from such untrusted databases is of utmost importance. Trustworthiness of data is defined by faithful execution of valid and authorized transactions on the initial data. Earlier work on this problem is limited to cases where data are either not updated, or data are updated by a single trustworthy entity. However, for a truly dynamic database, multiple clients should be allowed to update data without having to route the updates through a central server. In this demonstration, we present a system to establish authenticity and integrity of data in a dynamic database where the clients can run transactions directly on the database server. Our system provides provable authenticity and integrity of data with absolutely no requirement for the server to be trustworthy. Our system also provides assured provenance of data. This demonstration is built using the solutions proposed in our previous work [5]. Our system is built on top of Oracle with no modifications to the database internals. We show that the system can be easily adopted in existing databases without any internal changes to the database. We also demonstrate how our system can provide authentic provenance.
Keywords: data integrity; database management systems; trusted computing; Oracle; cloud computing; data authenticity; data integrity; data provenance; data transactions; data trustworthiness; database internals; database servers; dynamic database; malicious insider; trustworthy entity; Cloud computing; Hardware; Indexes; Protocols; Servers (ID#: 15-4749)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816761&isnumber=6816620

 

Sousa, S.; Dias, P.; Lamas, D., "A Model for Human-computer Trust: A Key Contribution for Leveraging Trustful Interactions," Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on, pp.1,6, 18-21 June 2014. doi:10.1109/CISTI.2014.6876935 Abstract: This article addresses trust in computer systems as a social phenomenon, which depends on the type of relationship that is established through the computer, or with other individuals. It starts by theoretically contextualizing trust, and then situates trust in the field of computer science. It then describes the proposed model, which builds on what one perceives to be trustworthy and is influenced by a number of factors such as the history of participation and the user's perceptions. It ends by situating the proposed model as a key contribution for leveraging trustful interactions, and by proposing its use as a complement to foster users' trust needs in Human-Computer Interaction and computer-mediated interactions.
Keywords: computer mediated communication; human computer interaction; computer science; computer systems; computer-mediated interactions; human-computer iteration; human-computer trust model; participation history; social phenomenon; trustful interaction leveraging; user perceptions; user trust needs; Collaboration; Computational modeling; Computers; Context; Correlation; Educational institutions; Psychology; Collaboration; Engagement; Human-computer trust; Interaction design; Participation (ID#: 15-4750)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876935&isnumber=6876860

 

Noorian, Z.; Mohkami, M.; Yuan Liu; Hui Fang; Vassileva, J.; Jie Zhang, "SocialTrust: Adaptive Trust Oriented Incentive Mechanism for Social Commerce," Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2014 IEEE/WIC/ACM International Joint Conferences on, vol.2, pp.250,257, 11-14 Aug. 2014. doi:10.1109/WI-IAT.2014.105 Abstract: In the absence of legal authorities and enforcement mechanisms in open e-marketplaces, it is extremely challenging for a user to validate the quality of opinions (i.e., ratings and reviews) of products or services provided by other users (referred to as advisers). Rationally, advisers tend to be reluctant to share their truthful experience with others. In this paper, we propose an adaptive incentive mechanism, where advisers are motivated to share their actual experiences with their trustworthy peers (friends/neighbors in the social network) in e-marketplaces (the social commerce context), and malicious users will eventually be evacuated from the system. Experimental results demonstrate the effectiveness of our mechanism in promoting the honesty of users in sharing their past experiences.
Keywords: electronic commerce; incentive schemes; social networking (online);trusted computing; SocialTrust mechanism; adaptive trust oriented incentive mechanism; e-marketplaces; social commerce; Business; Context; Measurement; Monitoring; Quality of service; Servers; Social network services; Trust; electronic commerce; incentive mechanism; reputation systems (ID#: 15-4751)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6927632&isnumber=6927590

 

Hua Chai; Wenbing Zhao, "Towards Trustworthy Complex Event Processing," Software Engineering and Service Science (ICSESS), 2014 5th IEEE International Conference on, pp.758,761, 27-29 June 2014. doi:10.1109/ICSESS.2014.6933677 Abstract: Complex event processing has become an important technology for big data and intelligent computing because it facilitates the creation of actionable, situational knowledge from potentially large amount events in soft realtime. Complex event processing can be instrumental for many mission-critical applications, such as business intelligence, algorithmic stock trading, and intrusion detection. Hence, the servers that carry out complex event processing must be made trustworthy. In this paper, we present a threat analysis on complex event processing systems and describe a set of mechanisms that can be used to control various threats. By exploiting the application semantics for typical event processing operations, we are able to design lightweight mechanisms that incur minimum runtime overhead appropriate for soft realtime computing.
Keywords: Big Data; trusted computing; Big Data; actionable situational knowledge; algorithmic stock trading; application semantics; business intelligence; complex event processing; event processing operations; intelligent computing; intrusion detection; minimum runtime overhead; mission-critical applications; servers; soft realtime computing; threat analysis; trustworthy; Business; Context; Fault tolerance; Fault tolerant systems; Runtime; Servers; Synchronization; Big Data; Business Intelligence; Byzantine Fault Tolerance; Complex Event Processing; Dependable Computing; Trust (ID#: 15-4752)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933677&isnumber=6933501

 

Addo, I.D.; Ji-Jiang Yang; Ahamed, S.I., "SPTP: A Trust Management Protocol for Online and Ubiquitous Systems," Computer Software and Applications Conference (COMPSAC), 2014 IEEE 38th Annual, pp.590,595, 21-25 July 2014. doi:10.1109/COMPSAC.2014.82 Abstract: With the recent proliferation of ubiquitous, mobile and cloud-based systems, security, privacy and trust concerns surrounding the use of emerging technologies in the ensuing wake of the Internet of Things (IoT) continue to mount. In most instances, trust and privacy concerns continuously surface as a key deterrent to the adoption of these emergent technologies. This paper presents a Secure, Private and Trustworthy protocol (named SPTP) that was prototyped to address critical security, privacy and trust concerns surrounding mobile, pervasive and cloud services in Collective Intelligence (CI) scenarios. The efficacy of the protocol and its associated characteristics are evaluated in CI-related scenarios including multimodal monitoring of elderly people in smart home environments, online advertisement targeting in computational advertising settings, and affective state monitoring through game play as an intervention for autism among children. We present our evaluation criteria for the proposed protocol, our initial results and future work.
Keywords: Internet of Things; cloud computing; data privacy; mobile computing; security of data; trusted computing; CI scenarios; Internet of Things; IoT; SPTP; cloud-based systems; collective intelligence; computational advertising; elderly people; mobile systems; online advertisement; online systems; privacy; security; smart home environments; trust management protocol; ubiquitous systems; Cloud computing; Data privacy; Monitoring; Privacy; Protocols; Security; Senior citizens; Cloud; Collective Intelligence; Mobile Computing; Online Advertising Privacy; Privacy Framework; SPT; Security; Trust Management; Trust and Privacy Protocol; Ubiquitous Computing; mHealth (ID#: 15-4753)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6899265&isnumber=6899181


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to news@scienceofsecurity.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.

Trustworthy Systems, Part 2

 
SoS Logo

Trustworthy Systems, Part 2

 

In information security, trust is established to assure the identity of external parties, and doing so remains one of the field's core problems. The growth of large-scale distributed systems and of outsourcing to the cloud increases both the need for trustworthy systems and the challenge of building them. The works cited here are from 2014 conferences.


 

Msadek, N.; Kiefhaber, R.; Ungerer, T., "A Trust- and Load-Based Self-Optimization Algorithm for Organic Computing Systems," Self-Adaptive and Self-Organizing Systems (SASO), 2014 IEEE Eighth International Conference on, pp.177,178, 8-12 Sept. 2014. doi:10.1109/SASO.2014.32 Abstract: In this paper a new design of self-optimization for organic computing systems is investigated. Its main task, besides load balancing, is to assign services with different importance levels to nodes so that the more important services are placed on more trustworthy nodes. The evaluation results showed that the proposed algorithm is able to balance the workload between nodes nearly optimally. Moreover, it significantly improves the availability of important services.
Keywords: distributed processing; fault tolerant computing; resource allocation; self-adjusting systems; trusted computing; load-balancing; load-based self-optimization algorithm; organic computing systems; trust-based self-optimization algorithm; trustworthy nodes; Algorithm design and analysis; Availability; Computer network reliability; Conferences; Load management; Optimization; Runtime; Autonomic Computing; Organic Computing; Self-Optimization; Self-x Properties; Trust (ID#: 15-4754)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7001015&isnumber=7000942
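The trust- and load-based assignment idea in the abstract above can be illustrated with a small greedy sketch. This is not the paper's algorithm: the scoring rule (trust minus a load penalty), the weight `alpha`, and the data shapes are assumptions for illustration. The most important services are placed first, each on the node whose trust, discounted by its projected load, is highest.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    trust: float       # 0.0..1.0, higher means more trustworthy
    load: float = 0.0  # workload already assigned to this node

def assign(services, nodes, alpha=0.5):
    """Greedy sketch: place the most important services first, each on the
    node maximizing trust minus a load penalty (alpha * projected load)."""
    placement = {}
    # services: iterable of (name, importance, cost); most important first
    for name, importance, cost in sorted(services, key=lambda s: -s[1]):
        best = max(nodes, key=lambda n: n.trust - alpha * (n.load + cost))
        best.load += cost
        placement[name] = best.name
    return placement
```

With two nodes of trust 0.9 and 0.8, the first (most important) service lands on the more trusted node, and later services spill over once that node's load penalty outweighs its trust advantage, so the workload stays balanced.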

 

Morisse, M.; Horlach, B.; Kappenberg, W.; Petrikina, J.; Robel, F.; Steffens, F., "Trust in Network Organizations — A Literature Review on Emergent and Evolving Behavior in Network Organizations," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp.4578,4587, 6-9 Jan. 2014. doi:10.1109/HICSS.2014.561 Abstract: This systematic literature review examines different forms of evolving and emergent behavior in network organizations (NO) with an emphasis on trust. Because of the difficulties and importance in researching emergent behavior in network organizations, this review summarizes the main aspects of 17 papers and tries to disclose open research points by combining the different perspectives of behavior and forms of NOs. Due to the complexity of those organizations, there are several “soft aspects” that affect the partnership implicitly. In particular, trust is intertwined with other facets (e.g. legal aspects). IT governance and IT systems can have an impact on trust and vice versa. Therefore, maintaining a trustworthy relationship in a network organization is undoubtedly an enormous challenge for all participants. At the end of this literature review, we discuss some open research gaps like the influence of different cultures in NOs or the visualization of emergent behavior.
Keywords: law; organisational aspects; trusted computing; IT governance; IT systems; emergent behavior; legal aspects; network organizations; trustworthy relationship; Collaboration; Complexity theory; Information systems; Law; Organizations; Outsourcing; Standards organizations; Network organization; emergent and evolving behavior; literature review; trust (ID#: 15-4755)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759164&isnumber=6758592

 

Franklin, Z.R.; Patterson, C.D.; Lerner, L.W.; Prado, R.J., "Isolating Trust in an Industrial Control System-on-Chip Architecture," Resilient Control Systems (ISRCS), 2014 7th International Symposium on, pp.1,6, 19-21 Aug. 2014. doi:10.1109/ISRCS.2014.6900096 Abstract: A distributed industrial control system (ICS) also distributes trust across many software and hardware components. There is a need for some malware countermeasures to be independent of application, supervisory or driver software, which can introduce vulnerabilities. We describe the Trustworthy Autonomic Interface Guardian Architecture (TAIGA) that provides an on-chip, digital, security version of classic mechanical interlocks. In order to enhance trust in critical embedded processes, TAIGA redistributes responsibilities and authorities between a Programmable Logic Controller (PLC) processor and a hardware-implemented interface controller, simplifying PLC software without significantly degrading performance while separating trusted components from updatable software. The interface controller is synthesized from C code, formally analyzed, and permits runtime-checked, authenticated updates to certain system parameters but not code. TAIGA's main focus is ensuring process stability even if this requires overriding commands from the processor or supervisory nodes. The TAIGA architecture is mapped to a commercial, configurable system-on-chip platform.
Keywords: control engineering computing; distributed control; industrial control; production engineering computing; programmable controllers; system-on-chip; trusted computing; ICS; PLC processor; TAIGA; distributed industrial control system; hardware components; hardware-implemented interface controller; industrial control system-on-chip architecture; malware countermeasures; programmable logic controller; software components; trust isolation; trustworthy autonomic interface guardian architecture; Monitoring; Predictive models; Process control; Production; Sensors; Software; System-on-chip (ID#: 15-4756)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900096&isnumber=6900080

 

Szu-Yin Lin; Ping-Hsien Chou, "A Semi-distributed Reputation Mechanism Based on Dynamic Data-Driven Application System," e-Business Engineering (ICEBE), 2014 IEEE 11th International Conference on, pp.164,169, 5-7 Nov. 2014. doi:10.1109/ICEBE.2014.37 Abstract: Trust is one of the important issues related to unknown networks. A mechanism which can distinguish a trustworthy node from an untrustworthy one is essential, and its effectiveness depends on the accuracy of each node's reputation. Dynamics of trust often arise in a trusted network, causing intoxication and disguise of nodes and resulting in abnormal behaviors. This paper proposes a semi-distributed reputation mechanism based on a Dynamic Data-Driven Application System. It focuses on the dynamics of trust and the balance between the distributed nodes and the central controller. The experimental results show that the proposed mechanism uploads on average only 52.21% of the data compared with uploading everything, while still effectively handling the dynamics of trust.
Keywords: computer network security; trusted computing; abnormal behaviors; central controller; distributed nodes; dynamic data-driven application system; semidistributed reputation mechanism; trusted network; trustworthy node; untrustworthy node; Computer architecture; Distributed databases; Dynamic scheduling; Equations; Mathematical model; Measurement; Peer-to-peer computing; Dynamic Data-Driven Application System; Dynamics of Trust; Reputation and Trust-based Model (ID#: 15-4757)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6982075&isnumber=6982037
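One way a semi-distributed mechanism like the one above can cut uploads to the central controller is a drift-threshold policy: a node shares its local estimate only when it has changed enough to matter since the last report. The sketch below is illustrative, not the paper's mechanism; the `threshold` value and the reporting rule are assumptions.

```python
def make_reporter(threshold=0.1):
    """Illustrative semi-distributed policy: push the local trust estimate
    to the central controller only when it has drifted more than
    `threshold` since the last upload, reducing upload volume."""
    last = {"value": None}

    def report(estimate):
        if last["value"] is None or abs(estimate - last["value"]) > threshold:
            last["value"] = estimate
            return True   # upload to the central controller
        return False      # keep the update local
    return report
```

Small fluctuations stay local; only significant drifts (including the first observation) trigger an upload, which is the kind of behavior that yields a partial upload ratio rather than uploading every reading.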

 

Gupta, Subham Kumar; Rawat, Seema; Kumar, Praveen, "A Novel Based Security Architecture of Cloud Computing," Reliability, Infocom Technologies and Optimization (ICRITO) (Trends and Future Directions), 2014 3rd International Conference on, pp.1,6, 8-10 Oct. 2014. doi:10.1109/ICRITO.2014.7014676 Abstract: Cloud computing is considered the future of IT organizations. In contrast to conventional solutions, where all computing services are controlled through personnel, it transfers all computing resources to centralized large data centers, so users can enjoy services on demand at large scale. Small and medium-size organizations in particular can manage their projects by using cloud-based services and achieve productivity enhancements on limited budgets. Apart from these benefits, however, the cloud may not be fully trustworthy. Cloud computing does not keep data on the user's system, so data security is needed, and users pay progressively more attention to it because of this off-site storage of data. There are many approaches to retaining the confidentiality of data against untrusted cloud service providers; modern providers address the problem with encryption and decryption techniques, each with its merits and demerits. In this paper, the basic dilemma of cloud computing security is inspected, and a survey of various models for cloud security is presented. To ensure data security in the cloud, an efficient, accessible and adaptable cryptography-based scheme is suggested. In-depth security and performance inspection showed the proposed scheme to be highly efficient and robust against malicious data-alteration attacks. The proposed scheme achieves scalability as well as flexibility due to its hierarchical structure.
Keywords: Authentication; Cloud computing; Data models; Encryption (ID#: 15-4758)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7014676&isnumber=7014644

 

Zuxing Li; Oechtering, T.J.; Kittichokechai, K., "Parallel Distributed Bayesian Detection with Privacy Constraints," Communications (ICC), 2014 IEEE International Conference on, pp.2178, 2183, 10-14 June 2014. doi: 10.1109/ICC.2014.6883646 Abstract: In this paper, the privacy problem of a parallel distributed detection system vulnerable to an eavesdropper is proposed and studied in the Bayesian formulation. The privacy risk is evaluated by the detection cost of the eavesdropper which is assumed to be informed and greedy. It is shown that the optimal detection strategy of the sensor whose decision is eavesdropped on is a likelihood-ratio test. This fundamental insight allows for the optimization to reuse known algorithms extended to incorporate the privacy constraint. The trade-off between the detection performance and privacy risk is illustrated in a numerical example. The incorporation of physical layer privacy in the system design will lead to trustworthy sensor networks in future.
Keywords: Bayes methods; data privacy; distributed algorithms; maximum likelihood detection; optimisation; risk analysis; wireless sensor networks; detection cost; eavesdropper; likelihood ratio test; optimal detection strategy; optimization; parallel distributed Bayesian detection system; physical layer privacy; privacy constraint; privacy risk evaluation; trustworthy sensor network; Bayes methods; Light rail systems; Measurement; Optimization; Privacy; Security; Sensors (ID#: 15-4759)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883646&isnumber=6883277
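The likelihood-ratio test identified above as the sensor's optimal strategy can be sketched for a toy Gaussian shift-in-mean problem. The means, unit variance, and uniform costs are assumptions for illustration: under them, the Bayes rule is to decide H1 exactly when p(x|H1)/p(x|H0) exceeds the prior ratio P(H0)/P(H1).

```python
import math

def gauss_pdf(x, mu, sigma=1.0):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def lrt_decide(x, p0=0.5, mu0=0.0, mu1=1.0):
    """Bayes rule with uniform costs: decide H1 (return 1) iff the
    likelihood ratio p(x|H1)/p(x|H0) exceeds P(H0)/P(H1)."""
    ratio = gauss_pdf(x, mu1) / gauss_pdf(x, mu0)
    return 1 if ratio > p0 / (1 - p0) else 0
```

With equal priors the decision boundary falls midway between the means (x = 0.5 here); a stronger prior on H0 raises the threshold and pushes the boundary toward mu1, which is also where a privacy constraint on the eavesdropper's detection cost would enter as an extra term in the optimization.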

 

Elizabeth, B.L.; Ramya, K.; Prakash, A.J.; Uthariaraj, V.R., "Trustworthy Mechanisms for Selecting Cloud Service Providers," Recent Trends in Information Technology (ICRTIT), 2014 International Conference on, pp.1,5, 10-12 April 2014. doi:10.1109/ICRTIT.2014.6996182 Abstract: Cloud computing has changed the nature of IT and business. However, adoption issues remain, mainly due to a lack of transparency and control, and too many cloud service providers in the marketplace offer similar functionalities. To support consumers in identifying trustful cloud providers, trustworthy mechanisms for selecting cloud service providers are proposed in this paper. The proposed system is implemented using feedback and the credential attributes (QoS) of providers. A modified identity model is proposed to identify malicious feedback and improve the trust computation. Results show that trust computation using the proposed architecture is more efficient in terms of finding accurate trust-based providers.
Keywords: cloud computing; quality of service; trusted computing; QoS attribute; cloud computing; cloud service provider selection; quality of service; trust based providers; trust computation; trustworthy mechanism; Cloud computing; Computational modeling; Information technology; Market research; Mathematical model; Quality of service; Time factors; Identity model; Trust mechanisms; credential attributes; malicious feedbacks (ID#: 15-4760)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6996182&isnumber=6996087
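A minimal sketch of combining filtered user feedback with a provider's credential (QoS) score, in the spirit of the selection mechanism described above. The outlier filter, the 2-sigma cutoff, and the weight `w_fb` are illustrative assumptions, not the paper's identity model.

```python
from statistics import median, pstdev

def trust_score(feedbacks, qos, w_fb=0.6):
    """Illustrative sketch: drop ratings far from the median (possible
    malicious feedback), then blend the surviving mean feedback with a
    normalized QoS credential score. All values are in [0, 1]."""
    m, s = median(feedbacks), pstdev(feedbacks)
    kept = [f for f in feedbacks if abs(f - m) <= 2 * s] or feedbacks
    fb = sum(kept) / len(kept)
    return w_fb * fb + (1 - w_fb) * qos
```

For feedback [0.9, 0.8, 0.85, 0.1], the 0.1 rating is rejected as an outlier, so a single malicious low rating cannot drag an otherwise well-reviewed provider's score down.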

 

Karantjias, A.; Polemi, N.; Papastergiou, S., "Advanced Security Management System for Critical Infrastructures," Information, Intelligence, Systems and Applications, IISA 2014, The 5th International Conference on, pp.291,297, 7-9 July 2014. doi: 10.1109/IISA.2014.6878837 Abstract: The maritime sector is critical in terms of economic activities and commercial impact not only for the European society but more importantly for the Mediterranean EU Member States, especially under the current economic turmoil. Commercial ports are the main gateways and face increased requirements, responsibilities and needs in view of a secure and sustainable maritime digital environment. Therefore, they have to rely on complicated and advanced facilities, ICT infrastructure and trustworthy e-maritime services in order to optimize their operations. This paper aims at alleviating this gap on the basis of a holistic approach that addresses the security of the dual nature of ports' Critical Information Infrastructures (CIIs). In particular, it introduces a collaborative security management system (CYSM system), which enables ports' operators to: (a) model physical and cyber assets and interdependencies; (b) analyse and manage internal / external / interdependent physical and cyber threats / vulnerabilities; and (c) evaluate / manage physical and cyber risks against the requirements specified in the ISPS Code and ISO27001.
Keywords: risk management; sea ports; security of data; transportation; CII; CYSM system; European Union; European society; ICT infrastructure;ISO27001 standard; ISPS Code; Mediterranean EU Member States; collaborative security management system; commercial impact; commercial ports; critical information infrastructures; critical infrastructures; cyber assets; cyber risks; cyber threats; cyber vulnerabilities; economic activities; information and communication technology; maritime digital environment; maritime sector; trustworthy e-maritime services; Airports; Atmospheric modeling; Europe; Face; IEC standards; Marine vehicles; Security; collaboration; critical infrastructure; privacy; risk assessment; security management (ID#: 15-4761)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6878837&isnumber=6878713

 

Dang, Tianli; Yan, Zheng; Tong, Fei; Zhang, Weidong; Zhang, Peng, "Implementation of a Trust-Behavior Based Reputation System for Mobile Applications," Broadband and Wireless Computing, Communication and Applications (BWCCA), 2014 Ninth International Conference on, pp.221,228, 8-10 Nov. 2014. doi:10.1109/BWCCA.2014.52 Abstract: The sharp increase of the number of mobile applications attracts special attention on mobile application trust. It becomes more and more crucial for a user to know which mobile application is trustworthy to purchase, download, install, execute and recommend. This paper presents the design and implementation of a trust-behavior based reputation system for mobile applications based on an Android platform. The system can automatically evaluate a user's trust in a mobile application based on application usage and generate application reputation according to collected individual trust information. We implement the system and evaluate its performance based on a user study. The result shows that our system is effective with regard to trust/reputation evaluation accuracy, power efficiency and system usability.
Keywords: Databases; Mobile communication; Mobile handsets; Monitoring; Robustness; Usability; Reputation systems; mobile applications; trust; trust behavior (ID#: 15-4762)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7016072&isnumber=7015998

 

Singh, N.K.; Patel, Y.S.; Das, U.; Chatterjee, A., "NUYA: An Encrypted Mechanism for Securing Cloud Data from Data Mining Attacks," Data Mining and Intelligent Computing (ICDMIC), 2014 International Conference on, pp.1,6, 5-6 Sept. 2014. doi:10.1109/ICDMIC.2014.6954254 Abstract: Cloud computing is a vast and growing infrastructural pool which provides huge storage of data in one sphere. Organizations nowadays are in a marathon to move their whole systems into the cloud. Attackers can evaluate data for a long time to extract valued information and perform data-mining-based attacks on the cloud. In recent architectures the data is sited at a single or distributed cloud provider, which gives cloud providers and attackers the opportunity to gain unauthorized access to the cloud and the chance to analyze client data over long periods to extract sensitive information, violating clients' privacy. This paper proposes an approach that, first, maintains the confidentiality, integrity, and authentication of data stored in the cloud. Second, it presents a distributed cloud storage architecture, which includes a description of the Trusted Computing Group (TCG) and the Trusted Platform Module (TPM). It provides hardware authentication for a trustworthy computing platform and also uses Kerberos authentication to avoid software attacks. The proposed approach establishes file locality by clustering related data based on their physical distance and effective matching with client applications. It supports efficient clustering and reduces communication cost in large-scale cloud computing applications.
Keywords: cloud computing; communication complexity; cryptographic protocols; data integrity; data mining; pattern clustering; trusted computing; Kerberos authentication; NUYA; TCG; TPM; cloud data; clustering; communication cost; data authentication; data confidentiality; data integrity; data mining attacks; distributed storage cloud architecture; encrypted mechanism; file locality; hardware authentication; software attacks; trusted computing work group; trusted platform module; trustworthy computing platform; Authentication; Cloud computing; Cryptography; Data mining; Logic gates; Servers; Authentication; Cloud Computing; File Locality; Security; Trusted Platform Module (TPM) (ID#: 15-4763)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6954254&isnumber=6954218

 

Yean-Ru Chen; Sao-Jie Chen; Pao-Ann Hsiung; I-Hsin Chou, "Unified Security and Safety Risk Assessment - A Case Study on Nuclear Power Plant," Trustworthy Systems and their Applications (TSA), 2014 International Conference on, pp.22,28, 9-10 June 2014. doi: 10.1109/TSA.2014.13 Abstract: Critical systems have very stringent requirements on both security and safety. Recent mishaps such as the missing MH370 aircraft and the sunk Korean Sewol ferry go to show that our technology in safety and security risk assessment still need a more integrated approach. Nuclear plant meltdown in the recent Fukushima accident is also a typical example of insufficient risk assessments. This work is a case study on how a unified security and safety risk assessment methodology may be applied to a High Pressure Core Flooder (HPCF) system in a nuclear power plant. Individual risk security or safety assessments may overlook the possible higher risk associated with such critical systems. The case study shows how the proposed method provides a more accurate risk assessment compared to individual assessments.
Keywords: computer network security; nuclear power stations; power system security; risk analysis; Fukushima accident; HPCF system; Korean Sewol ferry;MH370 aircraft; high pressure core flooder system; nuclear power plant; safety risk assessment; unified security; Hazards; Inductors; Power generation; Risk management; Security; Valves (ID#: 15-4764)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6956707&isnumber=6956693

 

Oberle, A.; Larbig, P.; Kuntze, N.; Rudolph, C., "Integrity Based Relationships and Trustworthy Communication Between Network Participants," Communications (ICC), 2014 IEEE International Conference on, pp.610,615, 10-14 June 2014. doi: 10.1109/ICC.2014.6883386 Abstract: Establishing trust relationships between network participants by having them prove their operating system's integrity via a Trusted Platform Module (TPM) provides interesting approaches for securing local networks at a higher level. In the introduced approach on OSI layer 2, attacks carried out by already authenticated and participating nodes (insider threats) can be detected and prevented. Forbidden activities and manipulations in hard- and software, such as executing unknown binaries, loading additional kernel modules or even inserting unauthorized USB devices, are detected and result in an autonomous reaction of each network participant. The provided trust establishment and authentication protocol operates independently from upper protocol layers and is optimized for resource constrained machines. Well known concepts of backbone architectures can maintain the chain of trust between different kinds of network types. Each endpoint, forwarding and processing unit monitors the internal network independently and reports misbehaviours autonomously to a central instance in or outside of the trusted network.
Keywords: computer network security; cryptographic protocols; trusted computing; OSI layer 2; authenticated node; authentication protocol; insider threat; integrity based relationship; network participants; operating system integrity; participating node; trust establishment; trusted platform module; trustworthy communication; Authentication; Encryption; Payloads; Protocols; Servers; Unicast; Cyber-physical systems; Security; authentication; industrial networks; integrity; protocol design; trust (ID#: 15-4765)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883386&isnumber=6883277

 

Mohaisen, A.; Huy Tran; Chandra, A.; Yongdae Kim, "Trustworthy Distributed Computing on Social Networks," Services Computing, IEEE Transactions on, vol.7, no.3, pp. 333, 345, July-Sept. 2014. doi: 10.1109/TSC.2013.56 Abstract: In this paper we investigate a new computing paradigm, called SocialCloud, in which computing nodes are governed by social ties driven from a bootstrapping trust-possessing social graph. We investigate how this paradigm differs from existing computing paradigms, such as grid computing and the conventional cloud computing paradigms. We show that incentives to adopt this paradigm are intuitive and natural, and security and trust guarantees provided by it are solid. We propose metrics for measuring the utility and advantage of this computing paradigm, and using real-world social graphs and structures of social traces; we investigate the potential of this paradigm for ordinary users. We study several design options and trade-offs, such as scheduling algorithms, centralization, and straggler handling, and show how they affect the utility of the paradigm. Interestingly, we conclude that whereas graphs known in the literature for high trust properties do not serve distributed trusted computing algorithms, such as Sybil defenses-for their weak algorithmic properties, such graphs are good candidates for our paradigm for their self-load-balancing features.
Keywords: cloud computing; computer bootstrapping; resource allocation; social networking (online); trusted computing; SocialCloud; design options; self-load-balancing features; social networks; trust-possessing social graph bootstrapping; trustworthy distributed computing; Biological system modeling; Cloud computing; Computational modeling; Grid computing; Processor scheduling; Servers; Social network services; Distributed computing; social computing; trust (ID#: 15-4766)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6682915&isnumber=6893066

 

Miguel, J.; Caballe, S.; Xhafa, F.; Prieto, J.; Barolli, L., "Towards a Normalized Trustworthiness Approach to Enhance Security in On-Line Assessment," Complex, Intelligent and Software Intensive Systems (CISIS), 2014 Eighth International Conference on, pp. 147-154, 2-4 July 2014. doi: 10.1109/CISIS.2014.22 Abstract: This paper proposes an approach to enhance information security in on-line assessment based on a normalized trustworthiness model. Among collaborative e-Learning drawbacks which are not completely solved, we have investigated information security requirements in on-line assessment (e-assessment). To the best of our knowledge, security requirements cannot be reached with technology alone, therefore, new models such as trustworthiness approaches can complete technological solutions and support e-assessment requirements for e-Learning. Although trustworthiness models can be defined and included as a service in e-assessment security frameworks, there are multiple factors related to trustworthiness which cannot be managed without normalization. Among these factors we discuss trustworthiness multiple sources, different data source formats, measure techniques and other trustworthiness factors such as rules, evolution or context. Hence, in this paper, we justify why trustworthiness normalization is needed and a normalized trustworthiness model is proposed by reviewing existing normalization procedures for trustworthy values applied to e-assessments. Eventually, we examine the potential of our normalized trustworthiness model in a real online collaborative learning course.
Keywords: computer aided instruction; educational administrative data processing; educational courses; groupware; security of data; collaborative e-learning; data source formats; e-assessment requirements; information security enhancement; measure techniques; normalized trustworthiness approach; online assessment; real online collaborative learning course; trustworthiness factors; trustworthiness multiple sources; Buildings; Context; Data models; Electronic learning; Information security; Vectors; collaborative learning; e-assessment; information security; normalization; trustworthiness (ID#: 15-4767)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6915510&isnumber=6915447

 

Eun Joo Kim; Jong Dae Park, "A study on a new method of sending an E-mail with an attachment using a wireless terminal," Communication Systems, Networks & Digital Signal Processing (CSNDSP), 2014 9th International Symposium on, pp. 23-27, 23-25 July 2014. doi: 10.1109/CSNDSP.2014.6923791 Abstract: This paper provides a method for forwarding email including an attached file in a wireless communication terminal. In particular, for a mobile communication terminal suited to wireless communications such as WiBro, where channel bandwidth between the base station and the terminal is limited, unnecessary channel bandwidth occupancy needs to be reduced.
Keywords: Internet; broadband networks; electronic mail; mobile communication; WiBro; channel bandwidth occupancy; e-mail attachment; mobile communication terminal; wireless communication terminal; Electronic mail; Mobile communication; Protocols; Receivers; Servers; Transmitters; Wireless communication; E-Mail system; smart SMTP(Simple Mail Transfer Protocol) (ID#: 15-4768)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6923791&isnumber=6923783

 

Tao Zhang; Jianfeng Ma; Ning Xi; Ximeng Liu; Zhiquan Liu; Jinbo Xiong, "Trustworthy Service Composition in Service-Oriented Mobile Social Networks," Web Services (ICWS), 2014 IEEE International Conference on, pp. 684-687, June 27 2014-July 2 2014. doi: 10.1109/ICWS.2014.102 Abstract: In service-oriented mobile social networks (S-MSN), many location-based services are developed to provide various applications to social participants. Services can in turn be composed with the help of these participants. However, the composite structure, the subjective interpretation of trust demand, and the opportunistic connectivity make service composition a challenging task in S-MSN. In this paper, we propose a novel approach to enable trustworthy service evaluation and invocation during the process of composition. By analyzing dependency relationships, our approach can evaluate the trust degree of each service in a decentralized manner based on a lattice-based trust model to prevent data from being transmitted to untrustworthy counterparts. Besides, service consumers and vendors are able to specify their global and local constraints on the trust degree of service components on demand for more effective composition. Finally, by introducing acquaintances to the neighbors iteratively, social participants form a trust-aware acquaintance graph to forward invocation messages.
Keywords: mobile computing; service-oriented architecture; social networking (online); trusted computing; S-MSN; composite structure; service-oriented mobile social networks; trust degree; trust-aware acquaintance graph; trustworthy service composition; Computational modeling; Educational institutions; Equations; Mobile communication; Mobile computing; Social network services; System-on-chip; acquaintance graph; service composition; service evaluation; service-oriented mobile social network; trust (ID#: 15-4769)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6928962&isnumber=6928859

 

Oliveira, E.; Cardoso, H.; Urbano, J.; Rocha, A.P., "Trustworthy Agents for B2B Operations under Normative Environment," Systems and Informatics (ICSAI), 2014 2nd International Conference on, pp. 252-257, 15-17 Nov. 2014. doi: 10.1109/ICSAI.2014.7009295 Abstract: Agents intending to be involved in joint B2B operations need to rely on trust measures pointing to possible future solid and secure partnerships. Using Multi-Agent Systems (MAS) as a paradigm for an electronic institution framework enables both to simulate and facilitate the process of autonomous agents, as either enterprises or individual representatives, reaching joint agreements through automatic negotiation. In the heart of the MAS-based electronic institution framework, a Normative Environment provides monitoring capabilities and enforcement mechanisms influencing agents' behavior during joint activities. Moreover, it makes available relevant data that can be important for building up context-dependent agent trust models, which, consequently, also influence possible future negotiations leading to new and safer agreements. To support agents' information generation, monitoring and fusion, we here present the ANTE platform, a software MAS integrating Trust models with negotiation facilities and Normative environments, for the creation and monitoring of agent-based networks.
Keywords: electronic commerce; multi-agent systems; B2B operations; MAS; electronic institution framework; multiagent systems; normative environment; trustworthy agents; Computational modeling; Context; Contracts; Joints; Monitoring; Multi-agent systems; Software (ID#: 15-4770)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7009295&isnumber=7009247

 

Fei Hao; Geyong Min; Man Lin; Changqing Luo; Yang, L.T., "MobiFuzzyTrust: An Efficient Fuzzy Trust Inference Mechanism in Mobile Social Networks," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 11, pp. 2944-2955, Nov. 2014. doi: 10.1109/TPDS.2013.309 Abstract: Mobile social networks (MSNs) facilitate connections between mobile users and allow them to find other potential users who have similar interests through mobile devices, communicate with them, and benefit from their information. As MSNs are distributed public virtual social spaces, the available information may not be trustworthy to all. Therefore, mobile users are often at risk since they may not have any prior knowledge about others who are socially connected. To address this problem, trust inference plays a critical role for establishing social links between mobile users in MSNs. Taking into account the nonsemantical representation of trust between users of the existing trust models in social networks, this paper proposes a new fuzzy inference mechanism, namely MobiFuzzyTrust, for inferring trust semantically from one mobile user to another that may not be directly connected in the trust graph of MSNs. First, a mobile context including an intersection of prestige of users, location, time, and social context is constructed. Second, a mobile context aware trust model is devised to evaluate the trust value between two mobile users efficiently. Finally, the fuzzy linguistic technique is used to express the trust between two mobile users and enhance human understanding of trust. A real-world mobile dataset is adopted to evaluate the performance of the MobiFuzzyTrust inference mechanism. The experimental results demonstrate that MobiFuzzyTrust can efficiently infer trust with a high precision.
Keywords: fuzzy reasoning; fuzzy set theory; graph theory; mobile computing; security of data; social networking (online); trusted computing; MSN; MobiFuzzyTrust inference mechanism; distributed public virtual social spaces; fuzzy linguistic technique; fuzzy trust inference mechanism; mobile context aware trust model; mobile devices; mobile social networks; mobile users; nonsemantical trust representation; real-world mobile dataset; social links; trust graph; trust models; trust value evaluation; Computational modeling; Context; Context modeling; Mobile communication; Mobile handsets; Pragmatics; Social network services; Mobile social networks; fuzzy inference; linguistic terms; mobile context; trust (ID#: 15-4771)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6684155&isnumber=6919360
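The fuzzy linguistic step described in this abstract, expressing a numeric trust value as a human-readable term, can be illustrated with a small generic sketch. This is not taken from the paper: the term names, triangular membership functions, and the [0, 1] trust scale are all assumptions for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside (a, c), peaking at 1 when x == b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic terms over an assumed [0, 1] trust scale.
TERMS = {
    "low":    lambda x: tri(x, -0.5, 0.0, 0.5),
    "medium": lambda x: tri(x, 0.0, 0.5, 1.0),
    "high":   lambda x: tri(x, 0.5, 1.0, 1.5),
}

def linguistic_trust(value):
    """Map a numeric trust value to the linguistic term with highest membership."""
    return max(TERMS, key=lambda term: TERMS[term](value))
```

Under these assumed terms, `linguistic_trust(0.9)` yields `"high"`, conveying the semantic reading of trust the paper argues for over a bare number.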

 

Mezni, H., "Towards Trustworthy Service Adaptation: An Ontology-Based Cross-Layer Approach," Software Engineering and Service Science (ICSESS), 2014 5th IEEE International Conference on, pp. 90-94, 27-29 June 2014. doi: 10.1109/ICSESS.2014.6933520 Abstract: Although several approaches have been proposed towards self-adaptation of Web services, most of them work in isolation and few of them deal with cross-layer and trust issues. Indeed, the complex layered nature of service-based systems frequently leads to service failure and conflicting adaptation. To tackle this problem, we propose an ontology-based categorization of service behavior across all the functional layers. The proposed ontology provides support for cross-layer self-adaptation by facilitating reasoning about events to identify the real source of service failure, and reasoning about self-adaptation actions to check integrity and compatibility of self-adaptation with constraints imposed by each layer.
Keywords: Web services; ontologies (artificial intelligence); trusted computing; Web service self-adaptation; complex layered service-based systems; conflicting adaptation; cross-layer issues; cross-layer self-adaptation compatibility; cross-layer self-adaptation integrity; functional layers; ontology-based cross-layer approach; ontology-based service behavior categorization; service failure; trust issues; trustworthy service adaptation; Context; Monitoring; Ontologies; Quality of service; Semantics; Service-oriented architecture; Autonomic computing; Ontology; Trustworthiness; WS-Policy; cross-layer adaptation (ID#: 15-4772)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6933520&isnumber=6933501

 

Haiying Shen; Guoxin Liu, "An Efficient and Trustworthy Resource Sharing Platform for Collaborative Cloud Computing," Parallel and Distributed Systems, IEEE Transactions on, vol. 25, no. 4, pp. 862-875, April 2014. doi: 10.1109/TPDS.2013.106 Abstract: Advancements in cloud computing are leading to a promising future for collaborative cloud computing (CCC), where globally-scattered distributed cloud resources belonging to different organizations or individuals (i.e., entities) are collectively used in a cooperative manner to provide services. Due to the autonomous features of entities in CCC, the issues of resource management and reputation management must be jointly addressed in order to ensure the successful deployment of CCC. However, these two issues have typically been addressed separately in previous research efforts, and simply combining the two systems generates double overhead. Also, previous resource and reputation management methods are not sufficiently efficient or effective. By providing a single reputation value for each node, the methods cannot reflect the reputation of a node in providing individual types of resources. By always selecting the highest-reputed nodes, the methods fail to exploit node reputation in resource selection to fully and fairly utilize resources in the system and to meet users' diverse QoS demands. We propose a CCC platform, called Harmony, which integrates resource management and reputation management in a harmonious manner. Harmony incorporates three key innovations: integrated multi-faceted resource/reputation management, multi-QoS-oriented resource selection, and price-assisted resource/reputation control. The trace data we collected from an online trading platform implies the importance of multi-faceted reputation and the drawbacks of highest-reputed node selection. Simulations and trace-driven experiments on the real-world PlanetLab testbed show that Harmony outperforms existing resource management and reputation management systems in terms of QoS, efficiency and effectiveness.
Keywords: cloud computing; groupware; quality of service; resource allocation; trusted computing; CCC; Harmony platform; PlanetLab; QoS demands; collaborative cloud computing; globally-scattered distributed cloud resources; integrated multifaceted resource-reputation management; multi-QoS-oriented resource selection; node selection; online trading platform; price-assisted resource-reputation control; quality of service; reputation value; trustworthy resource sharing platform; Cloud computing; Collaboration; Indexes; Merchandise; Organizations; Quality of service; Resource management; Distributed systems; cloud computing; distributed hash tables; reputation management; resource management (ID#: 15-4773)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6495453&isnumber=6750096
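The abstract's critique of always choosing the highest-reputed node suggests reputation-weighted selection over per-resource-type scores. The following sketch is not the Harmony system itself; the per-resource reputation table and proportional weighting are illustrative assumptions.

```python
import random

def pick_node(reputation, resource, rng=random):
    """Choose a provider for `resource` with probability proportional to its
    reputation for that resource type, instead of always taking the single
    top-reputed node, so load spreads more fairly across providers.
    `reputation` maps node -> {resource_type: score}."""
    nodes = [n for n, scores in reputation.items() if scores.get(resource, 0) > 0]
    weights = [reputation[n][resource] for n in nodes]
    return rng.choices(nodes, weights=weights, k=1)[0]
```

With multi-faceted (per-resource) scores, a node with a strong CPU reputation but no storage reputation is still eligible for CPU requests while never being selected for storage, which a single scalar reputation cannot express.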

 

Johnson, L.; Choi Han-Lim; How, J.P., "Convergence Analysis of the Hybrid Information and Plan Consensus Algorithm," American Control Conference (ACC), 2014, pp. 3171-3176, 4-6 June 2014. doi: 10.1109/ACC.2014.6859325 Abstract: This paper presents a rigorous analysis of the Hybrid Information and Plan Consensus (HIPC) Algorithm previously introduced in Ref. [1]. HIPC leverages the ideas of local plan consensus and implicit coordination to exploit the features of both paradigms. Prior work on HIPC has empirically shown that it reduces the convergence time and number of messages required for distributed task allocation algorithms. This paper further explores HIPC to rigorously prove convergence and provides a worst case on the time to convergence. This worst-case bound is no slower than a comparable plan consensus algorithm, Bid Warped CBBA [2], requiring two times the number of tasks times the network diameter iterations for convergence. Additionally, the analysis of convergence highlights why the performance of HIPC is significantly better than this on average. Convergence bounds of this type are essential for creating trustworthy autonomy and for guaranteeing performance when using these algorithms in the field.
Keywords: convergence; distributed algorithms; distributed control; iterative methods; mobile robots; multi-robot systems; HIPC algorithm; bid warped CBBA; convergence analysis; convergence bounds; convergence time; distributed task allocation algorithms; hybrid information and plan consensus algorithm; implicit coordination; local plan consensus; network diameter iterations; worst-case bound; Algorithm design and analysis; Bismuth; Convergence; Nickel; Planning; Prediction algorithms; Resource management; Agents-based systems; Autonomous systems; Cooperative control (ID#: 15-4774)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859325&isnumber=6858556

 

Kouno, Kazuaki; Aikebaier, Ailixier; Enokido, Tomoya; Takizawa, Makoto, "Trustworthiness-Based Group Communication Protocols," Network-Based Information Systems (NBiS), 2014 17th International Conference on, pp. 490-494, 10-12 Sept. 2014. doi: 10.1109/NBiS.2014.52 Abstract: In distributed applications, a group of multiple processes cooperate with each other by exchanging messages in underlying networks. A message sent by each process has to be delivered to every process in a group. In this paper, we discuss a protocol for reliably and efficiently transmitting messages to every operational process in a group. We assume that each process can send messages to only neighboring processes like wireless networks and scalable peer-to-peer (P2P) overlay networks. Here, a process sends a message to its neighboring processes and then each neighboring process forwards the message to its neighboring processes. In this paper, we propose a trustworthiness-based group communication protocol where only trustworthy neighboring processes forward messages. In order to reduce the number of messages, a trustworthy neighboring process is a process which can more reliably forward messages. We discuss how to obtain the trustworthiness of a process in networks and forward messages to every process through trustworthy processes. We discuss a reactive type of protocol to reliably deliver messages to a destination process in a wireless network.
Keywords: Peer-to-peer computing; Protocols; Relays; Reliability; Wireless networks; Wireless sensor networks; Broadcast protocol; Group communicate protocol; Trustworthiness (ID#: 15-4775)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7023999&isnumber=7023898
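The forwarding rule this abstract describes, flooding a message but relaying only through trustworthy neighbors, can be sketched generically. This is not the authors' protocol: the graph/trust representation and the fixed threshold are assumptions for illustration.

```python
from collections import deque

def trusted_flood(graph, trust, source, threshold=0.5):
    """Flood a message from `source`, forwarding only through neighbors whose
    trustworthiness meets `threshold`. Returns the set of processes reached.
    `graph` maps each process to its neighbors; `trust` maps process -> score."""
    reached, queue = {source}, deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in graph[node]:
            if neighbor not in reached and trust[neighbor] >= threshold:
                reached.add(neighbor)   # neighbor receives and will relay
                queue.append(neighbor)
    return reached
```

In a topology where an untrusted process is the only path to some node, that node is unreachable under this rule, which is exactly the reliability/overhead trade-off the paper's trustworthiness estimation is meant to manage.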



SoS Lablet Quarterly Meeting - CMU

 

 


Pittsburgh, PA -- July 15, 2015

Lablet Researchers meet at CMU, Discuss Science of Security and Science of Privacy

 

The Science of Security (SoS) quarterly Lablet meeting, sponsored by NSA, was hosted by the Carnegie Mellon University Lablet on July 14 and 15, 2015.  Quarterly meetings are held to share research, coordinate, present interim findings, and stimulate thought and discussion about the Science of Security.  Two panel sessions produced lively discussions about the nature of the Science of Security and the developing Science of Privacy.

 

Bill Scherlis, Principal Investigator at CMU, coordinated the talks on cybersecurity research and the updates from the four Lablets. He described the purpose of the meeting as “to showcase technical activity and progress through Lablet and federal agency technical talks, research poster displays and the sharing of plans and project information.”  The focus of Lablet research is on five Hard Problems, picked for their high level of technical challenge, significant operational value, and likelihood of benefiting from scientific research methods and improved measurement capabilities.

 

Stephanie Yannacci, Science of Security program manager, described the fundamental challenge of moving cybersecurity from art to science: making it rigorous, repeatable, and predictable, with lasting impact. She attributed the growth of the Science of Security community to the first two years of Lablet efforts, noting the “cascading effect” as new PhD graduates from the Lablets move to new universities and continue to contribute to the Science of Security.

 

A pair of speakers from the Department of Energy Office of Advanced Scientific Computing Research (ASCR) presented the keynote.  Steve Binkley and Dr. Robinson Pino outlined issues and goals for cybersecurity R&D.  Binkley indicated DOE is increasing its attention to cybersecurity due to its large footprint and sensitive systems. Specific technology issues in high performance computing (HPC) include the increased use of simulations across science and national security, the drive from petascale to exascale computing, the nexus of big data, and Moore’s Law, driving a shift to quantum, neuromorphic, and probabilistic computing.  He described, for example, the idea of a petaflop in a 19” rack.  Pino noted that DOE has no basic cybersecurity research program in place yet, but that it has sponsored two workshops on cybersecurity in high performance computing.  He defined “Scientific Computing Integrity” as the ability to have high confidence in the scientific data that is collected and stored.

 

The first panel session took up various views on the definition of the Science of Security and produced a lively discussion.


"What is Science of Security?" Panel Discussion

 

PIs or their representatives summarized the activities that took place at their Lablets over the last year, including projects against the hard problems, research papers published, cooperation with other departments and institutions, and outreach efforts.  Jonathan Katz noted that the UMD Lablet was working on 10 projects dealing with the hard problems, and that the Lablet’s strengths were in Human Behavior and Security Metrics. David Nicol of UIUC reported on 5 projects associated with the hard problems and identified outreach efforts including HotSoS 2015, an SoS graduate seminar, and SoS summer interns.  Travis Breaux reported on CMU activities, noting that Composability and Human Behavior are their hard-problem focus, and that their work involves 15 senior researchers and partner universities that have formed teams drawn from diverse disciplines. Laurie Williams reported that the NCSU Lablet has 4 projects in Resiliency, 2 in Policy, 3 in Human Behavior, and 2 in Metrics, and that their 20 publications involved 55 authors from 13 institutions.

 

Individual researchers from each Lablet and their teams presented materials from work addressing the five Hard Problems in cybersecurity.  The host Carnegie Mellon Lablet gave an update on its Security Behavior Observatory and presented current research about human factors, insider threats, and logic programming for social networking sites.  User behavior in predictive models, passwords, and cybersecurity circumvention were the topics presented by the University of Illinois. Maryland contributed presentations on certificate management and PKI.  NC State presented an update on its bibliometric studies of Science of Security publications and on developers’ adoption and use of security tools.

 

At the end of the second day, another panel discussed the Science of Privacy. 


"Is there a Science of Privacy?" 

 

In addition, almost a dozen research posters were presented.   Each of the Lablets’ PIs reported on the projects underway at their Lablet, the number of papers published and how many other institutions contributed to the papers, and SoS community outreach activities. 

 

The next quarterly meeting will be held at the University of Maryland, College Park.


Snippets from the Poster Session

The poster session highlighted various Lablet research topics.

Upcoming Events of Interest

 

 

Mark your calendars!

This section features a wide variety of upcoming security-related conferences, workshops, symposiums, competitions, and events happening in the United States and the world. This list also includes several past events with links to proceedings or summaries of the actual activities.

Note: The events may also be found on the SoS Calendar, located by clicking the 'Calendar' tab on the left-hand navigation bar.



HAISA 2015 International Symposium on Human Aspects of Information Security & Assurance
This symposium, the ninth in our series, will bring together leading figures from academia and industry to present and discuss the latest advances in information security from research and commercial perspectives.
Date: July 1 – 3
Location: Lesvos, Greece
URL: http://haisa.org/

14th European Conference on Cyber Warfare and Security ECCWS
The ECCWS 2015 Conference aims to bring together researchers, practitioners and industrialists who are interested in various cyber security aspects. This Conference is expected to attract people from the cyber security community and those researching into cyber war technology to stimulate interesting discussions about the latest developments and technologies.
Date: July 2 - 3
Location: Hatfield, UK
URL: http://academic-conferences.org/eccws/eccws2015/eccws15-home.htm


SANS Capital City 2015
Information security training in Washington DC from SANS Institute, the global leader in cybersecurity training. SANS Capital City 2015 features hands-on, immersion-style cybersecurity training courses for security professionals at all levels. Many of these security courses are aligned with DoD Directive 8570 and most courses at this event are associated with GIAC Certifications.
Date: July 6 – 11
Location: Washington D.C.
URL: http://www.sans.org/event/capital-city-2015


DIMVA 2015 International Conference on Detection of Intrusions and Malware & Vulnerability Assessment
The annual DIMVA conference serves as a premier forum for advancing the state of the art in intrusion detection, malware detection, and vulnerability assessment.
Date: July 9 – 10
Location: Milano, Italy
URL: http://www.dimva2015.it/

Converge Information Security Conference
Converge Conference is the premier information security and technology gathering. Taking place in the heart of the domestic auto industry, a prominent manufacturing hub, and one of America's great tech cities, Converge Conference spotlights information security for two informative and idea-sharing days. It's a venue for professionals to come together from diverse technological backgrounds and discuss issues that every organization faces.
Date: July 16 – 17
Location: Detroit, MI
URL: http://convergeconference.org/main/

Security Congress APAC 2015
Since 2006, the (ISC)² annual SecureAsia conference has been an excellent platform for information security professionals to share and exchange their knowledge and experience as well as discuss the latest security and technology breakthroughs. In celebration of SecureAsia’s 10th anniversary, it returns to Manila, Philippines as (ISC)² Security Congress APAC - an even bigger event for information security professionals across Asia-Pacific. Don't miss this chance to learn from the experiences of luminaries from government, industry and academia and also network with your peers.
Date: July 28 – 29
Location: Manila, Philippines
URL: http://apaccongress.isc2.org/

Black Hat USA 2015
Black Hat - built by and for the global InfoSec community - returns to Las Vegas for its 18th year. This six-day event begins with four days of intense Trainings for security practitioners of all levels (August 1-4), followed by the two-day main event including over 100 independently selected Briefings, Business Hall, Arsenal, Pwnie Awards, and more (August 5-6).
Date: August 1 – 6
Location: Las Vegas, NV
URL: https://www.blackhat.com/us-15/

International Conference on Security of Smart cities, Industrial Control System and Communications (SSIC 2015)
SSIC 2015 is the first annual conference in the area of cyber security focusing on industrial control systems, cloud platforms and smart cities. City and industrial control infrastructures are changing with new interconnected systems for monitoring, control and automation. The goal of SSIC is to attract cyber security researchers, industry practitioners, policy makers, and users to exchange ideas, techniques and tools, and share experience related to all practical and theoretical aspects of communications and network security.
Date: August 5 – 7
Location: Shanghai, China
URL: http://www.ssic-conf.org/2015/quickstart/

SAC Summer School (S3)
In 2015, for the first time, SAC will be preceded by the SAC Summer School (S3). The purpose of S3 is to provide participants with an opportunity to gain in-depth knowledge of specific areas of cryptography related to the current SAC topics by bringing together world-class researchers who will give extended talks in their areas of specialty. S3 is designed to create a focused learning environment that is also relaxed and collaborative. The SAC Summer School is open to all attendees, and may be of particular interest to students, postdocs, and other early researchers.
Date: August 10 – 12
Location: New Brunswick, Canada
URL: http://www.mta.ca/sac2015/s3.html

24th USENIX Security Symposium
The USENIX Security Symposium brings together researchers, practitioners, system administrators, system programmers, and others interested in the latest advances in the security and privacy of computer systems and networks.
Date: August 12 – 14
Location: Washington D.C.
URL: https://www.usenix.org/conference/usenixsecurity15

5th Annual Cyber Security Training & Technology Forum (CSTTF)
CSTTF is designed to further educate Cyber, Information Assurance, Information Management, Information Technology, and Communications professionals. This will be accomplished through a number of in-depth cyber and technology sessions, as well as hands-on government/industry exhibits and demos. Don’t miss this local, educational, and cost-effective cyber and technology event.
Date: August 19 – 20
Location: Colorado Springs, CO
URL: http://www.fbcinc.com/e/csttf/

44CON London
44CON London is the UK’s largest combined annual Security Conference and Training event. We will have a fully dedicated conference facility, including secure wi-fi with high bandwidth Internet access, catering, a private bar and a daily Gin O’Clock break.  44CON London will comprise two main presentation tracks, two workshop tracks and a mixed presentation/workshop track over the two full days, covering technical topics. The Hidden track is run under the Chatham House Rule and we’re not going to tell you about that.
Date: September 9 – 11
Location: London, United Kingdom
URL: http://44con.com/

Global Identity Summit
The Global Identity Summit focuses on identity management solutions for the corporate, defense and homeland security communities.
Date: September 21 – 24
Location: Tampa, FL
URL: http://events.jspargo.com/id15/Public/Enter.aspx


(ID#:15-5615)

