Science of Security (SoS) Newsletter (2014 - Issue 6)


Each issue of the SoS Newsletter highlights achievements in current research, as conducted by various global members of the Science of Security (SoS) community. All presented materials are open-source, and may link to the original work or web page for the respective program. The SoS Newsletter aims to showcase the great deal of exciting work going on in the security community, and hopes to serve as a portal between colleagues, research projects, and opportunities.

Please feel free to click on any section of the Newsletter below; each link will bring you to the corresponding subsection:

General Topics of Interest

General Topics of Interest reflects today's most widely discussed challenges and issues in the Cybersecurity space. GToI includes news items related to Cybersecurity, updated information regarding academic SoS research, interdisciplinary SoS research, profiles of leading researchers in the field of SoS, and global research being conducted on related topics.

Publications

The Publications of Interest provides available abstracts and links for suggested academic and industry literature discussing specific topics and research problems in the field of SoS. Please check back regularly for new information, or sign up for the CPSVO-SoS Mailing List.

Table of Contents (Issue 6)

(ID#:14-3137)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


In the News


This section features topical, current news items of interest to the international security community. These articles and highlights are selected from various popular science and security magazines, newspapers, and online sources.

(ID#:14-3328)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


International News


"ICS-CERT in NTP flaw alert", Infosecurity Magazine, 22 December 2014. The Network Time Protocol (NTP), used by machines to set accurate clocks, has been recently discovered to contain "several remotely exploitable vulnerabilities", according to Infosecurity Magazine. NTP servers rose to concern after being targeted by 2014 DDoS attacks, which then declined following server patches. (ID# 14-70047)
See http://www.infosecurity-magazine.com/news/icscert-in-ntp-flaw-alert/.

"Bitcoin exec gets two years over illegal Silk Road funny money trading", The Register UK, 22 December 2014. Charlie Shrem, former Bitcoin Foundation executive, will serve a two year prison sentence for illegal currency trading. The now-shuttered Silk Road black market site was worth $19 million at the time it was seized. (ID# 14-70048)
See http://www.theregister.co.uk/2014/12/22/bitcoin_exec_gets_two_years_for_role_in_silk_road_trading/.

"Sneaky Russian hackers slurped $15 million from banks", The Register UK, 22 December 2014. The Anunak hackers group targets Russian and former CIS countries' banks and payment systems, and has stolen more than $15 million, most of which has occurred during the last 6 months. Anunak attackers gain access to internal network of banks, so that money is stolen not from customers, but from the banks. (ID# 14-70049)
See http://www.theregister.co.uk/2014/12/22/russian_cyber_heist_gang_rakes_in_15m/.

"NUKE HACK fears prompt S Korea cyber-war exercise", The Register, 22 December 2014. As a precaution following last week's online leak of plant equipment designs and manuals, South Korean firm Korea Hydro and Nuclear Power Co (KHNP) will run "cyber-war drills". Hackers released ominous warnings to stay away from the KHNP-run reactors over the holidays. (ID# 14-70050)
See http://www.theregister.co.uk/2014/12/22/nuclear_hack_threats_prompts_skorea_cyber_war_exercise/.

"Boeing turns to BlackBerry for help creating super-secret, self-destructing 'Black' smartphones", ZDnet, 22 December 2014. Boeing, known for its aviation and defense work, teams up with Canadian company Blackberry to develop a self-destructing smartphone for government use. The DoD currenty approves of certain Blackberry models on its networks, while NSA allows Samsung Galaxy devices that use Knox. (ID# 14-70051)
See http://www.zdnet.com/article/boeing-turns-to-blackberry-for-help-creating-super-secret-self-destructing-black-smartphone/.

"Hacker posts more S. Korean reactor info on Internet", Yonhap News Korea, 21 December 2014. Blueprints of South Korean nuclear reactors were leaked online, with warnings of more unauthorized releases unless authorities shut down the reactors. This has been the fourth online leak since December 15th, though none have directly affected the safety of the reactors. (ID# 14-70052)
See http://english.yonhapnews.co.kr/national/2014/12/21/94/0302000000AEN20141221003800315F.html.

"ISIS likely behind cyber-attack unmasking Syrian rebels", Infosecurity Magazine, 20 December 2014. Fears mount that The Islamic State in Iraq and Syria (ISIS) is adding cyber-warfare to its list of destructive tactics. Raqqah is being Slaughtered Silently (RSS), an advocacy group for documenting ISIS human rights abuses, has been targeted by a spearfishing email containing an infected slideshow attachment. The group believes that the malware's purpose is to send RSS's location details to ISIS militants. (ID# 14-70053)
See http://www.infosecurity-magazine.com/news/isis-likely-behind-cyberattack/.

"Trojan program based on ZeuS targets 150 banks, can hijack webcams", Computer World, 19 December 2014. Bank users around the world are targets for the Chthonic malware, based on the ZeuS banking malware. The malware modifies web pages, known as web injection, opened by customers. The malware then uses fake web forms to obtain sensitive information. (ID# 14-70054)
See http://www.computerworld.com/article/2861399/trojan-program-based-on-zeus-targets-150-banks-can-hijack-webcams.html.
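
As an illustration of the web-injection technique described above, the short Python sketch below shows how an inject rule rewrites a page before the victim sees it. Everything in it is hypothetical (the page, field names, and rule are invented, not taken from Chthonic); real banking trojans apply such rules inside the browser process.

    # Hypothetical "web inject" rule: an anchor string plus extra HTML to
    # splice in after it, adding a fake field to an otherwise normal page.
    LOGIN_PAGE = """\
    <form action="/login" method="post">
      <input name="username">
      <input name="password" type="password">
    </form>
    """

    ANCHOR = '<input name="password" type="password">'
    INJECTED = '<input name="atm_pin" placeholder="Confirm ATM PIN">'

    def apply_webinject(page: str) -> str:
        # Values typed into the injected field would go to the attacker.
        return page.replace(ANCHOR, ANCHOR + "\n  " + INJECTED)

    print(apply_webinject(LOGIN_PAGE))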

"Critical flaw hits millions of home routers", Infosecurity Magazine UK, 19 December 2014. A flaw in several home router models, Misfortune Cookie, makes vulnerable millions of customers across 189 countries. Attackers would be able to remotely control compromised routers using admin privileges. (ID# 14-70055)
See http://www.infosecurity-magazine.com/news/critical-flaw-hits-millions-of/.

"Icann spear fishing attacks strikes at the heart of the internet", Infosecurity Magazine UK, 18 December 2014. Attackers were able to gain administrative access to files in the Centralized Zone Data System (CZDS), which experts say could have significant impact on root DNS servers and processes. (ID# 14-70056)
See http://www.infosecurity-magazine.com/news/icann-spear-phishing-attack/.

"Hidden backdoor in up to 10m Android phones", SC Magazine UK, 18 December 2014. Phones produced by Chinese manufacturer Coolpad have hidden backdoors installed, discovered by Palo Alto security firm. In response, Coolpad claims the backdoors are for "internal testing", but experts are skeptical. (ID# 14-70057)
See http://www.scmagazineuk.com/hidden-backdoor-in-up-to-10m-android-phones/article/389010/.

"London teenager pleads guilty to Spamhaus DDoS", Infosecurity Magazine UK, 18 December 2014. A 17-year-old teenager, arrested in April, has plead guilty to what was at the time the largest ever recorded DDoS. The teen targeted Spamhaus, an anti-spam company, and subsequently the content-delivery network CloudFlare. (ID# 14-70058)
See http://www.infosecurity-magazine.com/news/london-teenager-pleads-guilty/.

"Sony hack a 'serious national security matter': White House", Security Week, 18 December 2014. The recent cyber-attack carried out on Sony Pictures has escalated, with Sony making the decision to cancel release of "The Interview", a satirical film depicting the death of North Korean leader Kim Jong-Un. Following threats to attack cinemas that screened the film, Sony's decision to cancel release sets a "dangerous precedent". (ID# 14-70059)
See http://www.securityweek.com/sony-hack-serious-national-security-matter-white-house.

"Quantum physics behind 'unhackable' security authentication", SC Magazine UK, 17 December 2014. Researchers from universities in Twente and Eindhoven, Netherlands, propose Quantum Secure Authentication (QSA), an unclonable and unhackable authentication method using nanoparticles and photons on credit cards to create a unique, dynamic pattern. (ID# 14-70060)
See http://www.scmagazineuk.com/quantum-physics-behind-unhackable-security-authentication/article/388770/.

"Oslo mobiles eavesdropped", SC Magazine UK, 17 December 2014. Up to PS200,000 worth of mobile phone surveillance equipment has been discovered near Norwegian parliamentary and government buildings in Oslo. The discovered IMSI-catchers can rapidly register several hundred mobile numbers, which can then be eavesdropped upon. (ID# 14-70061)
See http://www.scmagazineuk.com/oslo-mobiles-eavesdropped/article/388765/.

"DoD prioritizes tech transfer to trusted Asian allies", FCW, 17 December 2014. The United States DoD has embarked on a security initiative to securely transport US defense technology to Asian ally countries, emphasizing "share what we can, protect what we must". South Korea, Japan, Australia, New Zealand, and Singapore hold friendly technology trade relations with the US. (ID# 14-70062)
See http://fcw.com/articles/2014/12/17/dod-tech-transfer.aspx.

"Mobile Threat Monday: Android apps hide windows malware", PC Magazine Security Watch, 15 December 2014. Ramnit Trojan-infected apps were available on Google Play Store, hiding malicious HTML files masquerading as About pages for the apps. The so-called Ramnit malware specifically targets the home Windows machine, and though uses Android devices as vehicles, do not damage them. (ID# 14-70063)
See http://securitywatch.pcmag.com/mobile-security/330363-mobile-threat-monday-android-apps-hide-windows-malware.

"North Korea under the spotlight for Sony hack", Infosecurity Magazine, 1 December 2014. Sony Pictures Entertainment was forced to shut down its corporate network and restrict access to company e-mail last week, when employees reported seeing an unauthorized message. The company suspects North Korean adversaries behind the attacks; the breach happens to coincide with the release of The Interview, a satirical film centered around deposing Kim Jong-un. (ID# 14-70064)
See http://www.infosecurity-magazine.com/news/north-korea-under-the-spotlight/.

"Bing and Yahoo respond to 'right to be forgotten' requests", ZDNet Europe, 1 December 2014. Microsoft and Yahoo are complying with European user requests to stop returning search results for their names, particularly if the delivered links point to information that is out of date or excessive. (ID# 14-70065)
See http://www.zdnet.com/article/bing-and-yahoo-respond-to-right-to-be-forgotten-requests/.


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


US News


"Drupal Admins:Assume Systems Have Been Compromised", Infosecurity Magazine, 30 October 2014. Content Management System (CMS) provider Drupal released a highly critical public service announcement warning that website admins that did install the patch for a SQLi flaw within 7 hours of its announcement should assume their site was compromised. Drupal warns that "applying the patch fixes the vulnerability but does not fix an already compromised website", and that attacks may not have left behind any evidence. (ID: 14-50176)
See http://www.infosecurity-magazine.com/news/drupal-assume-systems-compromised/

"Tor Node Red-Flagged for Slinging Malware", Infosecurity Magazine, 30 October 2014. The Tor Project announced the discovery of a malicious exit node, or "BadExit", that attempts to insert malware into binary files that TOR users download while using the anonymous browser. Though TOR guarantees anonymity, this event is seen by some, such as James Fox of KPMG, as an example that "anonymity online doesnit guarantee security". (ID: 14-50177)
See http://www.infosecurity-magazine.com/news/tor-node-red-flagged-for-malware/
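
One practical defense against this kind of in-transit tampering is to verify a downloaded file against the checksum its publisher distributes. The Python sketch below is a generic illustration, not Tor Project tooling; the file path and expected digest are supplied by the user.

    # Compare a downloaded file's SHA-256 digest against the value the
    # software publisher lists; a binary patched in transit will not match.
    import hashlib
    import sys

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    if __name__ == "__main__":
        path, expected = sys.argv[1], sys.argv[2]  # file and published digest
        if sha256_of(path) != expected.lower():
            raise SystemExit("Checksum mismatch: file may have been altered in transit.")
        print("Checksum OK.")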

"Microsoft Xbox Live back up, Sony PlayStation Network still down", Reuters, 26 December 2014. Hacking group "Lizard Squad" has claimed responsibility for interruptions of both Sony's PlayStation Network and Microsoft's Xbox Live. Though Xbox live was back up by Friday (with the exception of limited problems with third-party apps), the PlayStation Network remains down as of the 26th. The increase in business of the video game industry during the holiday season makes an interruption on Christmas day especially detrimental. (ID: 14-50179)
See http://www.reuters.com/article/2014/12/26/us-xbox-playstation-cybercrime-idUSKBN0K30RU20141226

"South Korea official says cannot rule out North's hand in hack of nuclear operator", Reuters, 23 December 2014. Following the hacking of and theft from Korea Hydro and Nuclear Power Co Ltd (KHNP), South Korean officials claim that North Korea has not been ruled out as a culprit. During the attack, which occurred on December 22nd, only non-critical data was stolen, and operations were not at risk. South Korea has requested the help of the U.S. in its investigation of the attacks, which "bore some similarities to previous cyberattacks in which North Korea has been involved." (ID: 14-50180)
See http://www.reuters.com/article/2014/12/23/us-southkorea-cybersecurity-usa-idUSKBN0K100D20141223

"Obama vows U.S. response to North Korea over Sony cyber attack", Reuters, 19 December 2014. President Obama has promised a U.S. response to the cyber attack of Sony Pictures over the movie "The Dictator", which depicts the assassination of Kim Jong Un. According to the President, Sony should not have given into the demands of the hackers in pulling the movie from theatres, calling it an instance of "a foreign dictator imposing censorship in America."(ID: 14-50181)
See http://www.reuters.com/article/2014/12/19/us-sony-cybersecurity-usa-idUSKBN0JX1MH20141219

"If South Koreais nuclear plant staff are vulnerable, then so are the reactors", Homeland Security News Wire, 24 December 2014. With increasing amounts of infrastructure connected to the internet, cyberattacks are shaping up to be an easy and cheap alternative to conventional ways of attacking enemies. When a South Korean nuclear plant was hacked (supposedly by North Korea), files were stolen that "reveal the role of the human operators in running the reactor", which is not good news considering that it is often the human factor that is often the weakest link in a cyber defense. (ID: 14-50182)
See http://www.homelandsecuritynewswire.com/dr20141224-if-south-korea-s-nuclear-plant-staff-are-vulnerable-then-so-are-the-reactors

"Obama signs five cybersecurity measures into law", Homeland Security Newswire, 23 December 2014. In the week leading up the Christmas, President Obama signed five pieces of cyber legislation: the Homeland Security Workforce Assessment Act, the Cybersecurity Workforce Assessment Act, the National Cybersecurity Protection Act (NCPA), and the Cybersecurity Enhancement Act, and the Federal Information Security Modernization Act (FISMA). A significant piece of cyber legislation has not become law since FISMA (Federal Information Security Management Act, at the time) in 2002 under President George Bush. (ID: 14-50183)
See http://www.homelandsecuritynewswire.com/dr20141223-obama-signs-five-cybersecurity-measures-into-law

"2008 Turkish oil pipeline explosion may have been Stuxnet precursor", Homeland Security Newswire, 17 December 2014. In 2008, an oil pipeline in Turkey exploded, and was later determined to be the result of human error and mechanical failure. However, Western intelligence services deduced that it was an early, Stuxnet-like cyber attack that caused the pipeline to build pressure and explode. Though the Kurdistan Workersi Party (PKK) claimed responsibility, experts doubt their technological capabilities and suspect that the sophisticated attack might have been state-sponsored. (ID: 14-50184)
See http://www.homelandsecuritynewswire.com/dr20141217-2008-turkish-oil-pipeline-explosion-may-have-been-stuxnet-precursor

"Quantum physics makes fraud-proof credit cards possible", Homeland Security Newswire, 16 December 2014. As financial transactions are becoming more common in the digital world, keeping sensitive personal data safe is becoming increasingly challenging. Dutch researchers have been able to create an unbreakable key and authentication system which is based on quantum physics. Quantum-Secure Authentication, as it is known, uses a kind of "question-and-answer" exchange that cannot be copied or replicated, thanks to the principle of quantum uncertainty, as displayed by photons. (ID: 14-50185)
See http://www.homelandsecuritynewswire.com/dr20141216-quantum-physics-makes-fraudproof-credit-cards-possible
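
To make that "question-and-answer" exchange concrete, below is a minimal classical toy model of challenge-response authentication in Python. All names and values are invented for illustration: in real QSA the question is posed as a quantum state of light, and the card's key is a physical pattern of nanoparticles that, unlike the secret byte string modeled here, cannot be read out or copied.

    # Classical toy model of a QSA-style challenge-response exchange.
    # The card's unique physical pattern is modeled as a secret byte string.
    import hashlib
    import os

    def card_response(card_pattern: bytes, challenge: bytes) -> bytes:
        # The card turns each challenge into a pattern-dependent answer.
        return hashlib.sha256(card_pattern + challenge).digest()

    def bank_accepts(card_pattern: bytes, challenge: bytes, answer: bytes) -> bool:
        # The bank, which knows the pattern, recomputes the expected answer.
        return card_response(card_pattern, challenge) == answer

    pattern = b"stand-in for the card's nanoparticle pattern"
    challenge = os.urandom(16)                  # a fresh, random "question"
    answer = card_response(pattern, challenge)  # the card's "answer"
    print(bank_accepts(pattern, challenge, answer))  # True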

"Turla Trojan Unearthed on Linux", TechNewsWorld, 09 December 2014. Kaspersky Labs has found new variants of Turla -- a Trojan that has been found exclusively in Windows machines in the past -- in Linux systems. As with its predecessors, Linux Turla is very stealthy, requiring no elevated privileges and being undetectable by the command-line tool "netstat". Turla is suspected to be Russian in origin, and has built-in protective measures that make it hard to reverse-engineer. (ID: 14-50186)
See http://www.technewsworld.com/story/81460.html
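
Why would a backdoor be invisible to netstat? netstat enumerates open TCP/UDP sockets, while a packet-capture implant reads frames straight off the interface and never opens a listening socket. The following Python sketch is a generic, hypothetical illustration of that idea, not Turla's actual mechanism (and, unlike the behavior the article describes, generic raw capture normally requires capture privileges); it assumes the third-party scapy package is installed.

    # Generic packet-capture listener: it opens no TCP/UDP socket, so it
    # never appears in netstat's socket tables. A real implant would scan
    # each captured packet for a "magic" trigger; this sketch just prints
    # packet summaries.
    from scapy.all import sniff  # third-party: pip install scapy

    def inspect(pkt):
        # Placeholder for trigger-matching logic.
        print(pkt.summary())

    # Typically requires root/capture privileges.
    sniff(filter="tcp", prn=inspect, count=5)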

"The Sony Breach Carries Broad Implications Surrounding National Security", Forbes, 19 December 2014. The recent Sony breach carries hefty national security implications, considering the international level at which it took place. David Parnell interviews Roberta D. Anderson, co-founder of the K&L Gates LLP global Cyber Law and Cybersecurity practice group. (ID: 14-50187)
See http://www.forbes.com/sites/davidparnell/2014/12/19/the-sony-breach-carries-broad-implications-surrounding-national-security/?ss=Security

"What Do Security Professionals Think Sony Should Have Done Differently?", Forbes, 26 December 2014. In the wake of the most recent Sony cyber breach, many security professionals are questioning the competence of Sony's cyber defensive strategy, as well as an inability to learn from past mistakes. Sony is accused by some of not taking necessary precautions, such as proper password encryption, infrastructure defense tools, and of not having a strong response plan. (ID: 14-50188)
See http://www.forbes.com/sites/quora/2014/12/26/what-do-security-professionals-think-sony-should-have-done-differently/?ss=Security

"Backoff POS Malware Vets Targets via Surveillance Cameras", InfoSecurity Magazine, 23 December 2014. The notorious "Backoff" POS malware is not unusual in that it targets payment card information on point-of-sale devices, but RSA researchers have discovered that Backoff infections often correlate with attacks on security camera networks. The hackers use security cameras to determine if a machine that has been breached actually belongs to a business, or is just an RDP service on a personal computer. (ID: 14-50189)
See http://www.infosecurity-magazine.com/news/backoff-vets-targets-via/

"Staples Confirms Breach, 1.2Mn Cards Affected", InfoSecurity Magazine, 22 December 2014. Retail store Staples has confirmed that is was the victim of yet another high-profile data breach, with around 1.2 million payment card credentials stolen from 115 affected stores. Staples initially contacted law enforcement in October regarding a suspected breach. In the month or so that it was active, POS malware was able to steal "cardholder names, payment card numbers, expiration dates and card verification codesoeverything needed to carry out online fraud." (ID: 14-50190)
See http://www.infosecurity-magazine.com/news/staples-confirms-breach-12mn-cards/

"ISIS Likely Behind Cyber-attack Unmasking Syrian Rebels", InfoSecurity Magazine, 20 Dec 2014. ISIS is suspected to have been behind an "unmasking attack" on Raqqah is being Slaughtered Silently (RSS), a Syrian group that is advcating against human rights abuses in the ISIS-held town of Ar-Raqqah. The attackers used a "spearfishing" email, which provided a link that downloaded malware on to the victim's computer, and in turn emailed the victim's IP address to the attacker. (ID: 14-50191)
See http://www.infosecurity-magazine.com/news/isis-likely-behind-cyberattack/

"Garden-variety DDoS attack knocks North Korea off the Internet", Computerworld, 23 December 2014. The entirety of North Korea's internet went down on Monday the 22nd after a presumed DDoS attack. With a mere 1024 IP addresses, North Korea's "pipeline" to the internet is so small and weak that such an attack is not "difficult from a resource or technical standpoint", according to security researcher Ofer Gayer. Though it is possible for a DDoS attack to be carried out by an individual, many believe this attack may have been state-sponsored. (ID: 14-50192)
See http://www.computerworld.com/article/2862652/garden-variety-ddos-attack-knocks-north-korea-off-the-internet.html


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


International Security Research Conferences


The following pages provide highlights of Science of Security-related research presented at the following international conferences:

  • 10th International Conference on Security and Privacy in Communication Networks - Beijing, China
  • 15th International Conference on Information & Communications Security (ICICS 2013) - Beijing, China
  • 2014 Iran Workshop on Communication and Information Theory (IWCIT) - Iran
  • 6th International Conference on New Technologies, Mobility & Security (NTMS) - Dubai
  • ACM CHI Conference on Human Factors in Computing Systems - Toronto, Canada
  • China Summit & International Conference on Signal and Information Processing (ChinaSIP) - Xi'an, China
  • Computer Communication and Informatics (ICCCI) - Coimbatore, India
  • International Conference on Advanced Communication Technology - Korea
  • Conferences on Service Oriented System Engineering, 2014, Oxford, U.K.
  • International Conferences: Dependable Systems and Networks (2014) - USA
  • International Science of Security Research: China Communications 2013
  • International Science of Security Research: China Communications 2014

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


10th International Conference on Security and Privacy in Communication Networks - Beijing, China


10th International Conference on Security and Privacy in Communication Networks, September 24-26, 2014, Beijing, China
URL: http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6120120

Accepted Papers:

  • Quanwei Cai, Jingqiang Lin, Fengjun Li, Qiongxiao Wang and Daren Zha.
    EFS: Efficient and Fault-Scalable Byzantine Fault Tolerant Systems against Faulty Clients
  • Qianying Zhang, Shijun Zhao and Dengguo Feng.
    Improving the Security of the HMQV Protocol using Tamper-Proof Hardware
  • Jialong Zhang, Jayant Notani and Guofei Gu.
    Characterizing Google Hacking: A First Large-Scale Quantitative Study
  • Yinzhi Cao, Chao Yang, Vaibhav Rastogi, Yan Chen and Guofei Gu.
    Abusing Browser Address Bar for Fun and Profit - An Empirical Investigation of Add-on Cross Site Scripting Attacks
  • Tilo Muller and Christopher Kugler.
    SCADS: Separated Control- and Data-Stack
  • Ziyu Wang and Jiahai Yang.
    A New Anomaly Detection Method Based on IGTE and IGFE
  • Byungho Min and Vijay Varadharajan.
    A Simple and Novel Technique for Counteracting Exploit Kits
  • Xiaoyan Sun, Jun Dai, Anoop Singhal and Peng Liu.
    Inferring the Stealthy Bridges between Enterprise Network Islands in Cloud Using Cross-Layer Bayesian Networks
  • Boyang Wang, Yantian Hou, Ming Li, Haitao Wang, Hui Li and Fenghua Li.
    Tree-based Multi-Dimensional Range Search on Encrypted Data with Enhanced Privacy
  • Duohe Ma.
    Defending Blind DDoS Attack on SDN Based on Moving Target Defense
  • Issa Khalil, Zuochao Dou and Abdallah Khreishah.
    TPM-based Authentication Mechanism for Apache Hadoop
  • Max Suraev.
    Implementing an affordable and effective GSM IMSI catcher with 3G authentication
  • Tayyaba Zeb, Abdul Ghafoor, Awais Shibli and Muhammad Yousaf.
    A Secure Architecture for Inter-Cloud Virtual Machine Migration
  • Nicolas Van Balen and Haining Wang.
    GridMap: Enhanced Security in Cued-Recall Graphical Passwords
  • Yi-Ting Chiang, Tsan-Sheng Hsu, Churn-Jung Liau, Yun-Ching Liu, Chih-Hao Shen, Da-Wei Wang and Justin Zhan.
    An Information-Theoretic Approach for Secure Protocol Composition
  • Haoyu Ma, Xinjie Ma, Weijie Liu, Zhipeng Huang, Debin Gao and Chunfu Jia.
    Control Flow Obfuscation using Neural Network to Fight Concolic Testing
  • Qinglong Zhang, Zongbin Liu, Quanwei Cai and Ji Xiang.
    TST: A New Randomness Test Method Based on Golden Distribution
  • Sarmad Ullah Khan.
    An Authentication and Key Management Scheme for Heterogeneous Sensor Networks
  • Shen Su and Beichuan Zhang.
    Detecting concurrent prefix hijack events online
  • Anna Squicciarini, Dan Lin, Smitha Sundareswaran and Jingwei Li.
    Policy Driven Node Selection in MapReduce
  • Vincenzo Gulisano, Magnus Almgren and Marina Papatriantafilou.
    METIS: a Two-Tier Intrusion Detection System for Advanced Metering Infrastructures
  • Chen Cao, Yuqing Zhang, Qixu Liu and Kai Wang.
    Function Escalation Attack
  • Eirini Karapistoli, Panagiotis Sarigiannidis and Anastasios Economides.
    Wormhole Attack Detection in Wireless Sensor Networks based on Visual Analytics
  • Binh Vo and Steven Bellovin.
    Anonymous Publish-Subscribe Systems
  • Jeroen Massar, Ian Mason, Linda Briesemeister and Vinod Yegneswaran.
    JumpBox -- A Seamless Browser Proxy for Tor Pluggable Transports
  • Sushama Karumanchi, Jingwei Li and Anna Squicciarini.
    Securing Resource Discovery in Content Hosting Networks
  • Hugo Gonzalez, Natalia Stakhanova and Ali Ghorbani.
    DroidKin: Lightweight Detection of Android Apps Similarity

Accepted Short Papers:

  • Yazhe Wang, Mingming Hu and Chen Li.
    UAuth: A Strong Authentication Method from Personal Devices to Multi-accounts
  • Chengcheng Shao, Liang Chen, Shuo Fan and Xinwen Jiang.
    Social Authentication Identity: An Alternate to Internet Real Name System
  • Xi Xiao, Xianni Xiao, Yong Jiang and Qing Li.
    Detecting Mobile Malware with TMSVM
  • Yosra Ben Mustapha, Herve Debar and Gregory Blanc.
    Policy Enforcement Point Model
  • Zhangjie Fu, Jiangang Shu, Xingming Sun and Naixue Xiong.
    An Effective Search Scheme based on Semantic Tree over Encrypted Cloud Data Supporting Verifiability
  • Pieter Burghouwt, Marcel E.M. Spruit and Henk J. Sips.
    Detection of Botnet Command and Control Traffic by the Identification of Untrusted Destinations
  • Jingwei Li, Dan Lin, Anna Squicciarini and Chunfu Jia.
    STRE: Privacy-Preserving Storage and Retrieval over Multiple Clouds
  • Lautaro Dolberg, Quentin Jerome, Jerome Francois, Radu State and Thomas Engel.
    RAMSES: Revealing Android Malware through String Extraction and Selection
  • Kan Chen, Peidong Zhu and Yueshan Xiong.
    Keep the Fakes Out: Defending against Sybil Attack in P2P systems
  • Zhang Lulu, Yongzheng Zhang and Tianning Zang.
    Detecting Malicious Behaviors in Repackaged Android Apps with Loosely-coupled Payloads Filtering Scheme
  • Yuling Luo, Junxiu Liu, Jinjie Bi and Senhui Qiu.
    Hardware Implementation of Cryptographic Hash Function based on Spatiotemporal Chaos
  • Swarup Chandra, Zhiqiang Lin, Ashish Kundu and Latifur Khan.
    A Systematic Study of the Covert-Channel Attacks in Smartphones
  • Sami Zhioua, Adnene Ben Jabeur, Mahjoub Langar and Wael Ilahi.
    Detecting Malicious Sessions through Traffic Fingerprinting using Hidden Markov Models
  • Eslam Abdallah, Mohammad Zulkernine and Hossam Hassanein.
    Countermeasures for Mitigating ICN Routing Related DDoS Attacks
  • Ding Wang and Ping Wang.
    On the Usability of Two-Factor Authentication
  • Bhaswati Deka, Ryan Gerdes, Ming Li and Kevin Heaslip.
    Friendly Jamming for Secure Localization in Vehicular Transportation
  • Wenjun Fan.
    Catering Honeypots Creating Based on the Predefined Honeypot Context

(ID#:14-2903)



Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


15th International Conference on Information & Communications Security (ICICS 2013) - Beijing, China


15th International Conference on Information & Communications Security (ICICS 2013)
20-22 November 2013, Beijing, China

Defending against heap overflow by using randomization in nested virtual clusters
Chee Meng Tey and Debin Gao
School of Information Systems, Singapore Management University

VTOS: Research on Methodology of "Light-weight" Formal Design and Verification for Microkernel OS
Zhenjiang Qian, Hao Huang and Fangmin Song
Department of Computer Science and Technology, Nanjing University

XLRF: A Cross-Layer Intrusion Recovery Framework for Damage Assessment and Recovery Plan Generation
Eunjung Yoon and Peng Liu
Pennsylvania State University

PRIDE: Practical Intrusion Detection in Resource Constrained Wireless Mesh Networks
Amin Hassanzadeh, Zhaoyan Xu, Radu Stoleru, Guofei Gu and Michalis Polychronakis
Texas A&M University; Columbia University

Fingerprint Embedding: A Proactive Strategy of Detecting Timing Channels
Jing Wang, Peng Liu, Limin Liu, Le Guan and Jiwu Jing
State Key Laboratory of Information Security, Institute of Information Engineering, CAS; University of Chinese Academy of Sciences; Pennsylvania State University

Type-Based Analysis of Protected Storage in the TPM
Jianxiong Shao, Dengguo Feng and Yu Qin
Institute of Software, Chinese Academy of Sciences

Remote Attestation Mechanism for User Centric Smart Cards using Pseudorandom Number Generators
Raja Naeem Akram, Konstantinos Markantonakis and Keith Mayes
Cyber Security Lab, Department of Computer Science, University of Waikato; ISG Smart card Centre, Royal Holloway, University of London

Direct Construction of Signcryption Tag-KEM from Standard Assumptions in the Standard Model
Xiangxue Li, Haifeng Qian, Yu Yu, Jian Weng and Yuan Zhou
Department of Computer Science and Technology, East China Normal University;
National Engineering Laboratory for Wireless Security, Xi'an University of Posts and Telecommunications;
Institute for Interdisciplinary Information Sciences, Tsinghua University;
Department of Computer Science, Jinan University;
Network Emergency Response Technical Team/Coordination Center, China

Efficient eCK-secure Authenticated Key Exchange Protocols in the Standard Model
Zheng Yang
Horst Goertz Institute for IT Security

Time-Stealer: A Stealthy Threat for Virtualization Scheduler and Its Countermeasures
Hong Rong, Ming Xian, Huimei Wang and Jiangyong Shi
State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology

Detecting Malicious Co-resident Virtual Machines Indulging in Load-Variation Attacks
Smitha Sundareswaran and Anna Squicciarini
College of Information Sciences and Technology, Pennsylvania State University

A Covert Channel Using Event Channel State on Xen Hypervisor
Qingni Shen, Mian Wan, Zhuangzhuang Zhang, Zhi Zhang, Sihan Qing and Zhonghai Wu
Peking University

Comprehensive Evaluation of AES Dual Ciphers as a Side-Channel Countermeasure
Amir Moradi and Oliver Mischke
Horst Gortz Institute for IT-Security, Ruhr University Bochum

EMD-Based Denoising for Side-Channel Attacks and Relationships between the Noises Extracted with Different Denoising Methods
Mingliang Feng, Yongbin Zhou and Zhenmei Yu
State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences; School of Information Technology, Shandong Women's University

Defeat Information Leakage from Browser Extensions Via Data Obfuscation
Wentao Chang and Songqing Chen
Department of Computer Science, George Mason University

Rating Web Pages Using Page-Transition Evidence
Jian Mao, Xinshu Dong, Pei Li, Tao Wei and Zhenkai Liang
School of Electronic and Information Engineering, BeiHang University; School of Computing, National University of Singapore; Institute of Computer Science and Technology, Peking University

OSNGuard: Detecting Worms with User Interaction Traces in Online Social Networks
Liang He, Dengguo Feng, Purui Su, Ling-Yun Ying, Yi Yang, Huafeng Huang and Huipeng Fang
Institute of Software, Chinese Academy of Sciences

Attacking and Fixing the CS Mode
Han Sui, Wenling Wu, Liting Zhang and Peng Wang
Trusted Computing and Information Assurance Laboratory, Institute of Software, Chinese Academy of Sciences; Data Assurance and Communication Security, Institute of Information Engineering, Chinese Academy of Sciences

Integral Attacks on Reduced-Round PRESENT
Shengbao Wu and Mingsheng Wang
Trusted Computing and Information Assurance Laboratory, Institute of Software, Chinese Academy of Sciences; State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences

Computationally Efficient Expressive Key-Policy Attribute Based Encryption Schemes with Constant-Size Ciphertext
Y. Sreenivasa Rao and Ratna Dutta
Indian Institute of Technology Kharagpur

Privacy-Preserving Decentralized Ciphertext-Policy Attribute-Based Encryption with Fully Hidden Access Structure
Huiling Qian, Jiguo Li and Yichen Zhang
College of Computer and Information Engineering, Hohai University

Accelerating AES in JavaScript with WebGL
Yang Yang, Jiawei Zhu, Qiuxiang Dong, Guan Zhi and Zhong Chen
Peking University

Analysis of Multiple Checkpoints in Non-perfect and Perfect Rainbow Tradeoff Revisited
Wenhao Wang and Dongdai Lin
State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences

Efficient Implementation of NIST-Compliant Elliptic Curve Cryptography for Sensor Nodes
Zhe Liu, Hwajeong Seo, Johann Groszschaedl and Howon Kim
University of Luxembourg; Pusan National University

A Secure and Efficient Scheme for Cloud Storage Against Eavesdropper
Jian Liu, Huimei Wang, Ming Xian and Kun Huang
State Key Laboratory of Complex Electromagnetic Environment Effects on Electronics and Information System, National University of Defense Technology

Secure and Private Outsourcing of Shape-Based Feature Extraction
Shumiao Wang, Mohamed Nassar, Mikhail Atallah and Qutaibah Malluhi
Purdue University; Qatar University

Toward Generic Method for Server-Aided Cryptography
Sebastien Canard, Iwen Coisel, Julien Devigne, Cecilia Gallais, Thomas Peters and Olivier Sanders
Orange Labs; JRC; Universite de Rennes; Universite catholique de Louvain

Generation and Tate Pairing Computation of Ordinary Elliptic Curves with Embedding Degree One
Zhi Hu, Lin Wang, Maozhi Xu and Guoliang Zhang
Beijing International Center for Mathematical Research, Peking University; Science and Technology on Communication Security Laboratory; LMAM, School of Mathematical Sciences, Peking University

Threshold Secret Image Sharing
Teng Guo, Feng Liu, Chuankun Wu, Chingnung Yang and Wen Wang
State Key Laboratory of Information Security, Institute of Information Engineering, Chinese Academy of Sciences; National Dong Hwa University

(ID#:14-2902)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


2014 Iran Workshop on Communication and Information Theory (IWCIT) - Iran


This bibliography comes from another recently held international conference, highlighting Science of Security research being conducted globally. This set is from the 2014 Iran Workshop on Communication and Information Theory (IWCIT), held 7-8 May 2014.

  • Afshar, N.; Akhbari, B.; Aref, M.R., "Random Coding Bound For E-Capacity Region Of The Relay Channel With Confidential Messages," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,6, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842481 We study a relay channel with confidential messages (RCC), which involves a sender, a receiver and a relay. In the RCC, a common message must be transmitted to both the receiver and the relay, and a private message to the intended receiver, while keeping the relay as ignorant of it as possible. The level of ignorance of the relay with respect to the private message is measured by the equivocation rate. We consider two error probability exponents (reliabilities) E1 and E2, governing the exponential decrease of the error probability at the receiver decoder and the relay decoder, respectively. For E = (E1, E2), the E-capacity region is the set of all E-achievable rates of codes with given reliability E. We derive a random coding bound for the E-capacity region of the RCC using block Markov strategies over a fixed number of blocks. We also show that, when E tends to zero, our obtained inner bound for the E-capacity region converges to the inner bound for the capacity region of the RCC obtained by Y. Oohama and S. Watanabe. Keywords: Markov processes; codecs; error statistics; radio receivers; random codes; telecommunication channels; E-achievable rates; E-capacity region; block Markov strategies; confidential messages; equivocation rate; error probability; random coding bound; receiver decoder; relay channel; relay decoder; Channel coding; Decoding; Error probability; Receivers; Relays; Vectors; E-capacity; effective rate; equivocation rate; error probability exponent; method of types; relay channel with confidential messages (ID#:14-3067) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842481&isnumber=6842477
  • Aguerri, I.E.; Varasteh, M.; Gunduz, D., "Zero-delay Joint Source-Channel Coding," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,6, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842482 In zero-delay joint source-channel coding each source sample is mapped to a channel input, and the samples are directly estimated at the receiver based on the corresponding channel output. Despite its simplicity, uncoded transmission achieves the optimal end-to-end distortion performance in some communication scenarios, significantly simplifying the encoding and decoding operations, and reducing the coding delay. Three different communication scenarios are considered here, for which uncoded transmission is shown to achieve either optimal or near-optimal performance. First, the problem of transmitting a Gaussian source over a block-fading channel with block-fading side information is considered. In this problem, uncoded linear transmission is shown to achieve the optimal performance for certain side information distributions, while separate source and channel coding fails to achieve the optimal performance. Then, uncoded transmission is shown to be optimal for transmitting correlated multivariate Gaussian sources over a multiple-input multiple-output (MIMO) channel in the low signal to noise ratio (SNR) regime. Finally, motivated by practical systems a peak-power constraint (PPC) is imposed on the transmitter's channel input. Since linear transmission is not possible in this case, nonlinear transmission schemes are proposed and shown to perform very close to the lower bound. Keywords: Gaussian channels; MIMO communication; block codes; combined source-channel coding; decoding; delays; fading channels; radio receivers; radio transmitters; MIMO communication; PPC; SNR; block fading channel; correlated multivariate Gaussian source transmission; decoding; encoding delay reduction; end-to-end distortion performance; information distribution; multiple input multiple output channel; nonlinear transmission scheme; peak power constraint; receiver; signal to noise ratio; transmitter channel; uncoded linear transmission; zero delay joint source channel coding; Channel coding; Decoding; Joints; MIMO; Nonlinear distortion; Signal to noise ratio (ID#:14-3068) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842482&isnumber=6842477
  • Akhoondi, F.; Poursaeed, O.; Salehi, J.A., "Resource Allocation Using Fragmented-Spectrum Synchronous OFDM-CDMA In Cognitive Radio Networks," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,4, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842483 This paper presents a fragmented-spectrum synchronous OFDM-CDMA modulation and utilizes it as the secondary users' modulation in a cognitive radio-based network to provide high data rates by efficiently exploiting available spectrum bands in a target spectral range while simultaneously offering multiple-access capability. In particular, given preexisting communications in the spectrum where the system is operating, a channel sensing and estimation method is used to obtain information on subcarrier availability. Given this information, some three-level codes are provided for emerging new cognitive radio users. Furthermore, analytical results of the system performance in a typical cognitive radio network are shown. Keywords: OFDM modulation; channel estimation; code division multiple access; cognitive radio; radio networks; resource allocation; available spectrum bands; channel estimation method; channel sensing; cognitive radio users; cognitive radio-based network; fragmented-spectrum synchronous OFDM-CDMA modulation; multiple-access capability; resource allocation; secondary users modulation; subcarrier availability; target spectral range; three-level codes; Conferences; Information theory; code-division multiple-access (CDMA); cognitive radio; fragmented-spectrum; multicarrier (MC); orthogonal frequency division multiplexing (OFDM) (ID#:14-3069) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842483&isnumber=6842477
  • Hassan, N.B.; Matinfar, M.D., "On The Implementation Aspects Of Adaptive Power Control Algorithms In Free-Space Optical Communications," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,5, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842485 Atmospheric turbulence has motivated a significant body of research in free-space optical (FSO) communications. Assuming a slowly varying channel, feedback can be implemented to overcome the problem of fading. In comparison with previously published works, in this paper we apply an algorithm to reduce average power consumption by regulating the transmitter Erbium Doped Fiber Amplifier (EDFA) gain given channel state information (CSI). As a benchmark, a simple but non-practical power control algorithm is introduced and discussed in this paper. To make the algorithm more practical, a quantized counterpart of the algorithm is introduced and its performance is compared to the continuous one. It is shown that by consuming 4 dB more power than the continuous algorithm, a practical quantized power control algorithm can be implemented. The statistical analysis of the proposed adaptive algorithms is performed, considering a complex model of the channel, including a low-power transmitting laser, an EDFA statistical model, channel fading, channel attenuations, receiver lens, photodetector model and all sources of optical and electrical noise. It is shown that the proposed algorithm brings significant improvements over its non-adaptive counterpart. Keywords: adaptive control; erbium; gain control; optical communication equipment; optical fibre amplifiers; optical links; power control; telecommunication control; EDFA gain regulation; EDFA statistical model; adaptive algorithms; adaptive power control algorithm; atmospheric turbulence; average power consumption; channel attenuations; channel fading; channel state information; electrical noise; free-space optical communications; low power transmitting laser; optical noise; photodetector model; practical power control algorithm; receiver lens; transmitter erbium doped fiber amplifier; Atmospheric modeling; Bit error rate; Erbium-doped fiber amplifiers; Fading; Noise; Optical attenuators; Optical fiber communication; EDFA; Free-space optical communication; OOK modulation; adaptive transmission; atmospheric turbulence (ID#:14-3070) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842485&isnumber=6842477
  • Khani, A.E.; Seyfe, B., "A game-theoretic Approach Based On Pricing Scheme On The Two-User Gaussian Interference Channel," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,6, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842489 In this work, a non-cooperative power control game between two selfish users over a Gaussian interference channel is presented. In this proposed scenario each user is willing to maximize its utility under power constraints in transmitters. The outcome of this non-cooperative game is considered. We show that by choosing a proper price for each of the users, the outcome of the game is a unique, Pareto-efficient and proportional fair Nash Equilibrium (NE). Numerical Results confirm our analytical developments. Keywords: Gaussian channels; game theory; pricing; telecommunication control; NE; Pareto-efficient; game-theoretic approach; noncooperative power control game; power constraints; pricing scheme; proportional fair Nash equilibrium; transmitters; two-user gaussian interference channel; Games; Integrated circuits; Interference channels; Nash equilibrium; Power control; Pricing; Gaussian interference channel; Nash equilibrium; Pareto efficiency; game theory; proportional fairness (ID#:14-3071) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842489&isnumber=6842477
  • Emadi, M.J.; Khormuji, M.N.; Skoglund, M.; Aref, M.R., "The Generalized MAC With Partial State And Message Cooperation," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1, 5, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842490 We consider a two-user state-dependent generalized multiple access channel (GMAC) with correlated channel state information (CSI). It is assumed that the CSI is partially known at each encoder noncausally. We first present an achievable rate region using multi-layer Gelfand-Pinsker coding with partial state and message cooperation between the encoders. We then specialize our result to a Gaussian GMAC with additive interferences that are known partially at each encoder. We show that the proposed scheme can remove the common part known at both encoders and also mitigate a significant part of the independent interference via state cooperation when the feedback links are strong. Thus, the proposed scheme can significantly improve the rate region as compared to that with only message cooperation. Keywords: Gaussian channels; channel coding; cooperative communication; multi-access systems; CSI; Gaussian GMAC; achievable rate region; additive interferences; correlated channel state information; encoder; feedback links; independent interference; message cooperation; multilayer Gelfand-Pinsker coding; state cooperation; two-user state-dependent generalized multiple access channel; Additives; Channel models; Decoding; Encoding; Interference; Receivers; Relays (ID#:14-3072) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842490&isnumber=6842477
  • Ghasemi-Goojani, S.; Behroozi, H., "Nested Lattice Codes For The State-Dependent Gaussian Interference Channel With A Common Message," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,6, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842492 In this paper, we consider the generalized point-to-point Additive White Gaussian Noise (AWGN) channel with state: the State-Dependent Gaussian Interference Channel (SD-GIC) with a common message, in which two senders transmit a common message to two receivers. Transmitter 1 knows only message W1, while transmitter 2, in addition to W1, also knows the channel state sequence non-causally. We consider the strong interference case, where the channel state has unbounded variance. First, we show that a scheme based on Gelfand-Pinsker coding cannot achieve the capacity within a constant gap for channel gains smaller than unity. In contrast, we propose a lattice-based transmission scheme that can achieve the capacity region in the high SNR regime. Our proposed scheme can achieve the capacity region to within 0.5 bit for all values of channel parameters. Keywords: AWGN channels; encoding; radio transmitters; radiofrequency interference; AWGN channel; Gelfand-Pinsker coding; SD-GIC; SNR regime; capacity region; channel parameters; channel state sequence; generalized point-to-point Additive White Gaussian Noise; lattice-based transmission scheme; nested lattice codes; state-dependent Gaussian interference channel; unbounded variance; Decoding; Encoding; Interference channels; Lattices; Receivers; Transmitters (ID#:14-3073) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842492&isnumber=6842477
  • Keykhosravi, K.; Mahzoon, M.; Gohari, A.; Aref, M.R., "From Source Model To Quantum Key Distillation: An Improved Upper Bound," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,6, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842497 In this paper we derive a new upper bound on the quantum key distillation capacity. This upper bound is an extension of the classical bound of Gohari and Anantharam on the source model problem. Our bound strictly improves the quantum extension of the reduced intrinsic information bound of Christandl et al. Although this bound is proposed for quantum settings, it also serves as an upper bound for the special case of the classical source model, and may improve the bound of Gohari and Anantharam. The problem of quantum key distillation is one in which two distant parties, Alice and Bob, and an adversary, Eve, have access to copies of quantum systems A, B, E respectively, prepared jointly according to an arbitrary state ρABE. Alice and Bob desire to distill secret key bits that are secure from Eve, using only local quantum operations and authenticated public classical communication (LOPC). Keywords: quantum cryptography; LOPC; classical source model; improved upper bound; local quantum operation-authenticated public classical communication; quantum extension; quantum key distillation capacity; quantum setting; quantum systems; reduced intrinsic information bound; secret key bits; source model problem; Entropy; Equations; Mathematical model; Mutual information; Protocols; Security; Upper bound (ID#:14-3074) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842497&isnumber=6842477
  • Kuhestani, A.; Mohammadi, A., "Finite-SNR diversity-multiplexing tradeoff of linear dispersion coded MISO systems," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,4, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842499 In this paper, we study the diversity-multiplexing tradeoff (DMT) of linear dispersion (LD) coded multiple-input single-output (MISO) systems at finite-SNRs. The tradeoff curves provide a characterization of the achievable diversity and multiplexing gains for a given space-time block code (STBC) at SNRs encountered in practice. For this purpose, first, the outage probability is derived for a broad class of LD coded MISO channels in a simple and closed-form expression. Then, for the special case of the correlated Rayleigh fading MISO channel, the outage probability is presented in an exact closed-form. Using this expression, we present a closed-form solution for the DMT framework. Keywords: Rayleigh channels; probability; space-time block codes; DMT; LD coded MISO systems; STBC; closed-form expression; correlated Rayleigh fading MISO channel; finite-SNR diversity-multiplexing tradeoff; linear dispersion coded multiple-input single-output systems; outage probability; space-time block code; Diversity methods; Fading; Gain; MIMO; Multiplexing; Signal to noise ratio; Transmitting antennas; Diversity-Multiplexing Tradeoff (DMT);Linear Dispersion (LD) Code; Multiple-Input Single-Output (MISO) channel (ID#:14-3075) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842499&isnumber=6842477
  • Mirmohseni, M.; Papadimitratos, P., "Colluding Eavesdroppers In Large Cooperative Wireless Networks," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,6, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842500 Securing communication against non-colluding passive eavesdroppers has been extensively studied. Colluding eavesdroppers were considered for interference-limited large networks. However, collusion was not investigated for large cooperative networks. This paper closes this gap: we study the improvement the eavesdroppers achieve due to collusion in terms of the information leakage rate in a large cooperative network. We consider a dense network with n_l legitimate nodes, n_e eavesdroppers, and path loss exponent α > 2. We show that if n_e^(2+2/α) (log n_e)^γ = o(n_l) holds for some positive γ, then zero-cost secure communication is possible; i.e., n_e colluding eavesdroppers can be tolerated. This means that our scheme achieves unbounded secure aggregate rate, given a fixed total power constraint for the entire network. Keywords: computational complexity; cooperative communication; radio networks; radiofrequency interference; telecommunication security; eavesdropper collusion; eavesdropper improvement; fixed total power constraint; information leakage rate; interference-limited large cooperative wireless networks; legitimate nodes; path loss exponent; zero-cost secure communication; Aggregates; Array signal processing; Encoding; Relays; Transmitters; Vectors; Wireless networks (ID#:14-3076) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842500&isnumber=6842477
  • Mirzaee, M.; Akhlaghi, S., "Maximizing The Minimum Achievable Secrecy Rate In A Two-User Gaussian Interference Channel," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,5, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842501 This paper studies a two-user Gaussian interference channel in which two single-antenna sources aim at sending their confidential messages to the legitimate destinations such that each message is kept confidential from the non-intended receiver. It is also assumed that the direct channel gains are stronger than the interference channel gains and that the noise variances at the two destinations are equal. In this regard, under a Gaussian codebook assumption, the problem of secrecy rate balancing, which explores the optimal power allocation policy at the sources in an attempt to maximize the minimum achievable secrecy rate, is investigated, assuming each source is subject to a transmit power constraint. To this end, it is shown that at the optimal point the two secrecy rates are equal; hence, the problem reduces to maximizing the secrecy rate associated with one of the destinations while the other destination is constrained to have the same secrecy rate. Accordingly, the optimum secrecy rate associated with the investigated max-min problem is analytically derived, leading to the solution of the secrecy rate balancing problem. Keywords: Gaussian channels; antennas; interference (signal); telecommunication security; Gaussian codebook assumption; achievable secrecy rate; direct channel gains; interference channel gains; max-min problem; noise variances; nonintended receiver; optimal power allocation policy; secrecy rate balancing; single-antenna sources; transmit power constraint; two-user Gaussian interference channel; Array signal processing; Gain; Interference channels; Linear programming; Noise; Optimization; Transmitters; Achievable secrecy rate; Gaussian interference channel; Max-Min problem (ID#:14-3077) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842501&isnumber=6842477
  • Bidokhti, S.S.; Kramer, G., "An Application Of A Wringing Lemma To The Multiple Access Channel With Cooperative Encoders," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,4, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842504 The problem of communicating over a multiple access channel with cooperative encoders is studied. A new upper bound is derived on the capacity which is motivated by the regime of operation where the relays start to cooperate. The proof technique is based on a wringing lemma by Dueck and Ahlswede which was used for the multiple description problem with no excess rate. Previous upper bounds are shown to be loose in general, and may be improved. Keywords: codecs; cooperative communication; multi-access systems; cooperative encoders; multiple access channel; multiple description problem; wringing lemma; Adders; Artificial neural networks; Diamonds; Random variables; Relays; Standards; Upper bound (ID#:14-3078) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842504&isnumber=6842477
  • Salehkalaibar, S.; Aref, M.R., "An Achievable Scheme For The One-Receiver, Two-Eavesdropper Broadcast Channel," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,6, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842505 In this paper, we consider the secrecy of the one-receiver, two-eavesdropper Broadcast Channel (BC) with three degraded message sets. There is a three-receiver BC where the common message is decoded by all receivers. The first confidential message is decoded by the first and the second receivers and is kept secret from the third receiver (eavesdropper). The second confidential message is decoded by the first receiver and is kept secret from the second and the third receivers (eavesdroppers). We propose an achievable scheme to find an inner bound to the secrecy capacity region of a class of one-receiver, two-eavesdropper BCs with three degraded message sets. We also compare our inner bound with another existing achievable region. Keywords: broadcast channels; broadcast communication; radio receivers; telecommunication security; confidential message decoding; degraded message sets; one-receiver broadcast channel; secrecy capacity region; three-receiver BC; two-eavesdropper broadcast channel; Decoding; Entropy; Joints; Mutual information; Random variables; Receivers; Transmitters (ID#:14-3079) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842505&isnumber=6842477
  • Sonee, A.; Hodtani, G.A., "Wiretap Channel With Strictly Causal Side Information At Encoder," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,6, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842507 In this paper, the wiretap channel with side information studied in [2] is revisited for the case in which the side information is available only at the encoder and in a strictly causal manner. We derive a lower bound on the secrecy capacity of the channel based on a coding scheme which consists of block Markov encoding and key generation using the strictly causal state information available at the encoder. In order to provide the secrecy of messages, at the end of each block a description of the state sequence obtained by the encoder is used to generate the key which encrypts the whole or part of the message to be transmitted in the next block. Moreover, for the decoder to be able to decrypt the messages, the description of the state sequence of each block is sent in common with the message of that block. Also, an upper bound on the secrecy capacity is developed which assumes that the state is noncausally known at the encoder, and we prove that it coincides with the lower bound in a special case, yielding the secrecy capacity. Keywords: Markov processes; encoding; block Markov encoding; key generation; state sequence; secrecy capacity; wiretap channel; Cryptography; Decoding; Encoding; Indexes; Markov processes; Radio frequency; Upper bound (ID#:14-3080) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842507&isnumber=6842477
  • Zahabi, S.J.; Khosravifard, M., "A Note On The Redundancy Of Reversible Variable Length Codes," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,6, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842508 An improved upper bound on the redundancy of the optimal reversible variable length code (RVLC) is presented in terms of the largest symbol probability p1. The improvement is achieved for 2/9 < p1 < 1/4 and for 2/5 ≤ p1 ≤ 1/2. The bound guarantees that in these two regions, the redundancy of the optimal RVLC is less than 1 bit per symbol. Keywords: probability; variable length codes; RVLC; reversible variable length codes; symbol probability; Computers; Conferences; Information theory; Radio frequency; Redundancy; Upper bound; Vectors (ID#:14-3081) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842508&isnumber=6842477
  • Zeinalpour-Yazdi, Z.; Jalali, S., "Outage Analysis Of Uplink Open Access Two-Tier Networks," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,6, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842511 Employing multi-tier networks is among the most promising approaches to address the rapid growth of the data demand in cellular networks. In this paper, we study a two-tier uplink cellular network consisting of femtocells and a macrocell. Femto base stations, and femto and macro users are assumed to be spatially deployed based on independent Poisson point processes. Under open-access policy, we derive analytical upper and lower bounds on the outage probabilities of femto users and macro users that are subject to fading and path loss. We also study the effect of the distance from the macro base station on the outage probability experienced by the users. In all cases, our simulation results comply with our analytical bounds. Keywords: femtocellular radio ;radio links; stochastic processes; femto base stations; femto users; femtocell network; independent Poisson point processes; macro base station; macro users; macrocell network; multi-tier networks; open-access policy; outage analysis; outage probabilities; two-tier uplink cellular network; uplink open access two-tier networks; Analytical models; Downlink; Femtocells; OFDM; Open Access; Uplink (ID#:14-3082) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842511&isnumber=6842477
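
To make the collusion-tolerance condition in the Mirmohseni and Papadimitratos entry above concrete, the following minimal Python sketch searches for the largest number of eavesdroppers satisfying n_e^(2+2/α) (log n_e)^γ ≤ n_l in a finite network. The condition in the paper is asymptotic (little-o), so a finite-size check like this is only indicative, and the function name and parameter values are our own illustration rather than the authors' code.

    import math

    def max_tolerable_eavesdroppers(n_l, alpha, gamma):
        # Largest n_e with n_e**(2 + 2/alpha) * (log n_e)**gamma <= n_l.
        # Indicative only: the paper's condition is asymptotic (little-o).
        n_e = 1
        while True:
            candidate = n_e + 1
            cost = candidate ** (2 + 2 / alpha) * math.log(candidate) ** gamma
            if cost > n_l:
                return n_e
            n_e = candidate

    # Example: one million legitimate nodes, path loss exponent 3, gamma = 1.
    print(max_tolerable_eavesdroppers(10**6, alpha=3.0, gamma=1.0))

Inverting the condition shows that n_e can grow roughly like n_l^(α/(2α+2)) up to logarithmic factors, which is why denser legitimate networks tolerate more colluders.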

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


6th International Conference on New Technologies, Mobility & Security (NTMS) - Dubai

6th International Conference on New Technologies, Mobility & Security (NTMS)


The 2014 6th International Conference on New Technologies, Mobility and Security (NTMS) was held March 30 - April 2, 2014 in Dubai. The conference addresses advances in new technologies, solutions for mobility, and tools and techniques for information security, with a concentration on the development of smart sensor systems and sensor networks for smart cities. An emphasis is placed on integrating distributed sensors with optimization algorithms to achieve this goal. In the security track, twenty-three security-related research papers were presented, addressing a range of issues in the areas of business process application security, security assurance and assessment, social networking security, privacy and anonymity, cloud computing security, intrusion and malware detection, digital forensics, and cryptography.

  • Al Barghouthy, N.B.; Marrington, A., "A Comparison of Forensic Acquisition Techniques for Android Devices: A Case Study Investigation of Orweb Browsing Sessions," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,4, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6813993 The issue of whether to "root" a small scale digital device in order to be able to execute acquisition tools with kernel-level privileges is a vexing one. In the early research literature about Android forensics, and in the commercial forensic tools alike, the common wisdom was that "rooting" the device modified its memory only minimally, and enabled more complete acquisition of digital evidence, and thus was, on balance, an acceptable procedure. This wisdom has been subsequently challenged, and alternative approaches to complete acquisition without "rooting" the device have been proposed. In this work, we address the issue of forensic acquisition techniques for Android devices through a case study we conducted to reconstruct browser sessions carried out using the Orweb private web browser. Orweb is an Android browser which uses Onion Routing to anonymize web traffic, and which records no browsing history. Physical and logical examinations were performed on both rooted and non-rooted Samsung Galaxy S2 smartphones running Android 4.1.1. The results indicate that for investigations of Orweb browsing history, there is no advantage to rooting the device. We conclude that, at least for similar investigations, rooting the device is unnecessary and thus should be avoided.
    Keywords: Android (operating system) ;Internet; digital forensics; online front-ends; smart phones; Android 4.1.1;Android browser; Android devices; Android forensics; Onion Routing; Orweb browsing sessions;Orweb private Web browser; Web traffic anonymization; browser session reconstruction; browsing history; device rooting; digital evidence acquisition; forensic acquisition techniques; forensic tools; kernel-level privilege; nonrooted Samsung Galaxy S2 smartphone; small scale digital device; Androids; Browsers; Forensics; Humanoid robots; Random access memory; Smart phones; Workstations (ID#:14-3241)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6813993&isnumber=6813963
  • Hammi, B.; Khatoun, R.; Doyen, G., "A Factorial Space for a System-Based Detection of Botcloud Activity," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6813996 Today, beyond a legitimate usage, the numerous advantages of cloud computing are exploited by attackers, and Botnets supporting DDoS attacks are among the greatest beneficiaries of this malicious use. Such a phenomenon is a major issue since it strongly increases the power of distributed massive attacks while involving the responsibility of cloud service providers that lack appropriate solutions. In this paper, we present an original approach that enables a source-based detection of UDP-flood DDoS attacks based on a distributed system behavior analysis. Based on a principal component analysis, our contribution consists in: (1) defining the involvement of system metrics in a botcloud's behavior, (2) showing the invariability of the factorial space that defines a botcloud activity and (3) among several legitimate activities, using this factorial space to enable a botcloud detection.
    Keywords: cloud computing; computer network security; distributed processing; principal component analysis; transport protocols; UDP-flood DDoS attacks; botcloud activity; botcloud detection; botcoud behavior; botnets; cloud computing; cloud service provider; distributed massive attacks; distributed system behavior analysis; factorial space; legitimate activity; legitimate usage; malicious use; principal component analysis; source-based detection; system metrics; system-based detection; Cloud computing; Collaboration; Computer crime; Intrusion detection; Measurement; Monitoring; Principal component analysis (ID#:14-3242)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6813996&isnumber=6813963
  • Hatzivasilis, G.; Papaefstathiou, I.; Manifavas, C.; Papadakis, N., "A Reasoning System for Composition Verification and Security Validation," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,4, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814001 The procedure to prove that a system-of-systems is composable and secure is a very difficult task. Formal methods are mathematically-based techniques used for the specification, development and verification of software and hardware systems. This paper presents a model-based framework for dynamic embedded system composition and security evaluation. Event Calculus is applied for modeling the security behavior of a dynamic system and calculating its security level with the progress in time. The framework includes two main functionalities: composition validation and derivation of security and performance metrics and properties. Starting from an initial system state and given a series of further composition events, the framework derives the final system state as well as its security and performance metrics and properties. We implement the proposed framework in an epistemic reasoner, the rule engine JESS with an extension of DECKT for the reasoning process and the JAVA programming language.
    Keywords: Java; embedded systems; formal specification; formal verification; reasoning about programs; security of data; software metrics; temporal logic; DECKT; JAVA programming language; composition validation; composition verification; dynamic embedded system composition; epistemic reasoner; event calculus; formal methods; model-based framework; performance metrics; reasoning system; rule engine JESS; security evaluation; security validation; system specification ;system-of-systems; Cognition; Computational modeling; Embedded systems; Measurement; Protocols; Security; Unified modeling language (ID#:14-3243)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814001&isnumber=6813963
  • Al Sharif, S.; Al Ali, M.; Salem, N.; Iqbal, F.; El Barachi, M.; Alfandi, O., "An Approach for the Validation of File Recovery Functions in Digital Forensics' Software Tools," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,6, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814005 Recovering lost and deleted information from computer storage media for the purpose of forensic investigation is one of the essential steps in digital forensics. There are several dozen commercial and open source digital analysis tools dedicated to this purpose. The challenge is to identify the tool that best fits a specific case of investigation. To measure file recovery functionality, we have developed a validation approach for comparing five popular forensic tools: Encase, Recover my files, Recuva, Blade, and FTK. These tools were examined in a fixed scenario to show the differences and capabilities in recovering files after deletion, quick format and full format of a USB stick. Experimental results on selected commercial and open source tools demonstrate the effectiveness of the proposed approach.
    Keywords: digital forensics; file organisation; Blade; Encase; FTK; Recover my files; Recuva; USB stick; computer storage media; digital forensics software tool; file recovery function; forensic tools; open source digital analysis tool; Blades; Computers; Digital forensics; Media; Recycling; Universal Serial Bus (ID#:14-3244)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814005&isnumber=6813963
  • Juvonen, A.; Hamalainen, T., "An Efficient Network Log Anomaly Detection System Using Random Projection Dimensionality Reduction," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814006 Network traffic is increasing all the time and network services are becoming more complex and vulnerable. To protect these networks, intrusion detection systems are used. Signature-based intrusion detection cannot find previously unknown attacks, which is why anomaly detection is needed. However, many new systems are slow and complicated. We propose a log anomaly detection framework which aims to facilitate quick anomaly detection and also provide visualizations of the network traffic structure. The system preprocesses network logs into a numerical data matrix, reduces the dimensionality of this matrix using random projection and uses Mahalanobis distance to find outliers and calculate an anomaly score for each data point. Log lines that are too different are flagged as anomalies. The system is tested with real-world network data, and actual intrusion attempts are found. In addition, visualizations are created to represent the structure of the network data. We also perform computational time evaluation to ensure the performance is feasible. The system is fast, finds intrusion attempts and does not need clean training data. (A minimal sketch of this pipeline appears after this list.)
    Keywords: digital signatures; security of data; telecommunication traffic; Mahalanobis distance; anomaly score; data point; intrusion attempts; intrusion detection systems; log lines; network data structure; network log anomaly detection system; network services; network traffic structure; numerical data matrix; random projection dimensionality reduction; real-world network data; signature-based intrusion detection; Data mining; Data visualization; Feature extraction; Intrusion detection; Principal component analysis; Real-time systems (ID#:14-3245)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814006&isnumber=6813963
  • Binsalleeh, H.; Kara, A.M.; Youssef, A.; Debbabi, M., "Characterization of Covert Channels in DNS," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1, 5, March 30 2014-April 2, 2014. doi: 10.1109/NTMS.2014.6814008 Malware families utilize different protocols to establish their covert communication networks. It is also the case that sometimes they utilize protocols which are least expected to be used for transferring data, e.g., Domain Name System (DNS). Even though the DNS protocol is designed to be a translation service between domain names and IP addresses, it leaves some open doors to establish covert channels in DNS, which is widely known as DNS tunneling. In this paper, we characterize the malicious payload distribution channels in DNS. Our proposed solution characterizes these channels based on the DNS query and response messages patterns. We performed an extensive analysis of malware datasets for one year. Our experiments indicate that our system can successfully determine different patterns of the DNS traffic of malware families.
    Keywords: cryptographic protocols; invasive software; DNS protocol; DNS traffic; DNS tunneling; IP addresses; communication networks; covert channel characterization; domain name system; malicious payload distribution channels; malware datasets; malware families; message patterns; translation service; Command and control systems; Malware; Payloads; Protocols; Servers; Tunneling (ID#:14-3246)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814008&isnumber=6813963
  • Bovet, G.; Hennebert, J., "Distributed Semantic Discovery for Web-of-Things Enabled Smart Buildings," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814015 Nowadays, our surrounding environment is more and more scattered with various types of sensors. Due to their intrinsic properties and representation formats, they form small islands isolated from each other. In order to increase interoperability and release their full capabilities, we propose to represent device descriptions, including data and service invocation, with a common model that allows composing mashups of heterogeneous sensors. Pushing this paradigm further, we also propose to augment service descriptions with a discovery protocol easing automatic assimilation of knowledge. In this work, we describe the architecture supporting what can be called a Semantic Sensor Web-of-Things. As proof of concept, we apply our proposal to the domain of smart buildings, composing a novel ontology covering heterogeneous sensing, actuation and service invocation. Our architecture also emphasizes the energy aspect and is optimized for constrained environments.
    Keywords: Internet of Things; Web services; home automation; ontologies (artificial intelligence); open systems; software architecture; wireless sensor networks; actuator; data invocation; distributed semantic discovery protocols; interoperability; intrinsic properties; knowledge automatic assimilation; ontology covering heterogeneous sensor; semantic sensor Web of Things; service invocation; smart building; Ontologies; Resource description framework; Semantics; Sensors; Smart buildings; Web services (ID#:14-3247)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814015&isnumber=6813963
  • Dassouki, K.; Safa, H.; Hijazi, A., "End to End Mechanism to Protect SIP from Signaling Attacks," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814017 SIP is among the most popular Voice over IP signaling protocols. Its deployment in live scenarios showed vulnerability to attacks defined as signaling attacks. These attacks are used to tear down a session or manipulate its parameters. In this paper we present a security mechanism that protects SIP sessions against such attacks. The mechanism uses a SIP fingerprint to authenticate messages, in order to prevent spoofing. We validate our mechanism using OpenSSL and SIPp and show that it is light and robust.
    Keywords: Internet telephony; message authentication; signaling protocols; Openssl; SIP fingerprint; SIP sessions; Sipp; live scenarios; message authentication; security mechanism; signaling attacks; voice over IP signaling protocols; Cryptography; Fingerprint recognition; IP networks; Internet telephony; Protocols; Servers (ID#:14-3248)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814017&isnumber=6813963
  • Fachkha, C.; Bou-Harb, E.; Debbabi, M., "Fingerprinting Internet DNS Amplification DDoS Activities," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814019 This work proposes a novel approach to infer and characterize Internet-scale DNS amplification DDoS attacks by leveraging the darknet space. Complementary to the pioneer work on inferring Distributed Denial of Service (DDoS) using darknet, this work shows that we can extract DDoS activities without relying on backscattered analysis. The aim of this work is to extract cyber security intelligence related to DNS Amplification DDoS activities such as detection period, attack duration, intensity, packet size, rate and geolocation in addition to various network-layer and flow-based insights. To achieve this task, the proposed approach exploits certain DDoS parameters to detect the attacks. We empirically evaluate the proposed approach using 720 GB of real darknet data collected from a /13 address space during a recent three-month period. Our analysis reveals that the approach was successful in inferring significant DNS amplification DDoS activities including the recent prominent attack that targeted one of the largest anti-spam organizations. Moreover, the analysis disclosed the mechanism of such DNS amplification DDoS attacks. Further, the results uncover high-speed and stealthy attempts that were never previously documented. The case study of the largest DDoS attack in history led to a better understanding of the nature and scale of this threat and can generate inferences that could contribute to detecting, preventing, assessing, mitigating and even attributing DNS amplification DDoS activities.
    Keywords: Internet; computer network security; Internet-scale DNS amplification DDoS attacks; anti-spam organizations; attack duration; backscattered analysis; cyber security intelligence; darknet space; detection period; distributed denial of service; fingerprinting Internet DNS amplification DDoS activities; geolocation; network-layer; packet size; storage capacity 720 Gbit; Computer crime; Grippers; IP networks; Internet; Monitoring; Sensors (ID#:14-3249)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814019&isnumber=6813963
  • Turkoglu, C.; Cagdas, S.; Celebi, A.; Erturk, S., "Hardware Design of an Embedded Real-Time Acoustic Source Location Detector," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,4, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814022 This paper presents an embedded system that detects the 3-dimensional location of an acoustic source using a multiple microphone constellation. The system consists of a field programmable gate array (FPGA) that is used as the main processing unit and the necessary peripherals. The sound signals are captured using multiple microphones that are connected to the embedded system using XLR connectors. The analog sound signals are first amplified using programmable gain amplifiers (PGAs) and then digitized before they are provided to the FPGA. The FPGA carries out the computations necessary for the algorithms to detect the acoustic source location in real-time. The system can be used for consumer electronics applications as well as security and defense applications.
    Keywords: acoustic signal detection; acoustic signal processing; audio signal processing; embedded systems; microphones; FPGA; PGAs; XLR connectors; analog sound signals; anembedded real-time acoustic source location detector; consumer electronics; embedded system; field programmable gate array; hardware design; multiple microphone constellation; programmable gain amplifiers; three dimensional location; Acoustics; Electronics packaging; Field programmable gate arrays; Hardware; Microphones; Position measurement; Synchronization (ID#:14-3250)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814022&isnumber=6813963
  • Varadarajan, P.; Crosby, G., "Implementing IPsec in Wireless Sensor Networks," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814024 There is an increasing need for wireless sensor networks (WSNs) to be more tightly integrated with the Internet. Several real-world deployments of stand-alone wireless sensor networks exist. A number of solutions have been proposed to address the security threats in these WSNs. However, integrating WSNs with the Internet in such a way as to ensure a secure End-to-End (E2E) communication path between IPv6 enabled sensor networks and the Internet remains an open research issue. In this paper, the 6LoWPAN adaptation layer was extended to support both IPsec's Authentication Header (AH) and Encapsulation Security Payload (ESP). Thus, the communication endpoints in WSNs are able to communicate securely using encryption and authentication. The performance of the proposed AH and ESP compressed headers is evaluated via test-bed implementation in 6LoWPAN for IPv6 communications on IEEE 802.15.4 networks. The results confirm the possibility of implementing E2E security in IPv6 enabled WSNs to create a smooth transition between WSNs and the Internet. This can potentially play a big role in the emerging "Internet of Things" paradigm.
    Keywords: IP networks; Internet; Zigbee; computer network security; cryptography; wireless sensor networks;6LoWPAN adaptation layer;AH;E2E security; ESP compressed header performance; IEEE 802.15.4 networks; IPsec authentication header;IPv6 enabled sensor networks; Internet; Internet of Things paradigm; WSNs; communication endpoints; encapsulation security payload; encryption; end-to-end communication path; security threats; stand-alone wireless sensor networks; Authentication; IEEE 802.15 Standards; IP networks; Internet; Payloads; Wireless sensor networks (ID#:14-3251)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814024&isnumber=6813963
  • Boukhtouta, A.; Lakhdari, N.-E.; Debbabi, M., "Inferring Malware Family through Application Protocol Sequences Signature," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814026 The dazzling emergence of cyber-threats strains today's cyberspace, which needs practical and efficient capabilities for malware traffic detection. In this paper, we propose an extension to an initial research effort on fingerprinting malicious traffic, putting an emphasis on attributing maliciousness to malware families. The technique proposed in the previous work establishes a synergy between automatic dynamic analysis of malware and machine learning to fingerprint badness in network traffic. Machine learning algorithms are used with features that exploit only high-level properties of traffic packets (e.g. packet headers). Beyond the detection of malicious packets, we want to enhance the fingerprinting capability with the identification of malware families responsible for the generation of malicious packets. The identification of the underlying malware family is derived from a sequence of application protocols, which is used as a signature for the family in question. Furthermore, our results show that our technique achieves a promising malware family identification rate with low false positives.
    Keywords: computer network security; invasive software; learning (artificial intelligence);application protocol sequences signature; cyber-threats; machine learning algorithm; malicious packets detection; malware automatic dynamic analysis; malware traffic detection; network traffic; Cryptography; Databases; Engines; Feeds; Malware; Protocols (ID#:14-3252)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814026&isnumber=6813963
  • Gritzalis, D.; Stavrou, V.; Kandias, M.; Stergiopoulos, G., "Insider Threat: Enhancing BPM through Social Media," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,6, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814027 Modern business environments have a constant need to increase their productivity, reduce costs and offer competitive products and services. This can be achieved via modeling their business processes. Yet, even in light of modelling's widespread success, one can argue that it lacks built-in security mechanisms able to detect and fight threats that may manifest throughout the process. Academic research has proposed a variety of different solutions which focus on different kinds of threat. In this paper we focus on insider threat, i.e. insiders participating in an organization's business process, who, depending on their motives, may cause severe harm to the organization. We examine existing security approaches to tackle the aforementioned threat in enterprise business processes. We discuss their pros and cons and propose a monitoring approach that aims at mitigating the insider threat. This approach enhances business process monitoring tools with information evaluated from Social Media. It examines the online behavior of users and pinpoints potential insiders with critical roles in the organization's processes. We conclude with some observations on the monitoring results (i.e. psychometric evaluations from the social media analysis) concerning privacy violations and argue that deployment of such systems should only be allowed in exceptional cases, such as protecting critical infrastructures.
    Keywords: business data processing; organisational aspects; process monitoring; social networking (online);BPM enhancement; built-in security mechanism; business process monitoring tools; cost reduction; enterprise business processes; insider threat; organization business process management; privacy violations; social media; Media; Monitoring; Organizations; Privacy; Security; Unified modeling language (ID#:14-3253)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814027&isnumber=6813963
  • Azab, M., "Multidimensional Diversity Employment for Software Behavior Encryption," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814033 Modern cyber systems and their integration with infrastructure have an immense effect on productivity and quality of life. Their involvement in our daily life elevates the need for means to ensure their resilience against attacks and failure. One major threat is software monoculture. Recent research has demonstrated the danger of software monoculture and presented diversity as a way to reduce the attack surface. In this paper, we propose ChameleonSoft, a multidimensional software diversity employment to, in effect, induce spatiotemporal software behavior encryption and a moving target defense. ChameleonSoft introduces a loosely coupled, online programmable software-execution foundation separating logic, state and physical resources. The elastic construction of the foundation enabled ChameleonSoft to define running software as a set of behaviorally-mutated functionally-equivalent code variants. ChameleonSoft intelligently shuffles these variants at runtime while changing their physical location, inducing enough untraceable confusion and diffusion to encrypt the execution behavior of the running software. ChameleonSoft is also equipped with an autonomic failure recovery mechanism for enhanced resilience. In order to test the applicability of the proposed approach, we present a prototype of the ChameleonSoft Behavior Encryption (CBE) and recovery mechanisms. Further, using analysis and simulation, we study the performance and security aspects of the proposed system. This study aims to assess the provisioned level of security by measuring the avalanche effect percentage and the induced confusion and diffusion levels to evaluate the strength of the CBE mechanism. Further, we compute the computational cost of security provisioning and enhancing system resilience.
    Keywords: computational complexity; cryptography; multidimensional systems; software fault tolerance; system recovery; CBE mechanism; ChameleonSoft Behavior Encryption; ChameleonSoft recovery mechanisms; autonomic failure recovery mechanism; avalanche effect percentage; behaviorally-mutated functionally-equivalent code variants; computational cost; confusion levels; diffusion levels; moving target defense; multidimensional software diversity employment; online programmable software-execution foundation separating logic; security level; security provisioning; software monoculture; spatiotemporal software behavior encryption; system resilience; Employment; Encryption; Resilience; Runtime; Software; Spatiotemporal phenomena (ID#:14-3254)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814033&isnumber=6813963
  • Mauri, G.; Verticale, G., "On the Tradeoff between Performance and User Privacy in Information Centric Networking," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814040 Widespread use of caching provides advantages for users and providers, such as reduced network latency, higher content availability, bandwidth reduction and server load balancing. In Information Centric Networking, the attention is shifted from users to content, which is addressed by its name and not by its location. Moreover, the content objects are stored as close as possible to the customers. Therefore, the cache has a central role in the improvement of network performance, but this is strictly related to the caching policy used. However, this comes at the price of increased tracing of users' communication and behavior to define an optimal caching policy. A malicious node could exploit such information to compromise the privacy of users. In this work, we compare different caching policies and take the first steps toward defining the tradeoff between caching performance and user privacy guarantees. In particular, we provide a way to implement prefetching and we define some bounds for the users' privacy in this context.
    Keywords: cache storage; perturbation techniques; caching policy; content centric networking; data perturbation; information centric networking; named-data networking; network latency; prefetching; privacy; server load balancing; user's ranking; Computational modeling; Data privacy; Delays; Games; Prefetching; Privacy; Vectors (ID#:14-3255)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814040&isnumber=6813963
  • Abu-Ella, O.; Elmusrati, M., "Partial Constrained Group Decoding: A New Interference Mitigation Technique for the Next Generation Networks," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814042 This paper investigates the performance of the constrained partial group decoding (CPGD) technique in an interference channel (IC) environment. It demonstrates CPGD's capability to manage and mitigate interference compared with other interference mitigation schemes based on the interference alignment strategy; this comparison is carried out for the MIMO interference channel. Numerical results show that CPGD achieves one of the highest capacities compared to the other considered schemes. In addition, bit error rate (BER) evaluation using very long low-density parity-check (LDPC) codes demonstrates the competency of CPGD, which significantly outperforms the other techniques. This makes CPGD a promising scheme for interference mitigation in the next generation of wireless communication systems, especially since CPGD is based only on receive-side processing, which means no overwhelming feedback is needed in such a system. More importantly, its complexity-controlling feature (the flexibility to limit the group size of the jointly decoded users) reduces the required computational complexity compared with the huge computational complexity of iterative multi-user detection (MUD) schemes such as the interference alignment approach.
    Keywords: MIMO communication; decoding; interference suppression; parity check codes; radiofrequency interference; MIMO interference channel ;bit error rate; constrained partial group decoding; interference alignment strategy; interference channel environment; interference mitigation technique; next generation network; partial constrained group decoding; receive side processing; very long low density parity check codes; Bit error rate; Interference channels; MIMO; Receivers; Signal to noise ratio; Transmitters (ID#:14-3256)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814042&isnumber=6813963
  • Petrlic, R.; Sorge, C., "Privacy-Preserving Digital Rights Management based on Attribute-based Encryption," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1, 5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814044 We present a privacy-preserving multiparty DRM scheme that does not need a trusted third party. Users anonymously buy content from content providers and anonymously execute it at content execution centers. The executions are unlinkable to each other. The license check is performed as part of the used ciphertext-policy attribute-based encryption (CP-ABE) and, thus, access control is cryptographically enforced. The problem of authorization proof towards the key center in an ABE scheme is solved by a combination with anonymous payments.
    Keywords: cryptography; digital rights management; ABE scheme; access control; anonymous payments; attribute-based encryption; authorization proof; ciphertext-policy attribute-based encryption; privacy-preserving digital rights management; privacy-preserving multiparty DRM scheme; Cloud computing; Encryption; Licenses; Privacy; Protocols (ID#:14-3257)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814044&isnumber=6813963
  • Hmood, A.; Fung, B.C.M.; Iqbal, F., "Privacy-Preserving Medical Reports Publishing for Cluster Analysis," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,8, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814045 Health data mining is an emerging research direction. High-quality health data mining results rely on having access to high-quality patient information. Yet, releasing patient-specific medical reports may potentially reveal sensitive information of the individual patients. In this paper, we study the problem of anonymizing medical reports and present a solution to anonymize a collection of medical reports while preserving the information utility of the medical reports for the purpose of cluster analysis. Experimental results show that the impact of our proposed anonymization approach on cluster quality is minor, suggesting the feasibility of simultaneously preserving both information utility and privacy in anonymized medical reports.
    Keywords: data mining; data privacy; electronic health records; pattern clustering; cluster analysis; health data mining; information utility; medical report anonymization; patient-specific medical reports; privacy-preserving medical reports publishing; Clustering algorithms; Data privacy; Diseases; Information retrieval; Medical diagnostic imaging; Privacy (ID#:14-3258)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814045&isnumber=6813963
  • Dimitriou, T., "Secure and Scalable Aggregation in the Smart Grid," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814048 In this work, we describe two decentralized protocols that can be used to securely aggregate the electricity measurements made by n smart meters. The first protocol is a very lightweight one; it uses only symmetric cryptographic primitives and provides security against honest-but-curious adversaries. The second one is public-key based and its focus is on the malicious adversarial model; malicious entities not only try to learn the private measurements of smart meters but can also disrupt protocol execution. Both protocols do not rely on centralized entities or trusted third parties to operate and they are highly scalable, since every smart meter has to interact with only a few other meters. Both are very efficient in practice, requiring only O(1) work and memory overhead per meter, thus making these protocols fit for real-life smart grid deployments. (A sketch of the general pairwise-masking idea behind such lightweight protocols appears after this list.)
    Keywords: power system security; smart meters; smart power grids; decentralized protocols; electricity measurements; malicious adversarial model; malicious entities; scalable aggregation; smart grid; smart meters; symmetric cryptographic primitives; trusted third parties; Encryption; Protocols; Public key; Silicon; Smart grids (ID#:14-3259)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814048&isnumber=6813963
  • Kabbani, B.; Laborde, R.; Barrere, F.; Benzekri, A., "Specification and Enforcement of Dynamic Authorization Policies Oriented by Situations," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,6, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814050 Nowadays, access to communication networks and systems must serve a multitude of applications with large-scale requirements. Mobility, roaming services in particular, exacerbates access control issues during urgent situations. Dynamic authorization is then required. However, traditional access control fails to ensure that policies are dynamic. Instead, we propose to externalize the dynamic behavior management of networks and systems through situations. Situations modularize the policy into groups of rules and orient decisions. Our solution limits policy updates and hence authorization inconsistencies. The authorization system is built upon the XACML architecture coupled with a complex event-processing engine to handle the concept of situations. Situation-oriented attribute-based policies are defined statically, allowing static verification and validation.
    Keywords: authorisation; XACML architecture; access control; dynamic authorization policies; mobility roaming services; Authorization; Computer architecture; Context; Engines; Medical services (ID#:14-3260)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814050&isnumber=6813963
  • Albino Pereira, A.; Bosco M.Sobral, J.; Merkle Westphall, C., "Towards Scalability for Federated Identity Systems for Cloud-Based Environments," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814055 As multi-tenant authorization and federated identity management systems for cloud computing mature, the provisioning of services using this paradigm allows maximum efficiency for businesses that require access control. However, regarding scalability support, mainly horizontal, some characteristics of those approaches based on central authentication protocols are problematic. The objective of this work is to address these issues by providing an adapted sticky-session mechanism for a Shibboleth architecture using CAS. This alternative, compared with the recommended shared-memory approach, showed improved efficiency and less overall infrastructure complexity.
    Keywords: authorisation; cloud computing; cryptographic protocols; CAS; Shibboleth architecture; central authentication protocols; central authentication service; cloud based environments; cloud computing; federated identity management systems; federated identity system scalability; multitenant authorization; sticky session mechanism; Authentication; Cloud computing; Proposals; Scalability; Servers; Virtual machining (ID#:14-3261)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814055&isnumber=6813963
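
Two of the entries above lend themselves to short illustrations. First, the log anomaly detection pipeline described by Juvonen and Hamalainen (random projection for dimensionality reduction, then Mahalanobis-distance scoring) can be sketched in a few lines of Python with NumPy. This is a generic rendering of that recipe under our own parameter choices (projection dimension, Gaussian projection matrix, three-sigma thresholding), not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def anomaly_scores(X, k=10):
        # Score rows of a numeric log-feature matrix X (n x d):
        # random-project to k dimensions, then take the Mahalanobis
        # distance of each projected row from the projected mean.
        n, d = X.shape
        R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))  # projection matrix
        Y = X @ R
        mu = Y.mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(Y, rowvar=False))   # pseudo-inverse for stability
        diff = Y - mu
        return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

    # Toy usage: 1000 "normal" log lines plus 5 injected outliers.
    X = rng.normal(size=(1000, 50))
    X = np.vstack([X, rng.normal(loc=8.0, size=(5, 50))])
    scores = anomaly_scores(X)
    threshold = scores.mean() + 3 * scores.std()
    print(np.where(scores > threshold)[0])  # indices flagged as anomalous

Because no labels are used anywhere, this matches the abstract's point that the system needs no clean training data; the threshold rule here is simply one common choice.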
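
Second, the lightweight symmetric-key aggregation protocol of Dimitriou suggests the classic pairwise-masking pattern, in which meters blind their readings with shared pseudorandom masks that cancel in the sum. The Python sketch below shows only that general pattern; it is a common construction in this literature and should not be read as the paper's exact protocol, and the key setup and three-meter toy run are ours.

    import hashlib

    MOD = 2**32  # public modulus, large enough for the aggregate

    def mask(key: bytes, epoch: int) -> int:
        # Derive a per-epoch pseudorandom mask from a shared pairwise key.
        digest = hashlib.sha256(key + epoch.to_bytes(8, 'big')).digest()
        return int.from_bytes(digest[:4], 'big')

    def blinded_reading(meter_id, reading, pairwise_keys, epoch):
        # Meter i adds the mask shared with j for j > i and subtracts it
        # for j < i, so the masks cancel in the sum of all blinded values.
        value = reading
        for other_id, key in pairwise_keys.items():
            m = mask(key, epoch)
            value = (value + m) % MOD if other_id > meter_id else (value - m) % MOD
        return value

    # Toy run with three meters sharing pairwise keys.
    keys = {(i, j): f'k{i}{j}'.encode() for i in range(3) for j in range(i + 1, 3)}
    readings = [17, 42, 5]
    blinded = [
        blinded_reading(i, readings[i],
                        {j: keys[tuple(sorted((i, j)))] for j in range(3) if j != i},
                        epoch=1)
        for i in range(3)
    ]
    print(sum(blinded) % MOD, sum(readings))  # both print 64

An honest-but-curious aggregator sees only blinded values, yet recovers the exact sum; limiting each meter to a few neighbors, as the abstract describes, keeps the per-meter work constant.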

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


ACM CHI Conference on Human Factors in Computing Systems - Toronto, Canada

ACM CHI Conference on Human Factors in Computing Systems


The ACM CHI Conference on Human Factors in Computing Systems (CHI 2014) was held in Toronto, Canada, from April 26 to May 1. The papers shown below were selected based on their relevance to Human Behavior and Cybersecurity, and were presented in various sessions including Social Local Mobile; Privacy; Risks and Security; and Authentication and Passwords.

Session: Social Local Mobile

Let's Do It at My Place Instead? Attitudinal and Behavioral Study of Privacy in Client-Side Personalization
Alfred Kobsa, Bart P. Knijnenburg, Benjamin Livshits
Many users welcome personalized services, but are reluctant to provide the information about themselves that personalization requires. Performing personalization exclusively at the client side (e.g., on one's smartphone) may conceptually increase privacy, because no data is sent to a remote provider. But does client-side personalization (CSP) also increase users' perception of privacy? We developed a causal model of privacy attitudes and behavior in personalization, and validated it in an experiment that contrasted CSP with personalization at three remote providers: Amazon, a fictitious company, and the "Cloud". Participants gave roughly the same amount of personal data and tracking permissions in all four conditions. A structural equation modeling analysis reveals the reasons: CSP raises the fewest privacy concerns, but does not lead in terms of perceived protection nor in resulting self-anticipated satisfaction and thus privacy-related behavior. Encouragingly, we found that adding certain security features to CSP is likely to raise its perceived protection significantly. Our model predicts that CSP will then also sharply improve on all other privacy measures.
Keywords: Privacy; personalization; client-side; structural equation modeling (SEM); attitudes; behaviors (ID#:14-3342)
URL: http://dl.acm.org/citation.cfm?id=2557102 or http://dx.doi.org/10.1145/2556288.2557102

The Effect of Developer-Specified Explanations for Permission Requests on Smartphone User Behavior
Joshua S Tan, Khanh Nguyen, Michael Theodorides, Heidi Negron-Arroyo, Christopher Thompson, Serge Egelman, David Wagner
In Apple's iOS 6, when an app requires access to a protected resource (e.g., location or photos), the user is prompted with a permission request that she can allow or deny. These permission request dialogs include space for developers to optionally include strings of text to explain to the user why access to the resource is needed. We examine how app developers are using this mechanism and the effect that it has on user behavior. Through an online survey of 772 smartphone users, we show that permission requests that include explanations are significantly more likely to be approved. At the same time, our analysis of 4,400 iOS apps shows that the adoption rate of this feature by developers is relatively small: around 19 % of permission requests include developer-specified explanations. Finally, we surveyed 30 iOS developers to better understand why they do or do not use this feature.
Keywords: Smartphones; Privacy; Access Control; Usability (ID#:14-3343)
URL: http://dl.acm.org/citation.cfm?id=2557400 or http://dx.doi.org/10.1145/2556288.2557400

Effects of Security Warnings and Instant Gratification Cues on Attitudes toward Mobile Websites
Bo Zhang, Mu Wu, Hyunjin Kang, Eun Go, S. Shyam Sundar
In order to address the increased privacy and security concerns raised by mobile communications, designers of mobile applications and websites have come up with a variety of warnings and appeals. While some interstitials warn about potential risk to personal information due to an untrusted security certificate, others attempt to take users' minds away from privacy concerns by making tempting, time-sensitive offers. How effective are they? We conducted an online experiment (N = 220) to find out. Our data show that both these strategies raise red flags for users--appeals to instant gratification make users more leery of the site and warnings make them perceive greater threat to personal data. Yet, users tend to reveal more information about their social media accounts when warned about an insecure site. This is probably because users process these interstitials based on cognitive heuristics triggered by them. These findings hold important implications for the design of cues in mobile interfaces.
Keywords: Online privacy; security; information disclosure; trust; mobile interface. (ID#:14-3344)
URL: http://dl.acm.org/citation.cfm?id=2557347 or http://dx.doi.org/10.1145/2556288.2557347

Session: Privacy

Leakiness and Creepiness in App Space: Perceptions of Privacy and Mobile App Use
Irina A Shklovski, Scott D. Mainwaring, Halla Hrund Skuladottir, Hoskuldur Borgthorsson
Mobile devices are playing an increasingly intimate role in everyday life. However, users can be surprised when informed of the data collection and distribution activities of apps they install. We report on two studies of smartphone users in western European countries, in which users were confronted with app behaviors and their reactions assessed. Users felt their personal space had been violated in "creepy" ways. Using Altman's notions of personal space and territoriality, and Nissenbaum's theory of contextual integrity, we account for these emotional reactions and suggest that they point to important underlying issues, even when users continue using apps they find creepy.
Keywords: Mobile devices; data privacy; bodily integrity; learned helplessness; creepiness (ID#:14-3345)
URL: http://dl.acm.org/citation.cfm?id=2557421 or http://dx.doi.org/10.1145/2556288.2557421

A Field Trial of Privacy Nudges for Facebook
Yang Wang, Pedro Giovanni Leon, Alessandro Acquisti, Lorrie Faith Cranor, Alain Forget, Norman Sadeh
Anecdotal evidence and scholarly research have shown that Internet users may regret some of their online disclosures. To help individuals avoid such regrets, we designed two modifications to the Facebook web interface that nudge users to consider the content and audience of their online disclosures more carefully. We implemented and evaluated these two nudges in a 6-week field trial with 28 Facebook users. We analyzed participants' interactions with the nudges, the content of their posts, and opinions collected through surveys. We found that reminders about the audience of posts can prevent unintended disclosures without major burden; however, introducing a time delay before publishing users' posts can be perceived as both beneficial and annoying. On balance, some participants found the nudges helpful while others found them unnecessary or overly intrusive. We discuss implications and challenges for designing and evaluating systems to assist users with online disclosures.
Keywords: Behavioral bias; Online disclosure; Social media; Facebook; Nudge; Privacy; Regret; Soft-paternalism (ID#:14-3346)
URL: http://dl.acm.org/citation.cfm?id=2557413 or http://dx.doi.org/10.1145/2556288.2557413

Session: Risks and Security

Betrayed By Updates: How Negative Experiences Affect Future Security
Kami E. Vaniea, Emilee Rader, Rick Wash
Installing security-relevant software updates is one of the best computer protection mechanisms. However, users do not always choose to install updates. Through interviewing non-expert Windows users, we found that users frequently decide not to install future updates, regardless of whether they are important for security, after negative experiences with past updates. This means that even non-security updates (such as user interface changes) can impact the security of a computer. We discuss three themes impacting users' willingness to install updates: unexpected new features in an update, the difficulty of assessing whether an update is "worth it", and confusion about why an update is necessary.
Keywords: Software Updates; Human Factors; Security (ID#:14-3347)
URL: http://dl.acm.org/citation.cfm?id=2557275 or http://dx.doi.org/10.1145/2556288.2557275

Session: Authentication and Passwords

Can Long Passwords be Secure and Usable?
Richard Shay, Saranga Komanduri, Adam L. Durity, Phillip (Seyoung) Huh, Michelle L. Mazurek, Sean M. Segreti, Blase Ur, Lujo Bauer, Nicolas Christin, Lorrie Faith Cranor
To encourage strong passwords, system administrators employ password-composition policies, such as a traditional policy requiring that passwords have at least 8 characters from 4 character classes and pass a dictionary check. Recent research has suggested, however, that policies requiring longer passwords with fewer additional requirements can be more usable and in some cases more secure than this traditional policy. To explore long passwords in more detail, we conducted an online experiment with 8,143 participants. Using a cracking algorithm modified for longer passwords, we evaluate eight policies across a variety of metrics for strength and usability. Among the longer policies, we discover new evidence for a security/usability tradeoff, with none being strictly better than another on both dimensions. However, several policies are both more usable and more secure than the traditional policy we tested. Our analyses additionally reveal common patterns and strings found in cracked passwords. We discuss how system administrators can use these results to improve password-composition policies.
Keywords: Passwords; Password-composition policies; Security policy; Usable security; Authentication (ID#:14-3348)
URL: http://dl.acm.org/citation.cfm?id=2557377 or http://dx.doi.org/10.1145/2556288.2557377
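
The "traditional" policy named in the Shay et al. abstract (at least 8 characters, all 4 character classes, and a dictionary check) is easy to express in code, which also makes its weaknesses visible. The Python sketch below is a hedged illustration: the tiny dictionary and the exact dictionary-check rule are our stand-ins, and the length-only checker mirrors the kind of longer-password alternative the study compares against rather than any specific policy from the paper.

    import string

    # Hypothetical wordlist; real deployments use a large dictionary.
    DICTIONARY = {"password", "letmein", "dragon", "monkey"}

    def passes_comp8(pw: str) -> bool:
        # Traditional policy: >= 8 chars, all 4 character classes,
        # and the lowercased letters-only core must not be a dictionary word.
        classes = [
            any(c.islower() for c in pw),
            any(c.isupper() for c in pw),
            any(c.isdigit() for c in pw),
            any(c in string.punctuation for c in pw),
        ]
        core = ''.join(c for c in pw.lower() if c.isalpha())
        return len(pw) >= 8 and all(classes) and core not in DICTIONARY

    def passes_basic16(pw: str) -> bool:
        # A length-only alternative: >= 16 characters, no other requirements.
        return len(pw) >= 16

    print(passes_comp8("P@ssw0rd!"))   # True: meets every composition rule
    print(passes_comp8("Dragon#1"))    # False: letters-only core is "dragon"
    print(passes_basic16("correct horse battery staple"))  # True

Note how "P@ssw0rd!" satisfies every composition rule while remaining predictable, which is one reason such studies find that longer, simpler policies can be both more usable and more secure.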

An Implicit Author Verification System for Text Messages Based on Gesture Typing Biometrics
Ulrich Burgbacher, Klaus H. Hinrichs
Gesture typing is a popular text input method used on smartphones. Gesture keyboards are based on word gestures that subsequently trace all letters of a word on a virtual keyboard. Instead of tapping a word key by key, the user enters a word gesture with a single continuous stroke. In this paper, we introduce an implicit user verification approach for short text messages that are entered with a gesture keyboard. We utilize the way people interact with gesture keyboards to extract behavioral biometric features. We propose a proof-of-concept classification framework that learns the gesture typing behavior of a person and is able to decide whether a gestured message was written by the legitimate user or an imposter. Data collected from gesture keyboard users in a user study is used to assess the performance of the classification framework, demonstrating that the technique has considerable promise.
Keywords: Gesture keyboards; implicit authentication; behavioral biometrics; mobile phone security (ID#:14-3349)
URL: http://dl.acm.org/citation.cfm?id=2557346 or http://dx.doi.org/10.1145/2556288.2557346
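
As a hedged sketch of the general approach (the paper's actual features and classifier differ), one can derive simple behavioral features from recorded gesture traces and train a one-class model on the legitimate user's samples, flagging outliers as impostors:

    import numpy as np
    from sklearn.svm import OneClassSVM

    def gesture_features(trace):
        """trace: (n, 3) array of (x, y, t) touch samples for one word gesture.
        Returns illustrative behavioral features: duration, path length,
        mean speed, and peak speed."""
        xy, t = trace[:, :2], trace[:, 2]
        seg = np.linalg.norm(np.diff(xy, axis=0), axis=1)
        speed = seg / np.maximum(np.diff(t), 1e-9)
        return np.array([t[-1] - t[0], seg.sum(), speed.mean(), speed.max()])

    def make_trace(rng, n=50):
        """Synthetic stand-in for a recorded gesture trace."""
        xy = np.cumsum(rng.normal(size=(n, 2)), axis=0)
        t = np.cumsum(rng.uniform(0.01, 0.05, size=n))
        return np.column_stack([xy, t])

    rng = np.random.default_rng(0)
    X = np.array([gesture_features(make_trace(rng)) for _ in range(40)])
    clf = OneClassSVM(nu=0.1, gamma="scale").fit(X)  # train on legitimate user
    print(clf.predict(X[:5]))  # +1 = accepted as the legitimate user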


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


China Summit & International Conference on Signal and Information Processing (ChinaSIP) - Xi'an, China

IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP)


2014 IEEE China Summit & International Conference on Signal and Information Processing (ChinaSIP) was held 9-13 July 2014 in Xi'an, China. Research includes such topics as steganography, forensics, secure burst transmissions, retrieval of encrypted JPEG images, signal reconstruction, target signal detection, synthetic aperture radar, and much more.

  • Xinpeng Zhang; Hang Cheng, "Histogram-based retrieval for encrypted JPEG images," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 446-449, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889282 This work proposes a novel scheme for encrypted JPEG image retrieval that includes an image encryption phase and unsupervised/supervised retrieval phases. In this scheme, encrypted images are produced by permuting DCT coefficients and are transmitted to a database server. Given an encrypted query image, although the server does not know the plaintext content, it can still obtain the histogram at each frequency position. After calculating the distances between the histograms of the encrypted query image and each database image, the server can return the encrypted images whose plaintext content is similar to the query image according to the integrated distances. If a training image set is available, the retrieval results can also be determined by conditional probabilities calculated from a supervised mechanism. [An illustrative histogram-matching sketch appears at the end of this list.]
    Keywords: cryptography; discrete cosine transforms; image coding; image retrieval; DCT coefficients; database image; encrypted JPEG image retrieval; encrypted images; encrypted query image; histogram-based retrieval; image encryption; unsupervised retrieval phases; Databases; Encryption; Feature extraction; Histograms; Servers; Transform coding; Histogram; Image encryption; Image retrieval (ID#:14-3217)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889282&isnumber=6889177
  • Jia Duan; Lei Zhang; Yifeng Wu; Mengdao Xing; Min Wu, "A Novel Signal Reconstructing Method For Radar Targets," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 175-178, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889226 In this paper, a novel signal reconstruction method for radar targets is proposed based on the attributed scattering center model. By extracting the attributed parameters, a large amount of target data can be represented by a small number of attributed parameters; the data volume is thus compressed sharply, which frees computer memory for storage. After extraction, a target discrimination method is presented that applies a CFAR threshold to the energy of the extracted attributed scattering centers, by which weakly distributed scattering centers with relatively high total energy can be discriminated from noise at low SNRs. Experimental results validate the effectiveness of the proposed signal reconstruction method.
    Keywords: radar signal processing; radar target recognition; scattering; signal reconstruction; CFAR threshold; SNR; attributed parameters extraction; attributed scattering center model; radar target; signal reconstruction method; target discriminating method; weak distributed scattering center; Image reconstruction; Noise; Parameter estimation; Radar imaging; Scattering; Signal reconstruction; attributed scattering center; radar images (ID#:14-3218)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889226&isnumber=6889177
  • Ziqiang Meng; Yachao Li; Mengdao Xing; Zheng Bao, "Imaging Of Missile-Borne Bistatic Forward-Looking SAR," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 179-183, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889227 As a special imaging mode, missile-borne bistatic forward-looking synthetic aperture radar (MBFL-SAR) has many advantages over mono-static missile-borne SAR and airborne SAR in two-dimensional (2-D) imaging of targets in the straight-ahead position. The high velocity and acceleration in this configuration make it difficult to obtain the 2-D frequency spectrum of the target echo signal, which complicates subsequent imaging processing. A new imaging algorithm for the MBFL-SAR configuration based on series reversion is proposed in this paper. The 2-D frequency spectrum obtained through this method supports effective range compression and range cell migration correction (RCMC). Finally, simulations of point targets and comparison results confirm the efficiency of the proposed algorithm.
    Keywords: airborne radar; military radar; missiles; radar imaging; synthetic aperture radar; 2D frequency spectrum; 2D imaging; MBFL-SAR imaging mode; RCMC; airborne SAR; missile-borne bistatic forward-looking SAR imaging; mono-static missile-borne SAR; point target simulation; range cell migration correction; range compression; series reversion; straight-ahead position; synthetic aperture radar; target echo signal; two-dimensional imaging capability; Algorithm design and analysis; Azimuth; Data models; Frequency-domain analysis; Imaging; Synthetic aperture radar; 2-D frequency spectrum; MBFL-SAR; Method of series reversion; SAR imaging (ID#:14-3219)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889227&isnumber=6889177
  • Xun Chao Cong; Rong Qiang Zhu; Yu Lin Liu; Qun Wan, "Feature Extraction of SAR Target In Clutter Based On Peak Region Segmentation And Regularized Orthogonal Matching Pursuit," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 189-193, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889229 Feature extraction in clutter is a challenging problem in SAR target recognition because of the difficulty of distinguishing the target signature from the background. In this paper, a new feature extraction algorithm based on automated peak region segmentation (PRS) and regularized orthogonal matching pursuit (ROMP), called PRS-ROMP, is presented. It combines processing in both the signal domain and the image domain. First, the proposed method uses PRS and a parametric model (PM) to obtain the positions and atoms of the target's strong scattering centers. The positions and atoms of weak scattering centers are then acquired by a sparse reconstruction algorithm and the PM for the residual region. Using all atoms of the strong and weak scattering centers, the final amplitude estimate is obtained by least squares (LS). Experimental results on electromagnetic calculation data in clutter validate the proposed target feature extraction method.
    Keywords: amplitude estimation; feature extraction; image recognition; image reconstruction; image segmentation; iterative methods; least squares approximations; radar clutter; radar imaging; synthetic aperture radar; time-frequency analysis; LS; PM; PRS; ROMP technique; SAR target recognition; amplitude estimation; automated peak region segmentation; clutter; electromagnetic calculation; parametric model; regularized orthogonal matching pursuit technique; sparse reconstruction algorithm; target feature extraction method; target scattering center; Accuracy; Clutter; Feature extraction; Matching pursuit algorithms; Scattering; Signal processing algorithms; Sparse matrices; ROMP; SAR; automated peak region segmentation; feature extraction; parametric model (ID#:14-3220)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889229&isnumber=6889177
  • Azouz, A.; Zhenfang Li, "Improved Phase Gradient Autofocus Algorithm Based On Segments Of Variable Lengths And Minimum Entropy Phase Correction," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 194-198, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889230 In this paper, an improved phase gradient autofocus (PGA) motion compensation (MOCO) approach is proposed for unmanned aerial vehicle (UAV) synthetic aperture radar (SAR) imagery. The approach is implemented in two steps. The first step determines the segment lengths based on the number of good-quality scatterers and the motion errors obtained from navigation data. In the second step, a novel minimum-entropy phase correction based on Discrete Cosine Transform (DCT) coefficients is proposed, in which the phase error estimated by PGA is transformed into DCT coefficients. The entropy of the focused image is used as the optimization function of the DCT coefficients to improve final image quality. Finally, real-data experiments show that the proposed approach is suitable for highly precise UAV SAR imaging.
    Keywords: autonomous aerial vehicles; discrete cosine transforms; gradient methods; minimum entropy methods; motion compensation; radar imaging; synthetic aperture radar; DCT coefficients; PGA algorithm; SAR imagery; UAV-SAR imagery; discrete cosine transform; improved phase gradient autofocus algorithm; minimum entropy phase correction; motion errors; navigation data; optimization function; synthetic aperture radar; unmanned aerial vehicle; variable length segment; Azimuth; Electronics packaging; Entropy; Image segmentation; Motion segmentation; Navigation; Synthetic aperture radar; Motion compensation (MOCO); phase gradient autofocus (PGA) (ID#:14-3221)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889230&isnumber=6889177
  • Sheng-juan Cheng; Wen-Qin Wang; Huaizong Shao, "MIMO OFDM chirp waveform design with spread spectrum modulation," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 208-211, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889233 This paper proposes an approach to designing orthogonal multiplexing waveforms for multiple-input multiple-output (MIMO) radar. The designed scheme incorporates direct sequence spread spectrum (DSSS) coding techniques into orthogonal frequency division multiplexing (OFDM) chirp signaling; we call it spread spectrum coded OFDM chirp (SSCOC) signaling. The performance of the signals is analyzed with the cross-ambiguity function (CAF). In the experiments, the influence of spread spectrum code length and type, as well as the bandwidth and duration of the OFDM chirp waveforms, on the CAF is discussed. It is verified that the proposed design scheme ensures these waveforms remain orthogonal on receive and achieve a large time-bandwidth product, which is beneficial for separating closely spaced targets with ultra-high resolution.
    Keywords: MIMO radar; OFDM modulation; spread spectrum communication; MIMO OFDM chirp waveform design; cross ambiguity function; direct sequence spread spectrum coding technique; multiple input multiple output radar; orthogonal frequency division multiplexing chirp signaling; orthogonal multiplexing waveform; spread spectrum code; spread spectrum modulation; Bandwidth; Chirp; Gold; MIMO; OFDM; Synthetic aperture radar; Cross-Ambiguity Function (CAF); Direct Sequence Spread Spectrum (DSSS); Multiple-input Multiple-output (MIMO); Orthogonal frequency division multiplexing (OFDM); Spread Spectrum Coded OFDM Chirp (SSCOC) (ID#:14-3222)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889233&isnumber=6889177
  • Xiaofei Wang; Yanmeng Guo; Qiang Fu; Yonghong Yan, "Reverberation Robust Two-Microphone Target Signal Detection Algorithm With Coherent Interference," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 237-241, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889239 In this paper, a reverberation-robust Target Signal Detection (TSD) algorithm using two microphones is proposed. Most traditional TSD algorithms assume a free sound field and a close-talking scene combined with a multichannel system, and they lack robustness in reverberant and noisy environments. The proposed TSD algorithm is based on the Beam-to-Reference Ratio (BRR), and a novel estimator, the Direct-to-Reverberate Ratio (DRR), is introduced to extend the basic assumption to reverberant, distant-talking scenes. Spatial correlation information between microphones is used to estimate the DRR, to revise the threshold on each Time-Frequency (T-F) block, and to form a full-band likelihood using soft-decision information. Experimental results show that the proposed method performs robustly in different reverberant environments with coherent interference when the target signal arrives from an a priori known direction-of-arrival (DOA) in a distant-talking scene.
    Keywords: direction-of-arrival estimation; microphones; object detection; reverberation; signal detection; beam-to-reference ratio; coherent interference; direct-to-reverberate ratio; direction-of-arrivals; distant-talking scene; estimator; full-band likelihood; microphones; multichannel system; reverberant assumption; reverberation robust target signal detection; reverberation robust two-microphone target signal detection algorithm; soft-decision information; spatial correlation information; time-frequency block; Interference; Microphones; Noise; Reverberation; Robustness; Speech; Speech enhancement; Direct-to-Reverberate Ratio; Reverberation Robust; Speech Enhancement; Target Signal Detection (ID#:14-3223)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889239&isnumber=6889177
  • Aggarwal, H.K.; Majumdar, A., "Compressive Sensing Multi-Spectral Demosaicing From Single Sensor Architecture," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 334-338, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889259 This paper addresses the recovery of multi-spectral images from single-sensor cameras using compressed sensing (CS) techniques. It is exploratory work, since this particular problem has not been addressed before. We considered two types of sensor arrays, uniform and random, and two recovery approaches, Kronecker CS (KCS) and group-sparse reconstruction. Two sets of experiments were carried out. From the first set we find that both KCS and group-sparse recovery yield good results for random sampling, but for uniform sampling only KCS yields good results. In the second set we compared our proposed techniques with state-of-the-art methods and find that our methods yield considerably better results.
    Keywords: cameras; compressed sensing; image reconstruction; image sampling; image segmentation; image sensors; sensor arrays; KCS approach; Kronecker CS approach; compressed sensing technique; group-sparse reconstruction; multispectral demosaicing; multispectral image recovery; random sampling; sensor array; single sensor architecture; single sensor camera; Cameras; Compressed sensing; Filtering algorithms; Image reconstruction; Signal processing; Transforms; Compressed Sensing; Demosaicing; Multi-spectral Imaging (ID#:14-3224)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889259&isnumber=6889177
  • Bo Li; Yuchao Dai; Mingyi He; van den Hengel, A., "A Relaxation Method To Articulated Trajectory Reconstruction From Monocular Image Sequence," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 389-393, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889270 In this paper, we present a novel method for articulated trajectory reconstruction from a monocular image sequence. We propose a relaxation-based objective function that utilises both smoothness and geometric constraints, posing articulated trajectory reconstruction as a non-linear optimization problem. The main advantage of this approach is that it retains the reconstructive power of the original algorithm while improving its robustness to the inevitable noise in the data. Furthermore, we present an effective approach to estimating the parameters of our objective function. Experimental results on the CMU motion capture dataset show that our proposed algorithm is effective.
    Keywords: image motion analysis; image reconstruction; image sequences; nonlinear programming; CMU motion capture dataset; articulated trajectory reconstruction; geometric constraint; monocular image sequence; nonlinear optimization problem; relaxation method; relaxation-based objective function; Cameras; Educational institutions; Image reconstruction; Linear programming; Noise; Three-dimensional displays; Trajectory; articulated trajectory; noise; relaxation; robust; smoothness (ID#:14-3225)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889270&isnumber=6889177
  • Cong Liu; Hefei Ling; Fuhao Zou; Lingyu Yan; Xinyu Ou, "Efficient Digital Fingerprints Tracing," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 431-435, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889279 Digital fingerprinting is a promising approach to protecting multimedia content from unauthorized redistribution. However, large scale and high dimensionality cause existing fingerprint detection methods to fail to trace traitors efficiently. To handle this problem, we propose a novel local- and global-structure-preserving hashing to conduct fast fingerprint detection. Applying the hashing method, we obtain a low-dimensional, neighborhood-preserving hash code for each fingerprint. Through hash codes, we can find the nearest neighbors of the extracted fingerprint, thereby tracing the real traitors within a small range. These properties make the proposed approach efficient at tracing the real traitors. Extensive experiments demonstrate that the proposed approach outperforms traditional linear-scan detection methods in terms of efficiency. [An illustrative Hamming-distance search sketch appears at the end of this list.]
    Keywords: cryptography; fingerprint identification; digital fingerprint tracing; fingerprint detection; fingerprint extraction; linear scan detection methods; low-dimensional neighborhood-preserving hash code; multimedia content protection; Correlation; Fingerprint recognition; Forensics; Indexes; Multimedia communication; Training; Watermarking; digital fingerprinting; hash-based similarity search; multimedia security; neighborhood preserving hashing (ID#:14-3226)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889279&isnumber=6889177
  • Lingyu Yan; Hefei Ling; Cong Liu; Xinyu Ou, "Hashing based feature aggregating for fast image copy retrieval," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 441-445, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889281 Recently, methods based on visual words have become very popular in near-duplicate retrieval and content identification. However, obtaining the visual vocabulary by quantization is very time-consuming and does not scale to large databases. In this paper, we propose a fast feature aggregation method for image representation that uses machine-learning-based hashing to achieve fast feature aggregation. Since the machine-learning-based hashing effectively preserves the neighborhood structure of the data, it yields visual words with strong discriminability. Furthermore, the generated binary codes make building the image representation low-complexity, efficient, and scalable to large databases. The evaluation shows that our approach significantly outperforms state-of-the-art methods.
    Keywords: data structures; database management systems; image representation; image retrieval; learning (artificial intelligence); binary codes; content identification; fast feature aggregating method; feature aggregation; hashing based feature; image copy retrieval; image representation; large scale database; machine learning based hashing; near-duplicate retrieval; neighborhood data structure; visual vocabulary; visual words; Binary codes; Feature extraction; Histograms; Image representation; Linear programming; Training; Visualization; Feature Aggregation; Image Copy Retrieval; Machine Learning base hashing; Visual Words (ID#:14-3227)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889281&isnumber=6889177
  • Tianzhuo Wang; Xiangwei Kong; Yanqing Guo; Bo Wang, "Exposing the Double Compression In MP3 Audio By Frequency Vibration," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 450-454, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889283 A novel approach is presented to detect double-compressed MP3 audio by frequency vibration. Analyzing the effect of double compression on the MDCT (Modified Discrete Cosine Transform) coefficients of MP3 audio, we propose a simple feature called the frequency vibration value (FVV) to measure the vibration caused by double compression. Experimental results on a challenging dataset show that our method outperforms most existing methods in double MP3 compression detection, especially when the second bitrate is higher than the first. In addition, this technique can also estimate the original bitrate of a double-compressed MP3.
    Keywords: audio coding; data compression; discrete cosine transforms; signal detection; FVV; MDCT; double MP3 compression detection; double compressed MP3 audio detection; double compression effect analysis; frequency vibration value; modified discrete cosine transform; second bitrate; Accuracy; Digital audio players; Feature extraction; Frequency measurement; Multimedia communication; Transforms; Vibrations; MDCT coefficients; MP3; audio forensics; double compression detection; frequency vibration (ID#:14-3228)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889283&isnumber=6889177
  • Xiaohua Li; Zifan Zhang, "Exploit the Scale Of Big Data For Data Privacy: An Efficient Scheme Based On Distance-Preserving Artificial Noise And Secret Matrix Transform," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 500-504, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889293 In this paper we show that the extensive results in blind/non-blind channel identification developed within the signal-processing-in-communications community can play an important role in guaranteeing big data privacy. It is widely believed that the sheer scale of big data makes most conventional data privacy techniques ineffective. In contrast to this pessimistic common belief, we propose a scheme that exploits that scale to guarantee privacy. The scheme jointly uses artificial noise and a secret matrix transform to scramble the source data. Desirable data utility can be supported because the noise and the transform preserve some important geometric properties of the source data. With a comprehensive privacy analysis, we use blind/non-blind channel identification theory to show that neither the secret transform matrix nor the source data can be estimated from the scrambled data; the artificial noise and the sheer scale of big data are critical for this purpose. Simulations of collaborative filtering are conducted to demonstrate the proposed scheme. [A toy distance-preservation demonstration appears at the end of this list.]
    Keywords: Big Data; data privacy; transforms; big data privacy; blind-nonblind channel identification theories; collaborative filtering; distance-preserving artificial noise; privacy analysis; secret matrix transform; source data scrambling; Accuracy; Big data; Data privacy; Estimation; Noise; Privacy; Transforms; big data; blind source separation; channel identification; privacy; signal processing (ID#:14-3229)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889293&isnumber=6889177
  • Yang Wang; Dan-Feng Zhao; Xi Liao, "Simplified Maximum Likelihood Detection For Multi-Beam Satellite Systems Using Group-Wise Interference Cancellation," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 559-562, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889305 The ideal joint detection method for multi-beam satellite systems is maximum likelihood (ML) detection, but its complexity increases exponentially with the number of spot beams. A simplified ML detection scheme for multi-beam satellite systems is proposed in this paper. The proposed algorithm is based on grouping the spot beams: ML detection is applied within groups after a crucial group detection and interference cancellation step. Performance is improved by keeping multiple candidates for each group and performing a final constrained ML detection. Simulation results verify that the proposed algorithm reduces the computational complexity significantly while limiting the performance loss to within 0.2 dB of ML detection. In addition, the complexity of the proposed algorithm is 60 percent lower than that of a multistage group detection algorithm.
    Keywords: interference suppression; maximum likelihood detection; radiofrequency interference; satellite communication; crucial group detection; groupwise interference cancellation; ideal joint detection method; maximum likelihood detection; multibeam satellite systems; Computational complexity; Interference cancellation; Maximum likelihood detection; Partitioning algorithms; Satellites; Simulation; group-wise interference cancellation; maximum likelihood detection; multi-beam satellites; satellite communications (ID#:14-3230)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889305&isnumber=6889177
  • Chao Jin; Rangding Wang; Diqun Yan; Pengfei Ma; Kaiyun Yang, "A Novel Detection Scheme For MP3Stego With Low Payload," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 602-606, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889314 MP3Stego is a typical steganographic tool for MP3 audio. Although many researchers have made every effort to attack it, the performance of their approaches can still be improved, especially at low embedding rates. In this paper, we propose a scheme for detecting MP3Stego at low embedding rates. Based on an investigation of MP3Stego's embedding principle and the observed alteration of quantized MDCT coefficients (QMDCTs), the one-step transition probabilities of the differences of quantized MDCT coefficients were extracted as features. Finally, an SVM was used to construct a classification model from the extracted features. Experimental results show that our scheme can effectively detect MP3Stego steganography at low payload.
    Keywords: audio signal processing; discrete cosine transforms; feature extraction; probability; signal detection; steganography; support vector machines; MP3 audios; MP3Stego; QMDCTs; SVM; classification model; feature extraction; low embedding rate detection scheme; low payload; one-step transition probability; quantized MDCT coefficients; steganographic tool; Bit rate; Digital audio players; Encoding; Feature extraction; Payloads; Probability; Transform coding; MP3; low embedding rate; steganalysis; steganography; transition probability (ID#:14-3231)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889314&isnumber=6889177
  • Xiaochun Cao; Na Liu; Ling Du; Chao Li, "Preserving Privacy For Video Surveillance Via Visual Cryptography," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 607-610, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889315 Video surveillance, widely installed in public areas, poses a significant threat to privacy. This paper proposes a new privacy-preserving method based on a Generalized Random-Grid based Visual Cryptography Scheme (GRG-based VCS). We first separate the foreground from the background for each video frame; the foreground pixels contain the most important information to be protected. Each foreground area is encrypted into two shares using the GRG-based VCS. One share is kept as the foreground, and the other is embedded into another, randomly selected frame. The foreground content can only be recovered when the two shares are brought together. Performance evaluation on several surveillance scenarios demonstrates that the proposed method effectively protects sensitive privacy information in surveillance videos. [A minimal random-grid sketch appears at the end of this list.]
    Keywords: cryptography; data protection; video surveillance; GRG-based VCS; foreground pixels; generalized random-grid based visual cryptography scheme; performance evaluation; random selection; sensitive privacy information preservation method; video frame; video surveillance; Cameras; Cryptography; PSNR; Privacy; Video surveillance; Visualization; Random-Grid; Video surveillance; privacy protection; visual cryptography (ID#:14-3232)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889315&isnumber=6889177
  • Xianfeng Zhao; Haibo Yu; Jie Zhu; Yong Deng, "Differential Forensics Of DC-DM Based Watermarking," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 611-615, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889316 Forensics of watermarking may be desired by attackers and business competitors. It aims not only at recognizing the existence of a watermark but also at estimating the algorithm and its parameters. Distortion compensated-dither modulation (DC-DM) is the improved and generalized form of quantization-based embedding widely used in watermarking. It adopts pseudo-random dither sequences and adds back partial quantization noise, so that estimating the algorithm and its parameters seems very difficult. However, as long as a watermarking design does not change embedding locations for each copy or use a private embedding domain, as is typical in current practice, the differential forensics proposed in this paper, which exploits the differences between watermarked copies, can recognize the DC-DM algorithm and estimate its parameters well.
    Keywords: digital forensics; distortion; image watermarking; modulation; parameter estimation; quantisation (signal); DC-DM based watermarking; algorithmic parameter estimation; back partial quantization noise; differential forensics; distortion compensated-dither modulation; private embedding domain; pseudo-random dither sequences; quantization-based embedding; watermarking forensics; Discrete cosine transforms; Forensics; Lattices; Modulation; Noise; Quantization (signal); Watermarking; Forensics; distortion compensation; dither modulation; quantization index modulation; watermarking (ID#:14-3233)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889316&isnumber=6889177
  • Rongrong Ni; Cheng, H.D.; Yao Zhao; Lize Chen, "Adaptive Reversible Watermarking Using Trimmed Prediction And Pixel-Selection-Based Sorting," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 616-620, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889317 Prediction error expansion based on sorting is an important technique in reversible watermarking, since it yields large embedding capacity and low distortion. In this paper, an efficient and adaptive reversible watermarking scheme is proposed based on trimmed prediction and pixel-selection-based sorting. The trimmed prediction excludes one singular pixel from the neighboring region, and a more efficient sorting method is used to achieve lower distortion. A further sorting step that considers context complexity is then proposed to ensure better visual quality. Smooth pixels located in rough areas are assigned high priority for carrying bits via the prediction error expansion method. With these improvements, our method shows better performance in terms of both capacity and distortion.
    Keywords: sorting; watermarking; adaptive reversible watermarking; context complexity; embedding capacity; low distortion; pixel selection sorting; prediction error expansion method; singular pixel; trimmed prediction; visual quality; Complexity theory; Context; Data mining; Payloads; Prediction algorithms; Sorting; Watermarking; Reversible watermarking; complexity; prediction error expansion; sorting; trimmed prediction (ID#:14-3234)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889317&isnumber=6889177
  • Ling Zou; Jichen Yang; Tangsen Huang, "Automatic Cell Phone Recognition From Speech Recordings," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 621-625, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889318 Recording device recognition is an important research field in digital audio forensics. In this paper, we utilize a Gaussian mixture model-universal background model (GMM-UBM) as the classifier in a recording device recognition system, and we examine the performance of Mel-frequency cepstral coefficients (MFCCs) and power-normalized cepstral coefficients (PNCCs) on this problem. Experiments conducted on recordings from 14 cell phones show that MFCCs are more effective than PNCCs for cell phone recognition. We find that identification performance can be improved by stacking MFCCs with an energy feature. We also investigate the effect of speaker mismatch and of de-noising the acoustic features. The highest identification accuracy achieved is 97.71%. [A compact pipeline sketch appears at the end of this list.]
    Keywords: Gaussian processes; audio recording; mobile handsets; speech recognition; GMM-UBM; Gaussian mixture model-universal background model; MFCC; Mel-frequency cepstral coefficients; PNCCs; acoustic feature; automatic cell phone recognition; denoising processing; digital audio forensic; power normalized cepstral coefficients; recording device recognition; speaker mismatch; speech recordings; Accuracy; Cellular phones; Forensics; Object recognition; Speech; Speech recognition; Training; Cell phone identification; Gaussian mixture model-universal background model (GMM-UBM); Mel-frequency cepstral coefficients (MFCCs); Power-normalized cepstral coefficients (PNCCs) (ID#:14-3235)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889318&isnumber=6889177
  • Yuxiao Yang; Jianjiang Zhou; Fei Wang; Chenguang Shi, "An LPI Design For Secure Burst Communication Systems," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 631-635, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889320 An LPI burst communication model based on conditional maximum entropy is presented in this paper. In this model, the conditional entropy of the transmitting moments is maximized, prior data are used as the sample space, and Lagrange multipliers are selected as the optimization variables. Hybrid Chaotic Particle Swarm Optimization (HCPSO) takes the dual program of the conditional maximum entropy problem as its objective function, and the conditional maximum entropy model is ultimately determined through this optimization algorithm. Compared with the usual fixed-threshold method, simulation results show that the conditional maximum entropy method not only yields longer effective communication time but also effectively increases the uncertainty of the transmitting moments. The greater that uncertainty, the better the low-probability-of-intercept performance, so burst communication using the conditional maximum entropy model has better LPI performance.
    Keywords: chaos; maximum entropy methods; particle swarm optimisation; telecommunication security; HCPSO; LPI burst communication model; LPI design; Lagrange multipliers; conditional maximum entropy dual programming; fixed threshold method; hybrid chaotic particle swarm optimization; low probability of intercept performance; objective function; optimization variables; secure burst communication systems; transmitting moment uncertainty; Communication systems; Entropy; Optimization; Particle swarm optimization; Probability density function; Programming; Uncertainty; Burst communication; LPI; maximum entropy technique (ID#:14-3236)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889320&isnumber=6889177
  • Chenguang Shi; Jianjiang Zhou; Fei Wang, "Low Probability Of Intercept Optimization For Radar Network Based On Mutual Information," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 683-687, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889331 This paper investigates the problem of low probability of intercept (LPI) design for radar network systems and presents a novel LPI optimization strategy based on mutual information (MI). Using the radar network system model, the paper first derives the Schleher intercept factor for a radar network. A novel LPI optimization strategy is then proposed in which, for a predefined MI threshold for target parameter estimation, the Schleher intercept factor is minimized by optimizing the transmission power allocation among the netted radars. Moreover, a nonlinear-programming-based genetic algorithm (NPGA) is employed to solve the resulting nonconvex, nonlinear optimization problem. Simulations demonstrate that the proposed scheme is valuable and effective for improving the LPI performance of a radar network.
    Keywords: concave programming; genetic algorithms; nonlinear programming; parameter estimation; probability; radar theory; LPI design; LPI optimization strategy; MI; NPGA; Schleher intercept factor; low probability of intercept optimization; mutual information; netted radars; nonconvex optimization problem; nonlinear optimization problem; nonlinear programming based genetic algorithm; radar network system model; target parameter estimation; transmission power allocation; Optimization; Radar antennas; Radar cross-sections; Radar tracking; Resource management; Signal to noise ratio; Low probability of intercept (LPI); Schleher intercept factor; mutual information (MI); radar network (ID#:14-3237)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889331&isnumber=6889177
  • Qi Ding; Qian He; Zishu He; Blum, R.S., "Diversity Gain For MIMO-OTH Radar Target Detection Under Product Of Complex Gaussian Reflections," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 688-692, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889332 Consider a multiple-input multiple-output skywave over-the-horizon (MIMO-OTH) radar system with M transmit and N receive antennas employing the conventional optimal detector for a single complex Gaussian target. The signal from the m-th transmit antenna reaches the target after being reflected by the ionosphere via Q_m ray paths. Each of these multipath signals bounces off the target and reaches the n-th receiver after being reflected by the ionosphere again via H_mn ray paths. Thus the transmitted signals are reflected once off the target and twice by the ionosphere before arriving at the receive end, and each of these three reflections can be modeled as either complex Gaussian or deterministic. If one or two of the reflections are modeled as complex Gaussian while the others are modeled as deterministic, the largest possible diversity gain is shown to be upper bounded by an expression derived in the paper.
    Keywords: Gaussian processes; MIMO radar; diversity reception; radar detection; MIMO-OTH radar target detection; complex Gaussian reflection product; diversity gain; multiple input multiple output skywave; optimal detector; over-the-horizon radar system; single complex Gaussian target; Diversity methods; Ionosphere; Radar; Radar antennas; Receiving antennas; MIMO-OTH radar; complex Gaussian; diversity gain (ID#:14-3238)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889332&isnumber=6889177
  • Cai Xing-fu; Song Jian-she; Zhang Xiong-mei; Zheng Yong-an, "A Jamming Technique Against SAR Based On Inter-Pulse Subsection Randomly-Shift-Frequency And Its Application," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 785-789, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889352 To protect intelligence at important sites, a jamming method against SAR based on an inter-pulse subsection randomly-shift-frequency technique is put forward. The technique produces several noise-like jamming swathes in the range direction, whose number is determined by the number of inter-pulse subsections; the position and width of each swathe are determined by the frequency shift. The experiments indicate that the number of subsections should not exceed 5, the center of the shift-frequency should not exceed Br/2, and the range of the shift-frequency should not exceed Br/4. Because prior work has concentrated on the jamming technique itself rather than its application, an application model for the technique is also established, following its implementation steps and method. The availability and advantages of the method are demonstrated in simulation experiments.
    Keywords: jamming; radar signal processing; synthetic aperture radar; SAR; inter-pulse subsection randomly-shift-frequency technique; jamming technique; noise-like jamming; range direction; synthetic aperture radar; Apertures; Azimuth; Coherence; Frequency modulation; Jamming; Synthetic aperture radar; Time-frequency analysis; Application; Randomly-shift-frequency; Subsection; Synthetic Aperture Radar (ID#:14-3239)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889352&isnumber=6889177
  • Lichen Zhang; Yingmin Wang; Aiping Huang, "Effect Of Seawater On Radiation Field Of Electric Dipole," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp. 800-803, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889355 The corrosion and corrosion-protection currents of a submarine, modulated by propeller rotation in seawater, produce an extremely low frequency electric field, which has become one of the most important signatures of the signal source. We derive expressions for the electric and magnetic fields of an electric dipole in seawater using the electric Hertz vector, and also give expressions for the standard field. Measurements and numerical simulations show that the standard field amplitude of the submarine at the shaft frequency is substantial, and the shaft-rate electric field can be received at long distances. Submarine detection is therefore probably best carried out using the shaft-rate electric field.
    Keywords: corrosion resistance; electric fields; magnetic fields; object detection; propellers; seawater; shafts; signal sources; underwater vehicles; corrosion resistance current; electric Hertz vector; electric dipole; low frequency electric field; magnetic fields; propeller rotation; seawater effect; shaft-rate electric field; submarine detection; Electric fields; Electromagnetic scattering; Frequency modulation; Shafts; Standards; Underwater vehicles; Vectors; electric dipole; extremely low frequency electric fields; seawater; shaft-rate electric field; submarine detection (ID#:14-3240)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889355&isnumber=6889177
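
Editorial sketches for several of the items above follow; each is an illustrative toy under stated assumptions, not the authors' implementation. For Zhang and Cheng's encrypted-JPEG retrieval, the key observation is that permuting where DCT coefficients sit leaves the value histogram at each frequency position unchanged, so a server can match per-frequency histograms of encrypted images. The block-level permutation, binning, and L1 distance here are assumptions for demonstration:

    import numpy as np

    def band_histograms(dct_blocks, bins):
        """dct_blocks: (n_blocks, 64) DCT coefficients per 8x8 block.
        One value-histogram per frequency position."""
        return np.stack([np.histogram(dct_blocks[:, k], bins=bins)[0]
                         for k in range(dct_blocks.shape[1])])

    def histogram_distance(h1, h2):
        """Integrated L1 distance over all frequency positions."""
        return np.abs(h1 - h2).sum()

    rng = np.random.default_rng(1)
    query = rng.laplace(scale=5.0, size=(1000, 64))  # stand-in DCT data
    encrypted = query[rng.permutation(1000)]         # permutation as 'encryption'
    bins = np.linspace(-50, 50, 41)
    print(histogram_distance(band_histograms(query, bins),
                             band_histograms(encrypted, bins)))  # 0.0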
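
For Liu et al.'s fingerprint tracing, the core operation is a fast nearest-neighbor search in Hamming space over compact binary hash codes. A minimal sketch with 64-bit codes (random data as placeholders):

    import numpy as np

    def popcount64(x):
        """Per-element popcount of a uint64 array via byte unpacking."""
        return np.unpackbits(x.view(np.uint8).reshape(len(x), 8), axis=1).sum(axis=1)

    def hamming_search(db_codes, query_code, k=5):
        """Indices of the k codes nearest to query_code in Hamming distance."""
        return np.argsort(popcount64(db_codes ^ query_code))[:k]

    rng = np.random.default_rng(2)
    db = rng.integers(0, 2**63, size=100000, dtype=np.uint64)
    suspect = db[42]                         # code of the extracted fingerprint
    print(hamming_search(db, suspect, k=3))  # index 42 is returned first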
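
For Li and Zhang's privacy scheme, a secret orthogonal matrix rotates the data without changing pairwise Euclidean distances, while added noise masks values and only mildly perturbs those distances. A toy demonstration under those assumptions:

    import numpy as np

    rng = np.random.default_rng(3)
    X = rng.normal(size=(200, 20))                  # source records
    Q, _ = np.linalg.qr(rng.normal(size=(20, 20)))  # secret orthogonal transform
    X_scrambled = (X + 0.01 * rng.normal(size=X.shape)) @ Q

    def pdist(A):
        d = A[:, None, :] - A[None, :, :]
        return np.sqrt((d ** 2).sum(-1))

    err = np.abs(pdist(X) - pdist(X_scrambled)).max()
    print(f"max pairwise-distance change: {err:.4f}")  # small: utility preserved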
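
For Cao et al.'s GRG-based VCS, the classical random-grid construction (due to Kafri and Keren) conveys the core idea: one share is pure noise, and the second agrees with it on white secret pixels and disagrees on black ones, so superimposing the shares reveals the secret while neither share alone does. This sketch omits the paper's foreground segmentation and frame-embedding steps:

    import numpy as np

    def random_grid_shares(secret, rng):
        """secret: boolean array, True = black (the protected foreground)."""
        share1 = rng.integers(0, 2, size=secret.shape, dtype=np.uint8)
        share2 = np.where(secret, 1 - share1, share1)  # disagree on black pixels
        return share1, share2

    rng = np.random.default_rng(4)
    secret = np.zeros((8, 8), dtype=bool)
    secret[2:6, 2:6] = True                  # a black square as the 'foreground'
    s1, s2 = random_grid_shares(secret, rng)
    stacked = s1 | s2                        # superimpose the transparencies
    print((stacked[2:6, 2:6] == 1).all())    # True: secret region is solid black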
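
For Zou et al.'s recording-device recognition, a GMM classifier scores MFCC features against per-device models. The sketch below assumes librosa for feature extraction and scikit-learn mixtures, and it fits per-device GMMs from scratch rather than adapting a universal background model as a full GMM-UBM system would:

    import librosa
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def mfcc_features(wav_path):
        """Frame-wise MFCCs stacked with a log-energy feature."""
        y, sr = librosa.load(wav_path, sr=16000)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # (frames, 13)
        energy = librosa.feature.rms(y=y).T                   # (frames, 1)
        return np.hstack([mfcc, np.log(energy + 1e-10)])

    def train_device_model(wav_paths, n_components=32):
        X = np.vstack([mfcc_features(p) for p in wav_paths])
        return GaussianMixture(n_components, covariance_type="diag").fit(X)

    def identify(wav_path, models):
        """models: dict mapping device name -> trained GaussianMixture.
        Returns the device with the highest average log-likelihood."""
        X = mfcc_features(wav_path)
        return max(models, key=lambda device: models[device].score(X))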

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Computer Communication and Informatics (ICCCI) - Coimbatore, India

Computer Communication and Informatics (ICCCI) -India


The International Conference on Computer Communication and Informatics (ICCCI), 2014 was held 3-5 January 2014 in Coimbatore, India. The presentations and papers cited here focus on security-related research.

  • Abd El-Aziz, A.A.; Kannan, A., "JSON Encryption," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-6, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921719 JavaScript Object Notation (JSON) is a lightweight data-interchange format that is easy for humans to read and write. Its data format maps directly onto a programming language's built-in data structures, which eliminates translation time and reduces complexity and processing time, while retaining the strengths of XML. It is therefore attractive to shift from XML security to JSON security. In this paper, we present how to shift from XML encryption to JSON encryption.
    Keywords: Arrays; Encryption; Standards; XML; JSON; JSON Encryption; JSON Security; XML; XML Encryption; XML Security (ID#:14-3262)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921719&isnumber=6921705
  • Sridharan, Srivatsan; Shrivastava, Himanshu, "Excogitation of Secure Data Authentication Model For Wireless Body Area Network," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-7, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921738 This paper outlines the implementation of a secure data authentication model for the wireless body area network (WBAN) using a single private key exchanged at configuration time. The need for secure data exchange grows rapidly because the data exchanged concern the details of an ailing patient. Recent research has proposed secure systems for WBAN, but there is strong demand to incorporate further security parameters, and any deployed system must ensure security using a limited amount of resources. This paper addresses these security issues under the limited availability of resources such as power and bandwidth, helping to achieve a more secure and time-efficient system for effective online health monitoring using WBAN. A security system for WBAN with low computational complexity is proposed, using a keyed cryptographic encryption algorithm for secure transactions.
    Keywords: Authentication; Body area networks; Encryption; Monitoring; Servers; Authentication; Encryption; Key Exchange; Security (ID#:14-3263)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921738&isnumber=6921705
  • Patil, Anita; Pandit, Rakesh; Patel, Sachin, "Implementation of Security Framework For Multiple Web Applications," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-7, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921787 Single sign-on (SSO) is an identity management technique that lets users access multiple Web services with one set of credentials. However, when the authentication server is down or unavailable, users cannot access those Web services even if the services themselves are operating normally; enabling continuous use is therefore important in single sign-on. In this paper, we present a security framework to overcome the credential problems of accessing multiple web applications. We explain the system's functionality with respect to authorization and authentication, and consider these methods from the viewpoints of continuity, security, and efficiency, which together make the framework highly secure.
    Keywords: Authentication; Authorization; Computers; Encryption; Informatics; Servers; Identity Management System; MD5; OpenID; proxy signature; single sign-on (ID#:14-3264)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921787&isnumber=6921705
  • Nilesh, Dudhatra; Nagle, Malti, "The New Cryptography Algorithm With High Throughput," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-5, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921739 Cryptography is a very active research area nowadays, since security is a primary requirement for any business. This requires strong, practically unbreakable encryption and decryption algorithms that provide high security together with high throughput. In the real world, many organizations maintain very large databases with strict security requirements, protected by encryption and decryption algorithms such as DES, 3DES, AES, and Blowfish. In this paper, a new cryptography (encryption and decryption) algorithm is first presented and then compared with existing algorithms using metrics such as key-generation throughput, encryption throughput, and decryption throughput. The security the algorithm provides against brute-force attacks is also examined. The algorithm performs a number of arithmetic and logical mathematical operations.
    Keywords: Ciphers; Computers; Encryption; Three-dimensional displays; Throughput; 3DES; AES; Blowfish; Cryptography; DES; Decryption; Encryption; Security (ID#:14-3265)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921739&isnumber=6921705
  • Khan, Aarfa; Shrivastava, Shweta; Richariya, Vineet, "Normalized Worm-hole Local Intrusion Detection Algorithm (NWLIDA)," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-6, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921748 A Mobile Ad-Hoc Network (MANET) is an arrangement of wireless mobile nodes that forms a temporary network for communication without an access point; the wide everyday availability of wireless devices is a major factor in the success of infrastructure-less networks. MANETs face both active and passive attacks at all layers of the network model, and the lack of security measures in their routing protocols attracts intruders. One particular attack, known as the wormhole, is launched by creating tunnels and results in complete disruption of routing paths in a MANET. This paper presents NWLID, the Normalized Wormhole Local Intrusion Detection Algorithm, a modified version of local intrusion detection routing security for mobile ad hoc networks; it comprises an intermediate neighbor-node discovery mechanism, a packet-drop calculator, and a per-node received-packet estimator, followed by an isolation technique for confirmed wormhole nodes. Results show the effect of the wormhole attack on normal behavior and the performance improvement after applying the proposed scheme. The effectiveness of the NWLID algorithm is evaluated using the ns2 network simulator.
    Keywords: Computers; Grippers; Mobile ad hoc networks; Peer-to-peer computing; Routing; Security; Throughput; Ad-hoc Network; Adjoining Node; Black hole; Isolation; Preclusion Ration; Security; Wormhole Tunnel Detection (ID#:14-3266)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921748&isnumber=6921705
  • Balachandar, R.; Manojkumar, S., "Towards Reliable And Secure Resource Scheduling In Clouds," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-5, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921757 Delivering hosted services via the Internet is called cloud computing. It is attractive to business owners because it eliminates the need to plan ahead for provisioning and allows enterprises to start small and increase resources only when service demand rises. The central component that manages the allocation of virtual resources onto a cloud infrastructure's physical resources is the cloud scheduler. Currently available schedulers do not consider users' security and privacy requirements or the properties of the entire cloud infrastructure, which results in major security, privacy, and resilience concerns. Cloud infrastructure should be able to support Internet-scale critical applications, and without strong assurance organizations should not outsource critical applications to the cloud; providing that assurance is a challenging problem. In this paper, we propose a secure and reliable cloud scheduler that considers both user requirements and infrastructure properties, supported by trustworthy data that enables the scheduler to make the right decision. We focus on assuring users that their virtual resources are hosted on physical resources that match their requirements, without requiring users to understand the details of the cloud infrastructure. We present a prototype of the proposed cloud scheduler built on OpenStack.
    Keywords: Cloud computing; Computational modeling; Computers; Physical layer; Privacy; Security; Servers; Access Control; Cloud Computing; Cloud Infrastructure; Open source; Trustworthiness (ID#:14-3267)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921757&isnumber=6921705
  • Sam Suresh J.; Manjushree A.; Eswaran P., "Differential Power Analysis (DPA) Attack On Dual Field ECC Processor For Cryptographic Applications," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-5, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921775 Exchange of private information over a public medium must incorporate a method of data protection against unauthorized access. To enhance data security against DPA attacks in network communication, a dual-field ECC processor supporting all finite field operations is proposed. The ECC processor is evaluated as a hardware design in terms of functionality, scalability, performance, and power consumption. A unified scheme is introduced to accelerate EC arithmetic functions, and the hardware is optimized with a very compact, fully pipelined Galois field arithmetic unit. A key-blinding technique is designed to resist power analysis attacks.
    Keywords: Algorithm design and analysis; Computers; Elliptic curve cryptography; Elliptic curves; Hardware; DPA; Dual fields; ECC; Galois field; Public key cryptography (ID#:14-3268)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921775&isnumber=6921705
  • Beigh, Bilal Maqbool; Peer, M.A., "Performance Evaluation Of Different Intrusion Detection System: An Empirical Approach," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-7, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921740 Easy connectivity to a large number of networks is the main driver behind the development of security measures in networking. People and organizations are keen to share their resources online, but sharing valuable information over the network may not be safe, as it may be hacked by rivals seeking to harm them or to profit from the data. The technique or system that protects our data from theft or intrusion is called an intrusion detection system (IDS). Although many intrusion detection systems are available on the market, users are not well acquainted with their relative performance or are confused by vendor-reported results. In this paper, we attempt to provide a clearer view of the performance of different intrusion detection techniques implemented under the same conditions and with the same parameters, using the DARPA 1999 dataset for experimentation. Lastly, we provide results that will help users assess performance.
    Keywords: Computers; Engines; Informatics; Intrusion detection; Libraries; Probes; Dataset; IDS; intrusion detection; performance; policy; security (ID#:14-3269)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921740&isnumber=6921705
  • Dongre, Kirti A.; Thakur, Roshan Singh; Abraham, Allan, "Secure Cloud Storage Of Data," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-5, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921741 Cloud computing is one of the upcoming technologies shaping the next generation of the Internet. The data stored on smartphones increases as more applications are deployed and executed, and if a phone is damaged or lost, the information stored in it is lost too. If cloud storage is integrated for regular backup of a mobile user's data, the risk of data loss can be minimized: the user can store data on the server and retrieve it at any time and from anywhere. However, without proper authentication and protection, the data may be exposed to attack during retrieval or transmission over wireless cloud storage. To avoid this, we design a mechanism that meets the security requirements for data storage on mobile phones.
    Keywords: Cloud computing; Computers; Customer relationship management; Encryption; Mobile communication; Servers; Cloud storage; SQL; encryption (ID#:14-3270)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921741&isnumber=6921705
  • Raghu, I; Sreelatha Reddy, V., "Key binding with fingerprint feature vector," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-5, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921835 Securing data in the modern world is a major task. Cryptographic systems have been widely used in many information security applications, and one main challenge they face is how to protect private keys from attackers. A biometric cryptosystem can be used to protect private keys effectively and to release them only when legitimate users present their biometric data. In biometric applications, it is widely known that a fingerprint can discriminate between persons better than other biometric modalities. In this paper, we propose a fingerprint-based biometric encryption model using BCH coding and the combination of BCH and RS coding. Experimental results show that 128-bit private keys were securely bound to the fingerprint feature vector and successfully retrieved at verification, with an FRR of 0.7% and an FAR of 0%. (A minimal key-binding sketch follows this entry.)
    Keywords: Discrete wavelet transforms; Encoding; Encryption; Feature extraction; Fingerprint recognition; Vectors; BCH and RS Coding; DWT; WP; biometrics; cryptographic key; fingerprint; wavelets transforms (ID#:14-3272)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921835&isnumber=6921705
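
The key-binding construction described above is in the spirit of a fuzzy commitment: the key is encoded with an error-correcting code and XORed with the fingerprint feature vector, so a sufficiently close re-capture lets the code absorb the noise and release the key. In the minimal sketch below, a 3x repetition code stands in for the paper's BCH/RS coding, and the random bit vectors stand in for real fingerprint features.

# Fuzzy commitment: commitment = ECC(key) XOR features. A noisy re-capture
# within the code's correction capacity still recovers the exact key.
import secrets

def encode(bits):            # 3x repetition encoding
    return [b for b in bits for _ in range(3)]

def decode(bits):            # majority vote: corrects one flipped bit per group
    return [int(sum(bits[i:i+3]) >= 2) for i in range(0, len(bits), 3)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

key = [secrets.randbelow(2) for _ in range(8)]        # secret key bits to bind
enrolled = [secrets.randbelow(2) for _ in range(24)]  # enrollment feature vector
commitment = xor(encode(key), enrolled)               # stored template; hides the key

# Verification with a noisy re-capture of the same finger (2 bits flipped):
query = list(enrolled)
query[1] ^= 1
query[10] ^= 1
recovered = decode(xor(commitment, query))
assert recovered == key   # noise within the code's capacity -> key retrieved
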
  • Thangadurai, K.; Sudha Devi, G., "An Analysis of LSB Based Image Steganography Techniques," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-4, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921751 Steganography refers to information or a file that has been concealed inside a digital picture, video or audio file. A person who views the object in which the information is hidden has no indication that anything is hidden, and so will not try to decrypt it. Steganography can be divided into text, image, and audio/video steganography. Image steganography is one of the common methods for hiding information in a cover image, and LSB replacement is a very efficient algorithm for embedding information in a cover file. This paper presents a detailed treatment of LSB-based image steganography and its application to various file formats. We also analyze the available image-based steganography combined with cryptographic techniques to achieve security. (A minimal LSB sketch follows this entry.)
    Keywords: Art; Computer science; Computers; Cryptography; Gray-scale; Image color analysis; Informatics; Cover Image; Cryptography; GIF; LSB; Message Hiding; PNG; Steganography (ID#:14-3273)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921751&isnumber=6921705
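
The core LSB replacement operation surveyed above fits in a few lines: each secret bit overwrites the least significant bit of one cover byte, changing its value by at most one. The sketch below works on plain bytes to stay self-contained; a real implementation would operate on the pixel array of a GIF or PNG cover image.

# LSB replacement: clear the least significant bit of each cover byte, then
# set it to the next secret bit. Extraction just reads the LSBs back.
def lsb_embed(cover, secret_bits):
    stego = bytearray(cover)
    for i, bit in enumerate(secret_bits):
        stego[i] = (stego[i] & 0xFE) | bit
    return bytes(stego)

def lsb_extract(stego, n_bits):
    return [stego[i] & 1 for i in range(n_bits)]

cover = bytes([120, 121, 200, 201, 50, 51, 52, 53])   # toy "pixel" values
message = [1, 0, 1, 1, 0, 1, 0, 0]
stego = lsb_embed(cover, message)
assert lsb_extract(stego, len(message)) == message
# Each byte changes by at most 1, which is visually imperceptible in an image.
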
  • Doe, Nina Pearl; Suganya V., "Secure Service To Prevent Data Breaches In Cloud," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-6, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921755 Cloud computing is a computing paradigm shift in which computing moves away from personal computers or an individual server to a cloud of computers. Its flexibility, cost-effectiveness, and dynamic re-allocation of resources on demand make it desirable. At an unprecedented pace, cloud computing has simultaneously transformed business and government, and created new security challenges such as data breaches, data loss, account hijacking and denial of service. Paramount among these threats are data breaches. The proposed work prevents the data breach threat by providing user authentication through a one-time password system and challenge-response, risk assessment to identify and prevent possible risks, encryption using enhanced elliptic curve cryptography in which a cryptographically secure random number generator makes the output unpredictable, data integrity using the MD5 technique, and key management. The platform for deployment of the application is Google App Engine.
    Keywords: Cloud computing; Computational modeling; Elliptic curve cryptography; Elliptic curves; Encryption; MD5; authentication; cloud computing; elliptic curve cryptography; risk assessment; security issues (ID#:14-3274)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921755&isnumber=6921705
  • Arockiam, L.; Monikandan, S., "Efficient Cloud Storage Confidentiality To Ensure Data Security," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-5, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921762 Cloud computing provides an enormous amount of virtual storage to users. Cloud storage mainly helps small and medium scale industries reduce their investment in and maintenance of storage servers, and it is efficient for data storage. Users' data sent to the cloud are stored in a public cloud environment, where they might mingle with other users' data; this leads to a data protection issue in cloud storage, and if the confidentiality of cloud data is broken, the industry suffers loss of data. Security of cloud storage is ensured through the confidentiality parameter. The most commonly used technique to ensure confidentiality is encryption, but encryption alone does not give maximum protection to data in cloud storage. For efficient cloud storage confidentiality, this paper uses encryption and obfuscation as two different techniques to protect the data. Encryption is the process of converting readable text into unreadable form using an algorithm and a key. Obfuscation is similar to encryption: it is a process that disguises data from illegitimate users by applying a particular mathematical function or programming techniques. Encryption and obfuscation are applied according to the type of data: encryption to alphabetic and alphanumeric data, and obfuscation to numeric data. Applying both techniques to cloud data provides more protection against unauthorized usage, and confidentiality can be achieved with their combination.
    Keywords: Cloud computing; Databases; Encryption; Memory; Cloud Storage; Confidentiality; Data Protection; Encryption; Obfuscation (ID#:14-3275)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921762&isnumber=6921705
  • Singha, Thockchom Birjit; Jain, Lakshay; Kant, Bikash, "Negligible Time-Consuming RC5 Remote Decoding Technique," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-4, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921832 Remote decoding techniques based solely on polling waste a considerable amount of time in the bit-reading process, which is undesirable. Better techniques involving interrupts have been proposed, but these still waste some amount of precious execution time. In this paper, we propose a technique that consumes negligible time (a few ms) in the bit-reading process, thus utilizing all the available time for execution of the main task.
    Keywords: Algorithm design and analysis; Computers; Decoding; Delays; Flowcharts; Informatics; Protocols; IEEE 802.3; IEEE 802.4;Interrupt service routine (ISR); Interrupts; Polling; RC5 Protocol; Remote decoding (ID#:14-3276)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921832&isnumber=6921705
  • Gupta, Piyush Kumar; Roy, Ratnakirti; Changder, Suvamoy, "A Secure Image Steganography Technique With Moderately Higher Significant Bit Embedding," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-6, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921726 Steganography is a process for hiding secret data in a cover medium in an imperceptible manner. In the spatial domain of image steganography, the most common technique is Least Significant Bit Replacement (LSBR). However, LSBR is extremely sensitive to compression attacks involving truncation of LSBs. As a possible solution to this drawback of the traditional LSBR scheme, this paper proposes an image steganography technique that embeds secret data in moderately higher significant bits, such as the 4th or 5th bit of a pixel. The proposed method uses a color image as the cover and, according to pixel values, maintains three groups of pixels used for selecting candidate pixels for 4th- or 5th-bit embedding. It also implements an optimal pixel adjustment process (OPAP) to minimize the visual distortion due to embedding. In addition to the OPAP, a method for randomly dispersing the secret data bits is implemented, making it harder for an adversary to detect hidden information. The experimental results for the proposed method show high Peak Signal to Noise Ratio (PSNR) values, signifying high stego-image fidelity. (A minimal OPAP-style sketch follows this entry.)
    Keywords: Computers; Image coding; Informatics; Media; PSNR; Payloads; Visualization; Image Steganography; Moderately Higher Significant Bit Embedding (MHSBE);Optimal Pixel Adjustment Process (OPAP); RGB image (ID#:14-3277)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921726&isnumber=6921705
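
The adjustment step works as follows: after the chosen higher bit is forced to the secret bit, the lower bits are free, so they are re-chosen to pull the stego pixel as close as possible to the original value. The sketch below illustrates this for 4th-bit embedding on a single toy pixel value; it is our simplified reading of the OPAP idea, not the paper's exact procedure.

# 4th-bit embedding with optimal pixel adjustment: force bit 3 (weight 8) to
# the secret bit, then pick the lower-bit pattern minimizing the distortion.
BIT = 3   # 0-indexed: the "4th bit", weight 2**3 = 8

def embed_bit_opap(pixel, secret_bit):
    forced = (pixel & ~(1 << BIT)) | (secret_bit << BIT)  # force the 4th bit
    base = forced & ~((1 << BIT) - 1)                     # keep bits 3 and up
    # OPAP: choose the free lower bits closest to the original pixel value.
    return min((base | low for low in range(1 << BIT)),
               key=lambda cand: abs(cand - pixel))

pixel = 103                              # 0b1100111: 4th bit is 0
stego = embed_bit_opap(pixel, 1)
assert (stego >> BIT) & 1 == 1           # secret bit survives extraction
print(pixel, stego, abs(stego - pixel))  # distortion stays small (here 1)
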
  • Vanitha, M.; Kavitha, C., "Secured Data Destruction In Cloud Based Multi-Tenant Database Architecture," Computer Communication and Informatics (ICCCI), 2014 International Conference on, pp. 1-6, 3-5 Jan. 2014. doi: 10.1109/ICCCI.2014.6921774 Cloud computing falls into two general categories: applications delivered as services, and the hardware and data centers that provide those services [1]. Cloud storage has evolved from a simple storage model into a new service model in which data are managed, maintained, and stored on multiple remote servers for back-up reasons. Cloud platform server clusters run in a network environment and may contain multiple users' data scattered across different virtual data centers. In a multi-user shared cloud computing platform, users are only logically isolated; data of different users may be stored in the same physical equipment, and this equipment can be rapidly provisioned, implemented, scaled up or down, and decommissioned. Current cloud providers do not give their customers control over, or even knowledge of, the provided resources. Data in the cloud are encrypted at rest, in transit and in back-up in multi-tenant storage, with encryption keys managed per customer. The data life cycle has several stages: Create, Store, Use, Share, Archive and Destruct. The final stage is often overlooked [2] and is the most complex stage of data in the cloud: data retention assurance may be easy for the cloud provider to demonstrate, while data destruction is extremely difficult. When the SLA between the customer and the cloud provider ends, there is today no assurance that the particular customer's data are completely destroyed in the cloud provider's storage. The proposed method identifies a way to track individual customers' data and their encryption keys, and provides a solution to completely delete the data from the provider's multi-tenant storage architecture. It also ensures deletion of data copies, as more than one copy of the data may be maintained for back-up purposes. Proof of data destruction is provided to the customer, ensuring that the owner's data are completely removed. (A minimal crypto-erasure sketch follows this entry.)
    Keywords: Cloud computing; Computer architecture; Computers; Encryption; Informatics; Public key; attribute based encryption; data retention; encryption; file policy (ID#:14-3278)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6921774&isnumber=6921705
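
One common way to realize assured destruction of the kind described above is crypto-erasure: each tenant's data are encrypted under that tenant's own key, so destroying the single key renders every replica and backup unreadable at once. The sketch below illustrates the idea; Fernet from the third-party cryptography package stands in for the provider's managed encryption, and the tenant identifiers are hypothetical.

# Crypto-erasure sketch: per-tenant keys guard all of a tenant's ciphertexts,
# so deleting the key is equivalent to destroying every stored copy.
from cryptography.fernet import Fernet

tenant_keys = {}     # per-customer keys, managed by the provider
storage = []         # shared multi-tenant storage (ciphertext only)

def store(tenant, record):
    key = tenant_keys.setdefault(tenant, Fernet.generate_key())
    storage.append((tenant, Fernet(key).encrypt(record)))

def destroy_tenant(tenant):
    """Key destruction == data destruction: no key, no plaintext, anywhere."""
    del tenant_keys[tenant]

store("tenant-A", b"customer record 1")
store("tenant-A", b"customer record 1 (backup copy)")
destroy_tenant("tenant-A")   # SLA ended: shred the tenant's key

# The ciphertexts are still physically present, but without the key neither
# the provider nor an attacker can recover the plaintext from any copy.
assert all(t not in tenant_keys for t, _ in storage)
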

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Conference on Advanced Communication Technology - Korea

Conference on Advanced Communication Technology - Korea


International Conferences: Conference on Advanced Communication Technology - Korea

The 2014 16th International Conference on Advanced Communication Technology (ICACT) was held 16-19 February 2014 in Phoenix Park, PyeongChang, Korea. Security topics include cryptography, using personal VPNs to preclude censorship, e-health privacy, smart grid, steganography, bots, LEACH protocols, obfuscation, IPsec in IPv6, and grey hole attacks, among others.

  • Hyunho Kang; Hori, Y.; Katashita, T.; Hagiwara, M.; Iwamura, K., "Cryptographic Key Generation from PUF Data Using Efficient Fuzzy Extractors," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 23-26, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778915 Physical unclonable functions (PUFs) and biometrics are inherently noisy. When used in practice as cryptographic key generators, they need to be combined with an extraction technique to derive reliable bit strings (i.e., a cryptographic key). An approach based on an error-correcting code was proposed by Dodis et al. and is known as a fuzzy extractor. However, this method appears to be difficult for non-specialists to implement. In our recent study, we reported the results of some example implementations using PUF data and presented a detailed implementation diagram. In this paper, we describe a more efficient implementation method in which the hash function output is replaced with the syndrome from the BCH code. The experimental results show that the Hamming distance between two keys varies according to the key size and that information-theoretic security has been achieved. (A minimal syndrome-sketch example follows this entry.)
    Keywords: Hamming codes; cryptography; error correction codes; fuzzy set theory; BCH code; Hamming distance; PUF data; biometrics; cryptographic key generation; efficient fuzzy extractors; error correcting code; information-theoretic security; physical unclonable functions; reliable bit strings; Cryptography; Data mining; Entropy; Hamming distance; High definition video; Indexes; Reliability; Arbiter PUF; Fuzzy Extractor; Physical Unclonable Functions (ID#:14-3279)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778915&isnumber=6778899
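
The syndrome construction mentioned above can be shown in miniature: the public helper data is the parity-check syndrome of the enrolled PUF response, and at reconstruction the syndrome of the noisy re-measurement is compared against it to locate and undo the error bits. In the sketch below, a single-error-correcting [7,4] Hamming code stands in for the paper's BCH code, and the 7-bit response is a toy placeholder.

# Syndrome-based secure sketch: helper = H*w at enrollment; later,
# H*w' XOR helper equals the syndrome of the error pattern, which for a
# Hamming code directly encodes the flipped bit's position.
H = [  # parity-check matrix of the [7,4] Hamming code; column i reads as i+1 in binary
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(bits):
    return tuple(sum(h * b for h, b in zip(row, bits)) % 2 for row in H)

enrolled = [1, 0, 1, 1, 0, 0, 1]     # PUF response at enrollment
helper = syndrome(enrolled)          # public helper data

# Reconstruction from a noisy re-measurement of the same PUF:
noisy = list(enrolled)
noisy[4] ^= 1                        # one bit flipped by measurement noise
diff = tuple(a ^ b for a, b in zip(syndrome(noisy), helper))
if any(diff):                        # non-zero difference -> locate the error
    pos = int("".join(map(str, diff)), 2) - 1   # Hamming syndrome = position + 1
    noisy[pos] ^= 1
assert noisy == enrolled             # exact response recovered, then hashed to a key
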
  • Yuzhi Wang; Ping Ji; Borui Ye; Pengjun Wang; Rong Luo; Huazhong Yang, "GoHop: Personal VPN to Defend From Censorship," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 27-33, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778916 Internet censorship threatens people's online privacy, and in recent years new technologies such as high-speed Deep Packet Inspection (DPI) and statistical traffic analysis have been applied in country-scale censorship and surveillance projects. Traditional encryption protocols cannot hide statistical flow properties, so new censoring systems can easily detect and block them "in the dark". Recent work showed that traffic morphing and protocol obfuscation are effective defences against statistical traffic analysis. In this paper, we propose a novel traffic obfuscation protocol in which client and server communicate on random ports. We implemented our idea as an open-source VPN tool named GoHop, and developed several obfuscation methods including pre-shared-key encryption, traffic shaping and random port communication. Experiments have shown that GoHop can successfully bypass internet censoring systems and can provide high-bandwidth network throughput.
    Keywords: Internet; cryptographic protocols; data protection; public domain software; statistical analysis; telecommunication traffic; transport protocols; DPI; GoHop; TCP protocol; bypass Internet censoring systems; country scale censorship; encryption protocols; high-bandwidth network throughput; high-speed deep packet inspection; open-source VPN tool; people online privacy; personal VPN; pre-shared key encryption; privacy protection; random port communication; statistical flow property; statistical traffic analysis methods; surveillance projects; traffic morphing; traffic obfuscation protocol method; traffic shaping; Cryptography; Internet; Ports (Computers); Protocols; Servers; Throughput; Virtual private networks; VPN; censorship circumvention; privacy protection; protocol obfuscation; random port; traffic morphing (ID#:14-3280)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778916&isnumber=6778899
  • Thiranant, N.; Sain, M.; Hoon Jae Lee, "A Design Of Security Framework For Data Privacy In E-Health System Using Web Service," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 40-43, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778918 E-health is a common term for electronic health, where the services and systems provided include electronic health records, prescriptions, consumer health information, healthcare information systems, and so on. Many patients have started to use e-health, given the convenience of the services delivered and the cost reduction, and its popularity has increased rapidly due to the wide range of services. From the system administrator's perspective, protecting the privacy of patients is a difficult task, as is building the trust of patients in e-health. In this paper, a design of a security framework for data privacy in an e-health system based on a web service architecture is proposed. It is interesting to note that the proposed approach is not limited to e-health systems.
    Keywords: Web services; data privacy; electronic health records; health care; software architecture; trusted computing; Web service architecture; consumer health information; cost reduction; data privacy; e-health system; electronic health records; healthcare information systems; patient privacy; security framework; system administrator perspective; Cloud computing; Data privacy; Databases; Encryption; Data Privacy; Data encryption; E-health; Privacy; Web service (ID#:14-3281)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778918&isnumber=6778899
  • Bruce, N.; Sain, M.; Hoon Jae Lee, "A Support Middleware Solution For E-Healthcare System Security," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 44-47, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778919 This paper presents a middleware solution to secure data and the network in an e-healthcare system. E-healthcare systems are a primary concern because their sensor devices are deployed in easily accessible areas. Furthermore, they often interact closely with the physical environment and the surrounding people, and such exposure increases security vulnerabilities when the security of information sharing among different healthcare organizations is improperly managed. Hence, healthcare-specific security standards such as authentication, data integrity, system security and internet security are used to ensure the security and privacy of patients' information. This paper discusses security threats on e-healthcare systems in which an attacker can access both data and network using a masquerade attack. Moreover, an efficient and cost-effective middleware solution is discussed for the delivery of secure services.
    Keywords: data privacy; health care; medical administrative data processing; middleware; security of data; Internet security; authentication; data integrity; e-health care system security; electronic health care; health care organizations; health care-specific security standards; information sharing; masquerade attack; patient information privacy; patient information security; security vulnerabilities; support middleware solution; system security; Authentication; Communication system security; Logic gates; Medical services; Middleware; Wireless sensor networks; Data Security; Middleware; Network Security; e-Healthcare (ID#:14-3282)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778919&isnumber=6778899
  • Feng Zhao; Guannan Wang; Chunyu Deng; Yue Zhao, "A Real-Time Intelligent Abnormity Diagnosis Platform In Electric Power System," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 83-87, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778926 With the rapid development of the smart grid, intelligent electric meters can be found in most households, and the volume of electric energy data is growing rapidly. This paper introduces an abnormity diagnosis platform for the electric power system. It is used to distinguish abnormal points according to historical data and expert experience, and to put forward resolution schemes that ensure the high reliability and stability of the power grid. In our approach, we use distributed technologies to process big electric energy data: the Hadoop distributed file system (HDFS) and the distributed database HBase are applied to data storage, and the distributed computing framework MapReduce is applied to constructing the knowledge base and to computation. In the inference engine we use a Hidden Semi-Markov Model, which can automatically acquire and modify knowledge in the knowledge base and achieve better real-time behaviour through a self-learning function and human-machine interaction. The results show that this intelligent abnormity diagnosis platform is effective and fast.
    Keywords: Markov processes; distributed databases; expert systems; inference mechanisms; meters; power system analysis computing; power system measurement; unsupervised learning; HBase; HDFS; MapReduce; data storage; distributed computing technology; distributed database; distributed file system; electric energy data; electric power system; expert experience; hidden semi-Markov model; historical data; inference engine; intelligent electric meters; knowledge base; real-time intelligent abnormity diagnosis platform; self-learning function; smart grid; Data handling; Data storage systems; Engines; Expert systems; Information management; Power systems; Abnormity Intelligent Diagnosis; Distributed Computing; Distributed Storage; Hidden Markov Model (ID#:14-3283)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778926&isnumber=6778899
  • Diop, I.; Farss, S.M.; Tall, K.; Fall, P.A.; Diouf, M.L.; Diop, A.K., "Adaptive Steganography Scheme Based on LDPC Codes," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 162-166, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778941 Steganography is the art of secret communication. Since the advent of modern steganography in the 2000s, many approaches based on error-correcting codes (Hamming, BCH, RS, STC, ...) have been proposed to reduce the number of changes to the cover medium while inserting the maximum number of bits. The work of L. Diop et al. [1], inspired by that of T. Filler [2], showed that LDPC codes are good candidates for minimizing the impact of insertion. This work continues the use of LDPC codes in steganography: we propose a steganography scheme based on these codes, inspired by the adaptive approach to computing the detectability map. We evaluated the performance of our method by applying a steganalysis algorithm. (A minimal syndrome-coding sketch follows this entry.)
    Keywords: parity check codes; steganography; LDPC codes; adaptive steganography scheme; error correcting codes; map detectability; secret communication; steganalysis; Complexity theory; Distortion measurement; Educational institutions; Histograms; PSNR; Parity check codes; Vectors; Adaptative steganography; complexity; detectability; steganalysis (ID#:14-3284)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778941&isnumber=6778899
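
The reason error-correcting codes reduce embedding changes is syndrome coding: the receiver reads the hidden message as the parity-check syndrome of the stego bits, so the sender only needs to change the cover enough to reach the right syndrome. The sketch below demonstrates the principle with a [7,4] Hamming code standing in for LDPC: 3 message bits ride on 7 cover bits with at most one bit changed; the cover and message values are toys.

# Matrix embedding (syndrome coding): choose stego bits with H*stego = message.
# For a Hamming code the required single flip is located directly from the
# difference between the cover's syndrome and the desired message.
H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def syndrome(bits):
    return [sum(h * b for h, b in zip(row, bits)) % 2 for row in H]

def embed(cover, message):
    """Return stego bits whose syndrome equals the 3-bit message."""
    need = [a ^ b for a, b in zip(syndrome(cover), message)]
    stego = list(cover)
    if any(need):                                  # flip the one bit whose H-column equals `need`
        pos = int("".join(map(str, need)), 2) - 1
        stego[pos] ^= 1
    return stego

cover = [1, 1, 0, 0, 1, 0, 1]    # e.g., the LSBs of 7 cover pixels
message = [1, 0, 1]
stego = embed(cover, message)
assert syndrome(stego) == message
assert sum(a != b for a, b in zip(cover, stego)) <= 1   # minimal embedding impact
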
  • Dotcenko, S.; Vladyko, A.; Letenko, I., "A Fuzzy Logic-Based Information Security Management For Software-Defined Networks," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 167-171, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778942 In terms of network security, software-defined networks (SDN) offer researchers unprecedented control over network infrastructure, defining a single point of control over the routing of data flows across the entire infrastructure. The OpenFlow protocol is an embodiment of the software-defined networking paradigm. OpenFlow network security applications can implement more complex processing logic for flows than simply permitting or prohibiting them: they can implement complex quarantine procedures or redirect malicious network flows for special treatment. Security detection and intrusion prevention algorithms can be implemented as OpenFlow security applications, and their implementation is often more concise and effective. In this paper we consider an information security management algorithm based on soft computing and implement a prototype intrusion detection system (IDS) for a software-defined network, consisting of a statistics collection and processing module and a decision-making module, both implemented as an application for the Beacon controller in Java. Evaluation of the system was carried out on one of the main problems of network security: identification of hosts engaged in malicious network scanning. To evaluate the modules we used the Mininet environment, which provides rapid prototyping for OpenFlow networks. The proposed algorithm, combined with decision making based on fuzzy rules, showed better results than the security algorithms used separately. In addition, the number of code lines decreased by 20-30%, and various external modules and libraries can be easily integrated, which greatly simplifies implementation of the algorithms and the decision-making system.
    Keywords: decision making; fuzzy logic; protocols; security of data; software radio; telecommunication control; telecommunication network management; telecommunication network routing; telecommunication security; Java; OpenFlow protocol; beacon controller; data flows routing; decision making; decision-making module; fuzzy logic-based information security management; intrusion detection system; intrusion prevention algorithms; logic processing flows; malicious network flows; malicious network scanning; mininet environment; network infrastructure; network security; processing module; security detection; soft computing; software-defined networks; statistic collection; Decision making; Information security; Software algorithms; Switches; Training; Fuzzy Logic; Information security; OpenFlow; Port scan; Software-Defined Networks (ID#:14-3285)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778942&isnumber=6778899
  • Buinevich, M.; Izrailov, K., "Method and Utility For Recovering Code Algorithms Of Telecommunication Devices For Vulnerability Search," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 172-176, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778943 The article describes a method for searching for vulnerabilities in machine code based on the analysis of an algorithmized representation obtained with the help of a utility that is part of the method. The vulnerability search targets telecommunication devices. A phase-by-phase description of the method is given, along with the software architecture of the utility, their limitations in application, and preliminary effectiveness estimates. A forecast is given for developing the method and the utility in the near future.
    Keywords: assembly language; binary codes; reverse engineering; security of data; algorithmized representation; code recovery algorithm; machine code; phase-by-phase description; software architecture; telecommunication devices; vulnerability search; Algorithm design and analysis; Assembly; Communications technology; Educational institutions; Information security; Software; Software algorithms; binary codes; information security; program language extension; reverse engineering and decompilation; telecommunications (ID#:14-3286)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778943&isnumber=6778899
  • Rahman, A.F.A.; Ahmad, R.; Ramli, S.N., "Forensics Readiness For Wireless Body Area Network (WBAN) System," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 177-180, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778944 A Wireless Body Area Network (WBAN) is a wireless network that can be attached to or implanted in the human body using wireless sensors. Since WBANs are developed for medical devices, the system should be designed for a wide range of end users with different professional skill levels; this requires the WBAN system to be open, accurate and efficient. From our previous experience, any open system is vulnerable, like any other currently available wireless system such as the Wireless Local Area Network (WLAN). However, there has been little discussion of WBAN security vulnerabilities and threats, and what exists has been based on theoretical, conceptual or simulation data. In this paper, we discuss potential WBAN security vulnerabilities and threats using a Practical Impact Assessment (PIA) conducted in a real environment, so that we can identify problem areas in detail and develop potential solutions towards a forensics-ready secure network architecture for WBAN systems.
    Keywords: body area networks; body sensor networks; digital forensics; telecommunication security; wireless sensor networks; PIA; WBAN security vulnerability; WBAN system; WLAN; forensics readiness secure network architecture; human body; medical devices; practical impact assessment; wireless body area network; wireless local area network; wireless sensor network; Body area networks; Communication system security; Forensics; Hospitals; Security; Wireless communication; Wireless sensor networks; Forensics Readiness; Information Security; Practical Impact Assessment; Secure Network Architecture; Wireless Body Area Network (WBAN) (ID#:14-3287)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778944&isnumber=6778899
  • Ayalneh, D.A.; Hyoung Joong Kim; Yong Soo Choi, "JPEG Copy Paste Forgery Detection Using BAG Optimized For Complex Images," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 181-185, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778945 Image forgery detection is one of the important activities of digital forensics. Forging an image has become very easy, and a forgery can be visually indistinguishable from the real image. Different features of an image can be used in passive forgery detection. Most lossy compression methods exhibit distinct characteristics; JPEG images, for example, have traceable zero-valued DCT coefficients in the high-frequency regions due to quantization. This appears as a square grid over the whole image, known as the Block Artifact Grid (BAG). In this paper, the BAG-based copy-paste forgery detection method is improved by changing the input DCT coefficients for the Local Effect computation. The proposed method shows better performance, especially for complex images.
    Keywords: data compression; digital forensics; discrete cosine transforms; image coding; quantisation (signal); BAG; JPEG copy paste forgery detection; block artifact grid; digital forensics; image forgery detection; image forging; local effect computation; lossy compression methods; passive forgery detection; quantization; traceable zero valued DCT coefficients; Discrete cosine transforms; Educational institutions; Forgery; Image coding; Multimedia communication; Quantization (signal); Transform coding; Block Artifact Grid; Copy-paste forgery; JPEG; Local Effect (ID#:14-3288)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778945&isnumber=6778899
  • Tripathi, G.; Singh, D.; Hoon-Jae Lee, "Content Centric Battlefield Visualization Mechanism And Solutions," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 202-207, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778949 We design a content-centric battlefield architecture model to support soldiers and the army in visualising and analysing raw input data received at a data mining station. Previously, battlefield networks carried limited traffic among a small number of known private servers with known contents and security concerns, and users of secured servers interacted with a limited number of servers known in advance. Today, battlefield networking, surveillance traffic, content servers and hybrid information have grown dynamically, yet the present battlefield architecture handles only end-to-end bit streams for the content of battlefield services and their objects. Modern battlefield techniques and architectures are constantly evolving, so more resources are needed to effectively visualize the patterns of battlefield objects and situations. This paper presents a novel architecture model for interaction between battlefield entities based on a content model for search, in which the basic battlefield object is used as content irrespective of its location, enabling richer interaction between entities.
    Keywords: data mining; military communication; military computing; surveillance; army; battlefield networking; battlefield networks; battlefield services; content centric battlefield architecture model; content centric battlefield visualization mechanism; content model; content servers; data mining station; data streams; end-to-end system; hybrid information; private servers; security concerns; soldiers; surveillance traffic; Computer architecture; Media; Security; Servers; Streaming media; Visualization; Weapons; Battlefield monitoring; Battlefield networks; Intelligent system; Soldiers Applications (ID#:14-3289)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778949&isnumber=6778899
  • Wei Wan; Jun Li, "Investigation of state division in botnet detection model," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 265-268, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778961 The botnet, as a new attack technology, is a serious threat to Internet security, and with its rapid development botnets based on several protocols have come into being. Given the features of botnets, the Hidden Markov Model can be applied to botnet detection. First, the life cycle and behaviour characteristics of the botnet are analysed in light of its recent situation and problems. A mathematical model based on state division is then built to describe the botnet, and a botnet detection method based on this model is proposed. Finally, we analyse and summarize the experimental results, verifying the reliability and rationality of the detection method. (A minimal forward-algorithm sketch follows this entry.)
    Keywords: Internet; hidden Markov models; security of data; Internet security; botnet based protocols; botnet behaviour characteristics; botnet detection model; botnet life cycle; hidden Markov model; state division; Automata; Centralized control; Computer crime; Hidden Markov models; Monitoring; Protocols; Botnet; Hidden Markov Model; State Division (ID#:14-3290)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778961&isnumber=6778899
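
A state-division model of this kind is typically scored with the HMM forward algorithm: given the state set and an observation sequence, it computes the likelihood of the sequence under the model, and hosts whose traffic is likely under the botnet model can be flagged. The sketch below is a generic illustration; the two states, the observation alphabet and all probabilities are invented placeholders, not the paper's model.

# Forward algorithm for a two-state botnet HMM. Observations: 0 = normal
# traffic window, 1 = scan burst. All numbers are illustrative.
states = ["dormant", "attacking"]
start = [0.8, 0.2]
trans = [[0.9, 0.1],       # P(next state | current state)
         [0.3, 0.7]]
emit = [[0.7, 0.3],        # P(observation | state)
        [0.1, 0.9]]

def forward_likelihood(obs):
    """P(observation sequence | model), computed with the forward algorithm."""
    alpha = [start[s] * emit[s][obs[0]] for s in range(len(states))]
    for o in obs[1:]:
        alpha = [sum(alpha[p] * trans[p][s] for p in range(len(states))) * emit[s][o]
                 for s in range(len(states))]
    return sum(alpha)

quiet_host = [0, 0, 0, 0, 0]
noisy_host = [0, 1, 1, 1, 1]
# Detection compares such likelihoods against a threshold or a benign model.
print(forward_likelihood(quiet_host), forward_likelihood(noisy_host))
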
  • Sung-Hwan Ahn; Nam-Uk Kim; Tai-Myoung Chung, "Big Data Analysis System Concept For Detecting Unknown Attacks," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 269-272, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778962 Recently, the threat of previously unknown cyber-attacks has been increasing because existing security systems are not able to detect them. Past cyber-attacks had simple goals, such as leaking personal information by attacking a PC or destroying a system, but the goal of recent hacking attacks has shifted from leaking information and destroying services to attacking large-scale systems such as critical infrastructures and state agencies. Existing defence technologies against these attacks are based on pattern-matching methods, which are very limited: in the event of new and previously unknown attacks, the detection rate becomes very low and false negatives increase. To defend against such unknown attacks, which cannot be detected with existing technology, we propose a new model based on big data analysis techniques that can extract information from a variety of sources to detect future attacks. We expect our model to serve as the basis of future Advanced Persistent Threat (APT) detection and prevention system implementations.
    Keywords: Big Data; computer crime; data mining; APT detection; Big Data analysis system; Big Data analysis techniques; advanced persistent threat detection; computer crime; critical infrastructures; cyber-attacks; data mining; defence technologies; detection rate; future attack detection; hacking attacks; information extraction; large-scale system attacks; pattern matching methods; personal information leakage; prevention system; security systems; service destruction; state agencies; unknown attack detection; Data handling; Data mining; Data models; Data storage systems; Information management; Monitoring; Security; Alarm systems; Computer crime; Data mining; Intrusion detection (ID#:14-3291)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778962&isnumber=6778899
  • Jiajia Wang; Jingchao Chen; Hexiang Duan; Hongbo Ba; Jianjun Wu, "Jammer Selection For Secure Two-Way DF Relay Communications With Imperfect CSI," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 300-303, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778969 This paper investigates jammer selection in a two-way decode-and-forward (DF) relay network with imperfect channel state information (CSI). The proposed scheme enables the selection of one conventional relay and two jamming nodes to enhance communication security against an eavesdropper. The conventional relay assists the two sources in exchanging their data via a DF protocol, while the two jamming nodes create interference signals to confuse the eavesdropper. Furthermore, the asymptotic performance of the proposed scheme is analyzed in detail. Under the assumption that the relay can decode received signals perfectly and the jamming power is higher than that of the source nodes, we find that the proposed scheme achieves high secrecy performance that is almost independent of the position of the eavesdropper. (A small worked secrecy-rate example follows this entry.)
    Keywords: decode and forward communication; protocols; relay networks (telecommunication); telecommunication security; CSI; channel state information; communication security; decode-and-forward protocol; jammer selection; jamming nodes; secure two-way decode-and-forward relay communications; source nodes; Educational institutions; Jamming; Peer-to-peer computing; Relays; Security; Signal to noise ratio; Wireless communication; DF relay; Jammer selection; imperfect CSI; physical layer security; two-way (ID#:14-3292)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778969&isnumber=6778899
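
The criterion driving jammer selection in schemes of this kind is the achievable secrecy rate: the capacity of the main channel minus that of the eavesdropper's channel, floored at zero. The few lines below work the arithmetic for illustrative SNR values of our own choosing; jamming lowers the eavesdropper's effective SNR and thereby raises the secrecy rate.

# Secrecy rate C_s = max(0, log2(1 + SNR_main) - log2(1 + SNR_eve)).
from math import log2

def secrecy_rate(snr_main, snr_eve):
    return max(0.0, log2(1 + snr_main) - log2(1 + snr_eve))

print(secrecy_rate(snr_main=20.0, snr_eve=10.0))  # no jamming: ~0.93 bit/s/Hz
print(secrecy_rate(snr_main=20.0, snr_eve=1.0))   # eavesdropper jammed: ~3.39 bit/s/Hz
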
  • Rahayu, T.M.; Sang-Gon Lee; Hoon-Jae Lee, "Survey on LEACH-based Security Protocols," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 304, 309, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778970 Energy efficiency is one of the major concerns in designing protocols for WSNs. One of the energy-efficient communication protocols for this network is LEACH that works on cluster-based homogeneous WSNs. Though LEACH is energy-efficient but it does not take security into account. Because WSNs are usually deployed in remote and hostile areas, security becomes a concern in designing a protocol. In this paper we present our security analysis of five security protocols that have been proposed to strengthen LEACH protocols. Those protocols are SLEACH, SecLEACH, SC-LEACH, Armor LEACH and MS-LEACH.
    Keywords: cryptographic protocols; pattern clustering; power aware computing; telecommunication power management; telecommunication security; wireless sensor networks; Armor LEACH protocols; LEACH-based security protocols; MS-LEACH protocols; SC-LEACH protocols; SLEACH protocols; SecLEACH protocols; cluster-based homogeneous WSN; energy-efficient communication protocols; hostile areas; remote areas; security analysis; wireless sensor network; Authentication; Protocols; Radiation detectors; Schedules; Steady-state; Wireless sensor networks; Armor-LEACH; LEACH; MS-LEACH; SC-LEACH; SLEACH; SecLEACH; Security analysis; WSN (ID#:14-3293)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778970&isnumber=6778899
  • Dong-Ho Kang; Byoung-Koo Kim; Jung-Chan Na, "Cyber Threats And Defence Approaches in SCADA Systems," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 324-327, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778974 The use of SCADA systems has increased since the 1960s as the need arose to monitor and control the status of remote equipment more efficiently. These systems are becoming more and more susceptible to cyber-attacks because they use standard protocols and have increased connectivity. The objective of this paper is to introduce our ongoing work and discuss challenges and opportunities for preventing network and application protocol attacks on SCADA systems.
    Keywords: SCADA systems; computer network security; protocols; SCADA systems; application protocol attacks; cyber threats; cyber-attacks; defence approaches; remote equipment; Filtering; IP networks ;Intrusion detection; Protocols; SCADA systems; Servers; Cyber-attacks; ICS Security; Industrial Firewall; Network Security; SCADA (ID#:14-3294)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778974&isnumber=6778899
  • Wei Ding; ZhiMin Gu; Feng Gao, "Reconstruction of Data Type In Obfuscated Binary Programs," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 393-396, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778988 The research community has recently advanced type reconstruction technology for reverse engineering, but with the emergence of obfuscation technology, data type reconstruction has become difficult, while un-obfuscated code remains easy for an attacker or hacker to monitor and analyze. We therefore present a novel approach that automatically establishes data type inference rules and reconstructs types from obfuscated binary programs using a machine learning algorithm.
    Keywords: computer crime; inference mechanisms; learning (artificial intelligence); reverse engineering; system monitoring; systems analysis; data type inference rules; data type reconstruction; hacker; machine learning algorithm; obfuscated binary programs; obfuscated code analysis; obfuscated code monitoring; reverse engineering; Arrays; Binary codes; Decision trees; Educational institutions; Machine learning algorithms; Reverse engineering; Deobfuscation; Disassembly; Inference Rules; Obfuscated Binary; Type reconstruction (ID#:14-3295)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778988&isnumber=6778899
  • Ji-Soo Oh; Min-Woo Park; Tai-Myoung Chung, "The Solution Of Denial Of Service Attack On Ordered Broadcast Intent," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 397-400, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778989 Android's message passing system provides late run-time binding between components in the same or different applications, promoting inter-application collaboration. However, the message passing mechanism also has numerous vulnerabilities, through which Android applications can be exposed to attacks from malicious applications. A denial of service (DoS) attack on ordered broadcasts is a typical attack that exploits these vulnerabilities: a malicious application intercepts broadcast messages by giving itself high priority, then aborts them to prevent other, benign applications from receiving them. In this paper, we propose a security framework for detecting DoS attacks on ordered broadcasts. We insert our framework into the Android platform, where it inspects the receivers of broadcast messages; if it detects a threat, it issues a warning to the user. Finally, we provide a usage scenario for the framework and discuss future directions.
    Keywords: Android (operating system); message passing; smart phones; telecommunication security; Android platform; DoS attack; denial of service attack; malicious application; message passing system; ordered broadcast Intent; run-time binding; security framework; Androids; Computer crime; Humanoid robots; Message passing; Receivers; Smart phones; Android; Denial of Service Attack; Intent; Mobile Phone Security; Ordered Broadcast (ID#:14-3296)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778989&isnumber=6778899
  • Dongxiang Fang; Peifeng Zeng; Weiqin Yang, "Attacking the IPsec Standards When Applied To IPv6 In Confidentiality-Only ESP Tunnel Mode," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 401-405, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778990 Attacks that can break RFC-compliant IPsec implementations built on IPv6 in confidentiality-only ESP tunnel mode are proposed. The attacks combine the ideas of the IV attack, the oracle attack and the spoofing attack to decrypt an encrypted IPv6 datagram, and they are more efficient than the attacks presented by Paterson and Degabriele because no checksum issue has to be handled. The paper shows that using IPsec in a confidentiality-only ESP configuration is insecure, and should convince users to select their configuration carefully.
    Keywords: IP networks; cryptography; protocols; telecommunication security; Degabriele; IPsec standards; IV attack; Paterson; RFC compliant IPsec implementation; confidentiality only ESP tunnel mode; decrypt; encapsulating security payload; encrypted IPv6 datagram; initialization vector; oracle attack; spoof attack; Educational institutions; Encryption; IP networks; Payloads; Protocols; ESP; IPsec; IPv6; Security; confidentiality-only (ID#:14-3297)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778990&isnumber=6778899
  • Shuai Li; Peng Gong; Qian Yang; Xiao Peng Yan; Jiejun Kong; Ping Li, "A Secure Handshake Scheme With Pre-Negotiation For Mobile-Hierarchy City Intelligent Transportation System Under Semi-Honest Model," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 406-409, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778991 A mobile-hierarchy architecture has recently been widely adopted for querying deployed wireless sensor networks in intelligent transportation systems, making a secure handshake between the mobile node and ordinary nodes an important part of such a system. To divide the virtual communication area, pre-negotiation is conducted between the mobile node and an ordinary node before the formal handshake, which increases the odds of a successful handshake. The mobile node negotiates with an ordinary sensor node over an insecure communication channel by private set intersection: the attribute set, an important handshake factor, is negotiated privately between them on the local side. In this paper, a secure handshake scheme with pre-negotiation for a mobile-hierarchy city intelligent transportation system under the semi-honest model is proposed.
    Keywords: intelligent transportation systems; wireless sensor networks; mobile node; mobile-hierarchy architecture; mobile-hierarchy city intelligent transportation system; prenegotiation; secure handshake scheme; semi-honest model; virtual communication area; wireless sensor network; Computational modeling; Cryptography; Educational institutions; Intelligent transportation systems; Polynomials; Protocols; Wireless communication; Attribute Encryption; Attribute-based handshake; Intelligent transportation system; Private set intersection; Wireless sensor network (ID#:14-3298)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778991&isnumber=6778899
  • Heechang Chung; Sok Pal Cho; Yongseon Jang, "Standardizations on IT Risk Analysis Service in NGN," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 410-413, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778992 An information technology (IT) risk analysis service is a service capable of identifying risk, assessing it, and then invoking a process that identifies the proper actions to take to reduce damage that could affect users or organizations subscribed to a network. Provided that a risk situation exists, the risk analysis function analyses and assesses the risk event data with an algorithm that applies the most recent pattern according to procedures, then reports the analysis results together with the complementary measures which, if invoked, will reduce risk.
    Keywords: data analysis; next generation networks; risk analysis; telecommunication network reliability; IT risk analysis service; NGN; information technology risk analysis service; risk event data analysis; risk event data assessment; risk identification; risk reduction; Educational institutions; Hardware; Next generation networking; Organizations; Risk analysis; Software; Standardization; IT risk analysis; Identifying risk; assessing risk; external risk; internal risk; mitigation risk (ID#:14-3299)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778992&isnumber=6778899
  • Soo Young Moon; Ji Won Kim; Tae Ho Cho, "An Energy-Efficient Routing Method With Intrusion Detection And Prevention For Wireless Sensor Networks," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 467-470, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779004 Because of features such as limited resources, wireless communication and harsh environments, wireless sensor networks (WSNs) are prone to various security attacks, so intrusion detection and prevention methods are needed. When both types of schemes are applied, however, heavy communication overhead and the resulting excessive energy consumption of nodes occur. For this reason, we propose an energy-efficient routing method for environments in which both intrusion detection and prevention schemes are used in WSNs. We confirmed through experiments that the proposed scheme reduces communication overhead and energy consumption compared to existing schemes.
    Keywords: security of data; telecommunication network routing; wireless sensor networks; energy-efficient routing method; excessive energy consumption; heavy communication overhead; intrusion detection scheme; intrusion prevention scheme; security attacks; wireless communication; wireless sensor networks; Energy consumption; Intrusion detection; Network topology; Routing; Sensors; Topology; Wireless sensor networks; intrusion detection; intrusion prevention; network layer attacks; wireless sensor networks (ID#:14-3300)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779004&isnumber=6778899
  • Rahayu, T.M.; Sang-Gon Lee; Hoon-Jae Lee, "Security Analysis Of Secure Data Aggregation Protocols In Wireless Sensor Networks," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.471,474, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779005 In order to conserve wireless sensor network (WSN) lifetime, data aggregation is applied. Some researchers consider the importance of security and propose secure data aggregation protocols. The essential of those secure approaches is to make sure that the aggregators aggregate the data in appropriate and secure way. In this paper we give the description of ESPDA (Energy-efficient and Secure Pattern-based Data Aggregation) and SRDA (Secure Reference-Based Data Aggregation) protocol that work on cluster-based WSN and the deep security analysis that are different from the previously presented one.
    Keywords: protocols; telecommunication security; wireless sensor networks; ESPDA protocol; SRDA protocol; WSN lifetime; cluster-based WSN; deep security analysis; energy-efficient and secure pattern-based data aggregation protocol; secure reference-based data aggregation protocol; wireless sensor network lifetime; Authentication; Cryptography; Energy efficiency; Peer-to-peer computing; Protocols; Wireless sensor networks; Data aggregation protocol; ESPDA; SRDA; WSN; secure data aggregation protocol (ID#:14-3301)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779005&isnumber=6778899
  • Feng Zhao; Chao Li; Chun Feng Liu, "A Cloud Computing Security Solution Based On Fully Homomorphic Encryption," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 485-488, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779008 With the rapid development of cloud computing, more and more users deposit their data and applications in the cloud, but development is hindered by many cloud security problems. Cloud computing has many characteristics, e.g. multi-user operation, virtualization and scalability, and because of these new characteristics traditional security technologies cannot make cloud computing fully safe. Cloud computing security has therefore become the current research focus and is also this paper's research direction [1]. To solve the problem of data security in cloud computing systems, a fully homomorphic encryption algorithm is introduced into cloud data security, a new data security solution to the insecurity of cloud computing is proposed, and application scenarios are constructed. The new solution is fully suited to the processing and retrieval of encrypted data, effectively securing data transmission and storage in cloud computing and leading to broad applicability. (A minimal homomorphic-computation illustration follows this entry.)
    Keywords: cloud computing; cryptography; cloud computing security solution; cloud security problem; data security solution; data storage; data transmission; encrypted data processing; encrypted data retrieval; fully homomorphic encryption algorithm; security technologies; Cloud computing; Encryption; Safety; Cloud security; Cloud service; Distributed implementation; Fully homomorphic encryption (ID#:14-3302)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779008&isnumber=6778899
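
The property the solution relies on is computing on ciphertexts. Textbook RSA gives a small taste of it: RSA is only multiplicatively homomorphic, whereas a fully homomorphic scheme extends the idea to arbitrary additions and multiplications, but the principle that the cloud can combine ciphertexts without ever seeing the plaintexts is the same. The sketch below uses deliberately tiny primes for readability; they offer no real security.

# Multiplicative homomorphism of textbook RSA: E(a) * E(b) mod n = E(a*b).
p, q, e = 61, 53, 17
n = p * q                                  # public modulus
d = pow(e, -1, (p - 1) * (q - 1))          # private exponent

encrypt = lambda m: pow(m, e, n)
decrypt = lambda c: pow(c, d, n)

a, b = 7, 6
c_product = (encrypt(a) * encrypt(b)) % n  # computed cloud-side on ciphertexts only
assert decrypt(c_product) == a * b         # the owner decrypts the product
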
  • Xin Wu, "Secure Browser Architecture Based On Hardware Virtualization," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 489-495, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779009 Ensuring that the entire code base of a browser deals with the security concerns of integrity and confidentiality is a daunting task. The basic method is to split it into different components and place each of them in its own protection domain. OS processes are the prevalent isolation mechanism for implementing protection domains, which results in expensive context-switching overheads produced by Inter-Process Communication (IPC); moreover, the dependence of multiple web instance processes on a single set of privileged ones reduces overall concurrency. In this paper, we present a secure browser architecture design based on processor virtualization techniques. First, we divide the browser code base into privileged components and constrained components, the latter consisting of distrusted web page renderer components and plugins; all constrained components take the form of shared object (SO) libraries. Second, we create an isolated execution environment for each distrusted shared object library using the hardware virtualization support available in modern Intel and AMD processors. Unlike current research, we design a custom kernel module to obtain the hardware virtualization capabilities. Third, to enhance the overall security of the browser, we implement a validation mechanism that checks OS resource access from the distrusted web page renderer to the privileged components; our validation rules are similar to Google Chrome's. By utilizing the VMENTER and VMEXIT CPU instructions, our approach achieves substantially better system performance.
    Keywords: microprocessor chips; online front-ends; operating systems (computers); security of data; software libraries; virtualisation; AMD processors; CPU instructions; Google chrome; IPC; Intel processors; OS processes; OS resource checking; SO libraries; VMENTER; VMEXIT; browser security; context-switching overheads; distrusted Web page renderer components; distrusted shared object library; hardware virtualization capabilities; interprocess communication; isolated execution environment; isolation mechanism; multiple Web instance processes; processor virtualization technique; secure browser architecture design; validation mechanism; Browsers; Google; Hardware; Monitoring; Security; Virtualization; Web pages; Browser security; Component isolation; Hardware virtualization; System call interposition (ID#:14-3304)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779009&isnumber=6778899
  • Xiao Chun Yin; Zeng Guang Liu; Hoon Jae Lee, "An Efficient And Secured Data Storage Scheme In Cloud Computing Using ECC-based PKI," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.523,527, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779015 Cloud computing is a set of resources and services offered through the Internet, delivered from data centres located throughout the world, and it facilitates its consumers by providing virtual resources via the internet. The rapid growth of cloud computing also raises severe security concerns. Security has remained a constant issue for open systems and the internet, and the cloud suffers from it acutely; lack of security is the main hurdle to wide adoption of cloud computing. Cloud computing is surrounded by many security issues, such as securing data and examining how the cloud is utilized by cloud computing vendors. This paper proposes a scheme to securely store and access data via the internet. We use ECC-based PKI for the certificate procedure because ECC significantly reduces computation cost, message size and transmission overhead compared with RSA-based PKI: a 160-bit key in ECC provides security comparable to a 1024-bit key in RSA. We design a Secured Cloud Storage Framework (SCSF) in which users can not only securely store and access data in the cloud but also share data with multiple users through the insecure internet in a secured way. This scheme can ensure the security and privacy of the data in the cloud.
    Keywords: cloud computing; computer centres; data privacy; open systems; public key cryptography; security of data; storage management; ECC-based PKI; RSA based PKI; SCSF; certificate procedure; cloud computing; cloud services; computation cost; data centres; data privacy; data security; message size; open systems; secured cloud storage framework; secured data storage scheme; security concern; transmission overhead; unsecure Internet; virtual resources; Cloud computing; Educational institutions; Elliptic curve cryptography; Elliptic curves; Certificate; Cloud computing; Cloud storage; ECC; PKI (ID#:14-3305)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779015&isnumber=6778899
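    The key-size advantage the authors cite (160-bit ECC giving security comparable to 1024-bit RSA) is what makes elliptic-curve key agreement attractive for cloud storage. The sketch below shows a plain ECDH exchange using a recent version of the third-party Python 'cryptography' package; the curve choice, key names and HKDF label are illustrative assumptions, not details taken from the SCSF design.

      # ECDH key agreement sketch; requires 'pip install cryptography'.
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import ec
      from cryptography.hazmat.primitives.kdf.hkdf import HKDF

      user_key = ec.generate_private_key(ec.SECP256R1())    # NIST P-256; 160-bit curves are now deprecated
      cloud_key = ec.generate_private_key(ec.SECP256R1())

      # Each side combines its own private key with the peer's public key.
      shared1 = user_key.exchange(ec.ECDH(), cloud_key.public_key())
      shared2 = cloud_key.exchange(ec.ECDH(), user_key.public_key())
      assert shared1 == shared2

      # Derive a symmetric storage key from the shared secret.
      storage_key = HKDF(algorithm=hashes.SHA256(), length=32,
                         salt=None, info=b"scsf-storage-key").derive(shared1)
      print(storage_key.hex())

    The same exchange done with RSA key transport would require far larger keys and messages for equivalent security, which is the computation and transmission saving the paper exploits.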
  • Maksuanpan, S.; Veerawadtanapong, T.; San-Um, W., "Robust Digital Image Cryptosystem Based On Nonlinear Dynamics Of Compound Sine And Cosine Chaotic Maps For Private Data Protection," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.418,425, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779201 This paper presents a digital image cryptosystem based on the nonlinear dynamics of a compound sine and cosine chaotic map. The compound map is proposed for its high degree of chaos over most regions of the parameter space, providing high-entropy random-bit sources. Image diffusion is performed through pixel shuffling and bit-plane separation prior to XOR operations in order to achieve a fast encryption process. Security key conversion from ASCII codes to floating-point numbers, for use as initial conditions and control parameters, is also presented in order to enhance key-space and key-sensitivity performance. Experiments were performed in MATLAB using standard color images. The nonlinear dynamics of the chaotic maps were initially investigated in terms of the cobweb map, chaotic attractor, Lyapunov exponent spectrum, bifurcation diagram, and 2-dimensional parameter spaces. Qualitative encryption performance is evaluated through pixel density histograms, 2-dimensional power spectral density, key-space analysis, key sensitivity, and vertical, horizontal, and diagonal correlation plots. Quantitative encryption performance is evaluated through correlation coefficients, NPCR and UACI. Demonstrations of a wrong-key decrypted image are also included.
    Keywords: chaos; cryptography; data privacy; image colour analysis; 2-dimensional parameter space; 2-dimensional power spectral density; ASCII code; Cobweb map; Lyapunov exponent spectrum; NPCR; UACI; XOR operation; bifurcation diagram; bit-plane separations; chaotic attractor; color images; compound cosine chaotic map; compound sine chaotic map; control parameter; correlation coefficient; diagonal correlation plot; encryption process; encryption qualitative performance; encryption quantitative performance; high-entropy random-bit source; horizontal correlation plot; image diffusion; key sensitivity; key space analysis; key-sensitivity performance; key-space performance; nonlinear dynamics; pixel density histograms; pixel shuffling; private data protection; robust digital image cryptosystem; security key conversions; vertical correlation plot; wrong-key decrypted image; Chaotic communication; Compounds; Encryption; Histograms; Chaotic Map; Cryptosystem; Decryption; Digital Image Processing; Encryption; Nonlinear Dynamics (ID#:14-3306)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779201&isnumber=6778899
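    To make the keystream idea concrete, the minimal Python sketch below drives a single sine map (a simpler relative of the paper's compound sine-cosine map) and XORs the quantized chaotic states with the data bytes. The seed and control parameter stand in for the ASCII-derived security keys; the paper's pixel shuffling and bit-plane diffusion stages are omitted.

      # Chaos-based stream cipher sketch: sine-map keystream XORed with bytes.
      import math

      def sine_map_keystream(x0, r, nbytes):
          x, out = x0, bytearray()
          for _ in range(nbytes):
              x = r * math.sin(math.pi * x)       # chaotic iteration on (0, 1)
              out.append(int(x * 255) & 0xFF)     # quantize the state to one byte
          return bytes(out)

      def xor_cipher(data, key_x0=0.3571, key_r=0.99):
          ks = sine_map_keystream(key_x0, key_r, len(data))
          return bytes(b ^ k for b, k in zip(data, ks))

      pixels = bytes(range(16))                   # stand-in for image data
      enc = xor_cipher(pixels)
      assert xor_cipher(enc) == pixels            # XOR with the same keystream decrypts

    Because the map is extremely sensitive to x0 and r, a key that is wrong in even one decimal place produces a completely different keystream, which is the key-sensitivity property the paper measures.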
  • Bo Yang; Yamamoto, R.; Tanaka, Y., "Dempster-Shafer Evidence Theory Based Trust Management Strategy Against Cooperative Black Hole Attacks And Gray Hole Attacks in MANETs," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.223, 232, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779177 MANETs have experienced exponential growth in the past decade. However, their vulnerability to various attacks makes the security problem extremely prominent, mainly because of their distributed, self-organized and infrastructure-independent nature. Trust management schemes are a common way to detect and isolate compromised nodes when cryptographic mechanisms fail against inner attacks. Among the many kinds of attacks, black hole attacks may collapse the network by depriving routes of normal communication. Conventional methods achieve good performance against black hole attacks but fail to detect gray hole attacks. In this paper, a Dempster-Shafer (D-S) evidence based trust management strategy is proposed to counter not only cooperative black hole attacks but also gray hole attacks. In the proposed method, a neighbour observation model based on a watchdog mechanism detects single black hole attacks by focusing on the direct trust value (DTV), with historical evidence also taken into consideration to counter gray hole attacks. A neighbour recommendation model accompanied by an indirect trust value (ITV) is then used to identify cooperative black hole attacks; D-S evidence theory is used to combine ITVs from different neighbours. Some neighbour nodes may declare a false ITV, an effect that is also diminished by the proposed method. Simulations were first conducted in MATLAB to evaluate the performance of the algorithm, and the secure routing protocol was then implemented in GloMoSim to evaluate the effectiveness of the strategy. Both show good results and demonstrate the advantages of the proposed method in punishing malicious actions to prevent camouflage and deception in attacks.
    Keywords: cryptography; inference mechanisms; mobile ad hoc networks; telecommunication network management; telecommunication security; Dempster-Shafer evidence theory; GloMoSim; MANET; Matlab; cooperative black hole attacks; cryptography mechanism; gray hole attacks; indirect trust value; neighbour observing model; trust management strategy; watchdog mechanism; Ad hoc networks; Digital TV; Educational institutions; Mobile computing; Routing protocols; Security; Black hole attack; Dempster-Shafer evidence; Direct trust value; Gray hole attack; Indirect trust value; MANETs; Trust management (ID#:14-3307)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779177&isnumber=6778899
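    Dempster's rule of combination, the core of the ITV fusion described above, can be stated compactly in code. The Python sketch below combines two neighbours' mass functions over a two-hypothesis frame {trusted, malicious}; the numeric masses are invented for illustration.

      # Dempster's rule of combination for two mass functions.
      def combine(m1, m2):
          # Masses are dicts keyed by frozensets of hypotheses.
          combined, conflict = {}, 0.0
          for a, va in m1.items():
              for b, vb in m2.items():
                  inter = a & b
                  if inter:
                      combined[inter] = combined.get(inter, 0.0) + va * vb
                  else:
                      conflict += va * vb      # mass falling on the empty set
          # Normalize by 1 - K, where K is the total conflict.
          return {k: v / (1.0 - conflict) for k, v in combined.items()}

      T, M = frozenset("T"), frozenset("M")
      TM = T | M                               # ignorance: node may be either
      neighbour1 = {T: 0.6, M: 0.1, TM: 0.3}   # mostly trusts the node
      neighbour2 = {T: 0.2, M: 0.5, TM: 0.3}   # suspects a gray hole
      print(combine(neighbour1, neighbour2))

    Combining the two reports shifts belief toward "malicious" more sharply than simple averaging would, which is why D-S fusion helps expose colluding nodes that individually look ambiguous.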

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


International Conferences: Service Oriented System Engineering, 2014, Oxford, U.K.

Service Oriented System Engineering - UK


International Conferences: Service Oriented System Engineering, 2014, Oxford, U.K.

The 2014 IEEE 8th International Symposium on Service Oriented System Engineering (SOSE) was held 7-11 April 2014 in Oxford, England. Twenty-two security-related presentations were made and are cited here.

  • Hamadache, K.; Zerva, P., "Provenance of Feedback in Cloud Services," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp. 23, 34, 7-11 April 2014. doi: 10.1109/SOSE.2014.10 With the fast adoption of Services Computing, driven even more by the emergence of the Cloud, the need to ensure accountability for quality of service (QoS) in service-based systems has reached a critical level. This need has triggered numerous research efforts in the fields of trust, reputation and provenance. Most research on trust and reputation has focused on their evaluation or computation; work on provenance has tried to track down how a service has processed and produced data during its execution. While some of these efforts have investigated credibility models and mechanisms, only a few have looked into the way reputation information is produced. In this paper we propose an innovative design for evaluating the authenticity and credibility of feedback by considering the feedback's provenance. This consideration brings a new level of security and trust to Services Computing by fighting malicious feedback and reducing the impact of irrelevant feedback.
    Keywords: cloud computing; trusted computing; QoS; cloud services; credibility models; feedback authenticity; feedback credibility; feedback provenance; innovative design; malicious feedback; quality of service; reputation information; security; service-based systems/services; services computing; trust; Context; Hospitals; Monitoring; Ontologies; Quality of service; Reliability; Schedules; cloud computing; credibility ;feedback; provenance; reputation (ID#:14-3308)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825960&isnumber=6825948
  • Wei-Tek Tsai; Peide Zhong, "Multi-tenancy and Sub-tenancy Architecture in Software-as-a-Service (SaaS)," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.128,139, 7-11 April 2014. doi: 10.1109/SOSE.2014.20 Multi-tenancy architecture (MTA) is often used in Software-as-a-Service (SaaS) and the central idea is that multiple tenant applications can be developed using components stored in the SaaS infrastructure. Recently, MTA has been extended where a tenant application can have its own sub-tenants as the tenant application acts like a SaaS infrastructure. In other words, MTA is extended to STA (Sub-Tenancy Architecture). In STA, each tenant application not only needs to develop its own functionalities, but also needs to prepare an infrastructure to allow its sub-tenants to develop customized applications. This paper formulates eight models for STA, and discusses their trade-offs including their formal notations and application scenarios.
    Keywords: cloud computing; software architecture;MTA; STA ;SaaS infrastructure; Software-as-a-Service; multitenancy architecture; subtenancy architecture; tenant applications; Computer architecture; Data models; Databases; Organizations; Scalability; Security; Software as a service; Multi-Tenancy Architecture; SaaS; Sub-Tenancy Architecture (ID#:14-3309)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830895&isnumber=6825948
  • Yuan-Hsin Tung; Chen-Chiu Lin; Hwai-Ling Shan, "Test as a Service: A Framework for Web Security TaaS Service in Cloud Environment," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp. 212, 217, 7-11 April 2014. doi: 10.1109/SOSE.2014.36 As its name suggests, cloud testing is a form of software testing that uses cloud infrastructure. Its effectively unlimited storage, the quick availability of scalable infrastructure, and the flexibility and availability of a distributed testing environment reduce the execution time of testing large applications and hence lead to cost-effective solutions. In cloud testing, Testing-as-a-Service (TaaS) is a new model to effectively provide testing capabilities and on-demand testing to end users. There are many studies and solutions supporting TaaS, and security testing is the form most suited to it. To leverage the features of TaaS, we propose a TaaS framework for security testing. We implement the prototype system Security TaaS (abbrev. S-TaaS) based on our proposed framework, and conduct experiments to evaluate the performance of our framework and prototype system. The experiment results indicate that our prototype system can provide quality and stable service.
    Keywords: cloud computing; program testing; security of data; TaaS service; Web security; cloud environment; cloud infrastructure; cloud testing; distributed testing environment; on-demand testing; software testing; testing capabilities; testing-as-a-service; Cloud computing; Computational modeling; Monitoring; Prototypes; Security; Software testing; TaaS; Test as a Service; cloud computing; security test; vulnerability detection; web vulnerability (ID#:14-3310)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830908&isnumber=6825948
  • Yan Ding; Huaimin Wang; Songzheng Chen; Xiaodong Tang; Hongyi Fu; Peichang Shi, "PIIM: Method of Identifying Malicious Workers in the MapReduce System with an Open Environment," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp. 326, 331, 7-11 April 2014. doi: 10.1109/SOSE.2014.47 MapReduce is widely utilized as a typical computation model of mass data processing. When a MapReduce framework is deployed in an open computation environment, the trustworthiness of the participant workers becomes an important issue because of security threats and the motivation of subjective cheating. Current integrity protection mechanisms are based on replication techniques and use redundant computation to process the same task. However, these solutions require a large amount of computation resource and lack scalability. A probe injection-based identification of malicious worker (PIIM) method is explored in this study. The method randomly injects the probes, whose results are previously known, into the input data and detects malicious workers by analyzing the processed results of the probes. A method of obtaining the set of workers involved in the computation of each probe is proposed by analyzing the shuffle phase in the MapReduce programming model. An EnginTrust-based reputation mechanism that employs information on probe execution is then designed to evaluate the trustworthiness of all the workers and detect the malicious ones. The proposed method operates at the application level and requires no modification to the MapReduce framework. Simulation experiments indicate that the proposed method is effective in detecting malicious workers in large-scale computations. In a system with 100 workers wherein 20 of them are malicious, a detection rate of above 97% can be achieved with only 500 randomly injected probes.
    Keywords: administrative data processing; invasive software; parallel programming; EnginTrust-based reputation mechanism; MapReduce programming model; MapReduce system; PIIM method; malicious worker identification; mass data processing; open computation environment; probe injection-based identification of malicious worker; security threats; subjective cheating; Computational modeling; Data models; Data processing; Estimation; Probes; Programming; Security; MapReduce; mass data processing; open system; probe injection; reputation; worker trustworthiness (ID#:14-3311)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830925&isnumber=6825948
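    The probe-injection idea is easy to prototype outside MapReduce: mix tasks with known answers into the input, then downgrade the reputation of any worker whose probe results disagree with the expected ones. The Python sketch below is a hypothetical, self-contained model (worker behaviour, penalty factor and threshold are all invented), not the paper's PIIM implementation.

      # Probe-based detection of cheating workers; toy model.
      import random

      def run_tasks(worker, tasks):
          # Stand-in for task execution; a cheating worker corrupts its results.
          return [t + 1 if worker["honest"] else t for t in tasks]

      workers = [{"id": i, "honest": i >= 2, "reputation": 1.0} for i in range(10)]
      probes = {p: p + 1 for p in random.sample(range(1000), 50)}  # known answers

      for w in workers:
          inputs = list(probes)
          for task, result in zip(inputs, run_tasks(w, inputs)):
              if result != probes[task]:
                  w["reputation"] *= 0.5      # penalize every failed probe

      flagged = [w["id"] for w in workers if w["reputation"] < 0.1]
      print("flagged workers:", flagged)      # workers 0 and 1 in this toy run

    Because probes are indistinguishable from ordinary tasks, even a worker that cheats only occasionally will eventually fail some probes, which is how the paper reaches high detection rates with a few hundred injected probes.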
  • Hu Ge; Li Ting; Dong Hang; Yu Hewei; Zhang Miao, "Malicious Code Detection for Android Using Instruction Signatures," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp. 332, 337, 7-11 April 2014. doi: 10.1109/SOSE.2014.48 This paper provides an overview of current static analysis technology for Android malicious code, and a detailed analysis of the APK format, the Android application package, and of the Android platform executable file format (DEX). From the perspective of the binary sequence, the Dalvik VM file is segmented by method, and the test samples are analyzed by automated DEX file parsing tools and the Levenshtein distance algorithm, which can effectively detect malicious Android applications that contain the same signatures. Validated against a large number of samples, this static detection system based on signature sequences can not only detect malicious code quickly, but also has very low rates of false positives and false negatives.
    Keywords: Android (operating system); digital signatures; program compilers; program diagnostics; APK format; Android malicious code detection;Android platform executable file;Dalvik VM file; Levenshtein distance algorithm; automated DEX file parsing tools; binary sequence; instruction signatures; malicious Android applications detection; signature sequences; static analysis technology; static detection system; Libraries; Malware; Mobile communication; Smart phones; Software; Testing; Android; DEX; Static Analysis; malicious code (ID#:14-3312)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830926&isnumber=6825948
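    The Levenshtein (edit) distance at the heart of the matching step is shown below as a standard dynamic-programming implementation in Python; the instruction mnemonics are illustrative stand-ins for the Dalvik opcode sequences the paper extracts from DEX methods.

      # Classic two-row Levenshtein distance over instruction sequences.
      def levenshtein(a, b):
          prev = list(range(len(b) + 1))
          for i, ca in enumerate(a, 1):
              cur = [i]
              for j, cb in enumerate(b, 1):
                  cur.append(min(prev[j] + 1,                 # deletion
                                 cur[j - 1] + 1,              # insertion
                                 prev[j - 1] + (ca != cb)))   # substitution
              prev = cur
          return prev[-1]

      sample    = ["const/4", "invoke-virtual", "move-result", "return"]
      signature = ["const/4", "invoke-virtual", "return"]
      print(levenshtein(sample, signature))   # small distance => likely signature match

    A method whose opcode sequence sits within a small edit distance of a known malicious signature is flagged, which tolerates minor recompilation changes that exact matching would miss.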
  • AlJahdali, H.; Albatli, A.; Garraghan, P.; Townend, P.; Lau, L.; Jie Xu, "Multi-tenancy in Cloud Computing," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp. 344, 351, 7-11 April 2014. doi: 10.1109/SOSE.2014.50 As Cloud Computing becomes the dominant computational model of information technology, Cloud security is becoming a major issue in adopting the Cloud, where security is considered one of the most critical concerns for its large customers (i.e. governments and enterprises). This valid concern is mainly driven by Multi-Tenancy, which refers to resource sharing in Cloud Computing, and by its associated risks, where confidentiality and/or integrity could be violated. As a result, security concerns may hamper the advancement of Cloud Computing in the market. In order to propose effective security solutions and strategies, professionals must have a good knowledge of current Cloud implementations and practices, especially the public Clouds; such understanding is needed to recognize attack vectors and attack surfaces. In this paper we propose an attack model based on a threat model designed to take advantage of the Multi-Tenancy situation only. Before that, a clear understanding of Multi-Tenancy, its origin and its benefits is given, and a novel way of approaching Multi-Tenancy is illustrated. Finally, we try to sense any suspicious behaviour that may indicate a possible attack, attempting to recognize the proposed attack model empirically in Google trace logs. Google trace logs are 29 days' worth of data released by Google. The data set has been utilized in reliability and power consumption studies, but not, to the best of our knowledge, in any security study.
    Keywords: cloud computing; resource allocation; security of data; Google trace logs; attack model; attack surfaces; attack vectors; cloud computing; cloud security; information technology computational model; multitenancy situation; public clouds; resource sharing; suspicious behavior; threat model; Cloud computing; Computational modeling; Databases; Resource management; Security; Servers; Virtualization; Attack Models; Cloud Computing; Cloud Data; Multi-Tenancy; Security (ID#:14-3313)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830928&isnumber=6825948
  • Wei Xiong; Wei-Tek Tsai, "HLA-Based SaaS-Oriented Simulation Frameworks," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.376, 383, 7-11 April 2014. doi: 10.1109/SOSE.2014.74 SaaS (Software-as-a-Service) as a part of cloud computing is a new approach for software construction, evolution, and delivery. This paper proposes HLA-based SaaS-oriented simulation frameworks where simulation services will be organized into a SaaS framework running in a cloud environment. This SaaS-oriented framework can be applied to multiple application domains but illustrated by using HLA (High-Level Architecture). The framework will allow integration of a variety of modules, service-oriented design, flexible customization, multi-granularity simulation, high-performance computing, and system security. It has the potential to reduce system development time, and allows simulation to be run in a cloud environment taking advantages of resources offered by the cloud.
    Keywords: cloud computing; digital simulation; security of data; service-oriented architecture; HLA-based SaaS-oriented simulation; cloud computing; cloud environment; flexible customization; high-level architecture; high-performance computing; multigranularity simulation; service-oriented design; simulation service software as a service; software construction; software delivery; software evolution; system development time reduction; system security; Adaptation models; Computational modeling; Computer architecture; Data models; Databases; Object oriented modeling; Software as a service; HLA; SaaS (Software-as-a-Service);service-oriented design; simulation frameworks (ID#:14-3314)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830933&isnumber=6825948
  • Dornhackl, H.; Kadletz, K.; Luh, R.; Tavolato, P., "Malicious Behavior Patterns," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.384, 389, 7-11 April 2014. doi: 10.1109/SOSE.2014.52 This paper details a schema developed for defining malicious behavior in software. The presented approach enables malware analysts to identify and categorize malicious software through its high-level goals as well as down to the individual functions executed on operating system level. We demonstrate the practical application of the schema by mapping dynamically extracted system call patterns to a comprehensive hierarchy of malicious behavior.
    Keywords: invasive software; object-oriented methods; malicious behavior patterns; malware analyst; operating system level; Availability; Grammar; Malware; Payloads; Reconnaissance; Software; Vectors; behavior pattern; formal grammar; malware (ID#:14-3315)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830934&isnumber=6825948
  • Atkinson, J.S.; Mitchell, J.E.; Rio, M.; Matich, G., "Your WiFi Is Leaking: Building a Low-Cost Device to Infer User Activities," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.396,397, 7-11 April 2014. doi: 10.1109/SOSE.2014.54 This paper documents a hardware and software implementation to monitor, capture and store encrypted WiFi communication data. The implementation detailed can perform this entirely passively using only cheap commodity hardware and freely available software. It is hoped that this will be of use to other researchers and practitioners wishing to explore activity inference without breaking encryption, or supplement the (somewhat scarce) existing body of data available from this particular external perspective.
    Keywords: cryptography; wireless LAN; WiFi; communication data; encryption; Encryption; Hardware; IEEE 802.11 Standards; Privacy; Software; Wireless communication; activity inference; cyber security; encryption; implementation; wifi (ID#:14-3316)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830936&isnumber=6825948
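    A minimal version of the paper's passive monitor can be put together with commodity tools. The Python sketch below uses the third-party scapy package and assumes a wireless interface already placed in monitor mode (the interface name "mon0" is hypothetical); it records only frame sizes, addresses and timing, without touching the encrypted payloads.

      # Passive 802.11 capture sketch; requires 'pip install scapy' and root.
      from scapy.all import sniff, Dot11
      import time

      def log_frame(pkt):
          if pkt.haslayer(Dot11):
              # Frame length and timing leak user activity even when the
              # payload itself stays encrypted.
              print(time.time(), pkt[Dot11].addr2, len(pkt))

      sniff(iface="mon0", prn=log_frame, store=False, count=100)

    Feeding the logged (timestamp, source address, size) triples into a classifier is the essence of the activity-inference experiments the paper aims to support.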
  • Alzahrani, A.A.H.; Eden, A.H.; Yafi, M.Z., "Structural Analysis of the Check Point Pattern," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.404, 408, 7-11 April 2014. doi: 10.1109/SOSE.2014.56 We investigate intuitive claims made in security pattern catalogues using the formal language of Codecharts and the Two-Tier Programming Toolkit. We analyse the Check Point pattern's structure and explore claims about conformance (of programs to the pattern), about consistency (between different catalogues), and about the relation between (security and design) patterns. Our analysis shows that some of the intuitive claims hold whereas others were found inaccurate or false.
    Keywords: checkpointing; formal languages; security of data; check point pattern; codecharts; formal language; intuitive claims; security pattern catalogues; structural analysis; two-tier programming toolkit; Educational institutions; Java; Object oriented modeling; Security; Software; Unified modeling language; Codecharts; Security patterns; design pattern; design verification; formal languages (ID#:14-3317)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830938&isnumber=6825948
  • Kulkarni, A.; Metta, R., "A New Code Obfuscation Scheme for Software Protection," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.409, 414, 7-11 April 2014. doi: 10.1109/SOSE.2014.57 IT industry loses tens of billions of dollars annually from security attacks such as tampering and malicious reverse engineering. Code obfuscation techniques counter such attacks by transforming code into patterns that resist the attacks. None of the current code obfuscation techniques satisfy all the obfuscation effectiveness criteria such as resistance to reverse engineering attacks and state space increase. To address this, we introduce new code patterns that we call nontrivial code clones and propose a new obfuscation scheme that combines nontrivial clones with existing obfuscation techniques to satisfy all the effectiveness criteria. The nontrivial code clones need to be constructed manually, thus adding to the development cost. This cost can be limited by cloning only the code fragments that need protection and by reusing the clones across projects. This makes it worthwhile considering the security risks. In this paper, we present our scheme and illustrate it with a toy example.
    Keywords: computer crime; reverse engineering; software engineering; systems re-engineering; IT industry; code fragment cloning; code obfuscation scheme; code patterns; code transformation; malicious reverse engineering; nontrivial code clones; security attacks; software protection; tampering; Cloning; Complexity theory; Data processing; Licenses; Resistance; Resists; Software (ID#:14-3318)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830939&isnumber=6825948
  • Smith, P.; Schaeffer-Filho, A., "Management Patterns for Smart Grid Resilience," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.415,416, 7-11 April 2014. doi: 10.1109/SOSE.2014.58 Smart grids are power distribution networks characterised by an increased level of automation of the infrastructure, sensors and actuators connected to monitoring and control centres, and are strongly supported by information and communication technology (ICT). Consequently, smart grids are more vulnerable to cyber-attacks. In this position paper, we advocate the need for management patterns that capture best-practices for ensuring the resilience of smart grids to cyber-attacks and other related challenges. Management patterns are akin to software design patterns in the sense that patterns promote the use of well-established solutions to recurring problems. These patterns describe how to orchestrate the cyber-physical behaviour of ICT, industrial control systems and human resources in a safe manner, in response to cyber-attacks.
    Keywords: actuators; distribution networks; power engineering computing; power system management; security of data; sensors; smart power grids; ICT; actuators; control centres; cyber-attacks; cyber-physical behaviour; human resources; industrial control systems; information and communication technology; management patterns; power distribution networks; sensors; smart grid resilience;software design patterns; Automation; Guidelines; Resilience; Security; Smart grids; Standards (ID#:14-3319)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830940&isnumber=6825948
  • Blyth, A., "Understanding Security Patterns for Socio-technical Systems via Responsibility Modelling," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.417, 421, 7-11 April 2014. doi: 10.1109/SOSE.2014.59 Increasingly, security requirements are being viewed as a social construct derived from the culture and society within which the requirement is said to exist. A socio-technical system can be modelled as a series of inter-related, interacting patterns of behaviour, and security requirements can be derived from the analysis and interaction of those patterns. To capture and understand these requirements/patterns we need to make use of a formal reasoning system that supports a rigorous deductive process. In this paper we develop a formal model of a socio-technical systems pattern using a Kripke semantic model. Then, by applying Kripke semantics to the modelling of responsibilities and how they are created and fulfilled within a social context, we derive a set of security requirements/patterns.
    Keywords: human computer interaction; programming language semantics; security of data; social aspects of automation; Kripke semantic model; deductive process; formal reasoning system; responsibility modelling; security patterns; security requirements; socio-technical system; Analytical models; Computational modeling; Context; Security; Semantics; Sociotechnical systems; Accountability; Liability and Culpability; Modal Action Logic (MAL); Responsibility Modelling; SocioTechnical System (STS) (ID#:14-3320)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830941&isnumber=6825948
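    To make the Kripke-semantics machinery concrete, the Python sketch below builds a three-world Kripke structure and evaluates the modal "box" operator (a proposition holds in every accessible world), which is the shape an obligation takes in responsibility modelling. The worlds, accessibility relation and proposition are invented for illustration.

      # Tiny Kripke structure with an evaluator for "box p".
      worlds = {"w1", "w2", "w3"}
      access = {"w1": {"w2", "w3"}, "w2": {"w3"}, "w3": set()}
      holds = {"audit_done": {"w2", "w3"}}          # valuation of proposition p

      def box(prop, world):
          # "box p" is true at w iff p holds in every world accessible from w.
          return all(w in holds[prop] for w in access[world])

      print(box("audit_done", "w1"))   # True: both w2 and w3 satisfy p
      print(box("audit_done", "w3"))   # True vacuously: no accessible worlds

    Reading "audit_done" as a fulfilled responsibility, "box audit_done" at a world expresses that the obligation is discharged in every reachable future, the kind of requirement the paper derives as a pattern.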
  • Aziz, B.; Blackwell, C., "Using Security Patterns for Modelling Security Capabilities in Grid Systems," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.422,427, 7-11 April 2014. doi: 10.1109/SOSE.2014.60 We extend previous work on formalising design patterns to start the development of security patterns for Grid systems. We demonstrate the feasibility of our approach with a case study involving a deployed security architecture in a Grid Operating System called XtreemOS. A number of Grid security management capabilities that aid the secure setting-up and running of a Grid are presented. We outline the functionality needed for such cases in a general form, which could be utilised when considering the development of similar large-scale systems in the future. We also specifically describe the use of authentication patterns that model the extension of trust from a secure core, and indicate how these patterns can be composed, specialised and instantiated.
    Keywords: grid computing; operating systems (computers); security of data; XtreemOS; authentication patterns; design patterns formalization; grid operating system; grid security management capabilities; grid systems; security capabilities modeling; security patterns; trust extension; Authentication; Databases; Monitoring; Operating systems; Public key; Receivers; Grid operating systems; Security patterns; authentication patterns; security architectures (ID#:14-3321)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830942&isnumber=6825948
  • Duncan, I.; De Muijnck-Hughes, J., "Security Pattern Evaluation," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.428, 429, 7-11 April 2014. doi: 10.1109/SOSE.2014.61 Current Security Pattern evaluation techniques are demonstrated to be incomplete with respect to quantitative measurement and comparison. A proposal for a dynamic testbed system is presented as a potential mechanism for evaluating patterns within a constrained environment.
    Keywords: pattern classification; security of data; dynamic testbed system; security pattern evaluation; Complexity theory; Educational institutions; Measurement; Security; Software; Software reliability; Testing; evaluation; metrics; security patterns; testing (ID#:14-3322)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830943&isnumber=6825948
  • Madhusudhan, R.; Kumar, S.R., "Cryptanalysis of a Remote User Authentication Protocol Using Smart Cards," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.474,477, 7-11 April 2014. doi: 10.1109/SOSE.2014.84 Remote user authentication using smart cards is a method of verifying the legitimacy of remote users accessing a server through an insecure channel, using smart cards to increase the efficiency of the system. Over the last couple of years many protocols to authenticate remote users using smart cards have been proposed, but unfortunately most of them have been proved insecure against various attacks. Recently, Yung-Cheng Lee improved Shin et al.'s protocol and claimed that the improved protocol is more secure. In this article, we show that Yung-Cheng Lee's protocol also has defects: it does not provide user anonymity, and it is vulnerable to denial-of-service attacks, session key reveal, user impersonation, server impersonation and insider attacks. Further, it is inefficient in the password change phase, since that phase requires communication with the server and uses a verification table.
    Keywords: computer network security; cryptographic protocols; message authentication; smart cards; Yung-Cheng-Lee's protocol; cryptanalysis; denial-of-service attack; insecure channel; insider attacks; legitimacy verification; password change phase; remote user authentication protocol; server impersonation attack; session key; smart cards; user impersonation attack; verification table;Authentication;Bismuth;Cryptography;Protocols;Servers;Smart cards; authentication; smart card; cryptanalysis; dynamic id (ID#:14-3323)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830951&isnumber=6825948
  • Alarifi, S.; Wolthusen, S.D., "Mitigation of Cloud-Internal Denial of Service Attacks," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.478,483, 7-11 April 2014. doi: 10.1109/SOSE.2014.71 Cloud computing security is one of the main concerns preventing the adoption of the cloud by many organisations. This paper introduces mitigation strategies to defend against the cloud-specific CIDoS class of attacks (Cloud-Internal Denial of Service) presented in [1]. The mitigation approaches are based on techniques used in the signal processing field. The main detection strategy is the calculation of correlation measurements and distances between attackers' workload patterns, using the DCT (Discrete Cosine Transform) to accomplish this task. The paper also suggests some prevention and response strategies.
    Keywords: cloud computing; computer network security; discrete cosine transforms; CIDoS class; DCT; attack detection; cloud computing security; cloud-internal denial of service attack mitigation; correlations measurement; discrete cosine transform; mitigation strategies; prevention strategy; response strategy; signals processing field; Computer crime; Correlation; Delays; Discrete cosine transforms; Educational institutions; Monitoring; Testing; CIDoS attack detection; Cloud Attack Mitigation; Cloud Computing Security; Cloud DoS attacks; IaaS Cloud Security (ID#:14-3324)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830952&isnumber=6825948
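    The detection step can be sketched directly: transform each VM's workload trace with the DCT and compare spectra, since coordinated CIDoS attackers show highly correlated workload patterns. The Python sketch below implements the DCT-II formula with numpy to stay dependency-light; the traces and the distance measure are illustrative, not the paper's exact procedure.

      # DCT-based workload-correlation sketch (numpy only).
      import numpy as np

      def dct2(x):
          # Direct DCT-II: X_k = sum_n x_n * cos(pi * (n + 0.5) * k / N).
          n = len(x)
          k = np.arange(n).reshape(-1, 1)
          return (x * np.cos(np.pi * (np.arange(n) + 0.5) * k / n)).sum(axis=1)

      def spectral_distance(a, b):
          fa, fb = dct2(a), dct2(b)
          return np.linalg.norm(fa - fb) / np.linalg.norm(fa)

      rng = np.random.default_rng(0)
      base = np.sin(np.linspace(0, 8, 64))              # shared attack pattern
      attacker1 = base + 0.05 * rng.standard_normal(64)
      attacker2 = base + 0.05 * rng.standard_normal(64)
      benign = rng.standard_normal(64)

      print(spectral_distance(attacker1, attacker2))    # small: correlated workloads
      print(spectral_distance(attacker1, benign))       # large: uncorrelated

    Working in the DCT domain concentrates a periodic attack pattern into a few coefficients, making the correlation between colluding VMs stand out against noisy benign workloads.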
  • Mapp, G.; Aiash, M.; Ondiege, B.; Clarke, M., "Exploring a New Security Framework for Cloud Storage Using Capabilities," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.484,489, 7-11 April 2014. doi: 10.1109/SOSE.2014.69 We are seeing the deployment of new types of networks, such as sensor networks for environmental and infrastructural monitoring, social networks such as Facebook, and e-Health networks for patient monitoring. These networks produce large amounts of data that need to be stored, processed and analysed, and cloud technology is being used to meet these challenges. However, a key issue is how to provide security for data stored in the Cloud. This paper addresses the issue in two ways. It first proposes a new security framework for Cloud security which deals with all the major system entities. Secondly, it introduces a Capability ID system based on modified IPv6 addressing which can be used to implement a security framework for Cloud storage. The paper then shows how these techniques are being used to build an e-Health system for patient monitoring.
    Keywords: cloud computing; electronic health records; patient monitoring; social networking (online);storage management;IPv6 addressing; capability ID system; cloud security; cloud storage; cloud technology; e-Health system; e-health networks; environmental monitoring; facebook; infrastructural monitoring; patient monitoring; security for data security framework; sensor networks; social networks; system entity; Cloud computing; Companies; Monitoring; Protocols; Security; Servers; Virtual machine monitors; Capability Systems; Cloud Storage; Security Framework; e-Health Monitoring (ID#:14-3325)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830953&isnumber=6825948
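    One way to picture the Capability ID idea is to pack a capability token into the interface-identifier half of an IPv6 address, which the storage server can recompute to validate access. The Python sketch below is a hypothetical illustration of that packing; the prefix, token derivation and field layout are assumptions, not the paper's actual encoding.

      # Embedding a capability token in the low 64 bits of an IPv6 address.
      import hashlib, ipaddress

      def capability_address(prefix, user_id, rights, secret):
          # Token = truncated keyed hash over the user identity and rights.
          token = hashlib.sha256(secret + user_id + rights).digest()[:8]
          iid = int.from_bytes(token, "big")
          # Keep the upper 64 bits of the prefix, replace the interface ID.
          return ipaddress.IPv6Address((int(prefix) & ~((1 << 64) - 1)) | iid)

      prefix = ipaddress.IPv6Address("2001:db8::")
      addr = capability_address(prefix, b"alice", b"read-write", b"server-secret")
      print(addr)   # the server recomputes the token to validate the capability

    Because the capability travels inside the address itself, every packet carries its own proof of authority, which is what makes the scheme attractive for per-flow access control to cloud storage.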
  • Euijin Choo; Younghee Park; Siyamwala, H., "Identifying Malicious Metering Data in Advanced Metering Infrastructure," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.490,495, 7-11 April 2014. doi: 10.1109/SOSE.2014.75 Advanced Metering Infrastructure (AMI) has evolved to measure and control energy usage by communicating through metering devices. However, the development of the AMI network brings security issues with it, including the increasingly serious risk of malware in the emerging network. Malware is often embedded in the data payloads of legitimate metering traffic, and it is difficult to detect in metering devices, which are resource-constrained embedded systems, during time-critical communications. This paper describes a method of distinguishing malware-bearing traffic from legitimate metering data using a disassembler and statistical analysis. Based on the discovered unique characteristics of each data type, the proposed method detects malicious (i.e. malware-bearing) metering data. Data payloads are analyzed statistically, using a disassembler to investigate the distribution of instructions in the traffic; this demonstrates that the distribution of instructions in metering data differs significantly from that in malware-bearing data. The proposed approach identifies the two data types with complete accuracy: 0% false positives and 0% false negatives.
    Keywords: invasive software; metering; power system security; program assemblers; smart meters; statistical analysis; AMI network; advanced metering infrastructure; data payloads; disassembler; energy usage; malicious metering data; malware-bearing data; malware-bearing traffic; metering devices; resource constrained embedded systems; security issues; statistical analysis; time-critical communications; Malware; Registers; Statistical analysis; Testing; Training; ARM Instructions; Advanced Metering Infrastructure; Diassembler; Malware; Security; Smart Meters (ID#:14-3326)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830954&isnumber=6825948
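    The statistical test reduces to comparing instruction-frequency distributions. The Python sketch below computes an L1 divergence between a profile learned from legitimate metering payloads and a suspect payload; the ARM-style mnemonics and the threshold are invented stand-ins for real disassembler output.

      # Instruction-distribution divergence sketch for payload screening.
      from collections import Counter

      def distribution(opcodes):
          total = len(opcodes)
          return {op: c / total for op, c in Counter(opcodes).items()}

      def l1_divergence(p, q):
          keys = set(p) | set(q)
          return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

      legit_profile = distribution(["mov", "cmp", "mov", "add", "mov", "str"] * 50)
      suspect = distribution(["ldr", "bl", "ldr", "mov", "bl", "svc"] * 50)
      normal = distribution(["mov", "add", "mov", "cmp", "str", "mov"] * 50)

      THRESHOLD = 0.5                     # hypothetical decision boundary
      for name, d in [("suspect", suspect), ("normal", normal)]:
          score = l1_divergence(legit_profile, d)
          print(name, round(score, 2), "malware?", score > THRESHOLD)

    Legitimate metering payloads, when "disassembled", look like data rather than coherent code, so real malware produces an instruction histogram far from the learned profile, which is the separation underlying the paper's perfect accuracy.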
  • Hongjun Dai; Qian Li; Meikang Qiu; Zhilou Yu; Zhiping Jia, "A Cloud Trust Authority Framework for Mobile Enterprise Information System," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.496,501, 7-11 April 2014. doi: 10.1109/SOSE.2014.68 With the trend toward mobile enterprise information systems, security has become the primary issue, relating as it does to business secrets, decisions, and process control. Hence, we present a fully customized framework that anchors security in the trust authority of a cloud certificate authority server and guarantees security throughout the software development process. The core object model, named secure mobile beans (SMB), can be deployed into the cloud server. Our framework consists of SMB models, an object-relation mapping module, an SMB translator, and development tools. The use cases show that it frees developers from the complex implementation of security policies during the development stages and effectively shortens mobile application development time.
    Keywords: cloud computing; file servers; information systems; trusted computing; SMB translator; business secret; cloud certificate authority server; cloud trust authority framework; fully customized framework; mobile enterprise information system; object-relation mapping module; process control; secure mobile beans; security policies; software developments; Authentication; Data models; Databases; Java; Mobile communication; Servers; cloud trust authority; enterprise development framework; mobile enterprise information system; secure mobile beans (ID#:14-3327)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830955&isnumber=6825948

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


International Conferences: Dependable Systems and Networks (2014) - USA

Dependable Systems and Networks (2014)


As part of the series focused upon specific international conferences, the citations given here are from the 2014 44th Annual IEEE/IFIP International Conference on Dependable Systems and Networks (DSN), held in Atlanta, Georgia on 23-26 June 2014. All relate to security research.

  • Cuong Pham; Estrada, Z.; Phuong Cao; Kalbarczyk, Z.; Iyer, R.K., "Reliability and Security Monitoring of Virtual Machines Using Hardware Architectural Invariants," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.13, 24, 23-26 June 2014. doi: 10.1109/DSN.2014.19 This paper presents a solution that simultaneously addresses both reliability and security (RnS) in a monitoring framework. We identify the commonalities between reliability and security to guide the design of HyperTap, a hypervisor-level framework that efficiently supports both types of monitoring in virtualization environments. In HyperTap, the logging of system events and states is common across monitors and constitutes the core of the framework. The audit phase of each monitor is implemented and operated independently. In addition, HyperTap relies on hardware invariants to provide a strongly isolated root of trust. HyperTap uses active monitoring, which can be adapted to enforce a wide spectrum of RnS policies. We validate HyperTap by introducing three example monitors: Guest OS Hang Detection (GOSHD), Hidden RootKit Detection (HRKD), and Privilege Escalation Detection (PED). Our experiments with fault injection and real rootkits/exploits demonstrate that HyperTap provides robust monitoring with low performance overhead.
    Keywords: monitoring; reliability; security of data; virtual machines; GOSHD; Guest OS Hang Detection; HRKD; Hyper Tap; PED; active monitoring; fault injection; hardware architectural invariants; hidden root kit detection; hyper visor-level framework; privilege escalation detection; reliability; robust monitoring; security monitoring framework; virtual machines; virtualization environments; Data structures; Hardware; Kernel; Monitoring; Reliability; Security; Virtual machine monitors; Fault Injection; Hypervisor; Invariant; Monitoring; Reliability; Rootkit; Security (ID#:14-3095)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903563&isnumber=6903544
  • Haq, O.; Ahmed, W.; Syed, A.A., "Titan: Enabling Low Overhead and Multi-faceted Network Fingerprinting of a Bot," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.37, 44, 23-26 June 2014. doi: 10.1109/DSN.2014.20 Botnets are an evolutionary form of malware, unique in requiring network connectivity for herding by a botmaster, which allows coordinated attacks as well as dynamic evasion from detection. Thus, the most interesting features of a bot relate to its rapidly evolving network behavior. The few academic and commercial malware observation systems that exist, however, are either proprietary or have large cost and management overhead. Moreover, the network behavior of bots changes considerably under different operational contexts. We first identify the various contexts that can impact a bot's fingerprint. We then present Titan: a system that generates faithful network fingerprints by recreating all these contexts and stressing the bot with different network settings and host interactions. This effort includes a semi-automated and tunable containment policy to prevent bot proliferation. Most importantly, Titan has low cost overhead, as a minimal setup requires just two machines, while the provision of a user-friendly web interface reduces the setup and management overhead. We then show a fingerprint of the CryptoLocker bot to demonstrate automatic detection of its domain generation algorithm (DGA). We also demonstrate the effective identification of context-specific behavior with a controlled deployment of the Zeus botnet.
    Keywords: invasive software; Botnets; Crypto locker bot; DGA; Titan system; Zeus botnet; bot detection; bot proliferation prevention; botmaster; containment policy; domain generation algorithm; malware; malware observation systems; network connectivity; network fingerprinting; Context; Fingerprint recognition; IP networks; Logic gates; Malware; Ports (Computers); Sensors; botnets; containment policy; malware fingerprint; software defined networking; testbed (ID#:14-3096)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903565&isnumber=6903544
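    Domain generation algorithms of the kind Titan fingerprints are typically small deterministic functions of the date. The Python sketch below is a toy, date-seeded DGA; the constants and alphabet are invented, not CryptoLocker's real algorithm.

      # Toy date-seeded domain generation algorithm (DGA).
      import datetime

      def dga_domains(date, count=5, length=12):
          domains = []
          seed = date.year * 10000 + date.month * 100 + date.day
          for i in range(count):
              x, name = seed + i, ""
              for _ in range(length):
                  x = (1103515245 * x + 12345) % (1 << 31)   # LCG step
                  name += chr(ord("a") + x % 26)
              domains.append(name + ".com")
          return domains

      print(dga_domains(datetime.date(2014, 12, 22)))

    Replaying different system dates against the captured binary in a sandbox, and watching which domains it resolves, is how a fingerprinting system can map a bot's rendezvous points without any live command-and-control contact.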
  • Howard, G.M.; Gutierrez, C.N.; Arshad, F.A.; Bagchi, S.; Yuan Qi, "pSigene: Webcrawling to Generalize SQL Injection Signatures," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.45, 56, 23-26 June 2014. doi: 10.1109/DSN.2014.21 Intrusion detection systems (IDS) are an important component in effectively protecting computer systems. Misuse detection is the most popular approach to detecting intrusions, using a library of signatures to find attacks. The accuracy of the signatures is paramount for an effective IDS; still, today's practitioners rely on manual techniques to improve and update those signatures. We present a system, called pSigene, for the automatic generation of intrusion signatures by mining the vast amount of public data available on attacks. It follows a four-step process to generate the signatures: first crawling attack samples from multiple public cyber security web portals; then creating a feature set from existing detection signatures to model the samples; grouping the samples using a biclustering algorithm, which also gives the distinctive features of each cluster; and finally creating a set of signatures using regular expressions, one for each cluster. We tested our architecture on SQL injection attacks and found our signatures to have true and false positive rates of 90.52% and 0.03%, respectively, and compared our findings to other SQL injection signature sets from popular IDSes and web application firewalls. Results show our system to be very competitive with existing signature sets.
    Keywords: SQL; authorisation; data mining; digital signatures; portals; IDS; SQL injection attack; SQL injection signature; Webcrawling; biclustering algorithm; crawling attack; data mining; intrusion detection system; misuse detection; pSigene; public cyber security Web portal; Clustering algorithms; Computer security; Databases; Feature extraction; Manuals; Portals; SQL injection; biclustering; signature generalization; web application security (ID#:14-3097)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903566&isnumber=6903544
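    The end product of the pSigene pipeline is a set of generalized regular-expression signatures. The two Python patterns below are illustrative of that form, not the paper's actual generated signatures: one catches UNION-based injection, the other the classic tautology probe.

      # Illustrative regex signatures for SQL injection screening.
      import re

      SIGNATURES = [
          re.compile(r"(?i)\bunion\b.{0,40}\bselect\b"),                # UNION-based injection
          re.compile(r"(?i)\bor\b\s+['\"]?\d+['\"]?\s*=\s*['\"]?\d+"),  # tautology, e.g. OR '1'='1'
      ]

      def is_sqli(request_param):
          return any(sig.search(request_param) for sig in SIGNATURES)

      print(is_sqli("id=1 UNION ALL SELECT password FROM users"))  # True
      print(is_sqli("id=1 OR '1'='1'"))                            # True
      print(is_sqli("title=reunion select committee"))             # False

    Deriving one such pattern per bicluster, rather than one per observed sample, is what lets the generalized signatures catch attack variants that exact-match rules would miss.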
  • Haitao Du; Yang, S.J., "Probabilistic Inference for Obfuscated Network Attack Sequences," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.57, 67, 23-26 June 2014. doi: 10.1109/DSN.2014.22 Facing diverse network attack strategies and overwhelming alerts, much work has been devoted to correlating observed malicious events with pre-defined scenarios, attempting to deduce attack plans based on expert models of how network attacks may transpire. Sophisticated attackers can, however, employ a number of obfuscation techniques to confuse the alert correlation engine or classifier. Recognizing the need for a systematic analysis of the impact of attack obfuscation, this paper models attack strategies as general finite-order Markov models and treats obfuscated observations as noise. Taking into account that only a finite observation window and limited computational time can be afforded, this work develops an algorithm for efficient inference on the joint distribution of clean and obfuscated attack sequences. The inference algorithm recovers the optimal match of obfuscated sequences to attack models, and enables a systematic and quantitative analysis of the impact of obfuscation on attack classification.
    Keywords: Markov processes; computer network security; invasive software; Markov models; attack obfuscation; diverse network attack strategies; finite observation window; limited computational time; obfuscated attack sequences; obfuscated network attack sequences; observed malicious events; probabilistic inference; sophisticated attackers; systematic analysis; Computational modeling; Dynamic programming; Hidden Markov models; Inference algorithms; Markov processes; Probabilistic logic; Vectors (ID#:14-3098)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903567&isnumber=6903544
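    Treating obfuscated alerts as noisy emissions from a hidden first-order Markov attack model makes decoding a textbook Viterbi problem. The Python sketch below recovers the most probable true action sequence from an observed alert stream; the states and all probabilities are invented for illustration.

      # Viterbi decoding of a noisy attack-alert sequence; toy model.
      states = ["scan", "exploit", "exfiltrate"]
      start = {"scan": 0.8, "exploit": 0.15, "exfiltrate": 0.05}
      trans = {"scan":       {"scan": 0.5, "exploit": 0.45, "exfiltrate": 0.05},
               "exploit":    {"scan": 0.1, "exploit": 0.5,  "exfiltrate": 0.4},
               "exfiltrate": {"scan": 0.3, "exploit": 0.2,  "exfiltrate": 0.5}}
      # P(observed alert | true action): diagonal-heavy, off-diagonal = obfuscation.
      emit = {s: {o: (0.7 if o == s else 0.15) for o in states} for s in states}

      def viterbi(observations):
          # path[s] = (best action sequence ending in s, its probability)
          path = {s: ([s], start[s] * emit[s][observations[0]]) for s in states}
          for obs in observations[1:]:
              path = {s: max((([*path[p][0], s],
                               path[p][1] * trans[p][s] * emit[s][obs])
                              for p in states), key=lambda t: t[1])
                      for s in states}
          return max(path.values(), key=lambda t: t[1])[0]

      print(viterbi(["scan", "scan", "exfiltrate", "exploit", "exfiltrate"]))

    The decoder weighs each noisy alert against what the attack model says is a plausible progression, which is the same trade-off the paper's joint-distribution inference makes at larger scale.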
  • Anceaume, E.; Busnel, Y.; Le Merrer, E.; Ludinard, R.; Marchand, J.L.; Sericola, B., "Anomaly Characterization in Large Scale Networks," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.68, 79, 23-26 June 2014. doi: 10.1109/DSN.2014.23 The context of this work is the online characterization of errors in large scale systems. In particular, we address the following question: given two successive configurations of the system, can we distinguish massive errors from isolated ones, the former impacting a large number of nodes while the latter affect only a small number of them, or even a single one? The rationale for this question is twofold. First, from a theoretical point of view, we characterize errors with respect to their neighbourhood, and we show that there are error scenarios for which isolated and massive errors are indistinguishable from an omniscient observer's point of view. We then relax the definition of the problem by introducing unresolved configurations, and exhibit necessary and sufficient conditions that allow any node to determine the type of error that has impacted it. These conditions depend only on the close neighbourhood of each node and thus are locally computable. We present algorithms that implement these conditions and show their performance through extensive simulations. From a practical point of view, distinguishing isolated errors from massive ones is of utmost importance to network providers. For instance, for Internet service providers that operate millions of home gateways, it would be very interesting to have procedures that allow gateways to determine by themselves whether their dysfunction is caused by network-level errors or by their own hardware or software, and to notify the service provider only in the latter case.
    Keywords: computerised monitoring; digital simulation; distributed processing ; security of data; anomaly characterization; error online characterization; extensive simulations ;isolated errors; large scale distributed systems; large scale networks; large scale systems; massive errors; online monitoring problem; Bismuth; Measurement; Monitoring; Observers; Peer-to-peer computing; Quality of service; Trajectory; Error detection; large scale systems; local algorithms (ID#:14-3099)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903568&isnumber=6903544
  • Daiping Liu; Haining Wang; Stavrou, A., "Detecting Malicious Javascript in PDF through Document Instrumentation," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.100, 111, 23-26 June 2014. doi: 10.1109/DSN.2014.92 An emerging threat vector, malware embedded inside popular document formats, has become rampant since 2008. Owing to its widespread use and Javascript support, PDF has been the primary vehicle for delivering embedded exploits. Unfortunately, existing defenses are limited in effectiveness, vulnerable to evasion, or too computationally expensive to be employed as on-line protection systems. In this paper, we propose a context-aware approach for the detection and confinement of malicious Javascript in PDF. Our approach statically extracts a set of static features and inserts context-monitoring code into a document. When an instrumented document is opened, the context-monitoring code inside cooperates with our runtime monitor to detect potential infection attempts in the context of Javascript execution. Thus, our detector can identify malicious documents by using both static and runtime features. To validate the effectiveness of our approach in a real-world setting, we first conduct a security analysis, showing that our system remains effective in detection and robust against evasion attempts even in the presence of sophisticated adversaries. We implement a prototype of the proposed system, and perform extensive experiments using 18623 benign PDF samples and 7370 malicious samples. Our evaluation results demonstrate that our approach can accurately detect and confine malicious Javascript in PDF with minor performance overhead.
    Keywords: Java; document handling; feature extraction ;invasive software; ubiquitous computing; Javascript execution; Javascript support; PDF; context monitoring code; context-aware approach; document format; document instrumentation; embedded malware; emerging threat vector; evasion attempt; malicious Javascript confinement; malicious Javascript detection; malicious document identification; online protection system; potential infection attempt detection; runtime feature; runtime monitoring; security analysis; sophisticated adversaries; static feature extraction; Context; Feature extraction; Instruments; Malware; Monitoring; Portable document format; Runtime; Malcode bearing PDF; document instrumentation; malicious Javascript; malware detection and confinement (ID#:14-3100)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903571&isnumber=6903544
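    The static half of such a detector can be approximated in a few lines. The sketch below is illustrative only: the byte markers and the decision rule are common heuristics for PDF triage, not the feature set or instrumentation logic of the paper.

        # Name objects commonly associated with embedded JavaScript in PDFs.
        JS_MARKERS = [b"/JavaScript", b"/JS", b"/OpenAction", b"/AA", b"/Launch"]

        def extract_static_features(pdf_path):
            """Count JavaScript-related PDF name objects in the raw file bytes."""
            data = open(pdf_path, "rb").read()
            return {m.decode(): data.count(m) for m in JS_MARKERS}

        def flag_for_instrumentation(features):
            # Toy rule: embedded JavaScript plus an auto-execution trigger
            # warrants inserting context-monitoring code before opening.
            has_js = features["/JavaScript"] > 0 or features["/JS"] > 0
            auto_exec = features["/OpenAction"] > 0 or features["/AA"] > 0
            return has_js and auto_exec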
  • Bin Liang; Wei You; Liangkun Liu; Wenchang Shi; Heiderich, M., "Scriptless Timing Attacks on Web Browser Privacy," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.112, 123, 23-26 June 2014. doi: 10.1109/DSN.2014.93 The existing Web timing attack methods are heavily dependent on executing client-side scripts to measure the time. However, many techniques have been proposed to block the executions of suspicious scripts recently. This paper presents a novel timing attack method to sniff users' browsing histories without executing any scripts. Our method is based on the fact that when a resource is loaded from the local cache, its rendering process should begin earlier than when it is loaded from a remote website. We leverage some Cascading Style Sheets (CSS) features to indirectly monitor the rendering of the target resource. Three practical attack vectors are developed for different attack scenarios and applied to six popular desktop and mobile browsers. The evaluation shows that our method can effectively sniff users' browsing histories with very high precision. We believe that modern browsers protected by script-blocking techniques are still likely to suffer serious privacy leakage threats.
    Keywords: data privacy; online front-ends; CSS features; Web browser privacy; Web timing attack methods; cascading style sheets; client-side scripts; desktop browser; mobile browser; privacy leakage threats; rendering process; script-blocking techniques; scriptless timing attacks; user browsing history; Animation; Browsers; Cascading style sheets; History; Rendering (computer graphics); Timing; Web privacy; browsing history; scriptless attack; timing attack (ID#:14-3101)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903572&isnumber=6903544
  • Shaw, A.; Doggett, D.; Hafiz, M., "Automatically Fixing C Buffer Overflows Using Program Transformations," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.124, 135, 23-26 June 2014. doi: 10.1109/DSN.2014.25 Fixing C buffer overflows at source code level remains a manual activity, at best semi-automated. We present an automated approach to fix buffer overflows by describing two program transformations that automatically introduce two well-known security solutions to C source code. The transformations embrace the difficulties of correctly analyzing and modifying C source code considering pointers and aliasing. They are effective: they fixed all buffer overflows featured in 4,505 programs of NIST's SAMATE reference dataset, making the changes automatically on over 2.3 million lines of code (MLOC). They are also safe: we applied them to make hundreds of changes on four open source programs (1.7 MLOC) without breaking the programs. Automated transformations such as these can be used by developers during coding, and by maintainers to fix problems in legacy code. They can be applied on a case by case basis, or as a batch to fix the root causes behind buffer overflows, thereby improving the dependability of systems.
    Keywords: C language; public domain software; security of data; source code (software); source coding; C source code; MLOC; NIST SAMATE reference dataset; automatic C buffer overflow fixing; legacy code; million lines of code; open source programs; program transformations; security solutions; source coding; Algorithm design and analysis; Arrays; ISO standards; Libraries; Manuals; Security; buffer; dependability; overflow; security (ID#:14-3102)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903573&isnumber=6903544
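    For a flavor of what a source-to-source security transformation does, consider the classic rewrite of unbounded gets() into bounded fgets(). The Python sketch below is deliberately naive; the paper's transformations additionally reason about pointers and aliasing before rewriting, which this toy version does not.

        import re

        # Rewrite unbounded gets(buf) into bounded fgets(buf, sizeof(buf), stdin).
        # Only safe when 'buf' is a true array in scope, which a real analysis
        # must verify; this toy version applies the rewrite unconditionally.
        GETS_CALL = re.compile(r"\bgets\s*\(\s*(\w+)\s*\)")

        def fix_gets(c_source: str) -> str:
            return GETS_CALL.sub(r"fgets(\1, sizeof(\1), stdin)", c_source)

        before = 'char line[64];\ngets(line);'
        print(fix_gets(before))  # char line[64];\nfgets(line, sizeof(line), stdin);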
  • Lerner, L.W.; Franklin, Z.R.; Baumann, W.T.; Patterson, C.D., "Application-Level Autonomic Hardware to Predict and Preempt Software Attacks on Industrial Control Systems," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.136, 147, 23-26 June 2014. doi: 10.1109/DSN.2014.26 We mitigate malicious software threats to industrial control systems, not by bolstering perimeter security, but rather by using application-specific configurable hardware to monitor and possibly override software operations in real time at the lowest (I/O pin) level of a system-on-chip platform containing a microcontroller augmented with configurable logic. The process specifications, stability-preserving backup controller, and switchover logic are specified and formally verified as C code commonly used in control systems, but synthesized into hardware to resist software reconfiguration attacks. In addition, a copy of the production controller task is optionally implemented in an on-chip, isolated soft processor, connected to a model of the physical process, and accelerated to preview what the controller will attempt to do in the near future. This prediction provides greater assurance that the backup controller can be invoked before the physical process becomes unstable. Adding trusted, application-tailored, software-invisible, autonomic hardware is well-supported in a commercial system-on-chip platform.
    Keywords: industrial control; security of data; software engineering; system-on-chip; trusted computing; application-level autonomic hardware; application-tailored hardware; industrial control systems; malicious software threats; perimeter security; software attacks; software reconfiguration attacks; software-invisible hardware; system-on-chip platform; trusted hardware; Hardware; Kernel; Monitoring; Process control; Production; Security; formal analysis; hardware root-of-trust; industrial control system security; software threats (ID#:14-3103)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903574&isnumber=6903544
  • Rahman, M.A.; Al-Shaer, E.; Kavasseri, R.G., "Security Threat Analytics and Countermeasure Synthesis for Power System State Estimation," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.156, 167, 23-26 June 2014. doi: 10.1109/DSN.2014.29 State estimation plays a critically important role in ensuring the secure and reliable operation of the power grid. However, recent works have shown that the widely used weighted least squares (WLS) estimator, which uses several system wide measurements, is vulnerable to cyber attacks wherein an adversary can alter certain measurements to corrupt the estimator's solution, but evade the estimator's existing bad data detection algorithms and thus remain invisible to the system operator. Realistically, such a stealthy attack in its most general form has several constraints, particularly in terms of an adversary's knowledge and resources for achieving a desired attack outcome. In this light, we present a formal framework to systematically investigate the feasibility of stealthy attacks considering constraints of the adversary. In addition, unlike prior works, our approach allows the modeling of attacks on topology mappings, where an adversary can drastically strengthen stealthy attacks by intentionally introducing topology errors. Moreover, we show that this framework allows an operator to synthesize cost-effective countermeasures based on given resource constraints and security requirements in order to resist stealthy attacks. The proposed approach is illustrated on standard IEEE test cases.
    Keywords: energy management systems; least squares approximations; power grids; power system state estimation; security of data; topology; IEEE test cases; WLS estimator; countermeasure synthesis; data detection algorithms; power grid; power system state estimation; security threat analytics; stealthy cyber attacks; topology errors; topology mappings; weighted least squares estimator; Equations; Mathematical model; Power measurement; Security; State estimation; Topology; Transmission line measurements; False Data Injection Attack; Formal Method; Power Grid; State Estimation (ID#:14-3104)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903576&isnumber=6903544
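    The stealth condition the paper builds on can be checked numerically: an injection of the form a = Hc shifts the weighted least squares estimate by c while leaving the measurement residual, which bad-data detection inspects, untouched. A minimal NumPy sketch, assuming identity measurement weights and an arbitrary 4-measurement, 2-state system:

        import numpy as np

        rng = np.random.default_rng(0)
        H = rng.normal(size=(4, 2))              # measurement matrix
        x = np.array([1.0, -2.0])                # true state
        z = H @ x + 0.01 * rng.normal(size=4)    # noisy measurements

        def wls(H, z):
            xhat = np.linalg.lstsq(H, z, rcond=None)[0]  # WLS, identity weights
            return xhat, z - H @ xhat                    # estimate and residual

        c = np.array([0.5, 0.3])                 # attacker's desired state bias
        za = z + H @ c                           # stealthy injection: a = Hc

        (x0, r0), (x1, r1) = wls(H, z), wls(H, za)
        print(np.allclose(r0, r1))   # True: residual-based detection sees nothing
        print(x1 - x0)               # ~[0.5, 0.3]: estimate silently shifted by c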
  • Mustafa, H.; Wenyuan Xu; Sadeghi, A.R.; Schulz, S., "You Can Call but You Can't Hide: Detecting Caller ID Spoofing Attacks," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.168,179, 23-26 June 2014. doi: 10.1109/DSN.2014.102 Caller ID (caller identification) is a service provided by telephone carriers to transmit the phone number and/or the name of a caller to a callee. Today, most people trust the caller ID information, and it is increasingly used to authenticate customers (e.g., by banks or credit card companies). However, with the proliferation of smartphones and VoIP, it is easy to spoof caller ID by installing corresponding Apps on smartphones or by using fake ID providers. As telephone networks are fragmented between enterprises and countries, no mechanism is available today to easily detect such spoofing attacks. This vulnerability has already been exploited with crucial consequences such as faking caller IDs to emergency services (e.g., 9-1-1) or to commit fraud. In this paper, we propose an end-to-end caller ID verification mechanism CallerDec that works with existing combinations of landlines, cellular and VoIP networks. CallerDec can be deployed at the liberty of users, without any modification to the existing infrastructures. We implemented our scheme as an App for Android-based phones and validated the effectiveness of our solution in detecting spoofing attacks in various scenarios.
    Keywords: Android (operating system); Internet telephony; authorisation; mobile radio; smart phones; Android-based phones; CallerDec; VoIP networks; caller ID information; caller ID spoofing attacks; caller identification; cellular networks; customer authentication; emergency services; end-to-end caller ID verification mechanism; fake ID providers; landlines; smartphones; telephone networks; Authentication; Credit cards; Emergency services; Protocols; Smart phones; Timing; Caller ID Spoofing; End-user Security (ID#:14-3105)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903577&isnumber=6903544
  • Chenxiong Qian; Xiapu Luo; Yuru Shao; Chan, A.T.S., "On Tracking Information Flows through JNI in Android Applications," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.180, 191, 23-26 June 2014. doi: 10.1109/DSN.2014.30 Android provides a native development kit through JNI for developing high-performance applications (or simply apps). Although recent years have witnessed a considerable increase in the number of apps employing native libraries, only a few systems can examine them, and none scrutinizes the interactions made through JNI. In this paper, we conduct a systematic study on tracking information flows through JNI in apps. More precisely, we first perform a large-scale examination on apps using JNI and report interesting observations. Then, we identify scenarios where information flows uncaught by existing systems can result in information leakage. Based on these insights, we propose and implement NDroid, an efficient dynamic taint analysis system for checking information flows through JNI. The evaluation through real apps shows NDroid can effectively identify information leaks through JNI with low performance overheads.
    Keywords: Android (operating system); Java; Android applications; JNI; Java Native Interface; NDroid systems; high-performance applications; information flow tracking; Androids; Context; Engines; Games; Humanoid robots; Java; Libraries (ID#:14-3106)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903578&isnumber=6903544
  • Kharraz, A.; Kirda, E.; Robertson, W.; Balzarotti, D.; Francillon, A., "Optical Delusions: A Study of Malicious QR Codes in the Wild," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.192,203, 23-26 June 2014. doi: 10.1109/DSN.2014.103 QR codes, a form of 2D barcode, allow easy interaction between mobile devices and websites or printed material by removing the burden of manually typing a URL or contact information. QR codes are increasingly popular and are likely to be adopted by malware authors and cyber-criminals as well. In fact, while a link can "look" suspicious, malicious and benign QR codes cannot be distinguished by simply looking at them. However, despite public discussions about increasing use of QR codes for malicious purposes, the prevalence of malicious QR codes and the kinds of threats they pose are still unclear. In this paper, we examine attacks on the Internet that rely on QR codes. Using a crawler, we performed a large-scale experiment by analyzing QR codes across 14 million unique web pages over a ten-month period. Our results show that QR code technology is already used by attackers, for example to distribute malware or to lead users to phishing sites. However, the relatively few malicious QR codes we found in our experiments suggest that, on a global scale, the frequency of these attacks is not alarmingly high and users are rarely exposed to the threats distributed via QR codes while surfing the web.
    Keywords: Internet; Web sites; computer crime; invasive software; telecommunication security; 2D barcode; Internet; URL; Web crawler; Web sites; contact information; malicious QR code; mobile device; optical delusion; phishing sites; Crawlers; Malware; Mobile communication; Servers; Smart phones; Web pages; Mobile devices; malicious QR codes; malware; phishing (ID#:14-3107)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903579&isnumber=6903544
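    The crawl-side check reduces to decoding QR payloads from page images and screening the embedded URLs. A hedged sketch, assuming the third-party pyzbar and Pillow libraries are available, with a static blacklist as a stand-in for a real URL reputation feed:

        from PIL import Image                 # pip install pillow pyzbar (assumed)
        from pyzbar.pyzbar import decode
        from urllib.parse import urlparse

        BLACKLIST = {"evil.example.com"}      # stand-in for a URL reputation feed

        def scan_qr_image(path):
            """Decode QR payloads from an image and flag URLs with bad hosts."""
            findings = []
            for symbol in decode(Image.open(path)):
                payload = symbol.data.decode("utf-8", errors="replace")
                host = urlparse(payload).hostname or ""
                findings.append((payload, host in BLACKLIST))
            return findings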
  • Quan Jia; Huangxin Wang; Fleck, D.; Fei Li; Stavrou, A.; Powell, W., "Catch Me If You Can: A Cloud-Enabled DDoS Defense," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.264,275, 23-26 June 2014. doi: 10.1109/DSN.2014.35 We introduce a cloud-enabled defense mechanism for Internet services against network and computational Distributed Denial-of-Service (DDoS) attacks. Our approach performs selective server replication and intelligent client re-assignment, turning victim servers into moving targets for attack isolation. We introduce a novel system architecture that leverages a "shuffling" mechanism to compute the optimal re-assignment strategy for clients on attacked servers, effectively separating benign clients from even sophisticated adversaries that persistently follow the moving targets. We introduce a family of algorithms to optimize the runtime client-to-server re-assignment plans and minimize the number of shuffles to achieve attack mitigation. The proposed shuffling-based moving target mechanism enables effective attack containment using fewer resources than attack dilution strategies using pure server expansion. Our simulations and proof-of-concept prototype using Amazon EC2 [1] demonstrate that we can successfully mitigate large-scale DDoS attacks in a small number of shuffles, each of which incurs a few seconds of user-perceived latency.
    Keywords: client-server systems; cloud computing; computer network security; Amazon EC2; Internet services; attack dilution strategies; attack mitigation; client-to-server reassignment plans; cloud computing; cloud-enabled DDoS defense; computational distributed denial-of-service attacks; intelligent client reassignment; large-scale DDoS attacks; moving target mechanism; moving targets; network attacks; optimal reassignment strategy; shuffling mechanism; system architecture; turning victim servers; Cloud computing; Computer architecture; Computer crime; IP networks; Servers; Web and internet services; Cloud; DDoS; Moving Target Defense; Shuffling (ID#:14-3108)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903585&isnumber=6903544
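    The shuffling idea can be simulated in a few lines: clients on attacked replicas are randomly re-split each round, and clients landing on unattacked replicas are cleared as benign. The toy simulation below assumes 1,000 clients, 3 persistent bots, and 8 replica servers; the parameters and the uniform assignment are illustrative, not the paper's optimized re-assignment strategy.

        import random

        def shuffle_round(suspects, attackers, n_servers, rng):
            """One shuffle: randomly reassign suspect clients across replicas
            and report which replicas still receive attack traffic."""
            assignment = {c: rng.randrange(n_servers) for c in suspects}
            attacked = {assignment[c] for c in suspects if c in attackers}
            return assignment, attacked

        rng = random.Random(0)
        suspects = set(range(1000))                       # everyone starts suspect
        attackers = set(rng.sample(sorted(suspects), 3))  # 3 persistent bots
        for round_no in range(6):
            assignment, attacked = shuffle_round(suspects, attackers, 8, rng)
            # Clients landing on replicas seeing no attack are cleared as benign.
            suspects = {c for c in suspects if assignment[c] in attacked}
            print(round_no, len(suspects))    # suspect pool shrinks toward the bots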
  • Wei Zhang; Sheng Xiao; Yaping Lin; Ting Zhou; Siwang Zhou, "Secure Ranked Multi-keyword Search for Multiple Data Owners in Cloud Computing," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.276, 286, 23-26 June 2014. doi: 10.1109/DSN.2014.36 With the advent of cloud computing, it becomes increasingly popular for data owners to outsource their data to public cloud servers while allowing data users to retrieve these data. For privacy concerns, secure searches over encrypted cloud data have motivated several research efforts under the single-owner model. However, most cloud servers in practice do not serve just one owner; instead, they support multiple owners to share the benefits brought by cloud servers. In this paper, we propose schemes to deal with secure ranked multi-keyword search in a multi-owner model. To enable cloud servers to perform secure search without knowing the actual data of both keywords and trapdoors, we systematically construct a novel secure search protocol. To rank the search results and preserve the privacy of relevance scores between keywords and files, we propose a novel Additive Order and Privacy Preserving Function family. Extensive experiments on real-world datasets confirm the efficacy and efficiency of our proposed schemes.
    Keywords: cloud computing; data privacy; information retrieval; additive order function; cloud computing; data outsourcing; data owners; keywords; multi-owner model; privacy concerns; privacy preserving function; public cloud servers; ranked multi-keyword search security; relevance scores; secure search protocol; single owner model; trapdoors; Cloud computing; Data privacy; Encryption; Keyword search; Privacy; Servers; cloud computing; multiple data owners; privacy and additive order preserving; secure ranked keyword search (ID#:14-3109)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903586&isnumber=6903544
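    The paper's Additive Order and Privacy Preserving Function family is not reproduced here, but the underlying idea can be conveyed by a minimal additive order-preserving encoding: scale the relevance score by a secret factor and add bounded per-encryption noise, so encodings compare in plaintext order while equal scores encode differently.

        import secrets

        A = 1_000_000   # secret scale factor held by the data owner (illustrative)

        def encode_score(score: int) -> int:
            """Order-preserving: for integers x < y, A*x + r < A*y since r < A."""
            return A * score + secrets.randbelow(A)

        s1, s2 = encode_score(7), encode_score(9)
        assert s1 < s2                 # ranking works directly on the encodings
        # Equal scores encode differently (except with probability 1/A):
        assert encode_score(7) != encode_score(7)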
  • Xiaojing Liao; Uluagac, S.; Beyah, R.A., "S-MATCH: Verifiable Privacy-Preserving Profile Matching for Mobile Social Services," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.287, 298, 23-26 June 2014. doi: 10.1109/DSN.2014.37 Mobile social services utilize profile matching to help users find friends with similar social attributes (e.g., interests, location, background). However, privacy concerns often hinder users from enabling this functionality. In this paper, we introduce S-MATCH, a novel framework for privacy-preserving profile matching based on property-preserving encryption (PPE). First, we illustrate that PPE should not be considered secure when directly used on social attribute data due to its key-sharing problem and information leakage problem. Then, we address the aforementioned problems of applying PPE to social network data and develop an efficient and verifiable privacy-preserving profile matching scheme. We implement both the client and server portions of S-MATCH and evaluate its performance under three real-world social network datasets. The results show that S-MATCH can achieve at least one order of magnitude better computational performance than the techniques that use homomorphic encryption.
    Keywords: cryptography; data privacy; mobile computing; social networking (online); PPE; S-MATCH; homomorphic encryption; information leakage problem; key-sharing problem; mobile social services; privacy concerns; profile matching; property-preserving encryption; social attributes; social network data; verifiable privacy-preserving profile matching; Encryption; Entropy; Mobile communication; Servers; Social network services; privacy; profile matching; property-preserving encryption; symmetric encryption (ID#:14-3110)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903587&isnumber=6903544
  • Jiesheng Wei; Thomas, A.; Guanpeng Li; Pattabiraman, K., "Quantifying the Accuracy of High-Level Fault Injection Techniques for Hardware Faults," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.375, 382, 23-26 June 2014. doi: 10.1109/DSN.2014.2 Hardware errors are on the rise as feature sizes shrink; however, tolerating them in hardware is expensive. Researchers have explored software-based techniques for building error resilient applications. Many of these techniques leverage application-specific resilience characteristics to keep overheads low. Understanding application-specific resilience characteristics requires software fault-injection mechanisms that are both accurate and capable of operating at a high level of abstraction to allow developers to reason about error resilience. In this paper, we quantify the accuracy of high-level software fault injection mechanisms vis-a-vis those that operate at the assembly or machine code levels. To represent high-level injection mechanisms, we built a fault injector tool based on the LLVM compiler, called LLFI. LLFI performs fault injection at the LLVM intermediate code level of the application, which is close to the source code. We quantitatively evaluate the accuracy of LLFI with respect to assembly level fault injection, and understand the reasons for the differences.
    Keywords: program compilers; program testing; software fault tolerance; system recovery; LLFI; LLVM compiler; error resilience; fault injector tool; hardware faults; software fault-injection mechanisms; software testing; Accuracy; Assembly; Benchmark testing; Computer crashes; Hardware; Registers; Software; Fault injection; LLVM; PIN; comparison (ID#:14-3111)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903595&isnumber=6903544
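    The essence of high-level injection is a single-bit flip in an intermediate value of the running program. LLFI operates on LLVM intermediate code; the Python analogue below only illustrates the mechanism, flipping one bit of a 32-bit accumulator at a chosen dynamic instruction and checking for silent data corruption.

        import random

        def flip_bit(value: int, width: int = 32, rng=random.Random(42)) -> int:
            """Emulate a transient hardware fault: flip one random register bit."""
            return value ^ (1 << rng.randrange(width))

        def checksum(data, inject_at=5):
            acc = 0
            for i, b in enumerate(data):
                acc = (acc + b) & 0xFFFFFFFF
                if i == inject_at:            # inject at one dynamic instruction
                    acc = flip_bit(acc)
            return acc

        golden = sum(b"hello, world") & 0xFFFFFFFF
        faulty = checksum(b"hello, world")
        print(golden, faulty, "SDC" if golden != faulty else "benign")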
  • Hong, J.B.; Dong Seong Kim, "Scalable Security Models for Assessing Effectiveness of Moving Target Defenses," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.515,526, 23-26 June 2014. doi: 10.1109/DSN.2014.54 Moving Target Defense (MTD) changes the attack surface of a system that confuses intruders to thwart attacks. Various MTD techniques are developed to enhance the security of a networked system, but the effectiveness of these techniques is not well assessed. Security models (e.g., Attack Graphs (AGs)) provide formal methods of assessing security, but modeling the MTD techniques in security models has not been studied. In this paper, we incorporate the MTD techniques in security modeling and analysis using a scalable security model, namely Hierarchical Attack Representation Models (HARMs), to assess the effectiveness of the MTD techniques. In addition, we use importance measures (IMs) for scalable security analysis and deploying the MTD techniques in an effective manner. The performance comparison between the HARM and the AG is given. Also, we compare the performance of using the IMs and the exhaustive search method in simulations.
    Keywords: graph theory; security of data; HARMs; IMs; MTD; attack graphs; effectiveness assessment; exhaustive search method; hierarchical attack representation models; importance measures; moving target defenses; networked system security; scalable security models; security assessment; Analytical models; Computational modeling; Diversity methods; Internet; Linux; Measurement; Security; Attack Representation Model; Importance Measures; Moving Target Defense; Security Analysis; Security Modeling Techniques (ID#:14-3112)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903607&isnumber=6903544
  • Mason, S.; Gashi, I.; Lugini, L.; Marasco, E.; Cukic, B., "Interoperability between Fingerprint Biometric Systems: An Empirical Study," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.586,597, 23-26 June 2014. doi: 10.1109/DSN.2014.60 Fingerprints are likely the most widely used biometric in commercial as well as law enforcement applications. With the expected rapid growth of fingerprint authentication in mobile devices their importance justifies increased demands for dependability. An increasing number of new sensors, applications and a diverse user population also intensify concerns about the interoperability in fingerprint authentication. In most applications, fingerprints captured for user enrollment with one device may need to be "matched" with fingerprints captured with another device. We have performed a large-scale study with 494 participants whose fingerprints were captured with 4 different industry-standard optical fingerprint devices. We used two different image quality algorithms to evaluate fingerprint images, and then used three different matching algorithms to calculate match scores. In this paper we present a comprehensive analysis of dependability and interoperability attributes of fingerprint authentication and make empirically-supported recommendations on their deployment strategies.
    Keywords: fingerprint identification; image matching; message authentication; dependability attribute; fingerprint authentication; fingerprint biometric system; image quality algorithm; industry-standard optical fingerprint device; interoperability attribute; matching algorithm; mobile device; biometric systems; design diversity; empirical assessment; experimental results; interoperability (ID#:14-3113)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903613&isnumber=6903544
  • Hong, J.B.; Dong Seong Kim; Haqiq, A., "What Vulnerability Do We Need to Patch First?," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.684, 689, 23-26 June 2014. doi: 10.1109/DSN.2014.68 Computing a prioritized set of vulnerabilities to patch is important for system administrators to determine the order of vulnerabilities to be patched that are more critical to the network security. One way to assess and analyze security to find vulnerabilities to be patched is to use attack representation models (ARMs). However, security solutions using ARMs are optimized for only the current state of the networked system. Therefore, the ARM must reanalyze the network security, causing multiple iterations of the same task to obtain the prioritized set of vulnerabilities to patch. To address this problem, we propose to use importance measures to rank network hosts and vulnerabilities, then combine these measures to prioritize the order of vulnerabilities to be patched. We show that a nearly equivalent prioritized set of vulnerabilities can be computed in comparison to an exhaustive search method in various network scenarios, while the performance of computing the set is dramatically improved.
    Keywords: security of data; ARM; attack representation models; importance measures; network hosts; network security; networked system; prioritized set; security solutions; system administrators; vulnerability patch; Analytical models; Computational modeling; Equations; Mathematical model; Measurement; Scalability; Security; Attack Representation Model; Network Centrality; Security Analysis; Security Management; Security Metrics; Vulnerability Patch (ID#:14-3114)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903625&isnumber=6903544
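    One way to realize this kind of prioritization is to weight a per-vulnerability severity score by a network centrality measure for the host it sits on. The sketch below assumes the networkx library and a CVSS-like severity scale; the paper's exact importance measures are not reproduced, and the topology and scores are hypothetical.

        import networkx as nx   # assumed available (pip install networkx)

        # Toy network: hosts and reachability edges.
        G = nx.Graph([("web", "app"), ("app", "db"), ("web", "mail"), ("app", "mail")])

        # (host, vulnerability, severity on a CVSS-like 0-10 scale)
        vulns = [("web", "CVE-A", 9.8), ("db", "CVE-B", 7.5), ("mail", "CVE-C", 5.3)]

        centrality = nx.betweenness_centrality(G)

        # Rank patches by host importance weighted by vulnerability severity.
        ranked = sorted(vulns, key=lambda v: centrality[v[0]] * v[2], reverse=True)
        for host, cve, score in ranked:
            print(f"{cve} on {host}: priority {centrality[host] * score:.2f}")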
  • Parvania, M.; Koutsandria, G.; Muthukumary, V.; Peisert, S.; McParland, C.; Scaglione, A., "Hybrid Control Network Intrusion Detection Systems for Automated Power Distribution Systems," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.774, 779, 23-26 June 2014. doi: 10.1109/DSN.2014.81 In this paper, we describe our novel use of network intrusion detection systems (NIDS) for protecting automated distribution systems (ADS) against certain types of cyber attacks in a new way. The novelty consists of using the hybrid control environment rules and model as the baseline for what is normal and what is an anomaly, tailoring the security policies to the physical operation of the system. NIDS sensors in our architecture continuously analyze traffic in the communication medium that comes from embedded controllers, checking if the data and commands exchanged conform to the expected structure of the controllers' interactions and the evolution of the system's physical state. Considering its importance in future ADSs, we chose the fault location, isolation and service restoration (FLISR) process as our distribution automation case study for the NIDS deployment. To test our scheme, we emulated the FLISR process using real programmable logic controllers (PLCs) that interact with a simulated physical infrastructure. We used this test bed to examine the capability of our NIDS approach in several attack scenarios. The experimental analysis reveals that our approach is capable of detecting various attack scenarios, including attacks initiated within the trusted perimeter of the automation network by attackers that have complete knowledge about the communication information exchanged.
    Keywords: computer crime; control engineering computing; embedded systems; fault location; power distribution control; power distribution faults; power distribution protection; power engineering computing; power system security; programmable controllers; ADS; FLISR process; NIDS sensors; PLC; automated power distribution systems protection; automation network; communication information exchange; communication medium traffic; controllers interactions; cyber attacks; distribution automation; embedded controllers; fault location isolation and service restoration; hybrid control environment rules; hybrid control network intrusion detection systems; physical infrastructure; real programmable logic controllers; security policies; system physical operation; system physical state evolution; trusted perimeter; Circuit breakers; Circuit faults; IP networks; intrusion detection; Monitoring; Protocols; Power distribution systems; distribution automation; intrusion detection systems; network security (ID#:14-3115)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903640&isnumber=6903544
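    A flavor of the hybrid-control rules: a breaker-close command seen on the wire is legitimate only after the faulted section has been isolated. The stateful checker below is an editor's illustration; the message types and the rule itself are simplified stand-ins for the FLISR model used in the paper.

        class FlisrRuleChecker:
            """Flag control commands that violate the expected FLISR sequence."""
            def __init__(self):
                self.fault_isolated = False

            def observe(self, message: dict) -> bool:
                """Return True if the message conforms to the control model."""
                if message["type"] == "isolation_confirmed":
                    self.fault_isolated = True
                    return True
                if message["type"] == "close_breaker":
                    # Restoration may only follow isolation of the faulted section.
                    return self.fault_isolated
                return True

        ids = FlisrRuleChecker()
        print(ids.observe({"type": "close_breaker", "id": 7}))        # False: anomaly
        print(ids.observe({"type": "isolation_confirmed", "id": 3}))  # True
        print(ids.observe({"type": "close_breaker", "id": 7}))        # True: legitimate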
  • Gibson, T.; Ciraci, S.; Sharma, P.; Allwardt, C.; Rice, M.; Akyol, B., "An Integrated Security Framework for GOSS Power Grid Analytics Platform," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.786, 791, 23-26 June 2014. doi: 10.1109/DSN.2014.106 In power grid operations, security is an essential component for any middleware platform. Security protects data against unwanted access as well as cyber attacks. GridOptics™ Software System (GOSS) is an open source power grid analytics platform that facilitates ease of access between applications and data sources and promotes development of advanced analytical applications. GOSS contains an API that abstracts many of the difficulties in connecting to various heterogeneous data sources. A number of applications and data sources have already been implemented to demonstrate functionality and ease of use. A security framework has been implemented which leverages widely accepted, robust Java™ security tools in a way such that they can be interchanged as needed. This framework supports the complex fine-grained access control rules identified for the diverse data sources already in GOSS. Performance and reliability are also important considerations in any power grid architecture. An evaluation is done to determine the overhead cost caused by security within GOSS and to ensure minimal impact on performance.
    Keywords: Java; application program interfaces; authorisation; middleware; power grids; power system analysis computing; public domain software; API; GOSS power grid analytics platform; GridOptics software system; Java security tools; complex fine-grained access control rules; cyber attacks; integrated security framework; middleware platform; open source power grid analytics platform; power grid architecture; power grid operations; Authentication; Authorization; Organizations; Phasor measurement units; Power grids; jaas; middleware; pmu; power grid; security; smartgrid (ID#:14-3116)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903642&isnumber=6903544
  • Zhiyuan Teo; Kutsenko, V.; Birman, K.; van Renesse, R., "IronStack: Performance, Stability and Security for Power Grid Data Networks," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.792, 797, 23-26 June 2014. doi: 10.1109/DSN.2014.83 Operators of the nationwide power grid use proprietary data networks to monitor and manage their power distribution systems. These purpose-built, wide area communication networks connect a complex array of equipment ranging from PMUs and synchrophasors to SCADA systems. Collectively, this equipment forms part of an intricate feedback system that ensures the stability of the power grid. In support of this mission, the operational requirements of these networks mandate high performance, reliability, and security. We designed IronStack, a system to address these concerns. By using cutting-edge software defined networking technology, IronStack is able to use multiple network paths to improve communications bandwidth and latency, provide seamless failure recovery, and ensure signal security. Additionally, IronStack is incrementally deployable and backward-compatible with existing switching infrastructure.
    Keywords: SCADA systems; computer network performance evaluation; computer network security; feedback; power distribution; power grids; IronStack; PMU; SCADA systems; communication bandwidth; communication latency; cutting-edge software defined networking technology; failure recovery; feedback system; power distribution systems; power grid data network security; power grid data network stability; proprietary data networks; switching infrastructure; synchrophasors; wide area communication networks; Bandwidth; Power grids; Process control; Redundancy; Security; Software; Switches; SDNs; high-assurance computing; network performance; security; software-defined networking (ID#:14-3117)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903643&isnumber=6903544

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


International Science of Security Research: China Communications 2013

China Communications 2013


In this bibliographical selection, we look at science of security research issues that highlight a specific series of international conferences and the IEEE journals that have come out of them, rather than at keywords. This inaugural set is from China Communications, an English-language technical journal published by the China Institute of Communications, with the stated objective of providing a global academic exchange platform for the information and communications technologies sector. The research cited is security research published in 2013.

  • He Defang; Pan Yuntao; Ma Zheng; Wang Jingting, "Sustainable Growth In China's Communications Field: Trend Analysis Of Impact Of China's Academic Publications," Communications, China, vol.10, no.3, pp.157, 163, March 2013. doi: 10.1109/CC.2013.6488844 China's communications industry is an important part of the electronic information industry, and plays a significant role in the national informatization process. In 2006, China issued its National Plans for Medium and Long-term Development of Science and Technology (2006-2020) (NPMLDST). Since 2006, there has been a rapid increase in the number of citations of China's international papers in the field of communications. In accordance with the goals listed in the NPMLDST, China needs to overtake several competitors by 2020 to be among the top five countries in the field of natural science. By comparing two Essential Science Indicators (ESI) (i.e., the total number of citations and the number of citations per paper) for China and other countries, China's annual growth rate is found to exceed that of other influential countries in the field of science and technology, and exhibits evident growth-type characteristics. Our study also shows that the shortage of high-quality academic papers in China is the main obstacle to improving the impact of China's academic publications.
    Keywords: citation analysis; publishing; ESI; NPMLDST; National Plans for Medium and Long-term Development of Science and Technology; academic papers; academic publications; communications field; electronic information industry; essential science indicators; growth-type characteristics; national informatization process; sustainable growth; Bibliometrics; Communication industry; Market research; Mobile communication; Publishing; Technological innovation; China's communications field; Essential Science Indicators; academic publications; citations; growth trend (ID#:14-2904)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6488844&isnumber=6488803
  • Yi, Chengqi; Bao, Yuanyuan; Jiang, Jingchi; Xue, Yibo, "Mitigation Strategy Against Cascading Failures On Social Networks," Communications, China, vol.11, no.8, pp.37, 46, Aug. 2014. doi: 10.1109/CC.2014.6911086 Cascading failures are common phenomena in many real-world networks, such as power grids, the Internet, transportation networks and social networks. It is worth noting that once one or a few users on a social network become unavailable for some reason, they are likely to influence a large portion of the social network. Therefore, an effective mitigation strategy is critical for avoiding or reducing the impact of cascading failures. In this paper, we first quantify the user loads and construct the processes of cascading dynamics, then elaborate a more reasonable mechanism for sharing the extra user loads that considers the features of social networks, and further propose a novel mitigation strategy against cascading failures on social networks. Based on real-world social network datasets, we evaluate the effectiveness and efficiency of the novel mitigation strategy. The experimental results show that this mitigation strategy can reduce the impact of cascading failures effectively and maintain network connectivity better at lower cost. These findings are very useful for rational advertising and may be helpful for avoiding various disasters caused by cascading failures on many real-world networks.
    Keywords: Educational institutions; Facebook; Power system dynamics; Power system faults; Power system protection; Twitter; betweenness centrality; cascading dynamics; cascading failures; mitigation strategy; social networks (ID#:14-2905)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911086&isnumber=6911078
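    Models of this kind typically share a failed node's load among its live neighbours, with overloaded neighbours failing in turn. The sketch below, which shares load in proportion to spare capacity over an illustrative three-node chain, conveys the dynamics but is not the paper's load-sharing mechanism.

        def cascade(load, capacity, neighbors, start):
            """Propagate a failure: a failed node's load is shared among live
            neighbours in proportion to spare capacity; overloads fail in turn."""
            failed, queue = set(), [start]
            while queue:
                node = queue.pop()
                if node in failed:
                    continue
                failed.add(node)
                alive = [n for n in neighbors[node]
                         if n not in failed and n not in queue]
                spare = {n: max(capacity[n] - load[n], 0.0) for n in alive}
                total = sum(spare.values())
                for n in alive:
                    share = spare[n] / total if total else 1 / len(alive)
                    load[n] += load[node] * share
                    if load[n] > capacity[n]:
                        queue.append(n)
                load[node] = 0.0
            return failed

        load      = {"a": 8.0, "b": 4.0, "c": 2.0}
        capacity  = {"a": 9.0, "b": 6.0, "c": 15.0}
        neighbors = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
        print(cascade(load, capacity, neighbors, "a"))  # {'a', 'b'}: c absorbs the rest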
  • Guoyuan Lin; Danru Wang; Yuyu Bie; Min Lei, "MTBAC: A Mutual Trust Based Access Control Model In Cloud Computing," Communications, China, vol.11, no.4, pp.154, 162, April 2014. doi: 10.1109/CC.2014.6827577 As a new computing mode, cloud computing can provide users with virtualized and scalable web services, which, however, face serious security challenges. Access control is one of the most important measures to ensure the security of cloud computing. However, directly applying traditional access control models to the Cloud cannot solve the uncertainty and vulnerability caused by the open conditions of cloud computing. In a cloud computing environment, data security can be effectively guaranteed during interactions between users and the Cloud only when the security and reliability of both interacting parties are ensured. Therefore, building a mutual trust relationship between users and the cloud platform is the key to implementing new kinds of access control methods in a cloud computing environment. Combining this with Trust Management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both the user's behavioral trust and the cloud service node's credibility into consideration. Trust relationships between users and cloud service nodes are established by a mutual trust mechanism. Security problems of access control are solved by implementing the MTBAC model in a cloud computing environment. Simulation experiments show that the MTBAC model can guarantee secure interaction between users and cloud service nodes.
    Keywords: Web services; authorisation; cloud computing; virtualisation; MTBAC model; cloud computing environment; cloud computing security; cloud service node credibility; data security; mutual trust based access control model; mutual trust mechanism; mutual trust relationship; open conditions; scalable Web services; trust management; user behavior trust; virtualized Web services; Computational modeling; Reliability; Time-frequency analysis; MTBAC; access control; cloud computing; mutual trust mechanism; trust model (ID#:14-2906)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827577&isnumber=6827540
  • Huang Qinlong; Ma Zhaofeng; Yang Yixian; Niu Xinxin; Fu Jingyi, "Improving Security And Efficiency For Encrypted Data Sharing In Online Social Networks," Communications, China, vol.11, no.3, pp.104, 117, March 2014. doi: 10.1109/CC.2014.6825263 Although existing data sharing systems in online social networks (OSNs) propose to encrypt data before sharing, multiparty access control of encrypted data remains a challenging issue. In this paper, we propose a secure data sharing scheme in OSNs based on ciphertext-policy attribute-based proxy re-encryption and secret sharing. In order to protect users' sensitive data, our scheme allows users to customize access policies of their data and then outsource encrypted data to the OSNs service provider. Our scheme presents a multiparty access control model, which enables the disseminator to update the access policy of ciphertext if their attributes satisfy the existing access policy. Further, we present a partial decryption construction in which the computation overhead of the user is largely reduced by delegating most of the decryption operations to the OSNs service provider. We also provide checkability on the results returned from the OSNs service provider to guarantee the correctness of partially decrypted ciphertext. Moreover, our scheme presents an efficient attribute revocation method that achieves both forward and backward secrecy. The security and performance analysis results indicate that the proposed scheme is secure and efficient in OSNs.
    Keywords: authorization; cryptography; social networking (online); attribute-based proxy re-encryption; ciphertext policy; data security; decryption operations; encrypted data sharing efficiency; multiparty access control model; online social networks; secret sharing; secure data sharing; Access control; Amplitude shift keying; Data sharing; Encryption; Social network services; attribute revocation; attribute-based encryption; data sharing; multiparty access control; online social networks (ID#:14-2907)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825263&isnumber=6825249
  • Huifang, Chen; Lei, Xie; Xiong, Ni, "Reputation-Based Hierarchically Cooperative Spectrum Sensing Scheme In Cognitive Radio Networks," Communications, China, vol.11, no. 1, pp. 12, 25, Jan. 2014. doi: 10.1109/CC.2014.6821304 Cooperative spectrum sensing in cognitive radio is investigated to improve the detection performance of Primary User (PU). Meanwhile, cluster-based hierarchical cooperation is introduced for reducing the overhead as well as maintaining a certain level of sensing performance. However, in existing hierarchically cooperative spectrum sensing algorithms, the robustness problem of the system is seldom considered. In this paper, we propose a reputation-based hierarchically cooperative spectrum sensing scheme in Cognitive Radio Networks (CRNs). Before spectrum sensing, clusters are grouped based on the location correlation coefficients of Secondary Users (SUs). In the proposed scheme, there are two levels of cooperation, the first one is performed within a cluster and the second one is carried out among clusters. With the reputation mechanism and modified MAJORITY rule in the second level cooperation, the proposed scheme can not only relieve the influence of the shadowing, but also eliminate the impact of the PU emulation attack on a relatively large scale. Simulation results show that, in the scenarios with deep-shadowing or multiple attacked SUs, our proposed scheme achieves a better tradeoff between the system robustness and the energy saving compared with those conventionally cooperative sensing schemes.
    Keywords: Clustering methods; Cognitive radio; Correlation; Correlation coefficient; Robustness; Shadow mapping; Spread spectrum management; cluster; cognitive radio networks; cooperative spectrum sensing; location correlation; reputation (ID#:14-2908)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821304&isnumber=6821299
  • Cao Wanpeng; Bi Wei, "Adaptive And Dynamic Mobile Phone Data Encryption Method," Communications, China, vol.11, no.1, pp.103, 109, Jan. 2014. doi: 10.1109/CC.2014.6821312 To enhance the security of user data in the clouds, we present an adaptive and dynamic data encryption method to encrypt user data in the mobile phone before it is uploaded. Firstly, the adopted data encryption algorithm is not static and uniform. For each encryption, this algorithm is adaptively and dynamically selected from the algorithm set in the mobile phone encryption system. The encryption algorithm selection strategy is determined by the user's mobile phone hardware information, personalization information, and a pseudo-random number. Secondly, the data is rearranged with a randomly selected start position in the data before being encrypted. The randomness of the start position makes mobile phone data encryption safer. Thirdly, the rearranged data is encrypted by the selected algorithm and a generated key. Finally, analysis shows that this method possesses higher security because more dynamics and randomness are adaptively added into the encryption process.
    Keywords: cloud computing; cryptography; data protection; mobile computing; mobile handsets; random functions; detail encryption algorithm selection strategy; mobile phone data encryption method; mobile phone encryption system; mobile phone hardware information; personalization information; pseudorandom number; user data security; Encryption; Heuristic algorithms; Mobile communication; Mobile handsets; Network security; cloud storage; data encryption; mobile phone; pseudo-random number (ID#:14-2909)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821312&isnumber=6821299
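    The selection-and-rearrangement idea can be sketched directly: derive an algorithm index from device and personalization information plus a fresh random value, rotate the plaintext from a random start position, then encrypt. In the illustration below the SHA-256 keystream cipher is a toy stand-in for a real algorithm set, and all names and parameters are hypothetical.

        import hashlib, secrets

        def keystream_cipher(label):
            """Toy stand-in for a real cipher: SHA-256 counter-mode keystream XOR."""
            def encrypt(key: bytes, data: bytes) -> bytes:
                out, counter = bytearray(), 0
                while len(out) < len(data):
                    out += hashlib.sha256(key + label +
                                          counter.to_bytes(4, "big")).digest()
                    counter += 1
                return bytes(a ^ b for a, b in zip(data, out))
            return encrypt

        ALGORITHMS = [keystream_cipher(b"alg-%d" % i) for i in range(4)]

        def adaptive_encrypt(key, data, hw_info: bytes, user_info: bytes):
            nonce = secrets.token_bytes(8)
            # Algorithm choice depends on device traits plus a fresh random value.
            idx = hashlib.sha256(hw_info + user_info + nonce).digest()[0] % len(ALGORITHMS)
            start = secrets.randbelow(len(data))       # random rearrangement offset
            rearranged = data[start:] + data[:start]
            return nonce, idx, start, ALGORITHMS[idx](key, rearranged)

        nonce, idx, start, ct = adaptive_encrypt(b"k" * 16, b"secret payload",
                                                 b"hw:serial-123", b"user:alice")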
  • Shang Tao; Pei Hengli; Liu Jianwei, "Secure Network Coding Based On Lattice Signature," Communications, China, vol.11, no.1, pp.138, 151, Jan. 2014. doi: 10.1109/CC.2014.6821316 To provide a high-security guarantee for network coding and lower the computing complexity induced by the signature scheme, we take full advantage of the homomorphic property to build lattice signature schemes and secure network coding algorithms. Firstly, by means of the distance between the message and its signature in a lattice, we propose a Distance-based Secure Network Coding (DSNC) algorithm and reduce its security to a new hard problem, the Fixed Length Vector Problem (FLVP), which is harder than the Shortest Vector Problem (SVP) on lattices. Secondly, considering the boundary on the distance between the message and its signature, we further propose an efficient Boundary-based Secure Network Coding (BSNC) algorithm to reduce the computing complexity induced by square calculation in DSNC. Simulation results and security analysis show that the proposed signature schemes have stronger unforgeability, due to the natural property of lattices, than the traditional Rivest-Shamir-Adleman (RSA)-based signature scheme. The DSNC algorithm is more secure, and the BSNC algorithm greatly reduces the time cost of computation.
    Keywords: computational complexity; digital signatures; network coding; telecommunication security; BSNC; DSNC; FLVP; boundary-based secure network coding; computing complexity; distance-based secure network coding; fixed length vector problem; hard problem; high-security guarantee; homomorphic property; lattice signature; signature scheme; Algorithm design and analysis; Cryptography; Lattices; Network coding; Network security; fixed length vector problem; lattice signature; pollution attack; secure network coding (ID#:14-2910)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821316&isnumber=6821299
  • Jingzheng, Huang; Zhenqiang, Yin; Wei, Chen; Shuang, Wang; Hongwei, Li; Guangcan, Guo; Zhengfu, Han, "A Survey On Device-Independent Quantum Communications," Communications, China, vol.10, no.2, pp.1,10, Feb. 2013. doi: 10.1109/CC.2013.6472853 Quantum communications helps us to enhance the security and efficiency of communications and to deepen our understanding of quantum physics. Its rapid development in recent years has attracted the interest of researchers from diverse fields such as physics, mathematics, and computer science. We review the background and current state of quantum communications technology, with an emphasis on quantum key distribution, quantum random number generation, and a relatively hot topic: device independent protocols.
    Keywords: Cryptography; Detectors; Hilbert space; Network security; Photonics; Protocols; Quantum communications; device-independent; quantum communications; quantum key distribution (ID#:14-2911)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6472853&isnumber=6472848
  • Li, Yang; Chong, Xiang; Bao, Li, "Quantum Probabilistic Encryption Scheme Based On Conjugate Coding," Communications, China, vol.10, no.2, pp.19,26, Feb. 2013. doi: 10.1109/CC.2013.6472855 We present a quantum probabilistic encryption algorithm for a private-key encryption scheme based on conjugate coding of the qubit string. A probabilistic encryption algorithm is generally adopted in public-key encryption protocols. Here we consider the way it increases the unicity distance of both classical and quantum private-key encryption schemes. The security of quantum probabilistic private-key encryption schemes against two kinds of attacks is analyzed. By using the no-signalling postulate, we show that the scheme can resist attack to the key. The scheme's security against plaintext attack is also investigated by considering the information-theoretic indistinguishability of the encryption scheme. Finally, we make a conjecture regarding Breidbart's attack.
    Keywords: Cryptography; Encoding; Encryption; Private key encryption; Probabilistic logic; Public key; Quantum communications; information-theoretic indistinguishability; probabilistic encryption; quantum cryptography (ID#:14-2912)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6472855&isnumber=6472848
  • Zhi, Ma; Riguang, Leng; Zhengchao, Wei; Shuqin, Zhong, "Constructing Non-Binary Asymmetric Quantum Codes Via Graphs," Communications, China, vol.10, no.2, pp.33,41, Feb. 2013. doi: 10.1109/CC.2013.6472857 The theory of quantum error correcting codes is a primary tool for fighting decoherence and other quantum noise in quantum communication and quantum computation. Recently, the theory of quantum error correcting codes has developed rapidly and been extended to protect quantum information over asymmetric quantum channels, in which phase-shift and qubit-flip errors occur with different probabilities. In this paper, we generalize the construction of symmetric quantum codes via graphs (or matrices) to the asymmetric case, converting the construction of asymmetric quantum codes to finding matrices with some special properties. We also propose some asymmetric quantum Maximal Distance Separable (MDS) codes as examples constructed in this way.
    Keywords: Cryptography; Matrix converters; Measurement; Quantum communications; Quantum computing; Quantum mechanics; Symmetric matrices; asymmetric quantum codes; graph construction; quantum MDS codes (ID#:14-2913)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6472857&isnumber=6472848
  • Liaojun, Pang; Huixian, Li; Qingqi, Pei; Nengbin, Liu; Yumin, Wang, "Fair Data Collection Scheme In Wireless Sensor Networks," Communications, China , vol.10, no.2, pp.112,120, Feb. 2013. doi: 10.1109/CC.2013.6472863 To solve the slow congestion detection and rate convergence problems in the existing rate control based fair data collection schemes, a new fair data collection scheme is proposed, which is named the improved scheme with fairness or ISWF for short. In ISWF, a quick congestion detection method, which combines the queue length with traffic changes of a node, is used to solve the slow congestion detection problem, and a new solution, which adjusts the rate of sending data of a node by monitoring the channel utilization rate, is used to solve the slow convergence problem. At the same time, the probability selection method is used in ISWF to achieve the fairness of channel bandwidth utilization. Experiment and simulation results show that ISWF can effectively reduce the reaction time in detecting congestion and shorten the rate convergence process. Compared with the existing tree-based fair data collection schemes, ISWF can achieve better fairness in data collection and reduce the transmission delay effectively, and at the same time, it can increase the average network throughput by 9.1% or more.
    Keywords: Bandwidth; Congestion control; Data collection; Data communication; Interference; Telecommunication traffic; Throughput; Wireless sensor networks; congestion detection; fairness; probability selection; rate control; wireless sensor networks (ID#:14-2914)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6472863&isnumber=6472848
  • Xiaoyun, Chen; Yujie, Su; Xiaosheng, Tang; Xiaohong, Huang; Yan, Ma, "On Measuring The Privacy Of Anonymized Data In Multiparty Network Data Sharing," Communications, China , vol.10, no.5, pp.120,127, May 2013. doi: 10.1109/CC.2013.6520944 This paper aims to find a practical way of quantitatively representing the privacy of network data. A method of quantifying the privacy of network data anonymization based on similarity distance and entropy in the scenario involving multiparty network data sharing with Trusted Third Party (TTP) is proposed. Simulations are then conducted using network data from different sources, and show that the measurement indicators defined in this paper can adequately quantify the privacy of the network. In particular, it can indicate the effect of the auxiliary information of the adversary on privacy.
    Keywords: Data privacy; Entropy; IP networks; Ports (Computers); Privacy; Probability distribution; Workstations; multiparty network data sharing; network data anonymization; privacy (ID#:14-2915)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6520944&isnumber=6520928
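    An entropy-based privacy indicator in this spirit can be computed as the adversary's average uncertainty about the original value, given each observed anonymized value. The sketch below illustrates the general idea only, not the specific indicators defined in the paper; the records are hypothetical.

        import math
        from collections import defaultdict, Counter

        def anonymity_entropy(pairs):
            """Mean adversary uncertainty (bits) about the original value given
            each observed anonymized value; higher means better privacy."""
            buckets = defaultdict(list)
            for original, anonymized in pairs:
                buckets[anonymized].append(original)
            def h(vals):
                n, counts = len(vals), Counter(vals)
                return -sum((c / n) * math.log2(c / n) for c in counts.values())
            return sum(h(v) for v in buckets.values()) / len(buckets)

        strong = [("10.0.0.%d" % i, "10.0.0.0/28") for i in range(16)]
        weak   = [("10.0.0.%d" % i, "10.0.0.%d" % i) for i in range(16)]
        print(anonymity_entropy(strong))  # 4.0 bits: 16 equally likely originals
        print(anonymity_entropy(weak))    # 0.0 bits: anonymization reveals everything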
  • Lu Gang; Zhang Hongli; Zhang Yu; Qassrawi, M.T.; Yu Xiangzhan; Peng Lizhi, "Automatically Mining Application Signatures For Lightweight Deep Packet Inspection," Communications, China, vol.10, no.6, pp. 86, 99, June 2013. doi: 10.1109/CC.2013.6549262 Automatic signature generation approaches have been widely applied in recent traffic classification. However, they are not suitable for LightWeight Deep Packet Inspection (LW_DPI) since their generated signatures are matched through a search of the entire application data. On the basis of LW_DPI schemes, we present two Hierarchical Clustering (HC) algorithms: HC_TCP and HC_UDP, which can generate byte signatures from TCP and UDP packet payloads respectively. In particular, HC_TCP and HC_ UDP can extract the positions of byte signatures in packet payloads. Further, in order to deal with the case in which byte signatures cannot be derived, we develop an algorithm for generating bit signatures. Compared with the LASER algorithm and Suffix Tree (ST)-based algorithm, the proposed algorithms are better in terms of both classification accuracy and speed. Moreover, the experimental results indicate that, as long as the application-protocol header exists, it is possible to automatically derive reliable and accurate signatures combined with their positions in packet payloads.
    Keywords: Internet; data mining; inspection; telecommunication traffic; transport protocols; HC_TCP; HC_UDP; LASER algorithm; LW_DPI; application protocol header; application signatures; automatic signature generation; byte signatures; classification accuracy; hierarchical clustering; lightweight deep packet inspection; packet payloads; traffic classification; Classification algorithms; Clustering algorithms; Machine learning algorithms; Payloads; Ports (Computers); Telecommunication traffic; Training; LW_DPI; association mining; automatic signature generation; hierarchical clustering; traffic classification (ID#:14-2916)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6549262&isnumber=6549247
  • Wang Li; Ma Xin; Ma Yue; Teng Yinglei; Zhang Yong, "Security-oriented Transmission Based On Cooperative Relays In Cognitive Radio," Communications, China, vol.10, no.8, pp.27,35, Aug. 2013. doi: 10.1109/CC.2013.6633742 In this paper, we propose a security-oriented transmission scheme with the help of multiple relays in Cognitive Radio (CR). To maximise the Secrecy Capacity (SC) of the source-destination link in CR, both beamforming and cooperative jamming technologies are used to improve the performance of the Secondary User (SU) and protect the Primary User (PU). The effectiveness of the proposed scheme is demonstrated using extensive simulation. Both theoretical analyses and simulation results reveal that the proposed scheme contributes to the secure transmission of the SU with acceptable attenuation of the Signal-to-Noise Ratio (SNR) at the PU receiver, and that the upper bound of the SC at the SU receiver can be reached by exploiting the power allocation strategy.
    Keywords: array signal processing; cognitive radio; cooperative communication; jamming; relay networks (telecommunication);resource allocation; telecommunication security; SNR; SU receiver; beamforming; cognitive radio; cooperative jamming; cooperative relays; power allocation strategy; primary user; secondary user; secrecy capacity; security-oriented transmission scheme; signal-to-noise ratio; source-destination link; Interference; Jamming; Network security; Receivers; Relays; Resource management ;Signal to noise ratio; CR; SC; acceptable SNR attenuation level; power allocation (ID#:14-2917)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6633742&isnumber=6633733
  • Liu Guangjun; Wang Bin, "Secure Network Coding Against Intra/Inter-Generation Pollution Attacks," Communications, China, vol.10, no.8, pp.100, 110, Aug. 2013. doi: 10.1109/CC.2013.6633749 By allowing routers to combine the received packets before forwarding them, network coding-based applications are susceptible to possible malicious pollution attacks. Existing solutions for counteracting this issue either incur inter-generation pollution attacks (among multiple generations) or suffer high computation/bandwidth overhead. Using a dynamic public key technique, we propose a novel homomorphic signature scheme for network coding for each generation authentication without updating the initial secret key used. As per this idea, the secret key is scrambled for each generation by using the generation identifier, and each packet can be fast signed using the scrambled secret key for the generation to which the packet belongs. The scheme not only can resist intra-generation pollution attacks effectively but also can efficiently prevent inter-generation pollution attacks. Further, the communication overhead of the scheme is small and independent of the size of the transmitting files.
    Keywords: authorisation; network coding; public key cryptography; telecommunication security; communication overhead; dynamic public key technique; generation authentication; generation identifier; homomorphic signature ;inter-generation pollution attacks; intra-generation pollution attacks; malicious pollution attacks; multiple generations; received packets; scrambled secret key; secure network coding; Authentication; Computer viruses; Network coding; Network security; Public key; authentication; homomorphic cryptography; homomorphic signature; network coding; pollution attacks (ID#:14-2918)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6633749&isnumber=6633733
  • Zhou Conghua; Cao Meiling, "Analysis of Fast And Secure Protocol Based On Continuous-Time Markov Chain," Communications, China, vol.10, no.8, pp.137,149, Aug. 2013. doi: 10.1109/CC.2013.6633752 To provide an optimal alternative to traditional Transmission Control Protocol (TCP)-based transport technologies, Aspera's Fast and Secure Protocol (FASP) is proposed as an innovative bulk data transport technology. To accurately analyse the reliability and rapidness of FASP, an automated formal technique, probabilistic model checking, is used for formally analysing FASP in this paper. First, FASP's transmission process is decomposed into three modules: the Sender, the Receiver and the transmission Channel. Each module is then modelled as a Continuous-Time Markov Chain (CTMC). Second, the reward structure for CTMC is introduced so that the reliability and rapidness can be specified with Continuous-time Stochastic Logic (CSL). Finally, the probabilistic model checker PRISM is used for analysing the impact of different parameters on the reliability and rapidness of FASP. The probability that the Sender finishes sending data and the Receiver successfully receives data is always 1, which indicates that FASP can transport data reliably. The result that FASP takes approximately 10 s to transfer a 1 GB file irrespective of the network configuration shows that FASP can transport data very quickly. Further, a comparison of throughput between FASP and TCP under various latency and packet loss conditions shows that FASP's throughput is independent of network delays and robust to extreme packet loss.
    Keywords: Markov processes; formal verification; probability; telecommunication network reliability; transport protocols; automated formal technique; continuous time Markov chain; continuous time stochastic logic; fast and secure protocol; innovative bulky data transport technology; network delays; packet loss conditions; probabilistic model checking; transmission control protocol; Markov processes; Model checking; Packet loss; Probabilistic logic; Protocols; Reliability; Throughput; CTMC; FASP; PRISM; probabilistic model checking (ID#:14-2919)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6633752&isnumber=6633733
  • Shi Wenhua; Zhang Xiaohang; Gong Xue; Lv Tingjie, "Identifying Fake And Potential Corporate Members In Telecommunications Operators," Communications, China, vol.10, no.8, pp.150,157, Aug. 2013. doi: 10.1109/CC.2013.6633753 Nowadays, mobile operators in mainland China face fierce competition with one another, and the focus of customer competition has, in general, shifted from public to corporate customers. One big challenge in corporate customer management is how to identify fake corporate members and potential corporate members among corporate customers. In this study, we propose an identification method that combines rule-based and probabilistic methods. Through this method, fake corporate members can be eliminated and external potential members can be mined. Experimental results based on data obtained from a local mobile operator reveal that the proposed method can effectively and efficiently identify fake and potential corporate members, and can be used to improve the management of corporate customers.
    Keywords: customer relationship management; identification; knowledge based systems; probability ;telecommunication industry; telecommunication network management; China mainland; corporate customer management; customer competition; fake corporate members; identification method; mobile operators; potential corporate members; probabilistic methods; public customers; rule-based methods; Base stations; Consumer behavior; Customer profiles; Information technology; Mobile communication; Probabilistic logic; Telecommunication services; corporate customer; fake-member identification; kernel density estimation; rule-based method (ID#:14-2920)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6633753&isnumber=6633733
  • Wang Houtian; Zhang Qi; Xin Xiangjun; Tao Ying; Liu Naijin, "Cross-layer Design And Ant-Colony Optimization Based Routing Algorithm For Low Earth Orbit Satellite Networks," Communications, China, vol.10, no.10, pp.37,46, Oct. 2013. doi: 10.1109/CC.2013.6650318 To improve the robustness of Low Earth Orbit (LEO) satellite networks and realise load balancing, a Cross-layer design and Ant-colony optimization based Load-balancing routing algorithm for LEO Satellite Networks (CAL-LSN) is proposed in this paper. In CAL-LSN, mobile agents are used to gather routing information actively. CAL-LSN can utilise information from the physical layer to make routing decisions during the route construction phase. In order to achieve load balancing, CAL-LSN makes use of a multi-objective optimization model. Meanwhile, the choice of several key parameter values is discussed in the design of the algorithm so as to improve reliability. The performance is measured by the packet delivery rate, the end-to-end delay, the link utilization and the delay jitter. Simulation results show that CAL-LSN performs well in balancing traffic load and increasing the packet delivery rate, while the end-to-end delay and delay jitter performance can meet the requirements of video transmission.
    Keywords: ant colony optimisation; delays; jitter; resource allocation; satellite links; telecommunication network reliability; telecommunication network routing; video communication; CAL-LSN; LEO satellite; ant-colony optimization; cross-layer design; delay jitter performance; end-to-end delay; link utilization; low earth orbit satellite network; mobile agent; multiobjective optimization model; packet delivery rate; reliability; robustness; traffic load-balancing routing algorithm; video transmission; Algorithm design and analysis; Delays; Low earth orbit satellites; Optimization; Routing; Satellite broadcasting; LEO satellite networks; Quality of Service; ant-colony algorithm; cross-layer design; load balancing (ID#:14-2921)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6650318&isnumber=6650308
  • Fang Enbo; Han Caiyun; Liu Jiayong, "Auto-aligned Sharing Fuzzy Fingerprint Vault," Communications, China, vol.10, no.10, pp.145, 154, Oct. 2013. doi: 10.1109/CC.2013.6650327 Recently, a cryptographic construct, called fuzzy vault, has been proposed for crypto-biometric systems, and some implementations for fingerprint have been reported to protect the stored fingerprint template by hiding the fingerprint features. However, all previous studies assumed that fingerprint features were pre-aligned, and automatic alignment in the fuzzy vault domain is a challenging issue. In this paper, an auto-aligned sharing fuzzy fingerprint vault based on a geometric hashing technique is proposed to address automatic alignment in the multiple-control fuzzy vault with a compartmented structure. The vulnerability analysis and experimental results indicate that, compared with original multiple-control fuzzy vault, the auto-aligned sharing fuzzy fingerprint vault can improve the security of the system.
    Keywords: cryptography; fingerprint identification; image matching; auto-aligned sharing fuzzy fingerprint vault; automatic alignment; compartmented structure; crypto-biometric systems; cryptographic construct; fingerprint features; geometric hashing technique; multiple-control fuzzy vault; stored fingerprint template; vulnerability analysis; Authentication; Bioinformatics; Biometrics (access control); Cryptography; Fingerprint recognition; auto-aligned sharing fuzzy fingerprint vault; biometrics ;fingerprint; geometric hashing (ID#:14-2922)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6650327&isnumber=6650308
  • Qi Yanfeng; Tang Chunming; Lou Yu; Xu Maozhi; Guo Baoan, "Certificateless Proxy Identity-Based Signcryption Scheme Without Bilinear Pairings," Communications, China, vol.10, no.11, pp.37, 41, Nov. 2013. doi: 10.1109/CC.2013.6674208 Signcryption, which was introduced by ZHENG, is a cryptographic primitive that fulfils the functions of both digital signature and encryption and guarantees confidentiality, integrity and non-repudiation in a more efficient way. Certificateless signcryption and proxy signcryption in identity-based cryptography were proposed for different applications. Most of these schemes are constructed by bilinear pairings from elliptic curves. However, some schemes were recently presented without pairings. In this paper, we present a certificateless proxy identity-based signcryption scheme without bilinear pairings, which is efficient and secure.
    Keywords: digital signatures; public key cryptography; certificateless proxy identity-based signcryption scheme; confidentiality; cryptographic primitive; digital signature; elliptic curve discrete logarithm problem; encryption; identity-based cryptography; integrity; nonrepudiation; Elliptic curve cryptography; Elliptic curves; Information security; certificateless signcryption; elliptic curve discrete logarithm problem; identity-based cryptography; proxy signcryption (ID#:14-2923)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6674208&isnumber=6674198
  • Zou Futai; Zhang Siyu; Rao Weixiong, "Hybrid Detection And Tracking Of Fast-Flux Botnet On Domain Name System Traffic," Communications, China, vol.10, no.11, pp.81,94, Nov. 2013. doi: 10.1109/CC.2013.6674213 Fast-flux is a Domain Name System (DNS) technique used by botnets to organise compromised hosts into a high-availability, load-balancing network that is similar to Content Delivery Networks (CDNs). Fast-Flux Service Networks (FFSNs) are usually used as proxies of phishing websites and malwares, and hide upstream servers that host actual content. In this paper, by analysing recursive DNS traffic, we develop a fast-flux domain detection method which combines both real-time detection and long-term monitoring. Experimental results demonstrate that our solution can achieve significantly higher detection accuracy values than previous flux-score based algorithms, and is lightweight in terms of resource consumption. We evaluate the performance of the proposed fast-flux detection and tracking solution during a 180-day period of deployment on our university's DNS servers. Based on the tracking results, we successfully identify the changes in the distribution of FFSN and their roles in recent Internet attacks.
    Keywords: Internet; Web sites; computer network security; invasive software; network servers; resource allocation; telecommunication traffic; DNS servers; DNS technique; FFSNs; Internet attacks; domain name system traffic; fast-flux botnet; fast-flux detection; fast-flux domain detection method; fast-flux service networks; hide upstream servers; hybrid detection; hybrid tracking; load-balancing network; long-term monitoring; malwares; performance evaluation; phishing Web sites; real-time detection; recursive DNS traffic; resource consumption; time 180 day; tracking solution; Classification algorithms; Decision trees; Feature extraction; IP networks; Real-time systems; Telecommunication traffic; botnet; domain name system; fast-flux (ID#:14-2924)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6674213&isnumber=6674198
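    For intuition, a toy flux-style score over recursive DNS answers might look like the Python sketch below: many distinct IPs spread across many networks with short TTLs raise the score. The features, weights and threshold are hypothetical; the paper's hybrid detector is considerably richer than this baseline.

        def flux_score(answers, w_ip=1.0, w_net=5.0):
            """answers: list of (ip_string, ttl_seconds) from one domain's A records."""
            ips = {ip for ip, _ in answers}
            nets = {ip.rsplit('.', 1)[0] for ip, _ in answers}  # crude /24 proxy
            avg_ttl = sum(ttl for _, ttl in answers) / len(answers)
            return w_ip * len(ips) + w_net * len(nets) - avg_ttl / 60.0

        answers = [("5.1.2.3", 300), ("91.4.5.6", 180), ("203.7.8.9", 120),
                   ("41.10.11.12", 240), ("77.13.14.15", 60)]
        print(flux_score(answers) > 20)  # crude "fast-flux suspected" decision: True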
  • Ye Na; Zhao Yinliang; Dong Lili; Bian Genqing; Enjie Liu; Clapworthy, G.J., "User Identification Based On Multiple Attribute Decision Making In Social Networks," Communications, China, vol.10, no.12, pp.37,49, Dec. 2013. doi: 10.1109/CC.2013.6723877 Social networks are becoming increasingly popular and influential, and users are frequently registered on multiple networks simultaneously, in many cases leaving large quantities of personal information on each network. There is also a trend towards the personalization of web applications; to do this, the applications need to acquire information about the particular user. To maximise the use of the various sets of user information distributed on the web, this paper proposes a method to support the reuse and sharing of user profiles by different applications, based on user profile integration. To realize this goal, the initial task is user identification, which forms the focus of the current paper. A new user identification method based on Multiple Attribute Decision Making (MADM) is described, in which a subjective weight-directed objective weighting, obtained from the Similarity Weight method, is proposed to determine the relative weights of the common properties. Attribute Synthetic Evaluation is used to determine the equivalence of users. Experimental results show that the method is both feasible and effective despite the incompleteness of the candidate user dataset.
    Keywords: decision making; social networking (online); MADM; Web application personalization; attribute synthetic evaluation; multiple attribute decision making; similarity weight method; social network; subjective weight-directed objective weighting; user identification; user profile integration; user profile reusing; user profile sharing; Communication systems; Competitive intelligence; Decision making; Electronic mail; Facebook; Identification; Information technology; LinkedIn; Social network services; Twitter; cooperative communication; fuzzy matching; heterogeneous networks; network convergence; weighted algorithm (ID#:14-2925)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6723877&isnumber=6723867
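    A minimal Python sketch of weighted attribute matching for cross-network user identification follows. The fixed weights stand in for the paper's Similarity Weight method and difflib's string ratio stands in for its fuzzy matching; both are illustrative assumptions rather than the published algorithm.

        from difflib import SequenceMatcher

        WEIGHTS = {"username": 0.5, "location": 0.2, "email": 0.3}  # hypothetical

        def sim(a, b):
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def same_user_score(profile_a, profile_b):
            """Weighted average of per-attribute similarities over shared attributes."""
            total, weight_sum = 0.0, 0.0
            for attr, w in WEIGHTS.items():
                if attr in profile_a and attr in profile_b:
                    total += w * sim(profile_a[attr], profile_b[attr])
                    weight_sum += w
            return total / weight_sum if weight_sum else 0.0

        a = {"username": "jdoe42", "location": "Beijing", "email": "jdoe@example.com"}
        b = {"username": "j_doe42", "location": "Beijing, CN"}
        print(same_user_score(a, b) > 0.7)  # treat as the same user above a threshold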
  • Ghosh, A.; Gottlieb, Y.M.; Naidu, A.; Vashist, A.; Poylisher, A.; Kubota, A.; Sawaya, Y.; Yamada, A., "Managing High Volume Data For Network Attack Detection Using Real-Time Flow Filtering," Communications, China, vol.10, no.3, pp.56,66, March 2013. doi: 10.1109/CC.2013.6488830 In this paper, we present the Real-Time Flow Filter (RTFF), a system that adopts a middle ground between coarse-grained volume anomaly detection and deep packet inspection. RTFF was designed with the goal of scaling to the high-volume data feeds that are common in large Tier-1 ISP networks while providing rich, timely information on observed attacks. It is a software solution designed to run on off-the-shelf hardware platforms, and it incorporates a scalable data processing architecture along with lightweight analysis algorithms that make it suitable for deployment in large networks. RTFF also makes use of state-of-the-art machine learning algorithms to construct attack models that can be used to detect as well as predict attacks.
    Keywords: Internet; computer network management; computer network security; Internet service provider;RTFF;Tier-1 ISP networks; coarse-grained volume anomaly detection; deep packet inspection; high volume data feeds; high volume data management; machine learning algorithms; network attack detection; off-the-shelf hardware platforms; real-time flow filtering; scalable data processing architecture; software solution; Data processing; Filters; Intrusion detection; Network architecture; Network security; Real-time systems; Security; intrusion detection; network security; scaling (ID#:14-2926)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6488830&isnumber=6488803

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


International Science of Security Research: China Communications 2014

China Communications 2014


In this bibliographical selection, we look at science of security research issues that highlight a specific series of international conferences and the IEEE journals that have come out of them, rather than at keywords. This inaugural set is from China Communications, an English-language technical journal published by the China Institute of Communications with the stated objective of providing a global academic exchange platform for the information and communications technology sector. The research cited is security research published in 2014.

  • Yang Yu; Lei Min; Cheng Mingzhi; Liu Bohuai; Lin Guoyuan; Xiao Da, "An Audio Zero-Watermark Scheme Based On Energy Comparing," Communications, China, vol.11, no.7, pp.110,116, July 2014. doi: 10.1109/CC.2014.6895390 The zero-watermark technique, which embeds a watermark without modifying the carrier, has been broadly applied for copyright protection of images; however, there is little research on audio zero-watermarks. This paper proposes an audio zero-watermark scheme based on the energy relationship between adjacent audio sections. Using the discrete wavelet transform (DWT), it obtains power approximations, or energies, of audio segments. It then extracts the audio profile, i.e. the zero-watermark, according to the relative size of the energies of consecutive fragments. The experimental results demonstrate that the proposed scheme is robust against common malicious attacks including noise addition, resampling and low-pass filtering, and that the approach effectively resolves the trade-off between inaudibility and robustness.
    Keywords: approximation theory; audio watermarking; discrete wavelet transforms; DWT; audio profile extraction; audio sections; audio segment energies; audio zero-watermark scheme; consecutive fragments; discrete wavelet transformation; energy comparing; energy relationship; general malicious attacks; power approximations ;relative energy size; watermark embedding; Arrays; Bit error rate; Digital audio players; Discrete wavelet transforms; Filtering; Robustness; Watermarking; audio watermarking scheme; energy comparing; zero-watermark (ID#:14-3118)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895390&isnumber=6895376
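    The energy-comparing step is simple to sketch in Python. The toy extractor below derives one watermark bit from each pair of adjacent segment energies; for brevity it uses raw sample energy where the paper uses DWT approximation coefficients, so it approximates the idea rather than reproducing the scheme.

        import numpy as np

        def zero_watermark(audio, n_segments=32):
            segs = np.array_split(np.asarray(audio, dtype=float), n_segments)
            energy = np.array([np.sum(s ** 2) for s in segs])
            # One bit per adjacent pair: 1 if energy is non-increasing, else 0.
            return (energy[:-1] >= energy[1:]).astype(int)

        rng = np.random.default_rng(0)
        audio = rng.standard_normal(16000)           # stand-in for a short audio clip
        wm = zero_watermark(audio)
        noisy = audio + 0.05 * rng.standard_normal(16000)
        print(np.mean(wm == zero_watermark(noisy)))  # bit agreement after noise addition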
  • Zou Weixia; Guo Chao; Du Guanglong; Wang Zhenyu; Gao Ying, "A New Codebook Design Scheme For Fast Beam Searching In Millimeter-Wave Communications," Communications, China, vol.11, no.6, pp.12,22, June 2014. doi: 10.1109/CC.2014.6878999 To overcome the imperfections of the exhaustive beam searching schemes in IEEE 802.15.3c and IEEE 802.11ad and to accelerate the beam training process, this paper, building on the fast beam searching algorithm previously proposed, presents a beam codebook design scheme for phased arrays that not only satisfies the fast beam searching algorithm's demands but also makes good use of the advantages of the searching algorithm. The simulation results show that the proposed scheme performs well in flexibility and searching time complexity, and also has a high success ratio.
    Keywords: antenna phased arrays; codes; radio networks; search problems; wireless LAN;IEEE 802.11ad standard; IEEE 802.15.3c standard; antenna element; beam codebook design scheme; beam training process; fast beam searching scheme; millimeter-wave communication; phased array; wireless communication; Array signal processing; Millimeter wave measurements; Particle beams; Receivers; Signal to noise ratio; Wireless communication; Wireless networks;60GHz;beam codebook design; beam searching; beam-forming; phased array; wireless communication (ID#:14-3119)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6878999&isnumber=6878993
  • Zhao Feng; Li Jingling, "Performance of an Improved One-Way Error Reconciliation Protocol Based On Key Redistribution," Communications, China, vol.11, no.6, pp.63,70, June 2014. doi: 10.1109/CC.2014.6879004 In data post-processing for quantum key distribution, it is essential to have a highly efficient error reconciliation protocol. Based on the key redistribution scheme, we analyze a one-way error reconciliation protocol by data simulation. The relationship between the error correction capability and the key generation efficiency of three kinds of Hamming code is demonstrated. The simulation results indicate that when the initial error rates are (0,1.5%], (1.5,4%], and (4,11%], using the Hamming (31,26), (15,11), and (7,4) codes respectively to correct the errors maximizes the key generation rate. Based on this, we propose a modified one-way error reconciliation protocol which employs a mixed Hamming code concatenation scheme. The error correction capability and key generation rate are verified through data simulation. Using the parameters of the posterior distribution based on the tested data, a simple method for estimating the bit error rate (BER) with a given confidence interval is presented. The simulation results show that when the initial bit error rate is 10.00%, after 7 rounds of error correction the error bits are eliminated completely, and the key generation rate is 10.36%; the BER expectation is 2.96 x 10^-10, and at 95% confidence the corresponding BER upper limit is 2.17 x 10^-9. By comparison, for the single (7,4) Hamming code error reconciliation scheme at a confidence of 95%, the key generation rate is only 6.09%, while the BER expectation is 5.92 x 10^-9, with a BER upper limit of 4.34 x 10^-8. Hence, our improved protocol is much better than the original one.
    Keywords: Hamming codes; concatenated codes; cryptographic protocols; error correction codes; error statistics; quantum cryptography; statistical distributions; BER estimation; bit error rate; confidence interval; data post-processing; data simulation; error correction capability; improved one-way error reconciliation protocol; key generation efficiency; key generation rate; key redistribution scheme; mixed Hamming code concatenation scheme; posterior distribution; quantum key distribution; single (7,4) Hamming code error reconciliation scheme; Bit error rate; Data processing; Error correction codes; Error probability; Performance evaluation; Quantum wells; data post-processing; error reconciliation; quantum key distribution (ID#:14-3120)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6879004&isnumber=6878993
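    A single (7,4) Hamming round of one-way reconciliation can be demonstrated directly: Bob forms a 7-bit word from his (possibly erroneous) data bits plus the parity bits Alice sent, then syndrome-decodes to correct at most one flipped bit per block. The Python sketch below uses the standard (7,4) generator and parity-check matrices and illustrates the building block only, not the paper's mixed concatenation protocol.

        import numpy as np

        G = np.array([[1,0,0,0,1,1,0],   # generator: codeword = data @ G (mod 2)
                      [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1],
                      [0,0,0,1,1,1,1]])
        H = np.array([[1,1,0,1,1,0,0],   # parity-check matrix, H @ codeword = 0
                      [1,0,1,1,0,1,0],
                      [0,1,1,1,0,0,1]])

        def encode(data4):
            return np.mod(data4 @ G, 2)

        def reconcile(code7):
            syndrome = np.mod(H @ code7, 2)
            if syndrome.any():           # syndrome equals the column of the bad bit
                col = [tuple(H[:, j]) for j in range(7)].index(tuple(syndrome))
                code7[col] ^= 1
            return code7[:4]             # corrected data bits

        alice = np.array([1, 0, 1, 1])
        code = encode(alice)
        code[2] ^= 1                     # the quantum channel flips one bit
        print(reconcile(code.copy()), alice)  # corrected block matches Alice's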
  • Wang Yi; Liu Sanyang; Niu Wei; Liu Kai; Liao Yong, "Threat Assessment Method Based On Intuitionistic Fuzzy Similarity Measurement Reasoning With Orientation," Communications, China, vol.11, no.6, pp.119,128, June 2014. doi: 10.1109/CC.2014.6879010 The aim of this paper is to propose a threat assessment method based on intuitionistic fuzzy similarity measurement reasoning with orientation, addressing a shortcoming of the methods proposed in [Ying-Jie Lei et al., Journal of Electronics and Information Technology 29(9)(2007)2077-2081] and [Dong-Feng Chen et al., Procedia Engineering 29(5)(2012)3302-3306]: their neglect of the influence of the intuitionistic index's orientation on the membership functions in the reasoning, which causes partial information loss in the reasoning process. Therefore, we present a 3D expression of intuitionistic fuzzy similarity measurement, analyse the constraints for intuitionistic fuzzy similarity measurement, and redefine the intuitionistic fuzzy similarity measurement. Moreover, in view of the threat assessment problem, we give the system variables of attribute function and assessment index, set up the reasoning system based on intuitionistic fuzzy similarity measurement with orientation, and design the reasoning rules, reasoning algorithms and fuzzy-resolving algorithms. Finally, typical threat assessment examples are cited to verify the validity and superiority of the method.
    Keywords: constraint handling; fuzzy logic; fuzzy reasoning; security of data; assessment index; attribute function; constraints analysis; fuzzy resolving algorithm; intuitionistic fuzzy similarity measurement with orientation; reasoning algorithms; reasoning rules; system variables; threat assessment method; Algorithm design and analysis; Cognition; Extraterrestrial measurements; Fuzzy reasoning; Fuzzy sets; Three-dimensional displays; Intuitionistic fuzzy reasoning; Orientation; Similarity measurement; Threat assessment (ID#:14-3121)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6879010&isnumber=6878993
  • Li Wei; Tao Zhi; Gu Dawu; Sun Li; Qu Bo; Liu Zhiqiang; Liu Ya, "An Effective Differential Fault Analysis On The Serpent Cryptosystem In The Internet Of Things," Communications, China, vol.11, no.6, pp.129,139, June 2014. doi: 10.1109/CC.2014.6879011 Due to its strong attacking ability, fast speed, simple implementation and other characteristics, differential fault analysis has become an important method for evaluating the security of cryptosystems in the Internet of Things. As one of the AES finalists, Serpent is a 128-bit Substitution-Permutation Network (SPN) cryptosystem. It has 32 rounds and a variable key length between 0 and 256 bits, which is flexible enough to provide security in the Internet of Things. On the basis of the byte-oriented model and differential analysis, we propose an effective differential fault attack on the Serpent cryptosystem. Mathematical analysis and simulation experiments show that the attack can recover the secret key by introducing 48 faulty ciphertexts. The results of this study show in detail that Serpent is vulnerable to differential fault analysis, which will be beneficial to the analysis of other iterated cryptosystems of the same type.
    Keywords: Internet of Things; computer network security; mathematical analysis; private key cryptography; Internet of Things; SPN cryptosystem; Serpent cryptosystem; byte-oriented model; cryptosystem security; differential fault analysis; differential fault attack; faulty ciphertexts; mathematical analysis; secret key recovery; substitution-permutation network cryptosystem; word length 0 bit to 256 bit; Educational institutions; Encryption; Internet of Things; Schedules; cryptanalysis; differential fault analysis ;internet of things; serpent (ID#:14-3122)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6879011&isnumber=6878993
  • Seongwon Han; Youngtae Noh; Liang, R.; Chen, R.; Yung-Ju Cheng; Gerla, M., "Evaluation of Underwater Optical-Acoustic Hybrid Network," Communications, China, vol.11, no.5, pp.49,59, May 2014. doi: 10.1109/CC.2014.6880460 The deployment of underwater networks allows researchers to collect explorative and monitoring data on underwater ecosystems. The acoustic medium has been widely adopted in current research and commercial uses, while the optical medium remains experimental only. Our survey of the properties of acoustic and optical communications, together with preliminary simulation results, shows significant trade-offs between bandwidth, propagation delay, power consumption, and effective communication range. We propose a hybrid solution that combines acoustic and optical communication, overcoming the bandwidth limitation of the acoustic channel by enabling optical communication with the help of acoustic-assisted alignment between optical transmitters and receivers.
    Keywords: optical receivers; optical transmitters; underwater acoustic communication; underwater optical wireless communication; acoustic communication; acoustic communications; acoustic medium; bandwidth; monitoring data; optical communication; optical communications; optical medium; optical receivers; optical transmitters; power consumption; propagation delay; underwater ecosystems; underwater optical acoustic hybrid network; Acoustics; Attenuation; Optical attenuators; Optical fiber communication; Optical receivers; Optical transmitters; acoustic communication; optical communication; underwater (ID#:14-3123)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880460&isnumber=6880452
  • Tian Zhihong; Jiang Wei; Li Yang; Dong Lan, "A Digital Evidence Fusion Method In Network Forensics Systems With Dempster-Shafer Theory," Communications, China, vol.11, no.5, pp.91,97, May 2014. doi: 10.1109/CC.2014.6880464 Network intrusion forensics is an important extension to present security infrastructure, and is becoming the focus of the forensics research field. However, in the face of sophisticated multi-stage attacks and large volumes of sensor data, current practice in network forensic analysis is manual examination, an error-prone, labor-intensive and time-consuming process. To solve these problems, in this paper we propose a digital evidence fusion method for network forensics with Dempster-Shafer theory that can efficiently detect computer crime in networked environments and automatically fuse digital evidence from different sources such as hosts and sub-networks. In the end, we evaluate the method on the well-known KDD Cup 1999 dataset. The results prove that our method is very effective for real-time network forensics, and can provide comprehensible messages for a forensic investigator.
    Keywords: computer crime; computer network security; digital forensics; inference mechanisms; Dempster-Shafer theory; KDD Cup dataset; comprehensible messages; computer crime detection; digital evidence fusion method; network intrusion forensic systems; networked environments; security infrastructure; Algorithm design and analysis; Computer crime; Computer security; Digital forensics; Digital systems; Forensics; Support vector machines; dempster-shafer theory; digital evidence; fusion; network forensics; security (ID#:14-3124)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880464&isnumber=6880452
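    Dempster's rule of combination, the core of the fusion step described above, is compact enough to show in full. The Python sketch fuses two hypothetical mass functions, one from a host sensor and one from a subnet sensor, over the frame {attack, normal}; mass assigned to the whole frame encodes ignorance.

        from itertools import product

        def combine(m1, m2):
            """Dempster's rule: normalized conjunctive combination of m1 and m2."""
            fused, conflict = {}, 0.0
            for (a, pa), (b, pb) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    fused[inter] = fused.get(inter, 0.0) + pa * pb
                else:
                    conflict += pa * pb
            return {k: v / (1.0 - conflict) for k, v in fused.items()}

        A, N = frozenset({"attack"}), frozenset({"normal"})
        host_ids = {A: 0.6, N: 0.1, A | N: 0.3}  # evidence from a host sensor
        net_ids  = {A: 0.7, N: 0.2, A | N: 0.1}  # evidence from a subnet sensor
        print(combine(host_ids, net_ids))        # fused belief strongly favours "attack"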
  • Hu Ziquan; She Kun; Wang Jianghua; Tang Jianguo, "Game Theory Based False Negative Probability Of Embedded Watermark Under Unintentional And Steganalysis Attacks," Communications, China, vol.11, no.5, pp.114,123, May 2014. doi: 10.1109/CC.2014.6880467 A steganalysis attack statistically estimates the embedded watermark in watermarked multimedia, and the estimated watermark may then be destroyed by the attacker. Existing treatments of false negative probability, however, do not consider the influence of steganalysis attacks. This paper proposes a game-theory-based false negative probability that estimates the impact of steganalysis attacks as well as unintentional attacks. Specifically, game theory is used to model the contest between embedding and steganalysis attack and to derive the optimal building/embedding/attacking strategies. These optimal playing strategies are used to calculate the attacker-destructed watermark, which in turn feeds the calculation of the game-theory-based false negative probability. The experimental results show that watermark detection reliability measured using the proposed method better reflects the real scenario in which the embedded watermark undergoes both unintentional and steganalysis attacks. This paper provides a foundation for investigating countermeasures of the digital watermarking community against steganalysis attacks.
    Keywords: game theory; multimedia communication; probability; steganography; telecommunication security; watermarking; embedded watermark; false negative probability; game theory; negative probability; optimal building-embedding-attacking strategy; optimal playing strategies; steganalysis attacks; unintentional attack; unintentional attacks; watermark detection reliability; watermarked multimedia; Bit error rate; Digital watermarking; Error analysis; Game theory; Reliability; Steganography; Watermarking; digital watermarking; false negative probability; game theory; steganalysis attack; watermark capacity (ID#:14-3125)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880467&isnumber=6880452
  • Xiaoyan Liang; Chunhe Xia; Jian Jiao; Junshun Hu; Xiaojian Li, "Modeling and Global Conflict Analysis Of Firewall Policy," Communications, China, vol.11, no.5, pp.124,135, May 2014. doi: 10.1109/CC.2014.6880468 A global view of firewall policy conflicts is important for administrators to optimize the policy. Appropriate global conflict analysis of firewall policies has been lacking; existing methods focus on local conflict detection. In this paper, we study a global conflict detection algorithm. We present a semantic model that captures more complete classifications of the policy using the knowledge concept in rough sets. Based on this model, we present a formal model of global conflict and represent it with an OBDD (Ordered Binary Decision Diagram). We then develop GFPCDA (Global Firewall Policy Conflict Detection Algorithm) to detect global conflicts. In experiments, we evaluate the usability of our semantic model by eliminating the false positives and false negatives, caused by an incomplete policy semantic model, of a classical algorithm, and compare that algorithm with GFPCDA. The results show that GFPCDA detects conflicts more precisely and independently, and has better performance.
    Keywords: binary decision diagrams; firewalls; pattern classification; rough set theory; GFPCDA algorithm; OBDD; firewall policy classification; firewall policy global conflict analysis; global conflict detection algorithm; global firewall policy conflict detection algorithm; knowledge concept; local conflict detection; ordered binary decision diagram; rough set; semantic model; semantic model usability; Algorithm design and analysis; Analytical models; Classification algorithms; Detection algorithms; Firewalls (computing);Semantics; conflict analysis; conflict detection; firewall policy; semantic model (ID#:14-3126)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880468&isnumber=6880452
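    To make the notion of a policy conflict concrete, the Python sketch below detects one classic conflict type, shadowing, by pairwise comparison of ordered rules. This is only the naive baseline that semantic, OBDD-based approaches such as GFPCDA improve on; the rule format and the example policy are hypothetical.

        from ipaddress import ip_network

        RULES = [  # hypothetical ordered policy: (protocol, src, dst, action)
            ("tcp", "10.0.0.0/24", "0.0.0.0/0", "deny"),
            ("tcp", "10.0.0.0/25", "0.0.0.0/0", "allow"),  # never matched
        ]

        def conflicts(rules):
            found = []
            for i, (pi, si, di, ai) in enumerate(rules):
                for j in range(i + 1, len(rules)):
                    pj, sj, dj, aj = rules[j]
                    if (pi == pj and ai != aj
                            and ip_network(sj).subnet_of(ip_network(si))
                            and ip_network(dj).subnet_of(ip_network(di))):
                        found.append((j, "shadowed by", i))
            return found

        print(conflicts(RULES))  # [(1, 'shadowed by', 0)]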
  • Xu Chaofeng; Fan Weimin; Wang Changfeng; Xin Zhanhong, "Risk and Intellectual Property In Technical Standard Competition: A Game Theory Perspective," Communications, China, vol.11, no.5, pp.136,143, May 2014. doi: 10.1109/CC.2014.6880469 A technical standard is typically characterized by network effects. The key point for a technical standard is consumers' choice, which is based on consumers' maximum benefits. When a technical standard becomes a national standard, its interests are integrated into the national interests. National interests divide into economic profits and security factors. From the perspective of consumers' choice, this paper deals with the main factors affecting the result of technical standard competition: the risk and profits of intellectual property, based on the assumption of bounded rationality and dynamic game theory.
    Keywords: consumer behaviour; game theory; industrial property; macroeconomics; profitability; risk management; consumer choice; consumer maximum benefits; dynamic game theory; economic profit factor; economic security factor intellectual property profits; intellectual property risk; national interests; network effect; technical standard competition; Analytical models; Computer security; Game theory; Intellectual property; Standards; game theory; intellectual property; risk; standard competition (ID#:14-3127)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880469&isnumber=6880452
  • Li Chaoling; Chen Yue; Zhou Yanzhou, "A Data Assured Deletion Scheme In Cloud Storage," Communications, China, vol.11, no.4, pp.98,110, April 2014. doi: 10.1109/CC.2014.6827572 In order to provide a practicable solution to data confidentiality in cloud storage services, a data assured deletion scheme is proposed that achieves fine-grained access control, resistance to hopping and sniffing attacks, data dynamics and deduplication. In our scheme, data blocks are encrypted by a two-level encryption approach, in which the control keys are generated from a key derivation tree, encrypted by an All-Or-Nothing algorithm and then distributed into a DHT network after being partitioned by secret sharing. This guarantees that only authorized users can recover the control keys and then decrypt the outsourced data within an owner-specified data lifetime. Besides confidentiality, data dynamics and deduplication are also achieved, separately, by adjustment of the key derivation tree and by convergent encryption. The analysis and experimental results show that our scheme satisfies its security goals and performs assured deletion at low cost.
    Keywords: authorisation; cloud computing; cryptography; storage management; DHT network; all-or-nothing algorithm; cloud storage; convergent encryption; data assured deletion scheme; data confidentiality; data deduplication; data dynamics; fine grained access control; key derivation tree; owner-specified data lifetime; sniffing attack resistance; two-level encryption approach; Artificial neural networks; Encryption; cloud storage; data confidentiality; data dynamics; secure data assured deletion (ID#:14-3128)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827572&isnumber=6827540
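    The secret-sharing step the scheme relies on is standard (t, n) Shamir sharing: the control key becomes the constant term of a random degree t-1 polynomial over a prime field, the shares scattered into the DHT are point evaluations, and any t shares recover the key by Lagrange interpolation; once enough shares expire from the DHT, the key, and hence the data, is unrecoverable. A toy Python sketch, not review-grade cryptography:

        import random

        P = 2**61 - 1  # Mersenne prime used as the field modulus

        def split(secret, n=5, t=3):
            coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
            def f(x):
                return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
            return [(x, f(x)) for x in range(1, n + 1)]

        def reconstruct(shares):
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num = den = 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, P - 2, P)) % P
            return secret

        shares = split(123456789)
        print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover the key
        print(reconstruct(shares[1:4]))  # 123456789 again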
  • Guoyuan Lin; Danru Wang; Yuyu Bie; Min Lei, "MTBAC: A mutual trust based access control model in Cloud computing," Communications, China, vol.11, no.4, pp.154,162, April 2014. doi: 10.1109/CC.2014.6827577 As a new computing mode, cloud computing can provide users with virtualized and scalable web services, which, however, face serious security challenges. Access control is one of the most important measures to ensure the security of cloud computing, but directly applying traditional access control models to the Cloud cannot solve the uncertainty and vulnerability caused by the open conditions of cloud computing. In a cloud computing environment, data security during interactions between users and the Cloud can be effectively guaranteed only when the security and reliability of both interacting parties are ensured. Therefore, building a mutual trust relationship between users and the cloud platform is the key to implementing new kinds of access control in the cloud computing environment. Combined with Trust Management (TM), a mutual trust based access control (MTBAC) model is proposed in this paper. The MTBAC model takes both the user's behavior trust and the cloud service node's credibility into consideration. Trust relationships between users and cloud service nodes are established by a mutual trust mechanism, and the security problems of access control are solved by implementing the MTBAC model in a cloud computing environment. Simulation experiments show that the MTBAC model can guarantee the interaction between users and cloud service nodes.
    Keywords: Web services; authorisation; cloud computing; virtualisation; MTBAC model; cloud computing environment; cloud computing security; cloud service node credibility; data security; mutual trust based access control model; mutual trust mechanism; mutual trust relationship; open conditions; scalable Web services; trust management; user behavior trust; virtualized Web services; Computational modeling; Reliability; Time-frequency analysis; MTBAC; access control; cloud computing; mutual trust mechanism; trust model (ID#:14-3129)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827577&isnumber=6827540
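    A minimal Python sketch of a mutual-trust access decision in the spirit of MTBAC: the cloud node scores the user's behaviour trust from interaction history, the user checks the node's credibility, and access requires both to clear a threshold. The decay model and thresholds are hypothetical choices, not the paper's formulas.

        def behaviour_trust(history, decay=0.8):
            """Exponentially weighted average of past interaction outcomes (1 = good)."""
            if not history:
                return 0.5                        # neutral prior for a new user
            trust, weight = 0.0, 1.0
            for outcome in reversed(history):     # most recent interactions count most
                trust += weight * outcome
                weight *= decay
            norm = (1 - decay ** len(history)) / (1 - decay)
            return trust / norm

        def grant(user_history, node_credibility, user_thresh=0.6, node_thresh=0.6):
            return (behaviour_trust(user_history) >= user_thresh
                    and node_credibility >= node_thresh)

        print(grant([1, 1, 0, 1, 1], node_credibility=0.75))  # True
        print(grant([0, 0, 1, 0, 0], node_credibility=0.75))  # False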
  • Li Ning; Lin Kanfeng; Lin Wenliang; Deng Zhongliang, "A Joint Encryption And Error Correction Method Used In Satellite Communications," Communications, China, vol.11, no.3, pp.70,79, March 2014. doi: 10.1109/CC.2014.6825260 Due to the ubiquitous open-air links and complex electromagnetic environment in satellite communications, ensuring the security and reliability of information transmitted through satellite communications is an urgent problem. This paper combines AES (Advanced Encryption Standard) with LDPC (Low Density Parity Check) codes to design a secure and reliable error correction method, SEEC (Satellite Encryption and Error Correction). The method selects LDPC codes suitable for satellite communications and uses the AES round key to control the encoding process; at the same time, it proposes a new round key generation algorithm. Building on good error correction properties in satellite communications, the method improves the security of the system, achieves a shorter key size, and thereby makes key management easier. MATLAB simulations show that the method has strong error correction capability and a good encryption effect.
    Keywords: cryptography; encoding; error correction codes; parity check codes; satellite communication; telecommunication network reliability; telecommunication security; AES; LDPC codes; MATLAB simulation; SEEC; advanced encryption standard; complex electromagnetic environment; encoding process; error correction; low density parity check code ;reliability; round key generation; satellite communications; satellite encryption; security; ubiquitous open air links; Encoding; Encryption; Error correction; Parity check codes; Satellite communication; LDPC channel coding; advanced encryption standard; data encryption; error correcting cipher; satellite communications (ID#:14-3130)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825260&isnumber=6825249
  • Huang Qinlong; Ma Zhaofeng; Yang Yixian; Niu Xinxin; Fu Jingyi, "Improving Security And Efficiency For Encrypted Data Sharing In Online Social Networks," Communications, China, vol.11, no.3, pp.104,117, March 2014. doi: 10.1109/CC.2014.6825263 Although existing data sharing systems in online social networks (OSNs) propose to encrypt data before sharing, the multiparty access control of encrypted data has become a challenging issue. In this paper, we propose a secure data sharing scheme in OSNs based on ciphertext-policy attribute-based proxy re-encryption and secret sharing. In order to protect users' sensitive data, our scheme allows users to customize access policies for their data and then outsource the encrypted data to the OSN service provider. Our scheme presents a multiparty access control model, which enables a disseminator to update the access policy of a ciphertext if their attributes satisfy the existing access policy. Further, we present a partial decryption construction in which the computation overhead of the user is largely reduced by delegating most of the decryption operations to the OSN service provider. We also provide checkability on the results returned from the OSN service provider to guarantee the correctness of partially decrypted ciphertexts. Moreover, our scheme presents an efficient attribute revocation method that achieves both forward and backward secrecy. The security and performance analysis results indicate that the proposed scheme is secure and efficient in OSNs.
    Keywords: authorisation; cryptography; social networking (online); attribute based proxy reencryption; ciphertext policy; data security; decryption operations; encrypted data sharing efficiency; multiparty access control model; online social networks; secret sharing; secure data sharing; Access control; Amplitude shift keying; Data sharing; Encryption; Social network services; attribute revocation; attribute-based encryption; data sharing; multiparty access control; online social networks (ID#:14-3131)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825263&isnumber=6825249
  • Yue Keqiang; Sun Lingling; Qin Xing; Zheng Zhonghua, "Design of Anti-Collision Integrated Security Mechanism Based On Chaotic Sequence In UHF RFID System," Communications, China, vol.11, no.3, pp.137,147, March 2014. doi: 10.1109/CC.2014.6825266 Collision and security issues are considered barriers to RFID applications. In this paper, a parallelizable anti-collision scheme based on a chaotic sequence combined with dynamic frame slotted ALOHA is proposed to build a high-efficiency RFID system. For the tags' parallelizable identification, we design a discrete Markov process to analyze the successful identification rate. A mutual authentication security protocol merging the chaotic anti-collision is then presented. The theoretical analysis and simulation results show that the proposed identification scheme requires less than 45.1% of the identification time slots of the OVSF system when the length of the chaos sequence is 31. The successful identification rate of the proposed chaotic anti-collision reaches 63% when the number of tags is 100. We test the energy consumption of the presented authentication protocol, which can simultaneously solve the anti-collision and security issues of the UHF RFID system.
    Keywords: Markov processes; access protocols; chaotic communication ;cryptographic protocols; power consumption; radiofrequency identification; UHF RFID system; anticollision integrated security; chaotic anticollision; chaotic sequence; combined dynamic frame slotted aloha; discrete Markov process; energy consumption; mutual authentication security protocol; parallelizable anticollision; parallelizable identification; success identification rate; Authentication; Chaotic communication; Markov processes; Protocols; Radiofrequency identification; anti-collision; chaotic sequence; discrete Markov process; performance analysis; security (ID#:14-3132)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825266&isnumber=6825249
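    The chaotic sequence itself is easy to generate; the logistic map with r = 4 is a standard fully chaotic generator (whether the paper uses this particular map is an assumption here). Nearby seeds diverge rapidly, which is what decorrelates the slot choices of different tags in a frame-slotted ALOHA round:

        def chaotic_slots(seed, frame_size, length):
            """Map a logistic-map orbit onto slot indices in [0, frame_size)."""
            x, slots = seed, []
            for _ in range(length):
                x = 4.0 * x * (1.0 - x)          # logistic map, chaotic regime
                slots.append(min(int(x * frame_size), frame_size - 1))
            return slots

        # Two tags with nearby seeds quickly diverge (sensitivity to initial
        # conditions), so their reply slots decorrelate within a few iterations.
        print(chaotic_slots(0.3101, frame_size=16, length=8))
        print(chaotic_slots(0.3102, frame_size=16, length=8))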
  • Zhiming Wang; Jiangxing Wu; Yu Wang; Ning Qi; Julong Lan, "Survivable Virtual Network Mapping Using Optimal Backup Topology In Virtualized SDN," Communications, China, vol.11, no.2, pp.26, 37, Feb 2014. doi: 10.1109/CC.2014.6821735 Software-Defined Network architecture offers network virtualization through a hypervisor plane to share the same physical substrate among multiple virtual networks. However, for this hypervisor plane, how to map a virtual network to the physical substrate while guaranteeing the survivability in the event of failures, is extremely important. In this paper, we present an efficient virtual network mapping approach using optimal backup topology to survive a single link failure with less resource consumption. Firstly, according to whether the path splitting is supported by virtual networks, we propose the OBT-I and OBT-II algorithms respectively to generate an optimal backup topology which minimizes the total amount of bandwidth constraints. Secondly, we propose a Virtual Network Mapping algorithm with coordinated Primary and Backup Topology (VNM-PBT) to make the best of the substrate network resource. The simulation experiments show that our proposed approach can reduce the average resource consumption and execution time cost, while improving the request acceptance ratio of VNs.
    Keywords: software radio; telecommunication network reliability; telecommunication network topology; OBT-I algorithms; OBT-II algorithms; bandwidth constraints; hypervisor plane; multiple virtual networks; optimal backup topology; physical substrate; resource consumption; single link failure; software-defined network architecture; substrate network resource; survivable virtual network mapping; virtualized SDN; Artificial neural networks; Bandwidth; optimization; Switches; Topology; backup sharing; optimal backup topology; path splitting; software-defined network; survivability; virtual network mapping (ID#:14-3133)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821735&isnumber=6821729
  • Gu Lize; Wang Jingpei; Sun Bin, "Trust Management Mechanism for Internet of Things," Communications, China, vol.11, no.2, pp.148,156, Feb. 2014. doi: 10.1109/CC.2014.6821746 Trust management has been proven to be a useful technology for providing security services and has consequently been used in many applications such as P2P, Grid and ad hoc networks. However, little research on trust mechanisms for the Internet of Things (IoT) can be found in the literature, though we argue that there is considerable need for applying trust mechanisms to the IoT. In this paper, we establish a formal trust management control mechanism based on an architecture model of the IoT. We decompose the IoT into three layers, namely the sensor layer, core layer and application layer, from the aspect of network composition. Each layer is controlled by trust management for a special purpose: self-organization, affective routing and multi-service, respectively. The final decision-making is performed by the service requester according to the collected trust information as well as the requester's policy. Finally, we use formal-semantics-based methods and fuzzy set theory to realize the above trust mechanisms, the result of which provides a general framework for the development of trust models for the IoT.
    Keywords: Internet of Things; ad hoc networks; decision making; fuzzy set theory; peer-to-peer computing; telecommunication network routing; telecommunication security; Internet of Things;IoT;P2P;ad hoc network; application layer; core layer; decision making; formal semantics; formal trust management control; fuzzy set theory; grid; routing; security service; sensor layer; trust management mechanism; Decision making; Internet ;Legged locomotion; Multiplexing; Security; Internet of Things; formal semantics; trust decisionmaking; trust management (ID#:14-3134)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821746&isnumber=6821729
  • Cao Wanpeng; Bi Wei, "Adaptive And Dynamic Mobile Phone Data Encryption Method," Communications, China, vol.11, no.1, pp.103,109, Jan. 2014. doi: 10.1109/CC.2014.6821312 To enhance the security of user data in the clouds, we present an adaptive and dynamic data encryption method that encrypts user data on the mobile phone before it is uploaded. Firstly, the adopted data encryption algorithm is not static and uniform: for each encryption, the algorithm is adaptively and dynamically selected from the algorithm set in the mobile phone encryption system. Based on the mobile phone's characteristics, the detailed encryption algorithm selection strategy is determined by the user's mobile phone hardware information, personalization information and a pseudo-random number. Secondly, before encryption the data is rearranged from a randomly selected start position; the randomness of the start position makes mobile phone data encryption safer. Thirdly, the rearranged data is encrypted by the selected algorithm and a generated key. Finally, the analysis shows that this method provides higher security because more dynamics and randomness are adaptively added to the encryption process.
    Keywords: cloud computing; cryptography; data protection; mobile computing; mobile handsets; random functions; detail encryption algorithm selection strategy; mobile phone data encryption method; mobile phone encryption system; mobile phone hardware information; personalization information; pseudorandom number; user data security; Encryption; Heuristic algorithms; Mobile communication; Mobile handsets; Network security; cloud storage; data encryption; mobile phone; pseudo-random number (ID#:14-3135)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821312&isnumber=6821299
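    The selection strategy described above can be sketched as follows: hash the device-specific inputs together with a fresh random value, then use the digest to pick an algorithm from the set and a start offset for rearranging the data. The cipher names and function shapes below are placeholders, not the paper's construction; a real system would invoke the chosen algorithm from a vetted cryptographic library.

        import hashlib, secrets

        CIPHERS = ["AES-256-GCM", "ChaCha20-Poly1305", "Camellia-256-CTR"]  # example set

        def select_encryption(hw_id, user_profile, data_len):
            nonce = secrets.token_bytes(16)              # per-encryption randomness
            digest = hashlib.sha256(hw_id + user_profile + nonce).digest()
            cipher = CIPHERS[digest[0] % len(CIPHERS)]   # dynamic algorithm choice
            start = int.from_bytes(digest[1:5], "big") % data_len  # rotation offset
            return cipher, start, nonce                  # nonce kept for decryption

        def rotate(data, start):
            """Rearrange data to begin at the selected position before encrypting."""
            return data[start:] + data[:start]

        cipher, start, nonce = select_encryption(b"IMEI:0042", b"user:alice", 1024)
        print(cipher, start)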
  • Shang Tao; Pei Hengli; Liu Jianwei, "Secure Network Coding Based On Lattice Signature," Communications, China, vol.11, no.1, pp.138,151, Jan. 2014. doi: 10.1109/CC.2014.6821316 To provide a high security guarantee for network coding and to lower the computational complexity induced by signature schemes, we take full advantage of the homomorphic property to build lattice signature schemes and secure network coding algorithms. Firstly, by means of the distance between a message and its signature in a lattice, we propose a Distance-based Secure Network Coding (DSNC) algorithm and reduce its security to a new hard problem, the Fixed Length Vector Problem (FLVP), which is harder than the Shortest Vector Problem (SVP) on lattices. Secondly, considering the boundary on the distance between the message and its signature, we further propose an efficient Boundary-based Secure Network Coding (BSNC) algorithm to reduce the computational complexity induced by the square calculation in DSNC. Simulation results and security analysis show that the proposed signature schemes have stronger unforgeability, due to the natural properties of lattices, than the traditional Rivest-Shamir-Adleman (RSA)-based signature scheme. The DSNC algorithm is more secure, and the BSNC algorithm greatly reduces the time cost of computation.
    Keywords: computational complexity; digital signatures; network coding; telecommunication security; BSNC; DSNC; FLVP; boundary-based secure network coding; computing complexity; distance-based secure network coding; fixed length vector problem; hard problem; high-security guarantee; homomorphic property; lattice signature; signature scheme; Algorithm design and analysis; Cryptography; Lattices; Network coding; Network security; fixed length vector problem; lattice signature; pollution attack; secure network coding (ID#:14-3136)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821316&isnumber=6821299

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Lablet Research: Human Behavior & Cybersecurity

Human Behavior and Cybersecurity


EXECUTIVE SUMMARY:

Over the past year, the NSA Science of Security lablets engaged in 12 NSA-approved research projects addressing the hard problem of Human Behavior and Cybersecurity. Both CMU and UIUC worked with non-lablet universities on SoS research, effectively expanding the SoS community. In addition to the lablets, other universities involved in SoS research include Berkeley, U Pitt, UTSA, University of Newcastle, USC, UPenn, and Dartmouth. Several of the projects addressed other hard problems, most frequently Security-Metrics-Driven Evaluation, Design, Development, and Deployment. The projects are in various stages of maturity, and several have led to publications and/or conference presentations. Summaries of the projects, highlights, and publications are presented below.

1. USE: User Security Behavior (CMU/Berkeley/University of Pittsburgh Collaborative Proposal)

SUMMARY: The Security Behavior Observatory addresses the hard problem of "Understanding and Accounting for Human Behavior" by collecting data directly from people's own home computers, thereby capturing people's computing behavior "in the wild". This data is the closest to the ground truth of the users' everyday security and privacy challenges that the research community has ever collected. We expect the insights discovered by analyzing this data will profoundly impact multiple research domains, including but not limited to behavioral sciences, computer security & privacy, economics, and human-computer interaction. By its very nature - building infrastructure to collect data, then collecting, and eventually analyzing the data - the project has a long set-up phase. As a result, it will likely be much more publication-centered toward the second half of its projected duration. However, we are confident that the greater number and quality of sensors we are building, and the more secure, reliable, and robust infrastructure we continue to build, will provide more and better data, resulting in more and stronger publications. That said, now that we are launching our data collection pilot study, we hope to compile the lessons learnt about building and launching such a large-scale field study into an early publication. We also hope the pilot will go smoothly enough that we can submit a paper with early results from the short-term data collected. (ID#:14-3330)

HIGHLIGHTS and PUBLICATIONS

  • We have launched our data collection architecture pilot study, and have thus far not encountered any technical challenges.
  • With the launch of our pilot study, we are now also pilot-testing numerous data collection sensors, which are collecting live field data on client machines' processes, filesystem meta-data (e.g., file path, file size, date created, date modified, permissions), network packet headers, Windows security logs, Windows updates, installed software, and wireless access points. (A minimal sketch of a filesystem meta-data sensor follows this list.)
  • We have published the following technical report describing our data collection architecture and the various issues and design decisions surrounding building and deploying a large-scale data collection infrastructure: A. Forget, S. Komanduri, A. Acquisti, N. Christin, L.F. Cranor, R. Telang. "Security Behavior Observatory: Infrastructure for Long-term Monitoring of Client Machines." Carnegie Mellon University CyLab Technical Report CMU-CyLab-14-009. https://www.cylab.cmu.edu/research/techreports/2014/tr_cylab14009.html (accessed 2014-09-05)
  • We have also given an invited presentation of our project, as well as an archival poster presentation, at the IEEE Symposium and Bootcamp on the Science of Security 2014 (HotSoS, http://www.csc2.ncsu.edu/conferences/hotsos/index.html).
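
One of the sensors above collects filesystem meta-data. The following is a minimal sketch of what such a collector might look like, assuming a simple Python directory walker; the field names and output format are illustrative assumptions, not the Security Behavior Observatory's actual sensor code.

    import os
    import stat
    from datetime import datetime, timezone

    def collect_file_metadata(root: str) -> list:
        """Walk a directory tree and record the meta-data fields named above:
        path, size, creation/modification times, and permission bits."""
        records = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue  # file vanished or is unreadable; skip it
                records.append({
                    "path": path,
                    "size_bytes": st.st_size,
                    # st_ctime is creation time on Windows, metadata-change time on Unix.
                    "created": datetime.fromtimestamp(st.st_ctime, tz=timezone.utc).isoformat(),
                    "modified": datetime.fromtimestamp(st.st_mtime, tz=timezone.utc).isoformat(),
                    "permissions": stat.filemode(st.st_mode),
                })
        return records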

2. Usable Formal Methods for the Design and Composition of Security and Privacy Policies (CMU/UTSA Collaborative Proposal)

SUMMARY: Our research is based on theories in psychology concerning how designers comprehend and interpret their environment and how they plan and project solutions into the future, with the aim of better understanding how these activities shape the design of more secure systems. These are not typical models of attackers and defenders, but models of developer behavior, including our ability to influence that behavior with interventions. The project also addresses the hard problem of Security-Metrics-Driven Evaluation, Design, Development, and Deployment. (ID#:14-3331)

HIGHLIGHTS and PUBLICATIONS:

  • We developed a repository and search tool that security analysts can use to select from 176 security patterns that were mined from a total of 21 different publications.
  • We designed a survey protocol to collect security analyst risk perceptions for formalization in Fuzzy Logic. We plan to evaluate the formalization to check whether it can predict co-dependencies between security requirements as increasing or decreasing perceptions of security risk with respect to specific threat scenarios.
  • Hui Shen, Ram Krishnan, Rocky Slavin, and Jianwei Niu. "Sequence Diagram Aided Privacy Policy Specification", revision submitted for publication: IEEE Transactions on Dependable and Secure Computing in August 2014.
  • H. Hibshi, T. Breaux, M. Riaz, L. Williams. "Discovering Decision-Making Patterns for Security Novices and Experts", In Submission: International Journal of Secure Software Engineering, 2014.
  • H. Hibshi, T. Breaux, M. Riaz, L. Williams. "A Framework to Measure Experts' Decision Making in Security Requirements Analysis," IEEE 1st International Workshop on Evolving Security and Privacy Requirements Engineering, pp. 13-18, 2014.
  • R. Slavin, J.M. Lehker, J. Niu, T. Breaux. "Managing Security Requirement Patterns Using Feature Diagram Hierarchies," IEEE 22nd International Requirements Engineering Conference, pp. 193-202, 2014.
  • Slankas, J., Riaz, M., King, J., Williams, L. "Discovering Security Requirements from Natural Language," IEEE 22nd International Requirements Engineering Conference, 2014.
  • A. Rao, H. Hibshi, T. Breaux, J.-M. Lehker, J. Niu, "Less is More? Investigating the Role of Examples in Security Studies using Analogical Transfer," 2014 Symposium and Bootcamp on the Science of Security (HotSoS), Article 7.
  • H. Hibshi, R. Slavin, J. Niu, T. Breaux, "Rethinking Security Requirements in RE Research," University of Texas at San Antonio, Technical Report #CS-TR-2014-001, January, 2014
  • Riaz, M., Breaux, T., Williams, L. "On the Design of Empirical Studies to Evaluate Software Patterns: A Survey," Revision submitted for consideration: Information and Software Technology, 2014
  • Breaux, T., Hibshi, H., Rao, A., Lehker, J.-M. "Towards a Framework for Pattern Experimentation: Understanding empirical validity in requirements engineering patterns." IEEE 2nd Workshop on Requirements Engineering Patterns (RePa'12), Chicago, Illinois, Sep. 2012, pp. 41-47.
  • Slavin, R., Shen, H., Niu, J., "Characterizations and Boundaries of Security Requirements Patterns," IEEE 2nd Workshop on Requirements Engineering Patterns (RePa'12), Chicago, Illinois, Sep. 2012, pp. 48-53.

3. Leveraging the Effects of Cognitive Function on Input Device Analytics to Improve Security (NCSU)

SUMMARY: Our work addresses understanding human behavior through observations of input device usage. The basic principles we are developing will enable new avenues for characterizing risk and identifying malicious (or accidental) uses of systems that lead to security problems. (ID#:14-3332)

HIGHLIGHTS and PUBLICATIONS:

  • We have extensively tested a typing game, which has been under development for the past two quarters.
  • Our early analysis of data from a pilot evaluation has been completed, and it resulted in a redesign which we expect will lead to much higher quality data.
  • The final version is ready for deployment, and we have commenced data collection.
  • The team has acquired an eye-tracking device and has begun developing software to integrate it into the experiment for additional instrumentation.
  • Data analysis on the mouse movement patterns during the concentration game has also progressed, and we have identified a number of characteristic patterns in movement hesitations.

4. A Human Information-Processing Analysis of Online Deception Detection (NCSU)

SUMMARY: This project seeks to predict individual users' judgments and decisions regarding possible online deception. Our research addresses this problem within the context of examining user decisions with regard to phishing attacks. This work is grounded in the scientific literature on human decision-making processes. (ID#:14-3333)

HIGHLIGHTS and PUBLICATIONS

  • Continued to modify the design of our Google Chrome browser extension to protect against phishing attacks, through iterative evaluation.
  • Completed procedural details for the study, "Browser Extension to Prevent Phishing Attack", initially designed in the last quarter, including preparation of fliers for recruiting subjects, consent forms, questionnaires, and the interface mentioned in the first bullet, as well as the protocol for conducting the experiment.
  • Submitted an application, which was subsequently approved, to the Institutional Review Board at Purdue University.

5. Warning of Phishing Attacks: Supporting Human Information Processing, Identifying Phishing Deception Indicators, and Reducing Vulnerability (NCSU)

SUMMARY: This preliminary work in understanding how mental models vary between novice users, experts (such as IT professionals), and hackers should be useful in accomplishing the ultimate goal of the work: to build secure systems that reduce user vulnerability to phishing. (ID#:14-3334)

HIGHLIGHTS and PUBLICATIONS:

  • We have completed data collection from the novices recruited for the mental models experiment. In preparation for the Industry Day Lablet meeting at NCSU on Oct. 24, we plan to recruit our "knowledgeable" sample of computer security professionals so that we can complete data collection on this project. By recruiting from these two diverse samples that vary considerably on security-related knowledge, we hope to expose how novices differ from experts on how they conceptualize system security attributes. Knowledge of these differences in mental models should allow us to recommend interventions that can promote security for all users (but most specifically novices).
  • Preliminary data analysis on this project has been initiated.
  • To demonstrate our knowledge dissemination, we are presenting our Lablet research at the Oct. 24 Industry Day, at a meeting of the Carolinas Chapter of the Human Factors and Ergonomics Society (HFES) on Oct. 23, and at the HFES international conference in Chicago from Oct. 27-31.
  • Zielinska, O., Tembe, R., Hong, K. W., Ge, X., Murphy-Hill, E. & Mayhorn, C. B. (2014). "One Phish, Two Phish, How to Avoid the Internet Phish: Analysis of Training Strategies to Detect Phishing Emails." Proceedings of the Human Factors and Ergonomics Society 56th Annual Meeting. Santa Monica, CA: Human Factors and Ergonomics Society.

6. Data-Driven Model-Based Decision-Making (UIUC/University of Newcastle Collaborative Proposal)

SUMMARY: Modeling and evaluating human behavior is challenging, but it is an imperative component in security analysis. Stochastic modeling serves as a good approximation of human behavior, but we intend to do more with the HITOP method, which uses a task-based process modeling language to evaluate a human's opportunity, willingness, and capability to perform individual tasks in their daily behavior. Paired with an effective data collection strategy to validate model parameters, this approach is intended to provide a sound model of human behavior. This project also addresses the hard problem of Predictive Security Metrics. (ID#:14-3335)
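
As a rough, hedged illustration of evaluating opportunity, willingness, and capability per task: the multiplicative gating model and all parameter values below are assumptions made for this sketch, not the HITOP formalism itself.

    import random

    def task_performed(opportunity: float, willingness: float, capability: float) -> bool:
        # Simplistic assumption: the three factors act as independent gates.
        return (random.random() < opportunity and
                random.random() < willingness and
                random.random() < capability)

    def estimate_rate(opportunity: float, willingness: float, capability: float,
                      trials: int = 100_000) -> float:
        """Monte Carlo estimate of how often a modeled task is carried out."""
        hits = sum(task_performed(opportunity, willingness, capability)
                   for _ in range(trials))
        return hits / trials

    # Example: a user who usually has the chance (0.9) but is rarely willing (0.2)
    # to follow a security procedure they are capable of performing (0.95).
    print(estimate_rate(0.9, 0.2, 0.95))  # ~0.171 = 0.9 * 0.2 * 0.95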

HIGHLIGHTS and PUBLICATIONS:

  • Newcastle University has lined up their research team for their work on the project.
  • Regular team meetings at UIUC have commenced and planning for improvements to the current HITOP prototype has been completed.
  • A full-team kick-off with Newcastle University has been scheduled for the first week of October.

7. Science of Human Circumvention of Security (UIUC/USC/UPenn/Dartmouth Collaborative Proposal)

SUMMARY: Via fieldwork in real-world enterprises, we have been identifying and cataloging types and causes of circumvention by well-intentioned users. We are using help desk logs, records of security-related computer changes, analysis of user behavior in situ, and surveys, in addition to interviews and observations. We then began to build and validate models of usage and circumvention behavior, for individuals and then for populations within an enterprise. This project also addresses three other hard problems: Scalability and Composability; Policy-Governed Secure Collaboration; and Security-Metrics-Driven Evaluation, Design, Development, and Deployment. (ID#:14-3336)

HIGHLIGHTS and PUBLICATIONS:

  • The JAMIA paper by Smith and Koppel on usability problems with health IT (pre-SHUCS, but related) received another accolade, this time from the International Medical Informatics Association, which also named it one of the best papers of 2014. We are updating that paper to include discoveries from our analysis of the workaround corpora above.
  • J. Blythe, R. Koppel, V. Kothari, and S. Smith. "Ethnography of Computer Security Evasions in Healthcare Settings: Circumvention as the Norm". HealthTech' 14: Proceedings of the 2014 USENIX Summit on Health Information Technologies, August 2014.
  • R. Koppel. "Software Loved by its Vendors and Disliked by 70% of its Users: Two Trillion Dollars of Healthcare Information Technology's Promises and Disappointments". HealthTech'14: Keynote talk at the 2014 USENIX Summit on Health Information Technologies, August 2014.

8. Human Behavior and Cyber Vulnerabilities (UMD)

SUMMARY: When a vulnerability is exploited, software vendors often release patches fixing the vulnerability. However, our prior research has shown that some vulnerabilities continue to be exploited more than four years after their disclosure. Why? We posit that there are both technical and sociological reasons for this. On the technical side, it is unclear how quickly security patches are disseminated, and how long it takes to patch all the vulnerable hosts on the Internet. On the sociological side, users/administrators may decide to delay the deployment of security patches. Our goal in this task is to validate and quantify these explanations. Specifically, we seek to characterize the rate of vulnerability patching, and to determine the factors--both technical and sociological--that influence the rate of applying patches. This project also addresses the hard problem of Security-Metrics-Driven Evaluation, Design, Development, and Deployment. (ID#:14-3337)

HIGHLIGHTS and PUBLICATIONS:

  • We conducted a study to determine how SSL certificates were reissued and revoked in response to a widespread vulnerability, Heartbleed, that enabled undetectable key compromise. We conducted large-scale measurements and developed new methodologies to determine how the 1 million most popular web sites reacted to this vulnerability in terms of certificate management, and how this impacts security for clients that use those web sites.
  • We found that the vast majority of vulnerable certificates have not been reissued; further, of those domains that reissued certificates in response to Heartbleed, 60% did not revoke their vulnerable certificates. If those certificates are not eventually revoked, 20% of them will remain valid (i.e., will not expire) for two or more years (see the sketch following this list). The ramifications of these findings are alarming: users will remain potentially vulnerable to malicious third parties using stolen keys to masquerade as a compromised site for a long time to come. We analyzed these trends for vulnerable Extended Validation (EV) certificates as well, and found that, while such certificates were handled with better security practices, those certificates still remain largely not reissued (67%) and not revoked (88%) even weeks after the vulnerability was made public.
  • Liang Zhang, David Choffnes, Tudor Dumitras, Dave Levin, Alan Mislove, Aaron Schulman, and Christo Wilson. Analysis of SSL Certificate Reissues and Revocations in the Wake of Heartbleed. In Proceedings of the ACM Internet Measurement Conference (IMC'14), Vancouver, Canada, Nov 2014.
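
The two-or-more-years figure above comes from comparing certificate expiry dates against the disclosure date. Below is a minimal sketch of that check, assuming the widely used third-party Python cryptography package (version 3.1 or later); the PEM file name is a hypothetical input.

    from datetime import datetime, timedelta
    from cryptography import x509

    DISCLOSURE = datetime(2014, 4, 7)  # Heartbleed public disclosure date

    def remains_valid_two_years(pem_bytes: bytes) -> bool:
        """True if an unrevoked certificate would still be valid
        two or more years after the Heartbleed disclosure."""
        cert = x509.load_pem_x509_certificate(pem_bytes)
        return cert.not_valid_after >= DISCLOSURE + timedelta(days=2 * 365)

    # Usage (hypothetical file):
    # with open("example.com.pem", "rb") as f:
    #     print(remains_valid_two_years(f.read()))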

9. Does the Presence of Honest Users Affect Intruders' Behavior? (UMD)

SUMMARY: The underlying premise that drives many existing cybersecurity efforts is that once an attacker has gained access to a computer system, the compromised system is no longer under the victim's control and all is lost. While we agree that efforts to secure computer systems should focus on preventing system infiltration, attention should also be given to the study of situational factors that might mitigate the potential damage caused by a successful breach. This research task applies "soft science" (sociology, psychology, and criminology) to better understand the effect of system configurations and situational stimuli on the progression and development of system break-ins. (ID#:14-3338)

10. Understanding Developers' Reasoning about Privacy and Security (UMD)

SUMMARY: Our goal is to discover, understand, and quantify challenges that developers face in writing secure and privacy-preserving programs. Several research thrusts will enable this goal. Qualitative studies of developers will discover cultural and workplace dynamics that encourage or discourage privacy and security by design. Experiments with alternative design schemas will test how to facilitate adoption. (ID#:14-3339)

HIGHLIGHTS and PUBLICATIONS:

  • We have continued interviews with mobile application developers focused on cultural and workplace dynamics, and these are expected to progress over the course of the coming academic year.
  • We have implemented a simplified version of the Bubbles platform, including the Bubbles trusted viewer and the centralized database server. The Bubbles trusted viewer resides on a user's Android device and provides other applications with a trusted platform service. With the Bubbles platform, a user groups various application data into a single Bubble based on its context. The user can then share a Bubble only with the people selected at the time of the Bubble's creation. The Bubbles platform prevents any malicious application from sharing the user's data with anyone who is not authorized by the data owner. We are preparing for a user study to measure developers' reasoning about privacy and security vis-a-vis our platform. We will measure how well non-security-expert undergraduate students understand the Bubbles platform's security model and how easily they can convert a non-secure Android application into a secure, Bubbles-compatible version. For this, we have implemented a simple Android application in which a user can write a text memo and store it in a local database. The students will be provided with the Bubbles trusted viewer, the centralized database server, and the simple Android application, and will be asked to implement the missing parts necessary for compatibility with the Bubbles platform.
  • Krontiris, I., Langheinrich, M. & Shilton, K. (2014). Trust and Privacy in Mobile Experience Sharing - Future Challenges and Avenues for Research. IEEE Communications, August 2014.

11. Reasoning about Protocols with Human Participants (UMD)

SUMMARY: Our purpose is to rigorously derive security properties of network-security protocols involving human participants and physical objects, where the limited computational capabilities of human participants and the physical properties of the objects affect the security properties of the protocols.

We first consider the example problem of electronic voting. In existing approaches, human voters are not explicitly taken into account, since it is (implicitly) assumed that each voter has access to a trusted computer while voting. In our work we do not make this assumption, because voters voting from home might have malware on their computers that could be used to throw an election.

Some more recent voting protocols have been designed for human participants voting from untrusted computers, some relying on paper or other physical objects to obtain security guarantees. However, the security properties of these protocols are not well understood. We need a well-developed model to reason about these properties. Such a model would incorporate a human's computational capabilities and the properties of the physical objects. The model would then be used to reason about, and prove security of, the integrity and privacy properties of remote voting protocols such as Remotegrity (used for absentee voting by the City of Takoma Park for its 2011 municipal election).

In the short term, this project will focus on the development of the model of humans and the use of physical objects such as paper, and on the security properties of the remote voting protocol Remotegrity. In the longer term, in addition to the general problem of the voting protocol, there are other problems where it is important to consider the fact that not all protocol participants are computers. For example, when a human logs into a website to make a financial transaction (such as a bank website, or a retirement account, or an e-commerce site), the human uses an untrusted computer and hence cannot be expected to correctly encrypt or sign messages. Can one use the techniques developed for electronic voting to develop simpler and more secure protocols that use physical objects and paper while using the untrusted computer to make the transaction? Can one prove the security properties of the proposed protocols? (ID#:14-3340)

HIGHLIGHTS and PUBLICATIONS:

  • In accomplishments to date, we have begun formally specifying two remote voting protocols: Remotegrity and Helios. The former uses paper, while the latter does not. The former appears to be "more secure", in particular in its ability to resolve disputes between the voting system or voting computer (which might claim that it encrypted the vote correctly) and the voter (who might dispute this claim). This project is rigorously examining this difference between the two protocols.

12. User-Centered Design for Security (UMD)

SUMMARY: Our goal is to better understand human behavior within security systems and, through that knowledge, to propose, design, and build better security systems. There are several research thrusts involved in meeting this challenge: Understanding, Measuring, and Applying User Perceptions of Security and Usability; Measuring Queuing Language in User Graphical Password Selection; and Improving Password Memorability. This project also addresses the hard problem of Security-Metrics-Driven Evaluation, Design, Development, and Deployment. (ID#:14-3341)

HIGHLIGHTS and PUBLICATIONS:

  • We have developed and pilot-tested an experiment for improving password memorability through a timed reminder service based on principles of cognitive psychology. We are testing whether users, when prompted to log in on a schedule with increasingly distant time periods, will better remember their passwords for multiple sites (a minimal sketch of such a schedule follows this list). If our hypothesis is correct, this will be one way to leverage lessons of HCI, cognitive science, and psychology to improve the security of systems through better understanding human behavior.
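
A minimal sketch of an expanding-rehearsal reminder schedule of the kind described above; the doubling rule and the starting gap are illustrative assumptions, not the study's actual protocol.

    from datetime import date, timedelta

    def reminder_schedule(start: date, first_gap_days: int = 1, logins: int = 6):
        """Yield login-prompt dates with increasingly distant gaps
        (here each gap simply doubles)."""
        gap = first_gap_days
        when = start
        for _ in range(logins):
            when = when + timedelta(days=gap)
            yield when
            gap *= 2

    for d in reminder_schedule(date(2014, 10, 1)):
        print(d)  # Oct 2, Oct 4, Oct 8, Oct 16, Nov 1, Dec 3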

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Publications of Interest

Publications of Interest


The Publications of Interest section contains bibliographical citations, abstracts if available, and links on specific topics and research problems of interest to the Science of Security community.

How recent are these publications?

These bibliographies include recent scholarly research on topics which have been presented or published within the past year. Some represent updates from work presented in previous years; others are new topics.

How are topics selected?

The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness for current researchers.

How can I submit or suggest a publication?

Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.

Submissions and suggestions may be sent to: research (at) securedatabank.net

(ID#:14-3138)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Big Data

Big Data


Big data security is a growing area of interest for researchers. The work presented here ranges from cyber-threat detection in critical infrastructures to privacy protection. This work was presented and published in the first half of 2014.

  • Abawajy, J.; Kelarev, A; Chowdhury, M., "Large Iterative Multitier Ensemble Classifiers for Security of Big Data," Emerging Topics in Computing, IEEE Transactions on, vol. PP, no.99, pp.1,1, April 2014. doi: 10.1109/TETC.2014.2316510 This article introduces and investigates Large Iterative Multitier Ensemble (LIME) classifiers specifically tailored for Big Data. These classifiers are very large, but are quite easy to generate and use. They can be so large that it makes sense to use them only for Big Data. They are generated automatically as a result of several iterations in applying ensemble meta classifiers. They incorporate diverse ensemble meta classifiers into several tiers simultaneously and combine them into one automatically generated iterative system so that many ensemble meta classifiers function as integral parts of other ensemble meta classifiers at higher tiers. In this paper, we carry out a comprehensive investigation of the performance of LIME classifiers for a problem concerning security of big data. Our experiments compare LIME classifiers with various base classifiers and standard ordinary ensemble meta classifiers. The results obtained demonstrate that LIME classifiers can significantly increase the accuracy of classifications. LIME classifiers performed better than the base classifiers and standard ensemble meta classifiers.
    Keywords: Big data; Data handling; Data mining; Data storage systems; Information management; Iterative methods; Malware (ID#:14-2639)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6808522&isnumber=6558478
  • Hurst, W.; Merabti, M.; Fergus, P., "Big Data Analysis Techniques for Cyber-threat Detection in Critical Infrastructures," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp.916, 921, 13-16 May 2014. doi: 10.1109/WAINA.2014.141 The research presented in this paper offers a way of supporting the security currently in place in critical infrastructures by using behavioral observation and big data analysis techniques to add to the Defense in Depth (DiD). As this work demonstrates, applying behavioral observation to critical infrastructure protection has effective results. Our design for Behavioral Observation for Critical Infrastructure Security Support (BOCISS) processes simulated critical infrastructure data to detect anomalies which constitute threats to the system. This is achieved using feature extraction and data classification. The data is provided by the development of a nuclear power plant simulation using Siemens Tecnomatix Plant Simulator and the programming language SimTalk. Using this simulation, extensive realistic data sets are constructed and collected, when the system is functioning as normal and during a cyber-attack scenario. The big data analysis techniques, classification results and an assessment of the outcomes is presented.
    Keywords: Big Data; critical infrastructures; feature extraction; pattern classification; programming languages; security of data; BOCISS process; DiD; Siemens Tecnomatix Plant Simulator; anomaly detection; behavioral observation; big data analysis techniques ;critical infrastructure protection ;critical infrastructure security support process; cyber-attack scenario; cyber-threat detection; data classification; defence in depth; feature extraction; nuclear power plant simulation; programming language SimTalk; realistic data set; simulated critical infrastructure data; Big data; Data models; Feature extraction; Inductors; Security; Support vector machine classification; Water resources; Behavioral Observation; Big Data; Critical Infrastructure; Data Classification; Simulation (ID#:14-2640)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844756&isnumber=6844560
  • Demchenko, Y.; de Laat, C.; Membrey, P., "Defining Architecture Components of the Big Data Ecosystem," Collaboration Technologies and Systems (CTS), 2014 International Conference on, pp.104,112, 19-23 May 2014. doi: 10.1109/CTS.2014.6867550 Big Data are becoming a new technology focus both in science and in industry and motivate technology shift to data centric architecture and operational models. There is a vital need to define the basic information/semantic models, architecture components and operational models that together comprise a so-called Big Data Ecosystem. This paper discusses a nature of Big Data that may originate from different scientific, industry and social activity domains and proposes improved Big Data definition that includes the following parts: Big Data properties (also called Big Data 5V: Volume, Velocity, Variety, Value and Veracity), data models and structures, data analytics, infrastructure and security. The paper discusses paradigm change from traditional host or service based to data centric architecture and operational models in Big Data. The Big Data Architecture Framework (BDAF) is proposed to address all aspects of the Big Data Ecosystem and includes the following components: Big Data Infrastructure, Big Data Analytics, Data structures and models, Big Data Lifecycle Management, Big Data Security. The paper analyses requirements to and provides suggestions how the mentioned above components can address the main Big Data challenges. The presented work intends to provide a consolidated view of the Big Data phenomena and related challenges to modern technologies, and initiate wide discussion.
    Keywords: Big Data; data analysis; security of data; BDAF; Big Data analytics; Big Data architecture framework; Big Data ecosystem; Big Data infrastructure; Big Data lifecycle management; Big Data properties ;Big Data security; data analytics; data centric architecture; data infrastructure; data models; data operational models; data security; data structures; information-semantic models; value property; variety property; velocity property; veracity property; volume property; Big data; Biological system modeling; Computer architecture; Data models; Ecosystems; Industries; Security; Big Data Architecture Framework (BDAF); Big Data Ecosystem; Big Data Infrastructure (BDI); Big Data Lifecycle Management (BDLM);Big Data Technology; Cloud based Big Data Infrastructure Services (ID#:14-2641)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6867550&isnumber=6867522
  • Rongxing Lu; Hui Zhu; Ximeng Liu; Liu, J.K.; Jun Shao, "Toward Efficient And Privacy-Preserving Computing In Big Data Era," Network, IEEE , vol.28, no.4, pp.46,50, July-August 2014. doi: 10.1109/MNET.2014.6863131 Big data, because it can mine new knowledge for economic growth and technical innovation, has recently received considerable attention, and many research efforts have been directed to big data processing due to its high volume, velocity, and variety (referred to as "3V") challenges. However, in addition to the 3V challenges, the flourishing of big data also hinges on fully understanding and managing newly arising security and privacy challenges. If data are not authentic, new mined knowledge will be unconvincing; while if privacy is not well addressed, people may be reluctant to share their data. Because security has been investigated as a new dimension, "veracity," in big data, in this article, we aim to exploit new challenges of big data in terms of privacy, and devote our attention toward efficient and privacy-preserving computing in the big data era. Specifically, we first formalize the general architecture of big data analytics, identify the corresponding privacy requirements, and introduce an efficient and privacy-preserving cosine similarity computing protocol as an example in response to data mining's efficiency and privacy requirements in the big data era.
    Keywords: Big Data; data analysis; data mining; data privacy; security of data; big data analytics; big data era; big data processing; data mining efficiency; privacy requirements; privacy-preserving cosine similarity computing protocol; security; Authentication; Big data; Cryptography; Data privacy; Economics ;Information analysis; Privacy (ID#:14-2642)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6863131&isnumber=6863119 (A short worked example of cosine similarity appears after these citations.)
  • Kan Yang; Xiaohua Jia; Kui Ren; Ruitao Xie; Liusheng Huang, "Enabling Efficient Access Control With Dynamic Policy Updating For Big Data In The Cloud," INFOCOM, 2014 Proceedings IEEE, pp.2013,2021, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848142 Due to the high volume and velocity of big data, it is an effective option to store big data in the cloud, because the cloud has capabilities of storing big data and processing high volume of user access requests. Attribute-Based Encryption (ABE) is a promising technique to ensure the end-to-end security of big data in the cloud. However, the policy updating has always been a challenging issue when ABE is used to construct access control schemes. A trivial implementation is to let data owners retrieve the data and re-encrypt it under the new access policy, and then send it back to the cloud. This method incurs a high communication overhead and heavy computation burden on data owners. In this paper, we propose a novel scheme that enabling efficient access control with dynamic policy updating for big data in the cloud. We focus on developing an outsourced policy updating method for ABE systems. Our method can avoid the transmission of encrypted data and minimize the computation work of data owners, by making use of the previously encrypted data with old access policies. Moreover, we also design policy updating algorithms for different types of access policies. The analysis shows that our scheme is correct, complete, secure and efficient.
    Keywords: Big Data; authorisation; cloud computing ;cryptography; ABE; Big Data; access control; access policy; attribute-based encryption; cloud; dynamic policy updating; end-to-end security ;outsourced policy updating method; Access control; Big data; Encryption; Public key; Servers;ABE; Access Control; Big Data; Cloud; Policy Updating (ID#:14-2643)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848142&isnumber=6847911
  • Xindong Wu; Xingquan Zhu; Gong-Qing Wu; Wei Ding, "Data Mining With Big Data," Knowledge and Data Engineering, IEEE Transactions on, vol.26, no.1, pp.97,107, Jan. 2014. doi: 10.1109/TKDE.2013.109 Big Data concern large-volume, complex, growing data sets with multiple, autonomous sources. With the fast development of networking, data storage, and the data collection capacity, Big Data are now rapidly expanding in all science and engineering domains, including physical, biological and biomedical sciences. This paper presents a HACE theorem that characterizes the features of the Big Data revolution, and proposes a Big Data processing model, from the data mining perspective. This data-driven model involves demand-driven aggregation of information sources, mining and analysis, user interest modeling, and security and privacy considerations. We analyze the challenging issues in the data-driven model and also in the Big Data revolution.
    Keywords: data mining; user modelling; Big Data processing model; Big Data revolution ;HACE theorem; data collection capacity; data driven model; data mining; data storage; demand driven aggregation; growing data sets; information sources; networking; user interest modeling; Data handling; Data models; Data privacy; Data storage systems; Distributed databases; Information management; Big Data; autonomous sources; complex and evolving associations; data mining; heterogeneity (ID#:14-2644)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6547630&isnumber=6674933
  • Sandryhaila, A; Moura, J., "Big Data Analysis with Signal Processing on Graphs: Representation and processing of massive data sets with irregular structure," Signal Processing Magazine, IEEE, vol.31, no.5, pp.80, 90, Sept. 2014. doi: 10.1109/MSP.2014.2329213 Analysis and processing of very large data sets, or big data, poses a significant challenge. Massive data sets are collected and studied in numerous domains, from engineering sciences to social networks, biomolecular research, commerce, and security. Extracting valuable information from big data requires innovative approaches that efficiently process large amounts of data as well as handle and, moreover, utilize their structure. This article discusses a paradigm for large-scale data analysis based on the discrete signal processing (DSP) on graphs (DSPG). DSPG extends signal processing concepts and methodologies from the classical signal processing theory to data indexed by general graphs. Big data analysis presents several challenges to DSPG, in particular, in filtering and frequency analysis of very large data sets. We review fundamental concepts of DSPG, including graph signals and graph filters, graph Fourier transform, graph frequency, and spectrum ordering, and compare them with their counterparts from the classical signal processing theory. We then consider product graphs as a graph model that helps extend the application of DSPG methods to large data sets through efficient implementation based on parallelization and vectorization. We relate the presented framework to existing methods for large-scale data processing and illustrate it with an application to data compression.
    Keywords: Big data; Data storage; Digital signal processing; Fourier transforms; Graph theory; Information analysis; Information processing; Time series analysis (ID#:14-2645)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6879640&isnumber=6879573 (A small graph Fourier transform example appears after these citations.)
  • Peng Li; Song Guo, "Load Balancing For Privacy-Preserving Access To Big Data In Cloud," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, pp.524,528, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849286 In the era of big data, many users and companies start to move their data to cloud storage to simplify data management and reduce data maintenance cost. However, security and privacy issues become major concerns because third-party cloud service providers are not always trusty. Although data contents can be protected by encryption, the access patterns that contain important information are still exposed to clouds or malicious attackers. In this paper, we apply the ORAM algorithm to enable privacy-preserving access to big data that are deployed in distributed file systems built upon hundreds or thousands of servers in a single or multiple geo-distributed cloud sites. Since the ORAM algorithm would lead to serious access load unbalance among storage servers, we study a data placement problem to achieve a load balanced storage system with improved availability and responsiveness. Due to the NP-hardness of this problem, we propose a low-complexity algorithm that can deal with large-scale problem size with respect to big data. Extensive simulations are conducted to show that our proposed algorithm finds results close to the optimal solution, and significantly outperforms a random data placement algorithm.
    Keywords: Big Data; cloud computing; computational complexity; data protection; distributed databases; file servers; information retrieval; random processes; resource allocation; storage management; Big Data; NP-hardness; ORAM algorithm; cloud storage; data availability; data content protection; data maintenance cost reduction; data management; data placement problem; data security; distributed file system; encryption; file server; geo-distributed cloud site; load balanced storage system; low-complexity algorithm; privacy preserving access; random data placement algorithm; responsiveness; storage server; Big data; Cloud computing; Conferences; Data privacy; Random access memory; Security; Servers (ID#:14-2646)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849286&isnumber=6849127
  • Du, Nan; Manjunath, Niveditha; Shuai, Yao; Burger, Danilo; Skorupa, Ilona; Schuffny, Rene; Mayr, Christian; Basov, Dimitri N.; Di Ventra, Massimiliano; Schmidt, Oliver G.; Schmidt, Heidemarie, "Novel Implementation Of Memristive Systems For Data Encryption And Obfuscation," Journal of Applied Physics, vol. 115, no.12, pp.124501,124501-7, Mar 2014. doi: 10.1063/1.4869262 With the rise of big data handling, new solutions are required to drive cryptographic algorithms for maintaining data security. Here, we exploit the nonvolatile, nonlinear resistance change in BiFeO3 memristors [Shuai et al., J. Appl. Phys. 109, 124117 (2011)] by applying a voltage for the generation of second and higher harmonics and develop a new memristor-based encoding system from it to encrypt and obfuscate data. It is found that a BiFeO3 memristor in high and low resistance state can be used to generate two clearly distinguishable sets of second and higher harmonics as recently predicted theoretically [Cohen et al., Appl. Phys. Lett. 100, 133109 (2012)]. The computed autocorrelation of encrypted data using higher harmonics generated by a BiFeO3 memristor shows that the encoded data distribute randomly.
    Keywords: (not provided) (ID#:14-2647)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778720&isnumber=6777935
  • Kaushik, A; Satvika; Gupta, K.; Kumar, A, "Digital Image Chaotic Encryption (DICE - A Partial-Symmetric Key Cipher For Digital Images)," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, pp.314,317, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798345 The swift growth of communication facilities and the ever-decreasing cost of computer hardware have brought tremendous possibilities of expansion for commercial and academic purposes. With widely used communication channels like the Internet, not only the good guys but also the bad guys have an advantage. Hackers and crackers can take advantage of network vulnerabilities and pose a big threat to network security personnel. Information can be transferred by means of textual data, digital images, videos, animations, etc., and thus requires better defense. In particular, images are more visual and descriptive than textual data; hence they act as a momentous means of communication in the modern world. Protection of digital images during transmission becomes a more serious concern when they are confidential war plans, top-secret weapon photographs, stealthy military data, surreptitious architectural designs of financial buildings, etc. Several mechanisms like cryptography, steganography, hash functions, and digital signatures have been designed to provide the ultimate safety for secret data. When the data is in the form of digital images, certain features of images, like high redundancy, strong correlation between neighboring pixels, and abundance of information expression, need some extra fortification during transmission. This paper proposes a new cryptographic cipher named Digital Image Chaotic Encryption (DICE) to meet the special requisites of secure image transfer. The strength of DICE lies in its partial-symmetric key nature, i.e., even discovery of the encryption key by a hacker will not guarantee decoding of the original message.
    Keywords: computer network security; cryptography; image processing; DICE ;Internet; digital image chaotic encryption; digital images protection; digital signatures; hash functions; network security personnel; partial-symmetric key cipher; steganography; Algorithm design and analysis; Biomedical imaging; Encryption; Standards; Block cipher; DICE Partial-Symmetric key algorithm; Digital watermarking (ID#:14-2648)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798345&isnumber=6798279
  • Haoliang Lou; Yunlong Ma; Feng Zhang; Min Liu; Weiming Shen, "Data Mining For Privacy Preserving Association Rules Based On Improved MASK Algorithm," Computer Supported Cooperative Work in Design (CSCWD), Proceedings of the 2014 IEEE 18th International Conference on, pp.265,270, 21-23 May 2014. doi: 10.1109/CSCWD.2014.6846853 With the arrival of the big data era, information privacy and security issues become even more crucial. The Mining Associations with Secrecy Konstraints (MASK) algorithm and its improved versions were proposed as data mining approaches for privacy preserving association rules. The MASK algorithm only adopts a data perturbation strategy, which leads to a low privacy-preserving degree. Moreover, it is difficult to apply the MASK algorithm into practices because of its long execution time. This paper proposes a new algorithm based on data perturbation and query restriction (DPQR) to improve the privacy-preserving degree by multi-parameters perturbation. In order to improve the time-efficiency, the calculation to obtain an inverse matrix is simplified by dividing the matrix into blocks; meanwhile, a further optimization is provided to reduce the number of scanning database by set theory. Both theoretical analyses and experiment results prove that the proposed DPQR algorithm has better performance.
    Keywords: data mining; data privacy; matrix algebra; query processing; DPQR algorithm; data mining; data perturbation and query restriction; data perturbation strategy; improved MASK algorithm ;information privacy ;inverse matrix; mining associations with secrecy constraints; privacy preserving association rules; scanning database; security issues; Algorithm design and analysis; Association rules; Data privacy; Itemsets; Time complexity ;Data mining; association rules; multi-parameters perturbation; privacy preservation(ID#:14-2649)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846853&isnumber=6846800 (A small sketch of MASK-style perturbation appears after these citations.)
  • Beigh, B.M., "One-stop: A novel hybrid model for intrusion detection system," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp.798,805, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828072 Organizations pay huge amounts solely to secure their confidential data from attackers or intruders, but hackers are often sharp enough to crack the security of an organization. Therefore, before they make a security breach, let us hunt them down and alert the organization so that it can protect its confidential data. For the above-mentioned purpose, intrusion detection systems came into existence. But current systems are not capable enough to detect all the attacks coming toward them. In order to fix the problem of detecting novel attacks and reducing the number of false alarms, in this paper we propose a hybrid model for an intrusion detection system, which has an enhanced ability to detect unknown attacks via anomaly-based detection and also has a module which tries to reduce the number of false alarms generated by the system.
    Keywords: security of data; anomaly based detection; confidential data; false alarm reduction; intrusion detection system; one-stop model; organization security; security breach; Databases; Decoding; Engines; Hybrid power systems ;Intrusion detection; Organizations; Intrusion; attack; availability; confidentiality; detection; information; integrity; mitigate (ID#:14-2650)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828072&isnumber=6827395
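
As context for the Rongxing Lu et al. entry above, the quantity their privacy-preserving protocol computes is ordinary cosine similarity. The following is a minimal sketch of the plain (non-private) computation only; the protocol itself, which hides the raw vectors from the other party, is not reproduced here.

    import math

    def cosine_similarity(x, y):
        """cos(x, y) = <x, y> / (|x| * |y|): the similarity score that a
        privacy-preserving protocol would compute without revealing x or y."""
        dot = sum(a * b for a, b in zip(x, y))
        norm_x = math.sqrt(sum(a * a for a in x))
        norm_y = math.sqrt(sum(b * b for b in y))
        return dot / (norm_x * norm_y)

    print(cosine_similarity([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]))  # 1.0 (parallel vectors)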
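
For the Sandryhaila and Moura entry above, the graph Fourier transform they review amounts to expanding a graph signal in the eigenbasis of a matrix associated with the graph. Below is a small NumPy sketch using the Laplacian of a four-node path graph; the choice of graph, and of the Laplacian rather than the adjacency matrix the authors also discuss, is an illustrative assumption.

    import numpy as np

    # Path graph on 4 nodes: 0-1-2-3.
    A = np.array([[0, 1, 0, 0],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

    eigvals, eigvecs = np.linalg.eigh(L)    # eigenvectors form the graph Fourier basis
    signal = np.array([1.0, 2.0, 3.0, 4.0]) # a signal indexed by the nodes

    spectrum = eigvecs.T @ signal           # graph Fourier transform
    recovered = eigvecs @ spectrum          # inverse transform
    assert np.allclose(recovered, signal)
    print(spectrum)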
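
The DPQR entry above builds on MASK-style data perturbation: flip each boolean item with a fixed probability before mining, then invert the distortion when estimating supports. The sketch below covers only a single item with an assumed retention probability; correcting itemset supports, where the papers' real work lies, is omitted.

    import random

    def perturb(bits, p=0.9):
        """MASK-style distortion: keep each bit with probability p,
        flip it with probability 1 - p."""
        return [b if random.random() < p else 1 - b for b in bits]

    def estimate_support(perturbed, p=0.9):
        """Invert the distortion for a single item: if s is the true support
        and r the observed rate, r = s*p + (1-s)*(1-p), so
        s = (r - (1 - p)) / (2*p - 1)."""
        r = sum(perturbed) / len(perturbed)
        return (r - (1 - p)) / (2 * p - 1)

    random.seed(1)
    true_bits = [1] * 3000 + [0] * 7000      # true support is 0.30
    noisy = perturb(true_bits)
    print(round(estimate_support(noisy), 3))  # close to 0.30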

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Browser Security

Browser Security


Web browser exploits are a common attack vector. Research into browser security in the first three quarters of 2014 has looked at the common browsers and add-ons to address both specific and general problems. Included in the articles cited here are some addressing cross-site scripting, hardware virtualization, bothounds, system call monitoring, and phishing detection.

  • Barnes, R.; Thomson, M., "Browser-to-Browser Security Assurances for WebRTC," Internet Computing, IEEE, vol. PP, no. 99, pp.1, 1, September, 2014. doi: 10.1109/MIC.2014.106 For several years, browsers have been able to assure a user that he is talking to a specific, identified web site, protected from network-based attackers. In email, messaging, and other applications where sites act as intermediaries, there is a need for additional protections to provide end-to-end security. In this article we describe the approach that WebRTC takes to providing end-to-end security, leveraging both the flexibility of JavaScript and the ability of browsers to create constraints through JavaScript APIs.
    Keywords: Browsers; Cameras; Internet; Media (ID#:14-2838)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6894480&isnumber=5226613
  • Abgrall, E.; Le Traon, Y.; Gombault, S.; Monperrus, M., "Empirical Investigation of the Web Browser Attack Surface under Cross-Site Scripting: An Urgent Need for Systematic Security Regression Testing," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, pp.34,41, March 31 2014-April 4 2014. doi: 10.1109/ICSTW.2014.63 One of the major threats against web applications is Cross-Site Scripting (XSS). The final target of XSS attacks is the client running a particular web browser. During this last decade, several competing web browsers (IE, Netscape, Chrome, Firefox) have evolved to support new features. In this paper, we explore whether the evolution of web browsers is done using systematic security regression testing. Beginning with an analysis of their current degree of exposure to XSS, we extend the empirical study to a decade of the most popular web browser versions. We use XSS attack vectors as unit test cases and we propose a new method, supported by a tool, to address this XSS vector testing issue. The analysis of a decade of releases of the most popular web browsers, including mobile ones, shows an urgent need for XSS regression testing. We advocate the use of a shared security testing benchmark as a good practice and propose a first set of publicly available XSS vectors as a basis to ensure that security is not sacrificed when a new version is delivered. (An illustrative vector-based test harness appears at the end of this section.)
    Keywords: online front-ends; regression analysis; security of data; Web applications; Web browser attack surface; XSS vector testing; cross-site scripting; systematic security regression testing; Browsers; HTML; Mobile communication; Payloads; Security; Testing; Vectors; XSS; browser; regression; security; testing; web (ID#:14-2839)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825636&isnumber=6825623
  • Xin Wu, "Secure Browser Architecture Based on Hardware Virtualization," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.489, 495, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779009 Ensuring the entire code base of a browser to deal with the security concerns of integrity and confidentiality is a daunting task. The basic method is to split it into different components and place each of them in its own protection domain. OS processes are the prevalent isolation mechanism to implement the protection domain, which results in expensive context-switching overheads produced by Inter-Process Communication (IPC). Besides, the dependence of multiple web instance processes on a single set of privileged ones reduces the entire concurrency. In this paper, we present a secure browser architecture design based on a processor virtualization technique. First, we divide the browser code base into privileged components and constrained components, which consist of distrusted web page renderer components and plugins. All constrained components are in the form of shared object (SO) libraries. Second, we create an isolated execution environment for each distrusted shared object library using the hardware virtualization support available in modern Intel and AMD processors. Different from current research, we design a custom kernel module to gain the hardware virtualization capabilities. Third, to enhance the entire security of the browser, we implement a validation mechanism to check OS resource accesses from the distrusted web page renderer to the privileged components. Our validation rules are similar to Google Chrome's. By utilizing VMENTER and VMEXIT, which are both CPU instructions, our approach can substantially improve system performance.
    Keywords: microprocessor chips; online front-ends; operating systems (computers); security of data; software libraries; virtualisation; AMD processors; CPU instructions; Google chrome; IPC; Intel processors; OS processes; OS resource checking; SO libraries; VMENTER; VMEXIT; browser security; context-switching overheads; distrusted Web page renderer components; distrusted shared object library; hardware virtualization capabilities; Interprocess communication; isolated execution environment; isolation mechanism; multiple Web instance processes; processor virtualization technique; secure browser architecture design; validation mechanism; Browsers; Google; Hardware; Monitoring; Security; Virtualization; Web pages; Browser security; Component isolation; Hardware virtualization; System call interposition (ID#:14-2840)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779009&isnumber=6778899
  • Wadkar, H.; Mishra, A; Dixit, A, "Prevention of Information Leakages In A Web Browser By Monitoring System Calls," Advance Computing Conference (IACC), 2014 IEEE International, pp.199,204, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779320 The web browser has become one of the most accessed processes/applications in recent years. The latest website security statistics report that about 30% of vulnerability attacks happen due to information leakage by the browser application and its use by hackers to exploit the privacy of an individual. This leaked information is one of the main sources for hackers to attack an individual's PC or to make the PC a part of a botnet. A software controller is proposed to track system calls invoked by the browser process. The designed prototype deals with the system calls which perform operations related to reading, writing, and accessing personal and/or system information. The objective of the controller is to confine the leakage of information by a browser process.
    Keywords: Web sites; online front-ends; security of data; Web browser application; Web site security statistics report; botnet; browser process; monitoring system calls; software controller; system information leakages; track system calls; vulnerability attacks; Browsers; Computer hacking; Monitoring; Privacy; Process control; Software; browser security; confinement; information leakage (ID#:14-2841)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779320&isnumber=6779283
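    A minimal sketch of the confinement idea (not the authors' controller, which works at the system-call level): the Python fragment below wraps the interpreter's own open call with a hypothetical allowlist policy. The profile path and the policy itself are invented for illustration.

        import builtins

        # Hypothetical policy: the browser process may only touch its own profile.
        ALLOWED_PREFIXES = ("/tmp/browser-profile/",)
        _real_open = builtins.open

        def guarded_open(path, mode="r", *args, **kwargs):
            # Deny access outside the allowlist: a much-simplified stand-in for
            # confining the read/write/access calls a browser process may invoke.
            if not str(path).startswith(ALLOWED_PREFIXES):
                raise PermissionError("blocked access to %s" % path)
            return _real_open(path, mode, *args, **kwargs)

        builtins.open = guarded_open
        # open("/etc/passwd")  # would now raise PermissionError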
  • Shamsi, J.A; Hameed, S.; Rahman, W.; Zuberi, F.; Altaf, K.; Amjad, A, "Clicksafe: Providing Security Against Clickjacking Attacks," High-Assurance Systems Engineering (HASE), 2014 IEEE 15th International Symposium on, pp.206,210, 9-11 Jan. 2014. doi: 10.1109/HASE.2014.36 Clickjacking is the act of hijacking user clicks in order to perform undesired actions which are beneficial for the attacker. We propose Clicksafe, a browser-based tool providing increased security and reliability against clickjacking attacks. Clicksafe is based on three major components. The detection unit detects malicious components in a web page that redirect users to external links. The mitigation unit intercepts user clicks and gives educated warnings to users, who can then choose whether or not to continue. Clicksafe also incorporates a feedback unit which records the user's actions, converts them into ratings and allows future interactions to be more informed. Clicksafe differs from other similar tools in that its detection and mitigation are based on a comprehensive framework which combines detection of malicious web components with user feedback. We explain the mechanism of Clicksafe, describe its performance, and highlight its potential in providing safety against clickjacking to a large number of users.
    Keywords: Internet; online front-ends; security of data; Clicksafe; Web page; browser-based tool; click safe; clickjacking attacks; detection unit; feedback unit; malicious Web component detection; mitigation unit; Browsers; Communities; Computers; Context; Loading; Safety; Security; Browser Security; Clickjacking; Safety; Security; Soft assurance of safe browsing (ID#:14-2842)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6754607&isnumber=6754569
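    As a rough sketch of what a detection unit of this kind might look for (the paper's framework is far richer), the following Python fragment uses the standard html.parser module to flag anchors whose href leads off the page's own domain; the domains are made up.

        from html.parser import HTMLParser
        from urllib.parse import urlparse

        class ExternalLinkDetector(HTMLParser):
            # Collect <a> tags whose href leaves the hosting domain: a toy
            # analogue of detecting components that redirect users externally.
            def __init__(self, page_domain):
                super().__init__()
                self.page_domain = page_domain
                self.external = []

            def handle_starttag(self, tag, attrs):
                if tag != "a":
                    return
                href = dict(attrs).get("href") or ""
                host = urlparse(href).netloc
                if host and host != self.page_domain:
                    self.external.append(href)

        detector = ExternalLinkDetector("example.com")
        detector.feed('<a href="http://evil.example.net/win">Click me</a>')
        print(detector.external)  # ['http://evil.example.net/win']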
  • Mohammad, R.M.; Thabtah, F.; McCluskey, L., "Intelligent Rule-Based Phishing Websites Classification," Information Security, IET, vol.8, no.3, pp.153,160, May 2014. doi: 10.1049/iet-ifs.2013.0202 Phishing is described as the art of mimicking the website of a creditable firm in order to grab users' private information such as usernames, passwords and social security numbers. Phishing websites comprise a variety of cues within their content as well as in the browser-based security indicators provided along with the website. Several solutions have been proposed to tackle phishing. Nevertheless, there is no single magic bullet that can solve this threat radically. One of the promising techniques that can be employed in predicting phishing attacks is based on data mining, particularly the 'induction of classification rules', since anti-phishing solutions aim to predict the website class accurately, which exactly matches the goals of data mining classification techniques. In this study, the authors shed light on the important features that distinguish phishing websites from legitimate ones and assess how good rule-based data mining classification techniques are at predicting phishing websites, and which classification technique proves more reliable.
    Keywords: Web sites; data mining; data privacy; pattern classification; security of data; unsolicited e-mail; Web site echoing; Website class; antiphishing solutions; browser-based security indicators; creditable firm; intelligent rule-based phishing Web site classification; phishing attack prediction; rule-based data mining classification techniques; social security number; user private information (ID#:14-2843)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786863&isnumber=6786849
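    To make the flavor of rule-based phishing features concrete, here is a hedged Python sketch scoring a URL against a few features commonly cited in this literature. The feature set and thresholds are illustrative only, not the authors' induced rules.

        import re
        from urllib.parse import urlparse

        def phishing_score(url):
            # Each matched rule adds one point; higher scores are more suspicious.
            score = 0
            parsed = urlparse(url)
            if re.match(r"^\d{1,3}(\.\d{1,3}){3}$", parsed.netloc):
                score += 1  # raw IP address instead of a domain name
            if "@" in url:
                score += 1  # '@' can hide the real destination
            if len(url) > 75:
                score += 1  # unusually long URL
            if parsed.netloc.count("-") > 1:
                score += 1  # hyphen-laden look-alike domain
            return score

        print(phishing_score("http://192.168.0.1/login@secure-bank-update.com/verify"))  # 2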
  • Phung, P.; Monshizadeh, M.; Sridhar, M.; Hamlen, K.; Venkatakrishnan, V., "Between Worlds: Securing Mixed JavaScript/ActionScript Multi-party Web Content," Dependable and Secure Computing, IEEE Transactions on, vol. PP, no.99, pp.1, 1, September 2014. doi: 10.1109/TDSC.2014.2355847 Mixed Flash and JavaScript content has become increasingly prevalent; its purveyance of dynamic features unique to each platform has popularized it for myriad web development projects. Although Flash and JavaScript security has been examined extensively, the security of untrusted content that combines both has received considerably less attention. This article considers this fusion in detail, outlining several practical scenarios that threaten the security of web applications. The severity of these attacks warrants the development of new techniques that address the security of Flash-JavaScript content considered as a whole, in contrast to prior solutions that have examined Flash or JavaScript security individually. Toward this end, the article presents FlashJaX, a cross-platform solution that enforces fine-grained, history-based policies that span both Flash and JavaScript. Using in-lined reference monitoring, FlashJaX safely embeds untrusted JavaScript and Flash content in web pages without modifying browser clients or using special plug-ins. The architecture of FlashJaX, its design and implementation, and a detailed security analysis are exposited. Experiments with advertisements from popular ad networks demonstrate that FlashJaX is transparent to policy-compliant advertisement content, yet blocks many common attack vectors that exploit the fusion of these web platforms.
    Keywords: Browsers; Engines; Mediation; Monitoring; Payloads; Runtime; Security (ID#:14-2844)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6894186&isnumber=4358699
  • Byungho Min; Varadharajan, V., "A New Technique for Counteracting Web Browser Exploits," Software Engineering Conference (ASWEC), 2014 23rd Australian, pp.132,141, 7-10 April 2014. doi: 10.1109/ASWEC.2014.28 Over the last few years, exploit kits have been increasingly used for system compromise and malware propagation. As they target the web browser, one of the most commonly used applications in the Internet era, exploit kits have become a major concern of the security community. In this paper, we propose a proactive approach to protecting vulnerable systems from this prevalent cyber threat. Our technique intercepts communications between the web browser and web pages, and proactively blocks the execution of exploit kits using version information of web browser plugins. Our system, AFFAF, is a zero-configuration solution, and hence users do not need to do anything but simply install it. Also, it is an easy-to-employ methodology from the perspective of plugin developers. We have implemented a lightweight prototype, which has demonstrated that AFFAF-protected vulnerable systems can counteract 50 real-world and one locally deployed exploit kit URLs. Tested exploit kits include popular and well-maintained ones such as Blackhole 2.0, Redkit, Sakura, Cool and Bleeding Life 2. We have also shown that the false positive rate of AFFAF is virtually zero, and that it is robust enough to be effective against real web browser plugin scanners.
    Keywords: Internet; invasive software; online front-ends; AFFAF protected vulnerable systems; Internet; Web browser exploits; Web browser plugin scanners; Web pages; cyber threat; exploit kit URL; lightweight prototype; malware propagation; security community; system compromise; version information; zero-configuration solution; browsers; Java; Malware; Prototypes; Software; Web sites; Defensive Techniques; Exploit Kits; Security Attacks (ID#:14-2845)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824118&isnumber=6824087
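    AFFAF's key input is plugin version information. Under invented version data, the sketch below shows how a blocklist-style check on plugin versions could gate whether a page may reach a plugin; it illustrates the idea only and is not AFFAF's implementation.

        # Hypothetical table of latest known-vulnerable versions per plugin.
        VULNERABLE_UP_TO = {"java": "1.7.0_17", "flash": "11.2.202"}

        def parse(version):
            # Normalize "1.7.0_17" -> (1, 7, 0, 17) for tuple comparison.
            return tuple(int(p) for p in version.replace("_", ".").split("."))

        def should_block(plugin, version):
            # Block execution if the installed version is at or below a known-bad one.
            bad = VULNERABLE_UP_TO.get(plugin)
            return bad is not None and parse(version) <= parse(bad)

        print(should_block("java", "1.7.0_10"))  # True: exploitable version
        print(should_block("java", "1.8.0_25"))  # False: patched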
  • Mewara, B.; Bairwa, S.; Gajrani, J., "Browser's Defenses Against Reflected Cross-Site Scripting Attacks," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.662,667, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884928 Due to the frequent use of online web applications for various day-to-day activities, web applications have become a most attractive target for attackers. Cross-Site Scripting (XSS), one of the most prominent web-based attacks, can lead to compromise of the whole browser rather than just the web application from which the attack originated. Securing web applications using server-side solutions is not profitable, as developers are not necessarily security aware. Therefore, browser vendors have tried to evolve client-side filters to defend against these attacks. This paper shows that even the foremost prevailing XSS filters deployed by the latest versions of the most widely used web browsers do not provide appropriate defense. We evaluate three browsers - Internet Explorer 11, Google Chrome 32, and Mozilla Firefox 27 - against reflected XSS attacks on different types of vulnerabilities. We find that none of the above is completely able to defend against all possible types of reflected XSS vulnerabilities. Further, we evaluate Firefox after installing an add-on named XSS-Me, which is widely used for testing reflected XSS vulnerabilities. Experimental results show that this client-side solution can shield against a greater percentage of vulnerabilities than the other browsers. It would be more promising still if this add-on were integrated inside the browser instead of being enforced as an extension.
    Keywords: online front-ends; security of data; Google Chrome 32; Internet Explorer 11; Mozilla Firefox 27;Web based attack; Web browsers; XSS attack; XSS filters; XSS-Me;online Web applications; reflected cross-site scripting attacks; Browsers; Security; Thyristors; JavaScript; Reflected XSS; XSS-Me; attacker; bypass; exploit; filter (ID#:14-2846)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884928&isnumber=6884878
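    For intuition about how such client-side filters operate, here is a toy reflected-XSS check in Python: it flags a response that echoes back a script-bearing request parameter. Real filters, and the bypasses the paper studies, are considerably more involved.

        import re
        from urllib.parse import unquote

        SCRIPT_PATTERN = re.compile(r"<\s*script|on\w+\s*=|javascript:", re.IGNORECASE)

        def reflected_xss_suspect(request_params, response_body):
            # Suspicious if a script-like parameter value is reflected verbatim.
            for value in request_params.values():
                value = unquote(value)
                if SCRIPT_PATTERN.search(value) and value in response_body:
                    return True
            return False

        params = {"q": "<script>alert(1)</script>"}
        body = "<html>You searched for <script>alert(1)</script></html>"
        print(reflected_xss_suspect(params, body))  # True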
  • Biedermann, S.; Ruppenthal, T.; Katzenbeisser, S., "Data-centric Phishing Detection Based On Transparent Virtualization Technologies," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.215,223, 23-24 July 2014. doi: 10.1109/PST.2014.6890942 We propose a novel phishing detection architecture based on transparent virtualization technologies and isolation of its own components. The architecture can be deployed as a security extension for virtual machines (VMs) running in the cloud. It uses fine-grained VM introspection (VMI) to extract, filter and scale a color-based fingerprint of web pages which are processed by a browser from the VM's memory. By analyzing the human perceptual similarity between the fingerprints, the architecture can reveal and mitigate phishing attacks which are based on redirection to spoofed web pages and it can also detect "Man-in-the-Browser" (MitB) attacks. To the best of our knowledge, the architecture is the first anti-phishing solution leveraging virtualization technologies. We explain details about the design and the implementation and we show results of an evaluation with real-world data.
    Keywords: Web sites; cloud computing; computer crime; online front-ends; virtual machines; virtualisation; MitB attack; VM introspection; VMI; antiphishing solution; cloud; color-based fingerprint extraction; color-based fingerprint filtering; color-based fingerprint scaling; component isolation; data-centric phishing detection; human perceptual similarity; man-in-the-browser attack; phishing attacks; spoofed Web pages; transparent virtualization technologies; virtual machines; Browsers; Computer architecture; Data mining; Detectors; Image color analysis; Malware; Web pages (ID#:14-2847)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890942&isnumber=6890911
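    A simplified sketch of the color-based fingerprint idea: quantize a page image's pixels into a coarse histogram and compare two fingerprints by histogram intersection. The pixel data below is synthetic; the real system extracts and scales the image from VM memory via introspection.

        def color_fingerprint(pixels, bins=4):
            # Coarse RGB histogram, normalized so fingerprints are comparable.
            hist = [0] * (bins ** 3)
            step = 256 // bins
            for r, g, b in pixels:
                hist[(r // step) * bins * bins + (g // step) * bins + (b // step)] += 1
            total = float(len(pixels))
            return [h / total for h in hist]

        def similarity(f1, f2):
            # Histogram intersection: 1.0 means identical color distributions.
            return sum(min(a, b) for a, b in zip(f1, f2))

        legit = [(255, 255, 255)] * 90 + [(0, 60, 130)] * 10  # bank page palette
        spoof = [(255, 255, 255)] * 88 + [(0, 60, 130)] * 12  # near-identical copy
        print(similarity(color_fingerprint(legit), color_fingerprint(spoof)))  # 0.98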
  • Sayed, B.; Traore, I, "Protection against Web 2.0 Client-Side Web Attacks Using Information Flow Control," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp.261,268, 13-16 May 2014. doi: 10.1109/WAINA.2014.52 The dynamic nature of the Web 2.0 and the heavy obfuscation of web-based attacks complicate the job of traditional protection systems such as firewalls, anti-virus solutions, and IDS systems. It has been witnessed that using ready-made toolkits, cyber-criminals can launch sophisticated attacks such as cross-site scripting (XSS), cross-site request forgery (CSRF) and botnets, to name a few. In recent years, cyber-criminals have targeted legitimate websites and social networks to inject malicious scripts that compromise the security of the visitors of such websites. This involves performing actions using the victim's browser without his/her permission. This poses the need to develop effective mechanisms for protecting against Web 2.0 attacks that mainly target the end-user. In this paper, we address the above challenges from an information flow control perspective by developing a framework that restricts the flow of information on the client-side to legitimate channels. The proposed model tracks sensitive information flow and prevents information leakage from happening. The proposed model, when applied to the context of client-side web-based attacks, is expected to provide a more secure browsing environment for the end-user.
    Keywords: Internet; computer crime; data protection; invasive software; IDS systems; Web 2.0 client-side Web attacks; antivirus solutions; botnets; cross-site request forgery; cross-site scripting; cyber-criminals; firewalls; information flow control; information leakage; legitimate Web sites; malicious script injection; protection systems; secure browsing environment; social networks; Browsers; Feature extraction; Security; Semantics; Servers; Web 2.0; Web pages; AJAX; Client-side web attacks; Information Flow Control; Web 2.0 (ID#:14-2848)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844648&isnumber=6844560
  • Zarras, A; Papadogiannakis, A; Gawlik, R.; Holz, T., "Automated Generation Of Models For Fast And Precise Detection Of HTTP-Based Malware," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.249,256, 23-24 July 2014. doi: 10.1109/PST.2014.6890946 Malicious software and especially botnets are among the most important security threats in the Internet. Thus, the accurate and timely detection of such threats is of great importance. Detecting machines infected with malware by identifying their malicious activities at the network level is an appealing approach, due to the ease of deployment. Nowadays, the most common communication channels used by attackers to control the infected machines are based on the HTTP protocol. To evade detection, HTTP-based malware adapt their behavior to the communication patterns of the benign HTTP clients, such as web browsers. This poses significant challenges to existing detection approaches like signature-based and behavioral-based detection systems. In this paper, we propose BOTHOUND: a novel approach to precisely detect HTTP-based malware at the network level. The key idea is that implementations of the HTTP protocol by different entities have small but perceivable differences. Building on this observation, BOTHOUND automatically generates models for malicious and benign requests and classifies at real time the HTTP traffic of a monitored network. Our evaluation results demonstrate that BOTHOUND outperforms prior work on identifying HTTP-based botnets, being able to detect a large variety of real-world HTTP-based malware, including advanced persistent threats used in targeted attacks, with a very low percentage of classification errors.
    Keywords: Internet; invasive software; BOTHOUND approach; HTTP protocol; HTTP traffic; HTTP-based malware detection; Internet; Web browsers; behavioral-based detection system; botnets; classification errors; hypertext transfer protocol; malicious software; security threats; signature-based detection system; Accuracy; Browsers; Malware; Monitoring; Protocols; Software; Training (ID#:14-2849)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890946&isnumber=6890911
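    One perceivable difference between HTTP implementations is the order in which request headers are emitted. The sketch below classifies a request by matching its header order against a small set of invented "benign browser" signatures; BOTHOUND's automatically generated models are, of course, far more detailed, and header order is only one plausible feature.

        # Illustrative header orders for benign browsers (invented, not measured).
        BENIGN_ORDERS = {
            ("Host", "User-Agent", "Accept", "Accept-Language", "Connection"),
            ("Host", "Connection", "User-Agent", "Accept"),
        }

        def classify(header_names):
            # Benign if the emission order matches a known browser profile.
            return "benign" if tuple(header_names) in BENIGN_ORDERS else "suspicious"

        print(classify(["Host", "User-Agent", "Accept", "Accept-Language", "Connection"]))
        print(classify(["User-Agent", "Host", "Accept"]))  # odd order -> suspicious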
  • Ortiz-Yepes, D.A; Hermann, R.J.; Steinauer, H.; Buhler, P., "Bringing Strong Authentication And Transaction Security To The Realm Of Mobile Devices," IBM Journal of Research and Development, vol.58, no.1, pp.4:1,4:11, Jan.-Feb. 2014. doi: 10.1147/JRD.2013.2287810 Widespread usage of mobile devices in conjunction with malicious software attacks calls for the development of mobile-device-oriented mechanisms aiming to provide strong authentication and transaction security. This paper considers the eBanking application scenario and argues that the concept of using a trusted companion device can be ported to the mobile realm. Trusted companion devices involve established and proven techniques in the PC (personal computer) environment to secure transactions. Various options for the communication between mobile and companion devices are discussed and evaluated in terms of technical feasibility, usability, and cost. Accordingly, audio communication across the 3.5-mm audio jack (also known as the tip-ring-ring-sleeve, or TRRS, connector) is determined to be quite appropriate. We present a proof-of-concept companion device implementing binary frequency shift keying across this interface. Results from a field study performed with the proof-of-concept device further confirm the feasibility of the proposed solution.
    Keywords: Authentication; Browsers; Computer security; Malware; Mobile communication; Servers; Smart cards; Universal Serial Bus (ID#:14-2850)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6717088&isnumber=6717043
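    The proof-of-concept device signals over the audio jack using binary frequency shift keying. As a hedged illustration of BFSK itself (the tone frequencies and baud rate below are arbitrary choices, not the paper's parameters), this Python fragment synthesizes the corresponding audio samples.

        import math

        def bfsk_samples(bits, f0=1200.0, f1=2200.0, rate=44100, baud=300):
            # One tone per bit value; continuous phase avoids audible clicks.
            samples_per_bit = rate // baud
            out, phase = [], 0.0
            for bit in bits:
                freq = f1 if bit else f0
                for _ in range(samples_per_bit):
                    phase += 2 * math.pi * freq / rate
                    out.append(math.sin(phase))
            return out

        audio = bfsk_samples([1, 0, 1, 1, 0])
        print(len(audio), "samples in [-1, 1], ready for a sound card")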
  • Chuan Xu; Guofeng Zhao; Gaogang Xie; Shui Yu, "Detection on Application Layer DDoS Using Random Walk Model," Communications (ICC), 2014 IEEE International Conference on, pp.707,712, 10-14 June 2014. doi: 10.1109/ICC.2014.6883402 Application Layer Distributed Denial of Service (ALDDoS) attacks have been increasing rapidly with the growth of botnets and ubiquitous computing. Unlike earlier DDoS attacks, ALDDoS attacks cannot be efficiently detected, as attackers always issue legitimate requests from real IP addresses, and the traffic has high similarity to legitimate traffic. Even so, we argue, the attackers' browsing behavior will differ greatly from that of legitimate users. In this paper, we put forward a novel user behavior-based method to detect application layer asymmetric DDoS attacks. We introduce an extended random walk model to describe user browsing behavior and establish the legitimate pattern of browsing sequences. For each incoming browser, we observe its page request sequence and predict the subsequent page request sequence based on the random walk model. The similarity between the predicted and observed page request sequences is used as a criterion to measure the legality of the user, and attackers are then detected based on it. Evaluation results based on a real collected data set have demonstrated that our method is very effective in detecting asymmetric ALDDoS attacks.
    Keywords: computer network security; ALDDoS attacks; application layer distributed denial of service attacks; botnet; browsing sequences; extended random walk model; legitimate users; novel user behavior-based method; page request sequence; real IP address; subsequent page request sequence; ubiquitous computing; user browsing behavior; Computational modeling; Computer crime; Educational institutions; Information systems; Predictive models; Probability distribution; Vectors; Asymmetric application layer DDoS attack; anomaly detection; random walk model; similarity (ID#:14-2851)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883402&isnumber=6883277
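    The core mechanism can be sketched as a first-order random walk over pages: estimate transition probabilities from legitimate sessions, then score an observed request sequence by how probable its transitions are. The toy session data below is invented, and the paper's extended model and detection criterion are more elaborate.

        from collections import defaultdict

        def train_walk_model(sessions):
            # Estimate page-to-page transition probabilities from benign sessions.
            counts = defaultdict(lambda: defaultdict(int))
            for session in sessions:
                for a, b in zip(session, session[1:]):
                    counts[a][b] += 1
            return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
                    for a, nxt in counts.items()}

        def legality(model, session, floor=1e-6):
            # Average transition probability; low values suggest a bot.
            probs = [model.get(a, {}).get(b, floor)
                     for a, b in zip(session, session[1:])]
            return sum(probs) / len(probs)

        model = train_walk_model([["home", "news", "article"],
                                  ["home", "news", "sports"]])
        print(legality(model, ["home", "news", "article"]))     # 0.75: human-like
        print(legality(model, ["article", "home", "article"]))  # ~0: bot-like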
  • Sah, S.K.; Shakya, S.; Dhungana, H., "A Security Management For Cloud Based Applications And Services with Diameter-AAA," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp.6,11, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781243 Cloud computing offers various services and web-based applications over the internet. With the tremendous growth in the development of cloud-based services, security is the main challenge and a central concern for cloud service providers. This paper describes the management of security issues based on Diameter AAA mechanisms for authentication, authorization and accounting (AAA) as demanded by cloud service providers. This paper focuses on the integration of Diameter AAA into cloud system architecture.
    Keywords: authorisation; cloud computing; Internet; Web based applications; authentication, authorization and accounting; cloud based applications; cloud based services; cloud computing; cloud service providers; cloud system architecture; diameter AAA mechanisms; security management; Authentication; Availability; Browsers; Computational modeling; Protocols; Servers; Cloud Computing; Cloud Security; Diameter-AAA (ID#:14-2852)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781243&isnumber=6781240

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


CAPTCHA

CAPTCHA


CAPTCHA (acronym for Completely Automated Public Turing test to tell Computers and Humans Apart) technology has become a standard security tool. In the research presented here, some novel uses are presented, including an Arabic language text digitization scheme, the use of CAPTCHAs as graphical passwords, motion-based CAPTCHAs, and defeating a CAPTCHA using a gaming technique. These works were presented or published in 2014.

  • Zhu, B.B.; Yan, J.; Guanbo Bao; Maowei Yang; Ning Xu, "Captcha as Graphical Passwords--A New Security Primitive Based on Hard AI Problems," Information Forensics and Security, IEEE Transactions on, vol.9, no.6, pp.891,904, June 2014. doi: 10.1109/TIFS.2014.2312547 Many security primitives are based on hard mathematical problems. Using hard AI problems for security is emerging as an exciting new paradigm, but has been under-explored. In this paper, we present a new security primitive based on hard AI problems, namely, a novel family of graphical password systems built on top of Captcha technology, which we call Captcha as graphical passwords (CaRP). CaRP is both a Captcha and a graphical password scheme. CaRP addresses a number of security problems altogether, such as online guessing attacks, relay attacks, and, if combined with dual-view technologies, shoulder-surfing attacks. Notably, a CaRP password can be found only probabilistically by automatic online guessing attacks even if the password is in the search set. CaRP also offers a novel approach to address the well-known image hotspot problem in popular graphical password systems, such as PassPoints, that often leads to weak password choices. CaRP is not a panacea, but it offers reasonable security and usability and appears to fit well with some practical applications for improving online security.
    Keywords: artificial intelligence; security of data; CaRP password; Captcha as graphical passwords; PassPoints; artificial intelligence; automatic online guessing attacks; dual-view technologies; hard AI problems; hard mathematical problems; image hotspot problem; online security; password choices; relay attacks; search set; security primitives; shoulder-surfing attacks; Animals; Artificial intelligence; Authentication ;CAPTCHAs; Usability; Visualization; CaRP; Captcha; Graphical password; dictionary attack; hotspots; password; password guessing attack; security primitive (ID#:14-2853)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6775249&isnumber=6803967
  • Bakry, M.; Khamis, M.; Abdennadher, S., "AreCAPTCHA: Outsourcing Arabic Text Digitization to Native Speakers," Document Analysis Systems (DAS), 2014 11th IAPR International Workshop on, pp.304,308, 7-10 April 2014. doi: 10.1109/DAS.2014.50 There has been a recent increasing demand to digitize Arabic books and documents, due to the fact that digital books do not lose quality over time, and can be easily sustained. Meanwhile, the number of Arabic-speaking Internet users is increasing. We propose AreCAPTCHA, a system that digitizes Arabic text by outsourcing it to native Arabic speakers, while offering protective measures to online web forms of Arabic websites. As users interact with AreCAPTCHA, we collect possible digitizations of words that were not recognized by OCR programs. We explain how the system works, the challenges we faced, and promising preliminary evaluation results.
    Keywords: Web sites; document image processing; natural language processing; optical character recognition; security of data; Arabic Web sites; Arabic book; Arabic document; Arabic text digitization; Arabic-speaking Internet user; AreCAPTCHA; OCR program; digital book; native Arabic speaker; online Web form; protective measure; CAPTCHAs; Databases; Educational institutions; Engines; Internet; Libraries; Optical character recognition software; Arabic; CAPTCHA; Digitization; Human computation; words recognition (ID#:14-2854)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6831018&isnumber=6824386
  • Yi Xu; Reynaga, G.; Chiasson, S.; Frahm, J.-M.; Monrose, F.; van Oorschot, P.C., "Security Analysis and Related Usability of Motion-Based CAPTCHAs: Decoding Codewords in Motion," Dependable and Secure Computing, IEEE Transactions on, vol.11, no.5, pp.480,493, Sept.-Oct. 2014. doi: 10.1109/TDSC.2013.52 We explore the robustness and usability of moving-image object recognition (video) CAPTCHAs, designing and implementing automated attacks based on computer vision techniques. Our approach is suitable for broad classes of moving-image CAPTCHAs involving rigid objects. We first present an attack that defeats instances of such a CAPTCHA (NuCaptcha) representing the state-of-the-art, involving dynamic text strings called codewords. We then consider design modifications to mitigate the attacks (e.g., overlapping characters more closely, randomly changing the font of individual characters, or even randomly varying the number of characters in the codeword). We implement the modified CAPTCHAs and test if designs modified for greater robustness maintain usability. Our lab-based studies show that the modified CAPTCHAs fail to offer viable usability, even when the CAPTCHA strength is reduced below acceptable targets. Worse yet, our GPU-based implementation shows that our automated approach can decode these CAPTCHAs faster than humans can, and we can do so at a relatively low cost of roughly 50 cents per 1,000 CAPTCHAs solved based on Amazon EC2 rates circa 2012. To further demonstrate the challenges in designing usable CAPTCHAs, we also implement and test another variant of moving text strings using the known emerging images concept. This variant is resilient to our attacks and also offers similar usability to commercially available approaches. We explain why fundamental elements of the emerging images idea resist our current attack where others fail.
    Keywords: Turing machines; computer vision; graphics processing units; image coding; image motion analysis; object recognition; security of data; text analysis; Amazon EC2 rates circa 2012; GPU-based implementation; automated attack mitigation; computer vision; decoding codeword; design modification; dynamic text strings; motion-based CAPTCHA; moving image object recognition CAPTCHA; security analysis; usability analysis; CAPTCHAs; Feature extraction; Image color analysis; Robustness; Streaming media; Trajectory; Usability; CAPTCHAs; computer vision; security; usability (ID#:14-2855)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6682912&isnumber=6893064
  • Subpratatsavee, P.; Kuha, P.; Janthong, N.; Chintho, C., "An Implementation of a Geometric and Arithmetic CAPTCHA without Database," Information Science and Applications (ICISA), 2014 International Conference on, pp.1,3, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847359 This research presented a geometric CAPTCHA that is not created from images in any database; rather, it is an image of a geometric shape randomly generated by a program, with an incomplete edge. Geometric CAPTCHAs were tested with users, who had to identify the number of angles in a shape and do a simple calculation; users must type the right answer to pass the CAPTCHA test. Geometric CAPTCHAs were test-run against three other similar CAPTCHAs in terms of time for task completion, number of errors, and user satisfaction. This paper is a pilot study for designing a new image-based CAPTCHA, and an improved design will follow in the near future.
    Keywords: image processing; message authentication; CAPTCHA test; arithmetic CAPTCHA; authentication; geometric CAPTCHA; geometric shape image; image- based CAPTCHA; shape angle identification; task completion time; user satisfaction; CAPTCHAs; Databases; Educational institutions; Image edge detection; Security; Shape; Silicon (ID#:14-2856)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847359&isnumber=6847317
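    A database-free challenge of this kind can be generated entirely from code. The sketch below produces a text stand-in for the "count the angles, then compute" task; the actual system renders the shape as an image with an incomplete edge, which is omitted here.

        import random

        def make_challenge():
            # Pick a random polygon and a small arithmetic twist.
            shapes = {"triangle": 3, "rectangle": 4, "pentagon": 5, "hexagon": 6}
            name, angles = random.choice(list(shapes.items()))
            addend = random.randint(1, 9)
            question = "How many angles does a %s have, plus %d?" % (name, addend)
            return question, angles + addend

        question, answer = make_challenge()
        print(question)
        print("expected answer:", answer)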
  • Powell, B.M.; Goswami, G.; Vatsa, M.; Singh, R.; Noore, A, "fgCAPTCHA: Genetically Optimized Face Image CAPTCHA," Access, IEEE, vol.2, pp.473,484, 2014. doi: 10.1109/ACCESS.2014.2321001 The increasing use of smartphones, tablets, and other mobile devices poses a significant challenge in providing effective online security. CAPTCHAs, tests for distinguishing human and computer users, have traditionally been popular; however, they face particular difficulties in a modern mobile environment because most of them rely on keyboard input and have language dependencies. This paper proposes a novel image-based CAPTCHA that combines the touch-based input methods favored by mobile devices with genetically optimized face detection tests to provide a solution that is simple for humans to solve, ready for worldwide use, and provides a high level of security by being resilient to automated computer attacks. In extensive testing involving over 2600 users and 40000 CAPTCHA tests, fgCAPTCHA demonstrates a very high human success rate while ensuring a 0% attack rate using three well-known face detection algorithms.
    Keywords: face recognition; mobile computing; security of data; automated computer attacks; face detection algorithms; fgCAPTCHA; genetically optimized face image CAPTCHA; modern mobile environment; novel image-based CAPTCHA; online security; touch-based input methods; CAPTCHAs; Face detection; Face recognition; Mobile communication; Mobile handsets; Noise measurement; Security; CAPTCHA; Mobile security; face detection; web security (ID#:14-2857)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6807630&isnumber=6705689
  • Qi Ye; Youbin Chen; Bin Zhu, "The Robustness of a New 3D CAPTCHA," Document Analysis Systems (DAS), 2014 11th IAPR International Workshop on, pp.319,323, 7-10 April 2014. doi: 10.1109/DAS.2014.31 CAPTCHA is a standard security technology for telling humans and computers apart, and the most widely used method is the text-based scheme. As many text schemes have been broken, 3D CAPTCHAs have emerged as one of the latest variants. In this paper, we study the robustness of the 3D text-based CAPTCHA adopted by Ku6, a leading video website in China, and provide the first analysis of a 3D hollow CAPTCHA. The security of this CAPTCHA scheme relies on a novel segmentation resistance mechanism, which combines a Crowding Character Together (CCT) strategy with side surfaces that form the 3D visual effect of characters, yielding promising usability even under strong overlapping between characters. However, by exploiting the unique features of the 3D characters in hollow font, i.e., parallel boundaries, the different stroke widths of side faces and front faces, and the relationships between them, we propose a technique that segments connected characters apart and repairs some overlapped characters. The successful segmentation rate is 70%. With minor changes, our attack program works well on its two variations, with segmentation rates of 75% and 85%, respectively.
    Keywords: cryptography ;image coding; image segmentation; 3D CAPTCHA scheme; CCT strategy; Completely Automated Public Turing test to tell Computers and Humans Apart; attack program; crowding character together; side surfaces; standard security technology; success segmentation rate; CAPTCHAs; Character recognition; Computers; Maintenance engineering; Robustness; Security; Three-dimensional displays;3D;CAPTCHA;hollow font; security; segmentation; usability (ID#:14-2858)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6831021&isnumber=6824386
  • Harisinghaney, A; Dixit, A; Gupta, S.; Arora, A, "Text and Image Based Spam Email Classification Using KNN, Naive Bayes and Reverse DBSCAN Algorithm," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, pp.153,155, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798302 The Internet has changed the way we communicate, and communication has become more and more concentrated on email. Emails, text messages and online messenger chatting have become part and parcel of our lives. Of all these communications, emails are the most prone to exploitation. Thus, various email providers employ algorithms to filter emails as spam or ham. In this research paper, our prime aim is to detect text-based as well as image-based spam emails. To achieve this objective we applied three algorithms: the KNN algorithm, the Naive Bayes algorithm and the reverse DBSCAN algorithm. Pre-processing of email text before executing the algorithms is used to make them predict better. This paper uses the Enron corpus's dataset of spam and ham emails. In this research paper, we compare the performance of all three algorithms based on four measuring factors: precision, sensitivity, specificity and accuracy. We are able to attain good accuracy with all three algorithms. The results compare all three algorithms applied to the same data set.
    Keywords: Bayes methods; image classification; neural nets; text analysis; text detection; unsolicited e-mail; Enron corpus dataset; Internet; KNN algorithm; Naive Bayes algorithm; email text pre-processing; image based spam email classification; online messenger chatting; reverse DBSCAN algorithm; text based spam email classification; text detection; text messages; CAPTCHAs; Classification algorithms; Computers; Electronic mail; Image resolution; Technological innovation; Viruses (medical); Ham; Image Spam; KNN; Naive Bayes; Spam; reverse DBSCAN (ID#:14-2859)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798302&isnumber=6798279
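    The four measuring factors used in the comparison are standard confusion-matrix statistics. For reference, here they are computed in Python from illustrative counts (not the paper's results), with spam as the positive class.

        def metrics(tp, fp, tn, fn):
            # tp/fp/tn/fn: spam flagged as spam, ham flagged as spam,
            # ham kept as ham, spam missed as ham.
            return {
                "precision":   tp / (tp + fp),  # flagged mail that is spam
                "sensitivity": tp / (tp + fn),  # spam actually caught (recall)
                "specificity": tn / (tn + fp),  # ham correctly left alone
                "accuracy":    (tp + tn) / (tp + fp + tn + fn),
            }

        print(metrics(tp=180, fp=20, tn=270, fn=30))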
  • Goto, Misako; Shirato, Toru; Uda, Ryuya, "Text-Based CAPTCHA Using Phonemic Restoration Effect and Similar Sounds," Computer Software and Applications Conference Workshops (COMPSACW), 2014 IEEE 38th International, pp.270,275, 21-25 July 2014. doi: 10.1109/COMPSACW.2014.48 In recent years, bot (robot) programs have become one of the problems on the web. Some kinds of bots acquire accounts of web services in order to use the accounts for SPAM mails, phishing, etc. CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart) is one of the countermeasures for preventing bots from acquiring the accounts. Text-based CAPTCHA in particular is implemented on almost all famous web services. However, CAPTCHA faces the problem that the evolution of algorithms for analysis of printed characters disarms text-based CAPTCHA. Of course, stronger distortion of characters is the easiest solution to the problem. However, it makes recognition of characters difficult not only for bots but also for human beings. Therefore, in this paper, we propose a new CAPTCHA with higher safety and convenience. In particular, we focus on the human abilities of phonemic restoration and recognition of similar sounds, and adopt these abilities in the proposed CAPTCHA. The proposed CAPTCHA makes machine guessing difficult for bots, while providing easy recognition for human beings.
    Keywords: CAPTCHAs; Character recognition; Computers; Educational institutions; Google; Image restoration; Time measurement; CAPTCHA; Phonemic Restoration; Web Technology (ID#:14-2860)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903141&isnumber=6903069
  • Song Gao; Mohamed, M.; Saxena, N.; Chengcui Zhang, "Gaming the Game: Defeating A Game Captcha With Efficient And Robust Hybrid Attacks," Multimedia and Expo (ICME), 2014 IEEE International Conference on, pp.1,6, 14-18 July 2014. doi: 10.1109/ICME.2014.6890287 Dynamic Cognitive Game (DCG) CAPTCHAs are a promising new generation of interactive CAPTCHAs aiming to provide improved security against automated and human-solver relay attacks. Unlike existing CAPTCHAs, defeating DCG CAPTCHAs using pure automated attacks or pure relay attacks may be challenging in practice due to the fundamental limitations of computer algorithms (semantic gap) and synchronization issues with solvers. To overcome this barrier, we propose two hybrid attack frameworks, which carefully combine the strengths of an automated program and offline/online human intelligence. These hybrid attacks require maintaining the synchronization only between the game and the bot similar to a pure automated attack, while solving the static AI problem (i.e., bridging the semantic gap) behind the game challenge similar to a pure relay attack. As a crucial component of our framework, we design a new DCG object tracking algorithm, based on color code histogram, and show that it is simpler, more efficient and more robust compared to several known tracking approaches. We demonstrate that both frameworks can effectively defeat a wide range of DCG CAPTCHAs.
    Keywords: authorisation; computer games; image colour analysis; object tracking; DCG CAPTCHA; DCG object tracking algorithm; automated human-solver relay attacks; automated program; color code histogram; computer algorithms; dynamic cognitive game CAPTCHA; hybrid attack framework; interactive CAPTCHA; offline human intelligence; online human intelligence; security improvement; semantic gap; static AI problem; synchronization issues; High definition video; Light emitting diodes; CAPTCHA; hybrid attack; multi-object tracking; visual processing; web security (ID#:14-2861)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890287&isnumber=6890121

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Channel Coding

Channel Coding


Channel coding, also known as Forward Error Correction (FEC), refers to methods for controlling errors in data transmissions over noisy or unreliable communications channels. For cybersecurity, these methods can also be used to ensure data integrity, as some of the research cited below shows. These works were presented in the first half of 2014.
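As a minimal concrete example of forward error correction, the Python sketch below implements the simplest channel code of all, a rate-1/3 repetition code, and shows it correcting a single bit flip by majority vote. The codes in the research that follows are far more powerful, but the principle of adding structured redundancy is the same.

    def encode(bits, r=3):
        # Repetition code: send each bit r times.
        return [b for bit in bits for b in [bit] * r]

    def decode(coded, r=3):
        # Majority vote per block corrects up to (r - 1) // 2 flips per symbol.
        return [int(sum(coded[i:i + r]) > r // 2)
                for i in range(0, len(coded), r)]

    message = [1, 0, 1, 1]
    sent = encode(message)
    received = sent[:]
    received[4] ^= 1  # one bit flipped by a noisy channel
    assert decode(received) == message
    print("corrected:", decode(received))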

  • Si, H.; Koyluoglu, O.O.; Vishwanath, S., "Polar Coding for Fading Channels: Binary and Exponential Channel Cases," Communications, IEEE Transactions on, vol.62, no.8, pp.2638,2650, Aug. 2014. doi: 10.1109/TCOMM.2014.2345399 This work presents a polar coding scheme for fading channels, focusing primarily on fading binary symmetric and additive exponential noise channels. For fading binary symmetric channels, a hierarchical coding scheme is presented, utilizing polar coding both over channel uses and over fading blocks. The receiver uses its channel state information (CSI) to distinguish states, thus constructing an overlay erasure channel over the underlying fading channels. By using this scheme, the capacity of a fading binary symmetric channel is achieved without CSI at the transmitter. Noting that a fading AWGN channel with BPSK modulation and demodulation corresponds to a fading binary symmetric channel, this result covers a fairly large set of practically relevant channel settings. For fading additive exponential noise channels, expansion coding is used in conjunction with polar codes. Expansion coding transforms the continuous-valued channel to multiple (independent) discrete-valued ones. For each level after expansion, the approach described previously for fading binary symmetric channels is used. Both theoretical analysis and numerical results are presented, showing that the proposed coding scheme approaches the capacity in the high SNR regime. Overall, utilizing polar codes in this (hierarchical) fashion enables coding without CSI at the transmitter, while approaching the capacity with low complexity.
    Keywords: AWGN channels; Channel state information; Decoding; Encoding; Fading; Noise; Transmitters; Binary symmetric channel; channel coding; expansion coding; fading channels; polar codes (ID#:14-2651)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6871313&isnumber=6880875
  • Koller, C.; Haenggi, M.; Kliewer, J.; Costello, D.J., "Joint Design of Channel and Network Coding for Star Networks Connected by Binary Symmetric Channels," Communications, IEEE Transactions on, vol.62, no.1, pp.158,169, January 2014. doi: 10.1109/TCOMM.2013.110413.120971 In a network application, channel coding alone is not sufficient to reliably transmit a message of finite length K from a source to one or more destinations as in, e.g., file transfer. To ensure that no data is lost, it must be combined with rateless erasure correcting schemes on a higher layer, such as a time-division multiple access (TDMA) system paired with automatic repeat request (ARQ) or random linear network coding (RLNC). We consider binary channel coding on a binary symmetric channel (BSC) and q-ary RLNC for erasure correction in a star network, where Y sources send messages to each other with the help of a central relay. In this scenario RLNC has been shown to have a throughput advantage over TDMA schemes as K → ∞ and q → ∞. In this paper we focus on finite block lengths and compare the expected throughputs of RLNC and TDMA. For a total message length of K bits, which can be subdivided into blocks of smaller size prior to channel coding, we obtain the channel code rate and the number of blocks that maximize the expected throughput of both RLNC and TDMA, and we find that TDMA is more throughput-efficient for small message lengths K and small q.
    Keywords: channel coding; network coding; time division multiple access; wireless channels; ARQ; BSC; RLNC; TDMA schemes; TDMA system; automatic repeat request; binary channel coding; binary symmetric channels; channel code rate; erasure correction; file transfer; joint design; random linear network coding; star network; star networks; time division multiple access; Automatic repeat request; Encoding; Network coding; Relays; Silicon; Throughput; Time division multiple access; Random linear network coding; joint channel and network coding; star networks (ID#:14-2652)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6657830&isnumber=6719911
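    For readers unfamiliar with RLNC, the sketch below implements it over GF(2): a sender emits random XOR combinations of the source blocks together with their coefficient vectors, and a receiver decodes by Gaussian elimination once enough independent packets arrive. Block values and packet counts are illustrative; the paper's analysis covers general q-ary codes and finite-length effects.

        import random

        def rlnc_encode(blocks, n_packets):
            # Each coded packet: (random GF(2) coefficients, XOR of chosen blocks).
            k = len(blocks)
            packets = []
            for _ in range(n_packets):
                coeffs = [random.randint(0, 1) for _ in range(k)]
                payload = 0
                for c, b in zip(coeffs, blocks):
                    if c:
                        payload ^= b
                packets.append((coeffs, payload))
            return packets

        def rlnc_decode(packets, k):
            # Gaussian elimination over GF(2); None until rank k is reached.
            rows = [(list(c), p) for c, p in packets]
            for col in range(k):
                pivot = next((r for r in range(col, len(rows)) if rows[r][0][col]), None)
                if pivot is None:
                    return None
                rows[col], rows[pivot] = rows[pivot], rows[col]
                for r in range(len(rows)):
                    if r != col and rows[r][0][col]:
                        rows[r] = ([a ^ b for a, b in zip(rows[r][0], rows[col][0])],
                                   rows[r][1] ^ rows[col][1])
            return [rows[i][1] for i in range(k)]

        blocks = [0b1010, 0b0111, 0b1100]          # three 4-bit source blocks
        coded = rlnc_encode(blocks, n_packets=10)  # redundancy covers erasures
        print(rlnc_decode(coded, k=3) == blocks)   # True with high probability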
  • Aguerri, IE.; Varasteh, M.; Gunduz, D., "Zero-delay Joint Source-Channel Coding," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,6, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842482 In zero-delay joint source-channel coding each source sample is mapped to a channel input, and the samples are directly estimated at the receiver based on the corresponding channel output. Despite its simplicity, uncoded transmission achieves the optimal end-to-end distortion performance in some communication scenarios, significantly simplifying the encoding and decoding operations, and reducing the coding delay. Three different communication scenarios are considered here, for which uncoded transmission is shown to achieve either optimal or near-optimal performance. First, the problem of transmitting a Gaussian source over a block-fading channel with block-fading side information is considered. In this problem, uncoded linear transmission is shown to achieve the optimal performance for certain side information distributions, while separate source and channel coding fails to achieve the optimal performance. Then, uncoded transmission is shown to be optimal for transmitting correlated multivariate Gaussian sources over a multiple-input multiple-output (MIMO) channel in the low signal to noise ratio (SNR) regime. Finally, motivated by practical systems a peak-power constraint (PPC) is imposed on the transmitter's channel input. Since linear transmission is not possible in this case, nonlinear transmission schemes are proposed and shown to perform very close to the lower bound.
    Keywords: Gaussian channels; MIMO communication; block codes; combined source-channel coding; decoding; delays; fading channels; radio receivers; radio transmitters; MIMO communication; PPC; SNR; block fading channel; correlated multivariate Gaussian source transmission; decoding; encoding delay reduction; end-to-end distortion performance; information distribution; multiple input multiple output channel; nonlinear transmission scheme; peak power constraint; receiver; signal to noise ratio; transmitter channel; uncoded linear transmission; zero delay joint source channel coding; Channel coding; Decoding; Joints; MIMO; Nonlinear distortion; Signal to noise ratio (ID#:14-2653)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842482&isnumber=6842477
  • Taotao Wang; Soung Chang Liew, "Joint Channel Estimation and Channel Decoding in Physical-Layer Network Coding Systems: An EM-BP Factor Graph Framework," Wireless Communications, IEEE Transactions on, vol.13, no.4, pp.2229, 2245, April 2014. doi: 10.1109/TWC.2013.030514.131312 This paper addresses the problem of joint channel estimation and channel decoding in physical-layer network coding (PNC) systems. In PNC, multiple users transmit to a relay simultaneously. PNC channel decoding is different from conventional multi-user channel decoding: specifically, the PNC relay aims to decode a network-coded message rather than the individual messages of the users. Although prior work has shown that PNC can significantly improve the throughput of a relay network, the improvement is predicated on the availability of accurate channel estimates. Channel estimation in PNC, however, can be particularly challenging because of 1) the overlapped signals of multiple users; 2) the correlations among data symbols induced by channel coding; and 3) time-varying channels. We combine the expectation-maximization (EM) algorithm and belief propagation (BP) algorithm on a unified factor-graph framework to tackle these challenges. In this framework, channel estimation is performed by an EM subgraph, and channel decoding is performed by a BP subgraph that models a virtual encoder matched to the target of PNC channel decoding. Iterative message passing between these two subgraphs allow the optimal solutions for both to be approached progressively. We present extensive simulation results demonstrating the superiority of our PNC receivers over other PNC receivers.
    Keywords: channel coding; channel estimation; expectation-maximisation algorithm; graph theory; network coding; BP algorithm; EM algorithm;E M-BP factor graph framework; PNC channel decoding; PNC receivers; PNC systems;belief propagation; data symbols; expectation maximization joint channel estimation; multiuser channel decoding; network coded message; overlapped signals; physical layer network coding systems; unified factor graph framework; Channel estimation; Decoding; Iterative decoding; Joints; Message passing; Receivers; Relays; Physical-layer network coding; belief propagation; expectation-maximization; factor graph; message passing (ID#:14-2654)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6760601&isnumber=6803026
  • Feng Cen; Fanglai Zhu, "Codeword Averaged Density Evolution For Distributed Joint Source And Channel Coding With Decoder Side Information," Communications, IET, vol.8, no.8, pp.1325,1335, May 22 2014. doi: 10.1049/iet-com.2013.1005 The authors consider applying the systematic low-density parity-check codes with the parity based approach to the lossless (or near lossless) distributed joint source channel coding (DJSCC) with the decoder side information for the non-uniform sources over the asymmetric memoryless transmission channel. By using an equivalent channel coding model, which consists of two parallel subchannels: a correlation and a transmission sub-channel, respectively, they derive the codeword averaged density evolution (DE) for the DJSCC with the decoder side information for the asymmetrically correlated non-uniform sources over the asymmetric memoryless transmission channel. A new code ensemble definition of the irregular codes is introduced to distinguish between the source and the parity variable nodes, respectively. Extensive simulations demonstrate the effectiveness of the codeword averaged DE.
    Keywords: channel coding; combined source-channel coding; decoding; parity check codes; DE; DJSCC; asymmetric memoryless transmission channel; codeword averaged density evolution; decoder side information; distributed joint source channel coding; equivalent channel coding model; parity variable nodes; systematic low-density parity-check codes (ID#:14-2655)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827069&isnumber=6827053
  • Muramatsu, J., "Channel Coding and Lossy Source Coding Using a Generator of Constrained Random Numbers," Information Theory, IEEE Transactions on, vol.60, no.5, pp.2667, 2686, May 2014. doi: 10.1109/TIT.2014.2309140 Stochastic encoders for channel coding and lossy source coding are introduced with a rate close to the fundamental limits, where the only restriction is that the channel input alphabet and the reproduction alphabet of the lossy source code are finite. Random numbers, which satisfy a condition specified by a function and its value, are used to construct stochastic encoders. The proof of the theorems is based on the hash property of an ensemble of functions, where the results are extended to general channels/sources and alternative formulas are introduced for channel capacity and the rate-distortion region. Since an ensemble of sparse matrices has a hash property, we can construct a code by using sparse matrices.
    Keywords: channel capacity; channel coding; random number generation; source coding; channel capacity; channel coding; channel input alphabet; constrained random number generator; hash property; lossy source coding; rate distortion region; reproduction alphabet; sparse matrices; stochastic encoders; Channel capacity; Channel coding; Manganese; Probability distribution; Random variables; Rate-distortion; Sparse matrices; LDPC codes; Shannon theory; channel coding; information spectrum methods; lossy source coding (ID#:14-2656)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6750723&isnumber=6800061
  • Bocharova, IE.; Guillen i Fabregas, A; Kudryashov, B.D.; Martinez, A; Tauste Campo, A; Vazquez-Vilar, G., "Source-Channel Coding With Multiple Classes," Information Theory (ISIT), 2014 IEEE International Symposium on, pp.1514,1518, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6875086 We study a source-channel coding scheme in which source messages are assigned to classes and encoded using a channel code that depends on the class index. While each class code can be seen as a concatenation of a source code and a channel code, the overall performance improves on that of separate source-channel coding and approaches that of joint source-channel coding as the number of classes increases. The performance of this scheme is studied by means of random-coding bounds and validated by simulation of a low-complexity implementation using existing source and channel codes.
    Keywords: combined source-channel coding; class code; random coding bounds; source channel coding; source messages; AWGN; Decoding; Joints; Presses (ID#:14-2657)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875086&isnumber=6874773
  • Romero, S.M.; Hassanin, M.; Garcia-Frias, J.; Arce, G.R., "Analog Joint Source Channel Coding for Wireless Optical Communications and Image Transmission," Lightwave Technology, Journal of, vol.32, no.9, pp.1654, 1662, May1, 2014. doi: 10.1109/JLT.2014.2308136 An analog joint source channel coding (JSCC) system is developed for wireless optical communications. Source symbols are mapped directly onto channel symbols using space filling curves and then a non-linear stretching function is used to reduce distortion. Different from digital systems, the proposed scheme does not require long block lengths to achieve good performance reducing the complexity of the decoder significantly. This paper focuses on intensity-modulated direct-detection (IM/DD) optical wireless systems. First, a theoretical analysis of the IM/DD wireless optical channel is presented and the prototype communication system designed to transmit data using analog JSCC is introduced. The nonlinearities of the real channel are studied and characterized. A novel technique to mitigate the channel nonlinearities is presented. The performance of the real system follows the simulations and closely approximates the theoretical limits. The proposed system is then used for image transmission by first taking samples of a set of images using compressive sensing and then encoding the measurements using analog JSCC. Both simulation and experimental results are shown.
    Keywords: combined source-channel coding; compressed sensing; image coding; intensity modulation; optical communication; optical modulation; wireless channels; IM/DD wireless optical channel; JSCC; analog joint source channel coding; channel nonlinearities; compressive sensing; distortion reduction; image encoding; image transmission; intensity-modulated direct-detection optical wireless systems; nonlinear stretching function; space filling curves; wireless optical communications; Channel coding; Decoding; Noise; Nonlinear optics; Optical receivers; Optical transmitters; Wireless communication; Compressive sensing (CS);Shannon mappings; intensity-modulation direct-detection (IM/DD); joint source channel coding (JSCC); optical communications (ID#:14-2658)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748003&isnumber=6781021
  • Suhan Choi, "Functional Duality Between Distributed Reconstruction Source Coding and Multiple-Access Channel Coding in the Case of Correlated Messages," Communications Letters, IEEE , vol.18, no.3, pp.499, 502, March 2014. doi: 10.1109/LCOMM.2014.012214.140018 In this letter, functional duality between Distributed Reconstruction Source Coding (DRSC) with correlated messages and Multiple-Access Channel Coding (MACC) with correlated messages is considered. It is shown that under certain conditions, for a given DRSC problem with correlated messages, a functional dual MACC problem with correlated messages can be obtained, and vice versa. In particular, it is shown that the correlation structures of the messages in the two dual problems are the same. The source distortion measure and the channel cost measure for this duality are also specified.
    Keywords: channel coding; correlation theory; distortion measurement; duality (mathematics);functional analysis; source coding; DRSC; MACC; channel cost measure; correlated messages; distributed reconstruction source coding; functional duality; multiple access channel coding; source distortion measure; Bipartite graph; Channel coding; Correlation; Decoding; Distortion measurement; Source coding; Functional duality; correlated messages; distributed reconstruction source coding; multiple-access channel coding (ID#:14-2659)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6784556&isnumber=6784524
  • Jie Luo, "Generalized Channel Coding Theorems For Random Multiple Access Communication," Communications Workshops (ICC), 2014 IEEE International Conference on, pp. 489-494, 10-14 June 2014. doi: 10.1109/ICCW.2014.6881246 This paper extends the channel coding theorems of [1][2] to time-slotted random multiple access communication systems with a generalized problem formulation. Assume that users choose their channel codes arbitrarily in each time slot. When the codeword length can be taken to infinity, the fundamental performance limitation of the system is characterized using an achievable region defined in the space of channel code index vectors, each of which specifies the channel codes of all users. The receiver decodes the message if the code index vector lies inside the achievable region and reports a collision if it falls outside the region. A generalized system error performance measure is defined as the maximum of weighted probabilities of different types of communication error events. Upper bounds on the generalized error performance measure are derived under the assumption of a finite codeword length. It is shown that "interfering users" can be introduced to model not only the impact of interference from remote transmitters, but also the impact of channel uncertainty in random access communication.
    Keywords: channel coding; decoding; probability; radio receivers; radio transmitters; radiofrequency interference; channel code index vector; channel uncertainty impact; communication error events; finite codeword length; generalized channel coding theorems; generalized system error performance measurement; interference impact; message decoding; receiver; remote transmitters; time-slotted random multiple access communication systems; weighted probabilities; Channel coding; Decoding; Error probability; Indexes; Receivers; Vectors (ID#:14-2660)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881246&isnumber=6881162
  • Hye Won Chung; Guha, S.; Lizhong Zheng, "Superadditivity of Quantum Channel Coding Rate With Finite Blocklength Quantum Measurements," Information Theory (ISIT), 2014 IEEE International Symposium on, pp. 901-905, June 29-July 4, 2014. doi: 10.1109/ISIT.2014.6874963 We investigate superadditivity in the maximum achievable rate of reliable classical communication over a quantum channel. The maximum number of classical information bits extracted per use of the quantum channel strictly increases as the number of channel outputs jointly measured at the receiver increases. This phenomenon is called superadditivity. We provide an explanation of this phenomenon by comparing a quantum channel with a classical discrete memoryless channel (DMC) under concatenated codes. We also give a lower bound on the maximum accessible information per channel use at a finite length of quantum measurements in terms of V, which is the quantum version of channel dispersion, and C, the classical capacity of the quantum channel.
    Keywords: channel coding; concatenated codes; DMC; concatenated codes; discrete memoryless channel; finite blocklength quantum measurements; quantum channel; quantum channel coding rate; superadditivity; Binary phase shift keying; Concatenated codes; Decoding; Length measurement; Photonics; Quantum mechanics; Receivers (ID#:14-2661)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874963&isnumber=6874773
  • Vaezi, M.; Labeau, F., "Distributed Source-Channel Coding Based on Real-Field BCH Codes," Signal Processing, IEEE Transactions on, vol. 62, no. 5, pp. 1171-1184, March 1, 2014. doi: 10.1109/TSP.2014.2300039 We use real-number codes to compress statistically dependent sources and establish a new framework for distributed lossy source coding in which we compress sources before, rather than after, quantization. This change in the order of the binning and quantization blocks makes it possible to model the correlation between continuous-valued sources more realistically and to partially compensate for the quantization error. We then focus on the asymmetric case, i.e., lossy source coding with side information at the decoder. The encoding and decoding procedures are described in detail for a class of real-number codes called discrete Fourier transform (DFT) codes, for both the syndrome- and parity-based approaches. We leverage subspace-based decoding to improve the decoding and, by extending it, perform distributed source coding in a rate-adaptive fashion to further improve the decoding performance when the statistical dependency between sources is unknown. We also extend the parity-based approach to the case where the transmission channel is noisy, and thus perform distributed joint source-channel coding in this context. The proposed system is well suited for low-delay communications, as the mean-squared reconstruction error (MSE) is shown to be reasonably low for very short block lengths.
    Keywords: BCH codes; combined source-channel coding; correlation methods; decoding; discrete Fourier transforms; mean square error methods; quantisation (signal); DFT codes; MSE; discrete Fourier transform; distributed lossy source coding; distributed source-channel coding; mean-squared reconstruction error; quantization blocks; quantization error; real-field BCH codes; real-number codes; subspace-based decoding; transmission channel; Correlation; Decoding; Delays; Discrete Fourier transforms; Quantization (signal);Source coding; BCH-DFT codes; distributed source coding; joint source-channel coding; parity; real-number codes; syndrome (ID#:14-2662)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6712144&isnumber=6732988
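The spectral-zero construction behind DFT codes lends itself to a compact sketch. The Python fragment below is a toy illustration rather than the authors' full syndrome/parity decoders: it encodes k samples into an n-sample codeword by inserting n-k zeros in the spectrum, so the syndrome (the spectrum at the zero positions) vanishes for a clean codeword and reacts to an impulse error. Practical real-field BCH-DFT codes additionally enforce conjugate symmetry so codewords stay real; this sketch allows complex codewords for brevity.

    import numpy as np

    def dft_encode(x, n):
        # Insert n - k spectral zeros into the k-point spectrum, then take the n-point IDFT
        k = len(x)
        X = np.fft.fft(x)
        Xz = np.concatenate([X[:k // 2], np.zeros(n - k), X[k // 2:]])
        return np.sqrt(n / k) * np.fft.ifft(Xz)

    def syndrome(y, n, k):
        # A valid codeword has a zero spectrum at the inserted positions
        return np.fft.fft(y)[k // 2 : k // 2 + (n - k)]

    n, k = 8, 4
    c = dft_encode(np.random.randn(k), n)
    print(np.abs(syndrome(c, n, k)).max())        # ~1e-16: clean codeword
    e = np.zeros(n); e[5] = 1.0                   # a single impulse error
    print(np.abs(syndrome(c + e, n, k)).max())    # clearly nonzero: error detected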
  • Tao Wang; Wenbo Zhang; Maunder, R.G.; Hanzo, L., "Near-Capacity Joint Source and Channel Coding of Symbol Values from an Infinite Source Set Using Elias Gamma Error Correction Codes," Communications, IEEE Transactions on, vol. 62, no. 1, pp. 280-292, January 2014. doi: 10.1109/TCOMM.2013.120213.130301 In this paper we propose a novel low-complexity Joint Source and Channel Code (JSCC), which we refer to as the Elias Gamma Error Correction (EGEC) code. Like the recently-proposed Unary Error Correction (UEC) code, this facilitates the practical near-capacity transmission of symbol values that are randomly selected from a set having an infinite cardinality, such as the set of all positive integers. However, in contrast to the UEC code, our EGEC code is a universal code, facilitating the transmission of symbol values that are randomly selected using any monotonic probability distribution. When the source symbols obey a particular zeta probability distribution, our EGEC scheme is shown to offer a 3.4 dB gain over a UEC benchmarker, when Quaternary Phase Shift Keying (QPSK) modulation is employed for transmission over an uncorrelated narrowband Rayleigh fading channel. In the case of another zeta probability distribution, our EGEC scheme offers a 1.9 dB gain over a Separate Source and Channel Coding (SSCC) benchmarker.
    Keywords: Rayleigh channels; channel coding; error correction codes; phase shift keying; source coding; statistical distributions; EGEC code; Infinite Source Set; QPSK modulation; UEC code; elias gamma error correction codes; monotonic probability distribution; near-capacity joint source and channel coding; near-capacity transmission; novel low-complexity joint source and channel code; quaternary phase shift keying modulation; symbol values; unary error correction code; uncorrelated narrowband Rayleigh fading channel; universal code; zeta probability distribution; Decoding; Encoding; Error correction codes; Phase shift keying; Probability distribution; Transmitters; Vectors; Source coding; channel capacity; channel coding; iterative decoding (ID#:14-2663)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6679360&isnumber=6719911
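The Elias gamma code at the heart of the EGEC scheme is easy to state: a positive integer n is sent as floor(log2 n) zeros followed by the binary expansion of n. The sketch below covers only this source-coding layer; the EGEC code wraps such codewords in an error-correction layer that is not reproduced here.

    def elias_gamma_encode(n: int) -> str:
        assert n >= 1
        b = bin(n)[2:]                       # binary expansion of n
        return "0" * (len(b) - 1) + b        # floor(log2 n) zeros, then n in binary

    def elias_gamma_decode(bits: str):
        zeros = 0
        while bits[zeros] == "0":            # count the leading zeros
            zeros += 1
        value = int(bits[zeros : 2 * zeros + 1], 2)
        return value, bits[2 * zeros + 1:]   # decoded value plus the unread remainder

    assert elias_gamma_encode(9) == "0001001"
    assert elias_gamma_decode("0001001") == (9, "")

Since every positive integer has a codeword, the scheme handles an infinite source set directly, which is what distinguishes it from fixed-alphabet source codes.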

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Clean Slate

Clean Slate


The "clean slate" approach looks at designing networks and internets from scratch, with security built in, in contrast to the evolved Internet in place. The research presented here covers a range of research topics, and includes a survey of those topics. These works were published or presented in the first half of 2014.

  • Sourlas, V.; Tassiulas, L., "Replication Management And Cache-Aware Routing In Information-Centric Networks," Network Operations and Management Symposium (NOMS), 2014 IEEE, pp. 1-7, 5-9 May 2014. doi: 10.1109/NOMS.2014.6838282 Content distribution in the Internet places content providers in a dominant position, with delivery happening directly between two end-points, that is, from content providers to consumers. Information-centrism has been proposed as a paradigm shift from the host-to-host Internet to a host-to-content one, or in other words from an end-to-end communication system to a native distribution network. This trend has attracted the attention of the research community, which has argued that content, instead of end-points, must be at the center stage of attention. Given this emergence of information-centric solutions, the associated performance management needs have not been adequately addressed, yet they are essential for network operations and crucial for the information-centric approaches to succeed. Performance management and traffic engineering approaches are also required to control routing, to configure the logic for replacement policies in caches, and to control decisions about where to cache, for instance. There is therefore an urgent need to manage information-centric resources and, in fact, to constitute their missing management and control plane, which is essential for their success as clean-slate technologies. This work aims to provide solutions to the crucial problems that remain, focusing on the key aspects of route and cache management.
    Keywords: Internet; telecommunication network routing; telecommunication traffic; Internet; cache management; cache-aware routing; clean-slate technologies; content distribution; control plane; end-to-end communication system; host-to-host Internet; information-centric approaches; information-centric networks; information-centric resources; information-centric solutions; information-centrism; missing management; native distribution network; performance management; replication management; route management; traffic engineering approaches; Computer architecture; Network topology; Planning; Routing; Servers; Subscriptions; Transportation (ID#:14-2664)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838282&isnumber=6838210
  • Visala, K.; Keating, A.; Khan, R.H., "Models And Tools For The High-Level Simulation Of A Name-Based Interdomain Routing Architecture," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, pp. 55-60, April 27-May 2, 2014. doi: 10.1109/INFCOMW.2014.6849168 The deployment and operation of global network architectures can exhibit complex, dynamic behavior, and the comprehensive validation of their properties, without actually building and running the systems, can only be achieved with the help of simulations. Packet-level models are not feasible at Internet scale, but we are still interested in the phenomena that emerge when the systems are run in their intended environment. We argue for the high-level simulation methodology and introduce a simulation environment based on aggregate models built on state-of-the-art datasets, while respecting invariants observed in measurements. The models developed are aimed at studying a clean slate name-based interdomain routing architecture and provide an abundance of parameters for sensitivity analysis and a modular design with a balanced level of detail in different aspects of the model. In addition to introducing several reusable models for traffic, topology, and deployment, we report our experiences in using the high-level simulation approach and potential pitfalls related to it.
    Keywords: Internet; telecommunication network routing; telecommunication network topology; telecommunication traffic; aggregate models; clean slate name-based interdomain routing architecture; complex-dynamic behavior; global network architecture deployment; global network architecture operation; high-level simulation methodology; modular design; packet-level models; reusable deployment model; reusable topology model; reusable traffic model; sensitivity analysis; Aggregates; Approximation methods; Internet; Network topology; Peer-to-peer computing; Routing; Topology (ID#:14-2665)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849168&isnumber=6849127
  • Campista, M.E.M.; Rubinstein, M.G.; Moraes, I.M.; Costa, L.H.M.K.; Duarte, O.C.M.B., "Challenges and Research Directions for the Future Internetworking," Communications Surveys & Tutorials, IEEE, vol. 16, no. 2, pp. 1050-1079, Second Quarter 2014. doi: 10.1109/SURV.2013.100213.00143 We review the main challenges and survey promising techniques for network interconnection in the Internet of the future. To this end, we first discuss the shortcomings of the Internet's current model. Many of them are consequences of unforeseen demands on the original Internet design, such as mobility, multihoming, multipath, and network scalability. These challenges have attracted significant research effort in recent years because of both their relevance and their complexity. In this survey, for the sake of completeness, we cover several new protocols for network interconnection, spanning both incremental deployments (evolutionary approach) and radical proposals to redesign the Internet from scratch (clean-slate approach). We focus on specific proposals for future internetworking such as: Loc/ID split, flat routing, network mobility, multipath and content-based routing, path programmability, and Internet scalability. Although there is no consensus on the future internetworking approach, requirements such as security, scalability, and incremental deployment are often considered.
    Keywords: internetworking; telecommunication network routing; Internet scalability; content-based routing; future internetworking; incremental deployments; multipath routing; network interconnection spanning; network mobility; path programmability; radical proposals; IP networks; Internet; Mobile communication; Mobile computing; Routing; Routing protocols; Future Internet; internetworking; routing (ID#:14-2666)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6644748&isnumber=6811383
  • Qadir, J.; Hasan, O., "Applying Formal Methods to Networking: Theory, Techniques and Applications," Communications Surveys & Tutorials, IEEE, vol. PP, no. 99, pp. 1-1, August 2014. doi: 10.1109/COMST.2014.2345792 Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet, which began as a research experiment, was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring a new protocol to be built from scratch for every new need. This led to an unwieldy, ossified Internet architecture resistant to any attempt at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean slate Internet design--especially the software defined networking (SDN) paradigm--offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence of interest in applying formal methods to the specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking.
    Keywords: Communities; Computers; Internet; Mathematics; Protocols; Software; Tutorials (ID#:14-2667)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6873212&isnumber=5451756
  • Mohamed, Abdelrahim; Onireti, Oluwakayode; Qi, Yinan; Imran, Ali; Imran, Muhammed; Tafazolli, Rahim, "Physical Layer Frame in Signalling-Data Separation Architecture: Overhead and Performance Evaluation," European Wireless 2014; 20th European Wireless Conference; Proceedings of, pp. 1-6, 14-16 May 2014. doi: (not provided) Conventional cellular systems are dimensioned according to a worst-case scenario, and they are designed to ensure ubiquitous coverage with an always-present wireless channel irrespective of the spatial and temporal demand for service. A more energy-conscious approach requires an adaptive system, with a minimum amount of overhead, that is available at all locations and at all times but becomes functional only when needed. This approach suggests a new clean slate system architecture with a logical separation between the ability to establish availability of the network and the ability to provide functionality or service. Focusing on the physical layer frame of such an architecture, this paper discusses and formulates the overhead reduction that can be achieved in next generation cellular systems as compared with the Long Term Evolution (LTE). Considering channel estimation as a performance metric, whilst conforming to the time and frequency constraints of pilot spacing, we show that the overhead gain does not come at the expense of performance degradation.
    Keywords: (not provided) (ID#:14-2668)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843062&isnumber=6843048
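The pilot-spacing constraints mentioned in the last entry can be turned into a back-of-the-envelope overhead estimate: pilots must sample the channel at least as fast as it decorrelates in time (Doppler) and in frequency (delay spread). The Python fragment below is a worked example with illustrative, LTE-like numbers, not figures from the paper.

    import math

    # Assumed channel and system parameters (illustrative only)
    f_d = 70.0         # maximum Doppler frequency (Hz)
    tau_max = 5e-6     # maximum delay spread (s)
    T_sym = 71.4e-6    # OFDM symbol duration (s)
    df = 15e3          # subcarrier spacing (Hz)

    # Nyquist-style limits on pilot spacing in time and frequency
    Nt = math.floor((1 / (2 * f_d)) / T_sym)   # symbols between pilots
    Nf = math.floor((1 / tau_max) / df)        # subcarriers between pilots

    overhead = 1.0 / (Nt * Nf)
    print(f"one pilot per {Nt} symbols x {Nf} subcarriers -> {100 * overhead:.3f}% overhead")

With these assumed numbers the minimum pilot overhead comes out well under 1%, which illustrates why a separation architecture that transmits only what the channel dynamics require can undercut the fixed reference-signal grid of a conventional frame.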

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Cloud Security

Cloud Security


Cloud security is one of the prime topics for theoretical and applied research today. The works cited here cover a wide range of topics and methods for addressing cloud security issues. They were presented or published between January and August of 2014.

  • Feng Zhao; Chao Li; Chun Feng Liu, "A Cloud Computing Security Solution Based On Fully Homomorphic Encryption," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 485-488, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779008 With the rapid development of cloud computing, more and more users deposit their data and applications in the cloud. But the development of cloud computing is hindered by many cloud security problems. Cloud computing has many characteristics, e.g., multi-user operation, virtualization, scalability and so on. Because of these new characteristics, traditional security technologies cannot make cloud computing fully safe. Cloud computing security has therefore become the current research focus and is also this paper's research direction [1]. In order to solve the problem of data security in cloud computing systems, a fully homomorphic encryption algorithm is introduced for cloud computing data security, a new data security solution to the insecurity of cloud computing is proposed, and its application scenarios are constructed. This new security solution is fully fit for the processing and retrieval of encrypted data, effectively supporting broad applicability, secure data transmission, and secure storage in cloud computing [2].
    Keywords: cloud computing; cryptography; cloud computing security solution; cloud security problem; data security solution; data storage; data transmission; encrypted data processing; encrypted data retrieval; fully homomorphic encryption algorithm; security technologies; Cloud computing; Encryption; Safety; Cloud security; Cloud service; Distributed implementation; Fully homomorphic encryption (ID#:14-2669)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779008&isnumber=6778899
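The idea of computing on encrypted data can be demonstrated with the additively homomorphic Paillier cryptosystem, a simpler relative of the fully homomorphic schemes the paper builds on. The sketch below uses toy, insecure key sizes chosen only so the arithmetic is visible: multiplying two ciphertexts yields an encryption of the sum of the plaintexts, so the cloud can add values it cannot read. (Requires Python 3.8+ for modular inverses via pow.)

    import random
    from math import gcd

    # Toy Paillier parameters -- far too small for real use
    p, q = 293, 433
    n, n2 = p * q, (p * q) ** 2
    g = n + 1
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)     # lcm(p-1, q-1)
    L = lambda u: (u - 1) // n
    mu = pow(L(pow(g, lam, n2)), -1, n)

    def encrypt(m):
        r = random.randrange(1, n)
        while gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def decrypt(c):
        return (L(pow(c, lam, n2)) * mu) % n

    c1, c2 = encrypt(17), encrypt(25)
    assert decrypt((c1 * c2) % n2) == 42     # addition performed on ciphertexts only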
  • Fazal-e-Amin; Alghamdi, A.S.; Ahmad, I., "Cloud Based C4I Systems: Security Requirements and Concerns," Computational Science and Computational Intelligence (CSCI), 2014 International Conference on, vol. 2, pp. 75-80, 10-13 March 2014. doi: 10.1109/CSCI.2014.98 C4I (Command, Control, Communication, Computer and Intelligence) systems are critical systems of systems, used in military operations, emergency response, and disaster management, among other domains. Due to the sensitive nature of the domains and applications of these systems, quality can never be compromised. C4I systems are resource-demanding systems; their expansion or upgrading for the sake of improvement requires additional computational resources. Cloud computing provides a solution for the convenient access and scaling of resources, and researchers have recently envisioned deploying C4I systems using cloud computing resources. However, there are many issues in such a deployment, and security, being at the top, is the focus of many researchers. In this research, the security requirements and concerns of cloud based C4I systems are highlighted, and different aspects of cloud computing and C4I systems are discussed from the security point of view. This research will be helpful for both academia and industry in further strengthening the basis of cloud based C4I systems.
    Keywords: cloud computing; command and control systems; security of data; Command, Control, Communication, Computer and Intelligence systems; cloud based C4I systems; cloud computing resources; critical systems of systems; security requirements; Availability; Cloud computing; Computational modeling;Computers;Government;Security;c4i system; cloud computing; security (ID#:14-2670)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822307&isnumber=6822285
  • Albahdal, A.A.; Alsolami, F.; Alsaadi, F., "Evaluation of Security Supporting Mechanisms in Cloud Storage," Information Technology: New Generations (ITNG), 2014 11th International Conference on, pp. 285-292, 7-9 April 2014. doi: 10.1109/ITNG.2014.110 Cloud storage is one of the most promising services of cloud computing. It holds promise for unlimited, scalable, flexible, and low cost data storage. However, the security of data stored in the cloud is the main concern that hinders the adoption of the cloud storage model. In the literature, there are many proposed mechanisms to improve the security of cloud storage. These proposed mechanisms differ in many aspects and provide different levels of security. In this paper, we evaluate five different mechanisms for supporting the security of cloud storage. We begin with a brief description of these mechanisms. Then we evaluate them based on the following criteria: security, support of writing serializability and reading freshness, workload distribution between the client and cloud, performance, financial cost, support of accountability between the client and cloud, support of file sharing between users, and ease of deployment. The evaluation section of this paper forms a guide for individuals and organizations to select or design an appropriate mechanism that satisfies their requirements for securing cloud storage.
    Keywords: cloud computing; security of data; storage management; file sharing; cloud computing; cloud security; cloud storage model; reading freshness; security supporting mechanism; workload distribution; Availability; Cloud computing; Encryption; Secure storage; Writing; Cloud Computing; Cloud Security; Cloud Storage (ID#:14-2671)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822212&isnumber=6822158
  • Varadharajan, V.; Tupakula, U., "Security as a Service Model for Cloud Environment," Network and Service Management, IEEE Transactions on, vol. 11, no. 1, pp. 60-75, March 2014. doi: 10.1109/TNSM.2014.041614.120394 Cloud computing is becoming increasingly important for the provision of services and storage of data in the Internet. However, there are several significant challenges in securing cloud infrastructures from different types of attacks. The focus of this paper is on the security services that a cloud provider can offer as part of its infrastructure to its customers (tenants) to counteract these attacks. Our main contribution is a security architecture that provides a flexible security as a service model that a cloud provider can offer to its tenants and to customers of its tenants. Our security as a service model, while offering baseline security to the provider to protect its own cloud infrastructure, also provides the flexibility for tenants to add security functionalities that suit their security requirements. The paper describes the design of the security architecture and discusses how different types of attacks are counteracted by the proposed architecture. We have implemented the security architecture, and the paper discusses analysis and performance evaluation results.
    Keywords: cloud computing; security of data; Internet; baseline security; cloud computing; cloud environment; cloud infrastructures; cloud provider; data storage; security architecture; security functionalities; security requirements; security-as-a-service model; service provisioning; Cloud computing; Computer architecture; Operating systems; Privacy; Security; Software as a service; Virtual machining; Cloud security; security and privacy; security architecture (ID#:14-2672)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805344&isnumber=6804401
  • Whaiduzzaman, M.; Gani, A., "Measuring Security For Cloud Service Provider: A Third Party Approach," Electrical Information and Communication Technology (EICT), 2013 International Conference on, pp. 1-6, 13-15 Feb. 2014. doi: 10.1109/EICT.2014.6777855 Cloud Computing (CC) is a new paradigm of utility computing and an enormously growing phenomenon in the present IT industry. CC offers low-cost investment opportunities for new business entrepreneurs as well as business avenues for cloud service providers. As the number of new Cloud Service Customers (CSC) increases, users require a secure, reliable and trustworthy Cloud Service Provider (CSP) from the market in which to store confidential data. However, shortcomings in reliably monitoring and identifying security risks and threats are an immense concern in choosing a highly secure CSP for the wider cloud community. A secure CSP ranking system that gauges trust, privacy and security is currently a challenging proposition. In this paper, a Trusted Third Party (TTP), akin to a credit rating agency, is introduced for security ranking by identifying currently assessable security risks. We propose an automated software scripting model in which the TTP runs penetration tests on the CSP side to identify vulnerabilities and check the security strength and fault tolerance capacity of the CSP. Using the results, several otherwise non-measurable metrics are added to produce a ranking of secure, trustworthy CSPs. Moreover, we propose a conceptual model, called the federated third party approach, for monitoring and maintaining such TTP cloud ranking providers worldwide. This model of federated third party cloud ranking and monitoring assures and boosts confidence in a feasible, secure and trustworthy market of CSPs.
    Keywords: cloud computing; program testing; trusted computing; CC; CSC; CSP fault tolerance capacity; CSP ranking system; CSP security strength; IT industry; TTP; automated software scripting model; business avenues; business entrepreneur; cloud computing; cloud service customer; cloud service provider; confidential data storage; credit rating agency; federated third party approach; information technology; penetration testing; security measurement; security risks identification; security risks monitoring; trusted third party; utility computing; Business; Cloud computing; Measurement; Mobile communication; Monitoring; Security; Cloud computing; cloud security ranking; cloud service provider; trusted third party (ID#:14-2673)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777855&isnumber=6777807
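The ranking step in such a TTP scheme reduces, at its simplest, to a weighted aggregation of per-metric scores. The sketch below is a hypothetical illustration: the metric names, weights, and scores are invented rather than taken from the paper.

    # Hypothetical per-metric scores in [0, 1] produced by TTP penetration tests
    weights = {"vulnerability_scan": 0.4, "fault_tolerance": 0.3, "patch_latency": 0.3}

    csps = {
        "ProviderA": {"vulnerability_scan": 0.9, "fault_tolerance": 0.7, "patch_latency": 0.8},
        "ProviderB": {"vulnerability_scan": 0.6, "fault_tolerance": 0.9, "patch_latency": 0.7},
    }

    def score(metrics):
        # Weighted sum of the normalized metric scores
        return sum(weights[m] * v for m, v in metrics.items())

    for name, metrics in sorted(csps.items(), key=lambda kv: score(kv[1]), reverse=True):
        print(f"{name}: {score(metrics):.2f}")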
  • Djenna, A.; Batouche, M., "Security Problems In Cloud Infrastructure," Networks, Computers and Communications, The 2014 International Symposium on, pp. 1-6, 17-19 June 2014. doi: 10.1109/SNCC.2014.6866505 Cloud computing is the logical continuation of computing history, following in the footsteps of mainframes, PCs, servers, the Internet and data centers, all of which have radically changed the everyday life of those who adopt the technology. Cloud computing is able to provide its customers numerous services through the Internet, and virtualization is the key to establishing a cloud infrastructure. As with any technology, virtualization presents both benefits and challenges. In this context, the cloud infrastructure can be used as a springboard for the generation of new types of attacks. Therefore, security is one of the major concerns for the evolution of, and migration to, the cloud. In this paper, an overview of security issues related to cloud infrastructure is presented, followed by a critical analysis of the various issues that arise in IaaS clouds and the current attempts to improve security in the cloud environment.
    Keywords: cloud computing; telecommunication security; virtualisation; IaaS Cloud; Internet; cloud computing; cloud infrastructure; computing history; security problems; virtualization; Cloud computing; Computer architecture; Security; Servers; Virtual machine monitors; Virtual machining; Virtualization; Cloud Computing; Cloud Security; Virtualization (ID#:14-2674)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866505&isnumber=6866503
  • Chalse, R.R.; Katara, A.; Selokar, A.; Talmale, R., "Inter-cloud Data Transfer Security," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp. 654-657, 7-9 April 2014. doi: 10.1109/CSNT.2014.137 The use of cloud computing has increased rapidly in many organizations. Cloud computing provides many benefits in terms of low cost and accessibility of data; it has generated a lot of interest and competition in the industry and was recognized as one of the top 10 technologies of 2010. It is an Internet-based service delivery model which provides Internet-based services, computing and storage for users in all markets, including finance, health care and government. In this paper we aim to provide inter-cloud data transfer security. Cloud security is becoming a key differentiator and competitive edge between cloud providers. This paper discusses the security issues arising in different types of clouds. This work also aims to promote the use of multi-clouds due to their ability to reduce the security risks that affect the cloud computing user.
    Keywords: cloud computing; security of data; Internet based service delivery model; cloud computing user; cloud providers; competitive edge; differentiator; government; health care; intercloud data transfer security; multiclouds; security risks; Cloud computing; Computer crime; Data transfer; Fingerprint recognition; Servers; Software as a service; Cloud; Cloud computing; DMFT; Security; Security challenges; data security (ID#:14-2675)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821479&isnumber=6821334
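One way to realize the multi-cloud idea the paper advocates is n-of-n secret splitting: the data are split so that every cloud's piece is needed for reconstruction, and no single provider learns anything. The Python sketch below shows the XOR-based variant, a generic technique rather than the paper's specific mechanism.

    import secrets
    from functools import reduce

    def xor_bytes(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def split(data: bytes, n: int) -> list:
        # n-1 random pads plus one piece chosen so that all n XOR back to the data
        shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
        return shares + [reduce(xor_bytes, shares, data)]

    def combine(shares: list) -> bytes:
        return reduce(xor_bytes, shares)

    pieces = split(b"confidential record", 3)   # store each piece with a different cloud
    assert combine(pieces) == b"confidential record"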
  • Mapp, G.; Aiash, M.; Ondiege, B.; Clarke, M., "Exploring a New Security Framework for Cloud Storage Using Capabilities," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp. 484-489, 7-11 April 2014. doi: 10.1109/SOSE.2014.69 We are seeing the deployment of new types of networks such as sensor networks for environmental and infrastructural monitoring, social networks such as Facebook, and e-Health networks for patient monitoring. These networks are producing large amounts of data that need to be stored, processed and analysed. Cloud technology is being used to meet these challenges. However, a key issue is how to provide security for data stored in the Cloud. This paper addresses this issue in two ways. It first proposes a new security framework for Cloud security which deals with all the major system entities. Secondly, it introduces a Capability ID system based on modified IPv6 addressing which can be used to implement a security framework for Cloud storage. The paper then shows how these techniques are being used to build an e-Health system for patient monitoring.
    Keywords: cloud computing; electronic health records; patient monitoring; social networking (online);storage management;IPv6 addressing; capability ID system; cloud security; cloud storage; cloud technology ;e-Health system; e-health networks; environmental monitoring; facebook; infrastructural monitoring; patient monitoring; security for data; security framework; sensor networks; social networks; system entity; Cloud computing; Companies; Monitoring; Protocols; Security; Servers; Virtual machine monitors; Capability Systems; Cloud Storage; Security Framework; e-Health Monitoring (ID#:14-2676)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830953&isnumber=6825948
  • Himmel, M.A.; Grossman, F., "Security on distributed systems: Cloud security versus traditional IT," IBM Journal of Research and Development, vol. 58, no. 1, pp. 3:1-3:13, Jan.-Feb. 2014. doi: 10.1147/JRD.2013.2287591 Cloud computing is a popular subject across the IT (information technology) industry, but many risks associated with this relatively new delivery model are not yet fully understood. In this paper, we use a qualitative approach to gain insight into the vectors that contribute to cloud computing risks in the areas of security, business, and compliance. The focus is on the identification of risk vectors affecting cloud computing services and the creation of a framework that can help IT managers in their cloud adoption process and risk mitigation strategy. Economic pressures on businesses are creating a demand for an alternative delivery model that can provide flexible payments, dramatic cuts in capital investment, and reductions in operational cost. Cloud computing is positioned to take advantage of these economic pressures with low-cost IT services and a flexible payment model, but with certain security and privacy risks. The frameworks offered by this paper may assist IT professionals in obtaining a clearer understanding of the risk tradeoffs associated with cloud computing environments.
    Keywords: Automation; Cloud computing; Computer security; Information technology; Risk management; Virtual machine monitors (ID#:14-2677)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6717051&isnumber=6717043
  • Sah, S.K.; Shakya, S.; Dhungana, H., "A Security Management For Cloud Based Applications And Services with Diameter-AAA," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp. 6-11, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781243 Cloud computing offers various services and web based applications over the Internet. With the tremendous growth in the development of cloud based services, security is the main challenge and a chief concern for today's cloud service providers. This paper describes the management of security issues based on Diameter AAA mechanisms for the authentication, authorization and accounting (AAA) demanded by cloud service providers, and focuses on the integration of Diameter AAA into cloud system architecture.
    Keywords: authorisation; cloud computing; Internet; Web based applications; authentication, authorization and accounting; cloud based applications; cloud based services; cloud computing; cloud service providers; cloud system architecture; diameter AAA mechanisms; security management; Authentication; Availability; Browsers; Computational modeling; Protocols; Servers; Cloud Computing; Cloud Security; Diameter-AAA (ID#:14-2678)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781243&isnumber=6781240
  • Boopathy, D.; Sundaresan, M., "Data Encryption Framework Model With Watermark Security For Data Storage In Public Cloud Model," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp. 903-907, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828094 Cloud computing technology is a new concept for providing dramatically scalable and virtualized resources. It implies a service-oriented architecture (SOA), reduced information technology overhead for the end user, a more flexible model, reduced total cost of ownership, and an on-demand service structure. From the user's point of view, one of the main concerns is cloud security against unknown threats. The lack of physical access to servers constitutes a completely new and disruptive challenge for investigators. Clients can store, transfer or exchange their data using the public cloud model. This paper presents an encryption method for the public cloud, together with a framework model under which third-party auditors verify the cloud service provider. Cloud data storage is one of the essential services being acquired in this rapidly developing business world.
    Keywords: cloud computing; cryptography; service-oriented architecture; storage management; watermarking; SOA; cloud computing technology; cloud data storage; cloud security; cloud service provider verification mechanism; data encryption framework model; end level user; information technology overhead; mandatory services; on-demand service; physical access; public cloud model; scalable resources; service oriented architecture; third party auditors; total cost of ownership reduction; unknown threats; virtualized resources; watermark security; Cloud computing; Computational modeling; Data models; Encryption; Servers; Watermarking ;Cloud Data Storage; Cloud Encryption; Data Confidency Data Privacy; Encryption Model; Watermark Security (ID#:14-2679)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828094&isnumber=6827395
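The client-side half of such a framework, encrypting before upload so the provider only ever stores ciphertext, can be sketched in a few lines with the third-party Python cryptography package. This shows only the generic encrypt-before-upload step; the paper's watermarking and third-party auditing are separate mechanisms.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()     # kept by the client, never uploaded
    f = Fernet(key)

    ciphertext = f.encrypt(b"quarterly report")   # this is what goes to the public cloud
    # ... later, after downloading the ciphertext back ...
    assert f.decrypt(ciphertext) == b"quarterly report"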
  • Poornima, B.; Rajendran, T., "Improving Cloud Security by Enhanced HASBE Using Hybrid Encryption Scheme," Computing and Communication Technologies (WCCCT), 2014 World Congress on, pp. 312-314, Feb. 27 2014-March 1 2014. doi: 10.1109/WCCCT.2014.88 Cloud computing has emerged as one of the most influential paradigms in the IT industry in recent years. Because this technology requires users to entrust their valuable data to cloud providers, there have been growing security and privacy concerns about outsourced data. Several schemes employing attribute-based encryption (ABE) have been suggested for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, this paper proposes hierarchical attribute-set-based encryption (HASBE), which extends ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The proposed design not only achieves scalability due to its hierarchical structure, but also inherits the flexibility and fine-grained access control of ASBE in supporting compound attributes. In addition, HASBE employs multiple value assignments for access expiration time to deal with user revocation more effectively than existing schemes. We implement our scheme and show through comprehensive trials that it is both effective and flexible in handling access control for outsourced data in cloud computing.
    Keywords: cloud computing; cryptography; IT commerce; attribute based encryption; attribute set based encryption; cloud computing; cloud providers; enhanced HASBE; hierarchical attribute-set-based encryption; hybrid encryption scheme; improving cloud security; Cloud computing; Computational modeling; Educational institutions; Encryption; Privacy; Scalability (ID#:14-2680)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755167&isnumber=6755083
  • Goel, R.; Garuba, M.; Goel, R., "Cloud Computing Vulnerability: DDoS as Its Main Security Threat, and Analysis of IDS as a Solution Model," Information Technology: New Generations (ITNG), 2014 11th International Conference on, pp. 307-312, 7-9 April 2014. doi: 10.1109/ITNG.2014.77 Cloud computing has emerged as an increasingly popular means of delivering IT-enabled business services and a potential technology resource choice for many private and government organizations in today's rapidly changing computing environment. Consequently, as cloud computing technology, functionality and usability expand, unique security vulnerabilities and threats requiring timely attention arise continuously, the primary challenge being the provision of continuous service availability. This paper addresses cloud security vulnerability issues and the threats posed by a distributed denial of service (DDoS) attack on cloud computing infrastructure, and also discusses the means and techniques that could detect and prevent such attacks.
    Keywords: business data processing; cloud computing; computer network security; DDoS; IDS; IT-enabled business services; cloud computing infrastructure; cloud computing vulnerability; cloud security vulnerability issues; distributed denial of service; security threat; Availability; Cloud computing; Computational modeling; Computer crime; Organizations; Servers; Cloud; DDoS; IDS; Security; Vulnerability (ID#:14-2681)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822215&isnumber=6822158
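A minimal flavor of the detection side: volumetric DDoS traffic shows up as per-source request rates far above baseline, so even a toy tumbling-window counter separates flooding sources from normal ones. The sketch below is a generic rate-based detector, not the IDS analyzed in the paper, and the threshold values are illustrative.

    from collections import Counter

    def flag_sources(requests, window_s=10, limit=100):
        """requests: iterable of (timestamp, src_ip) pairs.
        Flag any source exceeding `limit` requests within a tumbling window."""
        buckets = Counter()
        for ts, src in requests:
            buckets[(int(ts // window_s), src)] += 1
        return {src for (_, src), count in buckets.items() if count > limit}

    # 500 requests in one second from one source, a single request from another
    traffic = [(0.002 * i, "203.0.113.9") for i in range(500)] + [(1.0, "198.51.100.2")]
    print(flag_sources(traffic))    # {'203.0.113.9'}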
  • Datta, E.; Goyal, N., "Security Attack Mitigation Framework For The Cloud," Reliability and Maintainability Symposium (RAMS), 2014 Annual, pp. 1-6, 27-30 Jan. 2014. doi: 10.1109/RAMS.2014.6798457 Cloud computing brings many advantages to enterprise IT infrastructure; virtualization technology, which is the backbone of the cloud, provides easy consolidation of resources and reduction of cost, space and management effort. However, the security of critical and private data is a major concern which still holds back many customers from switching over from their traditional in-house IT infrastructure to a cloud service. The existence of techniques to physically locate a virtual machine in the cloud, and the proliferation of software vulnerability exploits and cross-channel attacks between virtual machines, together increase the risk of business data leaks and privacy losses. This work proposes a framework to mitigate such risks and engineer customer trust towards enterprise cloud computing. Every day new vulnerabilities are discovered, even in well-engineered software products, and hacking techniques grow more sophisticated over time. In this scenario, an absolute guarantee of security in an enterprise-wide information processing system seems a remote possibility; software systems in the cloud are vulnerable to security attacks. The practical solution to these security problems lies in a well-engineered attack mitigation plan. On the positive side, cloud computing has a collective infrastructure which can be effectively used to mitigate the attacks if an appropriate defense framework is in place. We propose such an attack mitigation framework for the cloud. Software vulnerabilities in the cloud have different severities and different impacts on the security parameters (confidentiality, integrity, and availability). By using a Markov model, we continuously monitor and quantify the risk of compromise in different security parameters (e.g., a change in the potential to compromise data confidentiality). Whenever there is a significant change in risk, our framework facilitates the tenants' calculation of the Mean Time To Security Failure (MTTSF) and allows them to adopt a dynamic mitigation plan. This framework is an add-on security layer in the cloud resource manager and could improve customer trust in enterprise cloud solutions.
    Keywords: Markov processes; cloud computing; security of data; virtualisation; MTTSF cloud; Markov model; attack mitigation plan; availability parameter; business data leaks; cloud resource manager; cloud service; confidentiality parameter; cross-channel attacks; customer trust; enterprise IT infrastructure; enterprise cloud computing; enterprise cloud solutions; enterprise wide information processing system; hacking techniques; information technology; integrity parameter; mean time to security failure; privacy losses; private data security; resource consolidation; security attack mitigation framework; security guarantee; software products; software vulnerabilities; software vulnerability exploits; virtual machine; virtualization technology; Cloud computing; Companies; Security; Silicon; Virtual machining; Attack Graphs; Cloud computing; Markov Chain; Security; Security Administration (ID#:14-2682)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798457&isnumber=6798433
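The MTTSF computation such a framework relies on is the standard mean time to absorption of a Markov chain: with transient states (e.g., secure and partially compromised) and an absorbing failed state, the expected times satisfy t = (I - Q)^{-1} 1, where Q holds the transient-to-transient transition probabilities. A minimal sketch with hypothetical probabilities, not the paper's model parameters:

    import numpy as np

    # Transient states: 0 = secure, 1 = partially compromised; "failed" is absorbing.
    # The per-interval transition probabilities below are hypothetical.
    Q = np.array([[0.90, 0.08],    # secure -> secure / partial (0.02 to failed)
                  [0.00, 0.85]])   # partial -> partial        (0.15 to failed)

    t = np.linalg.solve(np.eye(2) - Q, np.ones(2))   # mean steps to absorption
    print(f"MTTSF from the secure state: {t[0]:.1f} monitoring intervals")

Re-solving this small linear system whenever monitoring updates Q is cheap, which is what makes continuous risk quantification of this kind practical.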
  • Dinadayalan, P.; Jegadeeswari, S.; Gnanambigai, D., "Data Security Issues in Cloud Environment and Solutions," Computing and Communication Technologies (WCCCT), 2014 World Congress on, pp. 88-91, Feb. 27 2014-March 1 2014. doi: 10.1109/WCCCT.2014.63 Cloud computing is an Internet-based model that enables convenient, on-demand, pay-per-use access to a pool of shared resources. It is a new technology that satisfies users' requirements for computing resources such as networks, storage, servers, services and applications. Data security is one of the leading concerns and primary challenges for cloud computing, and the issue is becoming more serious as cloud computing develops. From the consumers' perspective, cloud computing security concerns, especially data security and privacy protection issues, remain the primary inhibitor to the adoption of cloud computing services. This paper analyses the basic problems of cloud computing and describes the data security and privacy protection issues in the cloud.
    Keywords: cloud computing; data privacy; security of data; Internet based model; cloud computing security concerns; cloud computing services; cloud environment; computing resources; data security issues; networks; pay per use access; privacy protection issues; servers; shared resources; Cloud computing; Computers; Data privacy; Data security; Organizations; Privacy; Cloud Computing; Cloud Computing Security; Data Security; Privacy protection (ID#:14-2683)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755112&isnumber=6755083
  • Pawar, Y.; Rewagad, P.; Lodha, N., "Comparative Analysis of PAVD Security System with Security Mechanism of Different Cloud Storage Services," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp. 611-614, 7-9 April 2014. doi: 10.1109/CSNT.2014.128 Cloud computing, still in its research infancy, has attracted many research communities in the last few years. Considerable investment in cloud-based research has been made by multinational companies such as Amazon and IBM and by different R&D organizations. In spite of this, the number of stakeholders actually using cloud services is limited. The main hindrances to the wide adoption of cloud technology are a feeling of insecurity regarding the storage of data in the cloud and the absence of trust and comprehensive access control mechanisms. To overcome this, cloud service providers have employed different security mechanisms to protect the confidentiality and integrity of data in the cloud. We have used the PAVD security system, where PAVD is an acronym for Privacy, Authentication and Verification of Data, to protect the confidentiality and integrity of data stored in the cloud, and we have statistically analyzed its performance over different sizes of data files. This paper compares the performance of different cloud storage services, such as Dropbox and SkyDrive, with our PAVD security system with respect to uploading and downloading times.
    Keywords: authorisation; cloud computing; data privacy; storage management; Drop Box; MNC; PAVD security system; Sky Drive; access control mechanism; cloud based research; cloud computing; cloud storage services; cloud technology; data storage; multinational companies; privacy authentication and verification of data; security mechanism; statistical analysis; Cloud computing; Digital signatures; Encryption; Servers; Cloud Computing; Data confidentiality; Performance Analysis; Security Issues; Stakeholders (ID#:14-2684)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821470&isnumber=6821334
  • Honggang Wang; Shaoen Wu; Min Chen; Wei Wang, "Security Protection Between Users And The Mobile Media Cloud," Communications Magazine, IEEE, vol. 52, no. 3, pp. 73-79, March 2014. doi: 10.1109/MCOM.2014.6766088 Mobile devices such as smartphones are widely deployed in the world, and many people use them to download and upload media such as video and pictures to remote servers. On the other hand, a mobile device has limited resources, and some media processing tasks must be migrated to the media cloud for further processing. However, a significant question is: can mobile users trust the media services provided by the media cloud service providers? Many traditional security approaches have been proposed to secure the data exchange between mobile users and the media cloud. However, first, because multimedia such as video is large-sized data and mobile devices have limited capability to process media data, it is important to design a lightweight security method; second, uploading and downloading multi-resolution images and videos make it difficult for traditional security methods to ensure security for users of the media cloud; third, the error-prone wireless environment can cause failures of security protection such as authentication. To address these challenges, in this article we propose to use both secure sharing and watermarking schemes to protect users' data in the media cloud. The secure sharing scheme allows users to upload multiple data pieces to different clouds, making it impossible to derive the whole information from any one cloud. In addition, the proposed scalable watermarking algorithm can be used for authentication between personal mobile users and the media cloud. Furthermore, we introduce a new solution to resist multimedia transmission errors through a joint design of watermarking and Reed-Solomon codes. Our studies show that the proposed approach not only achieves good security performance, but also can enhance media quality and reduce transmission overhead.
    Keywords: Reed-Solomon codes; cloud computing; security of data; smart phones; video watermarking; Reed-Solomon codes; authentication; data exchange; error-prone wireless environment; large-sized data; lightweight security; media cloud service providers; media processing tasks; mobile devices; mobile media cloud; multimedia transmission errors; multiple data pieces; multiresolution images-videos; personal mobile users;secure sharing; security protection; smartphones; transmission overhead; watermarking; Cloud computing; Cryptography; Handheld devices ;Media; Mobile communication; Multimedia communication; Network security; Watermarking (ID#:14-2685)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6766088&isnumber=6766068
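The Reed-Solomon leg of the proposed joint design can be exercised with the third-party reedsolo package, an assumption made here purely for illustration; the paper's joint watermarking/RS construction is more involved. Ten parity bytes correct up to five corrupted bytes per packet:

    # pip install reedsolo
    from reedsolo import RSCodec

    rsc = RSCodec(10)                         # 10 parity bytes -> corrects up to 5 byte errors
    packet = rsc.encode(b"watermarked media payload")

    corrupted = bytearray(packet)
    corrupted[2] ^= 0xFF                      # simulate wireless transmission errors
    corrupted[7] ^= 0xFF

    recovered = rsc.decode(bytes(corrupted))[0]   # recent reedsolo returns (msg, msg+ecc, errata)
    assert bytes(recovered) == b"watermarked media payload"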
  • Sugumaran, M.; Murugan, B.B.; Kamalraj, D., "An Architecture for Data Security in Cloud Computing," Computing and Communication Technologies (WCCCT), 2014 World Congress on, pp. 252-255, Feb. 27 2014-March 1 2014. doi: 10.1109/WCCCT.2014.53 Cloud computing is a more flexible, cost-effective and proven delivery platform for providing business or consumer services over the Internet. Cloud computing supports distributed service-oriented architecture and multi-user, multi-domain administrative infrastructure, so it is more prone to security threats and vulnerabilities. At present, a major concern in cloud adoption is its security and privacy. Security and privacy issues are of great concern to cloud service providers, who actually host the services. In most cases, the provider must guarantee that its infrastructure is secure and that clients' data and applications are safe, by implementing security policies and mechanisms. The security issues are organized into several general categories: trust, identity management, software isolation, data protection, availability and reliability, ownership, data backup, data portability and conversion, multi-platform support and intellectual property. This paper discusses some of the techniques that have been implemented to protect data and proposes an architecture for protecting data in the cloud. This architecture stores data in the cloud in encrypted form using a cryptographic technique based on a block cipher.
    Keywords: cloud computing; cryptography; data protection; electronic data interchange; industrial property; safety-critical software; service-oriented architecture; block cipher; business services; client data; cloud computing; cloud service providers; consumer services; cryptography technique; data availability; data backup; data conversion; data ownership; data portability; data privacy; data protection; data reliability; data security architecture; data storage; distributed service-oriented architecture; encrypted data format; identity management; intellectual property; multiuser multidomain administrative infrastructure; software isolation; trust factor; Ciphers; Cloud computing; Computer architecture; Encryption; Cloud computing; data privacy ;data security; symmetric cryptography; virtualization (ID#:14-2686)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755152&isnumber=6755083
  • De, S.J.; Pal, AK., "A Policy-Based Security Framework for Storage and Computation on Enterprise Data in the Cloud," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp.4986,4997, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.613 A whole range of security concerns that can act as barriers to the adoption of cloud computing have been identified by researchers over the last few years. While outsourcing its business-critical data and computations to the cloud, an enterprise loses control over them. How should the organization decide what security measures to apply to protect its data and computations that have different security requirements from a Cloud Service Provider (CSP) with an unknown level of corruption? The answer to this question relies on the organization's perception about the CSP's trustworthiness and the security requirements of its data. This paper proposes a decentralized, dynamic and evolving policy-based security framework that helps an organization to derive such perceptions from knowledgeable and trusted employee roles and based on that, choose the most relevant security policy specifying the security measures necessary for outsourcing data and computations to the cloud. The organizational perception is built through direct user participation and is allowed to evolve over time.
    Keywords: business data processing; cloud computing; data protection; outsourcing; security of data; trusted computing; CSPs trustworthiness; cloud computing; cloud service provider; data outsourcing; data protection; data security requirements; decentralized security framework; dynamic security framework; enterprise data computation; enterprise data storage; policy-based security framework; Cloud computing; Computational modeling; Data security; Organizations; Outsourcing; Secure storage; Cloud Computing; Data and Computation Outsourcing; Security (ID#:14-2687)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759216&isnumber=6758592
  • Hassan, S.; Abbas Kamboh, A; Azam, F., "Analysis of Cloud Computing Performance, Scalability, Availability, & Security," Information Science and Applications (ICISA), 2014 International Conference on, pp.1, 5, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847363 Cloud computing refers to connecting many computers through a communication channel such as the Internet, over which data is sent, received, and stored. Cloud computing offers an opportunity for parallel computing through the use of large numbers of virtual machines. Today, performance, scalability, availability, and security represent the big risks in cloud computing. This paper highlights the issues of security, availability, and scalability, and identifies how to make cloud-based infrastructure more secure and more available. It also highlights the elastic behavior of cloud computing and discusses some of the characteristics involved in achieving high performance.
    Keywords: cloud computing; parallel processing; security of data; virtual machines; Internet; cloud computing; parallel computing; scalability; security; virtual machine; Availability; Cloud computing; Computer hacking; Scalability (ID#:14-2688)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847363&isnumber=6847317

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Coding Theory

Coding Theory


Coding theory is one of the essential pieces of information theory. More importantly, coding theory is a core element in cryptography. The research work cited here looks at signal processing, crowdsourcing, matroid theory, WOM codes, and NP-hard problems. These works were presented or published between January and August of 2014.

  • Vempaty, A; Han, Y.S.; Varshney, L.R.; Varshney, P.K., "Coding Theory For Reliable Signal Processing," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp.200,205, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785331 With increased dependence on technology in daily life, there is a need to ensure its reliable performance. There are many applications where we carry out inference tasks assisted by signal processing systems. A typical system performing an inference task can fail for multiple reasons: the presence of a component with a permanent failure, a malicious component providing corrupt information, or simply an unreliable component that randomly provides faulty data. Therefore, it is important to design systems which perform reliably even in the presence of such unreliable components. Coding theory based techniques provide a possible solution to this problem. In this position paper, we survey some of our recent work on the use of coding theory based techniques for the design of signal processing applications. As examples, we consider distributed classification and target localization in wireless sensor networks. We also consider the more recent paradigm of crowdsourcing and discuss how coding based techniques can be used to mitigate the effect of unreliable crowd workers in the system.
    Keywords: error correction codes; signal processing; telecommunication network reliability; wireless sensor networks; coding theory; crowdsourcing; distributed inference; malicious component; permanent failure; reliable signal processing; wireless sensor network; Encoding; Maximum likelihood estimation; Reliability theory; Sensors; Wireless sensor networks; Coding theory; Crowdsourcing; Distributed Inference; Reliability; Wireless Sensor Networks (ID#:14-2689)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785331&isnumber=6785290
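
To make the fusion idea above concrete, here is a minimal sketch of coding-based distributed classification: each class is assigned a row of a hypothetical binary code matrix, each sensor contributes one (possibly faulty) decision bit, and the fusion center picks the class whose codeword is closest in Hamming distance. The matrix and fault model are illustrative only, not the authors' construction.

    # Minimal sketch of coding-based distributed classification.
    # The code matrix A is hypothetical: one row (codeword) per class,
    # one column per sensor; pairwise Hamming distance 4, so one faulty
    # sensor is corrected.
    A = [
        (0, 0, 0, 0, 0, 0, 0),   # class 0
        (0, 1, 1, 1, 1, 0, 0),   # class 1
        (1, 0, 1, 1, 0, 1, 0),   # class 2
        (1, 1, 0, 1, 0, 0, 1),   # class 3
    ]

    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))

    def fuse(received):
        """Fusion center: choose the class with the nearest codeword."""
        return min(range(len(A)), key=lambda m: hamming(A[m], received))

    bits = list(A[2])          # all sensors report correctly for class 2 ...
    bits[1] ^= 1               # ... except one faulty sensor flips its bit
    print(fuse(tuple(bits)))   # -> 2: the flip is corrected
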
  • Vempaty, A; Varshney, L.R.; Varshney, P.K., "Reliable Crowdsourcing for Multi-Class Labeling Using Coding Theory," Selected Topics in Signal Processing, IEEE Journal of, vol.8, no.4, pp.667,679, Aug. 2014. doi: 10.1109/JSTSP.2014.2316116 Crowdsourcing systems often have crowd workers that perform unreliable work on the task they are assigned. In this paper, we propose the use of error-control codes and decoding algorithms to design crowdsourcing systems for reliable classification despite unreliable crowd workers. Coding theory based techniques also allow us to pose easy-to-answer binary questions to the crowd workers. We consider three different crowdsourcing models: systems with independent crowd workers, systems with peer-dependent reward schemes, and systems where workers have common sources of information. For each of these models, we analyze classification performance with the proposed coding-based scheme. We develop an ordering principle for the quality of crowds and describe how system performance changes with the quality of the crowd. We also show that pairing among workers and diversification of the questions help in improving system performance. We demonstrate the effectiveness of the proposed coding-based scheme using both simulated data and real datasets from Amazon Mechanical Turk, a crowdsourcing microtask platform. Results suggest that use of good codes may improve the performance of the crowdsourcing task over typical majority-voting approaches.
    Keywords: decoding; error correction codes; pattern classification; Amazon Mechanical Turk; classification performance analysis; coding theory based techniques; crowdsourcing microtask platform; decoding algorithms; error-control codes; independent crowd workers; majority-voting approaches; multiclass labeling; peer-dependent reward schemes; reliable crowdsourcing system; Algorithm design and analysis; Decoding; Hamming distance; Noise; Reliability; Sensors; Vectors; Crowdsourcing; error-control codes; multi-class labeling; quality assurance (ID#:14-2690)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6784318&isnumber=6856242
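
The same decoding machinery extends to crowdsourcing, as the abstract above describes: the codeword of the true class fixes the answers to a set of easy binary questions, unreliable workers answer each question with some error probability, and per-question majority voting followed by minimum-distance decoding recovers the class. The sketch below uses hypothetical parameters (code matrix, 25 workers, 30% error rate), not the paper's designs.

    # Illustrative sketch: coding-based multi-class labeling with
    # unreliable crowd workers (hypothetical code and error model).
    import random

    CODE = {0: (0,0,0,0,0,0,0), 1: (0,1,1,1,1,0,0),
            2: (1,0,1,1,0,1,0), 3: (1,1,0,1,0,0,1)}

    def worker_answers(true_cls, p_wrong):
        # each binary question is answered wrongly with probability p_wrong
        return [bit ^ (random.random() < p_wrong) for bit in CODE[true_cls]]

    def classify(answer_sets):
        n = len(CODE[0])
        majority = tuple(int(2 * sum(a[q] for a in answer_sets) > len(answer_sets))
                         for q in range(n))
        return min(CODE, key=lambda c: sum(x != y for x, y in zip(CODE[c], majority)))

    random.seed(1)
    answers = [worker_answers(true_cls=2, p_wrong=0.3) for _ in range(25)]
    print(classify(answers))   # -> 2 with high probability despite 30% errors
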
  • Guangfu Wu; Lin Wang; Trieu-Kien Truong, "Use of Matroid Theory To Construct A Class Of Good Binary Linear Codes," Communications, IET, vol.8, no.6, pp.893, 898, April 17 2014. doi: 10.1049/iet-com.2013.0671 It is still an open challenge in coding theory how to design a systematic linear (n, k)-code C over GF(2) with maximal minimum distance d. In this study, based on matroid theory (MT), a limited class of good systematic binary linear codes (n, k, d) is constructed, where n = 2^(k-1) + ... + 2^(k-d) and d = 2^(k-2) + ... + 2^(k-d-1) for k ≥ 4, 1 ≤ d < k. These codes are well known as special cases of codes constructed by Solomon and Stiffler (SS) back in the 1960s. Furthermore, a new shortening method is presented. By shortening the optimal codes, new kinds of good systematic binary linear codes can be designed with parameters n = 2^(k-1) + ... + 2^(k-d) - 3u and d = 2^(k-2) + ... + 2^(k-d-1) - 2u for 2 ≤ u ≤ 4, 2 ≤ d < k. The advantage of MT over the original SS construction is that it yields a generator matrix in systematic form. In addition, the dual code C⊥, with relatively high rate and optimal minimum distance, can be obtained easily in this study.
    Keywords: binary codes; combinatorial mathematics; linear codes; matrix algebra; SS construction; Solomon-Stiffler code construction; coding theory; dual code; generator matrix; matroid theory; maximal minimum distance; optimal codes; shortening method; systematic binary linear codes (ID#:14-2691)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798003&isnumber=6797989
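
As a quick check on the parameters quoted in the abstract, the sketch below computes n and the minimum distance for the Solomon-Stiffler-type codes and verifies that they meet the Griesmer bound, which is what makes them length-optimal. The parameter naming (t for the number of terms in the sums) is ours.

    # Solomon-Stiffler-type parameters: n = 2^(k-1)+...+2^(k-t),
    # dmin = 2^(k-2)+...+2^(k-t-1), checked against the Griesmer bound.
    from math import ceil

    def ss_params(k, t):
        n    = sum(2 ** (k - 1 - i) for i in range(t))
        dmin = sum(2 ** (k - 2 - i) for i in range(t))
        return n, dmin

    def griesmer_length(k, dmin):
        return sum(ceil(dmin / 2 ** i) for i in range(k))

    for k in range(4, 9):
        for t in range(1, k):
            n, dmin = ss_params(k, t)
            assert n == griesmer_length(k, dmin)
    print("all parameter sets meet the Griesmer bound")
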
  • Xunrui Yin; Zongpeng Li; Xin Wang, "A Matroid Theory Approach To Multicast Network Coding," INFOCOM, 2014 Proceedings IEEE, pp.646,654, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6847990 Network coding encourages the mixing of information flows at intermediate nodes of a network for enhanced network capacity, especially for one-to-many multicast applications. A fundamental problem in multicast network coding is to construct a feasible solution such that encoding and decoding are performed over a finite field of size as small as possible. Coding operations over very small finite fields (e.g., F2) enable low computational complexity in theory and ease of implementation in practice. In this work, we propose a new approach based on matroid theory to study multicast network coding and its minimum field size requirements. Applying this new approach, which translates multicast networks into matroids, we derive the first upper bounds on the field size requirement based on the number of relay nodes in the network, and make new progress toward proving that coding over very small fields (F2 and F3) suffices for multicast network coding in planar networks.
    Keywords: combinatorial mathematics; matrix algebra; multicast communication; network coding; coding operations; decoding; encoding; enhanced network capacity; information flows; intermediate nodes; matroid theory; minimum field size requirements; multicast network coding; multicast networks; one-to-many multicast applications; planar networks; relay nodes; Encoding; Multicast communication; Network coding; Receivers; Relays; Throughput; Vectors (ID#:14-2692)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847990&isnumber=6847911
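
The canonical example behind this line of work is the butterfly network, where coding over the smallest field GF(2), i.e., plain XOR at the bottleneck relay, already achieves the multicast capacity of two bits per use. A minimal sketch:

    # Butterfly network: XOR coding over GF(2) lets both sinks decode
    # both source bits, which routing alone cannot achieve.
    def butterfly(a, b):
        coded = a ^ b            # the bottleneck edge carries a XOR b
        sink1 = (a, a ^ coded)   # sink 1 hears a directly, recovers b
        sink2 = (b ^ coded, b)   # sink 2 hears b directly, recovers a
        return sink1, sink2

    for a in (0, 1):
        for b in (0, 1):
            assert butterfly(a, b) == ((a, b), (a, b))
    print("both sinks decode both bits over GF(2)")
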
  • Shanmugam, K.; Dimakis, AG.; Langberg, M., "Graph Theory Versus Minimum Rank For Index Coding," Information Theory (ISIT), 2014 IEEE International Symposium on, pp.291,295, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6874841 We obtain novel index coding schemes and show that they provably outperform all previously known graph theoretic bounds. Further, we establish a rather strong negative result: all known graph theoretic bounds are within a logarithmic factor of the chromatic number. This is in striking contrast to minrank, since prior work has shown that it can outperform the chromatic number by a polynomial factor in some cases. The conclusion is that all known graph theoretic bounds are not much stronger than the chromatic number.
    Keywords: graph colouring; linear codes; chromatic number; graph theoretic bounds; index coding scheme; logarithmic factor; minimum rank; minrank; polynomial factor; Channel coding; Indexes; Interference; Unicast; Upper bound (ID#:14-2693)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874841&isnumber=6874773
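
For readers new to index coding, here is the kind of instance these bounds measure: three receivers, where receiver i wants message x_i and already knows x_{(i+1) mod 3}. Two XOR broadcasts then replace three uncoded transmissions; graph-theoretic quantities such as clique covers and the chromatic number certify how much such schemes can save. The instance is a standard textbook example, not taken from the paper.

    # Index coding on a directed 3-cycle of side information:
    # 2 coded broadcasts instead of 3 uncoded ones.
    x = [1, 0, 1]                         # the three message bits
    b1, b2 = x[0] ^ x[1], x[1] ^ x[2]     # broadcast symbols

    r0 = b1 ^ x[1]        # receiver 0 knows x1, decodes x0
    r1 = b2 ^ x[2]        # receiver 1 knows x2, decodes x1
    r2 = b1 ^ b2 ^ x[0]   # receiver 2 knows x0, decodes x2
    assert (r0, r1, r2) == tuple(x)
    print("all three receivers decode from 2 broadcasts")
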
  • Chen, H.C.H.; Lee, P.P.C., "Enabling Data Integrity Protection in Regenerating-Coding-Based Cloud Storage: Theory and Implementation," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.2, pp.407,416, Feb. 2014. doi: 10.1109/TPDS.2013.164 To protect outsourced data in cloud storage against corruptions, adding fault tolerance to cloud storage, along with efficient data integrity checking and recovery procedures, becomes critical. Regenerating codes provide fault tolerance by striping data across multiple servers, while using less repair traffic than traditional erasure codes during failure recovery. Therefore, we study the problem of remotely checking the integrity of regenerating-coded data against corruptions under a real-life cloud storage setting. We design and implement a practical data integrity protection (DIP) scheme for a specific regenerating code, while preserving its intrinsic properties of fault tolerance and repair-traffic saving. Our DIP scheme is designed under a mobile Byzantine adversarial model, and enables a client to feasibly verify the integrity of random subsets of outsourced data against general or malicious corruptions. It works under the simple assumption of thin-cloud storage and allows different parameters to be fine-tuned for a performance-security trade-off. We implement and evaluate the overhead of our DIP scheme in a real cloud storage testbed under different parameter choices. We further analyze the security strengths of our DIP scheme via mathematical models. We demonstrate that remote integrity checking can be feasibly integrated into regenerating codes in practical deployment.
    Keywords: cloud computing; data integrity; data protection; DIP scheme; data integrity protection; fault tolerance; mobile Byzantine adversarial model; performance-security trade-off; regenerating-coded data integrity checking; regenerating-coding-based cloud storage; remote integrity checking; repair-traffic saving; thin-cloud storage; experimentation; implementation; remote data checking; secure and trusted storage systems (ID#:14-2694)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6547608&isnumber=6689796
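
While the paper's DIP scheme is tailored to regenerating codes, the underlying audit primitive can be illustrated with a much simpler sketch: the client keeps a secret key, stores a MAC per outsourced block, and later spot-checks random blocks, so that repeated audits catch corruption with high probability. This is an illustrative baseline under our own assumptions, not the authors' construction.

    # Illustrative remote-integrity spot-checking (not the paper's DIP scheme).
    import hashlib, hmac, os, random

    key = os.urandom(32)
    blocks = [os.urandom(64) for _ in range(100)]    # data held by the provider
    tags = [hmac.new(key, b, hashlib.sha256).digest() for b in blocks]

    def audit(blocks, sample=10):
        """Verify a random subset of blocks against their stored MACs."""
        for i in random.sample(range(len(blocks)), sample):
            tag = hmac.new(key, blocks[i], hashlib.sha256).digest()
            if not hmac.compare_digest(tag, tags[i]):
                return False
        return True

    blocks[7] = os.urandom(64)   # the provider silently corrupts one block
    # one audit catches this with probability ~0.1; fifty independent audits
    # catch it with probability ~0.995
    print(any(not audit(blocks) for _ in range(50)))
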
  • Gomez, Arley; Mejia, Carolina; Montoya, J.Andres, "Linear Network Coding And The Model Theory Of Linear Rank Inequalities," Network Coding (NetCod), 2014 International Symposium on, pp.1,5, 27-28 June 2014. doi: 10.1109/NETCOD.2014.6892128 Let n ≥ 4. Can the entropic region of order n be defined by a finite list of polynomial inequalities? This question was first asked by Chan and Grant. We showed, in a companion paper, that if it were the case one could solve many algorithmic problems coming from network coding, index coding and secret sharing. Unfortunately, it seems that the entropic regions are not semialgebraic. Are the Ingleton regions semialgebraic sets? We provide some evidence showing that the Ingleton regions are semialgebraic. Furthermore, we show that if the Ingleton regions are semialgebraic, then one can solve many algorithmic problems coming from Linear Network Coding.
    Keywords: Electronic mail; Encoding; Indexes; Network coding; Polynomials; Random variables; Vectors (ID#:14-2695)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6892128&isnumber=6892118
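
For reference, the Ingleton inequality at the center of the abstract, stated for the rank function r of a representable matroid (equivalently, for dimensions of collections of vector subspaces) over subsets A, B, C, D of the ground set:

    r(A∪B) + r(A∪C) + r(A∪D) + r(B∪C) + r(B∪D)
        ≥ r(A) + r(B) + r(C∪D) + r(A∪B∪C) + r(A∪B∪D)

The Ingleton region of order n collects the vectors satisfying every instance of this inequality; the paper's question is whether such regions are semialgebraic, i.e., definable by finitely many polynomial inequalities.
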
  • Xenoulis, K., "List Permutation Invariant Linear Codes: Theory and Applications," Information Theory, IEEE Transactions on, vol.60, no.9, pp.5263, 5282, Sept. 2014. doi: 10.1109/TIT.2014.2333000 The class of q-ary list permutation invariant linear codes is introduced in this paper along with probabilistic arguments that validate their existence when certain conditions are met. The specific class of codes is characterized by an upper bound that is tighter than the generalized Shulman-Feder bound and relies on the distance of the codes' weight distribution to the binomial (multinomial, respectively) one. The bound applies to cases where a code from the proposed class is transmitted over a q-ary output symmetric discrete memoryless channel and list decoding with fixed list size is performed at the output. In the binary case, the new upper bounding technique allows the discovery of list permutation invariant codes whose upper bound coincides with sphere-packing exponent. Furthermore, the proposed technique motivates the introduction of a new class of upper bounds for general q-ary linear codes whose members are at least as tight as the DS2 bound as well as all its variations for the discrete channels treated in this paper.
    Keywords: channel coding; decoding; linear codes; memoryless systems; generalized Shulman-Feder bound; list decoding; list permutation invariant linear codes; symmetric discrete memoryless channel; Hamming distance; Hamming weight; Linear codes; Maximum likelihood decoding; Vectors; Discrete symmetric channels; double exponential function; list decoding; permutation invariance; reliability function (ID#:14-2696)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843999&isnumber=6878505
  • Micciancio, D., "Locally Dense Codes," Computational Complexity (CCC), 2014 IEEE 29th Conference on, vol., no., pp.90,97, 11-13 June 2014. doi: 10.1109/CCC.2014.17 The Minimum Distance Problem (MDP), i.e., the computational task of evaluating (exactly or approximately) the minimum distance of a linear code, is a well known NP-hard problem in coding theory. A key element in essentially all known proofs that MDP is NP-hard is the construction of a combinatorial object that we may call a locally dense code. This is a linear code with large minimum distance d that admits a ball of smaller radius r < d containing an exponential number of codewords, together with some auxiliary information used to map these codewords. In this paper we provide a generic method to explicitly construct locally dense binary codes, starting from an arbitrary linear code with sufficiently large minimum distance. Instantiating our construction with well known linear codes (e.g., Reed-Solomon codes concatenated with Hadamard codes) yields a simple proof that MDP is NP-hard to approximate within any constant factor under deterministic polynomial time reductions, simplifying and explaining recent results of Cheng and Wan (STOC 2009 / IEEE Trans. Inf. Theory, 2012) and Austrin and Khot (ICALP 2011). Our work is motivated by the construction of analogous combinatorial objects over integer lattices, which are used in NP-hardness proofs for the Shortest Vector Problem (SVP). We show that for the max norm, locally dense lattices can also be easily constructed. However, all currently known constructions of locally dense lattices in the standard Euclidean norm are probabilistic. Finding a deterministic construction of locally dense Euclidean lattices, analogous to the results presented in this paper, would prove the NP-hardness of approximating SVP under deterministic polynomial time reductions, a long standing open problem in the computational complexity of integer lattices.
    Keywords: binary codes; combinatorial mathematics; computational complexity; linear codes; MDP; NP-hard problem; SVP; arbitrary linear code; codewords; coding theory; combinatorial object construction; computational complexity; deterministic polynomial time reductions; integer lattices; locally dense Euclidean lattices; locally dense binary codes; locally dense lattices; max norm; minimum distance problem; shortest vector problem; standard Euclidean norm; Binary codes; Lattices; Linear codes; Polynomials; Reed-Solomon codes; Symmetric matrices; Vectors; NP-hardness; coding theory; derandomization; lattices; minimum distance problem; shortest vector problem (ID#:14-2697)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875478&isnumber=6875460
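
To see why minimum distance is computationally painful, note that the obvious algorithm enumerates all 2^k - 1 nonzero codewords, which is exponential in the dimension; the NP-hardness results above indicate that, essentially, one cannot do much better. A small sketch using the [7,4] Hamming code (minimum distance 3) as a toy input:

    # Brute-force minimum distance of a binary linear code: O(2^k) work.
    from itertools import product

    G = [[1,0,0,0,1,1,0],     # generator matrix of the [7,4] Hamming code
         [0,1,0,0,1,0,1],
         [0,0,1,0,0,1,1],
         [0,0,0,1,1,1,1]]

    def min_distance(G):
        k = len(G)
        best = len(G[0])
        for msg in product([0, 1], repeat=k):    # all 2^k messages
            if any(msg):
                cw = [sum(m * g for m, g in zip(msg, col)) % 2
                      for col in zip(*G)]
                best = min(best, sum(cw))        # weight = distance from 0
        return best

    print(min_distance(G))   # -> 3
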
  • Paajanen, P., "Finite p-Groups, Entropy Vectors, and the Ingleton Inequality for Nilpotent Groups," Information Theory, IEEE Transactions on, vol.60, no.7, pp.3821, 3824, July 2014. doi: 10.1109/TIT.2014.2321561 In this paper, we study the capacity/entropy region of finite, directed, acyclic, multiple-source, multiple-sink networks by means of group theory and entropy vectors coming from groups. There is a one-to-one correspondence between the entropy vector of a collection of n random variables and a certain group-characterizable vector obtained from a finite group and n of its subgroups. We look at nilpotent-group characterizable entropy vectors and show that they are all also abelian-group characterizable, and hence they satisfy the Ingleton inequality. It is known that not all entropic vectors can be obtained from abelian groups, so our result implies that to get more exotic entropic vectors, one has to go at least to soluble groups or larger nilpotency classes. The result also implies that the Ingleton inequality is satisfied by nilpotent groups of bounded class, depending on the order of the group.
    Keywords: entropy; group theory; network coding; Ingleton inequality; abelian group; capacity-entropy region; finite p-groups; group theory; group-characterizable vector; multiple-sinks network; network coding theory; nilpotent group characterizable entropy vectors; Channel coding; Entropy; Indexes; Lattices; Random variables; Structural rings; Vectors; Non-Shannon type inequalities; entropy regions; network coding theory; nilpotent groups; p-groups (ID#:14-2698)
    URL:http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6809978&isnumber=6832684
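
For reference, the group-entropy correspondence the abstract builds on (due to Chan and Yeung): a finite group G with subgroups G_1, ..., G_n defines the group-characterizable vector h with

    h_S = log( |G| / |∩_{i∈S} G_i| )   for each nonempty S ⊆ {1, ..., n},

and every entropic vector is a limit of suitably scaled vectors of this form. The paper shows that when G is nilpotent, h is also abelian-group characterizable and hence satisfies the Ingleton inequality.
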
  • Guruswami, V.; Narayanan, S., "Combinatorial Limitations of Average-Radius List-Decoding," Information Theory, IEEE Transactions on, vol.60, no.10, pp. 5827, 5842, Oct. 2014. doi: 10.1109/TIT.2014.2343224 We study certain combinatorial aspects of list-decoding, motivated by the exponential gap between the known upper bound of O(1/γ) and lower bound of Ω_p(log(1/γ)) for the list size needed to list decode up to error fraction p with rate γ away from capacity, i.e., rate 1 - h(p) - γ [here p ∈ (0, 1/2) and γ > 0]. Our main result is that in any binary code C ⊆ {0,1}^n of rate 1 - h(p) - γ, there must exist a set L ⊂ C of Ω_p(1/√γ) codewords such that the average distance of the points in L from their centroid is at most pn. In other words, there must exist Ω_p(1/√γ) codewords with low average radius. The standard notion of list decoding corresponds to working with the maximum distance of a collection of codewords from a center instead of the average distance. The average radius form is in itself quite natural; for instance, the classical Johnson bound in fact implies average-radius list-decodability. The remaining results concern the standard notion of list-decoding, and help clarify the current state of affairs regarding combinatorial bounds for list-decoding as follows. First, we give a short simple proof, over all fixed alphabets, of the above-mentioned Ω_p(log(1/γ)) lower bound. Earlier, this bound followed from a complicated, more general result of Blinovsky. Second, we show that one cannot improve the Ω_p(log(1/γ)) lower bound via techniques based on identifying the zero-rate regime for list-decoding of constant-weight codes [this is a typical approach for negative results in coding theory, including the Ω_p(log(1/γ)) list-size lower bound]. On a positive note, our Ω_p(1/√γ) lower bound for average-radius list-decoding circumvents this barrier. Third, we exhibit a reverse connection between the existence of constant-weight and general codes for list-decoding, showing that the best possible list-size, as a function of the gap γ of the rate to the capacity limit, is the same up to constant factors for both constant-weight codes (with weight bounded away from p) and general codes. Fourth, we give simple second-moment-based proofs that, with high probability, a list-size of Ω_p(1/γ) is needed for list-decoding random codes from errors as well as erasures.
    Keywords: Binary codes; Decoding; Entropy; Hamming distance; Standards; Upper bound; Combinatorial coding theory; linear codes; list error-correction; probabilistic method; random coding (ID#:14-2699)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866234&isnumber=6895347
  • Shpilka, A, "Capacity-Achieving Multiwrite WOM Codes," Information Theory, IEEE Transactions on, vol.60, no.3, pp.1481,1487, March 2014. doi: 10.1109/TIT.2013.2294464 In this paper, we give an explicit construction of a family of capacity-achieving binary t-write WOM codes for any number of writes t, which have polynomial time encoding and decoding algorithms. The block length of our construction is N = (t/ε)^O(t/(δε)), where ε is the gap to capacity, and encoding and decoding run in time N^(1+δ). This is the first deterministic construction achieving these parameters. Our techniques also apply to larger alphabets.
    Keywords: codes; decoding; alphabets; capacity-achieving binary t-write WOM codes; capacity-achieving multiwrite WOM codes; decoding algorithms; polynomial time encoding; Decoding; Encoding; Force; Indexes; Polynomials; Vectors; Writing; Coding theory; WOM-codes; flash memories; hash-functions; write-once memories (ID#:14-2700)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6680743&isnumber=6739111
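
The flavor of WOM codes, including the decodability issue taken up in the next entry, is captured by the classic Rivest-Shamir code: 2 bits written twice into 3 once-writable cells (rate 4/3), where the cell weight reveals the write generation, so the code is directly decodable. The paper above constructs capacity-achieving generalizations for any number of writes; the sketch below is only the classic toy code.

    # Rivest-Shamir <2,2>/3 WOM code: two 2-bit writes into 3 cells
    # that can only go 0 -> 1. Weight <= 1 means first generation;
    # the second generation uses bitwise complements.
    FIRST = {0b00: (0,0,0), 0b01: (0,0,1), 0b10: (0,1,0), 0b11: (1,0,0)}

    def read(state):
        if sum(state) <= 1:
            return {v: k for k, v in FIRST.items()}[state]
        return {tuple(1 - c for c in v): k for k, v in FIRST.items()}[state]

    def write(state, value):
        """Return a new state, componentwise >= old, encoding `value`."""
        if read(state) == value:
            return state
        if state == (0, 0, 0):
            return FIRST[value]                        # first write
        if sum(state) <= 1:
            return tuple(1 - c for c in FIRST[value])  # second write
        raise ValueError("no third write without an erasure")

    s = write((0, 0, 0), 0b10)   # -> (0, 1, 0)
    s = write(s, 0b01)           # -> (1, 1, 0): cells only ever go up
    assert read(s) == 0b01
    print(s, format(read(s), '02b'))
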
  • Bitouze, N.; Amat, AG.I; Rosnes, E., "Using Short Synchronous WOM Codes to Make WOM Codes Decodable," Communications, IEEE Transactions on, vol.62, no.7, pp.2156, 2169, July 2014. doi: 10.1109/TCOMM.2014.2323308 In the framework of write-once memory (WOM) codes, it is important to distinguish between codes that can be decoded directly and those that require the decoder to know the current generation so as to successfully decode the state of the memory. A widely used approach to constructing WOM codes is to design first nondecodable codes that approach the boundaries of the capacity region and then make them decodable by appending additional cells that store the current generation, at an expense of rate loss. In this paper, we propose an alternative method to making nondecodable WOM codes decodable by appending cells that also store some additional data. The key idea is to append to the original (nondecodable) code a short synchronous WOM code and write generations of the original code and the synchronous code simultaneously. We consider both the binary and the nonbinary case. Furthermore, we propose a construction of synchronous WOM codes, which are then used to make nondecodable codes decodable. For short-to-moderate block lengths, the proposed method significantly reduces the rate loss as compared to the standard method.
    Keywords: decoding; WOM codes decodable; capacity region; current generation; nondecodable codes; short synchronous WOM codes; short-to-moderate block lengths; standard method; write generations; write once memory; Binary codes; Decoding; Encoding; Solids; Standards; Synchronization; Vectors; Coding theory; Flash memories; decodable codes; synchronous write-once memory (WOM) codes (ID#:14-2701)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815644&isnumber=6860331
  • Papailiopoulos, D.S.; Dimakis, AG., "Locally Repairable Codes," Information Theory, IEEE Transactions on, vol.60, no.10, pp.5843,5855, Oct. 2014. doi: 10.1109/TIT.2014.2325570 Distributed storage systems for large-scale applications typically use replication for reliability. Recently, erasure codes were used to reduce the large storage overhead, while increasing data reliability. A main limitation of off-the-shelf erasure codes is their high-repair cost during single node failure events. A major open problem in this area has been the design of codes that: 1) are repair efficient and 2) achieve arbitrarily high data rates. In this paper, we explore the repair metric of locality, which corresponds to the number of disk accesses required during a single node repair. Under this metric, we characterize an information theoretic tradeoff that binds together the locality, code distance, and storage capacity of each node. We show the existence of optimal locally repairable codes (LRCs) that achieve this tradeoff. The achievability proof uses a locality aware flow-graph gadget, which leads to a randomized code construction. Finally, we present an optimal and explicit LRC that achieves arbitrarily high data rates. Our locality optimal construction is based on simple combinations of Reed-Solomon blocks.
    Keywords: Encoding; Entropy; Joints; Maintenance engineering; Measurement; Peer-to-peer computing; Vectors; Information theory; coding theory; distributed storage (ID#:14-2702)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818438&isnumber=6895347
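
For reference, the locality-distance tradeoff at the heart of this work, in the single-symbol-per-node case: if every symbol of an (n, k) code has repair locality r (it can be rebuilt from at most r other symbols), then the minimum distance satisfies

    d ≤ n - k - ⌈k/r⌉ + 2,

and the optimal LRCs constructed in the paper achieve such bounds. Setting r = k recovers the classical Singleton bound d ≤ n - k + 1; the paper's full tradeoff also accounts for the storage capacity of each node.
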
  • Yaakobi, E.; Mahdavifar, H.; Siegel, P.H.; Vardy, A; Wolf, J.K., "Rewriting Codes for Flash Memories," Information Theory, IEEE Transactions on, vol.60, no.2, pp.964,975, Feb. 2014. doi: 10.1109/TIT.2013.2290715 Flash memory is a nonvolatile computer memory comprising blocks of cells, wherein each cell can take on q different values or levels. While increasing the cell level is easy, reducing the level of a cell can be accomplished only by erasing an entire block. Since block erasures are highly undesirable, coding schemes, known as floating codes (or flash codes) and buffer codes, have been designed in order to maximize the number of times that information stored in a flash memory can be written (and rewritten) prior to incurring a block erasure. An (n, k, t)_q flash code C is a coding scheme for storing k information bits in n cells in such a way that any sequence of up to t writes can be accommodated without a block erasure. The total number of available level transitions in n cells is n(q-1), and the write deficiency of C, defined as δ(C) = n(q-1) - t, is a measure of how close the code comes to perfectly utilizing all these transitions. In this paper, we show a construction of flash codes with write deficiency O(qk log k) if q ≥ log₂ k, and at most O(k log² k) otherwise. An (n, r, ℓ, t)_q buffer code is a coding scheme for storing a buffer of r ℓ-ary symbols such that for any sequence of t symbols, it is possible to successfully decode the last r symbols that were written. We improve upon a previous upper bound on the maximum number of writes t in the case where there is a single cell to store the buffer. Then, we show how to improve a construction by Jiang that uses multiple cells, where n ≥ 2r.
    Keywords: block codes; flash memories; random-access storage; block erasures; buffer codes; coding schemes; flash codes; flash memories; floating codes; l-ary symbols; multiple cells; nonvolatile computer memory; rewriting codes; Flash; Buffer storage; Decoding; Encoding; Indexes; Upper bound; Vectors; Buffer codes; coding theory; flash codes; flash memories (ID#:14-2703)
    URL:http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6662417&isnumber=6714461
  • Vempaty, A; Han, Y.S.; Varshney, P.K., "Target Localization in Wireless Sensor Networks Using Error Correcting Codes," Information Theory, IEEE Transactions on, vol.60, no.1, pp.697, 712, Jan. 2014 doi: 10.1109/TIT.2013.2289859 In this paper, we consider the task of target localization using quantized data in wireless sensor networks. We propose a computationally efficient localization scheme by modeling it as an iterative classification problem. We design coding theory based iterative approaches for target localization where at every iteration, the fusion center (FC) solves an M-ary hypothesis testing problem and decides the region of interest for the next iteration. The coding theory based iterative approach works well even in the presence of Byzantine (malicious) sensors in the network. We further consider the effect of non-ideal channels. We suggest the use of soft-decision decoding to compensate for the loss due to the presence of fading channels between the local sensors and FC. We evaluate the performance of the proposed schemes in terms of the Byzantine fault tolerance capability and probability of detection of the target region. We also present performance bounds, which help us in designing the system. We provide asymptotic analysis of the proposed schemes and show that the schemes achieve perfect region detection irrespective of the noise variance when the number of sensors tends to infinity. Our numerical results show that the proposed schemes provide a similar performance in terms of mean square error as compared with the traditional maximum likelihood estimation but are computationally much more efficient and are resilient to errors due to Byzantines and non-ideal channels.
    Keywords: decoding; error correction codes; fading channels; iterative methods; probability; wireless sensor networks; Byzantine fault tolerance capability; M-ary hypothesis testing problem; asymptotic analysis; coding theory based iterative approaches; efficient localization scheme; error correcting codes; fading channels; fusion center; iterative classification problem; maximum likelihood estimation; mean square error; perfect region detection; probability; soft-decision decoding; target localization; wireless sensor networks; Decoding; Encoding; Fading; Hamming distance; Sensor fusion; Wireless sensor networks; Byzantines; Target localization; error correcting codes; wireless sensor networks (ID#:14-2704)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6657772&isnumber=6690264

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Cognitive Radio Security

Cognitive Radio Security


If volume is any indication, cognitive radio (CR) is the "hot topic" for research and conferences in 2014. The works cited here come from a global range of conference sources and cover a range of issues including spectrum competition between CR and radar, cooperative jamming, authentication, trust manipulation, and others. These works were published or presented between January and October, 2014.

  • Chauhan, K.K.; Sanger, AK.S., "Survey of Security Threats And Attacks In Cognitive Radio Networks," Electronics and Communication Systems (ICECS), 2014 International Conference on, vol., no., pp.1,5, 13-14 Feb. 2014. doi: 10.1109/ECS.2014.6892537 A number of technologies have been developed in wireless communication, and security is a perennial issue in this field because of its open medium of communication. Spectrum allocation is becoming a major problem in wireless communication due to the paucity of available spectrum. Cognitive radio is one of the most rapidly advancing technologies in wireless communication. It promises to mitigate the spectrum shortage problem by allowing unlicensed users to coexist with licensed users in a spectrum band and use it for communication without causing interference to licensed users. Cognitive radio technology intelligently detects vacant channels and allows unlicensed users to use them, avoiding occupied channels and optimizing the use of available spectrum. Initial research in cognitive radio focused on resource allocation, spectrum sensing, and management. In parallel, another important issue that garnered the attention of researchers from academia and industry is security. Security considerations show that the unique characteristics of cognitive radio, such as spectrum sensing and sharing, make it vulnerable to a new class of security threats and attacks. These threats are a challenge in the deployment of CRNs and in meeting Quality of Service (QoS) requirements. In this survey paper, we identify and discuss some of the security threats and attacks in spectrum sensing and cognitive radio networks, and also propose some techniques to mitigate the effectiveness of these attacks.
    Keywords: cognitive radio; quality of service; radio networks; radio spectrum management; signal detection; telecommunication security; QoS; cognitive radio networks; quality of service; resource allocation; security attacks; security threats; spectrum allocation; spectrum sensing; spectrum sharing; unlicensed users; vacant channel detection; wireless communication; Artificial intelligence; Authentication; Computers; FCC; Jamming; Radio networks; Cognitive Radio; Cognitive Radio Networks; Dynamic Spectrum Access; Mitigation; Security Threats/Attacks (ID#:14-2883)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6892537&isnumber=6892507
  • Khasawneh, M.; Agarwal, A, "A Survey On Security In Cognitive Radio Networks," Computer Science and Information Technology (CSIT), 2014 6th International Conference on, pp.64, 70, 26-27 March 2014. doi: 10.1109/CSIT.2014.6805980 Cognitive radio (CR) has been introduced to accommodate the steady increase in spectrum demand. In CR networks, unlicensed users, referred to as secondary users (SUs), are allowed to dynamically access frequency bands when licensed users, referred to as primary users (PUs), are inactive. One important technical area that has received little attention to date in cognitive radio systems is wireless security. New classes of security threats and challenges have been introduced in cognitive radio systems, and providing strong security may prove to be the most difficult aspect of making cognitive radio a long-term commercially viable concept. This paper addresses the main challenges, security attacks, and their mitigation techniques in cognitive radio networks. The attacks discussed are organized according to the protocol layer at which they operate.
    Keywords: cognitive radio; protocols; radio networks; telecommunication security; cognitive radio networks; long-term commercially-viable concept; mitigation techniques; protocol layer; security attacks; spectrum demand; wireless security; Authentication; Cognitive radio; Linear programming; Physical layer; Protocols; Sensors; Attack; Cognitive radio; Primary User (PU); Secondary User (SU); Security (ID#:14-2884)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805980&isnumber=6805962
  • Akin, S., "Security in Cognitive Radio Networks," Information Sciences and Systems (CISS), 2014 48th Annual Conference on, pp.1,6, 19-21 March 2014. doi: 10.1109/CISS.2014.6814188 In this paper, we investigate information-theoretic security by modeling a cognitive radio wiretap channel under quality-of-service (QoS) constraints and interference power limitations inflicted on primary users (PUs). We initially define four different transmission scenarios regarding channel sensing results and their correctness. We provide effective secure transmission rates at which a secondary eavesdropper is kept from listening to a secondary transmitter (ST). We then construct a channel state transition diagram that characterizes this channel model, and obtain the effective secure capacity, which describes the maximum constant buffer arrival rate under given QoS constraints. We find the optimal transmission power policies that maximize the effective secure capacity, and propose an algorithm that, in general, converges quickly to these optimal policy values. Finally, we show the performance levels and gains obtained under different channel conditions and scenarios, and emphasize, in particular, the significant effect of the hidden-terminal problem on information-theoretic security in cognitive radios.
    Keywords: cognitive radio; information theory; quality of service; radio transmitters; radiofrequency interference; telecommunication security; QoS; channel sensing; channel state transition; cognitive radio networks; constant buffer arrival rate; hidden-terminal problem; information-theoretic security; interference power limitations; optimal policy; optimal transmission power; primary users; quality of service; secondary eavesdropper; secondary transmitter; transmission rates; wiretap channel; Cognitive radio; Fading; Interference; Quality of service; Security; Sensors; Signal to noise ratio; Cognitive radio; effective capacity; information-theoretic security; quality of service (QoS) constraints (ID#:14-2885)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814188&isnumber=6814063
  • Liu, W.; Sarkar, M.Z.I; Ratnarajah, T., "On the Security Of Cognitive Radio Networks: Cooperative Jamming With Relay Selection," Networks and Communications (EuCNC), 2014 European Conference on, pp.1,5, 23-26 June 2014. doi: 10.1109/EuCNC.2014.6882674 We consider the problem of secret communication through a relay-assisted downlink cognitive interference channel, in which the secondary base station (SBS) is allowed to transmit simultaneously with the primary base station (PBS) over the same channel instead of waiting for an idle channel, as is traditional for a cognitive radio. We propose a cooperative jamming (CJ) scheme to improve the secrecy rate, in which multiple relays transmit weighted jamming signals to create additional interference in the direction of the eavesdropper with the purpose of confusing it. The proposed CJ scheme is designed to cancel out interference at the secondary receiver (SR) while keeping interference at the primary receiver (PR) under a certain threshold. Moreover, we develop an algorithm to select the effective relays that meet the target secrecy rate. Our results show that, with the help of the developed algorithm, a suitable CJ scheme can be designed to improve the secrecy rate at the SR to meet the target secrecy rate.
    Keywords: cognitive radio; jamming; relay networks (telecommunication); telecommunication security; cognitive radio networks security; cooperative jamming; primary base station; primary receiver; relay assisted downlink cognitive interference channel; relay selection; secondary base station; secondary receiver; Cognitive radio; Interference; Jamming; Physical layer; Relays; Scattering; Security; Cooperative jamming; cognitive radio network; secrecy rate (ID#:14-2886)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6882674&isnumber=6882614
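
A back-of-the-envelope sketch shows why cooperative jamming helps: the weighted jamming signals are designed to cancel at the secondary receiver while raising the eavesdropper's noise floor, so the gap between the two channels' rates, the secrecy rate, grows. The SNR numbers below are hypothetical, in linear scale.

    # Toy secrecy-rate arithmetic for cooperative jamming (illustrative
    # numbers; the paper optimizes relay weights under interference limits).
    from math import log2

    def secrecy_rate(snr_rx, snr_eve):
        return max(0.0, log2(1 + snr_rx) - log2(1 + snr_eve))

    snr_rx, snr_eve, jam_at_eve = 20.0, 15.0, 9.0

    no_jam = secrecy_rate(snr_rx, snr_eve)
    # jamming is nulled at the legitimate receiver, so only the
    # eavesdropper's effective SINR drops:
    with_jam = secrecy_rate(snr_rx, snr_eve / (1 + jam_at_eve))
    print(f"without jamming: {no_jam:.2f} b/s/Hz")   # ~0.39
    print(f"with jamming:    {with_jam:.2f} b/s/Hz") # ~3.07
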
  • Elkashlan, M.; Wang, L.; Duong, T.Q.; Karagiannidis, G.K.; Nallanathan, A, "On the Security of Cognitive Radio Networks," Vehicular Technology, IEEE Transactions on, vol. PP, no.99, pp.1, 1, September 2014. doi: 10.1109/TVT.2014.2358624 Cognitive radio has emerged as an essential recipe for future high-capacity, high-coverage multi-tier hierarchical networks. Securing data transmission in these networks is of utmost importance. In this paper, we consider the cognitive wiretap channel and propose multiple antennas to secure the transmission at the physical layer, where the eavesdropper overhears the transmission from the secondary transmitter to the secondary receiver. The secondary receiver and the eavesdropper are equipped with multiple antennas, and passive eavesdropping is considered, where the channel state information of the eavesdropper's channel is not available at the secondary transmitter. We present new closed-form expressions for the exact and asymptotic secrecy outage probability. Our results reveal the impact of the primary network on the secondary network in the presence of a multi-antenna wiretap channel.
    Keywords: Antennas; Cognitive radio; Interference; Radio transmitters; Receivers; Security; Signal to noise ratio (ID#:14-2887)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901288&isnumber=4356907
  • Safdar, G.A; Albermany, S.; Aslam, N.; Mansour, A; Epiphaniou, G., "Prevention Against Threats To Self Co-Existence - A Novel Authentication Protocol For Cognitive Radio Networks," Wireless and Mobile Networking Conference (WMNC), 2014 7th IFIP, pp.1, 6, 20-22 May 2014. doi: 10.1109/WMNC.2014.6878857 Cognitive radio networks are intelligent networks that can sense the environment and adapt their communication parameters accordingly. These networks find application in the coexistence of different wireless networks, interference mitigation, and dynamic spectrum access. Unlike traditional wireless networks, cognitive radio networks have their own set of unique security threats and challenges, such as selfish misbehaviour, self-coexistence, licensed user emulation, and attacks on spectrum managers; accordingly, the security protocols developed for these networks must be able to counter these attacks. This paper presents a novel cognitive authentication protocol, called CoG-Auth, aimed at providing security in cognitive radio networks against threats to self-coexistence. CoG-Auth does not require the presence of any resource-enriched base stations or centralised certification authorities, making it applicable to both infrastructure and ad hoc cognitive radio networks. The CoG-Auth design employs a key hierarchy, with temporary keys, partial keys, and session keys, to fulfil the fundamental requirements of security. CoG-Auth is compared with the IEEE 802.16e standard PKMv2 for performance analysis; it is shown that CoG-Auth is secure, more efficient, less computationally intensive, and performs better in terms of authentication time, successful authentications, and transmission rate.
    Keywords: cognitive radio; cryptographic protocols; CoG-Auth design; base stations; centralised certification authorities; cognitive radio networks; dynamic spectrum access; intelligent networks; interference mitigation; novel cognitive authentication protocol; security protocols; self co-existence; wireless networks; Authentication; Cognitive radio; Encryption; Protocols; Standards; Authentication; Cognitive Radio; Cryptography; Protocol; Security (ID#:14-2888)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6878857&isnumber=6878843
  • Savas, O.; Ahn, G.S.; Deng, J., "Securing Cognitive Radio Networks Against Belief Manipulation Attacks Via Trust Management," Collaboration Technologies and Systems (CTS), 2014 International Conference on, vol., no., pp.158,165, 19-23 May 2014. doi: 10.1109/CTS.2014.6867559 Cognitive Radio (CR) provides cognitive, self-organizing, and reconfiguration features. When forming a network, namely a Cognitive Radio Network (CRN), these features can further provide network agility and spectrum sharing. On the other hand, they also make the network much more vulnerable than traditional wireless networks, e.g., ad hoc wireless or sensor networks. In particular, malicious nodes may exploit the cognitive engine of CRs and conduct belief manipulation attacks to degrade network performance. Traditional security methods using cryptography or authentication cannot adequately address these attacks. In this paper, we propose the use of trust management for more robust CRN operation against belief manipulation attacks. Specifically, we first study the effects of malicious behaviors on network performance, define trust evaluation metrics to capture malicious behaviors, and illustrate how a trust management strategy can help enhance the robustness of network operations in various network configurations.
    Keywords: ad hoc networks; authorisation; cognitive radio; cryptography; radio spectrum management; CRN; ad hoc wireless; authentication; belief manipulation attacks; cognitive radio networks; cryptography; malicious nodes; network agility; security methods; sensor networks; spectrum sharing; trust management; Ad hoc networks; Authentication; Cognitive radio; Routing; Throughput; Uncertainty; Cognitive radio networks; belief manipulation attack; cross-layer networking; trust initialization; trust management (ID#:14-2889)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6867559&isnumber=6867522
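
As a concrete baseline for the trust metrics discussed above, many trust-management schemes maintain a beta-reputation score per node: counting consistent and inconsistent behaviors and taking the posterior mean. The sketch below is such a generic baseline under our own assumptions, not the authors' metric.

    # Generic beta-reputation trust score: mean of Beta(s+1, f+1).
    class Trust:
        def __init__(self):
            self.success = 0   # behaviors consistent with other evidence
            self.failure = 0   # inconsistent / suspected-malicious behaviors

        def update(self, behaved_well):
            if behaved_well:
                self.success += 1
            else:
                self.failure += 1

        @property
        def value(self):
            return (self.success + 1) / (self.success + self.failure + 2)

    node = Trust()
    for ok in [True, True, False, True, False, False, False]:
        node.update(ok)
    print(f"trust = {node.value:.2f}")   # ~0.44 and falling
    # a fusion rule can then discount or ignore reports from nodes whose
    # trust drops below a chosen threshold
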
  • Li Hongning; Pei Qingqi; Ma Lichuan, "Channel Selection Information Hiding Scheme For Tracking User Attack In Cognitive Radio Networks," Communications, China, vol.11, no.3, pp.125,136, March 2014. doi: 10.1109/CC.2014.6825265 Because primary users occupy spectrum discontinuously in cognitive radio networks (CRN), the time-varying nature of spectrum holes has become increasingly prominent. In this dynamic environment, cognitive users can access channels that are not occupied by primary users, but they must hand off to other spectrum holes to continue communication when primary users return, which introduces new security problems. The tracking user attack (TUA) is a typical attack during spectrum handoff: it invalidates handoff by preventing user access, and can bring down the whole network. In this paper, we propose a Channel Selection Information Hiding scheme (CSIH) to defend against TUA. With the proposed scheme, we can destroy the routes to the root node of the attack tree by hiding channel selection information, enhancing the security of cognitive radio networks.
    Keywords: cognitive radio; mobility management (mobile radio); radio spectrum management; tracking; CRN; CSIH; TUA; access channels; channel selection information hiding scheme; cognitive radio networks; root node; spectrum handoff; spectrum holes; tracking user attack; Channel estimation; Cognitive radio; Communication system security; Security; Tracking; Wireless sensor networks; attack tree; handoff; tracking user attack (ID#:14-2890)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825265&isnumber=6825249
  • Jung-Min Park; Reed, J.H.; Beex, AA; Clancy, T.C.; Kumar, V.; Bahrak, B., "Security and Enforcement in Spectrum Sharing," Proceedings of the IEEE, vol.102, no.3, pp.270,281, March 2014. doi: 10.1109/JPROC.2014.2301972 When different stakeholders share a common resource, such as the case in spectrum sharing, security and enforcement become critical considerations that affect the welfare of all stakeholders. Recent advances in radio spectrum access technologies, such as cognitive radios, have made spectrum sharing a viable option for significantly improving spectrum utilization efficiency. However, those technologies have also contributed to exacerbating the difficult problems of security and enforcement. In this paper, we review some of the critical security and privacy threats that impact spectrum sharing. We propose a taxonomy for classifying the various threats, and describe representative examples for each threat category. We also discuss threat countermeasures and enforcement techniques, which are discussed in the context of two different approaches: ex ante (preventive) and ex post (punitive) enforcement.
    Keywords: cognitive radio; radio spectrum management; telecommunication security; cognitive radios; enforcement techniques; ex ante enforcement; ex post enforcement; preventive enforcement; privacy threats; punitive enforcement; radio spectrum access technologies; security; spectrum sharing; spectrum utilization; stakeholders; taxonomy; threat category; Data privacy; Interference; Network security; Privacy; Radio spectrum management; Sensors; Cognitive radio; dynamic spectrum access; enforcement; security; spectrum sharing (ID#:14-2891)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6732887&isnumber=6740864
  • Sheng Zhong; Haifan Yao, "Towards Cheat-Proof Cooperative Relay for Cognitive Radio Networks," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.9, pp.2442, 2451, Sept. 2014 doi: 10.1109/TPDS.2013.151 In cognitive radio networks, cooperative relay is a new technology that can significantly improve spectrum efficiency. While the existing protocols for cooperative relay are very interesting and useful, there is a crucial problem that has not been investigated: Selfish users may cheat in cooperative relay, in order to benefit themselves. Here by cheating we mean the behavior of reporting misleading channel and payment information to the primary user and other secondary users. Such cheating behavior may harm other users and thus lead to poor system throughput. Given the threat of selfish users' cheating, our objective in this paper is to suppress the cheating behavior of selfish users in cooperative relay. Hence, we design the first cheat-proof scheme for cooperative relay in cognitive radio networks, and rigorously prove that under our scheme, selfish users have no incentive to cheat. Our design and analysis start in the model of strategic game for interactions among secondary users; then they are extended to the entire cooperative relay process, which is modeled as an extensive game. To make our schemes more practical, we also consider two aspects: fairness and system security. Results of extensive simulations demonstrate that our scheme suppresses cheating behavior and thus improves the system throughput in face of selfish users.
    Keywords: cognitive radio; cooperative communication; relay networks (telecommunication); cheat-proof cooperative relay; cognitive radio networks; fairness aspect; misleading channel; payment information; secondary users; strategic game; system security; Cognitive radio networks; cheat-proof; cooperative relay; fairness (ID#:14-2892)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6520841&isnumber=6873370
  • Rocca, P.; Quanjiang Zhu; Bekele, E.T.; Shiwen Yang; Massa, A, "4-D Arrays as Enabling Technology for Cognitive Radio Systems," Antennas and Propagation, IEEE Transactions on, vol.62, no.3, pp.1102, 1116, March 2014. doi: 10.1109/TAP.2013.2288109 Time-modulation (TM) in four-dimensional (4-D) arrays is implemented by using a set of radio-frequency switches in the beam forming network to modulate, by means of periodic pulse sequences, the static excitations and thus control the antenna radiation features. The on-off reconfiguration of the switches, that can be easily implemented via software, unavoidably generates harmonic radiations that can be suitably exploited for multiple channel communication purposes. As a matter of fact, harmonic beams can be synthesized having different spatial distribution and shapes in order to receive signals arriving on the antenna from different directions. Similarly, the capability to generate a field having different frequency and spatial distribution implies that the signal transmitted by time-modulated 4-D arrays is direction-dependent. Accordingly, such a feature is also exploited to implement a secure communication scheme directly at the physical layer. Thanks to the easy software-based reconfigurability, the multiple harmonic beamforming, and the security capability, 4-D arrays can be considered as an enabling technology for future cognitive radio systems. In this paper, these potentialities of time-modulated 4-D arrays are presented and their effectiveness is supported by a set of representative numerical simulation results.
    Keywords: MIMO communication; antenna arrays; antenna radiation patterns; array signal processing; cognitive radio; modulation; telecommunication security; time-domain analysis; antenna radiation features; beam forming network; four-dimensional arrays; future cognitive radio systems; harmonic beams; harmonic radiations; multiple channel communication purposes; multiple harmonic beamforming; on-off reconfiguration; periodic pulse sequences; physical layer; radiofrequency switches; secure communication scheme; security capability; software-based reconfigurability; spatial distribution; static excitations; time-modulated 4-D arrays; time modulation; Antenna arrays; Directive antennas; Harmonic analysis; Optimization; Radio frequency; Receiving antennas; 4-D arrays; cognitive radio; harmonic beamforming; multiple-input multiple-output (MIMO); reconfigurability; secure communications; time-modulated arrays (ID#:14-2893)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6651739&isnumber=6750022
  • Heuel, S.; Roessler, A, "Coexistence of S-Band Radar and 4G Mobile Networks," Radar Symposium (IRS), 2014 15th International, pp.1,4, 16-18 June 2014. doi: 10.1109/IRS.2014.6869236 Today's wireless networks and radar systems are designed to obey a fixed spectrum assignment policy regulated by the Federal Communications Commission (FCC). The assignment served well in the past, but sparse or medium utilization of some frequencies now confronts heavy usage of others. This ineffective spectrum allocation contradicts the dramatically increasing need for bandwidth of security systems like radar or mobile networks, and has driven the evolution of intelligent radios applying dynamic spectrum access, i.e., cognitive radio. To underline the demand for dynamic spectrum allocation, this paper addresses coexistence between S-Band Air Traffic Control (ATC) radar systems and LTE mobiles operating in E-UTRA Band 7. Technical requirements for radar and mobile devices operating close to each other are addressed, and coexistence is validated by tests and measurements performed at a major German airport. It is shown that throughput reduction and increased Block Error Rate (BLER) of the mobile radios, as well as a reduction of the probability of detection Pd of the security-relevant S-Band radar, occur in the presence of the other service.
    Keywords: Long Term Evolution; air traffic control; military radar; radio spectrum management; radiofrequency interference; 4G mobile networks; E-UTRA band 7; Federal Communications Commission; LTE mobile radio; S-band air traffic control radar systems; S-band radar; block error rate; cognitive radio; dynamic spectrum access; dynamic spectrum allocation; fixed spectrum assignment policy; ineffective spectrum allocation; intelligent radios; radar-mobile radio coexistence; security system bandwidth; wireless network; Interference; Meteorological radar; Mobile communication; Radar antennas; Radar measurements; Throughput; 4G Networks; ATC Radar; ATS Radar; Coexistence; LTE; S-Band; WiMAX (ID#:14-2894)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6869236&isnumber=6869176
  • Dabcevic, K.; Betancourt, A; Marcenaro, L.; Regazzoni, C.S., "A Fictitious Play-Based Game-Theoretical Approach To Alleviating Jamming Attacks For Cognitive Radios," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp.8158,8162, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6855191 The on-the-fly reconfigurability and learning capabilities of Cognitive Radios inherently bring a set of new security issues. One of them is intelligent radio frequency jamming, where an adversary is able to deploy advanced jamming strategies to degrade the performance of the communication system. In this paper, we observe the jamming/anti-jamming problem from a game-theoretical perspective. A game with incomplete information on the opponent's payoff and strategy is modelled as a Markov Decision Process (MDP). A variant of the fictitious play learning algorithm is deployed to find optimal strategies in terms of a combination of channel hopping and power alteration anti-jamming schemes.
    Keywords: Markov processes; cognitive radio; game theory; jamming; MDP; Markov decision process; channel hopping; cognitive radios; fictitious play-based game-theoretical approach; intelligent radio frequency jamming attack; power alteration anti-jamming scheme; Cognitive radio; Games; Interference; Jamming; Radio transmitters; Stochastic processes; Markov models; anti-jamming; channel surfing; cognitive radio; fictitious play; game theory; jamming; power alteration (ID#:14-2895)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855191&isnumber=6853544
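
The core of fictitious play is easy to see in code: keep empirical counts of the opponent's observed actions and best-respond to the mixed strategy those counts imply. The Python sketch below is a minimal illustration assuming a toy payoff matrix and a naive jammer model; it is not the paper's MDP formulation, which additionally covers power alteration.

    # Minimal fictitious-play sketch for channel-hopping anti-jamming.
    # The payoff matrix and jammer behavior are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_channels = 4
    # Assumed payoff for the transmitter: +1 off the jammed channel, -1 on it.
    payoff = np.ones((n_channels, n_channels)) - 2 * np.eye(n_channels)

    jammer_counts = np.ones(n_channels)          # empirical counts of jammer actions
    for _ in range(1000):
        belief = jammer_counts / jammer_counts.sum()   # empirical mixed strategy
        tx_channel = int(np.argmax(payoff @ belief))   # best response to the belief
        # A naive jammer that sometimes targets the transmitter's current channel:
        jam_channel = tx_channel if rng.random() < 0.3 else int(rng.integers(n_channels))
        jammer_counts[jam_channel] += 1                # update the empirical model
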
  • YuanYuan He; Evans, J.; Dey, S., "Secrecy Rate Maximization For Cooperative Overlay Cognitive Radio Networks With Artificial Noise," Communications (ICC), 2014 IEEE International Conference on, pp.1663,1668, 10-14 June 2014. doi: 10.1109/ICC.2014.6883561 We consider physical-layer security in a novel MISO cooperative overlay cognitive radio network (CRN) with a single eavesdropper. We aim to design an artificial noise (AN) aided secondary transmit strategy to maximize the joint achievable secrecy rate of both primary and secondary links, subject to a global secondary transmit power constraint and the guarantee that any secondary transmission at least does not degrade the receive quality of the primary network, under the assumption that global CSI is available. The resulting optimization problem is challenging to solve due to its non-convexity in general. A computationally efficient approximation methodology is proposed based on the semidefinite relaxation (SDR) technique, followed by a two-step alternating optimization algorithm for obtaining a local optimum of the corresponding SDR problem. This optimization algorithm consists of a one-dimensional line search and a non-convex optimization problem which, through a novel reformulation, can be approximated as a convex semidefinite program (SDP). An analysis of the extension to the multiple-eavesdropper scenario is also provided. Simulation results show that the proposed AN-aided joint secrecy rate maximization design (JSRMD) can significantly boost the secrecy performance over JSRMD without AN.
    Keywords: cognitive radio; concave programming; convex programming; cooperative communication; overlay networks; radio links; radio networks; telecommunication security; AN aided secondary power transmission strategy; AN-aided joint secrecy rate maximization design; JSRMD; MISO cooperative overlay CRN; SDR technique; artificial noise; computationally efficient approximation methodology; convex semidefinite relaxation program; cooperative overlay cognitive radio networks; global CSI; nonconvex optimization problem; physical layer security; primary links; secondary links; single eavesdropper; two-step alternating optimization algorithm; Approximation algorithms; Approximation methods; Cognitive radio; Interference; Jamming; Optimization; Vectors; Overlay Cognitive Radio; amplify-and-forward relaying; artificial interference; physical-layer security; semidefinite relaxation (ID#:14-2896)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883561&isnumber=6883277
  • Kumar, V.; Jung-Min Park; Clancy, T.C.; Kaigui Bian, "PHY-Layer Authentication Using Hierarchical Modulation And Duobinary Signaling," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp.782,786, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785436 In a cognitive radio network, the non-conforming behavior of rogue transmitters is a major threat to opportunistic spectrum access. One approach for facilitating spectrum enforcement and security is to require every transmitter to embed a uniquely-identifiable authentication signal in its waveform at the PHY-layer. In existing PHY-layer authentication schemes, known as blind signal superposition, the authentication/identification signal is added to the message signal as noise, which leads to a tradeoff between the message signal's signal-to-noise ratio (SNR) and the authentication signal's SNR under the assumption of constant average transmitted power. This implies that one cannot improve the former without sacrificing the latter, and vice versa. In this paper, we propose a novel PHY-layer authentication scheme called hierarchically modulated duobinary signaling for authentication (HM-DSA). HM-DSA introduces a controlled amount of inter-symbol interference (ISI) into the message signal. The redundancy induced by the addition of the controlled ISI is utilized to embed the authentication signal. Our scheme, HM-DSA, relaxes the constraint on the aforementioned tradeoff and improves the error performance of the message signal as compared to the prior art.
    Keywords: message authentication; radio spectrum management; telecommunication signalling; ISI; PHY-layer authentication; SNR; authentication-identification signal; blind signal superposition; cognitive radio network; duobinary signaling; hierarchical modulation; intersymbol interference; message signal; opportunistic spectrum access; rogue transmitters; signal-to-noise; spectrum enforcement; spectrum security; uniquely-identifiable authentication signal; Authentication; Constellation diagram; Euclidean distance; Radio transmitters; Receivers; Signal to noise ratio (ID#:14-2897)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785436&isnumber=6785290
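
Classic duobinary signaling, the controlled-ISI building block that HM-DSA adapts, takes only a few lines: a precoder prevents error propagation, and each transmitted symbol is the sum of two adjacent antipodal levels. The Python below sketches plain duobinary encoding and decoding, not the HM-DSA authentication embedding itself.

    # Sketch of classic duobinary signaling (controlled ISI): y[n] = x[n] + x[n-1].
    import numpy as np

    def duobinary_encode(bits):
        # Differential precoding avoids error propagation at the receiver.
        pre, prev = [], 0
        for b in bits:
            prev = b ^ prev
            pre.append(prev)
        levels = 2 * np.array(pre) - 1           # map {0,1} -> {-1,+1}
        padded = np.concatenate(([-1], levels))  # initial precoder state -> -1
        return padded[1:] + padded[:-1]          # three-level symbols {-2, 0, +2}

    def duobinary_decode(symbols):
        # With this precoder, y == 0 decodes to bit 1 and |y| == 2 to bit 0.
        return (np.asarray(symbols) == 0).astype(int)

    bits = np.array([1, 0, 1, 1, 0])
    assert (duobinary_decode(duobinary_encode(bits)) == bits).all()
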
  • ChunSheng Xin; Song, M., "Detection of PUE Attacks in Cognitive Radio Networks Based on Signal Activity Pattern," Mobile Computing, IEEE Transactions on, vol.13, no.5, pp.1022, 1034, May 2014. doi: 10.1109/TMC.2013.121 Promising to significantly improve spectrum utilization, cognitive radio networks (CRNs) have attracted great attention in the literature. Nevertheless, a new security threat known as the primary user emulation (PUE) attack poses a great challenge to CRNs. The PUE attack is unique to CRNs and can cause severe denial of service (DoS) to CRNs. In this paper, we propose a novel PUE detection system, termed the Signal activity Pattern Acquisition and Reconstruction System. Different from current solutions for PUE detection, the proposed system does not need any a priori knowledge of primary users (PUs), and has no limitation on the type of PUs to which it is applicable. It acquires the activity pattern of a signal through spectrum sensing, such as the ON and OFF periods of the signal. Then it reconstructs the observed signal activity pattern through a reconstruction model. By examining the reconstruction error, the proposed system can smartly distinguish the signal activity pattern of a PU from that of an attacker. Numerical results show that the proposed system has excellent performance in detecting PUE attacks.
    Keywords: cognitive radio; computer network security; radio spectrum management; CRN; DoS; PUE attacks; PUE detection system; cognitive radio networks; denial of service; primary user emulation; signal activity pattern acquisition and reconstruction system; spectrum sensing; spectrum utilization; Data models; Probability distribution; Radio transmitters; Sensors; Training; Training data; Cognitive radio network; primary user emulation attack; primary user emulation detection (ID#:14-2898)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6819890&isnumber=6819877
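
Stripped to its essentials, the detection idea is to model the claimed primary user's ON/OFF activity and flag transmitters whose observed pattern reconstructs poorly under that model. The sketch below substitutes a simple exponential duration model and a fixed threshold margin for the paper's learned reconstruction model, so every modeling choice here is an assumption.

    # Simplified stand-in for signal-activity-pattern PUE detection.
    import numpy as np

    def fit_rate(durations):
        return 1.0 / np.mean(durations)          # MLE rate of an exponential model

    def pattern_error(durations, rate):
        # Mean negative log-likelihood, used as a reconstruction-error proxy.
        return float(np.mean(rate * np.asarray(durations) - np.log(rate)))

    trusted_on = np.random.default_rng(1).exponential(scale=2.0, size=500)
    rate = fit_rate(trusted_on)                  # learn the PU's ON-period model
    threshold = pattern_error(trusted_on, rate) * 1.2   # assumed margin

    suspect_on = np.random.default_rng(2).exponential(scale=6.0, size=50)
    is_attacker = pattern_error(suspect_on, rate) > threshold   # True here
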
  • Songjun Ma; Yunfeng Peng; Tao Wang; Xiaoying Gan; Feng Yang; Xinbing Wang; Guizani, M., "Detecting the Greedy Spectrum Occupancy Threat In Cognitive Radio Networks," Communications (ICC), 2014 IEEE International Conference on, pp.4939,4944, 10-14 June 2014. doi: 10.1109/ICC.2014.6884103 Recently, the security of cognitive radio (CR) has become a severe issue. One kind of threat, which we call the greedy spectrum occupancy threat (GSOT) in this paper, has long been ignored in previous work. In a GSOT, a secondary user may selfishly occupy the spectrum for a long time, which makes other users suffer additional waiting time in the queue to access the spectrum and leads to congestion or breakdown. In this paper, a queueing model is established to describe the system with a greedy secondary user (GSU). Based on this model, the impacts of the GSU on the system are evaluated. Numerical results indicate that the steady-state performance of the system is influenced not only by the average occupancy time, but also by the number of users as well as the number of channels. Since a sudden change in the average occupancy time of the GSU would produce dramatic performance degradation, the greedy secondary user prefers to increase its occupancy time gradually to avoid easy detection. Once it reaches its targeted occupancy time, the system will be in steady state, and the performance will be degraded. In order to detect such cunning behavior as quickly as possible, we propose a wavelet-based detection approach. Simulation results are presented to demonstrate the effectiveness and speed of the proposed approach.
    Keywords: cognitive radio; greedy algorithms; telecommunication security; wavelet transforms; CR security; GSOT; GSU; cognitive radio networks; greedy secondary user; greedy spectrum occupancy threat detection; occupancy time; steady-state performance; wavelet based detection approach; Cognitive radio; Educational institutions; Queueing analysis; Security; Steady-state; Transforms (ID#:14-2899)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884103&isnumber=6883277
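
A wavelet detector of this kind looks for abrupt energy in the detail coefficients of the occupancy-time series. The Python sketch below uses an undecimated level-1 Haar detail to localize a jump in average occupancy time; the paper's detector is wavelet-based, but its exact transform, scales, and threshold are not reproduced here.

    # Locate a change in average spectrum-occupancy time with Haar details.
    import numpy as np

    def haar_detail(x):
        x = np.asarray(x, dtype=float)
        return (x[1:] - x[:-1]) / np.sqrt(2.0)   # undecimated level-1 detail

    rng = np.random.default_rng(3)
    occupancy = np.concatenate([
        rng.normal(1.0, 0.1, 128),   # well-behaved secondary user
        rng.normal(3.0, 0.1, 128),   # greedy user raises its occupancy time
    ])
    detail = haar_detail(occupancy)
    change_index = int(np.argmax(np.abs(detail))) + 1   # ~128, the change point
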
  • Kabir, I; Astaneh, S.; Gazor, S., "Forensic Outlier Detection for Cognitive Radio Networks," Communications (QBSC), 2014 27th Biennial Symposium on, vol., no., pp.52, 56, 1-4 June 2014. doi: 10.1109/QBSC.2014.6841183 We consider forensic outlier detection instead of traditional outlier detection to enforce spectrum security in a Cognitive Radio Network (CRN). We investigate a CRN where a group of sensors report their local binary decisions to a Fusion Center (FC), which makes a global decision on the availability of the spectrum. To ensure the truthfulness of the sensors, we examine the reported decisions in order to determine whether a specific sensor is an outlier. We propose several optimal detectors (for known parameters) and suboptimal detectors (for the practical cases where the parameters are unknown) to detect three types of outlier sensors: 1) a selfish sensor, which reports the spectrum as occupied when it locally detects vacancy; 2) a malicious sensor, which reports the spectrum as vacant when it locally detects occupancy; 3) a malfunctioning sensor, whose reports are not accurate enough (i.e., its performance is close to random guessing). We evaluate the proposed detectors by simulations. Our simulation results reveal that the proposed detectors significantly outperform Grubbs' test. Since the unknown or untrustworthy parameters are accurately estimated by the FC, the proposed suboptimal detectors do not require knowledge of the spectrum statistics and are insensitive to the parameters reported by the suspected user. These detectors can be used by government agencies for forensic testing in policy control and abuser identification in CRNs.
    Keywords: cognitive radio; decision theory; radio networks; sensor fusion; signal detection; telecommunication security; CRN; FC; Grubbs test; abuser identification; cognitive radio networks; forensic outlier detection; forensic testing; fusion center; local binary decisions; malfunctioning sensor; malicious sensor; optimal detectors; outlier sensors; policy control; selfish sensor; spectrum security; spectrum statistics; suboptimal detectors; Availability; Cognitive radio; Detectors; Forensics; Maximum likelihood estimation; Simulation; Cognitive radio; forensic cognitive detection; outlier detection; policy enforcement; spectrum security (ID#:14-2900)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841183&isnumber=6841165
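
For readers unfamiliar with the baseline the authors outperform, Grubbs' test flags the single report farthest from the sample mean, scaled by the sample standard deviation. A minimal Python version is sketched below, applied to assumed per-sensor detection rates; the paper's own likelihood-based detectors are not shown.

    # Grubbs' outlier test (the paper's baseline), on per-sensor detection rates.
    import numpy as np
    from scipy import stats

    def grubbs_outlier(x, alpha=0.05):
        x = np.asarray(x, dtype=float)
        n = len(x)
        g = np.abs(x - x.mean()).max() / x.std(ddof=1)   # Grubbs statistic
        t = stats.t.ppf(1 - alpha / (2 * n), n - 2)      # critical t-value
        g_crit = ((n - 1) / np.sqrt(n)) * np.sqrt(t**2 / (n - 2 + t**2))
        return bool(g > g_crit), int(np.argmax(np.abs(x - x.mean())))

    # Hypothetical detection rates from ten sensors; the last one looks selfish.
    rates = np.array([0.91, 0.88, 0.93, 0.90, 0.89, 0.92, 0.94, 0.90, 0.91, 0.35])
    flagged, which = grubbs_outlier(rates)               # -> True, 9
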
  • Alvarado, A; Scutari, G.; Jong-Shi Pang, "A New Decomposition Method for Multiuser DC-Programming and Its Applications," Signal Processing, IEEE Transactions on, vol.62, no.11, pp.2984, 2998, June 1, 2014. doi: 10.1109/TSP.2014.2315167 We propose a novel decomposition framework for the distributed optimization of Difference Convex (DC)-type nonseparable sum-utility functions subject to coupling convex constraints. A major contribution of the paper is to develop for the first time a class of (inexact) best-response-like algorithms with provable convergence, where a suitably convexified version of the original DC program is iteratively solved. The main feature of the proposed successive convex approximation method is its decomposability structure across the users, which leads naturally to distributed algorithms in the primal and/or dual domain. The proposed framework is applicable to a variety of multiuser DC problems in different areas, ranging from signal processing to communications and networking. As a case study, in the second part of the paper we focus on two examples, namely: i) a novel resource allocation problem in the emerging area of cooperative physical layer security and ii) the renowned sum-rate maximization of MIMO Cognitive Radio networks. Our contribution in this context is to devise a class of easy-to-implement distributed algorithms with provable convergence to a stationary solution of such problems. Numerical results show that the proposed distributed schemes reach performance close to (and sometimes better than) that of centralized methods.
    Keywords: MIMO communication; approximation theory; cognitive radio; convex programming; cooperative communication; distributed algorithms; iterative methods; multiuser detection; resource allocation; telecommunication security; MIMO cognitive radio networks; best-response-like algorithms; convex constraint coupling; cooperative physical layer security; decomposability structure; decomposition method; difference convex-type nonseparable sum-utility functions; distributed algorithms; distributed optimization; inexact algorithms; multiuser DC-programming; novel resource allocation problem; renowned sum-rate maximization; signal processing; successive convex approximation method; Approximation methods; Convergence; Couplings; Jamming; Linear programming; Optimization; Signal processing algorithms; Cooperative physical layer security; cognitive radio; difference convex program; distributed algorithms; successive convex approximation (ID#:14-2901)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781556&isnumber=6809867

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Compiler Security

Compiler Security


Much of software security focuses on applications, but compiler security should also be an area of concern. Compilers can "correct" secure coding in the name of efficient processing. The works cited here look at various approaches and issues in compiler security. These articles appeared in the first half of 2014.

  • Bayrak, A; Regazzoni, F.; Novo Bruna, D.; Brisk, P.; Standaert, F.; Ienne, P., "Automatic Application of Power Analysis Countermeasures," Computers, IEEE Transactions on, vol. PP, no. 99, pp.1,1, Jan 2014. doi: 10.1109/TC.2013.219 We introduce a compiler that automatically inserts software countermeasures to protect cryptographic algorithms against power-based side-channel attacks. The compiler first estimates which instruction instances leak the most information through side-channels. This information is obtained either by dynamic analysis, evaluating an information theoretic metric over the power traces acquired during the execution of the input program, or by static analysis. As information leakage implies a loss of security, the compiler then identifies (groups of) instruction instances to protect with a software countermeasure such as random precharging or Boolean masking. As software protection incurs significant overhead in terms of cryptosystem runtime and memory usage, the compiler protects the minimum number of instruction instances to achieve a desired level of security. The compiler is evaluated on two block ciphers, AES and Clefia; our experiments demonstrate that the compiler can automatically identify and protect the most important instruction instances. To date, these software countermeasures have been inserted manually by security experts, who are not necessarily the main cryptosystem developers. Our compiler offers significant productivity gains for cryptosystem developers who wish to protect their implementations from side-channel attacks.
    Keywords: Assembly; Computers; Cryptography; Sensitivity; Software; Automatic Programming; Physical security (ID#:14-2705)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6671593&isnumber=4358213
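
First-order Boolean masking, one of the countermeasures this compiler inserts, splits each sensitive value into random shares so that no single intermediate value correlates with the secret. The Python sketch below illustrates the idea on bytes; it is conceptual only, since the paper's compiler operates on assembly-level instruction instances.

    # Conceptual first-order Boolean masking: secret = share0 XOR share1.
    import secrets

    def mask_byte(value):
        m = secrets.randbits(8)          # fresh uniform mask
        return value ^ m, m              # neither share alone reveals the value

    def masked_xor(a, b):
        # XOR can be computed share-wise without recombining the secrets.
        (a0, a1), (b0, b1) = a, b
        return a0 ^ b0, a1 ^ b1

    c0, c1 = masked_xor(mask_byte(0x3C), mask_byte(0x5A))
    assert c0 ^ c1 == 0x3C ^ 0x5A        # unmasking recovers the true result
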
  • Yier Jin, "EDA Tools Trust Evaluation Through Security Property Proofs," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, vol., no., pp.1,4, 24-28 March 2014. doi: 10.7873/DATE.2014.260 The security concerns of EDA tools have long been ignored because IC designers and integrators focus only on their functionality and performance. This lack of trusted EDA tools hampers hardware security researchers' efforts to design trusted integrated circuits. To address this concern, a novel EDA tools trust evaluation framework has been proposed to ensure the trustworthiness of EDA tools through their functional operation, rather than by scrutinizing the software code. As a result, the newly proposed framework lowers the evaluation cost and is a better fit for hardware security researchers. To support the EDA tools evaluation framework, a new gate-level information assurance scheme is developed for security property checking on any gate-level netlist. Helped by the gate-level scheme, we expand the territory of proof-carrying based IP protection from RT-level designs to gate-level netlists, so that most of the commercially traded third-party IP cores are under the protection of proof-carrying based security properties. Using a sample AES encryption core, we successfully prove the trustworthiness of Synopsys Design Compiler in generating a synthesized netlist.
    Keywords: cryptography; electronic design automation; integrated circuit design; AES encryption core; EDA tools trust evaluation; Synopsys design compiler; functional operation; gate-level information assurance scheme; gate-level netlist; hardware security researchers; proof-carrying based IP protection; security property proofs; software code; third-party IP cores; trusted integrated circuits; Hardware; IP networks; Integrated circuits; Logic gates; Sensitivity; Trojan horses (ID#:14-2706)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800461&isnumber=6800201
  • Woodruff, J.; Watson, R.N.M.; Chisnall, D.; Moore, S.W.; Anderson, J.; Davis, B.; Laurie, B.; Neumann, P.G.; Norton, R.; Roe, M., "The CHERI Capability Model: Revisiting RISC In An Age Of Risk," Computer Architecture (ISCA), 2014 ACM/IEEE 41st International Symposium on, vol., no., pp.457,468, 14-18 June 2014. doi: 10.1109/ISCA.2014.6853201 Motivated by contemporary security challenges, we reevaluate and refine capability-based addressing for the RISC era. We present CHERI, a hybrid capability model that extends the 64-bit MIPS ISA with byte-granularity memory protection. We demonstrate that CHERI enables language memory model enforcement and fault isolation in hardware rather than software, and that the CHERI mechanisms are easily adopted by existing programs for efficient in-program memory safety. In contrast to past capability models, CHERI complements, rather than replaces, the ubiquitous page-based protection mechanism, providing a migration path towards deconflating data-structure protection and OS memory management. Furthermore, CHERI adheres to a strict RISC philosophy: it maintains a load-store architecture and requires only single-cycle instructions, and supplies protection primitives to the compiler, language runtime, and operating system. We demonstrate a mature FPGA implementation that runs the FreeBSD operating system with a full range of software and an open-source application suite compiled with an extended LLVM to use CHERI memory protection. A limit study compares published memory safety mechanisms in terms of instruction count and memory overheads. The study illustrates that CHERI is performance-competitive even while providing assurance and greater flexibility with simpler hardware.
    Keywords: field programmable gate arrays; operating systems (computers); reduced instruction set computing; security of data; CHERI hybrid capability model; CHERI memory protection; FPGA implementation; FreeBSD operating system; MIPS ISA; OS memory management; RISC era; byte-granularity memory protection; capability hardware enhanced RISC instruction; compiler; data-structure protection; fault isolation; field programmable gate array; in-program memory safety; instruction count; instruction set architecture; language memory model enforcement; language runtime; load-store architecture; memory overhead; open-source application suite; reduced instruction set computing; single-cycle instructions; ubiquitous page-based protection mechanism; Abstracts; Coprocessors; Ground penetrating radar; Registers; Safety (ID#:14-2707)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853201&isnumber=6853187
  • Barbosa, C.E.; Trindade, G.; Epelbaum, V.J.; Gomes Chang, J.; Oliveira, J.; Rodrigues Neto, J.A; Moreira de Souza, J., "Challenges on Designing A Distributed Collaborative UML Editor," Computer Supported Cooperative Work in Design (CSCWD), Proceedings of the 2014 IEEE 18th International Conference on, pp.59,64, 21-23 May 2014. doi: 10.1109/CSCWD.2014.6846817 Software development projects with geographically disperse teams, especially when they use UML models for code generation, may gain performance by using tools with collaborative capabilities. This study reviews the distributed collaborative UML editors available in the literature. The UML editors were compared using a Workstyle Model. We then discuss the fundamental challenges that these kinds of UML editors face in assisting distributed developers and stakeholders across disperse locations.
    Keywords: Unified Modeling Language; groupware; program compilers; project management; software development management; UML models; Workstyle model; code generation; collaborative capabilities; distributed collaborative UML editors; geographically disperse teams; software development projects; Collaboration; Real-time systems; Security; Software; Synchronization; Syntactics; Unified modeling language; UML; challenges; comparison; editor; review (ID#:14-2708)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846817&isnumber=6846800
  • Larsen, P.; Brunthaler, S.; Franz, M., "Security through Diversity: Are We There Yet?," Security & Privacy, IEEE, vol.12, no.2, pp.28,35, Mar.-Apr. 2014. doi: 10.1109/MSP.2013.129 Because most software attacks rely on predictable behavior on the target platform, mass distribution of identical software facilitates mass exploitation. Countermeasures include moving-target defenses in general and biologically inspired artificial software diversity in particular. Although the concept of software diversity has interested researchers for more than 20 years, technical obstacles prevented its widespread adoption until now. Massive-scale software diversity has become practical due to the Internet (enabling distribution of individualized software) and cloud computing (enabling the computational power to perform diversification). In this article, the authors take stock of the current state of software diversity research. The potential showstopper issues are mostly solved; the authors describe the remaining issues and point to a realistic adoption path.
    Keywords: cloud computing; security of data; software engineering; Internet; biologically inspired artificial software diversity; cloud computing; mass exploitation; mass identical software distribution; massive-scale software diversity; moving-target defenses; predictable behavior; security; software attacks; target platform; Computer crime; Computer security; Internet; Memory management; Prediction methods; Program processors; Runtime environment; Software architecture; compilers; error handling and recovery; programming languages; software engineering; system issues; testing and debugging (ID#:14-2709)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6617633&isnumber=6798534
  • Agosta, G.; Barenghi, A; Pelosi, G.; Scandale, M., "A Multiple Equivalent Execution Trace Approach To Secure Cryptographic Embedded Software," Design Automation Conference (DAC), 2014 51st ACM/EDAC/IEEE, pp.1,6, 1-5 June 2014. doi: 10.1109/DAC.2014.6881537 We propose an efficient and effective method to secure software implementations of cryptographic primitives on low-end embedded systems, against passive side-channel attacks relying on the observation of power consumption or electro-magnetic emissions. The proposed approach exploits a modified LLVM compiler toolchain to automatically generate a secure binary characterized by a randomized execution flow. Also, we provide a new method to refresh the random values employed in the share splitting approaches to lookup table protection, addressing a currently open issue. We improve the current state-of-the-art in dynamic executable code countermeasures removing the requirement of a writeable code segment, and reducing the countermeasure overhead.
    Keywords: cryptography; embedded systems; program compilers; table lookup; LLVM compiler toolchain; countermeasure overhead reduction; cryptographic embedded software security; cryptographic primitives; dynamic executable code countermeasures; electromagnetic emissions; lookup table protection; low-end embedded systems; multiple equivalent execution trace approach; passive side-channel attacks; power consumption observation; random values; randomized execution flow; share splitting approach; writeable code segment; Ciphers; Optimization; Power demand; Registers; Software; Power Analysis Attacks; Software Countermeasures; Static Analysis (ID#:14-2710)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881537&isnumber=6881325
  • Calvagna, A; Fornaia, A; Tramontana, E., "Combinatorial Interaction Testing of a Java Card Static Verifier," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, pp.84,87, March 31 2014-April 4 2014. doi: 10.1109/ICSTW.2014.10 We present a combinatorial interaction testing approach to perform validation testing of a fundamental component for the security of Java Cards: the byte code verifier. Combinatorial testing of all states of the Java Card virtual machine has been adopted as the coverage criterion. We developed a formal model of the Java Card byte code syntax to enable the combinatorial enumeration of well-formed states, and a formal model of the byte code semantic rules to be able to distinguish between well-typed and ill-typed states, and to derive actual test programs from them. A complete framework has been implemented, enabling fully automated application and evaluation of the conformance tests for any verifier implementation.
    Keywords: Java; combinatorial mathematics; formal verification; operating systems (computers); program compilers; program testing; virtual machines; Java card byte code syntax; Java card static verifier; Java card virtual machine; byte code semantic rules; byte code verifier; combinatorial enumeration; combinatorial interaction testing; formal model; test programs; validation testing; Java; Law; Load modeling; Semantics; Testing; Virtual machining; Java virtual machine; combinatorial interaction testing; software engineering (ID#:14-2711)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825642&isnumber=6825623
  • Hu Ge; Li Ting; Dong Hang; Yu Hewei; Zhang Miao, "Malicious Code Detection for Android Using Instruction Signatures," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, vol., no., pp.332,337, 7-11 April 2014. doi: 10.1109/SOSE.2014.48 This paper provides an overview of current static analysis technology for Android malicious code, and a detailed analysis of the APK format (the Android application package) and its executable file format (DEX). From the perspective of the binary sequence, the Dalvik VM file is segmented by method, and test samples are analyzed by automated DEX file parsing tools and the Levenshtein distance algorithm, which can effectively detect malicious Android applications that contain the same signatures. As demonstrated on a large number of samples, this static detection system based on signature sequences can not only detect malicious code quickly, but also achieves very low rates of false positives and false negatives.
    Keywords: Android (operating system); digital signatures; program compilers; program diagnostics; APK format; Android malicious code detection; Android platform executable file; Dalvik VM file; Levenshtein distance algorithm; automated DEX file parsing tools; binary sequence; instruction signatures; malicious Android applications detection; signature sequences; static analysis technology; static detection system; Libraries; Malware; Mobile communication; Smart phones; Software; Testing; Android; DEX; Static Analysis; malicious code (ID#:14-2712)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830926&isnumber=6825948
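
The Levenshtein distance at the heart of this detector is the minimum number of single-character edits that turn one signature string into another. Below is a standard dynamic-programming implementation in Python; the opcode strings and the similarity threshold are invented for illustration and are not taken from the paper.

    # Levenshtein (edit) distance between two instruction-signature strings.
    def levenshtein(a, b):
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    sig_known = "invoke-static move-result invoke-virtual"
    sig_sample = "invoke-static move-result-object invoke-virtual"
    similar = levenshtein(sig_known, sig_sample) <= 10     # assumed threshold
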

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Compressive Sampling

Compressive Sampling


Compressive sampling (or compressive sensing) is an important theory in signal processing. It allows efficient acquisition and reconstruction of a signal and may also be the basis for user identification. The works cited here were published or presented between January and August of 2014.

  • Wei Wang; Xiao-Yi Pan; Yong-Cai Liu; De-Jun Feng; Qi-Xiang Fu, "Sub-Nyquist Sampling Jamming Against ISAR With Compressive Sensing," Sensors Journal, IEEE, vol.14, no.9, pp.3131,3136, Sept. 2014. doi: 10.1109/JSEN.2014.2323978 The Shannon-Nyquist theorem indicates that under-sampling at low rates will lead to aliasing in the frequency domain of a signal, which can be utilized in electronic warfare. However, the question is whether this still works when a compressive sensing (CS) algorithm is applied to reconstruction of the target. This paper concerns sub-Nyquist sampled jamming signals and their corresponding influence on inverse synthetic aperture radar (ISAR) imaging via CS. Results show that multiple deceptive false-target images with finer resolution are induced after the sub-Nyquist sampled jamming signals are processed with a CS-based reconstruction algorithm; hence, sub-Nyquist sampling can be adopted in the generation of decoys against ISAR with CS. Experimental results on a scattering model of the Yak-42 plane and on real data are used to verify the correctness of the analyses.
    Keywords: compressed sensing; image reconstruction; image resolution; image sampling; jamming; radar imaging; synthetic aperture radar; CS-based reconstruction algorithm; ISAR imaging; Shannon-Nyquist theorem; Yak-42 plane; compressive sensing algorithm; decoy generation; electronic warfare; frequency domain analysis; inverse synthetic aperture radar imaging; multiple deceptive false-target image resolution; scattering model; sub-Nyquist sampled jamming signal; Compressed sensing; Image resolution; Imaging; Jamming; Radar imaging; Scattering; Sub-Nyquist sampling; compressive sensing (CS); deception jamming; inverse synthetic aperture radar (ISAR) (ID#:14-2713)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815640&isnumber=6862121
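
The aliasing effect the authors exploit is easy to reproduce numerically: a tone sampled below the Nyquist rate folds to a predictable lower frequency. The Python sketch below, with arbitrary assumed frequencies, shows where an undersampled tone appears in the sampled spectrum.

    # Aliasing of an undersampled tone: f_alias = |f - round(f/fs) * fs|.
    import numpy as np

    f_signal = 9e6     # 9 MHz tone
    f_s = 4e6          # sub-Nyquist rate (Nyquist would require 18 MHz)
    f_alias = abs(f_signal - round(f_signal / f_s) * f_s)   # -> 1 MHz

    n = np.arange(256)
    samples = np.cos(2 * np.pi * f_signal * n / f_s)
    spectrum = np.abs(np.fft.rfft(samples))
    peak_hz = np.argmax(spectrum) * f_s / len(n)            # ~= f_alias
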
  • Lagunas, E.; Najar, M., "Robust Primary User Identification Using Compressive Sampling For Cognitive Radios," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp.2347,2351, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854019 In cognitive radio (CR), the problem of limited spectral resources is solved by enabling unlicensed systems to opportunistically utilize the unused licensed bands. Compressive Sensing (CS) has been successfully applied to alleviate the sampling bottleneck in wideband spectrum sensing, leveraging the sparseness of the signal spectrum in open-access networks. This has inspired the design of a number of techniques that identify spectrum holes from sub-Nyquist samples. However, the existence of interference emanating from low-regulated transmissions, which cannot be taken into account in the CS model because of their non-regulated nature, greatly degrades the identification of licensed activity. Capitalizing on the sparsity described by licensed users, this paper introduces a feature-based technique for primary user spectrum identification with interference immunity which works with a reduced amount of data. The proposed method detects which channels are occupied by primary users and also identifies the primary users' transmission powers without ever reconstructing the signals involved. Simulation results show the effectiveness of the proposed technique for interference suppression and primary user detection.
    Keywords: cognitive radio; compressed sensing; interference suppression; radio spectrum management; cognitive radio; compressive sensing; feature-based technique; interference immunity; interference suppression; licensed users; limited spectral resources; low-regulated transmissions; open-access networks; primary user detection; sampling bottleneck; signal spectrum; spectrum holes; spectrum identification; sub-Nyquist samples; unlicensed systems; unused licensed bands; wideband spectrum sensing; Correlation; Feature extraction; Interference; Noise; Sensors; Spectral shape; Vectors (ID#:14-2714)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854019&isnumber=6853544
  • Yuxin Chen; Goldsmith, AJ.; Eldar, Y.C., "Channel Capacity Under Sub-Nyquist Nonuniform Sampling," Information Theory, IEEE Transactions on, vol.60, no.8, pp.4739,4756, Aug. 2014. doi: 10.1109/TIT.2014.2323406 This paper investigates the effect of sub-Nyquist sampling upon the capacity of an analog channel. The channel is assumed to be a linear time-invariant Gaussian channel, where perfect channel knowledge is available at both the transmitter and the receiver. We consider a general class of right-invertible time-preserving sampling methods which includes irregular nonuniform sampling, and characterize in closed form the channel capacity achievable by this class of sampling methods, under a sampling rate and power constraint. Our results indicate that the optimal sampling structures extract the set of frequencies that exhibits the highest signal-to-noise ratio among all spectral sets of measure equal to the sampling rate. This can be attained through filterbank sampling with a uniform sampling grid employed at each branch with possibly different rates, or through a single branch of modulation and filtering followed by uniform sampling. These results reveal that for a large class of channels, employing irregular nonuniform sampling sets, which are typically complicated to realize in practice, does not provide capacity gain over uniform sampling sets with appropriate preprocessing. Our findings demonstrate that aliasing or scrambling of spectral components does not provide capacity gain in this scenario, which is in contrast to the benefits obtained from random mixing in spectrum-blind compressive sampling schemes.
    Keywords: Gaussian channels; channel bank filters; channel capacity; sampling methods; transceivers; analog channel; capacity gain; channel capacity; filterbank sampling; filtering single branch; frequency set; irregular nonuniform sampling; irregular nonuniform sampling sets; linear time-invariant Gaussian channel; modulation single branch; optimal sampling structures; power constraint; random mixing; receiver; right-invertible time-preserving sampling methods; sampling rate; signal-to-noise ratio; spectral components aliasing; spectral components scrambling; spectral sets; spectrum-blind compressive sampling schemes; sub-Nyquist nonuniform sampling effect; transmitter; uniform sampling grid; Channel capacity; Data preprocessing; Measurement; Modulation; Nonuniform sampling; Upper bound; Beurling density; Nonuniform sampling; channel capacity; irregular sampling; sampled analog channels; sub-Nyquist sampling; time-preserving sampling systems (ID#:14-2715)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814945&isnumber=6851961
  • Feng Xi; Shengyao Chen; Zhong Liu, "Quadrature Compressive Sampling for Radar Signals," Signal Processing, IEEE Transactions on, vol.62, no.11, pp.2787,2802, June1, 2014. doi: 10.1109/TSP.2014.2315168 Quadrature sampling has been widely applied in coherent radar systems to extract in-phase and quadrature ( I and Q) components in the received radar signal. However, the sampling is inefficient because the received signal contains only a small number of significant target signals. This paper incorporates the compressive sampling (CS) theory into the design of the quadrature sampling system, and develops a quadrature compressive sampling (QuadCS) system to acquire the I and Q components with low sampling rate. The QuadCS system first randomly projects the received signal into a compressive bandpass signal and then utilizes the quadrature sampling to output compressive I and Q components. The compressive outputs are used to reconstruct the I and Q components. To understand the system performance, we establish the frequency domain representation of the QuadCS system. With the waveform-matched dictionary, we prove that the QuadCS system satisfies the restricted isometry property with overwhelming probability. For K target signals in the observation interval T, simulations show that the QuadCS requires just O(Klog(BT/K)) samples to stably reconstruct the signal, where B is the signal bandwidth. The reconstructed signal-to-noise ratio decreases by 3 dB for every octave increase in the target number K and increases by 3 dB for every octave increase in the compressive bandwidth. Theoretical analyses and simulations verify that the proposed QuadCS is a valid system to acquire the I and Q components in the received radar signals.
    Keywords: compressed sensing; frequency-domain analysis; probability; radar receivers; radar signal processing; signal reconstruction; signal sampling; QuadCS system; compressive bandpass signal; frequency domain representation; noise figure 3 dB; probability; quadrature compressive sampling theory; received radar signal sampling; signal reconstruction; signal-to-noise ratio; waveform-matching; Bandwidth; Baseband; Demodulation; Dictionaries; Frequency-domain analysis; Radar; Vectors; Analog-to-digital conversion; compressive sampling; quadrature sampling; restricted isometry property; sparse signal reconstruction (ID#:14-2716)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781614&isnumber=6809867
  • Xianbiao Shu; Jianchao Yang; Ahuja, N., "Non-local Compressive Sampling Recovery," Computational Photography (ICCP), 2014 IEEE International Conference on, pp.1,8, 2-4 May 2014. doi: 10.1109/ICCPHOT.2014.6831806 Compressive sampling (CS) aims at acquiring a signal at a sampling rate below the Nyquist rate by exploiting prior knowledge that a signal is sparse or correlated in some domain. Despite the remarkable progress in the theory of CS, the sampling rate on a single image required by CS is still very high in practice. In this paper, a non-local compressive sampling (NLCS) recovery method is proposed to further reduce the sampling rate by exploiting non-local patch correlation and local piecewise smoothness present in natural images. Two non-local sparsity measures, i.e., non-local wavelet sparsity and non-local joint sparsity, are proposed to exploit the patch correlation in NLCS. An efficient iterative algorithm is developed to solve the NLCS recovery problem, which is shown to have stable convergence behavior in experiments. The experimental results show that our NLCS significantly improves the state-of-the-art of image compressive sampling.
    Keywords: compressed sensing; correlation theory; image sampling; iterative methods; natural scenes; wavelet transforms; NLCS recovery method; Nyquist rate; image compressive sampling; iterative algorithm; local piecewise smoothness; natural images; nonlocal compressive sampling recovery; nonlocal joint sparsity; nonlocal patch correlation; nonlocal sparsity measure; nonlocal wavelet sparsity; sampling rate reduction; signal acquisition; sparse signal; Correlation; Image coding; Imaging; Joints; Three-dimensional displays; Videos; Wavelet transforms (ID#:14-2717)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6831806&isnumber=6831796
  • Banitalebi-Dehkordi, M.; Abouei, J.; Plataniotis, K.N., "Compressive-Sampling-Based Positioning in Wireless Body Area Networks," Biomedical and Health Informatics, IEEE Journal of, vol.18, no.1, pp.335, 344, Jan. 2014. doi: 10.1109/JBHI.2013.2261997 Recent achievements in wireless technologies have opened up enormous opportunities for the implementation of ubiquitous health care systems in providing rich contextual information and warning mechanisms against abnormal conditions. This helps with the automatic and remote monitoring/tracking of patients in hospitals and facilitates the supervision of fragile, elderly people in their own domestic environment through automatic systems that handle remote drug delivery. This paper presents a new modeling and analysis framework for multipatient positioning in a wireless body area network (WBAN) which exploits the spatial sparsity of patients and a sparse fast Fourier transform (FFT)-based feature extraction mechanism for monitoring of patients and for reporting the movement tracking to a central database server containing patient vital information. The main goal of this paper is to achieve a high degree of accuracy and resolution in the patient localization with less computational complexity in the implementation using the compressive sensing theory. We represent the patients' positions as a sparse vector obtained by the discrete segmentation of the patient movement space in a circular grid. To estimate this vector, a compressive-sampling-based two-level FFT (CS-2FFT) feature vector is synthesized for each received signal from the biosensors embedded on the patient's body at each grid point. This feature extraction process benefits from the combination of both short-time and long-time properties of the received signals. The robustness of the proposed CS-2FFT-based algorithm in terms of the average positioning error is numerically evaluated using realistic parameters from the IEEE 802.15.6-WBAN standard in the presence of additive white Gaussian noise. Due to the circular grid pattern and the CS-2FFT feature extraction method, the proposed scheme represents a significant reduction in the computational complexity, while improving the resolution and the localization accuracy when compared to some classical CS-based positioning algorithms.
    Keywords: AWGN; body sensor networks; compressed sensing; drug delivery systems; fast Fourier transforms; feature extraction; geriatrics; health care; hospitals; medical signal processing; patient monitoring; personal area networks; telemedicine; tracking; ubiquitous computing; CS-2FFT feature extraction method; CS-2FFT feature vector synthesis; CS-2FFT-based algorithm robustness; FFT-based feature extraction mechanism; IEEE 802.15.6-WBAN standard; abnormal condition contextual information; abnormal condition warning mechanism; additive white Gaussian noise; automatic drug delivery system; automatic patient monitoring; automatic patient tracking; average positioning error; biosensor signal; central database server; circular grid pattern; classical CS-based positioning algorithm; compressive sensing theory; compressive-sampling-based positioning; compressive-sampling-based two-level FFT feature vector; computational complexity reduction; feature extraction process; fragile elderly people supervision; hospital; movement tracking reporting; multipatient positioning analysis; multipatient positioning modeling; numerical evaluation; patient localization accuracy; patient localization resolution; patient movement space discrete segmentation; patient spatial sparsity; patient vital information; remote drug delivery; remote patient monitoring; remote patient tracking; signal long-time properties; signal short-time properties; sparse fast Fourier transform; sparse vector estimation; ubiquitous health care system; wireless body area network; wireless technology; Compressive sampling (CS); patient localization; spatial sparsity; wireless body area networks (WBANs) (ID#:14-2718)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6514596&isnumber=6701130
  • Gishkori, S.; Lottici, V.; Leus, G., "Compressive Sampling-Based Multiple Symbol Differential Detection for UWB Communications," Wireless Communications, IEEE Transactions on, vol.13, no.7, pp.3778,3790, July 2014. doi: 10.1109/TWC.2014.2317175 Compressive sampling (CS) based multiple symbol differential detectors are proposed for impulse-radio ultra-wideband signaling, using the principles of generalized likelihood ratio tests. The CS based detectors correspond to two communication scenarios: one where the signaling is fully synchronized at the receiver, and the other where there exists symbol-level synchronization only. With the help of CS, the sampling rates are reduced much below the Nyquist rate to save on the high power consumed by the analog-to-digital converters. In stark contrast to the usual compressive sampling practices, the proposed detectors work on the compressed samples directly, thereby avoiding a complicated reconstruction step and resulting in a reduction of the implementation complexity. To resolve the detection of multiple symbols, compressed sphere decoders are proposed as well, for both communication scenarios, which can further help to reduce the system complexity. Differential detection directly on the compressed symbols is generally marred by the requirement of an identical measurement process for every received symbol. Our proposed detectors are valid for scenarios where the measurement process is the same as well as where it is different for each received symbol.
    Keywords: compressed sensing; signal detection; signal reconstruction; signal sampling; statistical testing; synchronisation; ultra wideband communication; CS based detectors; Nyquist rate; UWB communications; analog-to-digital converters; complicated reconstruction step; compressed sphere decoders; compressive sampling-based multiple symbol differential detection; generalized likelihood ratio tests; identical measurement process; impulse-radio ultra-wideband signaling; symbol level synchronization; system complexity reduction; Complexity theory; Detectors; Joints; Receivers; Synchronization; Vectors; Compressive sampling (CS); multiple symbol differential detection (MSDD); sphere decoding (SD); ultra-wideband impulse radio (UWB-IR) (ID#:14-2719)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6797969&isnumber=6850111
  • Shuyuan Yang; HongHong Jin; Min Wang; Yu Ren; Licheng Jiao, "Data-Driven Compressive Sampling and Learning Sparse Coding for Hyperspectral Image Classification," Geoscience and Remote Sensing Letters, IEEE, vol.11, no.2, pp.479, 483, Feb. 2014. doi: 10.1109/LGRS.2013.2268847 Exploiting sparsity in classifying hyperspectral vectors has proven to lead to state-of-the-art performance. To learn a compact and discriminative dictionary for accurate and fast classification of hyperspectral images, a data-driven Compressive Sampling (CS) scheme and a learning sparse coding scheme are used to reduce the dimensionality and the size of the dictionary, respectively. First, a sparse radial basis function (RBF) kernel learning network (S-RBFKLN) is constructed to learn a compact dictionary for sparsely representing hyperspectral vectors. Then a data-driven compressive sampling scheme is designed to reduce the dimensionality of the dictionary, and labels of new samples are derived from coding coefficients. Experiments on NASA EO-1 Hyperion data and AVIRIS Indian Pines data investigate the performance of the proposed method, and the results show its superiority to its counterparts.
    Keywords: geophysical image processing; hyperspectral imaging; image classification; AVIRIS Indian Pines data; NASA EO-1 Hyperion data; coding coefficients; data-driven compressive sampling; hyperspectral image classification; hyperspectral vectors; learning sparse coding scheme; Dictionaries; Hyperspectral imaging; Image coding; Kernel; Training; Vectors; Compressive sampling (CS); data-driven; hyperspectral image classification; sparse radial basis function kernel learning network (S-RBFKLN) (ID#:14-2720)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6578556&isnumber=6675034
  • Yan Jing; Naizhang Feng; Yi Shen, "Bearing Estimation Of Coherent Signals Using Compressive Sampling Array," Instrumentation and Measurement Technology Conference (I2MTC) Proceedings, 2014 IEEE International, pp.1221,1225, 12-15 May 2014. doi: 10.1109/I2MTC.2014.6860938 Compressive sampling (CS) is an attractive theory which can achieve sparse signal acquisition and compression simultaneously. Exploiting the sparse property in the spatial domain, the direction of arrival (DOA) of narrowband signals is studied by using compressive sampling measurements in the form of random projections of sensor arrays. The proposed approach, CS array DOA estimation based on eigen space (CSA-ES-DOA), uses a very small number of measurements to resolve the DOA estimation of coherent signals and two closely adjacent signals. Theoretical analysis and simulation results demonstrate that the proposed approach can maintain high angular resolution, low hardware complexity and low computational cost.
    Keywords: compressed sensing; direction-of-arrival estimation; eigenvalues and eigenfunctions; signal detection; signal sampling; DOA estimation; bearing estimation; coherent signals; compressive sampling array; direction of arrival; narrowband signals; sparse signals acquisition; Arrays; Compressed sensing; Direction-of-arrival estimation; Estimation; Multiple signal classification; Signal resolution; Vectors; coherent signals; compressive sampling array; direction of arrival; eigen space (ID#:14-2721)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6860938&isnumber=6860504
  • Angrisani, L.; Bonavolonta, F.; Lo Moriello, R.S.; Andreone, A; Casini, R.; Papari, G.; Accardo, D., "First Steps Towards An Innovative Compressive Sampling-Based THz Imaging System For Early Crack Detection On Aerospace Plates," Metrology for Aerospace (MetroAeroSpace), 2014 IEEE, pp.488,493, 29-30 May 2014. doi: 10.1109/MetroAeroSpace.2014.6865974 The paper deals with the problem of early detection of cracks in composite materials for avionic applications. In particular, the authors present a THz imaging system that exploits compressive sampling (CS) to detect submillimeter cracks with a reduced measurement burden. Traditional methods for THz imaging usually involve a raster scan of the object of interest by means of highly collimated radiation, and the corresponding image is achieved by measuring the received THz power in different positions (pixels) of the desired image. As can be expected, the higher the required resolution, the longer the measurement time. On the contrary, two different approaches for THz imaging (namely, continuous wave and time domain spectroscopy) combined with a proper CS solution are used to assure results as good as those granted by a traditional raster scan; a proper set of masks (each characterized by a specific random pattern) is defined for the purpose. A number of tests conducted on simulated data highlighted the promising performance of the proposed method, thus suggesting its implementation in an actual measurement setup.
    Keywords: aerospace materials; avionics; composite materials; compressed sensing; condition monitoring; crack detection; plates (structures); terahertz wave imaging; CS; THz imaging system; aerospace plates; avionic applications; composite materials; compressive sampling; continuous wave; early crack detection; submillimeter cracks; time domain spectroscopy; Detectors; Image reconstruction; Image resolution; Imaging; Laser excitation; Quantum cascade lasers; Skin; compressive sampling THz imaging; cracks detection; nondestructive evaluation (ID#:14-2722)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6865974&isnumber=6865882
  • Das, S.; Singh Sidhu, T., "Application of Compressive Sampling in Synchrophasor Data Communication in WAMS," Industrial Informatics, IEEE Transactions on, vol.10, no.1, pp.450, 460, Feb. 2014. doi: 10.1109/TII.2013.2272088 In this paper, areas of power system synchrophasor data communication which can be improved by compressive sampling (CS) theory are identified. CS reduces the network bandwidth requirements of Wide Area Measurement Systems (WAMS). It is shown that CS can reconstruct synchrophasors at higher rates while satisfying the accuracy requirements of IEEE standard C37.118.1-2011. Different steady state and dynamic power system scenarios are considered here using mathematical models of C37.118.1-2011. Synchrophasors of lower reporting rates are exempted from satisfying the accuracy requirements of C37.118.1-2011 during system dynamics. In this work, synchrophasors are accurately reconstructed from above and below Nyquist rates. Missing data often pose challenges to the WAMS applications. It is shown that missing and bad data can be reconstructed satisfactorily using CS. Performance of CS is found to be better than the existing interpolation techniques for WAMS communication.
    Keywords: IEEE standards; compressed sensing; interpolation; phasor measurement; CS theory; IEEE standard C37.118.1-2011; Nyquist rates; WAMS communication; compressive sampling; dynamic power system scenario; interpolation technique; mathematical model; network bandwidth requirements; power system synchrophasor data communication; steady state power system scenario; wide area measurement systems; Compressive sampling; phasor measurement unit; smart grid; synchrophasor; wide area measurement system (WAMS) (ID#:14-2723)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6553079&isnumber=6683081
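
Recovering missing samples with CS typically amounts to an l1-regularized fit in a basis where the waveform is sparse. The Python sketch below fills gaps in a two-tone test signal via ISTA under an assumed DFT-sparsity model; the paper's specific measurement model, basis, and solver are not reproduced.

    # l1 recovery of missing samples (ISTA), assuming sparsity in a unitary DFT basis.
    import numpy as np

    rng = np.random.default_rng(4)
    n = 128
    t = np.arange(n)
    x_true = np.cos(2 * np.pi * 5 * t / n) + 0.5 * np.cos(2 * np.pi * 12 * t / n)

    observed = rng.random(n) > 0.3                # ~30% of samples are missing
    y = x_true[observed]

    Psi = np.fft.ifft(np.eye(n), axis=0) * np.sqrt(n)   # unitary inverse DFT basis
    A = Psi[observed, :]                          # sampling operator (||A|| <= 1)

    def soft(z, tau):                             # complex soft thresholding
        mag = np.abs(z)
        return np.where(mag > tau, (1 - tau / np.maximum(mag, 1e-12)) * z, 0)

    c = np.zeros(n, dtype=complex)
    for _ in range(300):                          # ISTA with unit step size
        c = soft(c + A.conj().T @ (y - A @ c), 0.05)

    x_rec = np.real(Psi @ c)                      # missing samples filled in
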
  • Xi, Feng; Chen, Shengyao; Liu, Zhong, "Quadrature Compressive Sampling For Radar Signals: Output Noise And Robust Reconstruction," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp.790,794, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889353 The quadrature compressive sampling (QuadCS) system is a recently developed low-rate sampling system for acquiring in-phase and quadrature (I and Q) components of radar signals. This paper investigates the output noise and robust reconstruction of the QuadCS system with a practical non-ideal bandpass filter. For independently and identically distributed Gaussian input noise, we find that the output noise is correlated Gaussian in the non-ideal case. We then exploit the correlation property and develop a robust reconstruction formulation. Simulations show that the reconstructed signal-to-noise ratio is enhanced by 3-4 dB with the robust formulation.
    Keywords: Compressive sampling; Gaussian noise; quadrature demodulation; radar signals (ID#:14-2724)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889353&isnumber=6889177
  • Budillon, A; Ferraioli, G.; Schirinzi, G., "Localization Performance of Multiple Scatterers in Compressive Sampling SAR Tomography: Results on COSMO-SkyMed Data," Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of, vol.7, no.7, pp.2902, 2910, July 2014. doi: 10.1109/JSTARS.2014.2344916 The 3-D SAR tomographic technique based on compressive sampling (CS) has proven very effective in recovering the 3-D reflectivity function and hence in estimating multiple scatterers lying in the same range-azimuth resolution cell but at different elevations. In this paper, a detection method for multiple scatterers, assuming the number of scatterers to be known or preliminarily estimated, has been investigated. The performance of CS processing for identifying and locating multiple scatterers has been analyzed for different numbers of measurements and different reciprocal distances between the scatterers, in the presence of the off-grid effect, and in the case of super-resolution imaging. The proposed method has been tested on simulated and real COSMO-SkyMed data.
    Keywords: Detectors ;Image resolution; Signal resolution; Signal to noise ratio; Synthetic aperture radar; Tomography; Compressive sampling (CS); detection; synthetic aperture radar (SAR); tomography (ID#:14-2725)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881815&isnumber=6881766
  • Ningfei Dong; Jianxin Wang, "Channel Gain Mismatch And Time Delay Calibration For Modulated Wideband Converter-Based Compressive Sampling," Signal Processing, IET, vol.8, no.2, pp.211, 219, April 2014. doi: 10.1049/iet-spr.2013.0137 The modulated wideband converter (MWC) is a recently proposed compressive sampling system for acquiring sparse multiband signals. For the MWC with a digital sub-channel separation block, channel gain mismatch and time delay lead to a potential performance loss in reconstruction. These gains and delays are represented here as an unknown multiplicative diagonal matrix. The authors formulate the estimation problem as a convex optimisation problem, which can be efficiently solved by utilising least squares estimation. The calibrated system model is then obtained, and the gains and time delays of the physical channels are estimated from the estimate of this matrix. Numerical simulations verify the effectiveness of the proposed approach.
    Keywords: channel estimation; compressed sensing; delay estimation; least mean squares methods; matrix multiplication; modulation; signal detection; signal reconstruction; MWC; channel gain mismatch; compressive sampling system; convex optimisation problem; digital subchannel separation block; gain estimation; least square estimation; modulated wideband converter; numerical simulation; potential performance loss; signal reconstruction; sparse multiband signal acquisition; time delay calibration; time delay estimation; unknown multiplicative diagonal matrix (ID#:14-2726)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786869&isnumber=6786851
  • Sejdic, E.; Rothfuss, M.A; Gimbel, M.L.; Mickle, M.H., "Comparative Analysis Of Compressive Sensing Approaches For Recovery Of Missing Samples In Implantable Wireless Doppler Device," Signal Processing, IET, vol.8, no.3, pp.230, 238, May 2014. doi: 10.1049/iet-spr.2013.0402 An implantable wireless Doppler device used in microsurgical free flap surgeries can suffer from lost data points. To recover the lost samples, the authors considered approaches based on recently proposed compressive sensing. In this paper, they performed a comparative analysis of several different approaches by using synthetic and real signals obtained during blood flow monitoring in four pigs. They considered three different basis functions: Fourier bases, discrete prolate spheroidal sequences and modulated discrete prolate spheroidal sequences, respectively. To avoid the computational burden, they considered approaches based on l1 minimisation for all three bases. To understand the trade-off between computational complexity and accuracy, they also used a recovery process based on matching pursuit with modulated discrete prolate spheroidal sequences bases. For both the synthetic and the real signals, the matching pursuit approach with modulated discrete prolate spheroidal sequences provided the most accurate results. Future studies should focus on the optimisation of the modulated discrete prolate spheroidal sequences in order to further decrease the computational complexity and increase the accuracy.
    Keywords: blood flow measurement; compressed sensing; computational complexity; medical signal processing; minimisation; prosthetics; signal sampling; blood flow monitoring; compressive sensing; implantable wireless Doppler device; matching pursuit; microsurgical free flap surgery; missing sample recovery; modulated discrete prolate spheroidal sequences base; recovery process (ID#:14-2727)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6817399&isnumber=6816971
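    The flavor of the recovery schemes compared here can be seen in a small sketch: matching pursuit over a discrete prolate spheroidal sequence (DPSS) dictionary, used to fill in randomly dropped samples of a band-limited test signal. The signal, the 30% loss rate, and the DPSS parameters are illustrative choices rather than values from the study.

        # Matching pursuit over a DPSS dictionary, restricted to observed samples.
        import numpy as np
        from scipy.signal.windows import dpss

        rng = np.random.default_rng(1)
        N = 200
        t = np.arange(N)
        x = np.sin(2 * np.pi * 3 * t / N) + 0.5 * np.sin(2 * np.pi * 5 * t / N)

        mask = rng.random(N) > 0.3             # True where a sample was received
        y = x * mask

        D = dpss(N, 6, 24).T                   # N x 24 dictionary of DPSS atoms

        resid, x_hat = y.copy(), np.zeros(N)
        for _ in range(24):
            corr = D[mask].T @ resid[mask]     # correlate on observed samples only
            norms = np.sum(D[mask] ** 2, axis=0) + 1e-12
            j = int(np.argmax(np.abs(corr) / np.sqrt(norms)))
            x_hat += (corr[j] / norms[j]) * D[:, j]
            resid = y - x_hat * mask
        print("RMSE on missing samples:",
              np.sqrt(np.mean((x_hat[~mask] - x[~mask]) ** 2)))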
  • Mihajlovic, Radomir; Scekic, Marijana; Draganic, Andjela; Stankovic, Srdjan, "An Analysis Of CS Algorithms Efficiency For Sparse Communication Signals Reconstruction," Embedded Computing (MECO), 2014 3rd Mediterranean Conference on, pp.221,224, 15-19 June 2014. doi: 10.1109/MECO.2014.6862700 As the need for increasing the speed and accuracy of real applications constantly grows, new algorithms and methods for signal processing are being intensively developed. The traditional sampling approach based on the sampling theorem is, in many applications, inefficient because it produces a large number of signal samples. Generally, only a small amount of significant information is present within a signal relative to its length. Therefore, the compressive sensing method has been developed as an alternative sampling strategy. This method provides efficient signal processing and reconstruction without the need to collect all of the signal samples. The signal is sampled in a random way, with the number of acquired samples significantly smaller than the signal length. In this paper, a comparison of several algorithms for compressive sensing reconstruction is presented. One-dimensional band-limited signals that appear in wireless communications are observed, and the performance of the algorithms in non-noisy and noisy environments is tested. Reconstruction errors and execution times are compared across the algorithms as well.
    Keywords: Compressed sensing; Image reconstruction; Matching pursuit algorithms; Optimization; Reconstruction algorithms; Signal processing; Signal processing algorithms; Compressive Sensing; basis pursuit; iterative hard thresholding; orthogonal matching pursuit; wireless signals (ID#:14-2728)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6862700&isnumber=6862649
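    Of the algorithms compared above, iterative hard thresholding (IHT) is the easiest to write down. The sketch below is a plain textbook implementation on synthetic data (random Gaussian sensing matrix, invented dimensions), not the benchmark code used in the paper.

        # Textbook IHT: x <- H_K(x + step * A^T (y - A x)), H_K keeps K largest.
        import numpy as np

        rng = np.random.default_rng(2)
        N, M, K = 400, 120, 8
        A = rng.standard_normal((M, N)) / np.sqrt(M)   # ~unit-norm columns
        x = np.zeros(N)
        x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
        y = A @ x

        x_hat = np.zeros(N)
        for _ in range(200):
            x_hat = x_hat + A.T @ (y - A @ x_hat)      # gradient step (step = 1)
            x_hat[np.argsort(np.abs(x_hat))[:-K]] = 0  # keep the K largest entries
        print("support recovered:",
              set(np.flatnonzero(x_hat)) == set(np.flatnonzero(x)))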

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Computational Intelligence

Computational Intelligence


  • Lavania, S.; Darbari, M.; Ahuja, N.J.; Siddqui, IA, "Application of computational intelligence in measuring the elasticity between software complexity and deliverability," Advance Computing Conference (IACC), 2014 IEEE International, vol., no., pp.1415,1418, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779533 Abstract: The paper highlights various issues of complexity and deliverability and their impact on software popularity. The use of an expert intelligence system helps in identifying the dominant and non-dominant impediments of software. A fuzzy rule-based system (FRBS) is being developed to quantify the trade-off between complexity and deliverability issues of a software system.
    Keywords: {computational complexity;expert systems;software quality;FRBS;computational intelligence;dominant impediments;elasticity measurement;expert intelligence system;nondominant impediments;software complexity;software deliverability;software popularity;Conferences;Decision support systems;Handheld computers;Complexity;Deliverability;Expert System}, (ID#:14-2762)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779533&isnumber=6779283
  • Yannakakis, G.N.; Togelius, J., "A Panorama of Artificial and Computational Intelligence in Games," Computational Intelligence and AI in Games, IEEE Transactions on, vol.PP, no.99, pp.1,1. doi: 10.1109/TCIAIG.2014.2339221 Abstract: This paper attempts to give a high-level overview of the field of artificial and computational intelligence (AI/CI) in games, with particular reference to how the different core research areas within this field inform and interact with each other, both actually and potentially. We identify ten main research areas within this field: NPC behavior learning, search and planning, player modeling, games as AI benchmarks, procedural content generation, computational narrative, believable agents, AI-assisted game design, general game artificial intelligence and AI in commercial games. We view and analyze the areas from three key perspectives: (1) the dominant AI method(s) used under each area; (2) the relation of each area with respect to the end (human) user; and (3) the placement of each area within a human-computer (player-game) interaction perspective. In addition, for each of these areas we consider how it could inform or interact with each of the other areas; in those cases where we find that meaningful interaction either exists or is possible, we describe the character of that interaction and provide references to published studies, if any. We believe that this paper improves understanding of the current nature of the game AI/CI research field and the interdependences between its core areas by providing a unifying overview. We also believe that the discussion of potential interactions between research areas provides a pointer to many interesting future research projects and unexplored subfields.
    Keywords: {Artificial intelligence;Computational modeling;Evolutionary computation;Games;Planning;Seminars}, (ID#:14-2763)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855367&isnumber=4804729
  • Myers, AJ.; Megherbi, D.B., "An efficient computational intelligence technique for affine-transformation-invariant image face detection, tracking, and recognition in a video stream," Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), 2014 IEEE International Conference on, vol., no., pp.88,93, 5-7 May 2014. doi: 10.1109/CIVEMSA.2014.6841444 Abstract: While there are many current approaches to solving the difficulties that come with detecting, tracking, and recognizing a given face in a video sequence, the difficulties arising when there are differences in pose, facial expression, orientation, lighting, scaling, and location remain an open research problem. In this paper we present and perform the study and analysis of a computationally efficient approach for each of the three processes, namely a given template face detection, tracking, and recognition. The proposed algorithms are faster relative to other existing iterative methods. In particular, we show that unlike such iterative methods, the proposed method does not estimate a given face rotation angle or scaling factor by looking into all possible face rotations or scaling factors. The proposed method segments and aligns the distance between the two eyes' pupils in a given face image with the image x-axis. Reference face images in a given database are normalized with respect to translation, rotation, and scaling. We show here how the proposed method of estimating a given face image template rotation and scaling factor leads to real-time template image rotation and scaling corrections. This allows the recognition algorithm to be less computationally complex than iterative methods.
    Keywords: {face recognition;image sequences;iterative methods;video signal processing;affine-transformation-invariant image;computational intelligence technique;face detection;face image template;face recognition;face tracking;iterative methods;reference face images;video sequence;video stream;Databases;Face;Face recognition;Histograms;Lighting;Nose;Streaming media;computational intelligence;detection;facial;machine learning;real-time;recognition;tracking;video}, (ID#:14-2764)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841444&isnumber=6841424
  • Antoniades, A; Took, C.C., "A Google approach for computational intelligence in big data," Neural Networks (IJCNN), 2014 International Joint Conference on, vol., no., pp.1050,1054, 6-11 July 2014. doi: 10.1109/IJCNN.2014.6889469 Abstract: With the advent of the emerging field of big data, it is becoming increasingly important to equip machine learning algorithms to cope with the volume, variety, and velocity of data. In this work, we employ the MapReduce paradigm to address these issues as an enabling technology for the well-known support vector machine, in order to perform distributed classification of skin segmentation. An open source implementation of MapReduce called Hadoop offers a streaming facility, which allows us to focus on the computational intelligence problem at hand, instead of focusing on the implementation of the learning algorithm. This is the first time that the support vector machine has been proposed to operate in a distributed fashion as it is, circumventing the need for long and tedious mathematical derivations. This highlights the main advantages of MapReduce: its generality and distributed computation for machine learning with minimum effort. Simulation results demonstrate the efficacy of MapReduce when distributed classification is performed even when only two machines are involved, and we highlight some of the intricacies of MapReduce in the context of big data.
    Keywords: {Big Data;distributed processing;learning (artificial intelligence);pattern classification;public domain software;support vector machines;Google approach;MapReduce;big data;computational intelligence;distributed classification;machine learning algorithms;open source Hadoop;skin segmentation;streaming facility;support vector machine;Big data;Context;Machine learning algorithms;Skin;Support vector machines;Testing;Training}, (ID#:14-2765)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889469&isnumber=6889358
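    The map/reduce split described above can be mimicked in-process to make the idea concrete: each "mapper" fits a linear SVM on its data shard and the "reducer" combines the per-shard models, here by simply averaging weight vectors. This is a hedged sketch on invented data; real Hadoop streaming would feed the shards through stdin/stdout, and weight averaging is one simple combination rule, not necessarily the paper's exact procedure.

        # Simulated MapReduce round for distributed linear SVM training.
        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(3)
        X = rng.standard_normal((4000, 3))             # 3 features, e.g. RGB values
        y = (X @ np.array([1.0, -2.0, 0.5]) > 0).astype(int)

        def mapper(shard_X, shard_y):
            clf = LinearSVC(C=1.0, max_iter=5000).fit(shard_X, shard_y)
            return clf.coef_.ravel(), clf.intercept_[0]

        def reducer(models):
            ws, bs = zip(*models)
            return np.mean(ws, axis=0), np.mean(bs)    # average the shard models

        shards = np.array_split(np.arange(len(X)), 4)  # 4 simulated machines
        w, b = reducer([mapper(X[i], y[i]) for i in shards])
        print("averaged-model accuracy:", np.mean(((X @ w + b) > 0) == y))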
  • Sharif, N.; Zafar, K.; Zyad, W., "Optimization of requirement prioritization using Computational Intelligence technique," Robotics and Emerging Allied Technologies in Engineering (iCREATE), 2014 International Conference on, vol., no., pp.228,234, 22-24 April 2014. doi: 10.1109/iCREATE.2014.6828370 Abstract: Requirement Engineering (RE) is considered an important part of the Software Development Life Cycle. It is a traditional Software Engineering (SE) process. The goal of RE is to identify, analyze, document and validate requirements. Requirement prioritization is a crucial step towards making good decisions about the product plan, but it is often neglected. It is observed that in many cases the product is considered a failure without proper prioritization, because it fails to meet its core objectives. When a project has a tight schedule, restricted resources, and high customer expectations, it is necessary to deploy the most critical and important features as early as possible. For this purpose requirements are prioritized. Several requirement prioritization techniques have been presented by various researchers over the past years in the domain of SE as well as Computational Intelligence. A new technique is presented in this paper which is a hybrid of both domains, named FuzzyHCV. FuzzyHCV is a hybrid of Hierarchical Cumulative Voting (HCV) and a Fuzzy Expert System. A comparative analysis is performed between the new technique and an existing HCV technique. Results show that the proposed technique is more reliable and accurate.
    Keywords: {expert systems;fuzzy set theory;software engineering;statistical analysis;FuzzyHCV technique;RE;SE process;computational intelligence technique;fuzzy expert system;hierarchical cumulative voting;requirement engineering;requirement prioritization techniques;software development life cycle;software engineering process;Computers;Documentation;Expert systems;Fuzzy systems;Software;Software engineering;Fuzzy HCV;Fuzzy systems;HCV;Requirement prioritization}, (ID#:14-2766)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828370&isnumber=6828323
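    The crisp hierarchical cumulative voting (HCV) half of the hybrid is simple to illustrate (the fuzzy expert system layer is omitted here): stakeholders distribute 100 points across requirement groups and again within each group, and a requirement's final priority is the product of the two normalized weights. The groups, requirements, and vote counts below are invented.

        # Plain hierarchical cumulative voting (no fuzzy layer).
        def hcv(group_votes, requirement_votes):
            """group_votes: {group: points}; requirement_votes: {group: {req: points}}."""
            total_g = sum(group_votes.values())
            priorities = {}
            for g, reqs in requirement_votes.items():
                gw = group_votes[g] / total_g          # normalized group weight
                total_r = sum(reqs.values())
                for r, pts in reqs.items():
                    priorities[r] = gw * pts / total_r # group weight x in-group weight
            return dict(sorted(priorities.items(), key=lambda kv: -kv[1]))

        groups = {"security": 50, "usability": 30, "performance": 20}
        reqs = {"security":    {"R1 encrypt data": 70, "R2 audit log": 30},
                "usability":   {"R3 onboarding": 100},
                "performance": {"R4 cache layer": 60, "R5 async I/O": 40}}
        for r, p in hcv(groups, reqs).items():
            print(f"{r}: {p:.2f}")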
  • Alvares, Marcos; Marwala, Tshilidzi; de Lima Neto, Fernando Buarque, "Application of Computational Intelligence For Source Code Classification," Evolutionary Computation (CEC), 2014 IEEE Congress on, vol., no., pp.895, 902, 6-11 July 2014. doi: 10.1109/CEC.2014.6900300 Multi-language Source Code Management systems have been widely used to collaboratively manage software development projects. These systems represent a fundamental step towards fully exploiting communication enhancements, by producing concrete value in the way people collaborate to produce more reliable computational systems. These systems evaluate the results of analyses in order to organise and optimise source code. Such analyses are strongly dependent on technologies (i.e., framework, programming language, libraries), each with its own characteristics and syntactic structure. To overcome this limitation, source code classification is an essential preprocessing step to identify which analyses should be evaluated. This paper introduces a new approach for generating content-based classifiers by using Evolutionary Algorithms. Experiments were performed on real-world source code collected from more than 200 different open source projects. Results show that this approach can be successfully used for creating more accurate source code classifiers. The resulting classifier is also extensible and flexible to new classification scenarios (opening perspectives for new technologies).
    Keywords: {Algorithm design and analysis;Computer languages;Databases;Genetic algorithms;Libraries;Sociology;Statistics}, (ID#:14-2767)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900300&isnumber=6900223

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Confinement

Confinement


In photonics, confinement is important to loss avoidance. In quantum theory, it relates to energy levels. The articles cited here cover both concepts and were presented or published in the first half of 2014.

  • Hasan, D.; Alam, M.S., "Ultra-Broadband Confinement in Deep Sub-Wavelength Air Hole of a Suspended Core Fiber," Lightwave Technology, Journal of, vol.32, no. 8, pp. 1434, 1441, April 15, 2014. doi: 10.1109/JLT.2014.2306292 We demonstrate low loss (0.4043 dB/km at 1.55 μm) deep sub-wavelength broadband evanescent field confinement in low index material from near IR to mid IR wavelengths with the aid of a specialty optical fiber, whilst achieving at least 1.5 dB improvement of figure of merit over the previous design. Plane strain analysis has been conducted to foresee fiber-material-dependent fabrication challenges associated with such a nanoscale feature due to thermal stress. Size dependence of the air hole is explained rigorously by modifying the existing slot waveguide model. We report significant improvement of field intensity, interaction length, bandwidth and surface sensitivity over the conventional free standing nanowire structure. The effect of metal layer thickness on surface plasmon resonance sensitivity is explored as well. A method to obtain a strong evanescent field in such a structure for medical sensing is also demonstrated. The proposed technique to enhance sub-wavelength confinement is expected to be of potential engineering merit for optical nanosensors, atomic scale waveguides for single molecule inspection and ultra-low mode volume cavities.
    Keywords: fibre optic sensors;nanomedicine;nanophotonics;nanosensors;nanowires;optical fibre fabrication; optical fibre losses; optical materials; surface plasmon resonance thermal stresses; atomic scale waveguide; bandwidth; conventional free standing nanowire structure; deep subwavelength air hole; fiber material dependent fabrication; field intensity; figure of merit; gain 1.5 dB; interaction length; low index material; low loss deep subwavelength broadband evanescent field confinement; medical sensing; metal layer thickness ;mid IR wavelengths; nanoscale feature; optical nanosensors; plane strain analysis; single molecule inspection; size dependence; slot waveguide model; specialty optical fiber; strong evanescent field; subwavelength confinement; surface plasmon resonance sensitivity; surface sensitivity; suspended core fiber; thermal stress; ultrabroadband confinement; ultralow mode volume cavity; wavelength 1.55 mum; Indexes; Materials; Optical fiber devices; Optical fiber dispersion; Optical fibers; Optical surface waves; Characteristic decay length; evanescent sensing; field intensity; slot waveguide; sub-wavelength confinement ;suspended core fiber(ID#:14-2729)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6742596&isnumber=6759765
  • Paul, U.; Hasan, M.; Rahman, M.T.; Bhuiyan, AG., "Effect of QD Size And Band-Offsets On Confinement Energy In InN QD Heterostructure," Electrical Information and Communication Technology (EICT), 2013 International Conference on, pp.1,4, 13-15 Feb. 2014. doi: 10.1109/EICT.2014.6777897 A detailed theoretical analysis of how QD size variation and band-offset affect the confinement energy of InN QDs is presented. Low dimensional structures show a strong quantum confinement effect, which results in the ground state shifting away from the band edge and in discrete eigenstates. Ground quantized energy levels of electrons were computed by graphically solving the 1D Schrodinger equation, and ground quantized energy levels of holes were determined using the Luttinger-Kohn 4x4 Hamiltonian matrix. Our results allow us to tune dot size and band-offset to obtain the required bandgap for InN-based low dimensional device design.
    Keywords: III-V semiconductors; Schrodinger equation; energy gap; ground states ;indium compounds; semiconductor heterojunctions; semiconductor quantum dots; InN; Luttinger-Khon 4x4 Hamiltonian matrix ground quantized energy level; band edge; band gap; band-offset effects; confinement energy; discrete eigenstates graphically solving 1D Schrodinger ground quantized energy levels; ground state; low dimensional device design; quantum confinement effect; quantum dot heterostructure; quantum dot size effect; quantum dot size variation; theoretical analysis; Charge carrier processes; Energy states; Equations; Materials; Mathematical model; Optoelectronic devices; Quantum dots; Confinement energy; Indium Nitride; Quantum dots (QD) (ID#:14-2730)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777897&isnumber=6777807
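    The kind of confinement-energy calculation this entry describes can be sketched by finite-difference diagonalisation of the 1D Schrodinger equation. The well depth, width, and effective mass below are illustrative placeholders rather than InN parameters, and a single-band effective-mass Hamiltonian stands in for the paper's Luttinger-Kohn treatment of holes.

        # Ground-state confinement energy of a finite 1-D square well by
        # finite-difference diagonalisation (illustrative parameters).
        import numpy as np

        hbar = 1.054571817e-34         # J*s
        m0 = 9.1093837015e-31          # kg
        eV = 1.602176634e-19           # J
        m_eff = 0.07 * m0              # assumed electron effective mass

        L_box, N = 40e-9, 1200         # simulation box (m) and grid points
        xg = np.linspace(-L_box / 2, L_box / 2, N)
        dx = xg[1] - xg[0]

        well_width, V0 = 5e-9, 0.5 * eV            # 5 nm well, 0.5 eV band offset
        V = np.where(np.abs(xg) < well_width / 2, 0.0, V0)

        kin = hbar**2 / (2 * m_eff * dx**2)        # -hbar^2/(2m) d2/dx2 stencil
        H = (np.diag(2 * kin + V)
             - np.diag(kin * np.ones(N - 1), 1)
             - np.diag(kin * np.ones(N - 1), -1))
        E0 = np.linalg.eigvalsh(H)[0]
        print("ground-state confinement energy: %.1f meV" % (E0 / eV * 1e3))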
  • Tripathi, Neeti; Yamashita, Masaru; Uchida, Takeyuki; Akai, Tomoko, "Observations on Size Confinement Effect In B-C-N Nanoparticles Embedded In Mesoporous Silica Channels," Applied Physics Letters, vol. 105, no.1, pp.014106,014106-4, Jul 2014. doi: 10.1063/1.4890000 Fluorescent B-C-N/silica nanoparticles were synthesized by a solution impregnation method. The effect of B-C-N particle size on the optical properties was investigated by varying the silica pore sizes. Formation of B-C-N nanoparticles within the mesoporous matrix is confirmed by x-ray diffraction, transmission electron microscopy, and Fourier transform infrared spectroscopy. Furthermore, a remarkable blue-shift in emission peak centres with decreasing pore size, in conjunction with band gap modification, is ascribed to the size confinement effect. A detailed analysis of the experimental results by theoretically defined confinement models demonstrates that B-C-N nanoparticles in the size range of 3-13 nm fall within the confinement regime. This work provides experimental evidence of the size confinement effect in small B-C-N nanoparticles.
    Keywords: (not provided) (ID#:14-2731)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853278&isnumber=6849859
  • Lingyun Wang; Youmin Wang; Xiaojing Zhang, "Integrated Grating-Nanoslot Probe Tip for Near-Field Subwavelength Light Confinement and Fluorescent Sensing," Selected Topics in Quantum Electronics, IEEE Journal of, vol.20, no.3, pp.184,194, May-June 2014. doi: 10.1109/JSTQE.2014.2301232 We demonstrate a near-field sub-wavelength light confinement probe tip comprising a compact embedded metallic focus grating (CEMFG) coupler and a photonic crystal (PhC) based λ/4 nano-slot tip, in terms of its far-field radiation directivity and near-field sub-wavelength light enhancement. The embedded metallic grating coupler increases the free space coupling at a tilted coupling angle of 25°, with over 280 times light intensity enhancement for a 10 μm coupler size. Further, a 20 nm air slot embedded in a single line defect PhC waveguide is designed, using the impedance matching concept of the λ/4 "air rod", to form the TE mode light wave resonance right at the probe tip aperture opening. This reduces the light beam spot size down to λ/20. The near-field center peak intensity is enhanced by 4.2 times over that of the rectangular waveguide input, with a total enhancement factor of 1185 from the free space laser source intensity. Near-field fluorescence excitation and detection also demonstrate its single-molecule enhanced fluorescence measurement capability.
    Keywords: diffraction gratings; fluorescence; integrated optics; nanophotonics; nanosensors; optical couplers; optical sensors; optical waveguides; photonic crystals; rectangular waveguides; TE mode light wave resonance; air rod; air slot; compact embedded metallic focus grating coupler; coupler size; far-field radiation directivity; fluorescent sensing; free space coupling; free space laser source intensity; impedance matching; integrated grating-nanoslot probe tip; light beam spot size reduction; light intensity enhancement; near-field center peak intensity; near-field fluorescence excitation; near-field sub-wavelength light confinement probe tip; near-field sub-wavelength light enhancement; near-field subwavelength light confinement; photonic crystal based λ/4 nanoslot tip; probe tip aperture opening; rectangular waveguide input; single line defect PhC waveguide; single molecular enhanced fluorescence measurement capability; size 10 mum; tilted coupling angle; Couplers; Couplings; Etching; Gratings; Metals; Optical waveguides; Probes; λ/4 nano-slot; Metallic grating; light confinement; near-field; photonic crystal; single molecule fluorescence detection (ID#:14-2732)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6716001&isnumber=6603368
  • Ali, M.S.; Islam, A; Ahmad, R.; Siddique, AH.; Nasim, K.M.; Khan, M.AG.; Habib, M.S., "Design Of Hybrid Photonic Crystal Fibers For Tailoring Dispersion And Confinement Loss," Electrical Information and Communication Technology (EICT), 2013 International Conference on, pp. 1, 4, 13-15 Feb. 2014. doi: 10.1109/EICT.2014.6777861 This paper proposes a hybrid cladding photonic crystal fiber offering flat dispersion and low confinement loss, operating in the telecom bands. Simulation results reveal that near-zero ultra-flattened dispersion of 0 ± 1.20 ps/(nm·km) is obtained in a 1.25 to 1.70 μm wavelength range, i.e., a 450 nm flat band, along with low confinement losses of less than 10⁻² dB/km at an operating wavelength of 1.55 μm. Moreover, the sensitivity of the fiber dispersion properties to a ±1% to ±5% variation in the optimum parameters is studied for practical conditions.
    Keywords: holey fibres; optical fibre dispersion; optical fibre losses; photonic crystals; Telecom bands; design; fiber dispersion properties; flat dispersion; hybrid cladding photonic crystal fiber; low confinement losses; wavelength 1.25 mum to 1.70 mum; Chromatic dispersion; Optical fiber communication; Optical fiber dispersion; Optical fibers; Photonic crystal fibers; Refractive index; chromatic dispersion; confinement loss; effective area; photonic crystal fiber (ID#:14-2733)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777861&isnumber=6777807
  • Ghasemi, M.; Choudhury, P.K., "Effect Due To Down-Tapering On The Hybrid Mode Power Confinement In Liquid Crystal Optical Fiber," Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2014 11th International Conference on, pp.1,4, 14-17 May 2014. doi: 10.1109/ECTICon.2014.6839753 The paper presents an analysis of wave propagation through a down-tapered three-layer liquid crystal optical fiber with respect to the power confinement of the hybrid modes supported by the guide. The inner two regions are homogeneous and isotropic dielectrics, whereas the outermost layer is composed of radially anisotropic liquid crystal material. It has been found that the guide supports a relatively very high amount of power in the liquid crystal region, which indicates the possible use of such microstructures in a variety of optical applications. The effects on confinement due to the positive and the negative (illustrating the taper type) values of taper slope are reported.
    Keywords: dielectric materials; liquid crystals; micro-optics; optical fibres; down-tapering; homogeneous dielectrics; hybrid mode power confinement; hybrid modes; isotropic dielectrics; liquid crystal optical fiber; power confinement; radially anisotropic liquid crystal material; taper slopes; wave propagation; Dielectrics; Equations ;Liquid crystals; Optical fiber dispersion; Optical fibers; Liquid crystal fibers; complex mediums; electromagnetic wave propagation (ID#:14-2734)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6839753&isnumber=6839704
  • Janjua, B.; Ng, T.K.; Alyamani, AY.; El-Desouki, M.M.; Ooi, B.S., "Enhancement of Hole Confinement by Monolayer Insertion in Asymmetric Quantum-Barrier UVB Light Emitting Diodes," Photonics Journal, IEEE, vol.6, no.2, pp.1,9, April 2014. doi: 10.1109/JPHOT.2014.2310199 We study the enhanced hole confinement obtained by placing a large bandgap AlGaN monolayer insertion (MLI) between the quantum well (QW) and the quantum barrier (QB). The numerical analysis examines the energy band alignment diagrams, using a self-consistent 6 × 6 k·p method, considering carrier distribution and recombination rates (Shockley-Read-Hall, Auger, and radiative recombination rates) under equilibrium and forward bias conditions. The active region is based on AlaGa1-aN (barrier)/AlbGa1-bN (MLI)/AlcGa1-cN (well)/AldGa1-dN (barrier), where the Al mole fractions satisfy b > a, d > c, with the MLI having the largest Al content and the well the smallest. A large bandgap AlbGa1-bN monolayer, inserted between the QW and QB, was found to be effective in providing stronger hole confinement. With the proposed band engineering scheme, an increase of more than 30% in the spatial overlap of the carrier wavefunctions was obtained, with a considerable increase in carrier density and direct radiative recombination rates. The single-QW-based UV-LED was designed to emit at 280 nm, which is an effective wavelength for water disinfection.
    Keywords: Auger effect; III-V semiconductors; aluminium compounds; electron-hole recombination; gallium compounds; k.p calculations; light emitting diodes; monolayers; semiconductor quantum wells; wave functions; wide band gap semiconductors; AlGaN; Auger recombination rates; MLI; Shockley-Read-Hall recombination rates; asymmetric quantum-barrier UVB light emitting diodes; carrier density; carrier distribution; carrier wavefunction; direct radiative recombination rates; energy band alignment diagrams; enhanced hole confinement; hole confinement; monolayer insertion; numerical analysis; radiative recombination rates; recombination rates; self-consistent 6 × 6 k·p method; water disinfection; Aluminum gallium nitride; Charge carrier density; Charge carrier processes; III-V semiconductor materials; Light emitting diodes; Radiative recombination; Light emitting diodes (LEDs); energy barrier; semiconductor quantum well; thin insertion layer; ultraviolet; water disinfection; wavefunction overlap (ID#:14-2735)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758387&isnumber=6750774
  • Qijing Lu; Fang-Jie Shu; Chang-Ling Zou, "Extremely Local Electric Field Enhancement and Light Confinement in Dielectric Waveguide," Photonics Technology Letters, IEEE, vol.26, no.14, pp.1426, 1429, July 15, 2014. doi: 10.1109/LPT.2014.2322595 Extremely local electric field enhancement and light confinement are demonstrated in dielectric waveguides with corner and gap geometry. Classical electromagnetic theory predicts that the field enhancement and confinement abilities are inversely proportional to the radius of the rounded corner (r) and gap (g), and shows a singularity for infinitesimal r and g. For practical parameters with r = g = 10 nm, the mode area of opposing apex-to-apex fan-shaped waveguides can be as small as 4 × 10⁻³ A0 (A0 = λ²/4), far beyond the diffraction limit. The lossless dielectric corner and gap structures offer an alternative method to enhance light-matter interactions without the use of metal nanostructures, and can find applications in quantum electrodynamics, sensors, and nanoparticle trapping.
    Keywords: light diffraction; optical waveguide theory; apex-to-apex fan-shaped waveguides; classical electromagnetic theory; corner geometry; dielectric waveguide; diffraction limit; extremely local electric field enhancement; gap geometry; gap radius; gap structures; light confinement; light-matter interactions; lossless dielectric corner; nanoparticle trapping; quantum electrodynamics; rounded corner radius; sensors; Antennas; Dielectrics; Electric fields; Optical waveguides; Plasmons; Waveguide discontinuities; Dielectric waveguides; nanophotonics; optical waveguides (ID#:14-2736)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815656&isnumber=6840377
  • Kai-Jun Che, "Waveguide modulated photonic molecules with metallic confinement," Transparent Optical Networks (ICTON), 2014 16th International Conference on, vol., no., pp.1,3, 6-10 July 2014. doi: 10.1109/ICTON.2014.6876649 Abstract: Photonic molecules based on the evanescent wave have displayed unique physical characteristics, such as quality factor enhancement of the optical system and mode transition between different order modes. The waveguide, as a basic photonic element, is introduced for indirect optical interaction and guided emission of photonic molecules. Because metal can effectively confine photons in a fixed space and, acting as an optical insulator, facilitates high-density device packaging, the optical characteristics of photonic molecules with metallic confinement, including the mode and emission characteristics, are investigated by electromagnetic analysis combined with finite difference time domain simulations. The results show that the metal dissipation of the odd and even states splits, since the states have different morphologies in the coupling area, and that the guided emission is strongly determined by the metal-dielectric confined waveguide. Moreover, non-local optical interaction between two whispering gallery circular resonators through a waveguide coupled in the radial direction is proposed to overcome the small depth of evanescent permeation. Strong optical interaction is found for the even state, and the interaction intensity depends on the features of the waveguide.
    Keywords: Q-factor; finite difference time-domain analysis; optical resonators; optical waveguides; coupling area; electromagnetic analysis; evanescent permeation; evanescent wave; even state; finite difference time domain simulations; guided emission ;high density device package; interaction intensity; metal dissipation; metal-dielectric confined waveguide; metallic confinement; mode transition; nonlocal optical interaction; odd state; optical insulator; photonic molecules; quality factor enhancement; waveguide modulated photonic molecules; whispering gallery circular resonators; Integrated optics; Optical coupling; Optical resonators; Optical surface waves; Optical waveguides; Photonics; Stimulated emission; guided emission; metallic confinement; n on-local optical interaction; photonic molecules (ID#:14-2737)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876649&isnumber=6876260
  • Hegde, Ganesh; Povolotskyi, Michael; Kubis, Tillmann; Charles, James; Klimeck, Gerhard, "An Environment-Dependent Semi-Empirical Tight Binding Model Suitable For Electron Transport In Bulk Metals, Metal Alloys, Metallic Interfaces, And Metallic Nanostructures. II. Application--Effect Of Quantum Confinement And Homogeneous Strain On Cu Conductance," Journal of Applied Physics, vol.115, no.12, pp.123704, 123704-5, Mar 2014. doi: 10.1063/1.4868979 The Semi-Empirical tight binding model developed in Part I Hegde et al. [J. Appl. Phys. 115, 123703 (2014)] is applied to metal transport problems of current relevance in Part II. A systematic study of the effect of quantum confinement, transport orientation, and homogeneous strain on electronic transport properties of Cu is carried out. It is found that quantum confinement from bulk to nanowire boundary conditions leads to significant anisotropy in conductance of Cu along different transport orientations. Compressive homogeneous strain is found to reduce resistivity by increasing the density of conducting modes in Cu. The [110] transport orientation in Cu nanowires is found to be the most favorable for mitigating conductivity degradation since it shows least reduction in conductance with confinement and responds most favorably to compressive strain.
    Keywords: (not provided) (ID#:14-2738)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778709&isnumber=6777935
  • Padilla, J.L.; Alper, C.; Gamiz, F.; Ionescu, AM., "Assessment of field-induced quantum confinement in heterogate germanium electron-hole bilayer tunnel field-effect transistor," Applied Physics Letters, vol.105, no.8, pp.082108, 082108-4, Aug 2014. doi: 10.1063/1.4894088 The analysis of quantum mechanical confinement in recent germanium electron-hole bilayer tunnel field-effect transistors has been shown to substantially affect the band-to-band tunneling (BTBT) mechanism between electron and hole inversion layers that constitutes the operating principle of these devices. The vertical electric field that appears across the intrinsic semiconductor to give rise to the bilayer configuration makes the formerly continuous conduction and valence bands become a discrete set of energy subbands, therefore increasing the effective bandgap close to the gates and reducing the BTBT probabilities. In this letter, we present a simulation approach that shows how the inclusion of quantum confinement and the subsequent modification of the band profile results in the appearance of lateral tunneling to the underlap regions that greatly degrades the subthreshold swing of these devices. To overcome this drawback imposed by confinement, we propose an heterogate configuration that proves to suppress this parasitic tunneling and enhances the device performance.
    Keywords: (not provided) (ID#:14-2739)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6887270&isnumber=6884699
  • Puthen Veettil, B.; Konig, D.; Patterson, R.; Smyth, S.; Conibeer, G., "Electronic Confinement In Modulation Doped Quantum Dots," Applied Physics Letters, vol.104, no. 15, pp. 153102, 153102-3, Apr 2014. doi: 10.1063/1.4871576 Modulation doping, an effective way to dope quantum dots (QDs), modifies the confinement energy levels in the QDs. We present a self-consistent full multi-grid solver to analyze the effect of modulation doping on the confinement energy levels in large-area structures containing Si QDs in SiO2 and Si3N4 dielectrics. The confinement energy was found to be significantly lower when QDs were in close proximity to dopant ions in the dielectric. This effect was found to be smaller in Si3N4, while smaller QDs in SiO2 were highly susceptible to energy reduction. The energy reduction was found to follow a power law relationship with the QD size.
    Keywords: (not provided) (ID#:14-2740)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798595&isnumber=6798591
  • Li Wei; Aldawsari, S.; Wing-Ki Liu; West, B.R., "Theoretical Analysis of Plasmonic Modes in a Symmetric Conductor-Gap-Dielectric Structure for Nanoscale Confinement," Photonics Journal, IEEE, vol.6, no.3, pp.1, 10, June 2014. doi: 10.1109/JPHOT.2014.2326677 A hybrid plasmonic waveguide is considered one of the most promising architectures for long-range subwavelength guiding. The objective of this paper is to present a theoretical analysis of plasmonic guided modes in a symmetric conductor-gap-dielectric (SCGD) system. It consists of a thin metal conductor symmetrically sandwiched between two-layer dielectrics with low-index nanoscale gaps inside. The SCGD waveguide can support an ultra-long-range surface plasmon-polariton mode when the thickness of the low-index gap is smaller than a cutoff gap thickness. For relatively high index contrast ratios of the cladding to gap layers, the cutoff gap thickness is only a few nanometers, within which the electric field of the guided SCGD mode is tightly confined. The dispersion equations and approximate analytical expressions of the cutoff gap thickness are derived in order to characterize the properties of the guided mode. Our simulation results show that the cutoff gap thickness can be tailored by the metal film thickness and the indices of the cladding and gap materials. A geometrical scheme for lateral confinement is also presented. Such a structure, with its unique features of low loss and strong confinement, has applications in the fabrication of active and passive plasmonic devices.
    Keywords: metallic thin films; nanophotonics; optical waveguides; plasmonics; polaritons; surface plasmons; cutoff gap thickness; dispersion equations; electric field; hybrid plasmonic waveguide; low-index nanoscale gaps; metal film thickness; nanoscale confinement; plasmonic guided modes; surface plasmon-polariton mode; symmetric conductor-gap-dielectric structure; theoretical analysis; thin metal conductor; two-layer dielectrics; Equations; Films; Indexes; Metals; Optical waveguides; Plasmons; Propagation losses; Surface plasmons; guided wave; integrated optics; waveguides (ID#:14-2741)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823089&isnumber=6809260
  • Barbagiovanni, E.G.; Lockwood, D.J.; Rowell, N.L.; Costa Filho, R.N.; Berbezier, I; Amiard, G.; Favre, L.; Ronda, A; Faustini, M.; Grosso, D., "Role of Quantum Confinement In Luminescence Efficiency Of Group IV Nanostructures," Journal of Applied Physics, vol.115, no.4, pp. 044311, 044311-4, Jan 2014. doi: 10.1063/1.4863397 Experimental results obtained previously for the photoluminescence efficiency (PLeff) of Ge quantum dots (QDs) are theoretically studied. A log-log plot of PLeff versus QD diameter (D) resulted in an identical slope for each Ge QD sample only when EG ∝ (D⁻² + D⁻¹). We identified that above D ≈ 6.2 nm: EG ∝ D⁻¹ due to a changing effective mass (EM), while below D ≈ 4.6 nm: EG ∝ D⁻² due to electron/hole confinement. We propose that as the QD size is initially reduced, the EM is reduced, which increases the Bohr radius and interface scattering until eventually pure quantum confinement effects dominate at small D.
    Keywords: (not provided) (ID#:14-2742)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6728975&isnumber=6720061
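    The two limiting slopes quoted above are easy to check numerically: on a log-log plot, EG ∝ D⁻² gives slope −2 and EG ∝ D⁻¹ gives slope −1, with a combined form interpolating between the regimes. The prefactors in this toy sketch are arbitrary, so its crossover does not sit at the paper's 4.6-6.2 nm boundaries.

        # Log-log slope of a two-term scaling law E(D) = c2/D^2 + c1/D.
        import numpy as np

        D = np.logspace(0, np.log10(50), 200)   # diameter, arbitrary nm-like units
        E = 5.0 / D**2 + 1.0 / D                # arbitrary prefactors c2=5, c1=1

        slope = np.gradient(np.log(E), np.log(D))
        print("slope at small D: %.2f (confinement-like, toward -2)" % slope[0])
        print("slope at large D: %.2f (effective-mass-like, toward -1)" % slope[-1])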
  • Ishizaka, Yuhei; Nagai, Masaru; Saitoh, Kunimasa, "Strong Light Confinement In A Metal-Assisted Silicon Slot Waveguide," Optical Fibre Technology, 2014 OptoElectronics and Communication Conference and Australian Conference on, pp.103,105, 6-10 July 2014. A metal-assisted silicon slot waveguide is presented. Numerical results show that the proposed structure achieves a strong light confinement in a low-index region, which leads to the improvement of the sensitivity in refractive index sensors.
    Keywords: Metals; Optical waveguides; Optimized production technology; Refractive index; Sensitivity; Sensors; Silicon (ID#:14-2743)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6888012&isnumber=6887957
  • Park, Y.; Hirose, Y.; Nakao, S.; Fukumura, T.; Xu, J.; Hasegawa, T., "Quantum Confinement Effect In Bi Anti-Dot Thin Films With Tailored Pore Wall Widths And Thicknesses," Applied Physics Letters, vol. 104, no. 2, pp.023106,023106-4, Jan 2014. doi: 10.1063/1.4861775 We investigated quantum confinement effects in Bi anti-dot thin films grown on anodized aluminium oxide templates. The pore wall widths (wBi) and thickness (t) of the films were tailored to have values longer or shorter than the Fermi wavelength of Bi (λF = 40 nm). Magnetoresistance measurements revealed a well-defined weak antilocalization effect below 10 K. Coherence lengths (Lph) as functions of temperature were derived from the magnetoresistance vs. field curves by assuming the Hikami-Larkin-Nagaoka model. The anti-dot thin film with wBi and t smaller than λF showed low dimensional electronic behavior at low temperatures, where Lph(T) exceeds wBi or t.
    Keywords: (not provided) (ID#:14-2744)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6712880&isnumber=6712870

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Control Theory

Control Theory


According to Wikipedia, "Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems with inputs, and how their behavior is modified by feedback." In cyber security, control theory offers methods and approaches to potentially solve hard problems. The articles cited here look at both theory and applications and were presented in the first half of 2014.

  • Spyridopoulos, Theodoros; Maraslis, Konstantinos; Tryfonas, Theo; Oikonomou, George; Li, Shancang, "Managing Cyber Security Risks In Industrial Control Systems With Game Theory And Viable System Modelling," System of Systems Engineering (SOSE), 2014 9th International Conference on, pp.266,271, 9-13 June 2014. doi: 10.1109/SYSOSE.2014.6892499 Cyber security risk management in Industrial Control Systems has been a challenging problem for both practitioners and the research community. The proprietary nature of these systems, along with their complexity, renders traditional approaches rather insufficient, creating the need to adopt a holistic point of view. This paper draws upon the principles of the Viable System Model and Game Theory in order to present a novel systemic approach towards cyber security management in this field, taking into account the complex inter-dependencies and providing cost-efficient defence solutions.
    Keywords: Airports; Computer security; Game theory; Games; Industrial control; Risk management; asset evaluation; game theory; industrial control systems; risk management; viable system model (ID#:14-2745)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6892499&isnumber=6892448
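    To make the game-theoretic ingredient concrete, the sketch below solves a small zero-sum attacker-defender matrix game for the defender's optimal mixed strategy via linear programming. The payoff matrix and the three defensive postures are invented for illustration; they are not taken from the paper's case study.

        # Zero-sum matrix game solved as an LP: minimise the worst-case loss v
        # subject to loss^T p <= v (per attacker action), sum(p) = 1, p >= 0.
        import numpy as np
        from scipy.optimize import linprog

        # loss[i, j] = expected damage if defender plays i, attacker plays j
        loss = np.array([[4.0, 1.0, 3.0],      # defend network perimeter
                         [2.0, 3.0, 1.0],      # harden PLCs
                         [3.0, 2.0, 2.0]])     # monitor operator stations

        n, m = loss.shape
        c = np.r_[np.zeros(n), 1.0]            # variables: p_1..p_n, v
        A_ub = np.c_[loss.T, -np.ones(m)]
        b_ub = np.zeros(m)
        A_eq = np.r_[np.ones(n), 0.0][None, :]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * n + [(None, None)])
        print("defender mixed strategy:", np.round(res.x[:n], 3))
        print("worst-case expected loss:", round(res.x[n], 3))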
  • Kumar, P.; Singh, AK.; Kummari, N.K., "P-Q Theory Based Modified Control Algorithm For Load Compensating Using DSTATCOM," Harmonics and Quality of Power (ICHQP), 2014 IEEE 16th International Conference on, pp.591,595, 25-28 May 2014. doi: 10.1109/ICHQP.2014.6842810 This paper proposes a control algorithm for a DSTATCOM (Distributed STATic COMpensator) to compensate the source current harmonics in a non-sinusoidal voltage source environment. A 3-leg VSC (voltage source converter) based DSTATCOM is used for load compensation on a system with balanced 5th-harmonic PCC voltages in a 3-phase, 4-wire distribution system. Simulations are performed in the MATLAB environment for two load conditions: (i) a 3-phase non-linear load (NLL), and (ii) an NLL with reactive load. The results show that the proposed modification of the p-q theory control algorithm allows successful harmonic compensation on the load side.
    Keywords: compensation; power convertors; static VAr compensators; 3-leg VSC; 3-phase 4-wire distribution system; 3-phase nonlinear load; Matlab environment; NLL; balanced 5th harmonic PCC voltages; control algorithm; distributed static compensator; load compensation; nonsinusoidal voltage source environment ;p-q theory based modified control algorithm; point of common coupling; reactive load; source current harmonic compensation; voltage source converter based DSTATCOM; Harmonic analysis; Power harmonic filters; Reactive power; Rectifiers; Vectors; Voltage control; DSTATCOM; Harmonics; p-q theory (ID#:14-2746)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842810&isnumber=6842734
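    For orientation, the classic (unmodified) p-q computation that the proposed algorithm builds on fits in a short sketch: a Clarke transform, instantaneous real and imaginary powers, and compensating current references that leave only the average real power to the source. A mean stands in for the usual low-pass filter, the load is made up (fundamental plus a 5th harmonic), and the paper's specific modification for non-sinusoidal source voltages is not reproduced.

        # Classic p-q theory reference currents (power-invariant Clarke frame).
        import numpy as np

        def clarke(a, b, c):
            alpha = np.sqrt(2 / 3) * (a - 0.5 * b - 0.5 * c)
            beta = (1 / np.sqrt(2)) * (b - c)
            return alpha, beta

        def pq_reference_currents(va, vb, vc, ia, ib, ic):
            v_al, v_be = clarke(va, vb, vc)
            i_al, i_be = clarke(ia, ib, ic)
            p = v_al * i_al + v_be * i_be      # instantaneous real power
            q = v_al * i_be - v_be * i_al      # instantaneous imaginary power
            p_c = p - np.mean(p)               # compensator takes the oscillating part
            det = v_al**2 + v_be**2
            ic_al = (v_al * p_c - v_be * q) / det
            ic_be = (v_be * p_c + v_al * q) / det
            return ic_al, ic_be

        t = np.linspace(0, 0.04, 800)          # two 50 Hz cycles
        ph = 2 * np.pi / 3
        va, vb, vc = (np.sin(2*np.pi*50*t + s) for s in (0, -ph, ph))
        ia, ib, ic = (np.sin(2*np.pi*50*t - 0.5 + s)
                      + 0.2 * np.sin(2*np.pi*250*t + s) for s in (0, -ph, ph))
        print(np.round(pq_reference_currents(va, vb, vc, ia, ib, ic)[0][:5], 4))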
  • Veremey, Evgeny I, "Computer Technologies Based On Optimization Approach In Control Theory," Computer Technologies in Physical and Engineering Applications (ICCTPEA), 2014 International Conference on, pp.200,201, June 30 2014-July 4 2014. doi: 10.1109/ICCTPEA.2014.6893359 The report is devoted to the basic concepts of applying computer technologies and systems in the wide area of control system and process investigation and design. Special attention is focused on the optimization approach in connection with problems of control system modeling, analysis, and synthesis. Some questions of the real-time implementation of digital control laws are discussed. Computational algorithms are proposed for optimization problems with non-formalized performance indices. The main points are illustrated by corresponding numerical examples.
    Keywords: (not provided) (ID#:14-2747)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6893359&isnumber=6893238
  • Fatemi, M.; Haykin, S., "Cognitive Control: Theory and Application," Access, IEEE, vol.2, pp.698, 710, 2014. doi: 10.1109/ACCESS.2014.2332333 From an engineering point of view, cognitive control is inspired by the prefrontal cortex of the human brain; cognitive control may therefore be viewed as the overarching function of a cognitive dynamic system. In this paper, we describe a new way of thinking about cognitive control that embodies two basic components: learning and planning, both of which are based on two notions: 1) a two-state model of the environment and the perceptor and 2) the perception-action cycle, which is a distinctive characteristic of the cognitive dynamic system. Most importantly, it is shown that the cognitive control learning algorithm is a special form of Bellman's dynamic programming. Distinctive properties of the new algorithm include the following: 1) optimality of performance; 2) algorithmic convergence to optimal policy; and 3) linear law of complexity measured in terms of the number of actions taken by the cognitive controller on the environment. To validate these intrinsic properties of the algorithm, a computational experiment is presented, which involves a cognitive tracking radar that is known to closely mimic the visual brain. The experiment illustrates two different scenarios: 1) the impact of planning on learning curves of the new cognitive controller and 2) comparison of the learning curves of three different controllers, based on dynamic optimization, traditional Q-learning, and the new algorithm. The latter two algorithms are based on the two-state model, and they both involve the use of planning.
    Keywords: cognition; computational complexity; dynamic programming; Bellman dynamic programming; Q-learning; algorithmic convergence; cognitive control learning algorithm; cognitive dynamic system; cognitive tracking radar; dynamic optimization; human brain; learning curves; linear complexity law; perception-action cycle; performance optimality; prefrontal cortex; two-state model; visual brain; Brain modeling; Cognition; Complexity theory; Control systems; Dynamic programming; Heuristic algorithms; Perception; Radar tracking; Bayesian filtering; Cognitive dynamic systems; Shannon's entropy; cognitive control; dynamic programming; entropic state; explore/exploit tradeoff ;learning; planning; two-state model (ID#:14-2748)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843352&isnumber=6705689
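    As a point of reference for the traditional Q-learning baseline mentioned in the abstract, the sketch below runs tabular Q-learning on a made-up two-state, two-action chain (not the cognitive-radar benchmark) and prints the learned greedy policy.

        # Tabular Q-learning on a toy two-state Markov decision process.
        import numpy as np

        rng = np.random.default_rng(4)
        reward = np.array([[0.0, 0.1],         # reward[s, a]
                           [0.2, 1.0]])
        P = [np.array([[0.9, 0.1], [0.8, 0.2]]),   # P[a][s, s']
             np.array([[0.3, 0.7], [0.1, 0.9]])]

        Q = np.zeros((2, 2))
        gamma, alpha, eps, s = 0.9, 0.1, 0.1, 0
        for _ in range(20000):
            a = rng.integers(2) if rng.random() < eps else int(np.argmax(Q[s]))
            s_next = rng.choice(2, p=P[a][s])
            Q[s, a] += alpha * (reward[s, a] + gamma * np.max(Q[s_next]) - Q[s, a])
            s = s_next
        print("learned Q-table:\n", np.round(Q, 2))
        print("greedy policy:", np.argmax(Q, axis=1))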
  • Yin-Lam Chow; Pavone, M., "A Framework For Time-Consistent, Risk-Averse Model Predictive Control: Theory And Algorithms," American Control Conference (ACC), 2014, pp.4204,4211, 4-6 June 2014. doi: 10.1109/ACC.2014.6859437 In this paper we present a framework for risk-averse model predictive control (MPC) of linear systems affected by multiplicative uncertainty. Our key innovation is to consider time-consistent, dynamic risk metrics as objective functions to be minimized. This framework is axiomatically justified in terms of time-consistency of risk preferences, is amenable to dynamic optimization, and is unifying in the sense that it captures a full range of risk assessments from risk-neutral to worst case. Within this framework, we propose and analyze an online risk-averse MPC algorithm that is provably stabilizing. Furthermore, by exploiting the dual representation of time-consistent, dynamic risk metrics, we cast the computation of the MPC control law as a convex optimization problem amenable to implementation on embedded systems. Simulation results are presented and discussed.
    Keywords: convex programming; linear systems; predictive control; risk analysis; stability; uncertain systems; MPC control law; convex optimization problem; dynamic optimization; dynamic risk metrics; linear systems; multiplicative uncertainty; risk preference; risk-averse model predictive control; stability; time-consistent model predictive control; Equations; Markov processes; Mathematical model; Measurement; Predictive control; Random variables; Stability analysis; LMIs; Predictive control for linear systems; Stochastic systems (ID#:14-2749)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859437&isnumber=6858556
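    A common one-step building block for the time-consistent dynamic risk metrics the paper works with is conditional value-at-risk (CVaR), the mean of the worst alpha-fraction of outcomes. The sketch below computes a sample-based CVaR on an invented cost distribution; the paper's full multi-stage, dynamic construction is beyond this snippet.

        # Sample-based conditional value-at-risk of a cost distribution.
        import numpy as np

        def cvar(costs, alpha):
            """Mean of the worst alpha-fraction of sampled costs."""
            costs = np.sort(np.asarray(costs))
            k = max(1, int(np.ceil(alpha * len(costs))))
            return costs[-k:].mean()

        rng = np.random.default_rng(5)
        stage_costs = rng.lognormal(mean=0.0, sigma=0.7, size=10000)
        print("expected cost:", round(stage_costs.mean(), 3))
        print("CVaR at alpha = 0.10:", round(cvar(stage_costs, 0.10), 3))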
  • Ueyama, Y., "Feedback Gain Indicates The Preferred Direction In Optimal Feedback Control Theory," Advanced Motion Control (AMC), 2014 IEEE 13th International Workshop on, pp.651,656, 14-16 March 2014. doi: 10.1109/AMC.2014.6823358 We investigated the role of feedback gain in optimal feedback control (OFC) theory using a neuromotor system. Neural studies have shown that directional tuning, known as the "preferred direction" (PD), is a basic functional property of cell activity in the primary motor cortex (M1). However, it is not clear which directions the M1 codes for, because neural activities can correlate with several directional parameters, such as joint torque and end-point motion. Thus, to examine the computational mechanism in the M1, we modeled the isometric motor task of a musculoskeletal system required to generate the desired joint torque. Then, we computed the optimal feedback gain according to OFC. The feedback gain indicated directional tunings of the joint torque and end-point motion in Cartesian space that were similar to the M1 neuron PDs observed in previous studies. Thus, we suggest that the M1 acts as a feedback gain in OFC.
    Keywords: biocontrol; feedback; neurophysiology; optimal control; biological motor system; central nervous system; directional tuning; end-point motion; isometric motor task; joint torque; musculoskeletal system; neuromotor system; optimal feedback control theory; optimal feedback gain; preferred direction; primary motor cortex; Elbow; Force; Joints; Kalman filters; Muscles; Shoulder; Torque; isometric task; motor control; motor cortex; musculoskeletal systems; population coding (ID#:14-2750)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823358&isnumber=6823244
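
As a companion to the abstract above, here is a minimal sketch of computing an optimal feedback gain for a discrete-time linear system, the quantity the study interprets as M1 directional tuning. The double-integrator matrices and cost weights are illustrative stand-ins for the paper's musculoskeletal model.

```python
# A minimal sketch of computing an optimal feedback gain K for a
# discrete-time linear system, the central quantity in OFC.  The
# matrices A, B, Q, R below are illustrative, not the paper's
# musculoskeletal model.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator dynamics
B = np.array([[0.0], [0.1]])
Q = np.eye(2)                             # state cost
R = np.array([[0.01]])                    # control-effort cost

P = solve_discrete_are(A, B, Q, R)        # solve the Riccati equation
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # optimal gain
print("optimal feedback gain K =", K)
```
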
  • Xiaoliang Zhang; Junqiang Bai, "Aerodynamic Optimization Utilizing Control Theory," Control and Decision Conference (2014 CCDC), The 26th Chinese, pp.1293, 1298, May 31 2014-June 2 2014. doi: 10.1109/CCDC.2014.6852366 This paper presents the method of aerodynamic optimization utilizing control theory, which is also called the adjoint method. The discrete adjoint equations are obtained from an unstructured cell-vortex finite-volume Navier-Stokes solver. The developed adjoint equations solver is verified by comparison of objective sensitivities with finite differences. An aerodynamic optimization system is developed combining the flow solver, adjoint solver, mesh deformation and a gradient-based optimizer. The surface geometry is parameterized using Free Form Deformation (FFD) method and a linear elasticity method is employed for the volume mesh deformation during optimization process. This optimization system is successfully applied to a design case of ONERA M6 transonic wing design.
    Keywords: Navier-Stokes equations; aerodynamics; aerospace components; computational fluid dynamics; design engineering; elasticity; finite difference methods; finite volume methods; gradient methods; mechanical engineering computing; mesh generation; optimisation; transonic flow; vortices; FFD method; ONERA M6 transonic wing design; adjoint equations solver; adjoint method; aerodynamic optimization; computational fluid dynamics; control theory; discrete adjoint equations; finite difference; flow solver; free form deformation method; gradient-based optimizer; linear elasticity method; surface geometry; unstructured cell-vortex finite-volume Navier-Stokes solver; volume mesh deformation; Aerodynamics; Equations; Geometry; Mathematical model; Optimization; Sensitivity; Vectors; Aerodynamic and Adjoint method; Control theory; Optimization (ID#:14-2751)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852366&isnumber=6852105
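
The verification step the abstract mentions, comparing adjoint objective sensitivities with finite differences, can be illustrated on a toy problem. In the sketch below the "flow solver" is just a linear solve A u = b(x); the operator A, right-hand side b(x), and objective J are illustrative assumptions, not a Navier-Stokes discretization. The point is that one extra adjoint solve yields the whole design gradient.

```python
# A toy sketch of the adjoint method: the objective gradient is
# obtained from one extra (adjoint) linear solve and verified
# against finite differences, as the paper does for its solver.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)   # toy "flow" operator

def b(x):                                      # design-dependent RHS
    return np.array([x[0], x[1], x[0] * x[1], 1.0])

def db_dx(x):                                  # Jacobian of b w.r.t. design
    return np.array([[1.0, 0.0], [0.0, 1.0], [x[1], x[0]], [0.0, 0.0]])

def J(x):                                      # objective J = 0.5 * u.u
    u = np.linalg.solve(A, b(x))
    return 0.5 * u @ u

x = np.array([1.0, 2.0])
u = np.linalg.solve(A, b(x))
lam = np.linalg.solve(A.T, u)                  # adjoint solve: A^T lam = dJ/du
grad_adjoint = lam @ db_dx(x)                  # dJ/dx = lam^T db/dx

eps = 1e-6                                     # finite-difference check
grad_fd = np.array([(J(x + eps * e) - J(x - eps * e)) / (2 * eps)
                    for e in np.eye(2)])
print(grad_adjoint, grad_fd)                   # should agree closely
```
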
  • Khanum, S.; Islam, M.M., "An Enhanced Model Of Vertical Handoff Decision Based On Fuzzy Control Theory & User Preference," Electrical Information and Communication Technology (EICT), 2013 International Conference on, vol., no., pp.1,6, 13-15 Feb. 2014. doi: 10.1109/EICT.2014.6777873 With the development of wireless communication technology, various wireless networks with different features will exist in the same premises. Heterogeneous networks will be dominant in the next generation of wireless networks, and choosing the most suitable network for a mobile user is one of the key issues in such networks. Vertical handoff decision making is one of the most important topics in wireless heterogeneous network architectures. Here the most significant parameters are considered in the vertical handoff decision. The proposed method considers Received Signal Strength (RSS), Monetary Cost (C), Bandwidth (BW), Battery Consumption (BC), Security (S) and Reliability (R). Handoff decision making is divided into two sections. The first section calculates a system obtained value (SOV) considering RSS, C, BW and BC, using fuzzy logic theory. Today's mobile users are very discerning in choosing their desired types of service; the user-preferred network, chosen from the user's priority list, gives the user obtained value (UOV). Handoff decisions are then made based on SOV and UOV to select the most appropriate network for the mobile nodes (MNs). Simulation results show that the fuzzy control theory and user preference based vertical handoff decision algorithm (VHDA) is able to make accurate handoff decisions, reduce unnecessary handoffs, decrease handoff calculation time, and decrease the probability of call blocking and dropping.
    Keywords: decision making; fuzzy control; fuzzy set theory; mobile computing; mobility management (mobile radio); probability; telecommunication network reliability; telecommunication security; MC; RSS; SOV; VHDA; bandwidth; battery consumption; decrease call blocking probability; decrease call dropping probability; decrease handoff calculation time; fuzzy control theory; fuzzy logic theory; mobile nodes; monetary cost; next generation wireless networks; received signal strength; reliability; security; system obtained value calculation; unnecessary handoff reduction; user obtained value; user preference; user priority list; vertical handoff decision enhancement model; vertical handoff decision making; wireless communication technology; wireless heterogeneous networks architecture; Bandwidth; Batteries; Communication system security; Mobile communication; Vectors; Wireless networks; Bandwidth; Cost; Fuzzy control theory; Heterogeneous networks; Received signal strength; Security and user preference; Vertical handoff (ID#:14-2752)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777873&isnumber=6777807
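
A minimal sketch of the SOV computation described above follows, scoring each candidate network from RSS, monetary cost, bandwidth, and battery consumption. The triangular membership functions, normalization ranges, and criterion weights are assumptions for illustration; the paper's actual fuzzy rule base, and the UOV from the user's priority list, would refine or veto the choice.

```python
# A minimal sketch (assumed membership functions and weights, not the
# paper's exact rule base) of computing a system obtained value (SOV)
# per candidate network, then picking the best network.

def tri_up(x, lo, hi):
    """Rising triangular membership: 0 at lo, 1 at hi."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def sov(rss_dbm, cost, bw_mbps, battery_drain):
    # Normalize each criterion to [0, 1]; "benefit" criteria rise,
    # "cost" criteria are inverted.
    m_rss  = tri_up(rss_dbm, -90.0, -40.0)
    m_cost = 1.0 - tri_up(cost, 0.0, 10.0)
    m_bw   = tri_up(bw_mbps, 0.0, 100.0)
    m_bat  = 1.0 - tri_up(battery_drain, 0.0, 5.0)
    w = (0.35, 0.2, 0.3, 0.15)          # assumed criterion weights
    return w[0]*m_rss + w[1]*m_cost + w[2]*m_bw + w[3]*m_bat

networks = {                            # (RSS dBm, cost, Mbps, drain)
    "WLAN":     (-55.0, 1.0, 54.0, 2.0),
    "Cellular": (-75.0, 5.0, 20.0, 1.0),
}
scores = {name: sov(*params) for name, params in networks.items()}
best = max(scores, key=scores.get)      # UOV (user preference) could
print(scores, "-> handoff to", best)    # break ties or veto this choice
```
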

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Covert Channels

Covert Channels


A covert channel is a simple, effective mechanism for sending and receiving data between machines without alerting any firewalls or intrusion detectors on the network. In cybersecurity science, covert channels have value as a means of both defense and attack. The work cited here, presented or published between January and October of 2014, looks at covert channels in radar and other signal processors, timing, IPv6, DNS, and attacks within the cloud.

  • Shi, H.; Tennant, A, "Covert Communication Using A Directly Modulated Array Transmitter," Antennas and Propagation (EuCAP), 2014 8th European Conference on, pp.352, 354, 6-11 April 2014. doi: 10.1109/EuCAP.2014.6901764 A Direct Antenna Modulation (DAM) scheme is configured on a 2-element array with 2-bit phase control. Such a transmitter is shown to generate constellations with two different orders simultaneously towards different transmitting angles. A possible covert communication scenario is presented in which a constellation with 16 desired signals can be generated at the intended direction, while at a second direction a constellation with a reduced number of distinct signal points is purposely generated to prevent accurate demodulation by an eavesdropper. In addition, the system can be configured to actively steer the low-level constellation towards up to two independent, pre-known eavesdropping angles.
    Keywords: Antenna arrays; Arrays; Constellation diagram; Transmitting antennas; Direct Antenna Modulation (DAM); constellation; phased array (ID#:14-2768)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901764&isnumber=6901672
  • Shrestha, P.L.; Hempel, M.; Sharif, H., "Towards a Unified Model For The Analysis Of Timing-Based Covert Channels," Communications (ICC), 2014 IEEE International Conference on, pp.816,820, 10-14 June 2014. doi: 10.1109/ICC.2014.6883420 Covert channels are a network security risk growing both in sophistication and utilization, and thus posing an increasing threat. They leverage benign and overt network activities, such as the modulation of packet inter-arrival time, to covertly transmit information without detection by current network security approaches such as firewalls. This makes them a grave security concern. Thus, researching methods for detecting and disrupting such covert communication is of utmost importance. Understanding and developing analytical models is an essential requirement of covert channel analysis. Unfortunately, due to the enormous range of covert channel algorithms available it becomes very inefficient to analyze them on a case-by-case basis. Hence, a unified model that can represent a wide variety of covert channels is required, but is not yet available. In other publications, individual models to analyze the capacity of interrupt-related covert channels have been discussed. In our work, we present a unique model to unify these approaches. This model has been analyzed and we have presented the results and verification of our approach using MATLAB simulations.
    Keywords: firewalls; telecommunication channels; Matlab simulations; analytical models; covert communication; firewalls; interrupt-related covert channels; network security risk; packet inter-arrival time modulation; timing-based covert channel analysis; Analytical models; Delays; Jitter; Mathematical model; Receivers; Security; Capacity; Covert Communication; Interrupt-Related Covert Channel; Mathematical Modeling; Model Analysis; Network Security; Packet Rate Timing Channels (ID#:14-2769)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883420&isnumber=6883277
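
To make the class of channels the unified model covers concrete, here is a minimal sketch of a timing covert channel: bits are modulated onto packet inter-arrival times and recovered with a threshold, with network jitter as the noise source. The delay values and jitter level are illustrative assumptions.

```python
# A minimal sketch of a timing covert channel of the kind the unified
# model targets: bits are encoded as inter-packet delays and decoded
# with a threshold.  Delay values and jitter level are illustrative.
import random

D0, D1 = 0.05, 0.15          # inter-packet delays (s) for bit 0 / bit 1
THRESH = (D0 + D1) / 2

def encode(bits):
    return [D1 if b else D0 for b in bits]

def channel(delays, jitter=0.02):
    """Network adds Gaussian jitter to each inter-arrival time."""
    return [max(0.0, d + random.gauss(0.0, jitter)) for d in delays]

def decode(delays):
    return [1 if d > THRESH else 0 for d in delays]

random.seed(1)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
rx = decode(channel(encode(msg)))
ber = sum(a != b for a, b in zip(msg, rx)) / len(msg)
print("sent", msg, "received", rx, "BER", ber)
```
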
  • Rezaei, F.; Hempel, M.; Shrestha, P.L.; Sharif, H., "Achieving Robustness And Capacity Gains In Covert Timing Channels," Communications (ICC), 2014 IEEE International Conference on, pp.969,974, 10-14 June 2014. doi: 10.1109/ICC.2014.6883445 In this paper, we introduce a covert timing channel (CTC) algorithm and compare it to one of the most prevalent CTC algorithms, originally proposed by Cabuk et al. CTC is a form of covert channel - methods that exploit network activities to transmit secret data over packet-based networks - based on modifying packet timing. Cabuk et al.'s algorithm is a seminal work, one of the most widely cited CTCs, and the foundation for many CTC research activities. In order to overcome some of the disadvantages of this algorithm, we introduce a covert timing channel technique that leverages timeout thresholds. The proposed algorithm is compared to the original algorithm in terms of channel capacity, impact on overt traffic, bit error rates, and latency. Based on our simulation results, the proposed algorithm outperforms the work from Cabuk et al., especially in terms of its higher covert data transmission rate with lower latency and fewer bit errors. In our work we also address the desynchronization problem found in Cabuk et al.'s algorithm and show in our simulation results that even against the synchronization-corrected Cabuk et al. algorithm our proposed method provides better results in terms of capacity and latency.
    Keywords: channel capacity; wireless channels; CTC algorithms; bit error rates; capacity gains; channel capacity; covert timing channel algorithm; desynchronization problem; overt traffic; packet timing; packet-based networks; secret data; timeout thresholds; Algorithm design and analysis; Bit error rate; Channel capacity; Delays; Jitter; Receivers; Capacity; Covert Communication; Covert Timing Channel; Hidden Information; Latency; Network Security (ID#:14-2770)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883445&isnumber=6883277
  • Mavani, M.; Ragha, L., "Covert Channel In Ipv6 Destination Option Extension Header," Circuits, Systems, Communication and Information Technology Applications (CSCITA), 2014 International Conference on, pp.219,224, 4-5 April 2014. doi: 10.1109/CSCITA.2014.6839262 IPv6 is the next-generation Internet protocol, whose adoption is set to increase as IPv4 addresses are exhausted and more mobile devices are attached to the Internet. Experience with the IPv6 protocol is limited, as its deployment has been slow, so many unknown threats are possible in IPv6 networks. One such threat, addressed in this paper, is covert communication in the network. A covert channel is a way of communicating classified information; in a network, this is done through a network protocol's control fields. The Destination Options extension header of IPv6 is used to pass secret information, which is shown experimentally in a real test-network setup. Attack packets are crafted using the Scapy Python-based API. Covert channels based on an unknown option and on nonzero padding in the PadN option are demonstrated. Their detection is also proposed, and the detector logic is implemented using shell scripting and C programming.
    Keywords: IP networks; application program interfaces; computer network security; protocols; C programming; IPv4 addresses; IPv6 destination option extension header; IPv6 networks; PadN option; Scapy-Python based API attack packets; covert channel; covert communication; detector logic; extension header; mobile devices; network protocol control fields; next generation Internet protocol; nonzero padding; shell scripting; test network set up; Detectors; IP networks; Information technology; Internet; Operating systems; Protocols; Security; Extension Header; IPv6; Scapy; covert channel (ID#:14-2771)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6839262&isnumber=6839219
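
A short sketch of the PadN covert channel described above, using Scapy (the same Python API the paper uses to craft attack packets), is shown below. The addresses and payload are illustrative. RFC 2460 specifies that PadN option data consists of zero-valued octets, so nonzero padding both carries the covert payload and is precisely the cue a detector can look for, which is also sketched.

```python
# A sketch of the PadN covert channel: embed covert bytes as nonzero
# PadN padding in an IPv6 destination options header.  Addresses and
# payload are illustrative; sending requires privileges.
from scapy.all import IPv6, IPv6ExtHdrDestOpt, ICMPv6EchoRequest, PadN

secret = b"key"                         # covert payload
pkt = (IPv6(dst="2001:db8::1")
       / IPv6ExtHdrDestOpt(options=[PadN(optdata=secret)])
       / ICMPv6EchoRequest())

pkt.show()                              # inspect the crafted header
# send(pkt)                             # transmission omitted here

# Detector logic sketch: flag any PadN option whose data is nonzero,
# since RFC-conformant padding must be all zeros.
for o in pkt[IPv6ExtHdrDestOpt].options:
    if isinstance(o, PadN) and any(bytes(o.optdata)):
        print("covert PadN padding detected:", o.optdata)
```
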
  • Binsalleeh, H.; Kara, AM.; Youssef, A; Debbabi, M., "Characterization of Covert Channels in DNS," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814008 Malware families utilize different protocols to establish their covert communication networks. It is also the case that sometimes they utilize protocols which are least expected to be used for transferring data, e.g., Domain Name System (DNS). Even though the DNS protocol is designed to be a translation service between domain names and IP addresses, it leaves some open doors to establish covert channels in DNS, which is widely known as DNS tunneling. In this paper, we characterize the malicious payload distribution channels in DNS. Our proposed solution characterizes these channels based on the DNS query and response messages patterns. We performed an extensive analysis of malware datasets for one year. Our experiments indicate that our system can successfully determine different patterns of the DNS traffic of malware families.
    Keywords: cryptographic protocols; invasive software; DNS protocol; DNS traffic; DNS tunneling; IP addresses; communication networks; covert channel characterization; domain name system; malicious payload distribution channels; malware datasets; malware families; message patterns; translation service; Command and control systems; Malware; Payloads; Protocols; Servers; Tunneling (ID#:14-2772)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814008&isnumber=6813963
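
One family of query patterns such characterizations rely on can be sketched simply: DNS tunnels encode payload bytes into query labels, so tunneled names tend to be long and high-entropy. The thresholds below are illustrative assumptions, not the paper's trained model.

```python
# A minimal sketch of one query-pattern feature family used to
# characterize DNS covert channels: tunneled query names tend to
# have long, high-entropy first labels.  Thresholds are illustrative.
import math
from collections import Counter

def shannon_entropy(s):
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_tunneled(qname, max_label=30, min_entropy=3.5):
    first = qname.rstrip(".").split(".")[0]
    return len(first) > max_label or shannon_entropy(first) > min_entropy

queries = [
    "www.example.com.",
    "dGhpcyBpcyBjb3ZlcnQgcGF5bG9hZA0x7q.tunnel.example.com.",
]
for q in queries:
    print(q, "->", "suspicious" if looks_tunneled(q) else "benign")
```
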
  • Shrestha, P.L.; Hempel, M.; Sharif, H.; Chen, H.-H., "An Event-Based Unified System Model to Characterize and Evaluate Timing Covert Channels," Systems Journal, IEEE, vol. PP, no.99, pp. 1, 10, July 2014. doi: 10.1109/JSYST.2014.2328665 Covert channels are communication channels that transmit information using existing system resources without being detected by network security elements, such as firewalls. Thus, they can be utilized to leak confidential governmental, military, and corporate information. Malicious users, like terrorists, can use covert channels to exchange information without being detected by cyber-intelligence services. Therefore, covert channels can be a grave security concern, and it is important to detect, eliminate, and disrupt covert communications. Active network wardens can attempt to eliminate such channels by traffic modification, but such an implementation will also hamper innocuous traffic, which is not always acceptable. Owing to the large number of covert channel algorithms, it is not possible to deal with them on a case-by-case basis, which necessitates a unified system model that can represent them. In this paper, we present an event-based model to represent timing covert channels. Based on our model, we calculate the capacity of various covert channels and evaluate their essential features, such as the impact of network jitter noise and packet losses. We also used simulations to obtain these parameters and to verify the model's accuracy and applicability.
    Keywords: Capacity; covert channel; delay jitter; interrupt-related channel; packet loss; security; timing channel (ID#:14-2773)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6851146&isnumber=4357939
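
The kind of quantity the event-based model evaluates, channel capacity as a function of network jitter, can be approximated on the back of an envelope: treat threshold-decoding errors of a binary timing channel as a binary symmetric channel and apply C = 1 - H(p). The delays and jitter levels below are illustrative; this is not the paper's model.

```python
# A back-of-the-envelope sketch (not the paper's event-based model):
# capacity of a binary timing channel as jitter grows, modeling
# threshold decoding errors as a binary symmetric channel.
import math
from statistics import NormalDist

D0, D1 = 0.05, 0.15                  # delays for bit 0 / bit 1 (s)
thresh = (D0 + D1) / 2

def H(p):
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for sigma in (0.01, 0.03, 0.05, 0.10):
    # P(flip): Gaussian jitter pushes a delay across the threshold.
    p = 1 - NormalDist(0.0, sigma).cdf(thresh - D0)
    print(f"jitter sigma={sigma:.2f}s  p_err={p:.3f}  "
          f"capacity={1 - H(p):.3f} bit/packet")
```
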
  • Wu, Z.; Xu, Z.; Wang, H., "Whispers in the Hyper-Space: High-Bandwidth and Reliable Covert Channel Attacks Inside the Cloud," Networking, IEEE/ACM Transactions on, vol. PP, no.99, pp.1, 1, February 2014. doi: 10.1109/TNET.2014.2304439 Privacy and information security in general are major concerns that impede enterprise adaptation of shared or public cloud computing. Specifically, the concern of virtual machine (VM) physical co-residency stems from the threat that hostile tenants can leverage various forms of side channels (such as cache covert channels) to exfiltrate sensitive information of victims on the same physical system. However, on virtualized x86 systems, covert channel attacks have not yet proven to be practical, and thus the threat is widely considered a "potential risk." In this paper, we present a novel covert channel attack that is capable of high-bandwidth and reliable data transmission in the cloud. We first study the application of existing cache channel techniques in a virtualized environment and uncover their major insufficiency and difficulties. We then overcome these obstacles by: 1) redesigning a pure timing-based data transmission scheme, and 2) exploiting the memory bus as a high-bandwidth covert channel medium. We further design and implement a robust communication protocol and demonstrate realistic covert channel attacks on various virtualized x86 systems. Our experimental results show that covert channels do pose serious threats to information security in the cloud. Finally, we discuss our insights on covert channel mitigation in virtualized environments.
    Keywords: Cloud; covert channel; network security (ID#:14-2774)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6744676&isnumber=4359146
  • Kadhe, S.; Jaggi, S.; Bakshi, M.; Sprintson, A, "Reliable, Deniable, And Hidable Communication Over Multipath Networks," Information Theory (ISIT), 2014 IEEE International Symposium on , vol., no., pp.611,615, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6874905 We consider the scenario wherein a transmitter Alice wants to (potentially) communicate to the intended receiver Bob over a multipath network, i.e., a network consisting of multiple parallel links, in the presence of a passive eavesdropper Willie, who observes an unknown subset of links. A primary goal of our communication protocol is to make the communication "deniable", i.e., Willie should not be able to reliably estimate whether or not Alice is transmitting any covert information to Bob. Moreover, if Alice is indeed actively communicating, her covert messages should be information-theoretically "hidable" in the sense that Willie's observations should not leak any information about Alice's (potential) message to Bob - our notion of hidability is slightly stronger than the notion of information-theoretic strong secrecy well-studied in the literature. We demonstrate that deniability does not imply either hidability or (weak or strong) information-theoretic secrecy; nor does information-theoretic secrecy imply deniability. We present matching inner and outer bounds on the capacity for deniable and hidable communication over multipath networks.
    Keywords: encoding; protocols; radio receivers; radio transmitters; telecommunication links; telecommunication network reliability; telecommunication security; communication hidability; communication protocol; information theoretic secrecy; multipath networks; multiple parallel links; passive eavesdropper; telecommunication network reliability; Artificial neural networks; Cryptography; Encoding; Reliability theory (ID#:14-2775)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874905&isnumber=6874773
  • Hong Zhao, "Covert channels in 802.11e wireless networks," Wireless Telecommunications Symposium (WTS), 2014, vol., no., pp.1,5, 9-11 April 2014. doi: 10.1109/WTS.2014.6834991 WLANs (Wireless Local Area Networks) have been widely used in business, school and public areas, and the newly deployed 802.11e protocol provides QoS in WLANs. However, there are some vulnerabilities in it. This paper analyzes the 802.11e protocol for QoS support in WLANs and proposes two new covert channels. The proposed covert channels provide a signalling method for reliable communication. They have no impact on the normal traffic pattern and thus cannot be detected by monitoring traffic patterns.
    Keywords: protocols; quality of service; wireless LAN; 802.11e wireless networks; QoS support; WLAN; Wireless Local Area Networks; covert channels; signalling method; traffic pattern monitoring; Communication system security; IEEE 802.11e Standard; Protocols; Quality of service; Wireless LAN; Wireless communication; 802.11e WLAN; Network Steganography; information hiding (ID#:14-2776)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834991&isnumber=6834983
  • Bash, B.A; Goeckel, D.; Towsley, D., "LPD Communication When The Warden Does Not Know When," Information Theory (ISIT), 2014 IEEE International Symposium on, pp.606, 610, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6874904 Unlike standard security methods (e.g. encryption), low probability of detection (LPD) communication does not merely protect the information contained in a transmission from unauthorized access, but prevents the detection of a transmission in the first place. In this work we study the impact of secretly pre-arranging the time of communication. We prove that if Alice has AWGN channels to Bob and the warden, and if she and Bob can choose a single n symbol period slot out of T(n) such slots, keeping the selection secret from the warden (and, thus, forcing him to monitor all T(n) slots), then Alice can reliably transmit O(min{√(n log T(n)), n}) bits to Bob while keeping the warden's detector ineffective. The result indicates that only an additional log T(n) secret bits need to be exchanged between Alice and Bob prior to communication to produce a multiplicative gain of √(log T(n)) in the amount of transmitted covert information.
    Keywords: AWGN channels; computational complexity; probability; telecommunication network reliability; telecommunication security; AWGN channels; LPD communication; low probability of detection; symbol period slot; transmission detection protection; unauthorized access; AWGN channels; Detectors; Random variables; Reliability; Vectors; Yttrium (ID#:14-2777)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874904&isnumber=6874773
  • Naseer, N.; Keum-Shik Hong; Bhutta, M.R.; Khan, M.J., "Improving Classification Accuracy Of Covert Yes/No Response Decoding Using Support Vector Machines: An fNIRS Study," Robotics and Emerging Allied Technologies in Engineering (iCREATE), 2014 International Conference on, vol., no., pp.6,9, 22-24 April 2014. doi: 10.1109/iCREATE.2014.6828329 One of the aims of brain-computer interface (BCI) research is to restore the means of communication for people suffering severe motor impairment, anarthria, or persisting in a vegetative state. Yes/no decoding with the help of an imaging technology such as functional near-infrared spectroscopy (fNIRS) can make this goal a reality. fNIRS is a relatively new non-invasive optical imaging modality offering the advantages of low cost, safety, portability and ease of use. Recently, an fNIRS-based online covert yes/no decision decoding framework was presented [Naseer and Hong (2013), online binary decision decoding using functional near-infrared spectroscopy for development of a brain-computer interface]. Herein we propose a method to improve support vector machine classification accuracies for decoding covert yes/no responses by using signal slope values of oxygenated and deoxygenated hemoglobin as features, calculated for a confined temporal window within the total task period.
    Keywords: brain-computer interfaces; infrared spectra; medical signal processing; signal classification; support vector machines; BCI; brain-computer interface; classification accuracy; covert yes-no response decoding framework; deoxygenated hemoglobin; fNIRS; functional near-infrared spectroscopy; noninvasive optical imaging modality; oxygenated hemoglobin; signal slope values; support vector machines; temporal window; Accuracy; Brain-computer interfaces; Decoding; Detectors; Optical imaging; Spectroscopy; Support vector machines; Binary decision decoding; Brain-computer interface; Functional near-infrared spectroscopy; Support vector machines; Yes/no decoding (ID#:14-2778)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828329&isnumber=6828323
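
The feature-and-classifier pipeline the abstract describes, signal slopes of oxygenated and deoxygenated hemoglobin within a confined temporal window fed to an SVM, can be sketched as follows. The synthetic traces, window, and sampling rate are placeholders for real fNIRS recordings.

```python
# A minimal sketch of the pipeline on synthetic data: slope features
# of HbO/HbR over a confined temporal window, classified with an SVM.
# Real fNIRS recordings would replace the synthetic traces.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, fs, win = 80, 10, (2, 7)           # 10 Hz, 2-7 s window
t = np.arange(0, 10, 1 / fs)

def synth_trial(is_yes):
    """Toy HbO/HbR traces: 'yes' trials ramp more steeply."""
    ramp = (0.8 if is_yes else 0.2) * t / t[-1]
    hbo = ramp + 0.1 * rng.standard_normal(t.size)
    hbr = -0.5 * ramp + 0.1 * rng.standard_normal(t.size)
    return hbo, hbr

def slope(sig):
    i, j = int(win[0] * fs), int(win[1] * fs)
    return np.polyfit(t[i:j], sig[i:j], 1)[0]   # linear-fit slope

y = rng.integers(0, 2, n_trials)
X = np.array([[slope(h) for h in synth_trial(label)] for label in y])

clf = SVC(kernel="linear")
print("accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```
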
  • Beato, F.; De Cristofaro, E.; Rasmussen, K.B., "Undetectable Communication: The Online Social Networks Case," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference, pp.19,26, 23-24 July 2014. doi: 10.1109/PST.2014.6890919 Online Social Networks (OSNs) provide users with an easy way to share content, communicate, and update others about their activities. They also play an increasingly fundamental role in coordinating and amplifying grassroots movements, as demonstrated by recent uprisings in, e.g., Egypt, Tunisia, and Turkey. At the same time, OSNs have become primary targets of tracking, profiling, as well as censorship and surveillance. In this paper, we explore the notion of undetectable communication in OSNs and introduce formal definitions, alongside system and adversarial models that complement better understood notions of anonymity and confidentiality. We present a novel scheme for secure covert information sharing that, to the best of our knowledge, is the first to achieve undetectable communication in OSNs. We demonstrate, via an open-source prototype, that additional costs are tolerably low.
    Keywords: data privacy; security of data; social networking (online); OSNs; anonymity notion; confidentiality notion; covert information sharing security; online social networks; open-source prototype; undetectable communication; Entropy; Facebook; Indexes; Internet; Security; Servers (ID#:14-2779)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890919&isnumber=6890911
  • Lakhani, H.; Zaffar, F., "Covert Channels in Online Rogue-Like Games," Communications (ICC), 2014 IEEE International Conference on, pp.761, 767, 10-14 June 2014. doi: 10.1109/ICC.2014.6883411 Covert channels allow two parties to exchange secret data in the presence of adversaries without disclosing the fact that there is any secret data in their communications. We propose and implement EEDGE, an improved method for steganography in mazes that builds upon the work done by Lee et al. and has a significantly higher embedding capacity. We apply EEDGE to the setting of online rogue-like games, which have randomly generated mazes as the levels for players, and show that this can be used to successfully create an efficient, error-free, high bit-rate covert channel.
    Keywords: computer games; electronic data interchange; steganography; EEDGE; covert channels; error free channel; high bit rate covert channel; online rogue like games; secret data exchange; steganography; Bit rate; Games; Image edge detection; Information systems; Lattices; Receivers; Security (ID#:14-2780)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883411&isnumber=6883277
  • Dainotti, A; King, A; Claffy, K.; Papale, F.; Pescape, A, "Analysis of a "/0" Stealth Scan from a Botnet," Networking, IEEE/ACM Transactions on, vol. PP, no. 99, pp.1, 1, Jan 2014. doi: 10.1109/TNET.2013.2297678 Botnets are the most common vehicle of cyber-criminal activity. They are used for spamming, phishing, denial-of-service attacks, brute-force cracking, stealing private information, and cyber warfare. Botnets carry out network scans for several reasons, including searching for vulnerable machines to infect and recruit into the botnet, probing networks for enumeration or penetration, etc. We present the measurement and analysis of a horizontal scan of the entire IPv4 address space conducted by the Sality botnet in February 2011. This 12-day scan originated from approximately 3 million distinct IP addresses and used a heavily coordinated and unusually covert scanning strategy to try to discover and compromise VoIP-related (SIP server) infrastructure. We observed this event through the UCSD Network Telescope, a /8 darknet continuously receiving large amounts of unsolicited traffic, and we correlate this traffic data with other public sources of data to validate our inferences. Sality is one of the largest botnets ever identified by researchers. Its behavior represents ominous advances in the evolution of modern malware: the use of more sophisticated stealth scanning strategies by millions of coordinated bots, targeting critical voice communications infrastructure. This paper offers a detailed dissection of the botnet's scanning behavior, including general methods to correlate, visualize, and extrapolate botnet behavior across the global Internet.
    Keywords: Animation; Geology; IP networks; Internet; Ports (Computers); Servers; Telescopes; Botnet; Internet background radiation; Internet telephony; Network Telescope; VoIP; communication system security; darknet; network probing; scanning (ID#:14-2781)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6717049&isnumber=4359146
  • Suzhi Bi; Ying Jun Zhang, "Using Covert Topological Information for Defense Against Malicious Attacks on DC State Estimation," Selected Areas in Communications, IEEE Journal on, vol.32, no.7, pp.1471, 1485, July 2014. doi: 10.1109/JSAC.2014.2332051 Accurate state estimation is of paramount importance to maintain the power system operating in a secure and efficient state. The recently identified coordinated data injection attacks to meter measurements can bypass the current security system and introduce errors to the state estimates. The conventional wisdom to mitigate such attacks is by securing meter measurements to evade malicious injections. In this paper, we provide a novel alternative to defend against false data injection attacks using covert power network topological information. By keeping the exact reactance of a set of transmission lines from attackers, no false data injection attack can be launched to compromise any set of state variables. We first investigate from the attackers' perspective the necessary condition to perform an injection attack. Based on the arguments, we characterize the optimal protection problem, which protects the state variables with minimum cost, as a well-studied Steiner tree problem in a graph. In addition, we also propose a mixed defending strategy that jointly considers the use of covert topological information and secure meter measurements when either method alone is costly or unable to achieve the protection objective. A mixed-integer linear programming formulation is introduced to obtain the optimal mixed defending strategy. To tackle the NP-hardness of the problem, a tree-pruning-based heuristic is further presented to produce an approximate solution in polynomial time. The advantageous performance of the proposed defending mechanisms is verified in IEEE standard power system test cases.
    Keywords: integer programming; linear programming; power system security; power system state estimation; power transmission faults; power transmission lines; power transmission protection; smart meters; smart power grids; trees (mathematics); DC state estimation; NP-hardness problem; Steiner tree problem; coordinated data injection attacks identification; covert power network topological information; current security system; false data injection attack; graph theory; malicious attacks; mixed-integer linear programming; necessary condition; optimal mixed defending strategy; optimal protection problem; polynomial time; power system state estimation; secure meter measurements; state variables; transmission lines; tree-pruning-based heuristic; Phase measurement; Power measurement; Power transmission lines; State estimation; Transmission line measurements; Voltage measurement; False-data injection attack; graph algorithms; power system state estimation; smart grid security (ID#:14-2782)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6840294&isnumber=6879523
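
The optimal-protection reduction mentioned in the abstract, choosing a minimum-cost connected set of lines whose reactances to keep covert, can be illustrated with the classic Steiner-tree approximation available in networkx. The toy bus topology, protection costs, and terminal set are assumptions; the paper's exact formulation and its MILP and tree-pruning machinery are not reproduced here.

```python
# A sketch of the optimal-protection reduction: pick a minimum-cost
# set of lines (edges) connecting the buses whose states need
# protection, via the Steiner-tree approximation in networkx.
import networkx as nx
from networkx.algorithms.approximation import steiner_tree

G = nx.Graph()
G.add_weighted_edges_from([           # (bus, bus, protection cost)
    (1, 2, 1.0), (2, 3, 1.0), (3, 4, 2.0),
    (1, 5, 3.0), (5, 4, 1.0), (2, 5, 2.5),
])
protect = [1, 3, 4]                   # buses whose states need protection

T = steiner_tree(G, protect, weight="weight")
print("keep these line reactances covert:", sorted(T.edges()))
```
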
  • Pak Hou Che; Bakshi, M.; Chung Chan; Jaggi, S., "Reliable, Deniable And Hidable Communication," Information Theory and Applications Workshop (ITA), 2014, pp.1, 10, 9-14 Feb. 2014. doi: 10.1109/ITA.2014.6804271 Alice wishes to potentially communicate covertly with Bob over a Binary Symmetric Channel while Willie the wiretapper listens in over a channel that is noisier than Bob's. We show that Alice can send her messages reliably to Bob while ensuring that even whether or not she is actively communicating is (a) deniable to Willie, and (b) optionally, her message is also hidable from Willie. We consider two different variants of the problem depending on Alice's "default" behavior, i.e., her transmission statistics when she has no covert message to send: 1) When Alice has no covert message, she stays "silent", i.e., her transmission is 0; 2) When she has no covert message, she transmits "innocently", i.e., her transmission is drawn uniformly from an innocent random codebook. We prove that the best rate at which Alice can communicate both deniably and hidably in model 1 is O(1/n). On the other hand, in model 2, Alice can communicate at a constant rate.
    Keywords: binary codes; channel coding; random codes; reliability; Alice default behavior; binary symmetric channel; random codebook; transmission statistics; wiretapper; Decoding; Encoding; Error probability; Measurement; Noise; Reliability; Throughput (ID#:14-2783)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6804271&isnumber=6804199

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Cryptanalysis

Cryptanalysis


Cryptanalysis is a core function of cybersecurity research, and 2014 has been a very productive year for work in this area so far. The research cited below looks at AES, biclique attacks, the lightweight Welch-Gong stream cipher, a number of smart card issues, and fault injection and side channels, among other things. These works appeared between January and October of 2014.

  • Heys, H., "Integral Cryptanalysis Of The BSPN Block Cipher," Communications (QBSC), 2014 27th Biennial Symposium on, pp.153, 158, 1-4 June 2014. doi: 10.1109/QBSC.2014.6841204 In this paper, we investigate the application of integral cryptanalysis to the Byte-oriented Substitution Permutation Network (BSPN) block cipher. The BSPN block cipher has been shown to be an efficient block cipher structure, particularly for environments using 8-bit microcontrollers. In our analysis, we are able to show that integral cryptanalysis has limited success when applied to BSPN. A first order attack, based on a deterministic integral, is only applicable to structures with 3 or fewer rounds, while higher order attacks and attacks using a probabilistic integral were found to be applicable only to structures with 4 or fewer rounds. Since a typical BSPN block cipher is recommended to have 8 or more rounds, it is expected that the BSPN structure is resistant to integral cryptanalysis.
    Keywords: cryptography; integral equations; microcontrollers; probability; BSPN block cipher; block cipher structure; byte-oriented substitution permutation network; deterministic integral; first order attack; higher order attacks; integral cryptanalysis; microcontrollers; probabilistic integral; word length 8 bit; Ciphers; Encryption; Microcontrollers; Probabilistic logic; Probability; Resistance; block ciphers; cryptanalysis; cryptography (ID#:14-2784)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841204&isnumber=6841165
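
The deterministic integral that drives a first-order attack can be demonstrated on a toy SPN (not BSPN itself): encrypt a lambda-set of 256 plaintexts taking all values in one byte through keyed, bijective S-box rounds, and every output byte position remains balanced, i.e., XOR-sums to zero. The S-box, key, and permutation below are illustrative.

```python
# A toy demonstration of the deterministic integral such attacks
# exploit: a lambda-set of 256 plaintexts, varying one byte over all
# values, stays balanced (XOR sum == 0) after a few bijective
# S-box rounds.  The S-box, key, and permutation are illustrative.
import random

random.seed(0)
SBOX = list(range(256))
random.shuffle(SBOX)                     # a random bijective 8-bit S-box
KEY = bytes(random.randrange(256) for _ in range(4))

def round_fn(block):
    """One SPN-style round: key XOR, S-box, byte rotation."""
    x = bytes(SBOX[b ^ k] for b, k in zip(block, KEY))
    return x[1:] + x[:1]                 # simple byte permutation

lam = [bytes([v, 0x00, 0x00, 0x00]) for v in range(256)]   # lambda-set
ct = [round_fn(round_fn(p)) for p in lam]                  # two rounds

sums = [0, 0, 0, 0]
for c in ct:
    sums = [s ^ b for s, b in zip(sums, c)]
print("XOR sums per byte:", sums)        # all zero -> balanced property
```
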
  • Dadhich, A; Gupta, A; Yadav, S., "Swarm Intelligence Based Linear Cryptanalysis Of Four-Round Data Encryption Standard Algorithm," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp.378,383, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781312 The proliferation of computers, internet and wireless communication capabilities into the physical world has led to ubiquitous availability of computing infrastructure. With the expanding number and type of internet-capable devices and the enlarged physical space of distributed and cloud computing, computer systems are evolving into complex and pervasive networks. Amidst the aforesaid rapid growth in technology, secure transmission of data is equally important. The amount of sensitive information deposited and transmitted over the internet is absolutely critical and needs principles that enforce legal and restricted use and interpretation of data. The data needs to be protected from eavesdroppers and potential attackers who undermine the security processes and perform actions in excess of their permissions. Cryptography algorithms form a central component of the security mechanisms used to safeguard network transmissions and data storage. As encrypted data security largely depends on the techniques applied to create, manage and distribute the keys, a cryptographic algorithm might be rendered useless due to poor management of the keys. This paper presents a novel computational intelligence based approach for known ciphertext-only cryptanalysis of the four-round Data Encryption Standard algorithm. In a ciphertext-only attack, the encryption algorithm used and the ciphertext to be decoded are known to the cryptanalyst; this is termed the most difficult attack encountered in cryptanalysis. The proposed approach uses Swarm Intelligence to deduce optimal keys according to their fitness values and identifies the best keys through a statistical, probability-based fitness function. The results suggest that the proposed approach is intelligent in finding the missing key bits of the Data Encryption Standard algorithm.
    Keywords: cloud computing; cryptography; probability; statistical analysis; swarm intelligence; Internet; ciphertext-only attack; ciphertext-only cryptanalysis; cloud computing; computational intelligence based approach; cryptography algorithms; data storage; distributed computing; four-round data encryption standard algorithm; network transmissions; secure data transmission; statistical probability based fitness function; swarm intelligence based linear cryptanalysis; Cryptography; MATLAB; NIST; Ciphertext; Cryptanalysis; Cryptography; Information Security; Language model; Particle Swarm Optimization; Plaintext; Swarm Intelligence (ID#:14-2785)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781312&isnumber=6781240
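
Swarm searches of this kind stand or fall on their fitness function. Below is a sketch of a simple statistical, frequency-based fitness that scores a candidate decryption by its deviation from English letter frequencies. The decrypt() mentioned in the closing comment is a hypothetical placeholder for an actual 4-round DES decryption, and the frequency table is a standard approximation, not the paper's exact function.

```python
# A sketch of the statistical fitness function that guides such
# swarm-based key searches: score a candidate decryption by how
# closely its letter frequencies match English.
ENGLISH_FREQ = {                          # expected relative frequencies
    'E': .127, 'T': .091, 'A': .082, 'O': .075, 'I': .070, 'N': .067,
    'S': .063, 'H': .061, 'R': .060, 'D': .043, 'L': .040, 'U': .028,
}

def fitness(candidate_plaintext):
    text = [c for c in candidate_plaintext.upper() if c.isalpha()]
    if not text:
        return 0.0
    score = 0.0
    for letter, expected in ENGLISH_FREQ.items():
        observed = text.count(letter) / len(text)
        score -= abs(observed - expected)  # smaller deviation = fitter
    return score

# In the attack, each particle's position is a candidate key; its
# fitness is fitness(decrypt(ciphertext, key)), where decrypt() is a
# hypothetical 4-round DES decryption, and the swarm moves toward
# the fittest keys found so far.
print(fitness("attack at dawn on the eastern ridge"))
print(fitness("qzxjkq wvvzx qq zzz"))     # gibberish scores lower
```
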
  • Alghazzawi, D.M.; Hasan, S.H.; Trigui, M.S., "Advanced Encryption Standard - Cryptanalysis Research," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp.660,667, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828045 The Advanced Encryption Standard (AES) has been a focus of cryptanalysis since it was released in November 2001. The research gained further importance when AES was declared the Type-1 Suite-B encryption algorithm by the NSA in 2003 (CNSSP-15), which made it deemed suitable for encrypting both classified and unclassified security documents and systems. This paper discusses the cryptanalysis research being carried out on AES and the different techniques being used to establish the advantages of the algorithm for use in security systems. It concludes by trying to assess how long AES can be effectively used in national security applications.
    Keywords: algebraic codes; cryptography; standards; AES; Advanced Encryption Standard; NSA encryption algorithm; algebraic attack; cryptanalysis research; national security applications; security systems; Ciphers; Classification algorithms; Encryption; Equations; Timing; Cryptanalysis; Encryption; Network Security (ID#:14-2786)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828045&isnumber=6827395
  • Kumar, R.; Jovanovic, P.; Polian, I, "Precise Fault-Injections Using Voltage And Temperature Manipulation For Differential Cryptanalysis," On-Line Testing Symposium (IOLTS), 2014 IEEE 20th International, pp.43, 48, 7-9 July 2014. doi: 10.1109/IOLTS.2014.6873670 State-of-the-art fault-based cryptanalysis methods are capable of breaking most recent ciphers after only a few fault injections. However, they require temporal and spatial accuracies of fault injection that were believed to rule out low-cost injection techniques such as voltage, frequency or temperature manipulation. We investigate selection of supply-voltage and temperature values that are suitable for high-precision fault injection even up to a single bit. The object of our studies is an ASIC implementation of the recently presented block cipher PRINCE, for which a two-stage fault attack scheme has been suggested lately. This attack requires, on average, about four to five fault injections in well-defined locations. We show by electrical simulations that voltage-temperature points exist for which faults show up at locations required for a successful attack with a likelihood of around 0.1%. This implies that the complete attack can be mounted by approximately 4,000 to 5,000 fault injection attempts, which is clearly feasible.
    Keywords: application specific integrated circuits; cryptography; fault diagnosis; integrated circuit design; block cipher PRINCE; differential cryptanalysis; electrical simulations; fault-based cryptanalysis methods; high-precision fault injection; low-cost injection techniques; supply-voltage selection; temperature manipulation; temperature values; two-stage fault attack scheme; voltage manipulation; voltage-temperature points; Ciphers; Circuit faults; Clocks; Logic gates; Mathematical model; Temperature distribution (ID#:14-2787)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6873670&isnumber=6873658
  • Bhateja, A; Kumar, S., "Genetic Algorithm With Elitism For Cryptanalysis Of Vigenere Cipher," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp.373,377, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781311 In today's world, with the increasing usage of computer networks and the internet, the importance of network, computer and information security is obvious. One of the widely used approaches for information security is cryptography. Cryptanalysis is a way to break the cipher text without having the encryption key. This paper describes a method of deciphering encrypted messages of Vigenere cipher cryptosystems by a Genetic Algorithm using elitism with a novel fitness function. The roulette wheel method, two-point crossover and cross mutation are used for selection and for the generation of the new population. We conclude that the proposed algorithm can reduce the time complexity and gives better results for such optimization problems.
    Keywords: cryptography; genetic algorithms; Internet; Vigenere cipher; computer networks; computer security; cross mutation; cryptanalysis; cryptography; elitism; encryption key; fitness function; genetic algorithm; information security; network security; roulette wheel method; two point crossover; Ciphers; Genetic algorithms; Genetics; Lead; Size measurement; Vigenere cipher; chromosomes; cryptanalysis; elitism; fitness function; genes; genetic algorithm (ID#:14-2788)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781311&isnumber=6781240
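
A compact sketch of the approach described above follows: a genetic algorithm with elitism, roulette-wheel selection, two-point crossover, and mutation searching for a Vigenere key. The chi-squared letter-frequency fitness is an assumption standing in for the paper's novel fitness function, and convergence depends on ciphertext length and the GA parameters.

```python
# A sketch of GA-based Vigenere cryptanalysis: elitism, roulette-wheel
# selection, two-point crossover, mutation.  The chi-squared fitness
# is an assumed stand-in for the paper's novel fitness function.
import random, string

FREQ = [.082,.015,.028,.043,.127,.022,.020,.061,.070,.002,.008,.040,
        .024,.067,.075,.019,.001,.060,.063,.091,.028,.010,.024,.002,
        .020,.001]                        # English letter frequencies A-Z

def decrypt(ct, key):
    return "".join(chr((ord(c) - ord(key[i % len(key)])) % 26 + 65)
                   for i, c in enumerate(ct))

def fitness(key, ct):
    pt = decrypt(ct, key)
    n = len(pt)
    chi2 = sum((pt.count(chr(65 + i)) / n - FREQ[i]) ** 2 / FREQ[i]
               for i in range(26))
    return 1.0 / (1.0 + chi2)             # higher = more English-like

def roulette(pop, fits):
    r = random.uniform(0, sum(fits))
    acc = 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return ind
    return pop[-1]

def two_point_crossover(a, b):
    i, j = sorted(random.sample(range(len(a)), 2))
    return a[:i] + b[i:j] + a[j:]

def mutate(key, rate=0.1):
    return "".join(random.choice(string.ascii_uppercase)
                   if random.random() < rate else c for c in key)

def ga_attack(ct, key_len, pop_size=60, gens=200, elite=2):
    pop = ["".join(random.choices(string.ascii_uppercase, k=key_len))
           for _ in range(pop_size)]
    for _ in range(gens):
        fits = [fitness(k, ct) for k in pop]
        ranked = [k for _, k in sorted(zip(fits, pop), reverse=True)]
        nxt = ranked[:elite]              # elitism: keep the best as-is
        while len(nxt) < pop_size:
            child = two_point_crossover(roulette(pop, fits),
                                        roulette(pop, fits))
            nxt.append(mutate(child))
        pop = nxt
    return max(pop, key=lambda k: fitness(k, ct))

random.seed(7)
plain = "THEQUICKBROWNFOXJUMPSOVERTHELAZYDOG" * 4
key = "CIPHER"
ct = "".join(chr((ord(c) - 65 + ord(key[i % 6]) - 65) % 26 + 65)
             for i, c in enumerate(plain))
print("recovered key:", ga_attack(ct, len(key)))
```
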
  • Lin Ding; Chenhui Jin; Jie Guan; Qiuyan Wang, "Cryptanalysis of Lightweight WG-8 Stream Cipher," Information Forensics and Security, IEEE Transactions on, vol.9, no.4, pp.645,652, April 2014. doi: 10.1109/TIFS.2014.2307202 WG-8 is a new lightweight variant of the well-known Welch-Gong (WG) stream cipher family, and takes an 80-bit secret key and an 80-bit initial vector (IV) as inputs. So far no attack on the WG-8 stream cipher has been published except the attacks by the designers. This paper shows that there exist Key-IV pairs for WG-8 that can generate keystreams which are exact shifts of each other throughout the keystream generation. By exploiting this slide property, an effective key recovery attack on WG-8 in the related-key setting is proposed, which has a time complexity of 2^53.32 and requires 2^52 chosen IVs. The attack is minimal in the sense that it only requires one related key. Furthermore, we present an efficient key recovery attack on WG-8 in the multiple related-key setting. As confirmed by the experimental results, our attack recovers all 80 bits of WG-8 on a PC with a 2.5-GHz Intel Pentium 4 processor. This is the first time that a weakness is presented for WG-8, assuming that the attacker can obtain only a few dozen consecutive keystream bits for each IV. Finally, we give a new Key/IV loading proposal for WG-8, which takes an 80-bit secret key and a 64-bit IV as inputs. The new proposal keeps the basic structure of WG-8 and provides enough resistance against our related-key attacks.
    Keywords: computational complexity; cryptography; microprocessor chips; 80-bit initial vector; 80-bit secret key; Intel Pentium 4 processor; Welch-Gong stream cipher; frequency 2.5 GHz; key recovery attack; keystream generation; lightweight WG-8 stream cipher cryptanalysis; related key attack; slide property; time complexity; Ciphers; Clocks; Equations; Proposals; Time complexity; Cryptanalysis; WG-8; lightweight stream cipher; related key attack (ID#:14-2789)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6746224&isnumber=6755552
  • Madhusudhan, R.; Kumar, S.R., "Cryptanalysis of a Remote User Authentication Protocol Using Smart Cards," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.474,477, 7-11 April 2014. doi: 10.1109/SOSE.2014.84 Remote user authentication using smart cards is a method of verifying the legitimacy of remote users accessing the server through an insecure channel, using smart cards to increase the efficiency of the system. During the last couple of years many protocols to authenticate remote users using smart cards have been proposed, but unfortunately most of them have proved to be insecure against various attacks. Recently, Yung-Cheng Lee improved Shin et al.'s protocol and claimed that the improved protocol is more secure. In this article, we show that Yung-Cheng Lee's protocol too has defects: it does not provide user anonymity, and it is vulnerable to denial-of-service attacks, session key reveal, user impersonation attacks, server impersonation attacks and insider attacks. Further, it is not efficient in the password change phase, since it requires communication with the server and uses a verification table.
    Keywords: computer network security; cryptographic protocols; message authentication; smart cards; Yung-Cheng-Lee's protocol; cryptanalysis; denial-of-service attack; insecure channel; insider attacks; legitimacy verification; password change phase; remote user authentication protocol; server impersonation attack; session key; smart cards; user impersonation attack; verification table; Authentication; Bismuth; Cryptography; Protocols; Servers; Smart cards; authentication; smart card; cryptanalysis; dynamic id (ID#:14-2790)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830951&isnumber=6825948
  • Phuong Ha Nguyen; Sahoo, D.P.; Mukhopadhyay, D.; Chakraborty, R.S., "Cryptanalysis of Composite PUFs (Extended abstract-invited talk)," VLSI Design and Test, 18th International Symposium on, pp.1,2, 16-18 July 2014. doi: 10.1109/ISVDAT.2014.6881035 In recent years, Physically Unclonable Functions (PUFs) have become an important cryptographic primitive and are used in secure systems to resist physical attacks. Since PUFs have many useful properties, such as memory-leakage resilience, unclonability, and tamper resistance, they have drawn great interest in academia as well as industry. As extremely useful hardware security primitives, PUFs are used in various proposed applications such as device authentication and identification, random number generation, and intellectual property protection. One important requirement for PUFs is small hardware overhead, so that they can be utilized in lightweight applications such as RFID. To achieve this goal, Composite PUFs, built from many small PUF primitives, were developed and introduced at RECONFIG 2013 and HOST 2014. In this talk, we show that the Composite PUFs introduced at RECONFIG 2013 are not secure by presenting their cryptanalysis.
    Keywords: cryptography; data protection; message authentication; random number generation; composite PUFs cryptanalysis; cryptographic primitive; device authentication; intellectual property protection; physically unclonable functions; random number generation; Authentication; Computational modeling; Hardware; Industries; Random number generation; PUF; Physically unclonable function; cryptanalysis (ID#:14-2791)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881035&isnumber=6881034
  • Huixian Li; Liaojun Pang, "Cryptanalysis of Wang et al.'s Improved Anonymous Multi-Receiver Identity-Based Encryption Scheme," Information Security, IET , vol.8, no.1, pp.8,11, Jan. 2014. doi: 10.1049/iet-ifs.2012.0354 Fan et al. proposed an anonymous multi-receiver identity-based encryption scheme in 2010, and showed that the identity of any legal receiver can be kept anonymous to anyone else. In 2012, Wang et al. pointed out that Fan et al.'s scheme cannot achieve the anonymity and that every legal receiver can determine whether the other is one of the legal receivers. At the same time, they proposed an improved scheme based on Fan et al.'s scheme to solve this anonymity problem. Unfortunately, the authors find that Wang et al.'s improved scheme still suffers from the same anonymity problem. Any legal receiver of Wang et al.'s improved scheme can judge whether anyone else is a legal receiver or not. In this study, the authors shall give the detailed anonymity analysis of Wang et al.'s improved scheme.
    Keywords: broadcasting; cryptography; receivers; telecommunication security; Wang et al improved scheme; anonymity problem; anonymous multireceiver identity-based encryption scheme; cryptanalysis; legal receiver (ID#:14-2792)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6687152&isnumber=6687150
  • Sarvabhatla, Mrudula; Giri, M.; Vorugunti, Chandra Sekhar, "Cryptanalysis of "a Biometric-Based User Authentication Scheme For Heterogeneous Wireless Sensor Networks"," Contemporary Computing (IC3), 2014 Seventh International Conference on, pp.312,317, 7-9 Aug. 2014. doi: 10.1109/IC3.2014.6897192 The advancement of Internet of Things (IoT) technology and the rapid growth of WSN applications provide an opportunity to connect WSNs to the IoT, with the result that secure sensor data become accessible via the insecure Internet. The integration of WSNs and the IoT brings many security challenges and requires a strict user authentication mechanism. Quite a few isolated user verification or authentication schemes using passwords, biometrics, and smart cards have been proposed in the literature. In 2013, A.K. Das et al. designed a biometric-based remote user verification scheme using smart cards for heterogeneous wireless sensor networks and insisted that their scheme is secure against several known cryptographic attacks. Unfortunately, in this manuscript we show that their scheme fails to resist replay and user impersonation attacks, fails to accomplish mutual authentication, and fails to provide data privacy.
    Keywords: Authentication; Biometrics (access control); Elliptic curve cryptography; Smart cards; Wireless sensor networks; Biometric; Cryptanalysis; Smart Card; User Authentication; Wireless Sensor Networks (ID#:14-2793)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6897192&isnumber=6897132
  • Aboud, S.J.; Al-fayoumi, M., "Cryptanalysis Of Password Authentication System," Computer Science and Information Technology (CSIT), 2014 6th International Conference on, pp.14,17, 26-27 March 2014. doi: 10.1109/CSIT.2014.6805972 Password authentication systems have been proliferating in recent years, and authors have concentrated on introducing more of them. In 2011, Lee et al. presented an enhanced system to resolve the vulnerabilities of a selected system. However, we notice that Lee et al.'s system is still vulnerable to a server attack and a stolen smart card attack, and that its password change protocol is neither convenient for users nor efficient. No useful data can be gained from the values kept in smart cards, so a stolen smart card attack can be blocked. To prevent the server attack, we suggest transferring the user authentication operation from the servers to a registration centre, which can guarantee that every server has its own private key.
    Keywords: cryptography; message authentication; smart cards; cryptanalysis; password authentication system; password change protocol; private key; registration centre; server attack; stolen smart card attack; user authentication operation; Authentication; Computer hacking; Cryptography; Protocols; Servers; Smart cards (ID#:14-2794)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805972&isnumber=6805962
  • Ahmadi, S.; Ahmadian, Z.; Mohajeri, J.; Aref, M.R., "Low-Data Complexity Biclique Cryptanalysis of Block Ciphers With Application to Piccolo and HIGHT," Information Forensics and Security, IEEE Transactions on, vol.9, no.10, pp.1641,1652, Oct. 2014. doi: 10.1109/TIFS.2014.2344445 In this paper, we present a framework for biclique cryptanalysis of block ciphers which requires an extremely low amount of data. To that end, we employ a new representation of the biclique attack based on a new concept of cutset that describes our attack more clearly. Then, an algorithm for choosing two differential characteristics is presented to simultaneously minimize the data complexity and control the computational complexity. We then characterize those block ciphers that are vulnerable to this technique and, among them, apply this attack on the lightweight block ciphers Piccolo-80, Piccolo-128, and HIGHT. The data complexity of these attacks is only 16 plaintext-ciphertext pairs, which is considerably less than the existing cryptanalytic results. In all the attacks, the computational complexity remains the same as in the previous ones or is even slightly improved.
    Keywords: Ciphers; Computational complexity; Encryption; Optimization; Schedules; Biclique cryptanalysis; attack complexity; lightweight block ciphers (ID#:14-2795)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868260&isnumber=6891522
  • Mala, H., "Biclique-based Cryptanalysis Of The Block Cipher SQUARE," Information Security, IET, vol.8, no.3, pp.207, 212, May 2014. doi: 10.1049/iet-ifs.2011.0332 SQUARE, an eight-round substitution-permutation block cipher, is considered a predecessor of the Advanced Encryption Standard (AES). Recently, the concept of biclique-based key recovery of block ciphers was introduced and applied to full-round versions of three variants of AES. In this paper, this technique is applied to analyse the block cipher SQUARE. First, a biclique for three rounds of SQUARE using independent related-key differentials has been found. Then, an attack on this cipher is presented, with a data complexity of about 2^48 chosen plaintexts and a time complexity of about 2^125.7 encryptions. The attack is the first successful attack on full-round SQUARE in the single-key scenario.
    Keywords: computational complexity; cryptography; AES; advanced encryption standard; biclique-based cryptanalysis; biclique-based key recovery; block cipher SQUARE; block ciphers; data complexity; eight-round substitution-permutation block cipher; independent related-key differentials; time complexity (ID#:14-2796)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786901&isnumber=6786849
  • Kramer, J.; Kasper, M.; Seifert, J.-P., "The Role Of Photons In Cryptanalysis," Design Automation Conference (ASP-DAC), 2014 19th Asia and South Pacific, pp.780, 787, 20-23 Jan. 2014. doi: 10.1109/ASPDAC.2014.6742985 Photons can be exploited to reveal secrets of security ICs like smartcards, secure microcontrollers, and cryptographic coprocessors. One such secret is the secret key of cryptographic algorithms. This work gives an overview of current research on revealing these secret keys by exploiting the photonic side channel. Different analysis methods are presented. It is shown that the analysis of photonic emissions also helps to gain knowledge about the attacked device and thus poses a threat to modern security ICs. The presented results illustrate the differences between the photonic and other side channels, which do not provide fine-grained spatial information. It is shown that the photonic side channel has to be addressed by software engineers and during chip design.
    Keywords: photons; private key cryptography; cryptanalysis; integrated circuit; photonic emissions; photonic side channel; photons; secret keys; security IC; Algorithm design and analysis; Cryptography; Detectors; Integrated circuits; Photonics; Random access memory (ID#:14-2797)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6742985&isnumber=6742831
  • Xu, J.; Hu, L.; Sun, S., "Cryptanalysis of Two Cryptosystems Based On Multiple Intractability Assumptions," Communications, IET, vol.8, no.14, pp.2433,2437, Sept. 25 2014. doi: 10.1049/iet-com.2013.1101 Two public key cryptosystems based on two intractable number-theoretic problems, integer factorisation and simultaneous Diophantine approximation, were proposed in 2005 and 2009, respectively. In this study, the authors break both cryptosystems at the recommended minimum parameters by solving the corresponding modular linear equations with small unknowns. For the first scheme, the public modulus is factorised and the secret key is recovered with the Gauss algorithm. By using the LLL basis reduction algorithm for a seven-dimensional lattice, the public modulus in the second scheme is also factorised and the plaintext is recovered from a ciphertext. The authors' attacks are efficient and are verified by experiments that complete within 5 s.
    Keywords: (not provided) (ID#:14-2798)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900024&isnumber=6900021
  • Kuo, Po-Chun; Cheng, Chen-Mou, "Lattice-based Cryptanalysis -- How To Estimate The Security Parameter Of Lattice-Based Cryptosystem," Consumer Electronics - Taiwan (ICCE-TW), 2014 IEEE International Conference on, pp.53,54, 26-28 May 2014. doi: 10.1109/ICCE-TW.2014.6904097 The usual cryptosystem behind debit cards is the RSA cryptosystem, which would be broken immediately by a quantum computer. Post-quantum cryptography has therefore arisen, aiming to develop cryptosystems that resist quantum attacks. Lattice-based cryptography is one branch of post-quantum cryptography and is used to construct various cryptosystems. The central problem behind lattice-based cryptosystems is the Shortest Vector Problem (SVP): finding the shortest vector in a given lattice. Building on previous results, we re-design the implementation method to improve performance on the GPU. Moreover, we implement and compare enumeration and sieve algorithms for solving SVP on the GPU. This allows the security parameter of a lattice-based cryptosystem to be estimated in a reasonable way.
    Keywords: Algorithm design and analysis; Approximation algorithms; Cryptography; Graphics processing units; Lattices; Vectors (ID#:14-2799)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6904097&isnumber=6903991
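A common first-order tool when sizing such security parameters is the Gaussian heuristic, which predicts the shortest-vector length of a random lattice from its rank and determinant; enumeration and sieving runtimes are then judged against that prediction. The Python sketch below computes only this standard estimate; it is our illustration, not the paper's GPU implementation, and the example lattice parameters are made up.

    import math

    def gaussian_heuristic_log2(n, log2_det):
        """log2 of the expected shortest-vector length in a rank-n lattice:
        lambda_1 ~ sqrt(n / (2*pi*e)) * det(L)^(1/n)."""
        return 0.5 * math.log2(n / (2 * math.pi * math.e)) + log2_det / n

    # e.g. a rank-256 q-ary lattice with determinant q^128, q = 2053
    print(gaussian_heuristic_log2(256, 128 * math.log2(2053)))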
  • Jun Xu; Lei Hu; Siwei Sun; Yonghong Xie, "Cryptanalysis of Countermeasures Against Multiple Transmission Attacks on NTRU," Communications, IET, vol.8, no.12, pp.2142, 2146, August 14 2014. doi: 10.1049/iet-com.2013.1092 The original Number Theory Research Unit (NTRU) public key cryptosystem is vulnerable to multiple transmission attacks, and the designers of NTRU presented two countermeasures to prevent such attacks. In this study, the authors show that the first countermeasure is still not secure: the plaintext can be revealed by a linearisation attack technique. Moreover, they demonstrate that the first countermeasure is not even secure against broadcast attacks, a class of attacks more general than multiple transmission attacks. For the second countermeasure, they show that one special case of its plaintext padding function is also insecure and that the original plaintext can be obtained by lattice methods.
    Keywords: public key cryptography; broadcast attacks; lattice methods; linearisation attack technique; multiple transmission attacks; original NTRU public key cryptosystem (ID#:14-2800)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6871476&isnumber=6871466
  • Li Wei; Tao Zhi; Gu Dawu; Sun Li; Qu Bo; Liu Zhiqiang; Liu Ya, "An Effective Differential Fault Analysis On The Serpent Cryptosystem In The Internet of Things," Communications, China, vol.11, no.6, pp.129,139, June 2014. doi: 10.1109/CC.2014.6879011 Due to its strong attacking ability, fast speed, simple implementation and other characteristics, differential fault analysis has become an important method to evaluate the security of cryptosystems in the Internet of Things. As one of the AES finalists, Serpent is a 128-bit Substitution-Permutation Network (SPN) cryptosystem. It has 32 rounds with a variable key length between 0 and 256 bits, which gives it the flexibility to provide security in the Internet of Things. On the basis of a byte-oriented model and differential analysis, we propose an effective differential fault attack on the Serpent cryptosystem. Mathematical analysis and simulation experiments show that the attack can recover the secret key by introducing 48 faulty ciphertexts. The results describe in detail how Serpent is vulnerable to differential fault analysis, which will benefit the analysis of other iterated cryptosystems of the same type.
    Keywords: Internet of Things; computer network security; mathematical analysis; private key cryptography; Internet of Things; SPN cryptosystem; Serpent cryptosystem; byte-oriented model; cryptosystem security; differential fault analysis; differential fault attack; faulty ciphertexts; mathematical analysis; secret key recovery; substitution-permutation network cryptosystem; word length 0 bit to 256 bit; Educational institutions; Encryption; Internet of Things; Schedules; cryptanalysis; differential fault analysis; internet of things; serpent (ID#:14-2801)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6879011&isnumber=6878993
  • Tauleigne, R.; Datcu, O.; Stanciu, M., "Thwarting Cryptanalytic Attacks Based On The Correlation Function," Communications (COMM), 2014 10th International Conference on, pp.1, 4, 29-31 May 2014. doi: 10.1109/ICComm.2014.6866745 Many studies analyze encrypted transmission using the synchronization of chaotic signals. This requires the exchange of an analog synchronization signal, which is almost always a state of the chaotic generator. Yet very few different chaotic structures are used for this purpose, and the uniqueness of their dynamics allows these structures to be identified by simple autocorrelation. In order to thwart cryptanalytic attacks based on identifying these dynamics, we propose a memoryless numerical method that reversibly destroys the shape of the transmitted signal. After analog-to-digital conversion of the synchronization signal, we apply permutations of the weights of its bits to each binary word. These permutations significantly change the shape of the transmitted signal, increasing its versatility and spreading its spectrum. If the message is simply added to the synchronization signal, the case that is easiest to decrypt, it undergoes the same transformation. It is therefore extremely difficult to detect the message in the transmitted signal by temporal analysis as well as by frequency analysis. The present work illustrates the proposed method for the chaotic Colpitts oscillator; nevertheless, the algorithm does not depend on the chosen chaotic generator. Finally, simply increasing the size of the permutation matrix increases the complexity of the change in the waveform factorially.
    Keywords: analogue-digital conversion; chaos generators; correlation methods; cryptography; oscillators; signal detection; synchronisation; analog synchronization signal; analog-to-digital conversion; autocorrelation function; chaotic Colpitts oscillator; chaotic generator; chaotic structure identification; encrypted signal transmission; frequency analysis; message detection; temporal analysis; thwarting cryptanalytic attacks; weight permutation matrix; Chaotic communication; Computer hacking; Receivers; Shape; Synchronization; Transmitters; chaotic system; correlation; cryptanalysis; encryption; synchronization (ID#:14-2802)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866745&isnumber=6866648
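The core transformation is easy to picture: after analog-to-digital conversion, the bit weights of each sample word are reordered by a secret permutation, and the receiver applies the inverse permutation before resynchronizing. The Python sketch below is a minimal illustration of that idea under assumed parameters (8-bit words, an arbitrary example permutation); it is not the authors' implementation.

    def permute_bits(word, perm, width=8):
        """Move bit i of `word` to position perm[i]."""
        out = 0
        for i in range(width):
            if (word >> i) & 1:
                out |= 1 << perm[i]
        return out

    def invert_perm(perm):
        inv = [0] * len(perm)
        for i, p in enumerate(perm):
            inv[p] = i
        return inv

    perm = [3, 7, 0, 5, 1, 6, 2, 4]        # secret permutation of bit weights
    sample = 0b10110010                    # one digitized sample word
    scrambled = permute_bits(sample, perm)
    assert permute_bits(scrambled, invert_perm(perm)) == sample  # reversible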
  • Ali, A, "Some Words On Linearisation Attacks On FCSR-Based Stream Ciphers," Applied Sciences and Technology (IBCAST), 2014 11th International Bhurban Conference on, pp.195, 202, 14-18 Jan. 2014. doi: 10.1109/IBCAST.2014.6778145 Linearisation attacks are effective against those stream ciphers whose analysis theory depends on the properties of 2-adic numbers. This paper discusses these attacks in the context of Feedback with Carry Shift Register (FCSR) based stream ciphers. In this context, linearisation attacks build upon the theory of linearisation intervals of the FCSR state update function. The paper presents detailed theoretical results on FCSRs, describing various operational aspects of the FCSR state update function in relation to the linearisation intervals. Linearisation attacks combine these theoretical results with the concepts of well-known cryptanalytic techniques that depend upon the structure of the specific cipher being analysed, such as linear cryptanalysis, correlation attacks, guess-and-determine attacks, and algebraic attacks. In the context of FCSR-based stream ciphers, the paper describes three variants of linearisation attacks, named "Conventional Linearisation Attacks", "Fast Linearisation Attacks" and "Improved Linearisation Attacks". These variants provide trade-offs between data, time and memory complexities with respect to each other. Moreover, the paper presents a detailed comparison of linearisation attacks with other well-known techniques of cryptanalysis.
    Keywords: algebra; cryptography; shift registers; FCSR state update function; FCSR-based stream ciphers; Feedback with Carry Shift Register; algebraic attacks; conventional linearisation attacks; correlation attacks; fast linearisation attacks; guess-and-determine attacks; improved linearisation attacks; linear cryptanalysis; linearisation interval theory; trade-offs; Adders; Ciphers; Equations; Hamming weight; Mathematical model; Registers; CLAs; FLAs; ILAs; New results; linearisation attacks; tradeoffs (ID#:14-2803)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778145&isnumber=6778084
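Unlike an LFSR, an FCSR propagates an integer carry through its feedback sum, and it is this carry behaviour that the linearisation intervals above describe. The following minimal Fibonacci-style FCSR model in Python (after the Klapper-Goresky construction; the taps, initial state and output convention are illustrative assumptions) makes the state update concrete.

    def fcsr_keystream(state, taps, carry, n):
        """state: bit list, newest cell first; taps: 0/1 feedback mask."""
        out = []
        for _ in range(n):
            sigma = sum(s & t for s, t in zip(state, taps)) + carry
            bit = sigma & 1                # feedback bit
            carry = sigma >> 1             # integer carry kept for next step
            out.append(state.pop())        # emit the oldest cell
            state.insert(0, bit)           # shift the feedback bit in
        return out

    print(fcsr_keystream([1, 0, 1, 1], taps=[1, 1, 0, 1], carry=0, n=8))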
  • Khan, AK.; Mahanta, H.J., "Side Channel Attacks And Their Mitigation Techniques," Automation, Control, Energy and Systems (ACES), 2014 First International Conference on, pp.1,4, 1-2 Feb. 2014. doi: 10.1109/ACES.2014.6807983 Side channel cryptanalysis is one of the most active fields of research in security. It has shown that cryptanalysis is no longer confined to plaintext or ciphertext: a side channel attack uses the physical characteristics of the cryptographic device to determine the cryptographic algorithm used and to recover the secret key. It is among the most efficient attack techniques and has successfully broken implementations of nearly all of today's major cryptographic algorithms. In this paper we present a review of the various side channel attacks possible, together with the techniques proposed to mitigate such attacks.
    Keywords: cryptography; cryptographic device; volatile field; mitigation technique; security prospect; side channel attack; side channel cryptanalysis; Ciphers; Elliptic curve cryptography; Encryption; Hardware; Timing; AES; DES; DPA; Power Analysis; SPA; cryptographic device (ID#:14-2804)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6807983&isnumber=6807973
  • Rudra, M.R.; Daniel, N.A; Nagoorkar, V.; Hoe, D.H.K., "Designing Stealthy Trojans With Sequential Logic: A Stream Cipher Case Study," Design Automation Conference (DAC), 2014 51st ACM/EDAC/IEEE, pp.1,4, 1-5 June 2014. doi: 10.1145/2593069.2596677 This paper describes how a stealthy Trojan circuit can be inserted into a stream cipher module. The stream cipher utilizes several shift register-like structures to implement the keystream generator and to process the encrypted text. We demonstrate how an effective trigger can be built with the addition of just a few logic gates inserted between the shift registers and one additional flip-flop. By distributing the inserted Trojan logic both temporally and over the logic design space, the malicious circuit is hard to detect by both conventional and more recent static analysis methods. The payload is designed to weaken the cipher strength, making it more susceptible to cryptanalysis by an adversary.
    Keywords: cryptography; flip-flops; invasive software; logic design; sequential circuits; shift registers; cipher strength; cryptanalysis; encrypted text; flip-flop; keystream generator; logic design space; logic gates; malicious circuit; sequential logic; shift register-like structures; static analysis methods; stealthy trojan circuit; stream cipher module; trojan logic; Ciphers; Encryption; Hardware; Logic gates; Shift registers; Trojan horses; hardware trojan; sequential-based Trojan; stream cipher (ID#:14-2805)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881499&isnumber=6881325
  • Chouhan, D.S.; Mahajan, R.P., "An Architectural Framework For Encryption & Generation Of Digital Signature Using DNA Cryptography," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp.743,748, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828061 With most modern encryption algorithms fully or partially broken, the world of information security is looking in new directions to protect the data it transmits. DNA computing has been identified as a possible technology that may bring new hope for hybrid and unbreakable algorithms in cryptography. Several DNA computing algorithms have been proposed for cryptography, cryptanalysis and steganography problems, and they have proven to be very powerful in these areas. This paper gives an architectural framework for encryption and generation of digital signatures using DNA cryptography. To analyze performance, the original plaintext size, the key size, and the encryption and decryption times are examined; experiments on plaintexts with different contents are also performed to test the robustness of the program.
    Keywords: biocomputing; digital signatures; DNA computing; DNA cryptography; architectural framework; cryptanalysis; decryption time; digital signature encryption; digital signature generation; encryption algorithms; encryption time; information security; key size; plaintext size; steganography; Ciphers; DNA; DNA computing; Digital signatures; Encoding; Encryption; DNA; DNA computing; DNA cryptography; DNA digital coding (ID#:14-2806)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828061&isnumber=6827395
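Most DNA cryptography schemes begin with DNA digital coding, which maps every two bits of data onto one of the four nucleotide bases. The Python sketch below shows one common convention (00->A, 01->C, 10->G, 11->T); the mapping choice and helper names are our assumptions, not details from the paper above.

    BASES = "ACGT"                          # 2 bits per base: 00, 01, 10, 11

    def bytes_to_dna(data):
        return "".join(BASES[(b >> s) & 0b11] for b in data for s in (6, 4, 2, 0))

    def dna_to_bytes(seq):
        vals = [BASES.index(c) for c in seq]
        return bytes((vals[i] << 6) | (vals[i + 1] << 4) |
                     (vals[i + 2] << 2) | vals[i + 3]
                     for i in range(0, len(vals), 4))

    assert dna_to_bytes(bytes_to_dna(b"key")) == b"key"
    print(bytes_to_dna(b"key"))             # -> 'CGGTCGCCCTGC'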
  • Te-Yu Chen; Chung-Huei Ling; Min-Shiang Hwang, "Weaknesses of the Yoon-Kim-Yoo Remote User Authentication Scheme Using Smart Cards," Electronics, Computer and Applications, 2014 IEEE Workshop on, pp.771,774, 8-9 May 2014. doi: 10.1109/IWECA.2014.6845736 A user authentication scheme is a mechanism employed by a server to authenticate the legality of a user before he or she is allowed to access the resources or services provided by the server. Given the Internet's openness and lack of built-in security, user authentication schemes are among the most important security primitives in Internet activities, and many researchers have devoted themselves to studying this issue. Many authentication schemes have been proposed to date; however, most have both advantages and disadvantages. Recently, Yoon, Kim and Yoo proposed a remote user authentication scheme that improves on Liaw et al.'s scheme. Unfortunately, we find their scheme is not secure enough. In this paper, we present some flaws in Yoon-Kim-Yoo's scheme. This cryptanalysis contributes important heuristics for security concerns when researchers design remote user authentication schemes.
    Keywords: Internet; cryptography; message authentication; smart cards; Internet activities; Yoon-Kim-Yoo remote user authentication scheme weakness; cryptanalysis; security primitives; smart cards; Cryptography; Entropy; Ice; Smart card; cryptography; guessing attack; user authentication (ID#:14-2807)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845736&isnumber=6845536
  • Ximeng Liu; Jianfeng Ma; Jinbo Xiong; Qi Li; Tao Zhang; Hui Zhu, "Threshold Attribute-Based Encryption With Attribute Hierarchy For Lattices In The Standard Model," Information Security, IET, vol.8, no.4, pp.217,223, July 2014. doi: 10.1049/iet-ifs.2013.0111 Attribute-based encryption (ABE) has been considered a promising cryptographic primitive for realising information security and flexible access control. However, most proposed schemes treat all attributes as being at the same level. Lattice-based cryptography has attracted much attention because it can resist quantum cryptanalysis. In this study, a lattice-based threshold hierarchical ABE (lattice-based t-HABE) scheme without random oracles is constructed and proved secure against selective attribute set and chosen plaintext attacks under the standard hardness assumption of the learning with errors problem. The HABE scheme can be considered a generalisation of the traditional ABE scheme, in which all attributes have the same level.
    Keywords: authorisation; cryptography; attribute characteristics; attribute hierarchy; cryptographic primitive; flexible access control; information security; lattice-based cryptography; lattice-based t-HABE scheme; lattice-based threshold hierarchical ABE scheme; plaintext attacks; quantum cryptanalysis; random oracles; selective attribute set; standard model; threshold attribute-based encryption (ID#:14-2808)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842406&isnumber=6842405
  • Shao-zhen Chen; Tian-min Xu, "Biclique Key Recovery for ARIA-256," Information Security, IET, vol.8, no.5, pp.259,264, Sept. 2014. doi: 10.1049/iet-ifs.2012.0353 In this study, combining biclique cryptanalysis with the meet-in-the-middle (MITM) attack, the authors present the first key recovery method for full ARIA-256 faster than brute force. The attack requires 2^80 chosen plaintexts, and the time complexity is about 2^255.2 full-round ARIA encryptions.
    Keywords: cryptography; MITM attack; biclique cryptanalysis; biclique key recovery; first key recovery method; full-round ARIA encryptions; meet-in-the-middle attack; time complexity (ID#:14-2809)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881822&isnumber=6881821
  • Zadeh, AA; Heys, H.M., "Simple Power Analysis Applied To Nonlinear Feedback Shift Registers," Information Security, IET, vol.8, no.3, pp.188, 198, May 2014. doi: 10.1049/iet-ifs.2012.0186 Linear feedback shift registers (LFSRs) and nonlinear feedback shift registers (NLFSRs) are major components of stream ciphers. It has been shown that, under certain idealised assumptions, LFSRs and LFSR-based stream ciphers are susceptible to cryptanalysis using simple power analysis (SPA). In this study, the authors show that SPA can be practically applied to a CMOS digital hardware circuit to determine the bit values of an NLFSR, and that SPA therefore has applicability to NLFSR-based stream ciphers. A new approach is used in which the cryptanalyst collects power consumption information from the system on both edges (triggering and non-triggering) of the clock in the digital hardware circuit. The method is applied using simulated power measurements from an 80-bit NLFSR targeted to a 180 nm CMOS implementation. To overcome inaccuracies associated with mapping power measurements to the cipher data, the authors offer novel analytical techniques that help the analysis find the bit values of the NLFSR. Using the obtained results, the authors analyse the complexity of the analysis on the NLFSR and show that SPA is able to successfully determine the NLFSR bits with modest computational complexity and a small number of power measurement samples.
    Keywords: CMOS logic circuits; computational complexity; cryptography; power aware computing; shift registers; CMOS digital hardware circuit; LFSR; LFSR-based stream ciphers; NLFSR-based stream ciphers; SPA; bit value determination; cipher data; clock edges; computational complexity; cryptanalysis; digital hardware circuit; linear feedback shift registers; NLFSR; nonlinear feedback shift registers; power consumption information; simple power analysis; simulated power measurements; size 180 nm; stream ciphers; word length 80 bit (ID#:14-2810)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786955&isnumber=6786849
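Such attacks usually rest on a simple leakage assumption: the power drawn on a clock edge grows with the number of register bits that toggle. The Python sketch below simulates that Hamming-distance model for a toy 8-bit nonlinear register; the width and update function are illustrative assumptions, not the 80-bit NLFSR studied in the paper.

    def hamming_distance(a, b):
        return bin(a ^ b).count("1")

    def power_trace(state, update, n):
        """Simulated per-cycle leakage: bits toggled at each clock edge."""
        trace = []
        for _ in range(n):
            nxt = update(state)
            trace.append(hamming_distance(state, nxt))
            state = nxt
        return trace

    # toy 8-bit nonlinear update: shift left, feed back a nonlinear bit
    toy_nlfsr = lambda s: ((s << 1) & 0xFF) | (((s >> 7) ^ ((s >> 2) & (s >> 4))) & 1)
    print(power_trace(0b10110101, toy_nlfsr, 10))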
  • Harish, P.D.; Roy, S., "Energy Oriented Vulnerability Analysis on Authentication Protocols for CPS," Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, pp.367,371, 26-28 May 2014. doi: 10.1109/DCOSS.2014.52 In this work we compute the energy consumed by modular exponentiation, a widely used and powerful tool in password authentication protocols for cyber physical systems. We observe that modular exponentiation is an expensive operation in terms of energy consumption, in addition to being known to be computationally intensive. We then analyze the security and energy consumption of an advanced smart card based password authentication protocol for cyber physical systems that uses modular exponentiation. We devise a generic cryptanalysis method against the protocol, in which the attacker exploits the energy- and computation-intensive nature of modular exponentiation to perform a denial of service (DoS) attack. We also show other similar protocols to be vulnerable to this attack, and suggest methods to prevent it.
    Keywords: authorisation; energy conservation; CPS; DoS attack; cyber physical systems; denial-of-service attack; energy consumption; energy oriented vulnerability analysis; modular exponentiation; smart card based password authentication protocol; Authentication; Energy consumption; Energy measurement; Protocols; Servers; Smart cards (ID#:14-2811)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846192&isnumber=6846129
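Modular exponentiation is costly because a b-bit exponent requires on the order of b modular squarings plus a multiplication for every set bit; that per-request cost is the asymmetry the DoS attack above exploits. The following square-and-multiply sketch is a textbook routine included for illustration, not code from the paper.

    def mod_exp(base, exp, mod):
        """Right-to-left square-and-multiply: O(exp.bit_length()) squarings."""
        result = 1
        base %= mod
        while exp:
            if exp & 1:                    # multiply step for each set bit
                result = (result * base) % mod
            base = (base * base) % mod     # one squaring per exponent bit
            exp >>= 1
        return result

    assert mod_exp(7, 560, 561) == pow(7, 560, 561)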

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Data at Rest - Data in Motion

Data at Rest - Data in Motion


Data protection has distinguished between data in motion and data at rest for more than a decade. Research into these areas continues with the proliferation of cloud and mobile technologies. The articles cited here, separated by motion and rest, were offered in the first half of 2014. Data in Motion:

  • Ediger, D.; McColl, R.; Poovey, J.; Campbell, D., "Scalable Infrastructures for Data in Motion," Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on, vol., no., pp.875,882, 26-29 May 2014. doi: 10.1109/CCGrid.2014.91 Analytics applications for reporting and human interaction with big data rely upon scalable frameworks for data ingest, storage, and computation. Batch processing of analytic workloads increases the latency of results and can perform redundant computation. In real-world applications, new data points are continuously arriving and a suite of algorithms must be updated to reflect the changes. Reducing the latency of re-computation by keeping algorithms online and up-to-date enables fast query, experimentation, and drill-down. In this paper, we share our experiences designing and implementing scalable infrastructure around NoSQL databases for social media analytics applications. We propose a new heterogeneous architecture and execution model for streaming data applications that focuses on throughput and modularity.
    Keywords: Big Data; SQL; data analysis; social networking (online); NoSQL databases; analytic workloads; batch processing; big data; data in motion; data ingest; data storage; execution model; heterogeneous architecture; recomputation latency reduction; redundant computation; scalable infrastructures; social media analytics applications; streaming data applications; Algorithm design and analysis; Clustering algorithms; Computational modeling; Data structures; Databases; Media; Servers (ID#:14-2753)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846541&isnumber=6846423
  • Veiga Neves, M.; De Rose, C.AF.; Katrinis, K.; Franke, H., "Pythia: Faster Big Data in Motion through Predictive Software-Defined Network Optimization at Runtime," Parallel and Distributed Processing Symposium, 2014 IEEE 28th International, pp.82,90, 19-23 May 2014. doi: 10.1109/IPDPS.2014.20 The rise of Internet of Things sensors, social networking and mobile devices has led to an explosion of available data. Gaining insights into this data has led to the area of Big Data analytics. The MapReduce framework, as implemented in Hadoop, is one of the most popular frameworks for Big Data analysis. To handle the ever-increasing data size, Hadoop is a scalable framework that allows dedicated, seemingly unbound numbers of servers to participate in the analytics process. Response time of an analytics request is an important factor for time to value/insights. While the compute and disk I/O requirements can be scaled with the number of servers, scaling the system leads to increased network traffic. Arguably, the communication-heavy phase of MapReduce contributes significantly to the overall response time; the problem is further aggravated if communication patterns are heavily skewed, as is not uncommon in many MapReduce workloads. In this paper we present a system that reduces the skew impact by transparently predicting data communication volume at runtime and mapping the many end-to-end flows among the various processes to the underlying network, using emerging software-defined networking technologies to avoid hotspots in the network. Depending on the network oversubscription ratio, we demonstrate reductions in job completion time of between 3% and 46% for popular MapReduce benchmarks like Sort and Nutch.
    Keywords: Big Data; computer networks; parallel programming; telecommunication traffic; Big Data analytics; Hadoop; MapReduce workloads; Nutch MapReduce benchmark; Pythia; Sort MapReduce benchmark; communication patterns; communication-heavy phase; compute requirements; data communication volume prediction; data size; disk I/O requirements; end-to-end flow mapping; job completion time reduction; network oversubscription ratio; network traffic; predictive software-defined network optimization; response time; runtime analysis; scalable framework; system scaling; unbound server numbers; Big data; Instruments; Job shop scheduling; Resource management; Routing; Runtime; Servers; Data communication; Data processing; Distributed computing (ID#:14-2754)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6877244&isnumber=6877223
  • Hou, Junhui; Bian, Zhen-Peng; Chau, Lap-Pui; Magnenat-Thalmann, Nadia; He, Ying, "Restoring Corrupted Motion Capture Data Via Jointly Low-Rank Matrix Completion," Multimedia and Expo (ICME), 2014 IEEE International Conference on, vol., no., pp.1,6, 14-18 July 2014. doi: 10.1109/ICME.2014.6890222 Motion capture (mocap) technology is widely used in various applications. The acquired mocap data usually has missing data due to occlusions or ambiguities. Therefore, restoring the missing entries of the mocap data is a fundamental issue in mocap data analysis. Based on jointly low-rank matrix completion, this paper presents a practical and highly efficient algorithm for restoring the missing mocap data. Taking advantage of the unique properties of mocap data (i.e., strong correlation among the data), we represent the corrupted data as two types of matrices, where both the local and global characteristics are taken into consideration. Then we formulate the problem as a convex optimization problem, where the missing data is recovered by solving the two matrices using the alternating direction method of multipliers algorithm. Experimental results demonstrate that the proposed scheme significantly outperforms the state-of-the-art algorithms in terms of both the quality and computational cost.
    Keywords: Accuracy; Computational efficiency; Computers; Convex functions; Image restoration; Optimization; Trajectory; Motion capture; convex optimization; low-rank; matrix completion (ID#:14-2755)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890222&isnumber=6890121
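The recovery principle is that mocap trajectories are strongly correlated, so the observation matrix is approximately low rank and missing entries can be inferred by penalizing rank. As a hedged illustration of that principle only, the numpy sketch below implements the simpler SVD soft-thresholding ("SoftImpute"-style) iteration rather than the paper's jointly low-rank ADMM formulation; the shrinkage parameter is an assumption to be tuned.

    import numpy as np

    def soft_impute(X, mask, lam=1.0, iters=100):
        """X: data, arbitrary values at missing entries; mask: True where observed."""
        Z = np.zeros_like(X)
        for _ in range(iters):
            filled = np.where(mask, X, Z)        # keep observed, fill missing
            U, s, Vt = np.linalg.svd(filled, full_matrices=False)
            s = np.maximum(s - lam, 0.0)         # shrink singular values
            Z = (U * s) @ Vt                     # low-rank reconstruction
        return Z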
  • Tennekoon, R.; Wijekoon, J.; Harahap, E.; Nishi, H.; Saito, E.; Katsura, S., "Per Hop Data Encryption Protocol For Transmission Of Motion Control Data Over Public Networks," Advanced Motion Control (AMC), 2014 IEEE 13th International Workshop on, vol., no., pp.128,133, 14-16 March 2014. doi: 10.1109/AMC.2014.6823269 Bilateral controllers are a widely used, vital technology for performing remote operations and telesurgeries. The nature of the bilateral controller enables control of objects that are geographically far from the operation location, so the control data has to travel through public networks. As a result, to maintain the effectiveness and consistency of applications such as teleoperation and telesurgery, fast data delivery and data integrity are essential. The Service-oriented Router (SoR) was introduced to maintain rich information on the Internet and to achieve maximum benefit from networks. In particular, the security, privacy and integrity of bilateral communication have not been addressed, in spite of their significance given the underlying skill information or personal vital information. An SoR can analyze all packet or network stream transactions on its interfaces and store them in high-throughput databases. In this paper, we introduce a hop-by-hop routing protocol which provides hop-by-hop data encryption using functions of the SoR; this infrastructure can provide security, privacy and integrity. Furthermore, we present an implementation of the proposed system in the ns-3 simulator, and the test results show that, in a given scenario, the protocol incurs a processing delay of only 46.32 ms per packet for the encryption and decryption processes.
    Keywords: Internet; computer network security; control engineering computing; cryptographic protocols; data communication; data integrity; data privacy; force control; medical robotics; motion control; position control; routing protocols; surgery; telecontrol; telemedicine; telerobotics; Internet; SoR; bilateral communication; bilateral controller; control objects; data delivery; data integrity; decryption process; hop-by-hop data encryption; hop-by-hop routing protocol; motion control data transmission; network stream transaction analysis; ns-3 simulator; operation location; packet analysis; per hop data encryption protocol; personal vital information; privacy; processing delay; public network; remote operation; security; service-oriented router; skill information; teleoperation; telesurgery; throughput database; Delays; Encryption; Haptic interfaces; Routing protocols; Surgery; Bilateral Controllers; Service-oriented Router; hop-by-hop routing; motion control over networks; ns-3 (ID#:14-2756)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823269&isnumber=6823244
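Hop-by-hop encryption means each intermediate router decrypts a packet with the key shared with the previous hop and re-encrypts it for the next, rather than relying on a single end-to-end key. The Python sketch below models one such hop with AES-GCM from the widely used `cryptography` package; the framing (a 12-byte nonce prefix) and key handling are our assumptions, not the SoR protocol's actual wire format.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def seal(key, data):
        nonce = os.urandom(12)                   # fresh nonce per hop
        return nonce + AESGCM(key).encrypt(nonce, data, None)

    def unseal(key, blob):
        return AESGCM(key).decrypt(blob[:12], blob[12:], None)

    def hop(blob, key_in, key_out):
        """One router: decrypt with the incoming hop key, re-encrypt for the next."""
        return seal(key_out, unseal(key_in, blob))

    k0, k1 = AESGCM.generate_key(bit_length=128), AESGCM.generate_key(bit_length=128)
    packet = seal(k0, b"motion control sample")  # sender -> first hop
    packet = hop(packet, k0, k1)                 # intermediate SoR re-encrypts
    assert unseal(k1, packet) == b"motion control sample"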

Data at Rest:

  • Ferretti, L.; Colajanni, M.; Marchetti, M., "Distributed, Concurrent, and Independent Access to Encrypted Cloud Databases," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.2, pp.437,446, Feb. 2014. doi: 10.1109/TPDS.2013.154 Placing critical data in the hands of a cloud provider should come with the guarantee of security and availability for data at rest, in motion, and in use. Several alternatives exist for storage services, while data confidentiality solutions for the database as a service paradigm are still immature. We propose a novel architecture that integrates cloud database services with data confidentiality and the possibility of executing concurrent operations on encrypted data. This is the first solution supporting geographically distributed clients to connect directly to an encrypted cloud database, and to execute concurrent and independent operations including those modifying the database structure. The proposed architecture has the further advantage of eliminating intermediate proxies that limit the elasticity, availability, and scalability properties that are intrinsic in cloud-based solutions. The efficacy of the proposed architecture is evaluated through theoretical analyses and extensive experimental results based on a prototype implementation subject to the TPC-C standard benchmark for different numbers of clients and network latencies.
    Keywords: cloud computing; cryptography; database management systems; TPC-C standard benchmark; availability property; cloud database services; concurrent access; data confidentiality; database structure modification; distributed access; elasticity property; encrypted cloud database; encrypted data concurrent operation execution; geographically distributed clients; independent access; intermediate proxies elimination; network latencies; scalability property; Cloud; SecureDBaaS; confidentiality; database; security (ID#:14-2757)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6522403&isnumber=6689796
  • Woods, Jacqueline; Iyengar, Sridhar; Sinha, Amit; Mitra, Subhasish; Cannady, Stacy, "A New Era Of Computing: Are You "Ready Now" To Build A Smarter And Secured Enterprise?," Quality Electronic Design (ISQED), 2014 15th International Symposium on, pp.1,7, 3-5 March 2014. doi: 10.1109/ISQED.2014.6783293 We are experiencing fundamental changes in how we interact, live, work and succeed in business. To support the new paradigm, computing must be simpler, more responsive and more adaptive, with the ability to seamlessly move from monolithic applications to dynamic services, from structured data at rest to unstructured data in motion, from supporting standard device interfaces to supporting a myriad of new and different devices every day. IBM understands this need to integrate social, mobile, cloud and big data to deliver value for your enterprise, so join this discussion, and learn how IBM helps customers leverage these technologies for superior customer value.
    Keywords: (not provided) (ID#:14-2758)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6783293&isnumber=6783285
  • Rodriguez Garcia, Ricardo; Thorpe, Julie; Vargas Martin, Miguel, "Crypto-assistant: Towards Facilitating Developer's Encryption Of Sensitive Data," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.342,346, 23-24 July 2014. doi: 10.1109/PST.2014.6890958 The lack of encryption of data at rest or in motion is one of the top 10 database vulnerabilities [1]. We suggest that this vulnerability could be prevented by encouraging developers to perform encryption-related tasks by enhancing their integrated development environment (IDE). To this end, we created the Crypto-Assistant: a modified version of the Hibernate Tools plug-in for the popular Eclipse IDE. The purpose of the Crypto-Assistant is to mitigate the impact of developers' lack of security knowledge related to encryption by facilitating the use of encryption directives via a graphical user interface that seamlessly integrates with Hibernate Tools. Two preliminary tests helped us to identify items for improvement which have been implemented in Crypto-Assistant. We discuss Crypto-Assistant's architecture, interface, changes in the developers' workflow, and design considerations.
    Keywords: Databases; Encryption; Java; Prototypes; Software (ID#:14-2759)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890958&isnumber=6890911
  • Hankins, R.Q.; Jigang Liu, "A Novel Approach To Evaluating Similarity In Computer Forensic Investigations," Electro/Information Technology (EIT), 2014 IEEE International Conference on, pp.567,572, 5-7 June 2014. doi: 10.1109/EIT.2014.6871826 Abstraction-based approaches to data analysis in computer forensics require substantial human effort to determine what data is useful. Automated or semi-automated, similarity-based approaches allow rapid computer forensics analysis of large data sets with less focus on untangling many layers of abstraction. Rapid and automated ranking of data by its value to a computer forensics investigation eliminates much of the human effort required in the computer forensics process, leaving investigators to judge and specify what data is interesting and automating the rest of the analysis. In this paper, we develop two algorithms that find portions of a string relevant to an investigation, then refine those portions using a combination of human and computer analysis to rapidly and effectively extract the most useful data from the string, speeding up analysis, documenting it automatically, and partially automating it.
    Keywords: data analysis; digital forensics; abstraction-based approach; computer analysis; computer forensic investigations; data analysis; data ranking; human analysis; similarity evaluation; similarity-based approach; Algorithm design and analysis; Computational complexity; Computers; Digital forensics; Measurement (ID#:14-2760)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6871826&isnumber=6871745
  • D'Orazio, C.; Ariffin, A; Choo, K.-K.R., "iOS Anti-forensics: How Can We Securely Conceal, Delete and Insert Data?," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp.4838,4847, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.594 With increasing popularity of smart mobile devices such as iOS devices, security and privacy concerns have emerged as a salient area of inquiry. A relatively under-studied area is anti-mobile forensics to prevent or inhibit forensic investigations. In this paper, we propose a "Concealment" technique to enhance the security of non-protected (Class D) data that is at rest on iOS devices, as well as a "Deletion" technique to reinforce data deletion from iOS devices. We also demonstrate how our "Insertion" technique can be used to insert data into iOS devices surreptitiously that would be hard to pick up in a forensic investigation.
    Keywords: data privacy; digital forensics; iOS (operating system); mobile computing; mobile handsets; antimobile forensics; concealment technique; data deletion; deletion technique; forensic investigations; iOS antiforensics; iOS devices; insertion technique; nonprotected data security; privacy concerns; security concerns; smart mobile devices; Cryptography; File systems; Forensics; Mobile handsets; Random access memory; Videos; iOS anti-forensics; iOS forensics; mobile anti-forensics; mobile forensics (ID#:14-2761)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759196&isnumber=6758592

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Edge Detection

Edge Detection


Edge detection is an important issue in image and signal processing. The work cited here includes an overview of the topic, several approaches, and applications for radar and sonar. These works were presented or published between January and August of 2014.

  • Waghule, D.R.; Ochawar, R.S., "Overview on Edge Detection Methods," Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on, pp.151,155, 9-11 Jan. 2014. doi: 10.1109/ICESC.2014.31 An edge in an image is a contour across which the brightness of the image changes abruptly. Edge detection plays a vital role in image processing: it is a process that detects the presence and location of edges constituted by sharp changes in image intensity. An important property of an edge detection method is its ability to extract an accurate edge line with good orientation. Different edge detectors work better under different conditions, and comparative evaluation of the different methods makes it easier to decide which edge detection method is appropriate for image segmentation. This paper presents an overview of the published work on edge detection.
    Keywords: edge detection; image segmentation; edge detection methods; image intensity; image processing; image segmentation; sharp changes; Algorithm design and analysis; Detectors; Field programmable gate arrays; Image edge detection; Morphology; Wavelet transforms; Edge Detection; Edge Detectors; FPGA; Wavelets (ID#:14-2812)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6745363&isnumber=6745317
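Most of the classical detectors covered by such surveys start from the image gradient. As a minimal, hedged example (ours, not the paper's), the Python sketch below computes the Sobel gradient magnitude with numpy/scipy and thresholds it into a binary edge map; the threshold value is an assumption the caller must tune.

    import numpy as np
    from scipy.ndimage import convolve

    def sobel_edges(img, thresh=100.0):
        """Binary edge map from the Sobel gradient magnitude."""
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T                                # vertical-gradient kernel
        gx = convolve(img.astype(float), kx)     # horizontal gradient
        gy = convolve(img.astype(float), ky)
        return np.hypot(gx, gy) > thresh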
  • Isik, S.; Ozkan, K., "A Novel Multi-Scale And Multi-Expert Edge Detection Method Based On Common Vector Approach," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, vol., no., pp.1630,1633, 23-25 April 2014. doi: 10.1109/SIU.2014.6830558 Edge detection is one of the most popular problems in image analysis. The goal is an edge detection method with efficient computation time, minimal sensitivity to noise, and the ability to extract meaningful edges from the image, and many edge detection algorithms have emerged in this crowded area. Different derivative operators at different scales are needed to properly determine all meaningful edges in a processed image. In this work, we combine the edge information obtained from each operator at different scales using the concept of the common vector approach, and obtain edge segments that are connected, thin, and robust to noise.
    Keywords: edge detection; expert systems; common vector approach; crowded edge detection algorithms; edge information; image analysis; multiexpert edge detection method; multiscale edge detection method; Conferences; Image edge detection; Noise; Pattern recognition; Speech; Vectors; common vector approach; edge detection; multi-expert; multi-scale (ID#:14-2813)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830558&isnumber=6830164
  • Wenlong Fu; Johnston, M.; Mengjie Zhang, "Low-Level Feature Extraction for Edge Detection Using Genetic Programming," Cybernetics, IEEE Transactions on, vol.44, no.8, pp.1459,1472, Aug. 2014. doi: 10.1109/TCYB.2013.2286611 Edge detection is a subjective task. Traditionally, a moving window approach is used, but the window size in edge detection is a tradeoff between localization accuracy and noise rejection. An automatic technique for searching a discriminated pixel's neighbors to construct new edge detectors is appealing to satisfy different tasks. In this paper, we propose a genetic programming (GP) system to automatically search pixels (a discriminated pixel and its neighbors) to construct new low-level subjective edge detectors for detecting edges in natural images, and analyze the pixels selected by the GP edge detectors. Automatically searching pixels avoids the problem of blurring edges from a large window and noise influence from a small window. Linear and second-order filters are constructed from the pixels with high occurrences in these GP edge detectors. The experiment results show that the proposed GP system has good performance. A comparison between the filters with the pixels selected by GP and all pixels in a fixed window indicates that the set of pixels selected by GP is compact but sufficiently rich to construct good edge detectors.
    Keywords: edge detection; feature extraction; filtering theory; genetic algorithms; image denoising; GP system; edge detection; genetic programming; linear filters; localization accuracy; low-level feature extraction; natural images; noise rejection; second-order filters; Accuracy; Detectors; Educational institutions; Feature extraction; Image edge detection; Noise; Training; Edge detection; feature extraction; genetic programming (ID#:14-2814)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6649981&isnumber=6856256
  • Naumenko, AV.; Lukin, V.V.; Vozel, B.; Chehdi, K.; Egiazarian, K., "Neural Network Based Edge Detection In Two-Look And Dual-Polarization Radar Images," Radar Symposium (IRS), 2014 15th International, vol., no., pp.1,4, 16-18 June 2014. doi: 10.1109/IRS.2014.6869302 Edge detection is a standard operation in image processing. It becomes problematic if noise is not additive, not Gaussian and not i.i.d., as happens in images acquired by synthetic aperture radar (SAR). To perform edge detection better, it has recently been proposed to apply a trained neural network (NN) with SAR image pre-filtering in single-look mode. In this paper, we demonstrate that the proposed detector is, after certain modifications, applicable to edge detection in two-look and dual-polarization SAR images with and without pre-filtering. Moreover, we show that the recently introduced AUC (Area Under the Curve) parameter can be helpful in optimizing the parameters of the elementary edge detectors used as inputs to the NN edge detector. Quantitative analysis results confirming the efficiency of the proposed detector are presented, and its performance is also studied on real-life TerraSAR-X data.
    Keywords: edge detection; neural nets; radar computing; radar imaging; radar polarimetry; synthetic aperture radar; NN edge detector; SAR image pre-filtering; area under the curve; dual-polarization radar images; image processing; neural network based edge detection; parameter optimization; real-life TerraSAR-X data; single-look mode; synthetic aperture radar; two-look radar images; Artificial neural networks; Detectors; Image edge detection; Noise; Speckle; Synthetic aperture radar; Training; Synthetic aperture radar; edge detection; neural network; polarimetric; speckle; two-look images (ID#:14-2815)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6869302&isnumber=6869176
  • Tong Chunya; Teng Linlin; Zhou Jiaming; He Kejia; Zhong Qiubo, "A Novel Method Of Edge Detection With Gabor Wavelet Based On FFTW," Electronics, Computer and Applications, 2014 IEEE Workshop on, pp.625,628, 8-9 May 2014. doi: 10.1109/IWECA.2014.6845697 Because remote sensing images contain substantial data and complex landmarks, they place higher requirements on the edge detection operator. Using the Gabor wavelet as the edge detection operator overcomes the limitations of gradient operators and the Canny operator in edge detection. However, methods based on the 2-D Gabor wavelet are time-consuming. In response to this shortcoming of the Gabor wavelet, this paper presents an edge detection method based on parallel processing with FFTW and the Gabor wavelet, and experimental analysis shows that this method can greatly improve the processing speed of the algorithm.
    Keywords: Gabor filters; discrete Fourier transforms; edge detection; geophysical image processing; remote sensing; wavelet transforms; 2D Gabor wavelet; Canny operator; FFTW; complex landmark feature; discrete Fourier transformation; edge detection method; gradient operator; parallel processing; remote sensing images; substantial data feature; Image edge detection; Image resolution; Wavelet transforms; FFTW; Gabor wavelet; edge detection; parallel processing; remote sensing images (ID#:14-2816)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845697&isnumber=6845536
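The speed-up in such methods comes from performing the 2-D Gabor convolution in the frequency domain, where an FFT library like FFTW reduces a sliding-window filter to a few transforms and a pointwise product. The numpy sketch below (numpy's FFT standing in for FFTW, kernel parameters assumed for illustration) shows the structure of that computation, not the paper's code.

    import numpy as np

    def gabor_kernel(size, sigma, theta, freq):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
        xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinates
        yr = -x * np.sin(theta) + y * np.cos(theta)
        envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
        return envelope * np.cos(2 * np.pi * freq * xr)  # even Gabor component

    def gabor_filter_fft(img, kernel):
        """Frequency-domain convolution: FFTs replace the sliding window."""
        shape = (img.shape[0] + kernel.shape[0] - 1,
                 img.shape[1] + kernel.shape[1] - 1)
        spectrum = np.fft.rfft2(img, shape) * np.fft.rfft2(kernel, shape)
        return np.fft.irfft2(spectrum, shape)[:img.shape[0], :img.shape[1]]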
  • Nai-Quei Chen; Jheng-Jyun Wang; Li-An Yu; Chung-Yen Su, "Sub-pixel Edge Detection of LED Probes Based on Canny Edge Detection and Iterative Curve Fitting," Computer, Consumer and Control (IS3C), 2014 International Symposium on, vol., no., pp.131,134, 10-12 June 2014. doi: 10.1109/IS3C.2014.45 In recent years, demand for LEDs has been increasing. Testing LED quality requires LED probes, so probe accuracy and manufacturing methods have attracted more attention from companies. To date, LED probes are still ground by hand. During machining, both the angle and the radius of a probe must be considered (the radius is between 0.015 mm and 0.03 mm), so it is hard to balance precision and quality. In this study, we propose an effective method to measure the angle and radius of a probe, based on Canny edge detection and iterative curve fitting. Experimental results show the effectiveness of the proposed method.
    Keywords: curve fitting; edge detection; iterative methods; light emitting diodes; Canny edge detection; LED probes; LED quality test; iterative curve fitting; probe angle; probe radius; subpixel edge detection; Computational modeling; Curve fitting; Equations; Image edge detection; Light emitting diodes; Mathematical model; Probes; Edge detection; LED; Probe; Sub-pixel edge detection (ID#:14-2817)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845477&isnumber=6845429
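Measuring a probe-tip radius from detected edge pixels reduces to fitting a circle to scattered points. The Python sketch below uses the simple algebraic (Kasa) least-squares circle fit as a stand-in for the paper's iterative curve fitting; the synthetic half-circle data and its 0.02 mm radius are made-up values for illustration.

    import numpy as np

    def fit_circle(x, y):
        """Least-squares fit of x^2 + y^2 + D*x + E*y + F = 0 (Kasa method)."""
        A = np.column_stack([x, y, np.ones_like(x)])
        b = -(x**2 + y**2)
        (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
        cx, cy = -D / 2, -E / 2
        return cx, cy, np.sqrt(cx**2 + cy**2 - F)

    t = np.linspace(0, np.pi, 50)                 # synthetic tip-edge points
    x, y = 0.02 * np.cos(t) + 1.0, 0.02 * np.sin(t) + 2.0
    print(fit_circle(x, y))                       # ~ (1.0, 2.0, 0.02)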
  • Nascimento, AD.C.; Horta, M.M.; Frery, AC.; Cintra, R.J., "Comparing Edge Detection Methods Based on Stochastic Entropies and Distances for PolSAR Imagery," Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of, vol.7, no.2, pp.648,663, Feb. 2014. doi: 10.1109/JSTARS.2013.2266319 Polarimetric synthetic aperture radar (PolSAR) has achieved a prominent position as a remote imaging method. However, PolSAR images are contaminated by speckle noise due to the coherent illumination employed during data acquisition. This noise gives a granular aspect to the image, making its processing and analysis (such as edge detection) hard tasks. This paper discusses seven methods for edge detection in multilook PolSAR images. In all methods, the basic idea consists in detecting transition points in the finest possible strip of data which spans two regions. The edge is contoured using the transition points and a B-spline curve. Four stochastic distances, two differences of entropies, and the maximum likelihood criterion were used under the scaled complex Wishart distribution; the first six stem from the (h, φ) class of measures. The performance of the discussed detection methods was quantified and analyzed in terms of computational time and probability of correct edge detection, with respect to the number of looks, the backscatter matrix as a whole, the SPAN, the covariance, and the spatial resolution. The detection procedures were applied to three real PolSAR images. Results provide evidence that the methods based on the Bhattacharyya distance and the difference of Shannon entropies outperform the other techniques.
    Keywords: data acquisition; edge detection; entropy; geophysical techniques; image resolution; maximum likelihood estimation; radar imaging; radar polarimetry; remote sensing by radar; speckle; splines (mathematics); statistical distributions; stochastic processes; synthetic aperture radar; B-spline curve; Bhattacharyya distance; SPAN; Shannon entropies; backscatter matrix; coherent illumination; computational time; data acquisition; detection methods; detection procedures; edge detection methods; image analysis; image processing; look number; maximum likelihood criterion; multilook PolSAR images; polarimetric synthetic aperture radar; probability; real PolSAR images; remote imaging method; scaled complex Wishart distribution; spatial resolution; speckle noise; stochastic distances; stochastic entropies; transition points; Edge detection; image analysis; information theory; polarimetric SAR (ID#:14-2818)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6550901&isnumber=6730960
  • Weibin Rong; Zhanjing Li; Wei Zhang; Lining Sun, "An Improved Canny Edge Detection Algorithm," Mechatronics and Automation (ICMA), 2014 IEEE International Conference on, vol., no., pp.577,582, 3-6 Aug. 2014. doi: 10.1109/ICMA.2014.6885761 The traditional Canny edge detection algorithm is sensitive to noise; it therefore easily loses weak edge information when filtering out the noise, and its fixed parameters show poor adaptability. In response to these problems, this paper proposes an improved algorithm based on the Canny algorithm. The algorithm introduces the concept of gravitational field intensity to replace the image gradient, yielding a gravitational field intensity operator. Two adaptive threshold selection methods, based on the mean of the image gradient magnitude and its standard deviation, are put forward for two typical kinds of images (one with little edge information, the other with rich edge information). The improved Canny algorithm is simple and easy to realize. Experimental results show that the algorithm preserves more useful edge information and is more robust to noise.
    Keywords: edge detection; adaptive threshold selection methods; gravitational field intensity operator; image gradient magnitude; improved Canny edge detection algorithm; standard deviation; Algorithm design and analysis; Histograms; Image edge detection; Noise; Robustness; Standards; Tires; Adaptive threshold; Canny algorithm; Edge detection; Gravitational field intensity operator (ID#:14-2819)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6885761&isnumber=6885661
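The adaptive-threshold idea generalizes readily: derive Canny's high threshold from statistics of the gradient magnitude rather than fixing it. The OpenCV sketch below is a hedged rendering of that idea; the scale factor k, the low/high ratio, and the use of plain Sobel gradients instead of the paper's gravitational field intensity operator are all our assumptions.

    import cv2
    import numpy as np

    def adaptive_canny(gray, k=1.0, ratio=0.5):
        """Canny with a data-driven high threshold: mean + k*std of |grad|."""
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
        mag = np.hypot(gx, gy)
        high = mag.mean() + k * mag.std()
        return cv2.Canny(gray, ratio * high, high)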
  • Catak, M.; Duran, N., "2-Dimensional Auto-Regressive Process Applied To Edge Detection," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, vol., no., pp.1442,1445, 23-25 April 2014. doi: 10.1109/SIU.2014.6830511 Edge detection has important applications in image processing. In addition to well-known deterministic approaches, stochastic models have been developed and validated for edge detection. In this study, a stochastic auto-regressive process method is presented and applied to gray scale and color images. The results are compared with other well-recognized edge detectors, and the applicability of the developed method is pointed out.
    Keywords: edge detection; image colour analysis; stochastic processes; autoregressive process; color scale images; edge detection; edge detectors; gray scale images; image processing; stochastic autoregressive process method; stochastic models; Art; Conferences; Feature extraction; Image edge detection; MATLAB; Signal processing; Stochastic processes; auto-regressive process; color image processing; edge detection (ID#:14-2820)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830511&isnumber=6830164
  • Wang, Xingmei; Liu, Guangyu; Li, Lin; Liu, Zhipeng, "A Novel Quantum-Inspired Algorithm For Edge Detection Of Sonar Image," Control Conference (CCC), 2014 33rd Chinese, pp.4836,4841, 28-30 July 2014. doi: 10.1109/ChiCC.2014.6895759 In order to accurately extract underwater object contours from sonar images, a novel quantum-inspired edge detection algorithm is proposed. The algorithm uses the parameters of an anisotropic second-order Markov Random Field (MRF) model to describe the texture features of the original sonar image and to smooth noise. On this basis, the sonar image is represented by quantum bits according to quantum theory, and an edge detection operator is constructed by establishing a quantum superposition relationship between pixels. The results of quantum-inspired edge detection are evaluated by PSNR (Peak Signal to Noise Ratio). Comparative experiments demonstrate that the proposed algorithm smooths the original sonar image well, extracts underwater object contours accurately, and has good adaptability.
    Keywords: Histograms; Image edge detection; PSNR; Quantum mechanics; Sonar detection; Edge Detection; Peak Signal to Noise Ratio; Quantum-inspired; Sonar image (ID#:14-2821)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895759&isnumber=6895198
  • Baselice, F.; Ferraioli, G.; Reale, D., "Edge Detection Using Real and Imaginary Decomposition of SAR Data," Geoscience and Remote Sensing, IEEE Transactions on, vol.52, no.7, pp. 3833-3842, July 2014. doi: 10.1109/TGRS.2013.2276917 The objective of synthetic aperture radar (SAR) edge detection is the identification of contours across the investigated scene, exploiting SAR complex data. Edge detectors available in the literature exploit amplitude and interferometric phase information singly, looking for reflectivity or height differences between neighboring pixels, respectively. Recently, better-performing detectors based on the joint processing of amplitude and interferometric phase data have been presented. In this paper, we propose a novel approach based on the exploitation of the real and imaginary parts of single-look complex acquired data. The technique is developed in the framework of stochastic estimation theory, exploiting Markov random fields. Compared to available edge detectors, the proposed technique shows useful advantages in terms of model complexity, phase artifact robustness, and scenario applicability. Experimental results on both simulated and real TerraSAR-X and COSMO-SkyMed data show the interesting performance and the overall effectiveness of the proposed method.
    Keywords: edge detection; geophysical image processing; remote sensing by radar; synthetic aperture radar; COSMO-SkyMed data; Markov random fields; SAR complex data; SAR data imaginary decomposition; SAR data real decomposition; TerraSAR-X data; amplitude phase information; contour identification; edge detection; interferometric phase data; interferometric phase information; single-look complex acquired data; stochastic estimation theory; synthetic aperture radar; Buildings; Detectors; Estimation; Image edge detection; Joints; Shape; Synthetic aperture radar; Edge detection; Markov random fields (MRFs); synthetic aperture radar (SAR) (ID#:14-2822)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6595051&isnumber=6750067
  • Byungjin Chung; Joohyeok Kim; Changhoon Yim, "Fast Rough Mode Decision Method Based On Edge Detection For Intra Coding in HEVC," Consumer Electronics (ISCE 2014), The 18th IEEE International Symposium on, pp. 1-2, 22-25 June 2014. doi: 10.1109/ISCE.2014.6884419 In this paper, we propose a fast rough mode decision method based on edge detection for intra coding in HEVC. It performs edge detection using the Sobel operator and estimates the angular direction using gradient values. Histogram mapping is used to reduce the number of prediction modes for full rate-distortion optimization (RDO). The proposed method achieves a processing speed improvement through RDO computation reduction. Simulation results show that encoding time is reduced significantly compared to HM-13.0, with acceptable BD-PSNR and BD-rate.
    Keywords: edge detection; video coding; BD-rate; HEVC; HM-13.0; RDO computation reduction; Sobel operator; acceptable BD-PSNR; angular direction estimation; edge detection; encoding time; fast rough mode decision method; full RDO; full rate-distortion optimization; gradient values; histogram mapping; intracoding; prediction mode number reduction; processing speed improvement; Encoding; Histograms; Image edge detection; Rate-distortion; Simulation; Standards; Video coding; HEVC; edge detection; intra prediction; prediction mode (ID#:14-2823)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884419&isnumber=6884278
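    As a rough illustration of the gradient-direction histogram step, the Python sketch below ranks a block's dominant edge angles, which could then be mapped to a reduced set of angular intra modes for RDO; the bin count and number of kept candidates are assumptions, not the paper's settings:

      import cv2
      import numpy as np

      def dominant_directions(block, n_bins=8, keep=3):
          # Gradient components via the Sobel operator.
          gx = cv2.Sobel(block, cv2.CV_64F, 1, 0, ksize=3)
          gy = cv2.Sobel(block, cv2.CV_64F, 0, 1, ksize=3)
          angle = np.arctan2(gy, gx) % np.pi      # edge direction, modulo 180 degrees
          weight = np.hypot(gx, gy)
          # Magnitude-weighted histogram of angles, so strong edges dominate.
          hist, edges = np.histogram(angle, bins=n_bins, range=(0.0, np.pi),
                                     weights=weight)
          top = np.argsort(hist)[::-1][:keep]     # strongest few directions
          return [(edges[i] + edges[i + 1]) / 2 for i in top]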
  • Muhammad, A.; Bala, I.; Salman, M.S.; Eleyan, A., "DWT Subbands Fusion Using Ant Colony Optimization For Edge Detection," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp. 1351-1354, 23-25 April 2014. doi: 10.1109/SIU.2014.6830488 In this paper, a new approach for image edge detection using wavelet-based ant colony optimization (ACO) is proposed. The proposed approach applies the discrete wavelet transform (DWT) to the image. ACO is applied to the four generated subbands (approximation, horizontal, vertical, and diagonal) separately for edge detection. After obtaining edges from the four subbands, the inverse DWT is applied to fuse the results into one image of the same size as the original. The proposed approach outperforms the conventional ACO approach.
    Keywords: ant colony optimisation; discrete wavelet transforms; edge detection; image fusion; ACO; DWT subbands fusion; ant colony optimization; discrete wavelet transform; image edge detection; inverse DWT; Ant colony optimization; Conferences; Discrete wavelet transforms; Image edge detection; Image reconstruction; Signal processing algorithms; ant colony optimization; discrete wavelet transform; edge detection (ID#:14-2824)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830488&isnumber=6830164
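    The decompose-detect-fuse pipeline can be sketched briefly. In this minimal version, Canny substitutes for the paper's ACO edge detector, and PyWavelets with the 'haar' wavelet is an assumed choice:

      import cv2
      import numpy as np
      import pywt

      def dwt_edge_fusion(gray):
          # One-level 2-D DWT: approximation + horizontal/vertical/diagonal detail.
          cA, (cH, cV, cD) = pywt.dwt2(gray.astype(float), 'haar')
          def edges(band):
              # Rescale the subband to 8-bit, then run the stand-in edge detector.
              u8 = cv2.normalize(band, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
              return cv2.Canny(u8, 50, 150).astype(float)
          # Detect edges in each subband, then fuse back to full size via inverse DWT.
          return pywt.idwt2((edges(cA), (edges(cH), edges(cV), edges(cD))), 'haar')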

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Expert Systems

Expert Systems


Expert systems based on fuzzy logic hold promise for solving many problems. The research presented here addresses black hole attacks in wireless sensor networks, a fuzzy tool for conducting information security risk assessments, an expert code generator, and other topics. These works were presented between January and August of 2014.

  • Taylor, V.F.; Fokum, D.T., "Mitigating Black Hole Attacks In Wireless Sensor Networks Using Node-Resident Expert Systems," Wireless Telecommunications Symposium (WTS), 2014, pp. 1-7, 9-11 April 2014. doi: 10.1109/WTS.2014.6835013 Wireless sensor networks consist of autonomous, self-organizing, low-power nodes which collaboratively measure data in an environment and cooperate to route this data to its intended destination. Black hole attacks are potentially devastating attacks on wireless sensor networks in which a malicious node uses spurious route updates to attract network traffic that it then drops. We propose a robust and flexible attack detection scheme that uses a watchdog mechanism and lightweight expert system on each node to detect anomalies in the behaviour of neighbouring nodes. Using this scheme, even if malicious nodes are inserted into the network, good nodes will be able to identify them based on their behaviour as inferred from their network traffic. We examine the resource-preserving mechanisms of our system using simulations and demonstrate that we can allow groups of nodes to collectively evaluate network traffic and identify attacks while respecting the limited hardware resources (processing, memory and storage) that are typically available on wireless sensor network nodes.
    Keywords: expert systems; telecommunication computing; telecommunication network routing; telecommunication security; telecommunication traffic; wireless sensor networks; autonomous self-organizing low-power nodes; black hole attacks; flexible attack detection scheme; lightweight expert system; malicious node; network traffic; node-resident expert systems; resource-preserving mechanisms; spurious route updates; watchdog mechanism; wireless sensor networks; Cryptography; Expert systems; Intrusion detection; Monitoring; Routing; Routing protocols; Wireless sensor networks (ID#:14-2825)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6835013&isnumber=6834983
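    The watchdog idea reduces to a simple node-resident rule. The sketch below assumes a node can overhear its neighbours' retransmissions; the threshold is an illustrative assumption, not the authors' tuned value:

      FORWARD_RATIO_THRESHOLD = 0.5   # below this, a neighbour looks like a black hole

      def evaluate_neighbour(overheard_forwards, packets_handed_over):
          """Flag a neighbour that accepts packets but rarely forwards them."""
          if packets_handed_over == 0:
              return "no-evidence"
          ratio = overheard_forwards / packets_handed_over
          return "suspect" if ratio < FORWARD_RATIO_THRESHOLD else "normal"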
  • Bartos, J.; Walek, B.; Klimes, C.; Farana, R., "Fuzzy Tool For Conducting Information Security Risk Analysis," Control Conference (ICCC), 2014 15th International Carpathian, pp. 28-33, 28-30 May 2014. doi: 10.1109/CarpathianCC.2014.6843564 This article proposes a fuzzy tool for processing risk analysis in the area of information security. The paper reviews today's approaches (qualitative and quantitative methodologies) and, together with already published results, proposes a fuzzy tool to support the authors' novel approach. The fuzzy tool itself is proposed and each of its main parts is described. The proposed fuzzy tool is connected with an expert system and a methodology that are part of a more complex approach to the decision-making process. The knowledge base of the expert system is created based on user input values and knowledge of the problem domain. The proposed fuzzy tool is demonstrated on examples and problems from the area of information security.
    Keywords: expert systems; fuzzy set theory; risk analysis; security of data; decision making process; expert system; fuzzy tool; information security risk analysis; qualitative methodologies; quantitative methodologies; Expert systems; Information security; Organizations; Risk management; expert system; fuzzy; fuzzy tool; information security; risk analysis; uncertainty (ID#:14-2826)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843564&isnumber=6843557
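    A toy version of fuzzy risk scoring in this spirit appears below; the triangular membership functions and the single min-rule are assumptions for illustration, not the authors' knowledge base:

      def tri(x, a, b, c):
          """Triangular membership function peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def risk_degree(likelihood, impact):
          # Degree to which each 0..10 input counts as "high".
          high_l = tri(likelihood, 4.0, 10.0, 16.0)   # saturates at 10
          high_i = tri(impact, 4.0, 10.0, 16.0)
          # Rule: risk is high to the degree that BOTH inputs are high (min t-norm).
          return min(high_l, high_i)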
  • Imam, A.T.; Rousan, T.; Aljawarneh, S., "An Expert Code Generator Using Rule-Based And Frames Knowledge Representation Techniques," Information and Communication Systems (ICICS), 2014 5th International Conference on, pp. 1-6, 1-3 April 2014. doi: 10.1109/IACS.2014.6841951 This paper demonstrates the development of an expert code generator using rule-based and frames knowledge representation techniques (ECG-RF). The ECG-RF system presented in this paper is a passive code generator that carries out the task of automatic code generation in fixed-structure software. To develop the ECG-RF system, the artificial intelligence (AI) techniques of rule-based systems and frames knowledge representation were applied to the code generation task. ECG-RF fills a predefined frame of a certain fixed-structure program with code chunks retrieved from ECG-RF's knowledge base. The filling operation is achieved by ECG-RF's inference engine and is guided by information collected from the user via a graphical user interface (GUI). In this paper, an ECG-RF system for generating a device driver program is presented and implemented with VBasic software. The results show that the ECG-RF design concept is reasonably reliable.
    Keywords: graphical user interfaces; inference mechanisms; knowledge based systems; program compilers; ECG-RF design concept; ECG-RF inference engine; ECG-RF knowledge base; ECG-RF system; GUI; VBasic software; artificial intelligence; automatic code generation; code chunks; code generation task; device driver program; expert code generator; fixed-structure program; fixed-structure software; frames knowledge representation techniques; graphic user interface; passive code generator; rule-based system; Engines; Generators; Graphical user interfaces; Knowledge representation; Programming; Software; Software engineering; Automatic Code Generation; Expert System; Frames Knowledge Representation Techniques; Software Development (ID#:14-2827)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841951&isnumber=6841931
  • Mavropoulos, C.; Ping-Tsai Chung, "A Rule-based Expert System: Speakeasy - Smart Drink Dispenser," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, pp. 1-6, 2 May 2014. doi: 10.1109/LISAT.2014.6845224 In this paper, we develop a knowledge-based expert system case study called the Speakeasy Expert System (S.E.S.) for exercising rule-based expert system programming in both CLIPS and VisiRule. CLIPS stands for "C Language Integrated Production System"; it is an expert system tool created to facilitate the development of software to model human knowledge or expertise. VisiRule is a tool that allows experts to build decision models using a graphical paradigm, one that can be annotated using code and/or Boolean logic and then executed and exported to other programs and processes. Nowadays, billions of computing devices are interconnected in computing and communications, ranging from desktop personal computers, laptops, and servers to embedded computers and small devices such as mobile phones. This growth shows no signs of slowing down and has given rise to a new technology in computing and communications, called the Internet of Things (IoT). In this study, we propose extending the S.E.S. into a Smart Drink Dispenser using IoT technology. We present the data flow diagram of the S.E.S. in an IoT environment and its IoT architecture, and propose the usage and implementation of the S.E.S.
    Keywords: Boolean functions; C language; decision making; expert systems; Boolean logic; C language integrated production system; CLIPS; SES; VisiRule; decision models; graphical paradigm; human knowledge; knowledge-based expert system; rule-based expert system programming; smart drink dispenser; speakeasy expert system; Alcoholic beverages; Business; Decision trees; Expert systems; Internet of Things; Artificial Intelligence (AI); CLIPS; Decision Making Information System; Internet of Things (IOT); Knowledge-based Expert Systems; Radio-frequency Identification (RFID); VisiRule (ID#:14-2828)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845224&isnumber=6845183
  • Yuzuguzel, H.; Cemgil, A.T.; Anarim, E., "Query Ranking Strategies In Probabilistic Expert Systems," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp. 1199-1202, 23-25 April 2014. doi: 10.1109/SIU.2014.6830450 The number of features is quite high in many fields. For instance, the number of symptoms is in the thousands in probabilistic medical expert systems. Since it is not practical to query all the symptoms to reach a diagnosis, query choice becomes important. In this work, three query ranking strategies in probabilistic expert systems are proposed and their performance on synthetic data is evaluated.
    Keywords: medical diagnostic computing; medical expert systems; probability; query processing; medical diagnosis; probabilistic expert systems; probabilistic medical expert systems; query ranking strategies; Conferences; Entropy; Expert systems; Inference algorithms; Probabilistic logic; Sequential diagnosis; Signal processing; medical diagnosis; relative-entropy; sequential diagnosis (ID#:14-2829)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830450&isnumber=6830164
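    One natural ranking strategy of this kind is to query the symptom whose answer is currently most uncertain. A minimal entropy-based sketch follows; the strategy choice is illustrative and not necessarily one of the paper's three:

      import math

      def entropy(p):
          """Shannon entropy of a binary symptom with P(present) = p."""
          if p <= 0.0 or p >= 1.0:
              return 0.0
          return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

      def rank_queries(symptom_probs):
          # Highest-entropy symptoms first: their answers carry the most information.
          return sorted(symptom_probs, key=lambda s: entropy(symptom_probs[s]),
                        reverse=True)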
  • GaneshKumar, P.; Rani, C.; Devaraj, D.; Victoire, T.A.A., "Hybrid Ant Bee Algorithm for Fuzzy Expert System Based Sample Classification," Computational Biology and Bioinformatics, IEEE/ACM Transactions on, vol.11, no.2, pp. 347-360, March-April 2014. doi: 10.1109/TCBB.2014.2307325 Accuracy maximization and complexity minimization are the two main goals of fuzzy expert system based microarray data classification. Our previous Genetic Swarm Algorithm (GSA) approach improved the classification accuracy of the fuzzy expert system at the cost of interpretability. The if-then rules produced by the GSA are lengthy and complex, which is difficult for a physician to understand. To address this interpretability-accuracy tradeoff, the rule set is represented using integer numbers and the task of rule generation is treated as a combinatorial optimization task. Ant colony optimization (ACO) with local and global pheromone updates is applied to find the fuzzy partition based on the gene expression values for generating a simpler rule set. To handle the formless and continuous expression values of a gene, this paper employs the artificial bee colony (ABC) algorithm to evolve the points of the membership function. Mutual information is used for identification of informative genes. The performance of the proposed hybrid Ant Bee Algorithm (ABA) is evaluated using six gene expression data sets. The simulation study shows that the proposed approach generates an accurate fuzzy system with highly interpretable and compact rules for all the data sets when compared with other approaches.
    Keywords: ant colony optimisation; classification; fuzzy systems; genetic algorithms; genetics; genomics; medical expert systems; ABA; ACO; GSA; Genetic Swarm Algorithm approach; accuracy maximization; ant colony optimization; artificial bee colony algorithm; classification accuracy; combinatorial optimization task; complexity minimization; continuous expression values; formless expression values; fuzzy expert system based microarray data classification; fuzzy partition; gene expression data sets; gene expression values; global pheromone updation; hybrid ant bee algorithm; if-then rules; informative gene identification; integer numbers; interpretability-accuracy tradeoff; local pheromone updation; membership function; mutual information; rule generation; rule set; sample classification; simulation study; Accuracy; Computational biology; Data models; Expert systems; Fuzzy systems; Gene expression; IEEE transactions; Microarray data; ant colony optimization; artificial bee colony; fuzzy expert system; mutual information (ID#:14-2830)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6746045&isnumber=6819503
  • Carreto, C.; Baltazar, M., "An Expert System for Mobile Devices Based On Cloud Computing," Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on, pp. 1-6, 18-21 June 2014. doi: 10.1109/CISTI.2014.6876953 This paper describes the implementation of an expert system for Android mobile devices, aimed at the common user, with the ability to use different knowledge bases selectable by the user. The system uses a cloud computing-based architecture to facilitate the creation and distribution of different knowledge bases.
    Keywords: cloud computing; expert systems; mobile computing; smart phones; Android mobile devices; cloud computing-based architecture; expert system; knowledge base; mobile devices; Androids; Engines; Expert systems; Google; Humanoid robots; Mobile communication; Android; Cloud computing; Expert System (ID#:14-2831)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876953&isnumber=6876860
  • Pokhrel, J.; Lalanne, F.; Cavalli, A.; Mallouli, W., "QoE Estimation for Web Service Selection Using a Fuzzy-Rough Hybrid Expert System," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp. 629-634, 13-16 May 2014. doi: 10.1109/AINA.2014.77 With the proliferation of web services on the Internet, it has become important for service providers to select the best services for their clients in accordance with their functional and non-functional requirements. Generally, QoS parameters are used to select the best-performing web services; however, these parameters do not necessarily reflect the user's satisfaction. Therefore, it is necessary to estimate the quality of web services on the basis of user satisfaction, i.e., Quality of Experience (QoE). In this paper, we propose a novel method based on a fuzzy-rough hybrid expert system for estimating the QoE of web services for web service selection. It also presents how different QoS parameters impact the QoE of web services. For this, we conducted subjective tests in a controlled environment with real users to correlate QoS parameters to subjective QoE. Based on these subjective tests, we derive membership functions and inference rules for the fuzzy system. Membership functions are derived using a probabilistic approach, and inference rules are generated using Rough Set Theory (RST). We evaluated our system in a simulated environment in MATLAB. The simulation results show that the estimated web quality from our system has a high correlation with the subjective QoE obtained from the participants in the controlled tests.
    Keywords: Web services; expert systems; fuzzy set theory; probability; quality of experience; rough set theory; Internet; MATLAB; QoE estimation; QoS parameters; RST; fuzzy system; fuzzy-rough hybrid expert system; inference rules; membership functions; probabilistic approach; quality of experience; rough set theory; user satisfaction; web service selection; web services proliferation; Availability; Estimation; Expert systems; Quality of service; Set theory; Web services; QoE; QoS; Web Services; intelligent systems (ID#:14-2832)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838723&isnumber=6838626
  • Kaur, B.; Madan, S., "A Fuzzy Expert System To Evaluate Customer's Trust In B2C E-Commerce Websites," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp. 394-399, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828166 With profound Internet penetration being the most significant technological advancement of the last few years, the platform for e-Commerce growth is set. The e-Commerce industry has experienced astounding growth in recent years. For the successful implementation of a B2C e-business, it is necessary to understand the trust issues associated with the online environment which hold the customer back from shopping online. This paper proposes a model to discern the impact of trust factors in the Indian e-Commerce marketplace on customers' intention to purchase from an e-store. The model is based on a Mamdani fuzzy inference system, which is used to compute the trust index of an e-store in order to assess the confidence level of customers in the online store. The study first identifies the trust factors and then surveys experts on them in order to examine the significance of each factor. Thereafter, customers' responses regarding B2C e-Commerce websites with respect to the trust parameters are studied, which leads to the development of the fuzzy system. The questionnaire survey method was used to gather primary data, which was later used for rule formation in the fuzzy inference system.
    Keywords: Web sites; consumer behaviour; electronic commerce; expert systems; fuzzy reasoning; purchasing; retail data processing; trusted computing; B2C e-business; B2C e-commerce Websites; Indian e-commerce marketplace; Internet penetration; Mamdani fuzzy inference system; customer confidence level; customer intention; customer trust; e-commerce growth; e-commerce industry; e-store; fuzzy expert system; fuzzy system development; online environment; online shopping; online store; purchasing; trust factors; trust index; trust issues; trust parameters; Business; Computational modeling; Expert systems; Fuzzy logic; Fuzzy systems; Indexes; Internet; Customer's Trust; E-Commerce Trust; Fuzzy System; Online Trust; Trust; Trust Factors; Trust Index (ID#:14-2833)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828166&isnumber=6827395
  • Wen-xue Geng; Fan'e Kong; Dong-qian Ma, "Study on Tactical Decision Of UAV Medium-Range Air Combat," Control and Decision Conference (2014 CCDC), The 26th Chinese, pp. 135-139, May 31-June 2, 2014. doi: 10.1109/CCDC.2014.6852132 To handle the uncertainty of the decision-making environment and meet real-time requirements in the tactical decisions of UAV medium-range air combat, a hybrid tactical decision-making method based on rule sets and a Fuzzy Bayesian Network (FBN) is proposed. By studying the process of UAV air combat, the main factors that affect the tactical decision are analyzed, and a corresponding FBN and expert system are built. The hybrid system retains the advantages of the expert system by calling it first; in the meantime, the system can also handle the uncertainty of the decision-making environment by means of the FBN. Finally, through air combat simulation, the correctness, real-time performance, and effectiveness of the hybrid tactical decision-making method in an uncertain environment are verified.
    Keywords: aerospace computing; autonomous aerial vehicles; belief networks; control engineering computing; decision making; expert systems; fuzzy control; fuzzy neural nets; military aircraft; military computing; neurocontrollers; FBN; UAV air combat; UAV medium-range air combat; air combat simulation; decision-making environment; expert system; fuzzy Bayesian network; hybrid system; hybrid tactical decision-making method; rule sets; uncertain environment; Atmospheric modeling; Bayes methods; Decision making; Expert systems; Missiles; Uncertainty; Fuzzy Bayesian network; UAV; expert system; medium-range air combat (ID#:14-2834)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852132&isnumber=6852105
  • Pozna, Claudiu; Foldesi, Peter; Precup, Radu-Emil; Koczy, Laszlo T., "On the Development Of Signatures For Artificial Intelligence Applications," Fuzzy Systems (FUZZ-IEEE), 2014 IEEE International Conference on, pp. 1304-1310, 6-11 July 2014. doi: 10.1109/FUZZ-IEEE.2014.6891636 This paper illustrates the development of signatures for Artificial Intelligence (AI) applications. Since signatures are data structures that have produced efficient results in the modeling of fuzzy inference systems and of uncertain expert systems, the paper starts with an analysis of the data structures used in AI applications from the knowledge representation and manipulation point of view. An overview of signatures, operators on signatures, and classes of signatures is then given. Using the proto fuzzy inference system, these operators are applied in a new application of a fuzzy inference system modeled by means of signatures and classes of signatures.
    Keywords: Adaptation models; Artificial intelligence; Data structures; Educational institutions; Fuzzy logic; Fuzzy sets; Unified modeling language; Artificial Intelligence; expert systems; knowledge representation; proto fuzzy inference systems; signatures (ID#:14-2835)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6891636&isnumber=6891523

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Facial Recognition

Facial Recognition


Facial recognition tools have long been the stuff of action-adventure films. In the real world, they present opportunities and complex problems being examined by researchers. The works cited here, presented or published in the first three quarters of 2014, address various techniques and issues such as the use of TDM, PCA and Markov models, application of keystroke dynamics to facial thermography, multiresolution alignment, and sparse representation.

  • Henderson, G.; Ellefsen, I., "Applying Keystroke Dynamics Techniques to Facial Thermography for Verification," IST-Africa Conference Proceedings, 2014, pp. 1-10, 7-9 May 2014. doi: 10.1109/ISTAFRICA.2014.6880626 The problem of verifying that the person accessing a system is the same person that was authorized to do so has existed for many years. Some of the solutions that have been developed to address this problem include continuous Facial Recognition and Keystroke Dynamics. Each of these has its own inherent flaws. We propose an approach that makes use of Facial Recognition and Keystroke Dynamics techniques and applies them to Facial Thermography. The mechanisms required to implement this new technique are discussed, as well as the trade-offs between the proposed approach and the existing techniques. This is followed by a discussion of some of the strengths and weaknesses of the proposed approach that need to be considered before the system is adopted by an organization.
    Keywords: authorisation; face recognition; infrared imaging; continuous facial recognition; facial thermography; keystroke dynamic techniques; person authorization; person verification; Accuracy; Cameras; Face; Face recognition; Fingerprint recognition; Security; Standards; Facial Recognition; Facial Thermography; Keystroke Dynamics; Temperature Digraphs; Verification (ID#:14-2872)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880626&isnumber=6880588
  • Meher, S.S.; Maben, P., "Face Recognition And Facial Expression Identification Using PCA," Advance Computing Conference (IACC), 2014 IEEE International, pp. 1093-1098, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779478 The face, being the primary focus of attention in social interaction, plays a major role in conveying identity and emotion. A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. The main aim of this paper is to analyse the method of Principal Component Analysis (PCA) and its performance when applied to face recognition. This algorithm creates a subspace (face space) where the faces in a database are represented using a reduced number of features called feature vectors. The PCA technique has also been used to identify various facial expressions such as happy, sad, neutral, anger, disgust, and fear. Experimental results show that PCA-based methods provide better face recognition with reasonably low error rates. The paper concludes that PCA is a good technique for face recognition, as it is able to identify faces fairly well under varying illumination, facial expressions, etc.
    Keywords: emotion recognition; face recognition; principal component analysis; vectors; video signal processing; PCA; database; digital image; error rates; face recognition; facial expression identification; facial recognition system; feature vectors; person identification; person verification; principal component analysis; social interaction; video frame; Conferences; Erbium; Eigen faces; Face recognition; Principal Component Analysis (PCA) (ID#:14-2873)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779478&isnumber=6779283
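    The eigenface construction behind PCA-based recognition fits in a few lines. A minimal NumPy sketch, assuming aligned, flattened grayscale faces; the component count is an arbitrary choice:

      import numpy as np

      def fit_eigenfaces(faces, n_components=20):
          """faces: (n_samples, n_pixels) array of flattened, aligned face images."""
          mean = faces.mean(axis=0)
          # SVD of the centered data yields the principal axes ("eigenfaces").
          _, _, vt = np.linalg.svd(faces - mean, full_matrices=False)
          return mean, vt[:n_components]

      def project(face, mean, eigenfaces):
          # Feature vector = coordinates of the face in the eigenface subspace;
          # recognition then compares these vectors (e.g., nearest neighbour).
          return eigenfaces @ (face - mean)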
  • Vijayalakshmi, M.; Senthil, T., "Automatic Human Facial Expression Recognition Using Hidden Markov Model," Electronics and Communication Systems (ICECS), 2014 International Conference on, pp. 1-5, 13-14 Feb. 2014. doi: 10.1109/ECS.2014.6892800 Facial recognition is a type of biometric software application that can identify a specific individual in a digital image by analyzing and comparing patterns. These systems are commonly used for security purposes but are increasingly being used in a variety of other applications such as residential security, voter verification, and ATM banking. Changes in facial expression make recognizing faces a difficult task. In this paper, continuous naturalistic affective expressions are recognized using a Hidden Markov Model (HMM) framework. Active Appearance Model (AAM) landmarks are considered for each frame of the videos; the AAMs are used to track the face and extract its visual features. Six different facial expressions are considered: happiness, sadness, anger, fear, surprise, and disgust. The expression recognition problem is solved through a multistage automatic pattern recognition system in which the temporal relationships are modeled through the HMM framework. Dimension levels (i.e., labels) can be defined as the hidden state sequences in the HMM framework, and the probabilities of these hidden states and their state transitions can be accurately computed from the labels of the training set. Through a three-stage classification approach, the output of a first-stage classification (using k-NN) is used as the observation sequences for a second-stage classification modeled as an HMM-based framework. A third classification stage, a decision fusion tool, is then used to boost overall performance.
    Keywords: biometrics (access control); face recognition; hidden Markov models; AAM landmarks; ATM; HMM framework; Hidden Markov Model; active appearance model; automatic human facial expression recognition; banking; biometric software application; digital image; hidden states; residential security; state transitions; voter verification; Active appearance model; Computational modeling; Face recognition; Hidden Markov models; Speech; Speech recognition; Support vector machine classification; Active Appearance Model (AAM); Dimension levels; Hidden Markov model (HMM); K Nearest Neighbor (k-NN) (ID#:14-2874)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6892800&isnumber=6892507
  • Chehata, Ramy C.G.; Mikhael, Wasfy B.; Atia, George, "A Transform Domain Modular Approach For Facial Recognition Using Different Representations And Windowing Techniques," Circuits and Systems (MWSCAS), 2014 IEEE 57th International Midwest Symposium on, pp. 817-820, 3-6 Aug. 2014. doi: 10.1109/MWSCAS.2014.6908540 A face recognition algorithm based on a newly developed Transform Domain Modular (TDM) approach is proposed. In this approach, the spatial faces are divided into smaller sub-images, which are processed using non-overlapping and overlapping windows. Each image is subsequently transformed using a compressing transform such as the two-dimensional discrete cosine transform. This produces the TDM-2D and the TDM-Dia, based on two-dimensional and diagonal representations of the data, respectively. The performance of this approach for facial image recognition is compared with successful state-of-the-art techniques. The test results, for noise-free and noisy images, yield higher than 97.5% recognition accuracy. The improved recognition accuracy is achieved while retaining comparable or better computational complexity and storage savings.
    Keywords: Face; Face recognition; Principal component analysis; Testing; Time division multiplexing; Training; Transforms (ID#:14-2875)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6908540&isnumber=6908326
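    A compact sketch of the transform-domain modular idea with non-overlapping windows, using SciPy's DCT; the block size and the number of retained low-frequency coefficients are assumptions, not the paper's settings:

      import numpy as np
      from scipy.fftpack import dct

      def dct2(block):
          # Separable 2-D DCT (type II, orthonormal): rows, then columns.
          return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

      def modular_dct_features(face, block=8, keep=3):
          h, w = face.shape
          feats = []
          for r in range(0, h - block + 1, block):        # non-overlapping windows
              for c in range(0, w - block + 1, block):
                  coeffs = dct2(face[r:r + block, c:c + block].astype(float))
                  # Keep the low-frequency corner of each sub-image as its feature.
                  feats.extend(coeffs[:keep, :keep].ravel())
          return np.array(feats)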
  • Aldhahab, Ahmed; Atia, George; Mikhael, Wasfy B., "Supervised Facial Recognition Based On Multi-Resolution Analysis And Feature Alignment," Circuits and Systems (MWSCAS), 2014 IEEE 57th International Midwest Symposium on, pp. 137-140, 3-6 Aug. 2014. doi: 10.1109/MWSCAS.2014.6908371 A new supervised algorithm for face recognition based on the integration of the Two-Dimensional Discrete Multiwavelet Transform (2-D DMWT), the 2-D Radon Transform, and the 2-D Discrete Wavelet Transform (2-D DWT) is proposed. In the feature extraction step, multiwavelet filter banks are used to extract useful information from the face images. The extracted information is then aligned using the Radon Transform and localized into a single band using the 2-D DWT for efficient sparse data representation. This information is fed into a neural-network-based classifier for training and testing. The proposed method is tested on three different databases, namely ORL, YALE, and subset fc of FERET, which comprise different poses and lighting conditions. It is shown that this approach can significantly improve the classification performance and the storage requirements of the overall recognition system.
    Keywords: Classification algorithms; Databases; Discrete wavelet transforms; Feature extraction; Multiresolution analysis; Training (ID#:14-2876)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6908371&isnumber=6908326
  • Zhen Gao; Shangfei Wang; Chongliang Wu; Jun Wang; Qiang Ji, "Facial Action Unit Recognition By Relation Modeling From Both Qualitative Knowledge And Quantitative Data," Multimedia and Expo Workshops (ICMEW), 2014 IEEE International Conference on, pp. 1-6, 14-18 July 2014. doi: 10.1109/ICMEW.2014.6890672 In this paper, we propose to capture Action Unit (AU) relations existing in both qualitative knowledge and quantitative data through Credal Networks (CN). Each node of the CN represents an AU label, and the links and probability intervals capture the probabilistic dependencies among multiple AUs. The structure of the CN is designed based on prior knowledge; the parameters of the CN are learned from both knowledge and ground-truth AU labels. Preliminary AU estimations are obtained by an existing image-driven recognition method. With the learned credal network, we infer the true AU labels by combining the relationships among labels with the previously obtained estimations. Experimental results on the CK+ and MMI databases demonstrate that with complete AU labels, our CN model is slightly better than the Bayesian Network (BN) model, demonstrating that credal sets learned from data can capture uncertainty more reliably; with incomplete and error-prone AU annotations, our CN model outperforms the BN model, indicating that credal sets can successfully capture qualitative knowledge.
    Keywords: face recognition; image sequences; probability; uncertainty handling; visual databases; AU label; AU preliminary estimation; BN model; CK+ database; CN model; MMI database; credal network; error-prone AU annotation; facial action unit recognition; image-driven recognition method; incomplete AU annotation; probabilistic dependency; probability interval; relation modeling; uncertainty handling; Data models; Databases; Gold; Hidden Markov models; Image recognition; Mathematical model; Support vector machines; AU recognition; credal network; prior knowledge (ID#:14-2877)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890672&isnumber=6890528
  • Leventic, H.; Livada, C.; Gaclic, I., "Towards Fixed Facial Features Face Recognition," Systems, Signals and Image Processing (IWSSIP), 2014 International Conference on, pp. 267-270, 12-15 May 2014. In this paper we propose a framework for the recognition of faces in controlled conditions. The framework consists of two parts: face detection and face recognition. For face detection we use the Viola-Jones face detector. The proposed face recognition part is based on the calculation of certain ratios on the face, where the features on the face are located by use of the Hough transform for circles. Experiments show that this framework presents a possible solution for the problem of face recognition.
    Keywords: Hough transforms; face recognition; Hough transform; Viola-Jones face detector; face detection; face recognition; fixed facial feature; Equations; Face; Face recognition; Nose; Transforms; Hough transform; Viola-Jones; face detection; face recognition (ID#:14-2878)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6837682&isnumber=6837609
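    Both stages of such a framework are available off the shelf in OpenCV. A rough sketch follows; the cascade file and the Hough parameters are illustrative choices, not the authors' configuration:

      import cv2

      # Pretrained Viola-Jones cascade shipped with the opencv-python package.
      face_cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

      def faces_with_circles(gray):
          results = []
          # Stage 1: Viola-Jones face detection.
          for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
              roi = gray[y:y + h, x:x + w]
              # Stage 2: Hough transform for circles, to locate circular
              # features whose relative positions give the matching ratios.
              circles = cv2.HoughCircles(roi, cv2.HOUGH_GRADIENT, dp=1,
                                         minDist=max(w // 8, 1),
                                         param1=100, param2=30,
                                         minRadius=w // 20, maxRadius=w // 8)
              results.append(((x, y, w, h), circles))
          return results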
  • Wilber, M.J.; Rudd, E.; Heflin, B.; Yui-Man Lui; Boult, T.E., "Exemplar Codes For Facial Attributes And Tattoo Recognition," Applications of Computer Vision (WACV), 2014 IEEE Winter Conference on, pp. 205-212, 24-26 March 2014. doi: 10.1109/WACV.2014.6836099 When implementing real-world computer vision systems, researchers can use mid-level representations as a tool to adjust the trade-off between accuracy and efficiency. Unfortunately, existing mid-level representations that improve accuracy tend to decrease efficiency, or are specifically tailored to work well within one pipeline or vision problem at the exclusion of others. We introduce a novel, efficient mid-level representation that improves classification efficiency without sacrificing accuracy. Our Exemplar Codes are based on linear classifiers and probability normalization from extreme value theory. We apply Exemplar Codes to two problems: facial attribute extraction and tattoo classification. In these settings, our Exemplar Codes are competitive with the state of the art and offer efficiency benefits, making it possible to achieve high accuracy even on commodity hardware with a low computational budget.
    Keywords: computer vision; face recognition; feature extraction; image classification; image representation; probability; classification efficiency; exemplar codes; extreme value theory; facial attribute extraction; linear classifiers; mid-level representations; probability normalization; real-world computer vision systems; tattoo classification; tattoo recognition; Accuracy; Face; Feature extraction; Libraries; Pipelines; Support vector machines; Training (ID#:14-2879)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6836099&isnumber=6835728
  • Hehua Chi; Yu Hen Hu, "Facial Image De-Identification Using Identity Subspace Decomposition," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp. 524-528, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6853651 How can the identity of a human face be concealed without covering the facial image? This is the question investigated in this work. Leveraging the high-dimensional feature representation of a human face in an Active Appearance Model (AAM), a novel method called the identity subspace decomposition (ISD) method is proposed. Using ISD, the AAM feature space is decomposed into an identity-sensitive subspace and an identity-insensitive subspace. By replacing the feature values in the identity-sensitive subspace with the averaged values of k individuals, one may realize a k-anonymity de-identification process on facial images. We developed a heuristic approach to empirically select the AAM features corresponding to the identity-sensitive subspace. We show that after applying k-anonymity de-identification to AAM features in the identity-sensitive subspace, the resulting facial images can no longer be distinguished by either human eyes or facial recognition algorithms.
    Keywords: face recognition; AAM feature space; ISD; active appearance model; facial image de-identification; facial recognition algorithms; high-dimensional feature representation; human eye recognition algorithms; identity subspace decomposition method; identity subspace decomposition; k-anonymity de-identification process; sensitive subspace; Active appearance model; Databases; Face; Face recognition; Facial features; Privacy; Vectors; active appearance model; data privacy; face recognition; identification of persons (ID#:14-2880)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853651&isnumber=6853544
  • Ptucha, R.; Savakis, A.E., "LGE-KSVD: Robust Sparse Representation Classification," Image Processing, IEEE Transactions on, vol.23, no.4, pp. 1737-1750, April 2014. doi: 10.1109/TIP.2014.2303648 The parsimonious nature of sparse representations has been successfully exploited for the development of highly accurate classifiers for various scientific applications. Despite the successes of sparse representation techniques, a large number of dictionary atoms as well as the high dimensionality of the data can make these classifiers computationally demanding. Furthermore, sparse classifiers are subject to the adverse effects of a phenomenon known as coefficient contamination, where, for example, variations in pose may affect identity and expression recognition. We analyze the interaction between dimensionality reduction and sparse representations, and propose a technique, called Linear extension of Graph Embedding K-means-based Singular Value Decomposition (LGE-KSVD), to address both issues of computational intensity and coefficient contamination. In particular, LGE-KSVD utilizes variants of the LGE to optimize the K-SVD, an iterative technique for small yet overcomplete dictionary learning. The dimensionality reduction matrix, sparse representation dictionary, sparse coefficients, and sparsity-based classifier are jointly learned through the LGE-KSVD. The atom optimization process is redefined to allow variable support using graph embedding techniques and produce a more flexible and elegant dictionary learning algorithm. Results are presented on a wide variety of facial and activity recognition problems that demonstrate the robustness of the proposed method.
    Keywords: dictionaries; image representation; iterative methods; optimisation; singular value decomposition; LGE-KSVD; activity recognition problems; atom optimization process; coefficient contamination; computational intensity; dictionary learning algorithm; dimensionality reduction matrix; expression recognition; facial recognition problems; graph embedding techniques; iterative technique; linear extension of graph embedding k-means-based singular value decomposition; robust sparse representation classification; sparse coefficients; sparse representation dictionary; sparsity-based classifier; Contamination; Dictionaries; Image reconstruction; Manifolds; Principal component analysis; Sparse matrices; Training; Dimensionality reduction; activity recognition; facial analysis; manifold learning; sparse representation (ID#:14-2881)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6728639&isnumber=6742656
  • Bong-Nam Kang; Jongmin Yoon; Hyunsung Park; Daijin Kim, "Face Recognition Using Affine Dense SURF-Like Descriptors," Consumer Electronics (ICCE), 2014 IEEE International Conference on, pp. 129-130, 10-13 Jan. 2014. doi: 10.1109/ICCE.2014.6775938 In this paper, we propose a method for pose- and facial-expression-invariant face recognition using affine dense SURF-like descriptors. The proposed method consists of four steps: 1) we normalize the face image using the face and eye detector; 2) we apply affine simulation to synthesize face images in various poses; 3) we build a descriptor on the overlapping block-based grid keypoints; 4) a probe image is compared with the reference images by performing nearest-neighbor matching. To improve the recognition rate, we use the keypoint distance ratio and the false-matched-keypoint ratio. The proposed method showed better performance than conventional methods in terms of recognition rate.
    Keywords: face recognition; probes; affine dense SURF-like descriptors; eye detector; face detector; facial expression invariant face recognition; false matched keypoint ratio; keypoint distance ratio; nearest neighbor matching; overlapping block-based grid keypoints; pose face images; probe image; recognition rate; Computer vision; Conferences; Educational institutions; Face; Face recognition; Probes; Vectors (ID#:14-2882)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6775938&isnumber=6775879

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Forward Error Correction

Forward Error Correction


Controlling errors in data transmission in noisy or lossy circuits is a problem often solved by channel coding or forward error correction. The articles cited here look at bit error rates, energy efficiency, hybrid networks, and transportation systems. This research was presented in the first three quarters of 2014.

  • Hai Dao Thanh; Morvan, M.; Gravey, P.; Cugini, F.; Cerutti, I., "On the Spectrum-Efficiency Of Transparent Optical Transport Network Design With Variable-Rate Forward Error Correction Codes," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp. 1173-1177, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779143 We discuss flexible-rate optical transmission enabled by forward error correction (FEC) code adjustment. The adaptation of FEC codes to a given transmission condition gives rise to a trade-off between transmission rate and optical reach. In this paper, that compromise is addressed from a network planning standpoint. A static transparent network planning problem taking this rate-reach trade-off into account is formulated. A case study is solved in a realistic NSF network, with a comparison between mixed line rate (MLR) (10/40/100 Gbps) and flexible rate (FlexRate) by FEC variation (10-100 Gbps in steps of 10 Gbps). The results show that the maximum link load can be reduced by up to ~60% with FlexRate compared with MLR, and the reduction becomes evident at high traffic load. Moreover, thanks to finer rate adaptation, FlexRate can support around three times more traffic than MLR.
    Keywords: forward error correction; light transmission; optical fibre networks; telecommunication network planning; telecommunication traffic; variable rate codes; flexible rate optical transmission; mixed line rate; network planning standpoint; static transparent network planning; traffic load; transparent optical transport network design; variable rate forward error correction codes; Adaptive optics; Integrated optics; Optical fiber networks; Optical fibers; Planning; Transponders; Elastic Transponder; Fiber-Optic Communication; Flexible Optical Network; Forward Error Correction; Network Optimization (ID#:14-3083)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779143&isnumber=6778899
  • Ahmed, Q.Z.; Ki-Hong Park; Alouini, M.-S.; Aissa, S., "Linear Transceiver Design for Nonorthogonal Amplify-and-Forward Protocol Using a Bit Error Rate Criterion," Wireless Communications, IEEE Transactions on, vol.13, no.4, pp. 1844-1853, April 2014. doi: 10.1109/TWC.2014.022114.130369 The ever-growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network. This has led to new challenges in terms of designing new protocols and detectors for cooperative communications. Among various amplify-and-forward (AF) protocols, the half-duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity. Furthermore, in order to exploit the full diversity of the system, an optimal precoder is required. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principle of minimum bit error rate (BER) and is referred to as the joint bit error rate (JBER) detector. The BER performance of the JBER detector is superior to that of all previously proposed linear detectors, such as channel inversion, maximal ratio combining, the biased maximum likelihood detector, and the minimum mean square error detector. The proposed transceiver also outperforms previous precoders designed for the NAF protocol.
    Keywords: amplify and forward communication; cooperative communication; detector circuits; diversity reception; error statistics; least mean squares methods; maximum likelihood detection; optimisation; precoding; protocols; radio transceivers; JBER detector; NAF protocols; biased maximum likelihood detectors; bit error rate criterion; channel inversion; cooperative communications; cooperative diversity; duplex nonorthogonal amplify-and-forward protocol; error performance; idle users; joint bit error rate; linear detectors; linear transceiver design; maximal ratio combining; minimum mean square error; optimal joint linear transceiver; optimal precoder; receiver complexity; spatial diversity; Bit error rate; Complexity theory; Detectors; Diversity reception; Modulation; Protocols; Vectors; Cooperative diversity; bit error rate (BER); minimum mean square error (MMSE); nonorthogonal amplify-and-forward protocol (ID#:14-3084)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6754118&isnumber=6803026
  • Fareed, M.M.; Uysal, M.; Tsiftsis, T.A., "Error-Rate Performance Analysis of Cooperative OFDMA System With Decode-and-Forward Relaying," Vehicular Technology, IEEE Transactions on, vol.63, no.5, pp. 2216-2223, Jun. 2014. doi: 10.1109/TVT.2013.2290780 In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order up to $(L_{S_kD} + 1) + \sum_{m=1}^{M} \min(L_{S_kR_m} + 1, L_{R_mD} + 1)$ is available, where $M$ is the number of relays, and $L_{S_kD} + 1$, $L_{S_kR_m} + 1$, and $L_{R_mD} + 1$ are the lengths of the channel impulse responses of the source-to-destination, source-to-$m$th-relay, and $m$th-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings.
    Keywords: Monte Carlo methods; OFDM modulation; cooperative communication; decode and forward communication; diversity reception; frequency division multiple access; telecommunication channels; transient response; DaF relaying; Monte Carlo simulation; channel impulse responses; closed-form approximate symbol-error-rate expression; cooperative OFDMA system; decode-and-forward relaying; diversity orders; error-rate performance analysis; orthogonal frequency-division multiple-access system; relay location; relay-to-destination links; source-to-destination; source-to-mth relay; Approximation methods; Error analysis; Maximum likelihood decoding; OFDM; Relays; Resource management; Upper bound; Error rate; Orthogonal frequency division multiple access; error rate; orthogonal frequency-division multiple access (OFDMA); power allocation; relay channels (ID#:14-3085)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6663693&isnumber=6832681
  • Kaddoum, G.; Gagnon, F., "Lower Bound On The Bit Error Rate Of A Decode-And-Forward Relay Network Under Chaos Shift Keying Communication System," Communications, IET, vol.8, no.2, pp. 227-232, January 23 2014. doi: 10.1049/iet-com.2013.0421 This study carries out the first-ever analysis of a cooperative decode-and-forward (DF) relay network with chaos shift keying (CSK) modulation. The performance analysis of DF-CSK in this study takes into account the dynamical nature of the chaotic signal, which differs from conventional binary modulation performance computation methodologies. The expression of a lower-bound bit error rate (BER) is derived in order to investigate the performance of the cooperative system under independently and identically distributed Gaussian fading wireless environments. The effect of the non-periodic nature of the chaotic sequence, which leads to a non-constant bit energy in the considered modulation, is also investigated. A computation approach for the BER expression based on the probability density function of the bit energy of the chaotic sequence, the channel distribution, and the number of relays is presented. Simulation results prove the accuracy of the authors' BER computation methodology.
    Keywords: Gaussian distribution; chaotic communication; cooperative communication; decode and forward communication; error statistics; fading channels; phase shift keying; probability; relay networks (telecommunication); BER; CSK modulation; binary modulation; bit error rate; channel distribution; chaos shift keying communication system; chaotic sequence; chaotic signal; cooperative decode-and-forward relay network; distributed Gaussian fading wireless environments; nonconstant bit energy; nonperiodic nature; probability density function (ID#:14-3086)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6740269&isnumber=6740097
  • Al-Kali, M.; Li Yu; Mohammed, A.A., "Performance Analysis Of Energy Efficiency And Symbol Error Rate In Amplify-And-Forward Cooperative MIMO Networks," Ubiquitous and Future Networks (ICUFN), 2014 Sixth International Conference on, pp. 448-453, 8-11 July 2014. doi: 10.1109/ICUFN.2014.6876831 In this paper, we analyze the energy efficiency and the symbol error rate (SER) in cooperative multiple-input multiple-output (MIMO) relay networks. We employ an amplify-and-forward (AF) relay scheme, where a relay access point equipped with Q antennas cooperatively forwards packets to the destination. Under the assumption of Rayleigh fading channels and time division multiplexing (TDM), we derive new exact closed-form expressions for the outage probability, the SER, and the energy efficiency, valid for Q antennas. A further asymptotic analysis is carried out in the high-SNR regime to characterize the energy efficiency in terms of the diversity order and the array gain. Subsequently, our expressions are quantitatively compared with Monte Carlo simulations. Numerical results are provided to validate the exact and the asymptotic expressions. The results show that the energy efficiency decreases with the number of antennas at the relay according to Q+1. The behavior of the energy efficiency with respect to the relay location is also discussed in this paper.
    Keywords: MIMO communication; Monte Carlo methods; Rayleigh channels; amplify and forward communication; fading channels; probability; relay networks (telecommunication) ;time division multiplexing; AF relay scheme; MIMO relay networks; Monte Carlo simulations; Q antennas; Rayleigh fading channels; SER; TDM; amplify-and-forward cooperative MIMO networks; array gain; asymptotic analysis; energy efficiency; multiple-input multiple-output relay networks; outage probability; performance analysis; relay locations; symbol error rate; time division multiplexing; Antennas; Arrays; Diversity reception; MIMO; Modulation; Relays; Signal to noise ratio; Cooperative MIMO; cooperative diversity; energy efficiency; symbol error rate (ID#:14-3087)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876831&isnumber=6876727
  • Rasmussen, A; Yankov, M.P.; Berger, M.S.; Larsen, K.J.; Ruepp, S., "Improved Energy Efficiency for Optical Transport Networks by Elastic Forward Error Correction," Optical Communications and Networking, IEEE/OSA Journal of, vol. 6, no.4, pp.397, 407, April 2014. doi: 10.1364/JOCN.6.000397 In this paper we propose a scheme for reducing the energy consumption of optical links by means of adaptive forward error correction (FEC). The scheme works by performing on-the-fly adjustments to the code rate of the FEC, adding extra parity bits to the data stream whenever extra capacity is available. We show that this additional parity information decreases the number of necessary decoding iterations and thus reduces the power consumption in iterative decoders during periods of low load. The code rate adjustments can be done on a frame-by-frame basis and thus make it possible to manipulate the balance between effective data rate and FEC coding gain without any disruption to the live traffic. As a consequence, these automatic adjustments can be performed very often based on the current traffic demand and bit error rate performance of the links through the network. The FEC scheme itself is designed to work as a transparent add-on to transceivers running the optical transport network (OTN) protocol, adding an extra layer of elastic soft-decision FEC to the built-in hard-decision FEC implemented in OTN, while retaining interoperability with existing OTN equipment. In order to facilitate dynamic code rate adaptation, we propose a programmable encoder and decoder design approach, which can implement various codes depending on the desired code rate using the same basic circuitry. This design ensures optimal coding gain performance with a modest overhead for supporting multiple codes with minimal impact on the area and power requirements of the decoder.
    Keywords: access protocols; energy conservation; error statistics; forward error correction; iterative decoding; optical fibre networks; optical links; optical transceivers; power consumption ;telecommunication standards; OTN protocol; adaptive FEC; adaptive forward error correction; bit error rate; built-in hard-decision FEC; data stream; decoding iterations; dynamic code rate adaptation; elastic forward error correction; elastic soft-decision FEC; energy consumption; energy efficiency; iterative decoders; optical links; optical transport network protocol; optimal coding gain performance; parity information ;power consumption; programmable encoder; traffic demand; transceivers; Bit error rate; Decoding; Encoding; Forward error correction; Iterative decoding; Optical fiber communication; Elastic optical networks; Optical transport networks; Optically switched networks; Rate adaptive forward error correction (ID#:14-3088)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821329&isnumber=6821321
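    The heart of the scheme above is a load-driven choice of FEC code rate: whenever spare capacity is available, more parity is sent and the iterative decoder needs fewer iterations, saving power. A minimal sketch of that selection logic, with invented rates and units rather than the paper's actual OTN parameters, might look like this:

        def pick_code_rate(line_rate_gbps, demand_gbps, rates=(0.80, 0.85, 0.90, 0.95)):
            # Return the lowest (strongest-FEC) code rate whose payload capacity
            # still carries the current demand; rates and units are invented.
            for r in sorted(rates):
                if line_rate_gbps * r >= demand_gbps:
                    return r
            return max(rates)   # full load: fall back to the weakest FEC

        print(pick_code_rate(112, 80))    # light load  -> 0.80 (most parity)
        print(pick_code_rate(112, 105))   # heavy load  -> 0.95 (least parity)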
  • Ying Zhang; Huapeng Zhao; Chuanyi Pan, "Optimization of an Amplify-and-Forward Relay Network Considering Time Delay and Estimation Error in Channel State Information," Vehicular Technology, IEEE Transactions on, vol.63, no.5, pp. 2483, 2488, June 2014. doi: 10.1109/TVT.2013.2292939 This paper presents the optimization of an amplify-and-forward (AF) relay network with time delay and estimation error in channel state information (CSI). The CSI time delay and estimation error are modeled by the channel time variation model and stochastic error model, respectively. The conditional probability density function of the ideal CSI upon the estimated CSI is computed based on these two models, and it is used to derive the conditional expectation of the mean square error (MSE) between estimated and desired signals upon estimated CSI, which is minimized to optimize the beamforming and equalization coefficients. Computer simulations show that the proposed method obtains a lower bit error rate (BER) than the conventional minimum MSE and the max-min SNR strategies when CSI contains time delay and estimation error.
    Keywords: amplify and forward communication; delays; least mean squares methods; optimisation;relay networks (telecommunication); stochastic processes; BER; amplify-and-forward relay network; beamforming; bit error rate; channel state information; channel time variation model; conditional probability density function; equalization coefficients; estimation error; minimum mean square error; stochastic error model; time delay; Bit error rate; Channel estimation; Correlation; Delay effects; Estimation error; Relays; Signal to noise ratio; Amplify and forward (AF); Amplify-and-forward; conditional expectation; estimation error; minimum mean square error; minimum mean square error (MMSE);outdated channel state information; outdated channel state information (CSI);relay network (ID#:14-3089)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6675878&isnumber=6832681
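    The key modeling step above is conditioning on imperfect CSI: the true channel is written as a time-correlated function of the estimate plus an estimation error, and the design uses the conditional expectation of the channel given the estimate. A small numerical sketch of that idea, under an assumed Gauss-Markov correlation rho and error variance (values invented):

        import numpy as np

        rng = np.random.default_rng(0)
        n, rho, sigma_e = 100_000, 0.9, 0.1   # assumed correlation and error std

        def cn(size):
            # Circularly symmetric complex Gaussian samples, unit variance.
            return (rng.normal(size=size) + 1j * rng.normal(size=size)) / np.sqrt(2)

        h_est = cn(n)
        # True channel = correlated part of the estimate + innovation from the
        # CSI delay + additive estimation error (Gauss-Markov-style model).
        h = rho * h_est + np.sqrt(1 - rho ** 2) * cn(n) + sigma_e * cn(n)

        # MMSE designs condition on the estimate: E[h | h_est] = rho * h_est.
        print(np.mean(np.abs(h - h_est) ** 2))         # pretend the estimate is exact
        print(np.mean(np.abs(h - rho * h_est) ** 2))   # use the conditional mean: lower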
  • Rafique, D.; Napoli, A; Calabro, S.; Spinnler, B., "Digital Preemphasis in Optical Communication Systems: On the DAC Requirements for Terabit Transmission Applications," Lightwave Technology, Journal of, vol.32, no.19, pp.3247, 3256, Oct.1, 1 2014. doi: 10.1109/JLT.2014.2343957 Next-generation coherent optical systems are geared to employ high-speed digital-to-analog converters (DAC), allowing for digital preprocessing of the signal and flexible optical transport networks. However, one of the major obstacles in such architectures is the limited resolution (less than 5.5 effective bits) and -3 dB bandwidth of commercial DACs, typically limited to half of the currently commercial baud rates, and even relatively reduced in case of higher baud rate transponders (400 Gb/s and 1 Tb/s). In this paper, we propose a simple digital preemphasis (DPE) algorithm to compensate for DAC-induced signal distortions, and exhaustively investigate the impact of DAC specifications on system performance, both with and without DPE. As an outcome, performance improvements are established across various DAC hardware requirements (effective number of bits and bandwidth) and channel baud rates, for m-state quadrature amplitude modulation (QAM) formats. In particular, we show that lower order modulation formats are least affected by DAC limitations, however, they benefit the most from DPE in extremely challenging hardware conditions. On the contrary, higher order formats are severely limited by DAC distortions, and moderately benefit from DPE across a wide range of DAC specifications. Moreover, effective number of bit requirements are established for m-state QAM, assuming low and high baud rate transmission regimes. Finally, we discuss the application scenarios for the proposed DPE in next-generation terabit transmission systems, and establish maximum transportable baud rates, which are shown to be used toward increasing channel baud rates to reduce terabit subcarrier count or toward increasing forward error correction (FEC) overheads to reduce the pre-FEC bit error rate threshold. Maximum baud rates after DPE are summarized here for polarization multiplexed BPSK, QPSK, 8QAM, and 16QAM, assuming two DACs: Current commercial DACs (5.5 effective bits, 16 GHz bandwidth): 57, 54, 51, and 48 Gbaud, respectively. Next-generation DACs (7 effective bits, 22 GHz bandwidth): 62, 61, 60, and 58 Gbaud, respectively.
    Keywords: Bandwidth; Noise; Q-factor; Quadrature amplitude modulation; Receivers; Transfer functions; Coherent detection; Nyquist; digital signal processing; digital-to-analog converter; pre-emphasis (ID#:14-3090)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868202&isnumber=6877758
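    Digital preemphasis of this kind amounts to multiplying the transmit spectrum by an approximate inverse of the DAC frequency response, with the boost capped so that limited DAC resolution is not wasted at the band edge. A toy numpy sketch using a first-order low-pass as a stand-in for a real DAC response (all numbers illustrative):

        import numpy as np

        n, fs = 4096, 64e9                          # samples, sample rate (illustrative)
        f = np.fft.rfftfreq(n, d=1 / fs)
        h_dac = 1 / np.sqrt(1 + (f / 16e9) ** 2)    # first-order low-pass DAC stand-in

        x = np.random.default_rng(2).choice([-1.0, 1.0], n)   # toy data waveform
        boost = np.minimum(1 / h_dac, 3.0)          # inverse response, capped so the
                                                    # boost does not eat DAC resolution
        x_pre = np.fft.irfft(np.fft.rfft(x) * boost, n)       # pre-emphasized signal
        y_pre = np.fft.irfft(np.fft.rfft(x_pre) * h_dac, n)   # after the DAC roll-off
        y_raw = np.fft.irfft(np.fft.rfft(x) * h_dac, n)       # without pre-emphasis
        print(np.std(y_pre - x), "<", np.std(y_raw - x))      # smaller residual distortion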
  • Qiang Huo; Tianxi Liu; Shaohui Sun; Lingyang Song; Bingli Jiao, "Selective Combining For Hybrid Cooperative Networks," Communications, IET, vol.8, no.4, pp.471,482, March 6 2014. doi: 10.1049/iet-com.2013.0323 In this study, we consider the selective combining in hybrid cooperative networks (SCHCN scheme) with one source node, one destination node and N relay nodes. In the SCHCN scheme, each relay first adaptively chooses between amplify-and-forward protocol and decode-and-forward protocol on a per frame basis by examining the error-detecting code result, and Nc (1 ≤ Nc ≤ N) relays will be selected to forward their received signals to the destination. We first develop a signal-to-noise ratio (SNR) threshold-based frame error rate (FER) approximation model. Then, the theoretical FER expressions for the SCHCN scheme are derived by utilising the proposed SNR threshold-based FER approximation model. The analytical FER expressions are validated through simulation results.
    Keywords: amplify and forward communication; cooperative communication; decode and forward communication; diversity reception; error detection codes; error statistics ;FER approximation model; SCHCN; amplify-and-forward protocol; decode-and-forward protocol; destination node; error detecting code; frame error rate; hybrid cooperative networks; relay nodes; selective combining; signal-to-noise ratio (ID#:14-3091)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758416&isnumber=6758407
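    The SNR-threshold FER approximation mentioned above models a frame as lost whenever the instantaneous SNR falls below a threshold; since the instantaneous SNR under Rayleigh fading is exponentially distributed, this gives a one-line closed form. A sketch with assumed threshold and average SNR values:

        import numpy as np

        def fer_rayleigh(avg_snr_db, th_snr_db):
            # SNR-threshold model: a frame is lost when the instantaneous SNR,
            # exponentially distributed under Rayleigh fading, falls below the
            # threshold, so FER = P(snr < th) = 1 - exp(-th/avg).
            avg = 10 ** (avg_snr_db / 10)
            th = 10 ** (th_snr_db / 10)
            return 1 - np.exp(-th / avg)

        # Cross-check the closed form against a Monte Carlo draw.
        rng = np.random.default_rng(3)
        snr = 10 ** (15 / 10) * rng.exponential(1.0, 1_000_000)
        print(fer_rayleigh(15, 6), np.mean(snr < 10 ** (6 / 10)))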
  • Haifeng Zhu; Bajekal, S.; Lakamraju, V.; Murray, B., "A Radio System Design Tool For Forward Error Corrections In Wireless CSMA Networks: Analysis And Economics," Radio and Wireless Symposium (RWS), 2014 IEEE, pp.145,147, 19-23 Jan. 2014. doi: 10.1109/RWS.2014.6830160 As cyber-physical systems become pervasive, their power consumption and system design practices are major concerns. This paper explores problems of deploying Forward Error Correction (FEC) in wireless commercial standards such as IEEE 802.11b and 802.15.4. First, we describe battery-life estimation that includes practical factors such as system issues and the negative impact of retransmissions versus the power impact of encoding-scheme overhead. Secondly, we explore the link to design economics and demonstrate a design decision method. Theoretical analyses validated with simulations provide a decision tool for engineers and management during system design. In contrast to previously unfavorable assessments of FEC, we show that for cyber-physical devices FEC should now be strongly considered under the proper circumstances, as it provides the opportunity to save communications-related energy for prolonged battery life, which is critical for devices in hard-to-reach locations and on the battlefield.
    Keywords: Zigbee; carrier sense multiple access; encoding;forward error correction; power consumption; wireless LAN; wireless channels;FEC; IEEE 802.11b;IEEE 802.15.4;battery life estimation; cyber-physical devices; cyber-physical systems; design decision method; economics; encoding; forward error corrections; power consumption; radio system design tool; wireless CSMA networks; wireless commercial standards; Automatic repeat request; Batteries; Bit error rate; Economics ;Encoding; Forward error correction; Power demand; FEC; Wireless; power consumption; system design (ID#:14-3092)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830160&isnumber=6830066
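    In its simplest form, the battery-life trade-off analyzed above comes down to comparing expected transmit energy per delivered packet: retransmissions multiply the cost by a geometric factor of 1/(1 - PER), while FEC adds a fixed fractional overhead but lowers the PER it faces. A minimal sketch with invented numbers:

        def energy_per_delivered_packet(e_tx, per, fec_overhead=0.0):
            # ARQ retries cost a full packet each; the expected attempt count is
            # geometric, 1/(1 - PER). FEC adds a fixed fractional overhead but
            # lowers the PER it faces.
            return e_tx * (1 + fec_overhead) / (1 - per)

        # Invented numbers: FEC costs 15% more bits but cuts PER from 20% to 2%.
        print(energy_per_delivered_packet(1.0, 0.20))         # ARQ only -> 1.25
        print(energy_per_delivered_packet(1.0, 0.02, 0.15))   # with FEC -> ~1.17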
  • Chang, Shih-Ying; Chiao, Hsin-Ta; Hung, Yu-Hsian, "Ideal Forward Error Correction Codes for High-Speed Rail Multimedia Communications," Vehicular Technology, IEEE Transactions on, vol. PP, no.99, pp.1, 1, March 2014. doi: 10.1109/TVT.2014.2310897 In recent years, Application Layer-Forward Error Correction (AL-FEC), especially rateless AL-FEC, has received a lot of attention due to its superior performance in both transmissional and computational efficiency. Rateless AL-FEC (e.g., Raptor code or LT code) can protect a large data block with an overhead somewhat close to ideal codes. In the meantime, its data processing rates of both encoding and decoding are quite efficient even in software implementations. However, we found that conventional rateless AL-FEC schemes may not be the best candidates when considering streaming over WiMAX networks for high-speed rail reception in Taiwan. In this paper, we propose a new ideal AL-FEC scheme based on the Chinese Remainder Theorem (CRT) to facilitate streaming service delivery for high-speed rail reception. The proposed scheme can support the rateless property, but it requires less transmission overhead than conventional rateless codes. Although it requires higher computational cost than conventional rateless codes, the cost is affordable for commodity laptops. Besides measuring the FEC computation, storage, and decoder overhead, we also evaluate its performance in an emulation environment for simulating high-speed rail reception over WiMAX networks. The emulation result shows that the proposed scheme can achieve the same error protection as Raptor codes, but it requires less transmission overhead, suitable for protecting data transmission over bandwidth-limited, high-mobility erasure channels.
    Keywords: Decoding; Digital video broadcasting; Encoding; Forward error correction; Maintenance engineering; Systematics; WiMAX (ID#:14-3093)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6763072&isnumber=4356907
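    Although the paper's code is more involved, the underlying CRT idea can be sketched directly: a data block is split into residues modulo pairwise-coprime moduli, one residue per packet, and the Chinese Remainder Theorem rebuilds the block from any subset of residues whose moduli product covers the block range, which is what gives the rateless flavor. A toy Python sketch with invented moduli:

        from math import prod

        def crt_encode(block, moduli):
            # One residue per packet; extra moduli are pure redundancy (rateless).
            return [block % m for m in moduli]

        def crt_decode(residues, moduli):
            # Chinese Remainder Theorem reconstruction from any residue subset
            # whose moduli product exceeds the block range.
            M = prod(moduli)
            return sum(r * (M // m) * pow(M // m, -1, m)
                       for r, m in zip(residues, moduli)) % M

        moduli = [251, 253, 255, 256]        # pairwise coprime (invented choice)
        block = 12_345_678
        rx = crt_encode(block, moduli)
        # Any three residues suffice here, since 251*253*255 > block:
        assert crt_decode(rx[:3], moduli[:3]) == block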
  • JongJun Park; Jongsoo Jeong; Hoon Jeong; Liang, C.-J.M.; JeongGil Ko, "Improving the Packet Delivery Performance for Concurrent Packet Transmissions in WSNs," Communications Letters, IEEE , vol.18, no. 1, pp.58,61, January 2014. doi: 10.1109/LCOMM.2013.112013.131974 In this letter, we investigate the properties of packet collisions in IEEE 802.15.4-based wireless sensor networks when packets with the same content are transmitted concurrently. While the nature of wireless transmissions allows the reception of a packet when the same packet is transmitted at different radios with (near) perfect time synchronization, we find that in practical systems, platform specific characteristics, such as the independence and error of the crystal oscillators, cause packets to collide disruptively when the two signals have similar transmission powers (i.e., differences of <2 dBm). In such scenarios, the packet reception ratio (PRR) of concurrently transmitted packets falls below 10%. Nevertheless, we empirically show that the packet corruption patterns are easily recoverable using forward error correction schemes and validate this using implementations of RS and convolutional codes. Overall, our results show that using such error correction schemes can increase the PRR by more than four-fold.
    Keywords: Reed-Solomon codes; Zigbee; convolutional codes; wireless sensor networks; IEEE 802.15.4-based wireless sensor networks; RS codes; WSN; concurrent packet transmissions; convolutional codes; forward error correction schemes; packet delivery performance; Convolutional codes; Crystals; Forward error correction; IEEE 802.15 Standards; Oscillators; Radio transmitters; Wireless sensor networks; Concurrent transmissions and forward error correction; wireless sensor networks (ID#:14-3094)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6679191&isnumber=6716946
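    The paper uses RS and convolutional codes; as a self-contained illustration of how forward error correction recovers the kind of bit-corruption pattern described above, here is a minimal Hamming(7,4) encoder/decoder (a far weaker code than RS, used only to keep the sketch short):

        def hamming74_encode(d):
            # Encode 4 data bits into a 7-bit codeword (positions 1..7).
            d1, d2, d3, d4 = d
            return [d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d1, d2 ^ d3 ^ d4, d2, d3, d4]

        def hamming74_decode(c):
            # Locate and flip a single corrupted bit, then strip the parity bits.
            c = list(c)
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2,3,6,7
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4,5,6,7
            err = s1 + 2 * s2 + 4 * s3       # 1-indexed error position, 0 = clean
            if err:
                c[err - 1] ^= 1
            return [c[2], c[4], c[5], c[6]]

        cw = hamming74_encode([1, 0, 1, 1])
        cw[3] ^= 1                           # inject one bit error, as a collision might
        assert hamming74_decode(cw) == [1, 0, 1, 1]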

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Fuzzy Logic and Security

Fuzzy Logic and Security


Fuzzy logic is being used to develop a number of security systems. The articles cited here include research into fuzzy logic-based security for software-defined networks, industrial controls, intrusion response and recovery, wireless sensor networks, and more. These works were presented or published in 2014.

  • Dotcenko, S.; Vladyko, A; Letenko, I, "A Fuzzy Logic-Based Information Security Management For Software-Defined Networks," Advanced Communication Technology (ICACT), 2014 16th International Conference on, vol., no., pp.167,171, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778942 In terms of network security, software-defined networks (SDN) offer researchers unprecedented control over network infrastructure and define a single point of control over the data flow routing of the entire network infrastructure. The OpenFlow protocol is an embodiment of the software-defined networking paradigm. OpenFlow network security applications can implement more complex logic in processing flows than simply permitting or prohibiting them; such applications can implement complex quarantine procedures, or redirect malicious network flows for special treatment. Security detection and intrusion prevention algorithms can be implemented as OpenFlow security applications, and such implementations are often more concise and effective. In this paper we consider an information security management algorithm based on soft computing, and implement a prototype intrusion detection system (IDS) for software-defined networks, consisting of a statistics collection and processing module and a decision-making module. These modules were implemented as an application for the Beacon controller in Java. Evaluation of the system was carried out on one of the main problems of network security: identification of hosts engaged in malicious network scanning. The modules were evaluated in the Mininet environment, which provides rapid prototyping for OpenFlow networks. The proposed algorithm, combined with decision making based on fuzzy rules, showed better results than the security algorithms used separately. In addition, the number of code lines decreased by 20-30%, and the ability to easily integrate various external modules and libraries greatly simplifies the implementation of the algorithms and the decision-making system.
    Keywords: decision making; fuzzy logic; protocols; security of data; software radio; telecommunication control; telecommunication network management; telecommunication network routing; telecommunication security; Java; OpenFlow protocol; beacon controller; data flows routing; decision making; decision-making module; fuzzy logic-based information security management; intrusion detection system; intrusion prevention algorithms; logic processing flows; malicious network flows; malicious network scanning; mininet environment; network infrastructure; network security; processing module; security detection; soft computing; software-defined networks; statistic collection; Decision making; Information security; Software algorithms; Switches; Training; Fuzzy Logic; Information security; OpenFlow; Port scan; Software-Defined Networks (ID#:14-2862)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778942&isnumber=6778899
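    The decision-making module described above scores traffic through fuzzy rules rather than hard thresholds. A stand-alone sketch of that style of inference for the paper's scan-detection use case, with invented membership breakpoints and rules (not the authors'):

        def ramp(x, a, b):
            # Shoulder membership: 0 below a, 1 above b, linear in between.
            return min(1.0, max(0.0, (x - a) / (b - a)))

        def scan_risk(ports_per_min, failed_conn_ratio):
            many_ports = ramp(ports_per_min, 20, 200)
            many_fails = ramp(failed_conn_ratio, 0.3, 0.8)
            high = min(many_ports, many_fails)           # both symptoms (min t-norm)
            medium = 0.5 * max(many_ports, many_fails)   # a single symptom alone
            return max(high, medium)                     # strongest firing rule wins

        print(scan_risk(500, 0.95))   # aggressive scanner -> 1.0
        print(scan_risk(5, 0.05))     # normal host        -> 0.0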
  • Vollmer, T.; Manic, M.; Linda, O., "Autonomic Intelligent Cyber-Sensor to Support Industrial Control Network Awareness," Industrial Informatics, IEEE Transactions on, vol.10, no.2, pp.1647, 1658, May 2014. doi: 10.1109/TII.2013.2270373 The proliferation of digital devices in a networked industrial ecosystem, along with an exponential growth in complexity and scope, has resulted in elevated security concerns and management complexity issues. This paper describes a novel architecture utilizing concepts of autonomic computing and a simple object access protocol (SOAP)-based interface to metadata access points (IF-MAP) external communication layer to create a network security sensor. This approach simplifies integration of legacy software and supports a secure, scalable, and self-managed framework. The contribution of this paper is twofold: 1) A flexible two-level communication layer based on autonomic computing and service oriented architecture is detailed and 2) three complementary modules that dynamically reconfigure in response to a changing environment are presented. One module utilizes clustering and fuzzy logic to monitor traffic for abnormal behavior. Another module passively monitors network traffic and deploys deceptive virtual network hosts. These components of the sensor system were implemented in C++ and PERL and utilize a common internal D-Bus communication mechanism. A proof of concept prototype was deployed on a mixed-use test network showing the possible real-world applicability. In testing, 45 of the 46 network-attached devices were recognized and 10 of the 12 emulated devices were created with specific operating system and port configurations. In addition, the anomaly detection algorithm achieved a 99.9% recognition rate. All output from the modules was correctly distributed using the common communication structure.
    Keywords: access protocols; computer network security; fault tolerant computing; field buses; fuzzy logic; industrial control; intelligent sensors; meta data; network interfaces; pattern clustering; C++;IF-MAP; PERL; SOAP-based interface; anomaly detection algorithm; autonomic computing; autonomic intelligent cyber-sensor; digital device proliferation; flexible two-level communication layer; fuzzy logic; industrial control network awareness; internal D-Bus communication mechanism; legacy software; metadata access point external communication layer; mixed-use test network; network security sensor; networked industrial ecosystem; proof of concept prototype; self-managed framework; service oriented architecture; simple object access protocol-based interface; traffic monitor; virtual network hosts; Autonomic computing; control systems ;industrial ecosystems; network security; service-oriented architecture (ID#:14-2863)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6547755&isnumber=6809862
  • Zonouz, S.A; Khurana, H.; Sanders, W.H.; Yardley, T.M., "RRE: A Game-Theoretic Intrusion Response and Recovery Engine," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.2, pp.395, 406, Feb. 2014. doi: 10.1109/TPDS.2013.211 Preserving the availability and integrity of networked computing systems in the face of fast-spreading intrusions requires advances not only in detection algorithms, but also in automated response techniques. In this paper, we propose a new approach to automated response called the response and recovery engine (RRE). Our engine employs a game-theoretic response strategy against adversaries modeled as opponents in a two-player Stackelberg stochastic game. The RRE applies attack-response trees (ART) to analyze undesired system-level security events within host computers and their countermeasures using Boolean logic to combine lower level attack consequences. In addition, the RRE accounts for uncertainties in intrusion detection alert notifications. The RRE then chooses optimal response actions by solving a partially observable competitive Markov decision process that is automatically derived from attack-response trees. To support network-level multiobjective response selection and consider possibly conflicting network security properties, we employ fuzzy logic theory to calculate the network-level security metric values, i.e., security levels of the system's current and potentially future states in each stage of the game. In particular, inputs to the network-level game-theoretic response selection engine are first fed into the fuzzy system that is in charge of a nonlinear inference and quantitative ranking of the possible actions using its previously defined fuzzy rule set. Consequently, the optimal network-level response actions are chosen through a game-theoretic optimization process. Experimental results show that the RRE, using Snort's alerts, can protect large networks for which attack-response trees have more than 500 nodes.
    Keywords: Boolean functions; Markov processes; computer network security; decision theory; fuzzy set theory; stochastic games; trees (mathematics); ART; Boolean logic; RRE; Snort alerts; attack-response trees; automated response techniques; detection algorithms; fuzzy logic theory; fuzzy rule set; fuzzy system; game-theoretic intrusion response and recovery engine strategy; game-theoretic optimization process; intrusion detection; lower level attack consequences; network level game-theoretic response selection engine; network security property; network-level multiobjective response selection; network-level security metric values; networked computing systems; nonlinear inference; optimal network-level response actions; partially observable competitive Markov decision process; system-level security events; two-player Stackelberg stochastic game; Computers; Engines; Games; Intrusion response systems; Markov decision processes; Markov processes; Security; Subspace constraints; Uncertainty; and fuzzy logic and control; network state estimation; stochastic games (ID#:14-2864)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6583161&isnumber=6689796
  • Thorat, S.S.; Markande, S.D., "Reinvented Fuzzy logic Secure Media Access Control Protocol (FSMAC) to improve lifespan of Wireless Sensor Networks," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp.344,349, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781305 Wireless Sensor Networks (WSNs) have grown in size and importance in a very short time. WSNs are sensitive to various attacks, so security has become a prominent issue. The Denial-of-Service (DoS) attack is one of the main concerns for WSNs: it diminishes the resources of sensor nodes, affecting their normal functioning. The Media Access Control (MAC) layer is responsible for communication within multiple-access networks over a shared medium. The Fuzzy logic-optimized Secure Media Access Control (FSMAC) protocol provides a good defense against DoS attacks: it detects intrusions as they take place and decreases the average energy consumed by the sensor network relative to the attacked scenario, thereby increasing the lifespan of the network. Fuzzy logic handles the uncertainty inherent in human reasoning and decision making, and it is applied innovatively here to enhance the FSMAC protocol. This paper proposes a reinvented FSMAC protocol using new intrusion-detector parameters, namely the number of times a node sensed the channel free and the variation in the channel sense period. The new protocol's performance is tested on the basis of the time until the first node dies and the average energy consumed per sensor node. The results show that the lifespan of the sensor network increases and the average energy consumed per sensor node decreases.
    Keywords: access protocols; cryptographic protocols; decision making; energy consumption; fuzzy logic; telecommunication security; wireless sensor networks; DOS attack; FSMAC protocol; WSN improvement; decision making; denial of service; energy consumption; fuzzy logic secure media access control protocol; human reasoning intrusion detector parameter; multiple access networks; sensor nodes; uncertainty handling; wireless sensor network; Frequency division multiaccess; Indexes; Protocols; Receivers; Uncertainty; Wireless sensor networks; Denial-Of-Service (DOS) Attack; Fuzzy logic-optimized Secure Media access Control Protocol (FSMAC); Media Access Control (MAC) Protocol; Security Issues; Wireless Sensor Networks (ID#:14-2865)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781305&isnumber=6781240
  • Rambabu, C.; Obulesu, Y.P.; Saibabu, C., "Evolutionary Algorithm-Based Technique For Power System Security Enhancement," Advances in Electrical Engineering (ICAEE), 2014 International Conference on, pp.1,5, 9-11 Jan. 2014. doi: 10.1109/ICAEE.2014.6838521 Security-constrained optimal power flow is one of the most cost-effective measures to promote both cost minimization and maximum voltage security without jeopardizing system operation. It is formulated as a multi-objective problem involving objectives such as the economical operating condition of the system and the system security margin. This paper explores the application of the Particle Swarm Optimization (PSO) algorithm to solve the security enhancement problem, and presents a novel fuzzy logic composite multi-objective evolutionary algorithm for the security problem. Flexible AC Transmission Systems (FACTS) devices, modern compensators of active and reactive power, can be considered viable options for providing security enhancement. The proposed algorithm is tested on the IEEE 30-bus system. The proposed methods achieve solutions with good accuracy, stable convergence characteristics, simple implementation, and satisfactory computation time.
    Keywords: flexible AC transmission systems; fuzzy logic; particle swarm optimisation; power system security; FACTS; IEEE 30-bus system; cost minimization; economical operating condition; flexible AC transmission systems; fuzzy logic; maximum voltage security; multiobjective evolutionary algorithm; multiobjective problem; optimal power flow ;particle swarm optimization algorithm; power system security enhancement; security enhancement problem; Indexes; Power capacitors; Power system stability; Reactive power; Security; Silicon; Thyristors; Fuzzy Logic; Particle Swarm Optimization; Power System Security; TCSC (ID#:14-2866)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838521&isnumber=6838422
  • AlOmary, R.Y.; Khan, S.A, "Fuzzy Logic Based Multi-Criteria Decision-Making Using Dubois and Prade's Operator For Distributed Denial Of Service Attacks In Wireless Sensor Networks," Information and Communication Systems (ICICS), 2014 5th International Conference on, pp.1,6, 1-3 April 2014 doi: 10.1109/IACS.2014.6841979 Wireless sensor networks (WSNs) have emerged as an important technology for monitoring of critical situations that require real-time sensing and data acquisition for decision-making purposes. Security of wireless sensor networks is a contemporary challenging issue. A significant number of various types of malicious attacks have been identified against the security of WSNs in recent times. Due to the unreliable and untrusted environments in which WSNs operate, the threat of distributed attacks against sensory resources such as power consumption, communication, and computation capabilities cannot be neglected. In this paper, a fuzzy logic based approach is proposed in the context of distributed denial of service attacks in WSNs. The proposed approach is modelled and formulated as multi-criteria decision-making problem, while considering attack detection rate and energy decay rate as the two decision criteria. Using the Dubois and Prade's fuzzy operator, a mechanism is developed to achieve the best trade-off between the two aforementioned conflicting criteria. Empirical analysis proves the effectiveness of the proposed approach.
    Keywords: computer network security; decision making; fuzzy logic; wireless sensor networks; Dubois; Prade; WSN; attack detection rate; data acquisition; distributed denial of service attacks; energy decay rate; fuzzy logic based approach; fuzzy operator; malicious attacks; multicriteria decision-making problem; real-time sensing; sensory resources; wireless sensor networks security; Computer crime; Decision making; Fuzzy logic; Monitoring; Wireless sensor networks (ID#:14-2867)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841979&isnumber=6841931
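    The Dubois and Prade operator named above is a parametric t-norm, T(a, b) = ab / max(a, b, alpha), which interpolates between the product (alpha = 1) and the minimum (alpha = 0) and can fuse the two conflicting criteria into one fitness value. A minimal sketch with invented membership values:

        def dubois_prade(a, b, alpha=0.5):
            # Dubois-Prade t-norm: interpolates between the product (alpha = 1)
            # and the minimum (alpha = 0) of the two membership values.
            return a * b / max(a, b, alpha)

        detection = 0.85          # membership for "detection rate is good" (invented)
        energy_ok = 1.0 - 0.30    # complement of an assumed 0.30 energy decay rate
        print(dubois_prade(detection, energy_ok, alpha=0.5))   # fused fitness ~0.70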
  • Chaudhary, A; Kumar, A; Tiwari, V.N., "A Reliable Solution Against Packet Dropping Attack Due To Malicious Nodes Using Fuzzy Logic in MANETs," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, pp.178,181, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798326 The recent growth of mobile ad hoc networks has increased the capability and resilience of communication between mobile nodes. Mobile ad hoc networks are completely free of pre-existing infrastructure or authentication points, so all mobile nodes that want to communicate with each other immediately form the topology and initiate requests to send or receive data packets. From a security perspective, communication between mobile nodes via wireless links makes these networks more susceptible to internal or external attacks, because anyone can join or leave the network at any time. The packet dropping attack carried out through malicious nodes is one such possible attack on mobile ad hoc networks. This paper develops an intrusion detection system using fuzzy logic to detect packet dropping attacks in mobile ad hoc networks and to remove the malicious nodes in order to conserve the resources of mobile nodes. For the implementation, the QualNet 6.1 simulator and a Mamdani fuzzy inference system are used to analyze the results. Simulation results show that the system detects dropping attacks with a high detection rate and a low false-positive rate.
    Keywords: fuzzy logic; inference mechanisms; mobile ad hoc networks; mobile computing; security of data; MANET; Mamdani fuzzy inference system; Qualnet simulator 6.1;data packets; fuzzy logic; intrusion detection system; malicious nodes; mobile ad hoc network; mobile nodes; packet dropping attack; wireless links; Ad hoc networks; Artificial intelligence; Fuzzy sets; Mobile computing; Reliability engineering; Routing; Fuzzy Logic; Intrusion Detection System (IDS); MANETs Security Issues; Mobile Ad Hoc networks (MANETs); Packet Dropping attack (ID#:14-2868)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798326&isnumber=6798279
  • Khanum, S.; Islam, M.M., "An Enhanced Model Of Vertical Handoff Decision Based On Fuzzy Control Theory & User Preference," Electrical Information and Communication Technology (EICT), 2013 International Conference on, pp.1,6, 13-15 Feb. 2014. doi: 10.1109/EICT.2014.6777873 With the development of wireless communication technology, various wireless networks with different features will coexist in the same premises. Heterogeneous networks will be dominant in next-generation wireless networks, and choosing the most suitable network for a mobile user is one of the key issues. Vertical handoff decision making is one of the most important topics in wireless heterogeneous network architecture. The most significant parameters are considered in the vertical handoff decision: the proposed method considers Received Signal Strength (RSS), Monetary Cost (C), Bandwidth (BW), Battery Consumption (BC), Security (S), and Reliability (R). Handoff decision making is divided into two stages. The first stage calculates a system obtained value (SOV) considering RSS, C, BW, and BC; SOV is calculated using fuzzy logic theory. Today's mobile users are discerning in deciding their desired type of service; the network chosen from the user's priority list is called the user obtained value (UOV). Handoff decisions are then made based on SOV and UOV to select the most appropriate network for the mobile nodes (MNs). Simulation results show that the fuzzy control theory and user-preference-based vertical handoff decision algorithm (VHDA) is able to make accurate handoff decisions, reduce unnecessary handoffs, decrease handoff calculation time, and decrease the probability of call blocking and dropping.
    Keywords: decision making; fuzzy control; fuzzy set theory; mobile computing; mobility management (mobile radio); probability; telecommunication network reliability; telecommunication security; MC; RSS; SOV; VHDA; bandwidth; battery consumption; decrease call blocking probability; decrease call dropping probability; decrease handoff calculation time; fuzzy control theory; fuzzy logic theory; mobile nodes; monetary cost; next generation wireless networks; received signal strength; reliability; security; system obtained value calculation; unnecessary handoff reduction; user obtained value; user preference; user priority list; vertical handoff decision enhancement model; vertical handoff decision making; wireless communication technology; wireless heterogeneous networks architecture; Bandwidth; Batteries; Communication system security; Mobile communication; Vectors; Wireless networks; Bandwidth; Cost; Fuzzy control theory; Heterogeneous networks; Received signal strength; Security and user preference; Vertical handoff (ID#:14-2869)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777873&isnumber=6777807
  • Karakis, R.; Guler, I, "An Application Of Fuzzy Logic-Based Image Steganography," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp.156, 159, 23-25 April 2014. doi: 10.1109/SIU.2014.6830189 Today, the security of data in digital environments (such as text, image, and video files) has become increasingly important as technology develops. Steganography and cryptology are both essential for protecting and hiding data: cryptology protects the message contents, while steganography hides the presence of the message. In this study, an application of fuzzy logic (FL)-based image steganography was performed. First, the hidden messages were encrypted by an XOR (eXclusive OR) algorithm. Second, an FL algorithm was used to select the least significant bits (LSBs) of the image pixels. Then, the LSBs of the selected image pixels were replaced with the bits of the hidden messages. The FL-based LSB algorithm makes the LSB method more robust and safer against steganalysis.
    Keywords: cryptography; fuzzy logic; image coding; steganography; FL-based LSB algorithm; XOR algorithm; cryptology; data security; eXclusive OR algorithm; fuzzy logic; image steganography; least significant bits; Conferences; Cryptography; Fuzzy logic; Internet; PSNR; Signal processing algorithms (ID#:14-2870)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830189&isnumber=6830164
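    The two mechanical steps of the scheme above, XOR encryption followed by LSB replacement, are easy to sketch. The paper's actual contribution, fuzzy-logic selection of which pixels to use, is replaced here by simple sequential embedding, so this is only a baseline illustration:

        def embed(pixels, message, key):
            # XOR-encrypt the message with a repeating key, then overwrite the
            # least significant bit of successive pixel values with its bits.
            bits = []
            for i, byte in enumerate(message):
                byte ^= key[i % len(key)]
                bits.extend((byte >> k) & 1 for k in range(7, -1, -1))
            out = list(pixels)
            for i, bit in enumerate(bits):
                out[i] = (out[i] & ~1) | bit
            return out

        def extract(pixels, n_bytes, key):
            data = bytearray()
            for i in range(n_bytes):
                byte = 0
                for k in range(8):
                    byte = (byte << 1) | (pixels[i * 8 + k] & 1)
                data.append(byte ^ key[i % len(key)])
            return bytes(data)

        pixels = list(range(64))                 # stand-in for 8-bit pixel values
        stego = embed(pixels, b"hi", b"\x5a")
        assert extract(stego, 2, b"\x5a") == b"hi"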
  • Nesteruk, P.; Nesteruk, L.; Kotenko, I, "Creation of a Fuzzy Knowledge Base for Adaptive Security Systems," Parallel, Distributed and Network-Based Processing (PDP), 2014 22nd Euromicro International Conference on, pp.574, 577, 12-14 Feb. 2014. doi: 10.1109/PDP.2014.115 To design next-generation adaptive security systems, powerful intelligent components should be developed. The paper describes a fuzzy knowledge base specifying relationships between threats and protection mechanisms, built with the MathWorks MATLAB Fuzzy Logic Toolbox. The goal is to increase the effectiveness of the system's reactions by minimizing neural network weights. We demonstrate a technique for creating a fuzzy knowledge base to improve system protection via rule monitoring and correction.
    Keywords: adaptive systems; fuzzy set theory; knowledge based systems; security of data; MATLAB; adaptive security systems; fuzzy knowledge; fuzzy logic toolbox; neural network weights; rules monitoring; Adaptation models; Adaptive systems; Biological system modeling; Fuzzy logic; Knowledge based systems; MATLAB; Security; MATLAB Fuzzy Logic Toolbox; adaptive security rules; adaptive security system; fuzzy knowledge base (ID#:14-2871)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6787332&isnumber=6787236

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Visible Light Communication

Visible Light Communication


Visible light communication (VLC) offers an unregulated and free light spectrum and could potentially be a solution for overcoming the overcrowded radio spectrum, especially for wireless communication systems, and for doing so securely. In the articles cited here, security issues are addressed related to secure barcodes for smartphones, reducing the impact of ambient light (optical "noise"), physical-layer security for indoor visible light, and using xenon flashlights for mobile payments. Also cited are works covering a broader range of visible light communication topics. These works appeared in the first half of 2014.

  • Bingsheng Zhang; Kui Ren; Guoliang Xing; Xinwen Fu; Cong Wang, "SBVLC: Secure Barcode-Based Visible Light Communication For Smartphones," INFOCOM, 2014 Proceedings IEEE, pp.2661,2669, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848214 As an alternative to NFC technology, 2D barcodes have been increasingly used for security-sensitive applications including payments and personal identification. However, the security of barcode-based communication in mobile applications has not been systematically studied. Due to the visual nature, 2D barcodes are subject to eavesdropping when they are displayed on the screen of a smartphone. On the other hand, the fundamental design principles of 2D barcodes make it difficult to add security features. In this paper, we propose SBVLC - a secure system for barcode-based visible light communication (VLC) between smartphones. We formally analyze the security of SBVLC based on geometric models and propose physical security enhancement mechanisms for barcode communication by manipulating screen view angles and leveraging user-induced motions. We then develop two secure data exchange schemes. These schemes are useful in many security-sensitive mobile applications including private information sharing, secure device pairing, and mobile payment. SBVLC is evaluated through extensive experiments on both Android and iOS smartphones.
    Keywords: Android (operating system); bar codes; electronic data interchange; mobile commerce; near-field communication; radiofrequency identification; smart phones; telecommunication security;2D barcodes; Android smartphones; NFC technology; SBVLC; eavesdropping; geometric model; iOS smartphones; mobile payment; payments identification; personal identification; physical security enhancement mechanism; private information sharing; screen view angle manipulation; secure barcode-based visible light communication; secure data exchange scheme; secure device pairing; security sensitive application; security sensitive mobile application; user induced motion; Cameras; Decoding; Receivers; Security; Smart phones; Solid modeling; Three-dimensional displays (ID#:14-2927)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848214&isnumber=6847911
  • Verma, S.; Shandilya, A; Singh, A, "A Model For Reducing The Effect Of Ambient Light Source In VLC System," Advance Computing Conference (IACC), 2014 IEEE International, pp.186,188, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779317 In recent years, Visible Light Communication has generated worldwide interest in the field of wireless communication because of its low cost and secure data exchange. However, VLC suffers from serious drawbacks which degrade communication performance. One of the major problems faced by any VLC system is the interference caused by ambient light noise, which deteriorates the performance of the system. In this paper we propose an AVR-based model to mitigate ambient light noise interference and discuss its effectiveness. Further, we discuss other difficulties of VLC systems.
    Keywords: electronic data interchange; interference suppression; light interference; optical communication; optical noise; telecommunication security; AVR based model; VLC system; ambient light noise interference mitigation; ambient light source; secure data exchange; visible light communication; wireless communication; Conferences; Decision support systems; Handheld computers; Ambient noise mitigation; LED transmitter; Visible Light Communication (VLC) (ID#:14-2928)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779317&isnumber=6779283
  • Mostafa, A; Lampe, L., "Physical-layer Security For Indoor Visible Light Communications," Communications (ICC), 2014 IEEE International Conference on, pp.3342,3347, 10-14 June 2014. doi: 10.1109/ICC.2014.6883837 This paper considers secure transmission over the visible light communication (VLC) channel by means of physical-layer security techniques. In particular, we consider achievable secrecy rates of the multiple-input, single-output (MISO) wiretap VLC channel. The VLC channel is modeled as a deterministic and real-valued Gaussian channel subject to amplitude constraints. We utilize null-steering and artificial noise strategies to achieve positive secrecy rates when the eavesdropper's channel state information (CSI) is perfectly known and entirely unknown to the transmitter, respectively. In both scenarios, the legitimate receiver's CSI is available to the transmitter. We numerically evaluate achievable secrecy rates under typical VLC scenarios and show that simple precoding techniques can significantly improve the confidentiality of VLC links.
    Keywords: Gaussian channels; indoor communication; optical communication; precoding; radio receivers; radio transmitters; telecommunication security; CSI; MISO channel; achievable secrecy rates; amplitude constraints; artificial noise; channel state information; deterministic Gaussian channel; indoor visible light communications; legitimate receiver; multiple-input single-output channel; null steering; physical layer security; positive secrecy rates; real-valued Gaussian channel; secure transmission; simple precoding; transmitter; wiretap VLC channel; Light emitting diodes; Optical transmitters; Receivers; Security; Signal to noise ratio; Vectors (ID#:14-2929)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883837&isnumber=6883277
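    The null-steering strategy mentioned above chooses transmit weights orthogonal to the eavesdropper's channel, so the beamformed signal cancels at the eavesdropper while still reaching the legitimate receiver. A numpy sketch with invented real-valued VLC channel gains and a simple per-LED amplitude normalization:

        import numpy as np

        # Invented 4-LED MISO gains to the legitimate receiver (h) and the
        # eavesdropper (g); VLC channel gains are real and non-negative.
        h = np.array([0.9, 0.7, 0.5, 0.3])
        g = np.array([0.2, 0.6, 0.4, 0.1])

        # Null-steering: remove from h its component along g, so the transmit
        # weights lie in g's null space and the signal cancels at the eavesdropper.
        w = h - (h @ g) / (g @ g) * g
        w /= np.max(np.abs(w))               # scale to a per-LED amplitude constraint
        print("eavesdropper gain:", g @ w)   # ~0 (numerical precision)
        print("receiver gain:   ", h @ w)    # strictly positive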
  • Galal, M.M.; El Aziz, AA; Fayed, H.A; Aly, M.H., "Employing Smartphones Xenon Flashlight For Mobile Payment," Multi-Conference on Systems, Signals & Devices (SSD), 2014 11th International, pp.1,5, 11-14 Feb. 2014. doi: 10.1109/SSD.2014.6808780 Due to users' heavy dependence on their smartphones and the major technological advances in their design, smartphones have replaced many electronic devices nowadays. For that reason, it is of great interest to use such phones to replace magnetic cards. This paper uses the built-in Xenon flashlight in today's Android smartphones to experimentally transmit the data stored on the user's magnetic card to a card reader or automatic teller machine (ATM). We experimentally modulate the embedded Xenon flashlight in a smartphone with the required information of a traditional magnetic card and transmit the light over a secure optical link at 15 bps with no additional hardware at the user end. The paper introduces the design of a small, inexpensive supplementary receiver circuit module, which is easily attached to a contemporary card reader or ATM machine. Furthermore, the paper tests the system performance under interference from another transmitter, and compares its speed and security to the regular ATM card and to other competing technologies.
    Keywords: electronic commerce; optical links; smart phones; ATM; Android smartphones; automatic teller machine; contemporary card reader; magnetic cards; mobile payment; secure high speed optical link; smartphones Xenon flashlight; supplementary receiver circuit module; IEC standards; Photodetectors; Pulse width modulation; Receivers; Smart phones; Transmitters; ATM machines; Visible light communication; Xenon flashlight; smart payments; smartphones (ID#:14-2930)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6808780&isnumber=6808745
  • Kizilirmak, R.C.; Uysal, M., "Relay-assisted OFDM Transmission For Indoor Visible Light Communication," Communications and Networking (BlackSeaCom), 2014 IEEE International Black Sea Conference on, pp.11,15, 27-30 May 2014. doi: 10.1109/BlackSeaCom.2014.6848995 In this study, we investigate a relay-assisted visible light communication (VLC) system where an intermediate light source cooperates with the main light source. Specifically, we consider two light sources in an office space; one is the information source mounted on the ceiling and the other is a task light mounted on a desk. Our system builds upon DC biased optical orthogonal frequency division multiplexing (DCO-OFDM). The task light performs amplify-and-forward relaying to assist the communication and operates in half-duplex mode. We investigate the error rate performance of the proposed OFDM-based relay-assisted VLC system. Furthermore, we present joint AC and DC optimal power allocation in order to improve the performance. The DC power allocation is controlled by sharing the number of LED chips between the terminals, and the AC power allocation decides the fraction of the information signal energy to be consumed at the terminals. Simulation results reveal that the VLC system performance can be improved via relay-assisted transmission and that a performance gain of as much as 6 dB can be achieved.
    Keywords: OFDM modulation; amplify and forward communication; indoor communication ;light sources; optical communication; optical modulation; relay networks (telecommunication);DC biased optical orthogonal frequency division multiplexing; LED chips; VLC system; amplify-and-forward relaying; error rate performance; half-duplex mode; indoor visible light communication; information signal energy; information source; intermediate light source; joint AC-DC optimal power allocation; office space; relay-assisted OFDM transmission; relay-assisted visible light communication system; Bit error rate; Light sources; Lighting; OFDM; Relays; Resource management; Sea surface; DCO-OFDM; Visible light communication; amplify-and-forward; half-duplex; power allocation (ID#:14-2931)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848995&isnumber=6848989
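    DCO-OFDM, on which the system above builds, constrains the OFDM spectrum to be Hermitian-symmetric so that the time-domain waveform is real, then adds a DC bias (clipping any residual negative excursions) so the signal can drive an LED intensity. A minimal sketch with an illustrative FFT size and bias level:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 64                                    # FFT size (illustrative)
        # QPSK on subcarriers 1..n/2-1, mirrored with complex conjugates so the
        # spectrum is Hermitian-symmetric and the IFFT output is real-valued.
        data = (rng.choice([-1.0, 1.0], n // 2 - 1)
                + 1j * rng.choice([-1.0, 1.0], n // 2 - 1)) / np.sqrt(2)
        spec = np.zeros(n, dtype=complex)
        spec[1:n // 2] = data
        spec[n // 2 + 1:] = np.conj(data[::-1])
        x = np.fft.ifft(spec).real                # real bipolar OFDM waveform

        # DC bias shifts the waveform positive for the LED; residual negative
        # excursions are clipped (the "DC-biased optical" part of DCO-OFDM).
        tx = np.clip(x + 3 * x.std(), 0.0, None)
        print(tx.min() >= 0.0)                    # non-negative drive signal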
  • Fisne, A; Toker, C., "Investigation of the Channel Structure in Visible Light Communication," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp.1646, 1649, 23-25 April 2014. doi: 10.1109/SIU.2014.6830562 Visible Light Communication has lately come to the fore, particularly in indoor communication, as an important alternative to radio communication systems. In Visible Light Communication, information is transferred by means of the light used for lighting rather than by radio frequencies. In this paper, the structure of the channel used for Visible Light Communication is examined. The effects of the geometry between the receiver and transmitter upon communication are analyzed and supported with simulations.
    Keywords: geometry; indoor communication; optical receivers; optical transmitters; channel structure; geometry effects; indoor communication; lighting; visible light communication; Conferences; Light emitting diodes; Lighting; Masers; Mathematical model; Signal to noise ratio (ID#:14-2932)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830562&isnumber=6830164
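    The geometry effects examined above are commonly captured by the standard Lambertian line-of-sight gain model, H = (m + 1) A / (2 pi d^2) * cos(phi)^m * cos(psi) inside the receiver field of view. A small sketch with assumed LED and receiver parameters:

        import numpy as np

        def vlc_los_gain(half_angle_deg, area_m2, d_m, phi_deg, psi_deg, fov_deg):
            # Lambertian order m from the LED semi-angle at half power, then
            # H = (m+1)*A/(2*pi*d^2) * cos(phi)^m * cos(psi) inside the FOV.
            m = -np.log(2) / np.log(np.cos(np.radians(half_angle_deg)))
            if psi_deg > fov_deg:
                return 0.0
            phi, psi = np.radians(phi_deg), np.radians(psi_deg)
            return (m + 1) * area_m2 / (2 * np.pi * d_m ** 2) * np.cos(phi) ** m * np.cos(psi)

        # Gain falls off quickly as the receiver moves off the LED axis:
        for angle in (0, 20, 40, 60):
            print(angle, vlc_los_gain(60, 1e-4, 2.5, angle, angle, 70))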
  • Wang Yuanquan; Chi Nan, "A High-Speed Bi-Directional Visible Light Communication System Based on RGB-LED," Communications, China, vol.11, no.3, pp.40, 44, March 2014. doi: 10.1109/CC.2014.6825257 In this paper, we propose and experimentally demonstrate a bi-directional indoor communication system based on visible light RGB-LEDs. Spectrally efficient modulation formats (QAM-OFDM), advanced digital signal processing, and pre- and post-equalization are adopted to compensate for the severe frequency response of the indoor channel. In this system, we utilize red-green-blue light emitting diodes (LEDs), of which each color can be used to carry different signals. For the downlink, the low frequencies of each color are used, while for the uplink, the high frequencies are used. The overall data rates of the downlink and uplink are 1.15 Gb/s and 300 Mb/s. The bit error ratios (BERs) for all channels after 0.7 m indoor delivery are below the pre-forward-error-correction (pre-FEC) threshold of 3.8×10^-3. To the best of our knowledge, this is the highest data rate reported for a bi-directional visible light communication system.
    Keywords: OFDM modulation; error statistics; forward error correction; indoor communication; light emitting diodes; optical communication; quadrature amplitude modulation; telecommunication channels; BER; QAM-OFDM; advanced digital signal processing ;bi-directional indoor communication system; bi-directional visible light communication system; bit error ratios ;bit rate 1.15 Gbit/s; bit rate 300 Mbit/s; downlink; equalization; error-correction; frequency response; high-speed bi-directional visible light communication system; indoor channel; indoor delivery; modulation formats; preFEC threshold; preforward-error-correction; red-green-blue Light emitting diodes; uplink; visible light RGB-LED; Bidirectional control; Downlink; Image color analysis; Light emitting diodes; Modulation; OFDM; Uplink; bidirectional transmission; light emitting diode; orthogonal frequency division multiplexing; visible light communication (ID#:14-2933)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825257&isnumber=6825249
  • Xu Bao; Xiaorong Zhu; Tiecheng Song; Yanqiu Ou, "Protocol Design and Capacity Analysis in Hybrid Network of Visible Light Communication and OFDMA Systems," Vehicular Technology, IEEE Transactions on, vol.63, no.4, pp.1770, 1778, May 2014. doi: 10.1109/TVT.2013.2286264 Visible light communication (VLC) uses a vast unregulated and free light spectrum. It is considered to be a solution for overcoming the crowded radio spectrum for wireless communication systems. However, duplex communication, user mobility, and handover mechanisms are challenging tasks in a VLC system. This paper proposes a hybrid network model of VLC and orthogonal frequency-division multiple access (OFDMA) in which the VLC channel is only used for downlink transmission, whereas OFDMA channels serve the uplinks in any situation, or the downlinks outside VLC hotspot coverage. A novel protocol is proposed, combined with access, horizontal, and vertical handover mechanisms for the mobile terminal (MT), to resolve user mobility among different hotspots and the OFDMA system. A new VLC network scheme and its frame format are presented to deal with the multiuser access problems in every hotspot. In addition, a new metric r is defined to evaluate the capacity of this hybrid network as the spatial density of the interarrival time of MT requests in s^-1·m^-2, under the assumption of a homogeneous Poisson point process (HPPP) distribution of MTs. Analytical and simulation results show improvements in the capacity performance of the hybrid network when compared to the OFDMA system.
    Keywords: Poisson distribution; frequency division multiple access; optical communication; protocols; HPPP distribution; OFDMA channels; OFDMA systems; VLC channel; VLC hotspots coverage; capacity analysis; downlink transmission; duplex communication; free light spectrum; handover mechanisms; homogenous Poisson point process; hybrid network model; interarrival time; mobile terminal; mobility mechanisms; multiuser access problems; orthogonal frequency-division multiplexing access; protocol design; radio spectrum; spatial density; visible light communication; wireless communication systems; Downlink; Handover; Protocols; Radio frequency; Servers; Uplink; Capacity analysis; VLC frame format; Visible light Communication (VLC); capacity analysis; horizontal and vertical handover; hybrid VLC-OFDMA network; hybrid visible light communication (VLC)-orthogonal frequency-division multiplexing access (OFDMA) network (ID#:14-2934)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6637084&isnumber=6812142
  • Mondal, R.K.; Saha, N.; Yeong Min Jang, "Performance Enhancement Of MIMO Based Visible Light Communication," Electrical Information and Communication Technology (EICT), 2013 International Conference on, pp. 1-5, 13-15 Feb. 2014. doi: 10.1109/EICT.2014.6777901 Camera based visible light communication (VLC) is the merger of VLC with vision technology, deploying VLC features in hand-held devices such as smartphones by employing light emitting diode (LED) transmitter-to-camera communication. However, the most advantageous features of VLC technology have not been achieved, due to the low frame handling rate of the camera module. On the other hand, the spatial light-source separation characteristic of the camera module opens the scope to deploy the multiple-input multiple-output (MIMO) concept for enhancing overall system capacity and robust signal reception in a camera based VLC system. In this paper, the performance of spatial multiplexing in a MIMO based VLC system is evaluated.
    Keywords: MIMO communication; cameras; light emitting diodes; optical communication; LED transmitter; MIMO based visible light communication; Smartphone; VLC; camera based visible light communication; hand held devices; light emitting diode; performance enhancement; robust signal reception; spatial multiplexing; vision technology; Bit error rate; Cameras; MIMO; Multiplexing; Optical transmitters; Receivers; Signal to noise ratio; LED; MIMO; Spatial Multiplexing; Visible light communication; image sensor (ID#:14-2935)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777901&isnumber=6777807
  • Din, I; Hoon Kim, "Energy-Efficient Brightness Control and Data Transmission for Visible Light Communication," Photonics Technology Letters, IEEE, vol. 26, no. 8, pp. 781-784, April 15, 2014. doi: 10.1109/LPT.2014.2306195 This letter considers the efficient utilization of energy in a visible light communication (VLC) system. A joint brightness control and data transmission scheme is presented to reduce the total power consumption while satisfying lighting and communication requirements. An optimization problem is formulated to determine the optimal parameters for the input waveform of light emitting diode (LED) lamps; solving this problem reduces the total energy consumption of the LED lamps while ensuring the desired brightness and communication link quality. Simulation results show that the proposed scheme increases the energy efficiency of the VLC system.
    Keywords: LED lamps; brightness; data communication; energy consumption; optical communication equipment; optical links; LED lamps; VLC system; communication link quality; data transmission; energy consumption; energy efficiency; energy-efficient brightness control; input waveform; light emitting diode lamps; optimization problem; power consumption; visible light communication system; Brightness; Data communication; LED lamps; Modulation; Optical receivers; Visible light communication; energy efficiency; subcarrier pulse position modulation; wireless communication (ID#:14-2936)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6740016&isnumber=6776431
  • Monteiro, E.; Hranilovic, S., "Design and Implementation of Color-Shift Keying for Visible Light Communications," Lightwave Technology, Journal of, vol.32, no.10, pp. 2053-2060, May 15, 2014. doi: 10.1109/JLT.2014.2314358 Color-shift keying (CSK) is a visible light communication intensity modulation scheme, outlined in IEEE 802.15.7, that transmits data imperceptibly through the variation of the color emitted by red, green, and blue light emitting diodes. An advantage of CSK is that the power envelope of the transmitted signal is fixed; therefore, CSK reduces the potential for human health complications related to fluctuations in light intensity. In this work, a rigorous design framework for high order CSK constellations is presented. A key benefit of the framework is that it optimizes constellations while accounting for crosstalk between the color communication channels. In addition, and unlike previous approaches, the method is capable of optimizing 3-D constellations. Furthermore, a prototype CSK communication system is presented to validate the performance of the optimized constellations, which provide gains of 1-3 dB over standard 802.15.7 constellations. (A toy illustration of CSK's fixed-power constraint appears after this list.)
    Keywords: IEEE standards; light emitting diodes; optical communication equipment; optical crosstalk; optical design techniques; optical modulation; optimisation; visible spectra; 3D high order CSK constellation optimization; IEEE 802.15.7; blue light emitting diodes; color communication channels; color-shift keying design; color-shift keying implementation; data transmission; gain 1 dB to 3 dB; green light emitting diodes; light intensity fluctuations; optical crosstalk; red light emitting diodes; signal transmission; visible light communication intensity modulation scheme; Color; Image color analysis; Light emitting diodes; Noise; Optical receivers; Optical transmitters; Optimization; Color-shift keying (CSK); intensity modulation; visible light communications (VLC) (ID#:14-2937)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6780585&isnumber=6808425
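
Several entries above, notably Wang and Chi's RGB-LED system, build on QAM-OFDM over an intensity-modulated LED channel. As a rough illustration of the core mapping -- QAM symbols placed on OFDM subcarriers with Hermitian symmetry so the time-domain drive signal is real-valued, plus a DC bias so it is non-negative -- here is a minimal NumPy sketch. All parameter values are illustrative and are not taken from the paper.

    import numpy as np

    N = 64                                   # IFFT size (illustrative)
    rng = np.random.default_rng(0)

    # Random 16-QAM symbols for the N/2 - 1 usable subcarriers.
    levels = np.array([-3.0, -1.0, 1.0, 3.0])
    iq = levels[rng.integers(0, 4, size=(N // 2 - 1, 2))]
    qam = (iq[:, 0] + 1j * iq[:, 1]) / np.sqrt(10.0)   # unit average power

    # Hermitian-symmetric subcarrier vector, X[k] = conj(X[N-k]), with the
    # DC and Nyquist bins left empty, so the IFFT output is purely real.
    X = np.zeros(N, dtype=complex)
    X[1:N // 2] = qam
    X[N // 2 + 1:] = np.conj(qam[::-1])
    x = np.fft.ifft(X).real

    # Add a DC bias and clip so the LED drive waveform is non-negative
    # (the DCO-OFDM approach to intensity modulation).
    drive = np.clip(x + 2.0 * x.std(), 0.0, None)

    print("imaginary residue:", np.abs(np.fft.ifft(X).imag).max())
    print("minimum drive level:", drive.min())

Reception reverses the chain: strip the bias, take the FFT, and slice the QAM decision regions, with pre- and post-equalization of the kind the paper describes compensating the LED channel's frequency response.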
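
Monteiro and Hranilovic's color-shift keying entry rests on a simple constraint: each symbol is a different red/green/blue intensity mix with a constant total, so data rides on color while the optical power envelope stays fixed. The toy 4-point mapping below illustrates only that constraint; the color points are arbitrary placeholders, not the paper's optimized constellations or the standard IEEE 802.15.7 ones.

    import numpy as np

    # Four illustrative CSK symbols as (red, green, blue) intensity mixes.
    # Every row sums to 1.0, so total optical power is the same per symbol.
    CONSTELLATION = np.array([
        [0.70, 0.15, 0.15],     # bits 00
        [0.15, 0.70, 0.15],     # bits 01
        [0.15, 0.15, 0.70],     # bits 10
        [1 / 3, 1 / 3, 1 / 3],  # bits 11 (white point)
    ])

    def modulate(bits):
        """Map a flat, even-length bit sequence to RGB intensity triples."""
        pairs = np.reshape(bits, (-1, 2))
        symbols = pairs[:, 0] * 2 + pairs[:, 1]
        return CONSTELLATION[symbols]

    def demodulate(rgb):
        """Nearest-constellation-point decisions back to bits."""
        dist = np.linalg.norm(rgb[:, None, :] - CONSTELLATION[None, :, :], axis=2)
        symbols = dist.argmin(axis=1)
        return np.stack([symbols // 2, symbols % 2], axis=1).ravel()

    tx = modulate(np.array([0, 0, 1, 1, 0, 1]))
    print("per-symbol power:", tx.sum(axis=1))   # constant, hence flicker-free
    noisy = tx + 0.01 * np.random.default_rng(1).normal(size=tx.shape)
    print("recovered bits:", demodulate(noisy))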

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Web Caching

Web Caching


Web caches offer a potential for mischief. With the expanded need for caching capability brought by the cloud and mobile communications, the need for more and better security has also grown. The articles cited here address cache security issues, including geo-inference attacks, scriptless timing attacks, and a proposed incognito tab, along with other research on caching generally. These articles appeared in 2014.

  • Jia, Y.; Dong, X.; Liang, Z.; Saxena, P., "I Know Where You've Been: Geo-Inference Attacks via the Browser Cache," Internet Computing, IEEE, vol. PP, no. 99, pp. 1-1, August 2014. doi: 10.1109/MIC.2014.103 Many websites, such as Google and Craigslist, customize their services according to users' geo-locations to provide more relevant content and better responsiveness. Recently, mobile devices have further allowed web applications to directly read users' geo-location information from GPS sensors. However, if geo-oriented websites leave location-sensitive content in the browser cache, other sites can sniff users' geo-locations by utilizing timing side-channels. In this paper, we demonstrate that such geo-location leakage channels are widely open in popular web applications today, including 62 percent of Alexa Top 100 websites. With geo-inference attacks that measure the timing of browser cache queries, we can locate users' countries, cities and neighborhoods in our case studies. We also discuss existing defenses and propose a more balanced solution to defeat such attacks with minor performance overhead. (A minimal sketch of the cached-versus-uncached timing signal appears after this list.)
    Keywords: (not provided) (ID#:14-3050)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6879050&isnumber=5226613
  • Bin Liang; Wei You; Liangkun Liu; Wenchang Shi; Heiderich, M., "Scriptless Timing Attacks on Web Browser Privacy," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp. 112-123, 23-26 June 2014. doi: 10.1109/DSN.2014.93 The existing Web timing attack methods are heavily dependent on executing client-side scripts to measure the time. However, many techniques have been proposed to block the executions of suspicious scripts recently. This paper presents a novel timing attack method to sniff users' browsing histories without executing any scripts. Our method is based on the fact that when a resource is loaded from the local cache, its rendering process should begin earlier than when it is loaded from a remote website. We leverage some Cascading Style Sheets (CSS) features to indirectly monitor the rendering of the target resource. Three practical attack vectors are developed for different attack scenarios and applied to six popular desktop and mobile browsers. The evaluation shows that our method can effectively sniff users' browsing histories with very high precision. We believe that modern browsers protected by script-blocking techniques are still likely to suffer serious privacy leakage threats.
    Keywords: data privacy; online front-ends; CSS features; Web browser privacy; Web timing attack methods; cascading style sheets; client-side scripts; desktop browser; mobile browser; privacy leakage threats; rendering process; script-blocking techniques; scriptless timing attacks; user browsing history; Animation; Browsers; Cascading style sheets; History; Rendering (computer graphics); Timing; Web privacy; browsing history; scriptless attack; timing attack (ID#:14-3051)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903572&isnumber=6903544
  • Qingsong Wei; Cheng Chen; Jun Yang, "CBM: A Cooperative Buffer Management for SSD," Mass Storage Systems and Technologies (MSST), 2014 30th Symposium on, pp. 1-12, 2-6 June 2014. doi: 10.1109/MSST.2014.6855545 Random writes significantly limit the application of Solid State Drive (SSD) in the I/O intensive applications such as scientific computing, Web services, and database. While several buffer management algorithms are proposed to reduce random writes, their ability to deal with workloads mixed with sequential and random accesses is limited. In this paper, we propose a cooperative buffer management scheme referred to as CBM, which coordinates write buffer and read cache to fully exploit temporal and spatial localities among I/O intensive workload. To improve both buffer hit rate and destage sequentiality, CBM divides write buffer space into Page Region and Block Region. Randomly written data is put in the Page Region at page granularity, while sequentially written data is stored in the Block Region at block granularity. CBM leverages threshold-based migration to dynamically classify random write from sequential writes. When a block is evicted from write buffer, CBM merges the dirty pages in write buffer and the clean pages in read cache belonging to the evicted block to maximize the possibility of forming full block write. CBM has been extensively evaluated with simulation and real implementation on OpenSSD. Our testing results conclusively demonstrate that CBM can achieve up to 84% performance improvement and 85% garbage collection overhead reduction compared to existing buffer management schemes.
    Keywords: cache storage; flash memories; input-output programs; CBM; I/O intensive workload; OpenSSD; block granularity; block region; buffer hit rate; buffer management algorithms; cooperative buffer management; flash memory; garbage collection overhead reduction; page region; performance improvement; random write reduction; solid state drive; write sequentiality; Algorithm design and analysis; Buffer storage; Flash memories; Nonvolatile memory; Power line communications; Radiation detectors; Random access memory; buffer hit ratio; cooperative buffer management; flash memory; write sequentiality (ID#:14-3052)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855545&isnumber=6855532
  • Gomaa, H.; Messier, G.G.; Davies, R., "Hierarchical Cache Performance Analysis Under TTL-Based Consistency," Networking, IEEE/ACM Transactions on, vol. PP, no. 99, pp. 1-1, May 2014. doi: 10.1109/TNET.2014.2320723 This paper introduces an analytical model for characterizing the instantaneous hit ratio and instantaneous average hit distance of a traditional least recently used (LRU) cache hierarchy. The analysis accounts for the use of two variants of the Time-to-Live (TTL) weak consistency mechanism. The first is the typical TTL scheme (TTL-T) used in the HTTP/1.1 protocol where expired objects are refreshed using conditional GET requests. The second is TTL immediate ejection (TTL-IE) where objects are ejected as soon as they expire. The analysis also accounts for two sharing protocols: Leave Copy Everywhere (LCE) and Promote Cached Objects (PCO). PCO is a new sharing protocol introduced in this paper that decreases the user's perceived latency and is robust under nonstationary access patterns.
    Keywords: Analytical models; IEEE transactions; Markov processes; Measurement; Probability; Protocols; Servers; Analysis; Markov chain; Web; cache consistency; content-centric network (CCN); hierarchical cache; least recently used (LRU); time-to-live (TTL) (ID#:14-3053)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6812201&isnumber=4359146
  • Kumar, K.; Bose, J., "User Data Management By Tabs During A Browsing Session," Digital Information and Communication Technology and it's Applications (DICTAP), 2014 Fourth International Conference on, pp. 258-263, 6-8 May 2014. doi: 10.1109/DICTAP.2014.6821692 Nowadays, most browsers are multi-tab: user activity is segregated into parallel sessions, one on each tab. However, the user data generated while browsing, including history, cookies, and cache, is not similarly segregated and is only accessible together. This makes it difficult for users to access their data separately by tab. In this paper, we seek to solve the problem by organizing tab-specific browser data in different tabs. We implement the system, present alternate ways to visualize the tab-specific data, and show that it does not lead to appreciable slowdown in browser performance. We also propose a method to convert an incognito tab, where data is not stored while browsing, into a normal tab and vice versa. Such methods of tabbed data management will enable the user to better organize and view tab-specific data.
    Keywords: data handling; online front-ends; Web browser; incognito tab; multitab browsing session; parallel sessions; tab specific browser data; user data management; Browsers; Clustering algorithms; Context; Databases; History; Organizing; Switches; android; incognito mode; tabbed browsing; user data; web browser (ID#:14-3054)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821692&isnumber=6821645
  • Kazi, A.W.; Badr, H., "Some Observations On The Performance of CCN-Flooding," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp. 334-340, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785356 We focus on one of the earliest forwarding strategies proposed for Content-Centric Networks (CCN), namely the CCN-Flooding approach to populate the Forwarding Information Bases (FIB) and forward packets. Pure CCN-Flooding in its own right is a potentially viable, though highly deprecated, option to forward packets. But CCN-Flooding is also proposed as an integral component of alternative forwarding strategies. Thus, it cannot entirely be dismissed, and its behavior merits study. We examine the CCN-Flooding approach using a combination of several topologies and workload sets with differing characteristics. In addition to topological effects, we identify various issues that arise, such as: the difficulty of calibrating Pending Interest Table (PIT) timeouts; a PIT-induced isolation effect that negatively impacts bandwidth consumption and system response time; and the effects of adopting or not adopting FIB routes based on volatile in-network cache entries. In conclusion, we briefly compare CCN-Flooding vs. CCN-Publication when the overhead bandwidth costs of pre-populating FIBs in the latter are also taken into account.
    Keywords: computer networks; packet radio networks; telecommunication network topology; CCN-flooding; CCN-publication; FIB; PIT timeouts; PIT-induced isolation effect; bandwidth consumption; behavior merits; content-centric networks; forward packets; forwarding information bases; integral component; pending interest table timeouts; several topology; system response time; topological effects; volatile in-network cache entries; workload sets; Bandwidth; Floods; IP networks; Measurement; Network topology; Topology; Web and internet services; CCN performance evaluation; bandwidth consumption; caching; forwarding (ID#:14-3055)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785356&isnumber=6785290
  • Lei Wang; Jianfeng Zhan; Chunjie Luo; Yuqing Zhu; Qiang Yang; Yongqiang He; Wanling Gao; Zhen Jia; Yingjie Shi; Shujie Zhang; Chen Zheng; Gang Lu; Zhan, K.; Xiaona Li; Bizhu Qiu, "BigDataBench: A Big Data Benchmark Suite From Internet Services," High Performance Computer Architecture (HPCA), 2014 IEEE 20th International Symposium on, pp. 488-499, 15-19 Feb. 2014. doi: 10.1109/HPCA.2014.6835958 As the architecture, systems, and data management communities pay greater attention to innovative big data systems and architecture, the pressure of benchmarking and evaluating these systems rises. However, the complexity, diversity, frequently changing workloads, and rapid evolution of big data systems raise great challenges in big data benchmarking. Considering the broad use of big data systems, for the sake of fairness, big data benchmarks must include diverse data and workloads, which is the prerequisite for evaluating big data systems and architecture. Most state-of-the-art big data benchmarking efforts target specific types of applications or system software stacks, and hence are not qualified for the purposes mentioned above. This paper presents our joint research efforts on this issue with several industrial partners. Our big data benchmark suite, BigDataBench, not only covers broad application scenarios but also includes diverse and representative data sets. Currently, we choose 19 big data benchmarks from the dimensions of application scenarios, operations/algorithms, data types, data sources, software stacks, and application types, which are comprehensive for fairly measuring and evaluating big data systems and architecture. BigDataBench is publicly available from the project home page http://prof.ict.ac.cn/BigDataBench. We also comprehensively characterize the 19 big data workloads included in BigDataBench with varying data inputs. On a typical state-of-practice processor, the Intel Xeon E5645, we have the following observations: First, in comparison with traditional benchmarks, including PARSEC, HPCC, and SPECCPU, big data applications have very low operation intensity, which measures the ratio of the total number of instructions to the total byte count of memory accesses. Second, the volume of data input has a non-negligible impact on micro-architecture characteristics, which may impose challenges for simulation-based big data architecture research. Last but not least, corroborating the observations in CloudSuite and DCBench (which use smaller data inputs), we find that the numbers of L1 instruction cache (L1I) misses per 1000 instructions (in short, MPKI) of the big data applications are higher than in the traditional benchmarks; we also find that L3 caches are effective for the big data applications, corroborating the observation in DCBench.
    Keywords: Big Data; Web services; cache storage; memory architecture; Big Data benchmark suite; Big Data systems; BigDataBench; CloudSuite; DCBench; HPCC; Intel Xeon E5645; Internet services; L1 instruction cache misses; MPKI; PARSEC; SPECCPU; big data benchmark suite; big data benchmarking; data management community; data sources; data types; memory access; micro-architecture characteristics; simulation-based big data architecture research; software stacks; system software stack; Benchmark testing; Computer architecture; Search engines; Social network services; System software (ID#:14-3056)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6835958&isnumber=6835920
  • Imtiaz, Al; Hossain, Md.Jayed, "Distributed Cache Management Architecture: To Reduce The Internet Traffic By Integrating Browser And Proxy Caches," Electrical Engineering and Information & Communication Technology (ICEEICT), 2014 International Conference on, pp. 1-4, 10-12 April 2014. doi: 10.1109/ICEEICT.2014.6919088 The World Wide Web is one of the most popular Internet applications, and its traffic volume is increasing and evolving due to the popularity of social networking, file hosting, and video streaming sites. A wide range of research has been done in this field, and a number of architectures exist for caching web content, each with its own advantages and limitations. Browser caches serve a single user by caching and storing web content on the user's computer, whereas proxy caches can serve thousands of users by handling, providing, and optimizing that content. But the World Wide Web (WWW) suffers from scaling and reliability problems due to overloaded and congested proxy servers. Distributed and hierarchical architectures could be integrated as a hybrid architecture for better performance and efficiency. Based on secondary information from a literature review, this paper proposes several feasible strategies to improve the cache management architecture by integrating the browser with the proxy cache server, where the browser cache acts as a proxy cache server by sharing its content through the hybrid architecture. The paper also examines the present architecture and the challenges of the current system that need to be resolved.
    Keywords: Browsers; Computer architecture; Computers; Internet; Protocols; Servers; Web pages; Browser cache; Cache management; Distributed cache; Web Traffic; Web cache (ID#:14-3057)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6919088&isnumber=6919024
  • Einziger, G.; Friedman, R., "TinyLFU: A Highly Efficient Cache Admission Policy," Parallel, Distributed and Network-Based Processing (PDP), 2014 22nd Euromicro International Conference on, pp. 146-153, 12-14 Feb. 2014. doi: 10.1109/PDP.2014.34 This paper proposes to use a frequency based cache admission policy in order to boost the effectiveness of caches subject to skewed access distributions. Rather than deciding on which object to evict, TinyLFU decides, based on the recent access history, whether it is worth admitting an accessed object into the cache at the expense of the eviction candidate. Realizing this concept is enabled through a novel approximate LFU structure called TinyLFU, which maintains an approximate representation of the access frequency of recently accessed objects. TinyLFU is extremely compact and lightweight as it builds upon Bloom filter theory. The paper shows an analysis of the properties of TinyLFU, including simulations of both synthetic workloads as well as YouTube and Wikipedia traces. (A minimal sketch of the admission rule appears after this list.)
    Keywords: cache storage; data structures; Bloom filter theory; TinyLFU; Wikipedia; YouTube; access frequency; frequency based cache admission policy; novel approximate LFU structure; Approximation methods; Finite wordlength effects; Histograms; History; Memory management; Optimization; Radiation detectors; Cache; LFU; TinyLFU; approximate count; bloom filter; cloud cache; data cache; sketch; sliding window; web cache; zipf (ID#:14-3058)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6787265&isnumber=6787236
  • Pal, M.B.; Jain, D.C., "Web Service Enhancement Using Web Pre-fetching by Applying Markov Model," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp. 393-397, 7-9 April 2014. doi: 10.1109/CSNT.2014.84 The rapid growth of web applications has increased researchers' interest in this area. Web applications, accessed via a web browser over a network, are widely used for communication and data transfer. Web caching is a well-known strategy for improving the performance of web-based systems by keeping web objects that are likely to be used in the near future in locations closer to the user. Web caching mechanisms are implemented at three levels: client level, proxy level, and original server level. Significantly, proxy servers play a key role between users and web sites in lessening the response time of user requests and saving network bandwidth. Therefore, to achieve better response times, an efficient caching approach should be built into a proxy server. This paper uses FP-growth, the weighted rule mining concept, and a Markov model for fast and frequent web pre-fetching, in order to improve user response times for web pages and expedite users' visiting speed.
    Keywords: Markov processes; Web services; Web sites; cache storage; data mining; file servers; Markov model; Web application; Web based system performance; Web browser; Web caching mechanism; Web objects; Web page; Web prefetching; Web service enhancement; Web sites; client level caching; communication; computer network; data transfer; network bandwidth saving; proxy level caching; proxy server; server level caching; user request; user response; weighted rule mining concept; Cleaning; Markov processes; Servers; Web mining; Web pages; Log file; Web Services; data cleaning; log preprocessing (ID#:14-3059)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821425&isnumber=6821334
  • Johnson, T.; Seeling, P., "Desktop and Mobile Web Page Comparison: Characteristics, Trends, And Implications," Communications Magazine, IEEE, vol.52, no.9, pp. 144-151, September 2014. doi: 10.1109/MCOM.2014.6894465 The broad proliferation of mobile devices in recent years has drastically changed the means of accessing the World Wide Web. Describing a shift away from the desktop computer era for content consumption, predictions indicate that the main access of web-based content will come from mobile devices. Concurrently, the manner of content presentation has changed as well; web artifacts are allowing for richer media and higher levels of user interaction which is enabled by the increasing speeds of access networks. This article provides an overview of more than two years of high level web page characteristics by comparing the desktop and mobile client versions. Our study is the first long-term evaluation of differences as seen by desktop and mobile web browser clients. We showcase the main differentiating factors with respect to the number of web page object requests, their sizes, relationships, and web page object caching. We find that over time, initial page view sizes and number of objects increase faster for desktop versions. However, web page objects have similar sizes in both versions, though they exhibit a different composition by type of object in greater detail.
    Keywords: Web sites; microcomputers; mobile computing; online front-ends; subscriber loops; World Wide Web; access networks; broad proliferation; content consumption; desktop client versions; desktop computer; desktop web page; high level web page characteristics; mobile client versions; mobile devices; mobile web browser clients; mobile web page; web artifacts; web page object caching; web-based content; Cascading style sheets; Internet; Market research; Mobile communication; Mobile handsets; Web pages (ID#:14-3060)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6894465&isnumber=6894440
  • Pourmir, A.; Ramanathan, P., "Distributed caching and coding in VoD," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, pp. 233-238, April 27, 2014 - May 2, 2014. doi: 10.1109/INFCOMW.2014.6849237 Caching decreases content access time by keeping contents closer to the clients. In this paper we show that network coding chunks of different contents and storing them in cache, can be beneficial. Recent research considers caching network coded chunks of same content, but not different contents. This paper proposes three different methods, IP, layered-IP and Greedy algorithm, with different performance and complexity. Simulation results show that caching encoded chunks of different contents can significantly reduce the average data access time. Although we evaluate our ideas using Video on Demand (VoD) application on cable networks, they can be extended to broader contexts including content distribution in peer-to-peer networks and proxy web caches.
    Keywords: IP networks; Internet; cache storage; client-server systems; computational complexity; greedy algorithms; integer programming; network coding; peer-to-peer computing; video on demand; VoD application; average data access time; binary integer program; cable networks; caching network coded chunks; content distribution; greedy algorithm; layered-IP methods; peer-to-peer networks; proxy Web cache; video on demand application; Arrays; Conferences; Encoding; IP networks; Mathematical model; Probability; Servers; Binary Integer Program; caching; network coding (ID#:14-3061)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849237&isnumber=6849127
  • Fankhauser, T.; Qi Wang; Gerlicher, A.; Grecos, C.; Xinheng Wang, "Web Scaling Frameworks: A Novel Class Of Frameworks For Scalable Web Services In Cloud Environments," Communications (ICC), 2014 IEEE International Conference on, pp. 1760-1766, 10-14 June 2014. doi: 10.1109/ICC.2014.6883577 The social web and huge growth of mobile smart devices dramatically increases the performance requirements for web services. State-of-the-art Web Application Frameworks (WAFs) do not offer complete scaling concepts with automatic resource-provisioning, elastic caching or guaranteed maximum response times. These functionalities, however, are supported by cloud computing and needed to scale an application to its demands. Components like proxies, load-balancers, distributed caches, queuing and messaging systems have been around for a long time and in each field relevant research exists. Nevertheless, to create a scalable web service it is seldom enough to deploy only one component. In this work we propose to combine those complementary components to a predictable, composed system. The proposed solution introduces a novel class of web frameworks called Web Scaling Frameworks (WSFs) that take over the scaling. The proposed mathematical model allows a universally applicable prediction of performance in the single-machine- and multi-machine scope. A prototypical implementation is created to empirically validate the mathematical model and demonstrates both the feasibility and increase of performance of a WSF. The results show that the application of a WSF can triple the requests handling capability of a single machine and additionally reduce the number of total machines by 44%.
    Keywords: Web services; cache storage; cloud computing; WSFs; Web application frameworks; Web scaling frameworks; automatic resource-provisioning; cloud computing; cloud environments; distributed caches; elastic caching; guaranteed maximum response times; load-balancers; mathematical model; messaging systems; mobile smart devices; proxies; queuing; scalable Web services; social Web; Concurrent computing; Delays; Mathematical model; Multimedia communication; Radio frequency; Web services (ID#:14-3062)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883577&isnumber=6883277
  • Guedes, Erico A.C.; Silva, Luis E.T.; Maciel, Paulo R.M., "Performability Analysis Of I/O Bound Application On Container-Based Server Virtualization Cluster," Computers and Communication (ISCC), 2014 IEEE Symposium on, pp. 1-7, 23-26 June 2014. doi: 10.1109/ISCC.2014.6912556 The use of server virtualization for providing applications introduces overheads that degrade the performance of the provided systems; container-based virtualization has narrowed this overhead. In this work, we go a step further and demonstrate how a broad tuning combination of several performance factors, concerning the web cache server (the I/O bound application analysed), the file system, and the operating system, led to higher performance of the proposed cluster when executed in a container-based operating system virtualization environment. The availability and performance similarity of the web cache service under non-virtualized and virtualized systems were evaluated when submitted to the proposed web workload. Results reveal that the web cache service provided under the virtual environment, without unresponsiveness failures due to overload (i.e., with high availability), presents a 6% higher hit ratio and a 21.4% lower response time than those observed in non-virtualized environments.
    Keywords: Availability; Operating systems; Protocols; Servers; Throughput; Time factors; Virtualization; Container-based Operating Systems; Performability; Server Virtualization (ID#:14-3063)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912556&isnumber=6912451
  • Akherfi, K.; Harroud, H.; Gerndt, M., "A Mobile Cloud Middleware To Support Mobility And Cloud Interoperability," Multimedia Computing and Systems (ICMCS), 2014 International Conference on, pp. 1189-1194, 14-16 April 2014. doi: 10.1109/ICMCS.2014.6911331 With the recent advances in cloud computing and the improvement in the capabilities of mobile devices in terms of speed, storage, and computing power, Mobile Cloud Computing (MCC) is emerging as one of important branches of cloud computing. MCC is an extension of cloud computing with the support of mobility. In this paper, we first present the specific concerns and key challenges in mobile cloud computing, we then discuss the different approaches to tackle the main issues in MCC that have been introduced so far, and finally we focus on describing the proposed overall architecture of a middleware that will contribute to providing mobile users data storage and processing services based on their mobile devices capabilities, availability, and usage. A prototype of the middleware is developed and three scenarios are described to demonstrate how the middleware performs in adapting the provision of cloud web services by transforming SOAP messages to REST and XML format to JSON, in optimizing the results by extracting relevant information, and in improving the availability by caching. Initial analysis shows that the mobile cloud middleware improves the quality of service for mobiles, and provides lightweight responses for mobile cloud services.
    Keywords: cloud computing; middleware; mobile computing; object-oriented methods; JSON format; MCC; REST format; SOAP messages; XML format; cloud interoperability; data processing service; data storage service; mobile cloud computing; mobile devices; mobility support; Cloud computing; Mobile communication; Mobile computing; Mobile handsets; Simple object access protocol; XML (ID#:14-3064)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911331&isnumber=6911126
  • Yizheng Chen; Antonakakis, M.; Perdisci, R.; Nadji, Y.; Dagon, D.; Wenke Lee, "DNS Noise: Measuring the Pervasiveness of Disposable Domains in Modern DNS Traffic," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp. 598-609, 23-26 June 2014. doi: 10.1109/DSN.2014.61 In this paper, we present an analysis of a new class of domain names: disposable domains. We observe that popular web applications, along with other Internet services, systematically use this new class of domain names. Disposable domains are likely generated automatically, characterized by a "one-time use" pattern, and appear to be used as a way of "signaling" via DNS queries. To shed light on the pervasiveness of disposable domains, we study 24 days of live DNS traffic spanning a year observed at a large Internet Service Provider. We find that disposable domains increased from 23.1% to 27.6% of all queried domains, and from 27.6% to 37.2% of all resolved domains observed daily. While this creative use of DNS may enable new applications, it may also have unanticipated negative consequences on the DNS caching infrastructure, DNSSEC validating resolvers, and passive DNS data collection systems.
    Keywords: Internet; query processing; telecommunication traffic; ubiquitous computing; DNS caching infrastructure; DNS noise; DNS queries; DNS traffic; DNSSEC; Internet service provider; Internet services; Web applications; disposable domain pervasiveness measurement; live DNS traffic spanning; one-time use pattern; passive DNS data collection systems; signaling; Data collection; Educational institutions; Google; Internet; Monitoring; Servers; Web and internet services; Disposable Domain Name; Internet Measurement (ID#:14-3065)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903614&isnumber=6903544
  • Rottenstreich, O.; Keslassy, I., "The Bloom Paradox: When Not to Use a Bloom Filter," Networking, IEEE/ACM Transactions on, vol. PP, no. 99, pp. 1-1, Feb 2014. doi: 10.1109/TNET.2014.2306060 In this paper, we uncover the Bloom paradox in Bloom Filters: Sometimes, the Bloom Filter is harmful and should not be queried. We first analyze conditions under which the Bloom paradox occurs in a Bloom Filter and demonstrate that it depends on the a priori probability that a given element belongs to the represented set. We show that the Bloom paradox also applies to Counting Bloom Filters (CBFs) and depends on the product of the hashed counters of each element. In addition, we further suggest improved architectures that deal with the Bloom paradox in Bloom Filters, CBFs, and their variants. We further present an application of the presented theory in cache sharing among Web proxies. Lastly, using simulations, we verify our theoretical results and show that our improved schemes can lead to a large improvement in the performance of Bloom Filters and CBFs.
    Keywords: A priori membership probability; Bloom Filter; Counting Bloom Filter; the Bloom Filter paradox (ID#:14-3066)
    URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748924&isnumber=4359146
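
The geo-inference and scriptless timing papers above both exploit one physical fact: a resource served from a cache loads measurably faster than one fetched fresh, and that gap is observable to anyone who can time the load. The sketch below is only a stand-in for the attack setting -- a Python client timing its own fetches, where repeat requests are typically faster because DNS resolvers, CDNs, or proxies have warmed -- rather than cross-origin measurement of a victim's browser cache. The URL and the 2x threshold are arbitrary placeholders.

    import time
    import urllib.request

    def fetch_seconds(url):
        """Wall-clock time for one full fetch of url."""
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        return time.perf_counter() - start

    url = "https://example.com/"                      # placeholder target
    cold = fetch_seconds(url)                         # first fetch: caches cold
    warm = min(fetch_seconds(url) for _ in range(3))  # repeats: caches warm

    print(f"cold: {cold * 1000:.1f} ms, warm: {warm * 1000:.1f} ms")
    # The cold/warm gap is the side channel: an attacker who can time a
    # cross-origin load can infer whether the victim's cache already holds
    # the resource, i.e., whether the victim has been to that site.
    if cold > 2 * warm:
        print("timing gap consistent with a cache hit on the repeats")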
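
Einziger and Friedman's TinyLFU, cited above, is an admission policy rather than an eviction policy: a newly seen object displaces the eviction candidate only if its estimated historical access frequency is higher. Here is a minimal sketch of that decision rule, using a count-min sketch as the approximate frequency table; the sizes, hash choice, and plain-LRU backing cache are simplifications, and the paper's actual structure adds counter aging, a doorkeeper filter, and far more compact encodings.

    import hashlib
    from collections import OrderedDict

    class CountMinSketch:
        """Small approximate frequency counter (illustrative sizes)."""
        def __init__(self, width=1024, depth=4):
            self.width = width
            self.rows = [[0] * width for _ in range(depth)]

        def _index(self, item, row):
            digest = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8)
            return int.from_bytes(digest.digest(), "big") % self.width

        def add(self, item):
            for r, row in enumerate(self.rows):
                row[self._index(item, r)] += 1

        def estimate(self, item):
            return min(row[self._index(item, r)] for r, row in enumerate(self.rows))

    class TinyLFUCache:
        """LRU cache guarded by a frequency-based admission filter."""
        def __init__(self, capacity=128):
            self.capacity = capacity
            self.cache = OrderedDict()      # key -> value, in LRU order
            self.freq = CountMinSketch()

        def get(self, key):
            self.freq.add(key)              # every access feeds the sketch
            if key in self.cache:
                self.cache.move_to_end(key)
                return self.cache[key]
            return None

        def put(self, key, value):
            self.freq.add(key)
            if key in self.cache:
                self.cache[key] = value
                self.cache.move_to_end(key)
                return
            if len(self.cache) >= self.capacity:
                victim = next(iter(self.cache))   # LRU eviction candidate
                # TinyLFU admission rule: reject the newcomer unless it is
                # historically more frequent than the would-be victim.
                if self.freq.estimate(key) <= self.freq.estimate(victim):
                    return
                self.cache.popitem(last=False)
            self.cache[key] = value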

Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


SoS VO Member Contributions

Member Contribution



The Science of Security Virtual Organization was established to provide a focal point for security science related work and to facilitate the creation of a collaborative community to advance security science. The SoS VO provides a wide range of information, networking, and collaboration capabilities, and the newsletter was developed to showcase research programs of interest to the Science of Security Community (SoS). The SoS VO encourages members to contribute material related to Science of Security, and we are pleased to present the contribution of SoS member Saman Zonouz, Assistant Professor, Electrical and Computer Engineering Department, Rutgers University.

Cyber-Physical Systems Security

Cyber-physical systems are, generally speaking, systems in which computers control physical entities. They exist in areas as diverse as automobiles, manufacturing, energy, transportation, chemistry, and computer appliances. In this bibliography, the primary focus of the published research is smart grid technologies--the use of cyber-physical systems to coordinate the generation, transmission, and use of electrical power and its sources. Because of its strategic importance and the consequences of intrusion, the smart grid is of particular importance to the Science of Security.

"A Trusted Safety Verifier for Process Controller Code", McLaughlin, Stephen; Zonouz, Saman; Pohly, Devin; and McDaniel, Patrick, Networks and Distributed Systems Symposium (NDSS) 2014 Attackers can leverage security vulnerabilities in control systems to make physical processes behave unsafely. Currently, the safe behavior of a control system relies on a Trusted Computing Base (TCB) of commodity machines, firewalls, networks, and embedded systems. These large TCBs, often containing known vulnerabilities, expose many attack vectors which can impact process safety. In this paper, we present the Trusted Safety Verifier (TSV), a minimal TCB for the verification of safety-critical code executed on programmable controllers. No controller code is allowed to be executed before it passes physical safety checks by TSV. If a safety violation is found, TSV provides a demonstrative test case to system operators. TSV works by first translating assembly-level controller code into an intermediate language, ILIL. ILIL allows us to check code containing more instructions and features than previous controller code safety verification techniques. TSV efficiently mixes symbolic execution and model checking by transforming an ILIL program into a novel temporal execution graph that lumps together safety equivalent controller states. We implemented TSV on a Raspberry Pi computer as a bump-in-the-wire that intercepts all controller bound code. Our evaluation shows that it can test a variety of programs for common safety properties in an average of less than three minutes, and under six minutes in the worst case--a small one-time addition to the process engineering life cycle. (ID#:14-3329)
URL: http://www.internetsociety.org/doc/trusted-safety-verifier-process-controller-code
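
TSV's central step is exhaustive checking of controller code against physical safety properties, with a demonstrative test case reported whenever one fails. The toy sketch below captures only the flavor of that check: a hypothetical two-output controller with a deliberately planted bug, brute-forced over a small discretized state space. TSV itself operates on real controller assembly via symbolic execution and a temporal execution graph, which this sketch does not attempt.

    from itertools import product

    # Hypothetical controller: a heater and a vent driven by a sensed
    # temperature plus one bit of retained state (a heater latch).
    def controller(temp, latch):
        heater = temp < 40 or latch        # planted bug: the latch ignores temp
        vent = temp > 80
        next_latch = heater and temp < 60
        return heater, vent, next_latch

    # Safety property: heater and vent must never be energized together.
    def safe(heater, vent):
        return not (heater and vent)

    # Brute-force sweep of a discretized state space; TSV explores the
    # reachable states symbolically, which scales far beyond this.
    violations = [(t, latch)
                  for t, latch in product(range(0, 121, 5), (False, True))
                  if not safe(*controller(t, latch)[:2])]
    print("demonstrative counterexamples:", violations or "none found")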


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


General Topics of Interest

General Topics of Interest


General Topics of Interest reflects today's most popularly discussed challenges and issues in the Cybersecurity space. GToI includes news items related to Cybersecurity, updated information regarding academic SoS research, interdisciplinary SoS research, profiles on leading researchers in the field of SoS, and global research being conducted on related topics.

(ID#:14-3351)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.




The Secret Life of Passwords

The Secret Life of Passwords



"The Secret Life of Passwords", 23 November 2014, The New York Times.

Your password must contain an uppercase letter, a number, an ancient rune, a hieroglyph, and an Elvish character. How secure are our most secure passwords? Ian Urbina, who participated in the 2013 SoS Paper Competition, captivates in a narrative about the other side of passwords. Instead of bothersome access codes painfully extracted from the stubborn recesses of our minds, they are surprisingly intimate and expressive. As Urbina writes, passwords can be "suffused with pathos, mischief, sometimes even poetry" (Urbina 2014). In this New York Times article, Urbina takes readers back to one of the greatest tragedies in American history - September 11th, 2001. Howard Lutnick, chief executive of the financial services firm Cantor Fitzgerald, had just lost 658 of his co-workers and friends. Lutnick recounts the second blow of realizing his company could fold if the passwords of his fallen coworkers and friends could not be recovered. Read Lutnick's story, and Urbina's consequently inspired journey into passwords, to discover whether human sentimentality - considered a weakness by security professionals - is actually a saving grace. See http://www.nytimes.com/2014/11/19/magazine/the-secret-life-of-passwords.html?_r=0.

(ID#: 14-3550)


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.


Upcoming Events

Upcoming Events


FloCon 2015
This open network security conference invites members of the security industry to an event featuring keynote speakers, tutorials, and conversation. Sponsors include CERT and the Software Engineering Institute at Carnegie Mellon. (ID# 14-70066)
Event Date: Mon 1/12/15 - Thurs 1/15/15
Location: Portland, Oregon
URL: https://www.cert.org/flocon/

ICS Security Conference
ICS invites experienced professionals, academics, practitioners, and newcomers to participate in technical presentations, idea sharing, and training sessions. Sponsors include IBM, Cisco, and more. (ID# 14-70067)
Event Date: Tues 1/13/15 - Fri 1/16/15
Location: North Miami, FL
URL: http://www.cvent.com/events/s4x15-week/event-summary-6527b763e4b94569a3612510327b7278.aspx

Shmoocon
Shmoocon is an annual East Coast hacker convention focused on tackling the toughest challenges in the infosec community. Join in for three days of security talks, speakers, and info sessions. Some topics include "Knock Knock: A Survey of iOS Authentication Methods", "The Windows Sandbox Paradox", and much more. (ID# 14-70068)
Event Date: Fri 1/16/15 - Sun 1/18/15
Location: Washington, DC
URL: http://www.shmoocon.org/shmoocon

2015 International Conference on Engineering and Info Technology (ICEIT 2015)
ICEIT 2015 strives to provide a forum and environment for scholars, professionals, academics, and graduate students to present the latest research findings and exchange peer review and feedback. (ID# 14-70069)
Event Date: Mon 1/19/15 - Wed 1/21/15
Location: Singapore
URL: http://www.iceit-conf.org/

Sparklecon 2.0
Determined to change the perception of hacker conventions, Sparklecon 2.0 is a free event inviting everyone from beginners to experts in the security field. Topics to be discussed include "Locks and Physical Security", "Infosec, Wifi, and Crypto", and "Hardware and Electronics". (ID# 14-70070)
Event Date: Fri 1/23/15 - Sun 1/25/15
Location: Fullerton, CA
URL: http://www.sparklecon.org/wiki/index.php?title=Main_Page

OWASP AppSec California
California's leading app security conference, AppSec, invites the best and the brightest in information security to attend, speak, and discuss. Keynote speakers include Yahoo's InfoSec Vice President, a Security Engineer at Twitter, and more. Sessions will cover topics in secure app building, advanced web exploitation, .NET reversing and exploitation, and much more. (ID# 14-70071)
Event Date: Mon 1/26/15 - Wed 1/28/15
Location: Santa Monica, CA
URL: https://2015.appseccalifornia.org/#about

Financial Cryptography and Data Security 2015
This conference focuses on the critical importance of securing transactions and systems. The event invites security and cryptography researchers, experts, and practitioners, as well as economists, policy-makers, and members of commercial industry concerned about vulnerabilities in current methods of transaction. Several workshops, speaker sessions, presentations, and panels will be held. (ID# 14-70072)
Event Date: Mon 1/26/15 - Fri 1/30/15
Location: InterContinental San Juan Hotel, Puerto Rico
URL: http://fc15.ifca.ai/

ISSA CISO Forum 2015
This year's event, themed "InfoSec and Legal Collaboration", invites leaders in information security, law, and privacy to join the conversation about changes to information security practices. Executive roundtables, speaker sessions, and panels will be featured. (ID# 14-70073)
Event Date: Thurs 1/29/15 - Fri 1/30/15
Location: Atlanta, GA
URL: http://www.issa.org/?page=CISO2015January

NEDForum London
NEDForum will focus on the Darknet, threat intelligence, attack detection, and cyber risk mitigation. Is the Darknet opening up new doors for companies to benefit from? What are the commercial and legal implications and, if any, repercussions of Darknet sites? Speakers include McAfee's Chief Technology Officer, the London Police Commissioner, and more. (ID# 14-70074)
Event Date: Fri 1/30/15
Location: London, UK
URL: http://www.nedforum.com/

DEFCON | OWASP Lucknow Information Security Meet 2015
DEFCON Lucknow is a DEFCON registered convention inviting all infosec enthusiasts, researchers, hackers, coders, netsec professionals, web developers, students, government, and industry. The event will feature keynote speakers, technical talks, and networking opportunities. (ID# 14-70075)
Event Date: Sun 2/22/15
Location: The Grand JBR, Lucknow, India
URL: http://defconlucknow.in/

SANS Cyber Threat Intelligence Summit
This two-day event focuses on enabling organizations to better prepare for, detect, analyze, and defend against cyber threats and attacks. The event features speakers including cybersecurity expert Brian Krebs, along with training courses, information sessions, and discussions. (ID# 14-70076)
Event Date: Mon 2/2/15 - Mon 2/9/15
Location: Washington DC
URL: https://www.sans.org/event/cyber-threat-intelligence-summit-2015?utm_source=offsite&utm_medium=EventListing&utm_content=20141003_TE_100314_CTIntel15_C...

Nullcon Conference 2015
Nullcon serves as a space for industry professionals, members of academia, scholars, and interested parties to exchange information on the latest attack vectors, zero day vulnerabilities, and unknown threats. (ID# 14-70077)
Event Date: Wed 2/4/15 - Thurs 2/5/15
Location: Goa, India
URL: http://nullcon.net/website/


Note:

Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.