Publications of Interest
The Publications of Interest section contains bibliographical citations, abstracts (if available), and links on specific topics and research problems of interest to the Science of Security community.
How recent are these publications?
These bibliographies cover recent scholarly research that has been presented or published within the past year. Some entries update work presented in previous years; others introduce new topics.
How are topics selected?
The specific topics are selected from materials that have been peer reviewed and presented at SoS conferences or referenced in current work. The topics are also chosen for their usefulness to current researchers.
How can I submit or suggest a publication?
Researchers willing to share their work are welcome to submit a citation, abstract, and URL for consideration and posting, and to identify additional topics of interest to the community. Researchers are also encouraged to share this request with their colleagues and collaborators.
Submissions and suggestions may be sent to: research (at) securedatabank.net
(ID#:14-3138)
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Big Data
Big data security is a growing area of interest for researchers. The work presented here, presented or published in the first half of 2014, ranges from cyber-threat detection in critical infrastructures to privacy protection.
- Abawajy, J.; Kelarev, A; Chowdhury, M., "Large Iterative Multitier Ensemble Classifiers for Security of Big Data," Emerging Topics in Computing, IEEE Transactions on, vol. PP, no.99, pp.1,1, April 2014. doi: 10.1109/TETC.2014.2316510 This article introduces and investigates Large Iterative Multitier Ensemble (LIME) classifiers specifically tailored for Big Data. These classifiers are very large, but are quite easy to generate and use. They can be so large that it makes sense to use them only for Big Data. They are generated automatically as a result of several iterations in applying ensemble meta classifiers. They incorporate diverse ensemble meta classifiers into several tiers simultaneously and combine them into one automatically generated iterative system so that many ensemble meta classifiers function as integral parts of other ensemble meta classifiers at higher tiers. In this paper, we carry out a comprehensive investigation of the performance of LIME classifiers for a problem concerning security of big data. Our experiments compare LIME classifiers with various base classifiers and standard ordinary ensemble meta classifiers. The results obtained demonstrate that LIME classifiers can significantly increase the accuracy of classifications. LIME classifiers performed better than the base classifiers and standard ensemble meta classifiers.
Keywords: Big data; Data handling; Data mining; Data storage systems; Information management; Iterative methods; Malware (ID#:14-2639)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6808522&isnumber=6558478
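As a rough illustration of the tiered-ensemble idea described in this abstract, the sketch below builds a small two-tier ensemble in Python with scikit-learn: tier one bags three diverse base learners, and tier two combines the bagged ensembles by soft voting. This is a minimal sketch of the general pattern, not the authors' LIME system; the dataset, the tier structure, and all parameters are illustrative assumptions.

```python
# Minimal two-tier ensemble sketch (illustrative only; not the LIME system).
# Tier 1: bagging over three diverse base learners.
# Tier 2: a voting meta-classifier that combines the tier-1 ensembles.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tier1 = [
    ("bag_tree", BaggingClassifier(DecisionTreeClassifier(), n_estimators=25)),
    ("bag_nb", BaggingClassifier(GaussianNB(), n_estimators=25)),
    ("bag_lr", BaggingClassifier(LogisticRegression(max_iter=500), n_estimators=25)),
]
tier2 = VotingClassifier(estimators=tier1, voting="soft")
tier2.fit(X_tr, y_tr)
print("held-out accuracy:", tier2.score(X_te, y_te))
```

A full LIME classifier would iterate this construction over further tiers, which only pays off at data volumes far beyond this toy example.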
- Hurst, W.; Merabti, M.; Fergus, P., "Big Data Analysis Techniques for Cyber-threat Detection in Critical Infrastructures," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp.916, 921, 13-16 May 2014. doi: 10.1109/WAINA.2014.141 The research presented in this paper offers a way of supporting the security currently in place in critical infrastructures by using behavioral observation and big data analysis techniques to add to the Defense in Depth (DiD). As this work demonstrates, applying behavioral observation to critical infrastructure protection has effective results. Our design for Behavioral Observation for Critical Infrastructure Security Support (BOCISS) processes simulated critical infrastructure data to detect anomalies which constitute threats to the system. This is achieved using feature extraction and data classification. The data is provided by the development of a nuclear power plant simulation using Siemens Tecnomatix Plant Simulator and the programming language SimTalk. Using this simulation, extensive realistic data sets are constructed and collected, when the system is functioning as normal and during a cyber-attack scenario. The big data analysis techniques, classification results and an assessment of the outcomes are presented.
Keywords: Big Data; critical infrastructures; feature extraction; pattern classification; programming languages; security of data; BOCISS process; DiD; Siemens Tecnomatix Plant Simulator; anomaly detection; behavioral observation; big data analysis techniques; critical infrastructure protection; critical infrastructure security support process; cyber-attack scenario; cyber-threat detection; data classification; defence in depth; feature extraction; nuclear power plant simulation; programming language SimTalk; realistic data set; simulated critical infrastructure data; Big data; Data models; Feature extraction; Inductors; Security; Support vector machine classification; Water resources; Behavioral Observation; Big Data; Critical Infrastructure; Data Classification; Simulation (ID#:14-2640)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844756&isnumber=6844560
- Demchenko, Y.; de Laat, C.; Membrey, P., "Defining Architecture Components of the Big Data Ecosystem," Collaboration Technologies and Systems (CTS), 2014 International Conference on, pp.104,112, 19-23 May 2014. doi: 10.1109/CTS.2014.6867550 Big Data are becoming a new technology focus both in science and in industry and motivate a technology shift to data centric architecture and operational models. There is a vital need to define the basic information/semantic models, architecture components and operational models that together comprise a so-called Big Data Ecosystem. This paper discusses the nature of Big Data, which may originate from different scientific, industry and social activity domains, and proposes an improved Big Data definition that includes the following parts: Big Data properties (also called Big Data 5V: Volume, Velocity, Variety, Value and Veracity), data models and structures, data analytics, infrastructure and security. The paper discusses the paradigm change from traditional host or service based to data centric architecture and operational models in Big Data. The Big Data Architecture Framework (BDAF) is proposed to address all aspects of the Big Data Ecosystem and includes the following components: Big Data Infrastructure, Big Data Analytics, Data structures and models, Big Data Lifecycle Management, Big Data Security. The paper analyses requirements for the above-mentioned components and provides suggestions on how they can address the main Big Data challenges. The presented work intends to provide a consolidated view of the Big Data phenomena and related challenges to modern technologies, and to initiate wide discussion.
Keywords: Big Data; data analysis; security of data; BDAF; Big Data analytics; Big Data architecture framework; Big Data ecosystem; Big Data infrastructure; Big Data lifecycle management; Big Data properties; Big Data security; data analytics; data centric architecture; data infrastructure; data models; data operational models; data security; data structures; information-semantic models; value property; variety property; velocity property; veracity property; volume property; Big data; Biological system modeling; Computer architecture; Data models; Ecosystems; Industries; Security; Big Data Architecture Framework (BDAF); Big Data Ecosystem; Big Data Infrastructure (BDI); Big Data Lifecycle Management (BDLM); Big Data Technology; Cloud based Big Data Infrastructure Services (ID#:14-2641)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6867550&isnumber=6867522
- Rongxing Lu; Hui Zhu; Ximeng Liu; Liu, J.K.; Jun Shao, "Toward Efficient And Privacy-Preserving Computing In Big Data Era," Network, IEEE , vol.28, no.4, pp.46,50, July-August 2014. doi: 10.1109/MNET.2014.6863131 Big data, because it can mine new knowledge for economic growth and technical innovation, has recently received considerable attention, and many research efforts have been directed to big data processing due to its high volume, velocity, and variety (referred to as "3V") challenges. However, in addition to the 3V challenges, the flourishing of big data also hinges on fully understanding and managing newly arising security and privacy challenges. If data are not authentic, new mined knowledge will be unconvincing; while if privacy is not well addressed, people may be reluctant to share their data. Because security has been investigated as a new dimension, "veracity," in big data, in this article, we aim to exploit new challenges of big data in terms of privacy, and devote our attention toward efficient and privacy-preserving computing in the big data era. Specifically, we first formalize the general architecture of big data analytics, identify the corresponding privacy requirements, and introduce an efficient and privacy-preserving cosine similarity computing protocol as an example in response to data mining's efficiency and privacy requirements in the big data era.
Keywords: Big Data; data analysis; data mining; data privacy; security of data; big data analytics; big data era; big data processing; data mining efficiency; privacy requirements; privacy-preserving cosine similarity computing protocol; security; Authentication; Big data; Cryptography; Data privacy; Economics; Information analysis; Privacy (ID#:14-2642)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6863131&isnumber=6863119
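For context, the quantity at the heart of the protocol above is ordinary cosine similarity. A minimal plaintext sketch follows; the paper's contribution is computing this value in a privacy-preserving way, without revealing the raw vectors, which this reference sketch makes no attempt at.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Plaintext cosine similarity: cos(theta) = (a . b) / (||a|| * ||b||)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Example: two users' rating vectors; a value near 1 means similar profiles.
u = np.array([4.0, 0.0, 3.0, 5.0])
v = np.array([5.0, 1.0, 2.0, 4.0])
print(round(cosine_similarity(u, v), 4))
```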
- Kan Yang; Xiaohua Jia; Kui Ren; Ruitao Xie; Liusheng Huang, "Enabling Efficient Access Control With Dynamic Policy Updating For Big Data In The Cloud," INFOCOM, 2014 Proceedings IEEE, pp.2013,2021, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848142 Due to the high volume and velocity of big data, it is an effective option to store big data in the cloud, because the cloud has capabilities of storing big data and processing a high volume of user access requests. Attribute-Based Encryption (ABE) is a promising technique to ensure the end-to-end security of big data in the cloud. However, policy updating has always been a challenging issue when ABE is used to construct access control schemes. A trivial implementation is to let data owners retrieve the data and re-encrypt it under the new access policy, and then send it back to the cloud. This method incurs a high communication overhead and a heavy computation burden on data owners. In this paper, we propose a novel scheme enabling efficient access control with dynamic policy updating for big data in the cloud. We focus on developing an outsourced policy updating method for ABE systems. Our method can avoid the transmission of encrypted data and minimize the computation work of data owners, by making use of the previously encrypted data with old access policies. Moreover, we also design policy updating algorithms for different types of access policies. The analysis shows that our scheme is correct, complete, secure and efficient.
Keywords: Big Data; authorisation; cloud computing; cryptography; ABE; Big Data; access control; access policy; attribute-based encryption; cloud; dynamic policy updating; end-to-end security; outsourced policy updating method; Access control; Big data; Encryption; Public key; Servers; ABE; Access Control; Big Data; Cloud; Policy Updating (ID#:14-2643)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848142&isnumber=6847911
- Xindong Wu; Xingquan Zhu; Gong-Qing Wu; Wei Ding, "Data Mining With Big Data," Knowledge and Data Engineering, IEEE Transactions on, vol.26, no.1, pp.97,107, Jan. 2014. doi: 10.1109/TKDE.2013.109 Big Data concern large-volume, complex, growing data sets with multiple, autonomous sources. With the fast development of networking, data storage, and the data collection capacity, Big Data are now rapidly expanding in all science and engineering domains, including physical, biological and biomedical sciences. This paper presents a HACE theorem that characterizes the features of the Big Data revolution, and proposes a Big Data processing model, from the data mining perspective. This data-driven model involves demand-driven aggregation of information sources, mining and analysis, user interest modeling, and security and privacy considerations. We analyze the challenging issues in the data-driven model and also in the Big Data revolution.
Keywords: data mining; user modelling; Big Data processing model; Big Data revolution; HACE theorem; data collection capacity; data driven model; data mining; data storage; demand driven aggregation; growing data sets; information sources; networking; user interest modeling; Data handling; Data models; Data privacy; Data storage systems; Distributed databases; Information management; Big Data; autonomous sources; complex and evolving associations; data mining; heterogeneity (ID#:14-2644)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6547630&isnumber=6674933
- Sandryhaila, A; Moura, J., "Big Data Analysis with Signal Processing on Graphs: Representation and processing of massive data sets with irregular structure," Signal Processing Magazine, IEEE, vol.31, no.5, pp.80, 90, Sept. 2014. doi: 10.1109/MSP.2014.2329213 Analysis and processing of very large data sets, or big data, poses a significant challenge. Massive data sets are collected and studied in numerous domains, from engineering sciences to social networks, biomolecular research, commerce, and security. Extracting valuable information from big data requires innovative approaches that efficiently process large amounts of data as well as handle and, moreover, utilize their structure. This article discusses a paradigm for large-scale data analysis based on the discrete signal processing (DSP) on graphs (DSPG). DSPG extends signal processing concepts and methodologies from the classical signal processing theory to data indexed by general graphs. Big data analysis presents several challenges to DSPG, in particular, in filtering and frequency analysis of very large data sets. We review fundamental concepts of DSPG, including graph signals and graph filters, graph Fourier transform, graph frequency, and spectrum ordering, and compare them with their counterparts from the classical signal processing theory. We then consider product graphs as a graph model that helps extend the application of DSPG methods to large data sets through efficient implementation based on parallelization and vectorization. We relate the presented framework to existing methods for large-scale data processing and illustrate it with an application to data compression.
Keywords: Big data; Data storage; Digital signal processing; Fourier transforms; Graph theory; Information analysis; Information processing; Time series analysis (ID#:14-2645)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6879640&isnumber=6879573
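The core DSPG construction described in this abstract can be made concrete in a few lines: for an undirected graph with adjacency matrix A, the eigenvectors of A serve as the graph Fourier basis and the eigenvalues play the role of frequencies (classical DSP is recovered when A is the cyclic shift matrix). Below is a minimal numpy sketch on a hypothetical 4-node path graph; it illustrates the textbook definition, not the authors' framework.

```python
import numpy as np

# Adjacency matrix of a small undirected graph (4 nodes on a path).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

# Graph signal: one value per node.
s = np.array([1.0, 2.0, 0.5, -1.0])

# Eigendecomposition of A gives the graph Fourier basis;
# the eigenvalues play the role of frequencies.
eigvals, V = np.linalg.eigh(A)  # eigh, since A is symmetric

s_hat = V.T @ s     # graph Fourier transform of the signal
s_back = V @ s_hat  # inverse transform recovers the signal
assert np.allclose(s, s_back)
print("graph frequencies:", np.round(eigvals, 3))
print("spectrum:", np.round(s_hat, 3))
```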
- Peng Li; Song Guo, "Load Balancing For Privacy-Preserving Access To Big Data In Cloud," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, pp.524,528, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849286 In the era of big data, many users and companies start to move their data to cloud storage to simplify data management and reduce data maintenance cost. However, security and privacy issues become major concerns because third-party cloud service providers are not always trusty. Although data contents can be protected by encryption, the access patterns that contain important information are still exposed to clouds or malicious attackers. In this paper, we apply the ORAM algorithm to enable privacy-preserving access to big data that are deployed in distributed file systems built upon hundreds or thousands of servers in a single or multiple geo-distributed cloud sites. Since the ORAM algorithm would lead to serious access load unbalance among storage servers, we study a data placement problem to achieve a load balanced storage system with improved availability and responsiveness. Due to the NP-hardness of this problem, we propose a low-complexity algorithm that can deal with large-scale problem size with respect to big data. Extensive simulations are conducted to show that our proposed algorithm finds results close to the optimal solution, and significantly outperforms a random data placement algorithm.
Keywords: Big Data; cloud computing; computational complexity; data protection; distributed databases; file servers; information retrieval; random processes; resource allocation; storage management; Big Data; NP-hardness; ORAM algorithm; cloud storage; data availability; data content protection; data maintenance cost reduction; data management; data placement problem; data security; distributed file system; encryption; file server; geo-distributed cloud site; load balanced storage system; low-complexity algorithm; privacy preserving access; random data placement algorithm; responsiveness; storage server; Big data; Cloud computing; Conferences; Data privacy; Random access memory; Security; Servers (ID#:14-2646)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849286&isnumber=6849127
- Du, Nan; Manjunath, Niveditha; Shuai, Yao; Burger, Danilo; Skorupa, Ilona; Schuffny, Rene; Mayr, Christian; Basov, Dimitri N.; Di Ventra, Massimiliano; Schmidt, Oliver G.; Schmidt, Heidemarie, "Novel Implementation Of Memristive Systems For Data Encryption And Obfuscation," Journal of Applied Physics, vol. 115, no.12, pp.124501,124501-7, Mar 2014. doi: 10.1063/1.4869262 With the rise of big data handling, new solutions are required to drive cryptographic algorithms for maintaining data security. Here, we exploit the nonvolatile, nonlinear resistance change in BiFeO3 memristors [Shuai et al., J. Appl. Phys. 109, 124117 (2011)] by applying a voltage for the generation of second and higher harmonics and develop a new memristor-based encoding system from it to encrypt and obfuscate data. It is found that a BiFeO3 memristor in high and low resistance state can be used to generate two clearly distinguishable sets of second and higher harmonics as recently predicted theoretically [Cohen et al., Appl. Phys. Lett. 100, 133109 (2012)]. The computed autocorrelation of encrypted data using higher harmonics generated by a BiFeO3 memristor shows that the encoded data distribute randomly.
Keywords: (not provided) (ID#:14-2647)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778720&isnumber=6777935
- Kaushik, A; Satvika; Gupta, K.; Kumar, A, "Digital Image Chaotic Encryption (DICE - A Partial-Symmetric Key Cipher For Digital Images)," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, pp.314,317, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798345 The swift growth of communication facilities and the ever-decreasing cost of computer hardware have brought tremendous possibilities of expansion for commercial and academic purposes. With widely available communication media like the Internet, not only the good guys but also the bad guys have an advantage. Hackers or crackers can take advantage of network vulnerabilities and pose a big threat to network security personnel. Information can be transferred by means of textual data, digital images, videos, animations, etc., and thus requires better defense. In particular, images are more visual and descriptive than textual data; hence they act as a momentous means of communication in the modern world. Protection of digital images during transmission becomes a more serious concern when they are confidential war plans, top-secret weapon photographs, stealthy military data, surreptitious architectural designs of financial buildings, etc. Several mechanisms like cryptography, steganography, hash functions, and digital signatures have been designed to provide the ultimate safety for secret data. When the data is in the form of digital images, certain features of images, like high redundancy, strong correlation between neighboring pixels, and abundance of information expression, need some extra fortification during transmission. This paper proposes a new cryptographic cipher named Digital Image Chaotic Encryption (DICE) to meet the special requisites of secure image transfer. The strength of DICE lies in its partial-symmetric key nature, i.e., even discovery of the encryption key by a hacker will not guarantee decoding of the original message.
Keywords: computer network security; cryptography; image processing; DICE; Internet; digital image chaotic encryption; digital images protection; digital signatures; hash functions; network security personnel; partial-symmetric key cipher; steganography; Algorithm design and analysis; Biomedical imaging; Encryption; Standards; Block cipher; DICE Partial-Symmetric key algorithm; Digital watermarking (ID#:14-2648)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798345&isnumber=6798279
- Haoliang Lou; Yunlong Ma; Feng Zhang; Min Liu; Weiming Shen, "Data Mining For Privacy Preserving Association Rules Based On Improved MASK Algorithm," Computer Supported Cooperative Work in Design (CSCWD), Proceedings of the 2014 IEEE 18th International Conference on, pp.265,270, 21-23 May 2014. doi: 10.1109/CSCWD.2014.6846853 With the arrival of the big data era, information privacy and security issues become even more crucial. The Mining Associations with Secrecy Konstraints (MASK) algorithm and its improved versions were proposed as data mining approaches for privacy preserving association rules. The MASK algorithm only adopts a data perturbation strategy, which leads to a low privacy-preserving degree. Moreover, it is difficult to apply the MASK algorithm in practice because of its long execution time. This paper proposes a new algorithm based on data perturbation and query restriction (DPQR) to improve the privacy-preserving degree by multi-parameter perturbation. In order to improve time-efficiency, the calculation to obtain an inverse matrix is simplified by dividing the matrix into blocks; meanwhile, a further optimization is provided to reduce the number of database scans by set theory. Both theoretical analyses and experimental results prove that the proposed DPQR algorithm has better performance.
Keywords: data mining; data privacy; matrix algebra; query processing; DPQR algorithm; data mining; data perturbation and query restriction; data perturbation strategy; improved MASK algorithm; information privacy; inverse matrix; mining associations with secrecy constraints; privacy preserving association rules; scanning database; security issues; Algorithm design and analysis; Association rules; Data privacy; Itemsets; Time complexity; Data mining; association rules; multi-parameters perturbation; privacy preservation (ID#:14-2649)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846853&isnumber=6846800
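The data perturbation strategy underlying MASK is simple to illustrate: each Boolean entry is kept with probability p and flipped with probability 1 - p, and the miner later inverts this distortion to estimate true item supports. The sketch below shows that inversion for a single item; the value of p and the data are illustrative, and the DPQR refinements (multi-parameter perturbation, block-wise matrix inversion, query restriction) are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)
p = 0.9          # probability that an entry is kept unchanged
N = 100_000      # number of transactions
true_support = 0.30

# True 0/1 column for one item, then MASK-style perturbation:
# each bit is kept with probability p, flipped with probability 1 - p.
col = rng.random(N) < true_support
keep = rng.random(N) < p
perturbed = np.where(keep, col, ~col)

# Invert the distortion: E[observed ones] = n1*p + (N - n1)*(1 - p),
# so n1_hat = (observed - N*(1 - p)) / (2p - 1).
observed = perturbed.sum()
n1_hat = (observed - N * (1 - p)) / (2 * p - 1)
print("estimated support:", round(n1_hat / N, 4))  # close to 0.30
```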
- Beigh, B.M., "One-stop: A novel hybrid model for intrusion detection system," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp.798,805, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828072 Organizations pay huge amounts solely to secure their confidential data from attackers or intruders. But the hackers are sharp enough to crack the security of an organization. Therefore, before they make a security breach, let us hunt them down and alert the organization so that it can save its confidential data. Intrusion detection systems came into existence for this purpose. But current systems are not capable enough to detect all the attacks coming toward them. In order to fix the problem of detecting novel attacks and reducing the number of false alarms, in this paper we propose a hybrid model for intrusion detection which enhances the quality of detecting unknown attacks via anomaly-based detection and also includes a module which tries to reduce the number of false alarms generated by the system.
Keywords: security of data; anomaly based detection; confidential data; false alarm reduction; intrusion detection system; one-stop model; organization security; security breach; Databases; Decoding; Engines; Hybrid power systems; Intrusion detection; Organizations; Intrusion; attack; availability; confidentiality; detection; information; integrity; mitigate (ID#:14-2650)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828072&isnumber=6827395
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Browser Security
Web browser exploits are a common attack vector. Research into browser security in the first three quarters of 2014 has examined the common browsers and add-ons to address both specific and general problems. The articles cited here address, among other topics, cross-site scripting, hardware virtualization, HTTP-based botnet detection (BotHound), system call monitoring, and phishing detection.
- Barnes, R.; Thomson, M., "Browser-to-Browser Security Assurances for WebRTC," Internet Computing, IEEE, vol. PP, no. 99, pp.1, 1, September, 2014. doi: 10.1109/MIC.2014.106 For several years, browsers have been able to assure a user that he is talking to a specific, identified web site, protected from network-based attackers. In email, messaging, and other applications where sites act as intermediaries, there is a need for additional protections to provide end-to-end security. In this article we describe the approach that WebRTC takes to providing end-to-end security, leveraging both the flexibility of JavaScript and the ability of browsers to create constraints through JavaScript APIs.
Keywords: Browsers; Cameras; Internet; Media (ID#:14-2838)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6894480&isnumber=5226613
- Abgrall, E.; Le Traon, Y.; Gombault, S.; Monperrus, M., "Empirical Investigation of the Web Browser Attack Surface under Cross-Site Scripting: An Urgent Need for Systematic Security Regression Testing," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, pp.34,41, March 31 2014-April 4 2014. doi: 10.1109/ICSTW.2014.63 One of the major threats against web applications is Cross-Site Scripting (XSS). The final target of XSS attacks is the client running a particular web browser. During this last decade, several competing web browsers (IE, Netscape, Chrome, Firefox) have evolved to support new features. In this paper, we explore whether the evolution of web browsers is done using systematic security regression testing. Beginning with an analysis of their current exposure degree to XSS, we extend the empirical study to a decade of most popular web browser versions. We use XSS attack vectors as unit test cases and we propose a new method supported by a tool to address this XSS vector testing issue. The analysis on a decade releases of most popular web browsers including mobile ones shows an urgent need of XSS regression testing. We advocate the use of a shared security testing benchmark as a good practice and propose a first set of publicly available XSS vectors as a basis to ensure that security is not sacrificed when a new version is delivered.
Keywords: online front-ends; regression analysis; security of data; Web applications; Web browser attack surface; XSS vector testing; cross-site scripting; systematic security regression testing; Browsers; HTML; Mobile communication; Payloads; Security; Testing; Vectors; XSS; browser; regression; security; testing; web (ID#:14-2839)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825636&isnumber=6825623
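The paper's idea of using XSS attack vectors as unit test cases can be sketched without a browser: run each vector through the filter under test and fail if executable markup survives. In the hypothetical sketch below, naive_sanitize and the vector list are illustrative stand-ins, not the authors' tool or benchmark; the deliberately weak sanitizer is expected to fail on the onerror vector, which is exactly the kind of gap such a regression suite is meant to catch.

```python
import re
import unittest

# Hypothetical sanitizer under test (deliberately weak, for illustration).
def naive_sanitize(untrusted: str) -> str:
    return re.sub(r"(?i)<script.*?>.*?</script>", "", untrusted)

# A few classic reflected-XSS vectors used as unit test cases.
XSS_VECTORS = [
    "<script>alert(1)</script>",
    "<SCRIPT SRC=//evil.example/x.js></SCRIPT>",
    "<img src=x onerror=alert(1)>",  # survives the naive filter above
]

class XssRegressionTest(unittest.TestCase):
    def test_vectors_are_neutralized(self):
        for vector in XSS_VECTORS:
            out = naive_sanitize(vector)
            # The output must not still contain executable markup.
            self.assertNotRegex(out, r"(?i)<script|onerror\s*=",
                                msg=f"vector survived: {vector!r}")

if __name__ == "__main__":
    unittest.main()
```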
- Xin Wu, "Secure Browser Architecture Based on Hardware Virtualization," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.489, 495, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779009 Ensuring the entire code base of a browser to deal with the security concerns of integrity and confidentiality is a daunting task. The basic method is to split it into different components and place each of them in its own protection domain. OS processes are the prevalent isolation mechanism to implement the protection domain, which result in expensive context-switching overheads produced by Inter-Process Communication (TPC). Besides, the dependences of multiple web instance processes on a single set of privileged ones reduce the entire concurrency. In this paper, we present a secure browser architecture design based on processor virtualization technique. First, we divide the browser code base into privileged components and constrained components which consist of distrusted web page Tenderer components and plugins. All constrained components are in the form of shared object (SO) libraries. Second, we create an isolated execution environment for each distrusted shared object library using the hardware virtualization support available in modern Intel and AMD processors. Different from the current researches, we design a custom kernel module to gain the hardware virtualization capabilities. Third, to enhance the entire security of browser, we implement a validation mechanism to check the OS resources access from distrusted web page Tenderer to the privileged components. Our validation rules is similar with Google chrome. By utilizing VMENTER and VMEXIT which are both CPU instructions, our approach can gain a better system performance substantially.
Keywords: microprocessor chips; online front-ends; operating systems (computers); security of data; software libraries; virtualisation; AMD processors; CPU instructions; Google chrome; IPC; Intel processors; OS processes; OS resource checking; SO libraries; VMENTER; VMEXIT; browser security; context-switching overheads; distrusted Web page renderer components; distrusted shared object library; hardware virtualization capabilities; Interprocess communication; isolated execution environment; isolation mechanism; multiple Web instance processes; processor virtualization technique; secure browser architecture design; validation mechanism; Browsers; Google; Hardware; Monitoring; Security; Virtualization; Web pages; Browser security; Component isolation; Hardware virtualization; System call interposition (ID#:14-2840)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779009&isnumber=6778899
- Wadkar, H.; Mishra, A; Dixit, A, "Prevention of Information Leakages In A Web Browser By Monitoring System Calls," Advance Computing Conference (IACC), 2014 IEEE International, pp.199,204, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779320 The web browser has become one of the most accessed processes/applications in recent years. The latest website security statistics report that about 30% of vulnerability attacks happen due to information leakage by the browser application and its use by hackers to exploit the privacy of an individual. This leaked information is one of the main sources for hackers to attack an individual's PC or to make the PC part of a botnet. A software controller is proposed to track system calls invoked by the browser process. The designed prototype deals with the system calls which perform operations related to reading, writing, and accessing personal and/or system information. The objective of the controller is to confine the leakage of information by a browser process.
Keywords: Web sites; online front-ends; security of data; Web browser application; Web site security statistics report; botnet; browser process; monitoring system calls; software controller; system information leakages; track system calls; vulnerability attacks; Browsers; Computer hacking; Monitoring; Privacy; Process control; Software; browser security; confinement; information leakage (ID#:14-2841)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779320&isnumber=6779283
- Shamsi, J.A; Hameed, S.; Rahman, W.; Zuberi, F.; Altaf, K.; Amjad, A, "Clicksafe: Providing Security Against Clickjacking Attacks," High-Assurance Systems Engineering (HASE), 2014 IEEE 15th International Symposium on, pp.206,210, 9-11 Jan. 2014. doi: 10.1109/HASE.2014.36 Clickjacking is an act of hijacking user clicks in order to perform undesired actions which are beneficial for the attacker. We propose Clicksafe, a browser-based tool to provide increased security and reliability against clickjacking attacks. Clicksafe is based on three major components. The detection unit detects malicious components in a web page that redirect users to external links. The mitigation unit intercepts user clicks and gives educated warnings to users, who can then choose whether to continue. Clicksafe also incorporates a feedback unit which records the user's actions, converts them into ratings, and allows future interactions to be more informed. Clicksafe differs from other similar tools in that detection and mitigation are based on a comprehensive framework which utilizes detection of malicious web components and incorporates user feedback. We explain the mechanism of Clicksafe, describe its performance, and highlight its potential in providing safety against clickjacking to a large number of users.
Keywords: Internet; online front-ends; security of data; Clicksafe; Web page; browser-based tool; click safe; clickjacking attacks; detection unit; feedback unit; malicious Web component detection; mitigation unit; Browsers; Communities; Computers; Context; Loading; Safety; Security; Browser Security; Clickjacking; Safety; Security; Soft assurance of safe browsing (ID#:14-2842)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6754607&isnumber=6754569
- Mohammad, R.M.; Thabtah, F.; McCluskey, L., "Intelligent Rule-Based Phishing Websites Classification," Information Security, IET, vol.8, no.3, pp.153,160, May 2014. doi: 10.1049/iet-ifs.2013.0202 Phishing is described as the art of echoing a website of a creditable firm intending to grab user's private information such as usernames, passwords and social security number. Phishing websites comprise a variety of cues within its content-parts as well as the browser-based security indicators provided along with the website. Several solutions have been proposed to tackle phishing. Nevertheless, there is no single magic bullet that can solve this threat radically. One of the promising techniques that can be employed in predicting phishing attacks is based on data mining, particularly the `induction of classification rules' since anti-phishing solutions aim to predict the website class accurately and that exactly matches the data mining classification technique goals. In this study, the authors shed light on the important features that distinguish phishing websites from legitimate ones and assess how good rule-based data mining classification techniques are in predicting phishing websites and which classification technique is proven to be more reliable.
Keywords: Web sites; data mining; data privacy; pattern classification; security of data; unsolicited e-mail; Web site echoing; Website class; antiphishing solutions; browser-based security indicators; creditable firm; intelligent rule-based phishing Web site classification; phishing attack prediction; rule-based data mining classification techniques; social security number; user private information (ID#:14-2843)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786863&isnumber=6786849
- Phung, P.; Monshizadeh, M.; Sridhar, M.; Hamlen, K.; Venkatakrishnan, V., "Between Worlds: Securing Mixed JavaScript/ActionScript Multi-party Web Content," Dependable and Secure Computing, IEEE Transactions on, vol. PP, no.99, pp.1, 1, September 2014. doi: 10.1109/TDSC.2014.2355847 Mixed Flash and JavaScript content has become increasingly prevalent; its purveyance of dynamic features unique to each platform has popularized it for myriad web development projects. Although Flash and JavaScript security has been examined extensively, the security of untrusted content that combines both has received considerably less attention. This article considers this fusion in detail, outlining several practical scenarios that threaten the security of web applications. The severity of these attacks warrants the development of new techniques that address the security of Flash-JavaScript content considered as a whole, in contrast to prior solutions that have examined Flash or JavaScript security individually. Toward this end, the article presents FlashJaX, a cross-platform solution that enforces fine-grained, history-based policies that span both Flash and JavaScript. Using in-lined reference monitoring, FlashJaX safely embeds untrusted JavaScript and Flash content in web pages without modifying browser clients or using special plug-ins. The architecture of FlashJaX, its design and implementation, and a detailed security analysis are exposited. Experiments with advertisements from popular ad networks demonstrate that FlashJaX is transparent to policy-compliant advertisement content, yet blocks many common attack vectors that exploit the fusion of these web platforms.
Keywords: Browsers; Engines; Mediation; Monitoring; Payloads; Runtime; Security (ID#:14-2844)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6894186&isnumber=4358699
- Byungho Min; Varadharajan, V., "A New Technique for Counteracting Web Browser Exploits," Software Engineering Conference (ASWEC), 2014 23rd Australian, pp.132,141, 7-10 April 2014. doi: 10.1109/ASWEC.2014.28 Over the last few years, exploit kits have been increasingly used for system compromise and malware propagation. As they target the web browser, which is one of the most commonly used applications in the Internet era, exploit kits have become a major concern of the security community. In this paper, we propose a proactive approach to protecting vulnerable systems from this prevalent cyber threat. Our technique intercepts communications between the web browser and web pages, and proactively blocks the execution of exploit kits using version information of web browser plugins. Our system, AFFAF, is a zero-configuration solution, and hence users do not need to do anything but simply install it. Also, it is an easy-to-employ methodology from the perspective of plugin developers. We have implemented a lightweight prototype, which has demonstrated that AFFAF-protected vulnerable systems can counteract 50 real-world and one locally deployed exploit kit URLs. Tested exploit kits include popular and well-maintained ones such as Blackhole 2.0, Redkit, Sakura, Cool and Bleeding Life 2. We have also shown that the false positive rate of AFFAF is virtually zero, and that it is robust enough to be effective against real web browser plugin scanners.
Keywords: Internet; invasive software; online front-ends; AFFAF protected vulnerable systems; Internet; Web browser exploits; Web browser plugin scanners; Web pages; cyber threat; exploit kit URL; lightweight prototype; malware propagation; security community; system compromise; version information; zero-configuration solution; browsers; Java; Malware; Prototypes; Software; Web sites; Defensive Techniques; Exploit Kits; Security Attacks (ID#:14-2845)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6824118&isnumber=6824087
- Mewara, B.; Bairwa, S.; Gajrani, J., "Browser's Defenses Against Reflected Cross-Site Scripting Attacks," Signal Propagation and Computer Technology (ICSPCT), 2014 International Conference on, pp.662,667, 12-13 July 2014. doi: 10.1109/ICSPCT.2014.6884928 Due to the frequent usage of online web applications for various day-to-day activities, web applications are becoming the most suitable target for attackers. Cross-Site Scripting, also known as an XSS attack, is one of the most prominent defacing web-based attacks, and can lead to compromise of the whole browser rather than just the actual web application from which the attack originated. Securing web applications using server-side solutions is not profitable, as developers are not necessarily security aware. Therefore, browser vendors have tried to evolve client-side filters to defend against these attacks. This paper shows that even the foremost prevailing XSS filters deployed by the latest versions of the most widely used web browsers do not provide appropriate defense. We evaluate three browsers (Internet Explorer 11, Google Chrome 32, and Mozilla Firefox 27) for reflected XSS attacks against different types of vulnerabilities. We find that none of the above is completely able to defend against all possible types of reflected XSS vulnerabilities. Further, we evaluate Firefox after installing an add-on named XSS-Me, which is widely used for testing reflected XSS vulnerabilities. Experimental results show that this client-side solution can shield against a greater percentage of vulnerabilities than the other browsers. It would be more propitious if this add-on were integrated inside the browser instead of being enforced as an extension.
Keywords: online front-ends; security of data; Google Chrome 32; Internet Explorer 11; Mozilla Firefox 27; Web based attack; Web browsers; XSS attack; XSS filters; XSS-Me; online Web applications; reflected cross-site scripting attacks; Browsers; Security; Thyristors; JavaScript; Reflected XSS; XSS-Me; attacker; bypass; exploit; filter (ID#:14-2846)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884928&isnumber=6884878
- Biedermann, S.; Ruppenthal, T.; Katzenbeisser, S., "Data-centric Phishing Detection Based On Transparent Virtualization Technologies," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.215,223, 23-24 July 2014. doi: 10.1109/PST.2014.6890942 We propose a novel phishing detection architecture based on transparent virtualization technologies and isolation of the own components. The architecture can be deployed as a security extension for virtual machines (VMs) running in the cloud. It uses fine-grained VM introspection (VMI) to extract, filter and scale a color-based fingerprint of web pages which are processed by a browser from the VM's memory. By analyzing the human perceptual similarity between the fingerprints, the architecture can reveal and mitigate phishing attacks which are based on redirection to spoofed web pages and it can also detect "Man-in-the-Browser" (MitB) attacks. To the best of our knowledge, the architecture is the first anti-phishing solution leveraging virtualization technologies. We explain details about the design and the implementation and we show results of an evaluation with real-world data.
Keywords: Web sites; cloud computing; computer crime; online front-ends; virtual machines; virtualisation; MitB attack; VM introspection; VMI; antiphishing solution; cloud; color-based fingerprint extraction; color-based fingerprint filtering; color-based fingerprint scaling; component isolation; data-centric phishing detection; human perceptual similarity; man-in-the-browser attack; phishing attacks; spoofed Web pages; transparent virtualization technologies; virtual machines; Browsers; Computer architecture; Data mining; Detectors; Image color analysis; Malware; Web pages (ID#:14-2847)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890942&isnumber=6890911
- Sayed, B.; Traore, I, "Protection against Web 2.0 Client-Side Web Attacks Using Information Flow Control," Advanced Information Networking and Applications Workshops (WAINA), 2014 28th International Conference on, pp. 261, 268, 13-16 May 2014. doi: 10.1109/WAINA.2014.52 The dynamic nature of the Web 2.0 and the heavy obfuscation of web-based attacks complicate the job of the traditional protection systems such as Firewalls, Anti-virus solutions, and IDS systems. It has been witnessed that using ready-made toolkits, cyber-criminals can launch sophisticated attacks such as cross-site scripting (XSS), cross-site request forgery (CSRF) and botnets to name a few. In recent years, cyber-criminals have targeted legitimate websites and social networks to inject malicious scripts that compromise the security of the visitors of such websites. This involves performing actions using the victim browser without his/her permission. This poses the need to develop effective mechanisms for protecting against Web 2.0 attacks that mainly target the end-user. In this paper, we address the above challenges from information flow control perspective by developing a framework that restricts the flow of information on the client-side to legitimate channels. The proposed model tracks sensitive information flow and prevents information leakage from happening. The proposed model when applied to the context of client-side web-based attacks is expected to provide a more secure browsing environment for the end-user.
Keywords: Internet; computer crime; data protection; invasive software; IDS systems; Web 2.0 client-side Web attacks; antivirus solutions; botnets; cross-site request forgery; cross-site scripting; cyber-criminals; firewalls; information flow control; information leakage; legitimate Web sites; malicious script injection; protection systems; secure browsing environment; social networks; Browsers; Feature extraction; Security; Semantics; Servers; Web 2.0; Web pages; AJAX; Client-side web attacks; Information Flow Control; Web 2.0 (ID#:14-2848)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6844648&isnumber=6844560
- Zarras, A; Papadogiannakis, A; Gawlik, R.; Holz, T., "Automated Generation Of Models For Fast And Precise Detection Of HTTP-Based Malware," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.249,256, 23-24 July 2014. doi: 10.1109/PST.2014.6890946 Malicious software and especially botnets are among the most important security threats in the Internet. Thus, the accurate and timely detection of such threats is of great importance. Detecting machines infected with malware by identifying their malicious activities at the network level is an appealing approach, due to the ease of deployment. Nowadays, the most common communication channels used by attackers to control the infected machines are based on the HTTP protocol. To evade detection, HTTP-based malware adapt their behavior to the communication patterns of the benign HTTP clients, such as web browsers. This poses significant challenges to existing detection approaches like signature-based and behavioral-based detection systems. In this paper, we propose BOTHOUND: a novel approach to precisely detect HTTP-based malware at the network level. The key idea is that implementations of the HTTP protocol by different entities have small but perceivable differences. Building on this observation, BOTHOUND automatically generates models for malicious and benign requests and classifies at real time the HTTP traffic of a monitored network. Our evaluation results demonstrate that BOTHOUND outperforms prior work on identifying HTTP-based botnets, being able to detect a large variety of real-world HTTP-based malware, including advanced persistent threats used in targeted attacks, with a very low percentage of classification errors.
Keywords: Internet; invasive software; BOTHOUND approach; HTTP protocol; HTTP traffic; HTTP-based malware detection; Internet; Web browsers; behavioral-based detection system; botnets; classification errors; hypertext transfer protocol; malicious software; security threats; signature-based detection system; Accuracy; Browsers; Malware; Monitoring; Protocols; Software; Training (ID#:14-2849)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890946&isnumber=6890911
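One concrete form of the "small but perceivable differences" between HTTP implementations that BOTHOUND builds on is the order of request headers. The sketch below is a simplified, hypothetical illustration of that fingerprinting idea (whitelist the header-name sequences of known-benign clients and flag everything else); it is not the paper's classifier or training procedure, and the benign profiles are invented for the example.

```python
# Hypothetical header-order fingerprinting sketch: HTTP implementations
# differ in small, perceivable ways, such as the order of request headers.
BENIGN_PROFILES = {
    # e.g. typical header orders of mainstream browsers (illustrative)
    ("Host", "Connection", "User-Agent", "Accept", "Accept-Encoding"),
    ("Host", "User-Agent", "Accept", "Accept-Language", "Accept-Encoding"),
}

def header_sequence(raw_request: str) -> tuple:
    """Extract the ordered header names from a raw HTTP request."""
    lines = raw_request.split("\r\n")[1:]  # skip the request line
    return tuple(l.split(":", 1)[0] for l in lines if ":" in l)

def looks_like_malware(raw_request: str) -> bool:
    """Flag requests whose header sequence matches no benign profile."""
    return header_sequence(raw_request) not in BENIGN_PROFILES

bot_req = ("GET /gate.php HTTP/1.1\r\n"
           "User-Agent: Mozilla/4.0\r\n"  # bots often order headers oddly
           "Host: c2.example\r\n\r\n")
print(looks_like_malware(bot_req))  # True: sequence not in benign profiles
```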
- Ortiz-Yepes, D.A; Hermann, R.J.; Steinauer, H.; Buhler, P., "Bringing Strong Authentication And Transaction Security To The Realm Of Mobile Devices," IBM Journal of Research and Development, vol.58, no.1, pp.4:1, 4:11, Jan.-Feb. 2014. doi: 10.1147/JRD.2013.2287810 Widespread usage of mobile devices in conjunction with malicious software attacks calls for the development of mobile-device-oriented mechanisms aiming to provide strong authentication and transaction security. This paper considers the eBanking application scenario and argues that the concept of using a trusted companion device can be ported to the mobile realm. Trusted companion devices involve established and proven techniques in the PC (personal computer) environment to secure transactions. Various options for the communication between mobile and companion devices are discussed and evaluated in terms of technical feasibility, usability, and cost. Accordingly, audio communication across the 3.5-mm audio jack (also known as the tip-ring-ring-sleeve, or TRRS, connector) is determined to be quite appropriate. We present a proof-of-concept companion device implementing binary frequency shift keying across this interface. Results from a field study performed with the proof-of-concept device further confirm the feasibility of the proposed solution.
Keywords: Authentication; Browsers; Computer security; Malware; Mobile communication; Servers; Smart cards; Universal Serial Bus (ID#:14-2850)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6717088&isnumber=6717043
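Binary frequency shift keying of the kind the proof-of-concept device implements is easy to sketch: each bit selects one of two audio tones. The numpy sketch below generates such a sample stream; the sample rate, baud rate, and tone frequencies are illustrative choices, not values taken from the paper.

```python
import numpy as np

FS = 44_100               # audio sample rate in Hz
BAUD = 300                # bits per second (illustrative)
F0, F1 = 1200.0, 2200.0   # tone frequencies for bits 0 and 1 (illustrative)

def bfsk_modulate(bits) -> np.ndarray:
    """Binary frequency shift keying: one tone burst per bit."""
    n = FS // BAUD                      # samples per bit
    t = np.arange(n) / FS
    chunks = [np.sin(2 * np.pi * (F1 if b else F0) * t) for b in bits]
    return np.concatenate(chunks)

signal = bfsk_modulate([1, 0, 1, 1, 0])
print(signal.shape)  # (5 * FS // BAUD,) samples, ready for audio output
```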
- Chuan Xu; Guofeng Zhao; Gaogang Xie; Shui Yu, "Detection on Application Layer DDoS Using Random Walk Model," Communications (ICC), 2014 IEEE International Conference on, pp.707,712, 10-14 June 2014. doi: 10.1109/ICC.2014.6883402 Application Layer Distributed Denial of Service (ALDDoS) attacks have been increasing rapidly with the growth of botnets and ubiquitous computing. In contrast to earlier DDoS attacks, ALDDoS attacks cannot be efficiently detected, as attackers always adopt legitimate requests with real IP addresses, and the traffic has high similarity to legitimate traffic. In spite of that, we think the attackers' browsing behavior will show great disparity from that of legitimate users. In this paper, we put forward a novel user behavior-based method to detect application layer asymmetric DDoS attacks. We introduce an extended random walk model to describe user browsing behavior and establish the legitimate pattern of browsing sequences. For each incoming browser, we observe his page request sequence and predict the subsequent page request sequence based on the random walk model. The similarity between the predicted and the observed page request sequence is used as a criterion to measure the legality of the user, and attackers are then detected based on it. Evaluation results based on a real collected data set have demonstrated that our method is very effective in detecting asymmetric ALDDoS attacks.
Keywords: computer network security; ALDDoS attacks; application layer distributed denial of service attacks; botnet; browsing sequences; extended random walk model; legitimate users; novel user behavior-based method; page request sequence; real IP address; subsequent page request sequence; ubiquitous computing; user browsing behavior; Computational modeling; Computer crime; Educational institutions; Information systems; Predictive models; Probability distribution; Vectors; Asymmetric application layer DDoS attack; anomaly detection; random walk model; similarity (ID#:14-2851)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883402&isnumber=6883277
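A first-order simplification of the browsing model above is a Markov transition matrix over pages, estimated from legitimate sessions, with each incoming session scored by the likelihood of its observed transitions. The sketch below illustrates that simplification with hypothetical page IDs; the paper's extended random walk model and its prediction step are richer than this.

```python
import numpy as np

PAGES = 4  # hypothetical site with pages 0..3

def fit_transition_matrix(sessions) -> np.ndarray:
    """Estimate P[i, j] = Pr(next page = j | current page = i) from
    legitimate browsing sessions (with add-one smoothing)."""
    counts = np.ones((PAGES, PAGES))
    for s in sessions:
        for i, j in zip(s, s[1:]):
            counts[i, j] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def session_score(P: np.ndarray, s) -> float:
    """Mean log-likelihood per transition; low scores look non-human."""
    return float(np.mean([np.log(P[i, j]) for i, j in zip(s, s[1:])]))

legit = [[0, 1, 2, 3], [0, 1, 3], [0, 2, 3], [0, 1, 2, 3]]
P = fit_transition_matrix(legit)
print("human-like:", round(session_score(P, [0, 1, 2, 3]), 3))
print("bot-like  :", round(session_score(P, [3, 0, 3, 0, 3, 0]), 3))
```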
- Sah, S.K.; Shakya, S.; Dhungana, H., "A Security Management For Cloud Based Applications And Services with Diameter-AAA," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp.6,11, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781243 Cloud computing offers various services and web-based applications over the Internet. With the tremendous growth in the development of cloud-based services, security is the main challenge and today's concern for cloud service providers. This paper describes the management of security issues based on Diameter AAA mechanisms for authentication, authorization and accounting (AAA) demanded by cloud service providers. It focuses on the integration of Diameter AAA into cloud system architecture.
Keywords: authorisation; cloud computing; Internet; Web based applications; authentication, authorization and accounting; cloud based applications; cloud based services; cloud computing; cloud service providers; cloud system architecture; diameter AAA mechanisms; security management; Authentication; Availability; Browsers; Computational modeling; Protocols; Servers; Cloud Computing; Cloud Security; Diameter-AAA (ID#:14-2852)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781243&isnumber=6781240
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
CAPTCHA
CAPTCHA (an acronym for Completely Automated Public Turing test to tell Computers and Humans Apart) technology has become a standard security tool. In the research presented here, some novel uses are presented, including an Arabic-language text digitization scheme, the use of CAPTCHAs as graphical passwords, motion-based CAPTCHAs, and defeating a CAPTCHA using a gaming technique. These works were presented or published in 2014.
- Zhu, B.B.; Yan, J.; Guanbo Bao; Maowei Yang; Ning Xu, "Captcha as Graphical Passwords--A New Security Primitive Based on Hard AI Problems," Information Forensics and Security, IEEE Transactions on, vol.9, no.6, pp.891,904, June 2014. doi: 10.1109/TIFS.2014.2312547 Many security primitives are based on hard mathematical problems. Using hard AI problems for security is emerging as an exciting new paradigm, but has been under-explored. In this paper, we present a new security primitive based on hard AI problems, namely, a novel family of graphical password systems built on top of Captcha technology, which we call Captcha as graphical passwords (CaRP). CaRP is both a Captcha and a graphical password scheme. CaRP addresses a number of security problems altogether, such as online guessing attacks, relay attacks, and, if combined with dual-view technologies, shoulder-surfing attacks. Notably, a CaRP password can be found only probabilistically by automatic online guessing attacks even if the password is in the search set. CaRP also offers a novel approach to address the well-known image hotspot problem in popular graphical password systems, such as PassPoints, that often leads to weak password choices. CaRP is not a panacea, but it offers reasonable security and usability and appears to fit well with some practical applications for improving online security.
Keywords: artificial intelligence; security of data; CaRP password; Captcha as graphical passwords; PassPoints; artificial intelligence; automatic online guessing attacks; dual-view technologies; hard AI problems; hard mathematical problems; image hotspot problem; online security; password choices; relay attacks; search set; security primitives; shoulder-surfing attacks; Animals; Artificial intelligence; Authentication; CAPTCHAs; Usability; Visualization; CaRP; Captcha; Graphical password; dictionary attack; hotspots; password; password guessing attack; security primitive (ID#:14-2853)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6775249&isnumber=6803967
- Bakry, M.; Khamis, M.; Abdennadher, S., "AreCAPTCHA: Outsourcing Arabic Text Digitization to Native Speakers," Document Analysis Systems (DAS), 2014 11th IAPR International Workshop on, pp.304,308, 7-10 April 2014. doi: 10.1109/DAS.2014.50 There has been a recent increasing demand to digitize Arabic books and documents, due to the fact that digital books do not lose quality over time, and can be easily sustained. Meanwhile, the number of Arabic-speaking Internet users is increasing. We propose AreCAPTCHA, a system that digitizes Arabic text by outsourcing it to native Arabic speakers, while offering protective measures to online web forms of Arabic websites. As users interact with AreCAPTCHA, we collect possible digitizations of words that were not recognized by OCR programs. We explain how the system works, the challenges we faced, and promising preliminary evaluation results.
Keywords: Web sites; document image processing; natural language processing; optical character recognition; security of data; Arabic Web sites; Arabic book; Arabic document; Arabic text digitization; Arabic-speaking Internet user; AreCAPTCHA; OCR program; digital book; native Arabic speaker; online Web form; protective measure; CAPTCHAs; Databases; Educational institutions; Engines; Internet; Libraries; Optical character recognition software; Arabic; CAPTCHA; Digitization; Human computation; words recognition (ID#:14-2854)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6831018&isnumber=6824386
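The digitization workflow such systems inherit from reCAPTCHA can be sketched in a few lines: one word with a known answer gates the web form, while transcriptions of the OCR-failed word are collected until enough users agree. The Python sketch below is a hypothetical reconstruction of this general pattern, not AreCAPTCHA's actual code; the class name, agreement threshold, and sample words are assumptions.

from collections import Counter

class DigitizingCaptcha:
    """One known word gates the form; votes for the unknown word accumulate."""
    def __init__(self, agreement=3):
        self.votes = {}              # unknown word id -> Counter of transcriptions
        self.agreement = agreement

    def submit(self, known_answer, known_input, unknown_id, unknown_input):
        if known_input.strip() != known_answer:
            return False             # human check failed; discard the vote
        c = self.votes.setdefault(unknown_id, Counter())
        c[unknown_input.strip()] += 1
        return True

    def digitization(self, unknown_id):
        c = self.votes.get(unknown_id)
        if not c:
            return None
        word, n = c.most_common(1)[0]
        return word if n >= self.agreement else None

dc = DigitizingCaptcha()
for text in ("kitab", "kitab", "kitab"):           # three agreeing users
    dc.submit("maktaba", "maktaba", "w42", text)
print(dc.digitization("w42"))                      # 'kitab'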
- Yi Xu; Reynaga, G.; Chiasson, S.; Frahm, J.-M.; Monrose, F.; van Oorschot, P.C., "Security Analysis and Related Usability of Motion-Based CAPTCHAs: Decoding Codewords in Motion," Dependable and Secure Computing, IEEE Transactions on, vol.11, no.5, pp.480,493, Sept.-Oct. 2014. doi: 10.1109/TDSC.2013.52 We explore the robustness and usability of moving-image object recognition (video) CAPTCHAs, designing and implementing automated attacks based on computer vision techniques. Our approach is suitable for broad classes of moving-image CAPTCHAs involving rigid objects. We first present an attack that defeats instances of such a CAPTCHA (NuCaptcha) representing the state of the art, involving dynamic text strings called codewords. We then consider design modifications to mitigate the attacks (e.g., overlapping characters more closely, randomly changing the font of individual characters, or even randomly varying the number of characters in the codeword). We implement the modified CAPTCHAs and test whether designs modified for greater robustness maintain usability. Our lab-based studies show that the modified CAPTCHAs fail to offer viable usability, even when the CAPTCHA strength is reduced below acceptable targets. Worse yet, our GPU-based implementation shows that our automated approach can decode these CAPTCHAs faster than humans can, and we can do so at a relatively low cost of roughly 50 cents per 1,000 CAPTCHAs solved, based on Amazon EC2 rates circa 2012. To further demonstrate the challenges in designing usable CAPTCHAs, we also implement and test another variant of moving text strings using the known emerging images concept. This variant is resilient to our attacks and also offers similar usability to commercially available approaches. We explain why fundamental elements of the emerging images idea resist our current attack where others fail.
Keywords: Turing machines; computer vision; graphics processing units; image coding; image motion analysis; object recognition; security of data; text analysis; Amazon EC2 rates; GPU-based implementation; automated attack mitigation; computer vision; decoding codewords; design modification; dynamic text strings; motion-based CAPTCHA; moving image object recognition CAPTCHA; security analysis; usability analysis; CAPTCHAs; Feature extraction; Image color analysis; Robustness; Streaming media; Trajectory; Usability; CAPTCHAs; computer vision; security; usability (ID#:14-2855)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6682912&isnumber=6893064
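The first step of an automated attack on a moving-text CAPTCHA is localizing the codeword across frames. As a toy illustration of that step only (simple frame differencing, far cruder than the authors' computer-vision pipeline), the following Python sketch finds the bounding box of pixels that change between consecutive frames of a synthetic moving-text video:

import numpy as np

def moving_pixels(prev, cur, thresh=25):
    """Binary mask of pixels that changed between consecutive grayscale frames."""
    return np.abs(cur.astype(np.int16) - prev.astype(np.int16)) > thresh

def bounding_box(mask):
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return xs.min(), ys.min(), xs.max(), ys.max()

# Synthetic 60x200 frames with a bright 'codeword' patch sliding right.
frames = []
for t in range(5):
    f = np.zeros((60, 200), dtype=np.uint8)
    f[20:40, 10 + 15 * t: 60 + 15 * t] = 200   # moving foreground text
    frames.append(f)

for prev, cur in zip(frames, frames[1:]):
    print(bounding_box(moving_pixels(prev, cur)))  # one box per frame pair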
- Subpratatsavee, P.; Kuha, P.; Janthong, N.; Chintho, C., "An Implementation of a Geometric and Arithmetic CAPTCHA without Database," Information Science and Applications (ICISA), 2014 International Conference on, pp.1,3, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847359 This research presents a geometric CAPTCHA that is not created from images in a database; instead, it is an image of a geometric shape, with incomplete edges, randomly generated by a program. The geometric CAPTCHA was tested with users, who had to identify the number of angles in the shape and perform a simple calculation, typing the right answer to pass the CAPTCHA test. It was also compared with three similar CAPTCHAs in terms of time for task completion, number of errors, and user satisfaction. This paper is a pilot study for designing a new image-based CAPTCHA, and an improved design will follow in the near future.
Keywords: image processing; message authentication; CAPTCHA test; arithmetic CAPTCHA; authentication; geometric CAPTCHA; geometric shape image; image-based CAPTCHA; shape angle identification; task completion time; user satisfaction; CAPTCHAs; Databases; Educational institutions; Image edge detection; Security; Shape; Silicon (ID#:14-2856)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847359&isnumber=6847317
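The challenge logic of such a scheme is straightforward to prototype; what the paper contributes is the user study around it. The sketch below generates a corner-counting-plus-arithmetic challenge and checks the response. It is a hypothetical reconstruction: rendering of the incomplete-edged shape is omitted, and the answer format is an assumption.

import random

def make_challenge(rng=random):
    """Random n-sided polygon (rendering not shown) plus a simple sum."""
    sides = rng.randint(3, 8)          # number of polygon corners to count
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    prompt = f"How many corners does the shape have, plus {a} + {b}?"
    return prompt, sides + a + b

def check(answer_text, expected):
    try:
        return int(answer_text.strip()) == expected
    except ValueError:
        return False

prompt, expected = make_challenge()
print(prompt)
print(check(str(expected), expected))  # True for a correct response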
- Powell, B.M.; Goswami, G.; Vatsa, M.; Singh, R.; Noore, A, "fgCAPTCHA: Genetically Optimized Face Image CAPTCHA," Access, IEEE, vol.2, no., pp.473, 484, 2014. doi: 10.1109/ACCESS.2014.2321001 The increasing use of smartphones, tablets, and other mobile devices poses a significant challenge in providing effective online security. CAPTCHAs, tests for distinguishing human and computer users, have traditionally been popular; however, they face particular difficulties in a modern mobile environment because most of them rely on keyboard input and have language dependencies. This paper proposes a novel image-based CAPTCHA that combines the touch-based input methods favored by mobile devices with genetically optimized face detection tests to provide a solution that is simple for humans to solve, ready for worldwide use, and provides a high level of security by being resilient to automated computer attacks. In extensive testing involving over 2600 users and 40000 CAPTCHA tests, fgCAPTCHA demonstrates a very high human success rate while ensuring a 0% attack rate using three well-known face detection algorithms.
Keywords: face recognition; mobile computing; security of data; automated computer attacks; face detection algorithms; fgCAPTCHA; genetically optimized face image CAPTCHA; modern mobile environment; novel image-based CAPTCHA; online security; touch-based input methods; CAPTCHAs; Face detection; Face recognition; Mobile communication; Mobile handsets; Noise measurement; Security; CAPTCHA; Mobile security; face detection; web security (ID#:14-2857)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6807630&isnumber=6705689
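Server-side verification for a tap-based face CAPTCHA reduces to a geometric check: the server knows the bounding boxes of the genuine faces embedded in the composite image, and a response passes only if every tap lands in a distinct genuine-face box. The Python sketch below illustrates that check; it is an assumption-laden simplification and omits the paper's genetic optimization of the images. Boxes are (x0, y0, x1, y1).

def inside(tap, box):
    x, y = tap
    x0, y0, x1, y1 = box
    return x0 <= x <= x1 and y0 <= y <= y1

def verify_taps(taps, face_boxes):
    """Pass only if every tap hits a distinct genuine-face bounding box."""
    if len(taps) != len(face_boxes):
        return False
    unmatched = list(face_boxes)
    for tap in taps:
        hit = next((b for b in unmatched if inside(tap, b)), None)
        if hit is None:
            return False
        unmatched.remove(hit)
    return True

faces = [(10, 10, 60, 60), (120, 40, 170, 90)]
print(verify_taps([(30, 30), (150, 70)], faces))   # True
print(verify_taps([(30, 30), (90, 90)], faces))    # False: second tap misses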
- Qi Ye; Youbin Chen; Bin Zhu, "The Robustness of a New 3D CAPTCHA," Document Analysis Systems (DAS), 2014 11th IAPR International Workshop on, vol., no., pp.319, 323, 7-10 April 2014. doi: 10.1109/DAS.2014.31 CAPTCHA is a standard security technology for telling humans and computers apart, and the text-based scheme is the most widely used method. As many text schemes have been broken, 3D CAPTCHAs have emerged as one of the latest variants. In this paper, we study the robustness of the 3D text-based CAPTCHA adopted by Ku6, a leading video website in China, and provide the first analysis of a 3D hollow CAPTCHA. The security of this CAPTCHA scheme relies on a novel segmentation-resistance mechanism, which combines a Crowding Characters Together (CCT) strategy with side surfaces that form the 3D visual effect of the characters, yielding promising usability even under strong overlapping between characters. However, by exploiting the unique features of the 3D characters in hollow font, i.e., parallel boundaries, the different stroke widths of side faces and front faces, and the relationships between them, we propose a technique that segments connected characters apart and repairs some overlapped characters. The segmentation success rate is 70%. With minor changes, our attack program works well on its two variations; the segmentation rates are 75% and 85%, respectively.
Keywords: cryptography; image coding; image segmentation; 3D CAPTCHA scheme; CCT strategy; Completely Automated Public Turing test to tell Computers and Humans Apart; attack program; crowding characters together; side surfaces; standard security technology; segmentation success rate; CAPTCHAs; Character recognition; Computers; Maintenance engineering; Robustness; Security; Three-dimensional displays; 3D; CAPTCHA; hollow font; security; segmentation; usability (ID#:14-2858)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6831021&isnumber=6824386
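For context, the classic baseline that segmentation-resistant designs such as CCT must defeat is a vertical-projection segmenter, which binarizes the image and cuts it at columns containing no ink. The sketch below implements that baseline, not the authors' stroke-width attack; overlapping characters defeat it, which is precisely why the more refined technique in the paper is needed.

import numpy as np

def segment_columns(gray, ink_thresh=128):
    """Cut a binarized CAPTCHA at ink-free columns; returns per-glyph x-ranges."""
    ink = gray < ink_thresh                   # True where a stroke is drawn
    profile = ink.sum(axis=0)                 # ink pixels per column
    cuts, start = [], None
    for x, n in enumerate(profile):
        if n > 0 and start is None:
            start = x
        elif n == 0 and start is not None:
            cuts.append((start, x))
            start = None
    if start is not None:
        cuts.append((start, len(profile)))
    return cuts

img = np.full((30, 90), 255, dtype=np.uint8)  # white canvas
img[5:25, 10:20] = 0                          # fake glyph 1
img[5:25, 40:55] = 0                          # fake glyph 2
print(segment_columns(img))                   # [(10, 20), (40, 55)]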
- Harisinghaney, A; Dixit, A; Gupta, S.; Arora, A, "Text and Image Based Spam Email Classification Using KNN, Naive Bayes and Reverse DBSCAN Algorithm," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, pp. 153, 155, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798302 The Internet has changed the way we communicate, and communication has become increasingly concentrated in email. Emails, text messages, and online messenger chats have become part and parcel of our lives, and of all these communications, email is the most prone to exploitation. Various email providers therefore employ algorithms to filter emails into spam and ham. In this research paper, our prime aim is to detect both text-based and image-based spam emails. To achieve this objective we applied three algorithms: the KNN algorithm, the Naive Bayes algorithm, and the reverse DBSCAN algorithm. Email text is pre-processed before the algorithms are executed to improve their predictions. The paper uses the Enron corpus's dataset of spam and ham emails. We compare the performance of all three algorithms on the same data set using four measures: precision, sensitivity, specificity, and accuracy, and attain good accuracy with all three algorithms.
Keywords: Bayes methods; image classification; neural nets; text analysis; text detection; unsolicited e-mail; Enron corpus dataset; Internet; KNN algorithm; Naive Bayes algorithm; email text pre-processing; image based spam email classification; online messenger chatting; reverse DBSCAN algorithm; text based spam email classification; text detection; text messages; CAPTCHAs; Classification algorithms; Computers; Electronic mail; Image resolution; Technological innovation; Viruses (medical); Ham; Image Spam; KNN; Naive Bayes; Spam; reverse DBSCAN (ID#:14-2859)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798302&isnumber=6798279
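Two of the three classifiers compared here are standard library components, so the experimental setup is easy to approximate. The sketch below trains Naive Bayes and KNN on bag-of-words counts and reports the paper's four measures; the tiny inline corpus (standing in for the Enron dataset) and evaluation on the training set are simplifications for brevity.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.neighbors import KNeighborsClassifier

texts = ["win cash prize now", "meeting agenda attached", "cheap pills online",
         "lunch tomorrow?", "claim your free prize", "project status report"]
labels = [1, 0, 1, 0, 1, 0]                    # 1 = spam, 0 = ham

X = CountVectorizer().fit_transform(texts)     # bag-of-words counts

for clf in (MultinomialNB(), KNeighborsClassifier(n_neighbors=3)):
    pred = clf.fit(X, labels).predict(X)
    tp = sum(p == 1 and y == 1 for p, y in zip(pred, labels))
    tn = sum(p == 0 and y == 0 for p, y in zip(pred, labels))
    fp = sum(p == 1 and y == 0 for p, y in zip(pred, labels))
    fn = sum(p == 0 and y == 1 for p, y in zip(pred, labels))
    print(type(clf).__name__,
          "precision", tp / (tp + fp or 1),
          "sensitivity", tp / (tp + fn or 1),
          "specificity", tn / (tn + fp or 1),
          "accuracy", (tp + tn) / len(labels))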
- Goto, Misako; Shirato, Toru; Uda, Ryuya, "Text-Based CAPTCHA Using Phonemic Restoration Effect and Similar Sounds," Computer Software and Applications Conference Workshops (COMPSACW), 2014 IEEE 38th International, pp.270,275, 21-25 July 2014. doi: 10.1109/COMPSACW.2014.48 In recent years, bot (robot) programs have become one of the problems on the web. Some bots acquire accounts of web services in order to use the accounts for SPAM mail, phishing, etc. CAPTCHA (Completely Automated Public Turing Test to Tell Computers and Humans Apart) is one of the countermeasures that prevent bots from acquiring accounts. Text-based CAPTCHA in particular is implemented on almost all well-known web services. However, CAPTCHA faces the problem that advances in algorithms for analyzing printed characters are disarming text-based CAPTCHAs. Of course, stronger distortion of the characters is the easiest solution to the problem; however, it makes recognition of the characters difficult not only for bots but also for human beings. Therefore, in this paper, we propose a new CAPTCHA with greater safety and convenience. In particular, we focus on the human abilities of phonemic restoration and recognition of similar sounds, and adopt these abilities in the proposed CAPTCHA. The proposed CAPTCHA makes machine guessing difficult for bots while remaining easy for human beings to recognize.
Keywords: CAPTCHAs; Character recognition; Computers; Educational institutions; Google; Image restoration; Time measurement; CAPTCHA; Phonemic Restoration; Web Technology (ID#:14-2860)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903141&isnumber=6903069
- Song Gao; Mohamed, M.; Saxena, N.; Chengcui Zhang, "Gaming the Game: Defeating A Game Captcha With Efficient And Robust Hybrid Attacks," Multimedia and Expo (ICME), 2014 IEEE International Conference on, pp.1, 6, 14-18 July 2014. doi: 10.1109/ICME.2014.6890287 Dynamic Cognitive Game (DCG) CAPTCHAs are a promising new generation of interactive CAPTCHAs aiming to provide improved security against automated and human-solver relay attacks. Unlike existing CAPTCHAs, defeating DCG CAPTCHAs using pure automated attacks or pure relay attacks may be challenging in practice due to the fundamental limitations of computer algorithms (the semantic gap) and synchronization issues with solvers. To overcome this barrier, we propose two hybrid attack frameworks, which carefully combine the strengths of an automated program and offline/online human intelligence. These hybrid attacks require maintaining synchronization only between the game and the bot, as in a pure automated attack, while solving the static AI problem (i.e., bridging the semantic gap) behind the game challenge, as in a pure relay attack. As a crucial component of our framework, we design a new DCG object tracking algorithm, based on a color code histogram, and show that it is simpler, more efficient, and more robust compared to several known tracking approaches. We demonstrate that both frameworks can effectively defeat a wide range of DCG CAPTCHAs.
Keywords: authorisation; computer games; image colour analysis; object tracking; DCG CAPTCHA; DCG object tracking algorithm; automated human-solver relay attacks; automated program; color code histogram; computer algorithms; dynamic cognitive game CAPTCHA; hybrid attack framework; interactive CAPTCHA; offline human intelligence; online human intelligence; security improvement; semantic gap; static AI problem; synchronization issues; High definition video; Light emitting diodes; CAPTCHA; hybrid attack; multi-object tracking; visual processing; web security (ID#:14-2861)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890287&isnumber=6890121
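A color-histogram tracker of the kind the authors describe can be sketched compactly: quantize colors coarsely, store the target patch's histogram, then slide a window over the next frame and keep the best match. The following is a minimal Python illustration under those assumptions, not the paper's tuned implementation (which the authors show outperforming several known trackers).

import numpy as np

def color_hist(patch, bins=4):
    """Coarsely quantized RGB histogram, normalized to sum to one."""
    codes = (patch // (256 // bins)).reshape(-1, 3)
    idx = codes[:, 0] * bins * bins + codes[:, 1] * bins + codes[:, 2]
    h = np.bincount(idx, minlength=bins ** 3).astype(float)
    return h / h.sum()

def track(frame, target_hist, win, step=4):
    """Slide a window over the frame; return the best L1-histogram match."""
    H, W, _ = frame.shape
    wh, ww = win
    best, best_d = None, np.inf
    for y in range(0, H - wh + 1, step):
        for x in range(0, W - ww + 1, step):
            d = np.abs(color_hist(frame[y:y + wh, x:x + ww]) - target_hist).sum()
            if d < best_d:
                best, best_d = (y, x), d
    return best

rng = np.random.default_rng(0)
frame0 = rng.integers(0, 255, (60, 80, 3), dtype=np.uint8)
target = frame0[20:30, 32:44]                     # patch to follow
frame1 = np.roll(frame0, shift=4, axis=1)         # object moved 4 px right
print(track(frame1, color_hist(target), target.shape[:2]))  # near (20, 36)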
Channel Coding
Channel coding, also known as forward error correction, comprises methods for controlling errors in data transmission over noisy or unreliable communication channels. For cybersecurity, these methods can also be used to ensure data integrity, as some of the research cited below shows. These works were presented in the first half of 2014.
- Si, H.; Koyluoglu, O.O.; Vishwanath, S., "Polar Coding for Fading Channels: Binary and Exponential Channel Cases," Communications, IEEE Transactions on, vol.62, no.8, pp.2638, 2650, Aug. 2014. doi: 10.1109/TCOMM.2014.2345399 This work presents a polar coding scheme for fading channels, focusing primarily on fading binary symmetric and additive exponential noise channels. For fading binary symmetric channels, a hierarchical coding scheme is presented, utilizing polar coding both over channel uses and over fading blocks. The receiver uses its channel state information (CSI) to distinguish states, thus constructing an overlay erasure channel over the underlying fading channels. By using this scheme, the capacity of a fading binary symmetric channel is achieved without CSI at the transmitter. Noting that a fading AWGN channel with BPSK modulation and demodulation corresponds to a fading binary symmetric channel, this result covers a fairly large set of practically relevant channel settings. For fading additive exponential noise channels, expansion coding is used in conjunction with polar codes. Expansion coding transforms the continuous-valued channel to multiple (independent) discrete-valued ones. For each level after expansion, the approach described previously for fading binary symmetric channels is used. Both theoretical analysis and numerical results are presented, showing that the proposed coding scheme approaches the capacity in the high SNR regime. Overall, utilizing polar codes in this (hierarchical) fashion enables coding without CSI at the transmitter, while approaching the capacity with low complexity.
Keywords: AWGN channels; Channel state information; Decoding; Encoding; Fading; Noise; Transmitters; Binary symmetric channel; channel coding; expansion coding; fading channels; polar codes (ID#:14-2651)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6871313&isnumber=6880875
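The building block beneath this hierarchical scheme is Arikan's polar transform, which multiplies the input by the n-fold Kronecker power of the kernel F = [[1,0],[1,1]] over GF(2). The recursive sketch below implements only that encoder kernel; frozen-bit selection, successive-cancellation decoding, and the fading-channel machinery of the paper are omitted.

import numpy as np

def polar_transform(u):
    """Encode a length-2^n binary vector: x = u * F^(kron n) over GF(2)."""
    N = len(u)
    if N == 1:
        return u.copy()
    half = N // 2
    left = polar_transform(u[:half] ^ u[half:])   # (u_a xor u_b) branch
    right = polar_transform(u[half:])             # u_b branch
    return np.concatenate([left, right])

u = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
print(polar_transform(u))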
- Koller, C.; Haenggi, M.; Kliewer, J.; Costello, D.J., "Joint Design of Channel and Network Coding for Star Networks Connected by Binary Symmetric Channels," Communications, IEEE Transactions on, vol.62, no.1, pp.158, 169, January 2014. doi: 10.1109/TCOMM.2013.110413.120971 In a network application, channel coding alone is not sufficient to reliably transmit a message of finite length K from a source to one or more destinations as in, e.g., file transfer. To ensure that no data is lost, it must be combined with rateless erasure correcting schemes on a higher layer, such as a time-division multiple access (TDMA) system paired with automatic repeat request (ARQ) or random linear network coding (RLNC). We consider binary channel coding on a binary symmetric channel (BSC) and q-ary RLNC for erasure correction in a star network, where Y sources send messages to each other with the help of a central relay. In this scenario RLNC has been shown to have a throughput advantage over TDMA schemes as K and q tend to infinity. In this paper we focus on finite block lengths and compare the expected throughputs of RLNC and TDMA. For a total message length of K bits, which can be subdivided into blocks of smaller size prior to channel coding, we obtain the channel code rate and the number of blocks that maximize the expected throughput of both RLNC and TDMA, and we find that TDMA is more throughput-efficient for small message lengths K and small q.
Keywords: channel coding; network coding; time division multiple access; wireless channels; ARQ; BSC; RLNC; TDMA schemes; TDMA system; automatic repeat request; binary channel coding; binary symmetric channels; channel code rate; erasure correction; file transfer; joint design; random linear network coding; star network; star networks; time division multiple access; Automatic repeat request; Encoding; Network coding; Relays; Silicon; Throughput; Time division multiple access; Random linear network coding; joint channel and network coding; star networks (ID#:14-2652)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6657830&isnumber=6719911
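The RLNC half of this comparison is simple to prototype for q = 2: each coded packet carries random GF(2) coefficients plus the XOR of the selected source blocks, and the receiver decodes by Gaussian elimination once the coefficient matrix reaches full rank. The sketch below illustrates that mechanic; block sizes and the packet format are assumptions, and the paper's finite-length throughput analysis is not reproduced.

import numpy as np

rng = np.random.default_rng(1)

def encode(blocks):
    """One coded packet: random GF(2) coefficients and the XOR of chosen blocks."""
    coeffs = rng.integers(0, 2, len(blocks), dtype=np.uint8)
    payload = np.zeros_like(blocks[0])
    for c, b in zip(coeffs, blocks):
        if c:
            payload = payload ^ b
    return coeffs, payload

def decode(received, K):
    """Gaussian elimination over GF(2); None until the coefficients reach rank K."""
    A = np.array([np.concatenate([c, p]) for c, p in received], dtype=np.uint8)
    row = 0
    for col in range(K):
        piv = next((r for r in range(row, len(A)) if A[r, col]), None)
        if piv is None:
            return None                       # rank deficient: keep collecting
        A[[row, piv]] = A[[piv, row]]
        for r in range(len(A)):
            if r != row and A[r, col]:
                A[r] ^= A[row]
        row += 1
    return A[:K, K:]                          # recovered source blocks, in order

blocks = [rng.integers(0, 2, 8, dtype=np.uint8) for _ in range(3)]
received, decoded = [], None
while decoded is None:                        # collect packets until decodable
    received.append(encode(blocks))
    decoded = decode(received, K=3)
print(len(received), np.array_equal(decoded, np.array(blocks)))  # n_pkts True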
- Aguerri, IE.; Varasteh, M.; Gunduz, D., "Zero-delay Joint Source-Channel Coding," Communication and Information Theory (IWCIT), 2014 Iran Workshop on, pp.1,6, 7-8 May 2014. doi: 10.1109/IWCIT.2014.6842482 In zero-delay joint source-channel coding each source sample is mapped to a channel input, and the samples are directly estimated at the receiver based on the corresponding channel output. Despite its simplicity, uncoded transmission achieves the optimal end-to-end distortion performance in some communication scenarios, significantly simplifying the encoding and decoding operations and reducing the coding delay. Three different communication scenarios are considered here, for which uncoded transmission is shown to achieve either optimal or near-optimal performance. First, the problem of transmitting a Gaussian source over a block-fading channel with block-fading side information is considered. In this problem, uncoded linear transmission is shown to achieve the optimal performance for certain side information distributions, while separate source and channel coding fails to achieve the optimal performance. Then, uncoded transmission is shown to be optimal for transmitting correlated multivariate Gaussian sources over a multiple-input multiple-output (MIMO) channel in the low signal-to-noise ratio (SNR) regime. Finally, motivated by practical systems, a peak-power constraint (PPC) is imposed on the transmitter's channel input. Since linear transmission is not possible in this case, nonlinear transmission schemes are proposed and shown to perform very close to the lower bound.
Keywords: Gaussian channels; MIMO communication; block codes; combined source-channel coding; decoding; delays; fading channels; radio receivers; radio transmitters; MIMO communication; PPC; SNR; block fading channel; correlated multivariate Gaussian source transmission; decoding; encoding delay reduction; end-to-end distortion performance; information distribution; multiple input multiple output channel; nonlinear transmission scheme; peak power constraint; receiver; signal to noise ratio; transmitter channel; uncoded linear transmission; zero delay joint source channel coding; Channel coding; Decoding; Joints; MIMO; Nonlinear distortion; Signal to noise ratio (ID#:14-2653)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842482&isnumber=6842477
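The baseline behind the first scenario, uncoded linear transmission of a Gaussian source over an AWGN channel with a linear MMSE estimator, is easy to verify numerically. The sketch below uses a textbook static-channel setup (not the paper's fading model) and shows the empirical distortion matching the optimum D = sigma^2 / (P + sigma^2) for matched source and channel bandwidth.

import numpy as np

rng = np.random.default_rng(0)
n, P, sigma2 = 100_000, 1.0, 0.25            # samples, power, noise variance

s = rng.normal(0.0, 1.0, n)                  # unit-variance Gaussian source
x = np.sqrt(P) * s                           # zero-delay encoder: pure scaling
y = x + rng.normal(0.0, np.sqrt(sigma2), n)  # AWGN channel
s_hat = (np.sqrt(P) / (P + sigma2)) * y      # linear MMSE decoder

empirical_D = np.mean((s - s_hat) ** 2)
shannon_D = sigma2 / (P + sigma2)            # optimum: D = 1 / (1 + SNR)
print(empirical_D, shannon_D)                # the two should nearly coincide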
- Taotao Wang; Soung Chang Liew, "Joint Channel Estimation and Channel Decoding in Physical-Layer Network Coding Systems: An EM-BP Factor Graph Framework," Wireless Communications, IEEE Transactions on, vol.13, no.4, pp.2229, 2245, April 2014. doi: 10.1109/TWC.2013.030514.131312 This paper addresses the problem of joint channel estimation and channel decoding in physical-layer network coding (PNC) systems. In PNC, multiple users transmit to a relay simultaneously. PNC channel decoding is different from conventional multi-user channel decoding: specifically, the PNC relay aims to decode a network-coded message rather than the individual messages of the users. Although prior work has shown that PNC can significantly improve the throughput of a relay network, the improvement is predicated on the availability of accurate channel estimates. Channel estimation in PNC, however, can be particularly challenging because of 1) the overlapped signals of multiple users; 2) the correlations among data symbols induced by channel coding; and 3) time-varying channels. We combine the expectation-maximization (EM) algorithm and belief propagation (BP) algorithm on a unified factor-graph framework to tackle these challenges. In this framework, channel estimation is performed by an EM subgraph, and channel decoding is performed by a BP subgraph that models a virtual encoder matched to the target of PNC channel decoding. Iterative message passing between these two subgraphs allow the optimal solutions for both to be approached progressively. We present extensive simulation results demonstrating the superiority of our PNC receivers over other PNC receivers.
Keywords: channel coding; channel estimation; expectation-maximisation algorithm; graph theory; network coding; BP algorithm; EM algorithm; EM-BP factor graph framework; PNC channel decoding; PNC receivers; PNC systems; belief propagation; data symbols; expectation-maximization; joint channel estimation; multiuser channel decoding; network-coded message; overlapped signals; physical layer network coding systems; unified factor graph framework; Channel estimation; Decoding; Iterative decoding; Joints; Message passing; Receivers; Relays; Physical-layer network coding; belief propagation; expectation-maximization; factor graph; message passing (ID#:14-2654)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6760601&isnumber=6803026
- Feng Cen; Fanglai Zhu, "Codeword Averaged Density Evolution For Distributed Joint Source And Channel Coding With Decoder Side Information," Communications, IET, vol.8, no.8, pp.1325,1335, May 22 2014. doi: 10.1049/iet-com.2013.1005 The authors consider applying systematic low-density parity-check codes with the parity-based approach to lossless (or near lossless) distributed joint source channel coding (DJSCC) with decoder side information for non-uniform sources over the asymmetric memoryless transmission channel. By using an equivalent channel coding model, which consists of two parallel sub-channels, a correlation sub-channel and a transmission sub-channel, they derive the codeword averaged density evolution (DE) for the DJSCC with decoder side information for asymmetrically correlated non-uniform sources over the asymmetric memoryless transmission channel. A new code ensemble definition of the irregular codes is introduced to distinguish between the source and the parity variable nodes, respectively. Extensive simulations demonstrate the effectiveness of the codeword averaged DE.
Keywords: channel coding; combined source-channel coding; decoding; parity check codes; DE; DJSCC; asymmetric memoryless transmission channel; codeword averaged density evolution; decoder side information; distributed joint source channel coding; equivalent channel coding model; parity variable nodes; systematic low-density parity-check codes (ID#:14-2655)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6827069&isnumber=6827053
- Muramatsu, J., "Channel Coding and Lossy Source Coding Using a Generator of Constrained Random Numbers," Information Theory, IEEE Transactions on, vol.60, no.5, pp.2667, 2686, May 2014. doi: 10.1109/TIT.2014.2309140 Stochastic encoders for channel coding and lossy source coding are introduced with a rate close to the fundamental limits, where the only restriction is that the channel input alphabet and the reproduction alphabet of the lossy source code are finite. Random numbers, which satisfy a condition specified by a function and its value, are used to construct stochastic encoders. The proof of the theorems is based on the hash property of an ensemble of functions, where the results are extended to general channels/sources and alternative formulas are introduced for channel capacity and the rate-distortion region. Since an ensemble of sparse matrices has a hash property, we can construct a code by using sparse matrices.
Keywords: channel capacity; channel coding; random number generation; source coding; channel capacity; channel coding; channel input alphabet; constrained random number generator; hash property; lossy source coding; rate-distortion region; reproduction alphabet; sparse matrices; stochastic encoders; Channel capacity; Channel coding; Manganese; Probability distribution; Random variables; Rate-distortion; Sparse matrices; LDPC codes; Shannon theory; channel coding; information spectrum methods; lossy source coding (ID#:14-2656)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6750723&isnumber=6800061
- Bocharova, IE.; Guillen i Fabregas, A; Kudryashov, B.D.; Martinez, A; Tauste Campo, A; Vazquez-Vilar, G., "Source-Channel Coding With Multiple Classes," Information Theory (ISIT), 2014 IEEE International Symposium on, pp.1514,1518, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6875086 We study a source-channel coding scheme in which source messages are assigned to classes and encoded using a channel code that depends on the class index. While each class code can be seen as a concatenation of a source code and a channel code, the overall performance improves on that of separate source-channel coding and approaches that of joint source-channel coding as the number of classes increases. The performance of this scheme is studied by means of random-coding bounds and validated by simulation of a low-complexity implementation using existing source and channel codes.
Keywords: combined source-channel coding; class code; random coding bounds; source channel coding; source messages; AWGN; Decoding; Joints; Presses (ID#:14-2657)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875086&isnumber=6874773
- Romero, S.M.; Hassanin, M.; Garcia-Frias, J.; Arce, G.R., "Analog Joint Source Channel Coding for Wireless Optical Communications and Image Transmission," Lightwave Technology, Journal of, vol.32, no.9, pp.1654, 1662, May 1, 2014. doi: 10.1109/JLT.2014.2308136 An analog joint source channel coding (JSCC) system is developed for wireless optical communications. Source symbols are mapped directly onto channel symbols using space filling curves and then a non-linear stretching function is used to reduce distortion. Different from digital systems, the proposed scheme does not require long block lengths to achieve good performance, reducing the complexity of the decoder significantly. This paper focuses on intensity-modulated direct-detection (IM/DD) optical wireless systems. First, a theoretical analysis of the IM/DD wireless optical channel is presented and the prototype communication system designed to transmit data using analog JSCC is introduced. The nonlinearities of the real channel are studied and characterized. A novel technique to mitigate the channel nonlinearities is presented. The performance of the real system follows the simulations and closely approximates the theoretical limits. The proposed system is then used for image transmission by first taking samples of a set of images using compressive sensing and then encoding the measurements using analog JSCC. Both simulation and experimental results are shown.
Keywords: combined source-channel coding; compressed sensing; image coding; intensity modulation; optical communication; optical modulation; wireless channels; IM/DD wireless optical channel; JSCC; analog joint source channel coding; channel nonlinearities; compressive sensing; distortion reduction; image encoding; image transmission; intensity-modulated direct-detection optical wireless systems; nonlinear stretching function; space filling curves; wireless optical communications; Channel coding; Decoding; Noise; Nonlinear optics; Optical receivers; Optical transmitters; Wireless communication; Compressive sensing (CS); Shannon mappings; intensity-modulation direct-detection (IM/DD); joint source channel coding (JSCC); optical communications (ID#:14-2658)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748003&isnumber=6781021
- Suhan Choi, "Functional Duality Between Distributed Reconstruction Source Coding and Multiple-Access Channel Coding in the Case of Correlated Messages," Communications Letters, IEEE, vol.18, no.3, pp.499, 502, March 2014. doi: 10.1109/LCOMM.2014.012214.140018 In this letter, functional duality between Distributed Reconstruction Source Coding (DRSC) with correlated messages and Multiple-Access Channel Coding (MACC) with correlated messages is considered. It is shown that under certain conditions, for a given DRSC problem with correlated messages, a functional dual MACC problem with correlated messages can be obtained, and vice versa. In particular, it is shown that the correlation structures of the messages in the two dual problems are the same. The source distortion measure and the channel cost measure for this duality are also specified.
Keywords: channel coding; correlation theory; distortion measurement; duality (mathematics); functional analysis; source coding; DRSC; MACC; channel cost measure; correlated messages; distributed reconstruction source coding; functional duality; multiple access channel coding; source distortion measure; Bipartite graph; Channel coding; Correlation; Decoding; Distortion measurement; Source coding; Functional duality; correlated messages; distributed reconstruction source coding; multiple-access channel coding (ID#:14-2659)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6784556&isnumber=6784524
- Jie Luo, "Generalized Channel Coding Theorems For Random Multiple Access Communication," Communications Workshops (ICC), 2014 IEEE International Conference on, vol., no., pp.489,494, 10-14 June 2014. doi: 10.1109/ICCW.2014.6881246 This paper extends the channel coding theorems of [1][2] to time-slotted random multiple access communication systems with a generalized problem formulation. Assume that users choose their channel codes arbitrarily in each time slot. When the codeword length can be taken to infinity, the fundamental performance limitation of the system is characterized using an achievable region defined in the space of channel code index vectors, where each vector specifies the channel codes of all users. The receiver decodes the message if the code index vector happens to locate inside the achievable region and reports a collision if it falls outside the region. A generalized system error performance measure is defined as the maximum of weighted probabilities of different types of communication error events. Upper bounds on the generalized error performance measure are derived under the assumption of a finite codeword length. It is shown that "interfering users" can be introduced to model not only the impact of interference from remote transmitters, but also the impact of channel uncertainty in random access communication.
Keywords: channel coding; decoding; probability; radio receivers; radio transmitters; radiofrequency interference; channel code index vector; channel uncertainty impact; communication error events; finite codeword length; generalized channel coding theorems; generalized system error performance measurement; interference impact; message decoding; receiver; remote transmitters; time-slotted random multiple access communication systems; weighted probabilities; Channel coding; Decoding; Error probability; Indexes; Receivers; Vectors (ID#:14-2660)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881246&isnumber=6881162
- Hye Won Chung; Guha, S.; Lizhong Zheng, "Superadditivity of Quantum Channel Coding Rate With Finite Blocklength Quantum Measurements," Information Theory (ISIT), 2014 IEEE International Symposium on, pp.901,905, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6874963 We investigate superadditivity in the maximum achievable rate of reliable classical communication over a quantum channel. The maximum number of classical information bits extracted per use of the quantum channel strictly increases as the number of channel outputs jointly measured at the receiver increases. This phenomenon is called superadditivity. We provide an explanation of this phenomenon by comparing a quantum channel with a classical discrete memoryless channel (DMC) under concatenated codes. We also give a lower bound on the maximum accessible information per channel use at a finite length of quantum measurements in terms of V, which is the quantum version of channel dispersion, and C, the classical capacity of the quantum channel.
Keywords: channel coding; concatenated codes; DMC; concatenated codes; discrete memoryless channel; finite blocklength quantum measurements; quantum channel; quantum channel coding rate; superadditivity; Binary phase shift keying; Concatenated codes; Decoding; Length measurement; Photonics; Quantum mechanics; Receivers (ID#:14-2661)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874963&isnumber=6874773
- Vaezi, M.; Labeau, F., "Distributed Source-Channel Coding Based on Real-Field BCH Codes," Signal Processing, IEEE Transactions on, vol.62, no.5, pp.1171,1184, March 1, 2014. doi: 10.1109/TSP.2014.2300039 We use real-number codes to compress statistically dependent sources and establish a new framework for distributed lossy source coding in which we compress sources before, rather than after, quantization. This change in the order of binning and quantization blocks makes it possible to model the correlation between continuous-valued sources more realistically and compensate for the quantization error partially. We then focus on the asymmetric case, i.e., lossy source coding with side information at the decoder. The encoding and decoding procedures are described in detail for a class of real-number codes called discrete Fourier transform (DFT) codes, both for the syndrome- and parity-based approaches. We leverage subspace-based decoding to improve the decoding and by extending it we are able to perform distributed source coding in a rate-adaptive fashion to further improve the decoding performance when the statistical dependency between sources is unknown. We also extend the parity-based approach to the case where the transmission channel is noisy and thus we perform distributed joint source-channel coding in this context. The proposed system is well suited for low-delay communications, as the mean-squared reconstruction error (MSE) is shown to be reasonably low for very short block length.
Keywords: BCH codes; combined source-channel coding; correlation methods; decoding; discrete Fourier transforms; mean square error methods; quantisation (signal); DFT codes; MSE; discrete Fourier transform; distributed lossy source coding; distributed source-channel coding; mean-squared reconstruction error; quantization blocks; quantization error; real-field BCH codes; real-number codes; subspace-based decoding; transmission channel; Correlation; Decoding; Delays; Discrete Fourier transforms; Quantization (signal); Source coding; BCH-DFT codes; distributed source coding; joint source-channel coding; parity; real-number codes; syndrome (ID#:14-2662)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6712144&isnumber=6732988
- Tao Wang; Wenbo Zhang; Maunder, R.G.; Hanzo, L., "Near-Capacity Joint Source and Channel Coding of Symbol Values from an Infinite Source Set Using Elias Gamma Error Correction Codes," Communications, IEEE Transactions on, vol.62, no.1, pp.280,292, January 2014. doi: 10.1109/TCOMM.2013.120213.130301 In this paper we propose a novel low-complexity Joint Source and Channel Code (JSCC), which we refer to as the Elias Gamma Error Correction (EGEC) code. Like the recently-proposed Unary Error Correction (UEC) code, this facilitates the practical near-capacity transmission of symbol values that are randomly selected from a set having an infinite cardinality, such as the set of all positive integers. However, in contrast to the UEC code, our EGEC code is a universal code, facilitating the transmission of symbol values that are randomly selected using any monotonic probability distribution. When the source symbols obey a particular zeta probability distribution, our EGEC scheme is shown to offer a 3.4 dB gain over a UEC benchmarker, when Quaternary Phase Shift Keying (QPSK) modulation is employed for transmission over an uncorrelated narrowband Rayleigh fading channel. In the case of another zeta probability distribution, our EGEC scheme offers a 1.9 dB gain over a Separate Source and Channel Coding (SSCC) benchmarker.
Keywords: Rayleigh channels; channel coding; error correction codes; phase shift keying; source coding; statistical distributions; EGEC code; Infinite Source Set; QPSK modulation; UEC code; elias gamma error correction codes; monotonic probability distribution; near-capacity joint source and channel coding; near-capacity transmission; novel low-complexity joint source and channel code; quaternary phase shift keying modulation; symbol values; unary error correction code; uncorrelated narrowband Rayleigh fading channel; universal code; zeta probability distribution; Decoding; Encoding; Error correction codes; Phase shift keying; Probability distribution; Transmitters; Vectors; Source coding; channel capacity; channel coding; iterative decoding (ID#:14-2663)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6679360&isnumber=6719911
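The source code underlying the EGEC scheme is the classical Elias gamma code, which maps a positive integer n to floor(log2 n) zeros followed by the binary expansion of n, giving a prefix-free code for an unbounded alphabet. The sketch below implements only that mapping; the paper's error-correction layer and iterative decoding are not shown.

def elias_gamma_encode(n):
    """Positive integer -> gamma codeword: (len-1) zeros + binary expansion."""
    assert n >= 1
    b = bin(n)[2:]                    # binary expansion, MSB first
    return "0" * (len(b) - 1) + b

def elias_gamma_decode(bits):
    """Return (value, remaining bits) for one prefix-free gamma codeword."""
    zeros = 0
    while bits[zeros] == "0":
        zeros += 1
    value = int(bits[zeros:2 * zeros + 1], 2)
    return value, bits[2 * zeros + 1:]

stream = "".join(elias_gamma_encode(n) for n in (1, 5, 17))
print(stream)                         # '100101000010001'
vals, rest = [], stream
while rest:
    v, rest = elias_gamma_decode(rest)
    vals.append(v)
print(vals)                           # [1, 5, 17]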
Clean Slate
The "clean slate" approach looks at designing networks and internets from scratch, with security built in, in contrast to the evolved Internet in place. The research presented here covers a range of research topics, and includes a survey of those topics. These works were published or presented in the first half of 2014.
- Sourlas, V.; Tassiulas, L., "Replication Management And Cache-Aware Routing In Information-Centric Networks," Network Operations and Management Symposium (NOMS), 2014 IEEE, pp.1,7, 5-9 May 2014. doi: 10.1109/NOMS.2014.6838282 Content distribution in the Internet places content providers in a dominant position, with delivery happening directly between two end-points, that is, from content providers to consumers. Information-Centrism has been proposed as a paradigm shift from the host-to-host Internet to a host-to-content one, or in other words from an end-to-end communication system to a native distribution network. This trend has attracted the attention of the research community, which has argued that content, instead of end-points, must be at the center stage of attention. Given this emergence of information-centric solutions, the relevant management needs in terms of performance have not been adequately addressed, yet they are absolutely essential for relevant network operations and crucial for the information-centric approaches to succeed. Performance management and traffic engineering approaches are also required to control routing, to configure the logic for replacement policies in caches and to control decisions where to cache, for instance. Therefore, there is an urgent need to manage information-centric resources and in fact to constitute their missing management and control plane which is essential for their success as clean-slate technologies. In this thesis we aim to provide solutions to crucial problems that remain, such as the management of information-centric approaches which has not yet been addressed, focusing on the key aspect of route and cache management.
Keywords: Internet; telecommunication network routing; telecommunication traffic; Internet; cache management; cache-aware routing; clean-slate technologies; content distribution; control plane; end-to-end communication system; host-to-host Internet; information-centric approaches; information-centric networks; information-centric resources; information-centric solutions; information-centrism; missing management; native distribution network; performance management; replication management; route management; traffic engineering approaches; Computer architecture; Network topology; Planning; Routing; Servers; Subscriptions; Transportation (ID#:14-2664)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838282&isnumber=6838210
- Visala, K.; Keating, A; Khan, R.H., "Models And Tools For The High-Level Simulation Of A Name-Based Interdomain Routing Architecture," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on , vol., no., pp.55,60, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849168 The deployment and operation of global network architectures can exhibit complex, dynamic behavior and the comprehensive validation of their properties, without actually building and running the systems, can only be achieved with the help of simulations. Packet-level models are not feasible in the Internet scale, but we are still interested in the phenomena that emerge when the systems are run in their intended environment. We argue for the high-level simulation methodology and introduce a simulation environment based on aggregate models built on state-of-the-art datasets available while respecting invariants observed in measurements. The models developed are aimed at studying a clean slate name-based interdomain routing architecture and provide an abundance of parameters for sensitivity analysis and a modular design with a balanced level of detail in different aspects of the model. In addition to introducing several reusable models for traffic, topology, and deployment, we report our experiences in using the high-level simulation approach and potential pitfalls related to it.
Keywords: Internet; telecommunication network routing; telecommunication network topology; telecommunication traffic; aggregate models; clean slate name-based interdomain routing architecture; complex-dynamic behavior; global network architecture deployment; global network architecture operation; high-level simulation methodology; modular design; packet-level models; reusable deployment model; reusable topology model; reusable traffic model; sensitivity analysis; Aggregates; Approximation methods; Internet; Network topology; Peer-to-peer computing; Routing; Topology (ID#:14-2665)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849168&isnumber=6849127
- Campista, M.E.M.; Rubinstein, M.G.; Moraes, IM.; Costa, L.H.M.K.; Duarte, O.C.M.B., "Challenges and Research Directions for the Future Internetworking," Communications Surveys & Tutorials, IEEE, vol.16, no.2, pp.1050,1079, Second Quarter 2014. doi: 10.1109/SURV.2013.100213.00143 We review the main challenges and survey promising techniques for network interconnection in the Internet of the future. To this end, we first discuss the shortcomings of the Internet's current model. Among them, many are consequences of unforeseen demands on the original Internet design, such as mobility, multihoming, multipath, and network scalability. These challenges have attracted significant research efforts in recent years because of both their relevance and complexity. In this survey, for the sake of completeness, we cover several new protocols for network interconnection spanning both incremental deployments (the evolutionary approach) and radical proposals to redesign the Internet from scratch (the clean-slate approach). We focus on specific proposals for future internetworking, such as Loc/ID split, flat routing, network mobility, multipath and content-based routing, path programmability, and Internet scalability. Although there is no consensus on the future internetworking approach, requirements such as security, scalability, and incremental deployment are often considered.
Keywords: internetworking; telecommunication network routing; Internet scalability; content-based routing; future internetworking; incremental deployments; multipath routing; network interconnection spanning; network mobility; path programmability; radical proposals; IP networks; Internet; Mobile communication; Mobile computing; Routing; Routing protocols; Future Internet; internetworking; routing (ID#:14-2666)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6644748&isnumber=6811383
- Qadir, J.; Hasan, O., "Applying Formal Methods to Networking: Theory, Techniques and Applications," Communications Surveys & Tutorials, IEEE, vol. PP, no.99, pp.1, 1, August 2014. doi: 10.1109/COMST.2014.2345792 Despite its great importance, modern network infrastructure is remarkable for the lack of rigor in its engineering. The Internet which began as a research experiment was never designed to handle the users and applications it hosts today. The lack of formalization of the Internet architecture meant limited abstractions and modularity, especially for the control and management planes, thus requiring for every new need a new protocol built from scratch. This led to an unwieldy ossified Internet architecture resistant to any attempts at formal verification, and an Internet culture where expediency and pragmatism are favored over formal correctness. Fortunately, recent work in the space of clean slate Internet design--especially, the software defined networking (SDN) paradigm--offers the Internet community another chance to develop the right kind of architecture and abstractions. This has also led to a great resurgence in interest of applying formal methods to specification, verification, and synthesis of networking protocols and applications. In this paper, we present a self-contained tutorial of the formidable amount of work that has been done in formal methods, and present a survey of its applications to networking.
Keywords: Communities; Computers; Internet; Mathematics; Protocols; Software; Tutorials (ID#:14-2667)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6873212&isnumber=5451756
- Mohamed, Abdelrahim; Onireti, Oluwakayode; Qi, Yinan; Imran, Ali; Imran, Muhammed; Tafazolli, Rahim, "Physical Layer Frame in Signalling-Data Separation Architecture: Overhead and Performance Evaluation," European Wireless 2014; 20th European Wireless Conference; Proceedings of, pp.1,6, 14-16 May 2014. Doi: (not provided) Conventional cellular systems are dimensioned according to a worst case scenario, and they are designed to ensure ubiquitous coverage with an always-present wireless channel irrespective of the spatial and temporal demand of service. A more energy conscious approach will require an adaptive system with a minimum amount of overhead that is available at all locations and all times but becomes functional only when needed. This approach suggests a new clean slate system architecture with a logical separation between the ability to establish availability of the network and the ability to provide functionality or service. Focusing on the physical layer frame of such an architecture, this paper discusses and formulates the overhead reduction that can be achieved in next generation cellular systems as compared with the Long Term Evolution (LTE). Considering channel estimation as a performance metric whilst conforming to time and frequency constraints of pilots spacing, we show that the overhead gain does not come at the expense of performance degradation.
Keywords: (not provided) (ID#:14-2668)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843062&isnumber=6843048
Cloud Security
Cloud security is one of the prime topics for theoretical and applied research today. The works cited here cover a wide range of topics and methods for addressing cloud security issues. They were presented or published between January and August of 2014.
- Feng Zhao; Chao Li; Chun Feng Liu, "A Cloud Computing Security Solution Based On Fully Homomorphic Encryption," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.485,488, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779008 With the rapid development of Cloud computing, more and more users deposit their data and applications in the cloud, but this development is hindered by many Cloud security problems. Cloud computing has many characteristics, e.g., multi-user operation, virtualization, scalability, and so on. Because of these new characteristics, traditional security technologies cannot make Cloud computing fully safe; Cloud computing security has therefore become the current research focus and is also this paper's research direction [1]. In order to solve the problem of data security in cloud computing systems, a fully homomorphic encryption algorithm is introduced for cloud computing data security, a new data security solution to the insecurity of cloud computing is proposed, and application scenarios are constructed. This new security solution fully supports the processing and retrieval of encrypted data, effectively leading to broad applicability and to secure data transmission and storage in cloud computing [2].
Keywords: cloud computing; cryptography; cloud computing security solution; cloud security problem; data security solution; data storage; data transmission; encrypted data processing; encrypted data retrieval; fully homomorphic encryption algorithm; security technologies; Cloud computing; Encryption; Safety; Cloud security; Cloud service; Distributed implementation; Fully homomorphic encryption (ID#:14-2669)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779008&isnumber=6778899
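Fully homomorphic schemes are too involved to sketch here, but the homomorphic principle the paper relies on can be demonstrated with the simpler, additively homomorphic Paillier cryptosystem, in which multiplying ciphertexts adds the underlying plaintexts. The sketch below uses toy parameters with no real security and stands in for, rather than implements, the paper's fully homomorphic construction; it needs Python 3.8+ for pow(x, -1, n).

import random
from math import gcd

p, q = 293, 433                       # demo-sized primes, NOT secure
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1                             # standard simple choice of generator

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

c1, c2 = encrypt(41), encrypt(17)
print(decrypt((c1 * c2) % n2))        # 58: the addition happened on ciphertexts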
- Fazal-e-Amin; Alghamdi, AS.; Ahmad, I, "Cloud Based C4I Systems: Security Requirements and Concerns," Computational Science and Computational Intelligence (CSCI), 2014 International Conference on, vol.2, pp.75, 80, 10-13 March 2014. doi: 10.1109/CSCI.2014.98 C4I (Command, Control, Communication, Computer and Intelligence) systems are critical systems of systems. These systems are used in the military, in emergency response, in disaster management, etc. Due to the sensitive nature of the domains and applications of these systems, quality can never be compromised. C4I systems are resource-demanding systems; their expansion or upgrading requires additional computational resources. Cloud computing provides a solution for the convenient access and scaling of resources. Recently, researchers have envisioned deploying C4I systems using cloud computing resources. However, there are many issues in such a deployment, and security, at the top of the list, is the focus of many researchers. In this research, the security requirements and concerns of cloud-based C4I systems are highlighted. Different aspects of cloud computing and C4I systems are discussed from the security point of view. This research will help both academia and industry to further strengthen the basis of cloud-based C4I systems.
Keywords: cloud computing; command and control systems; security of data; Command, Control, Communication, Computer and Intelligence systems; cloud based C4I systems; cloud computing resources; critical systems of systems; security requirements; Availability; Cloud computing; Computational modeling; Computers; Government; Security; C4I system; cloud computing; security (ID#:14-2670)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822307&isnumber=6822285
- Albahdal, AA; Alsolami, F.; Alsaadi, F., "Evaluation of Security Supporting Mechanisms in Cloud Storage," Information Technology: New Generations (ITNG), 2014 11th International Conference on, pp.285,292, 7-9 April 2014. doi: 10.1109/ITNG.2014.110 Cloud storage is one of the most promising services of cloud computing. It holds promise for unlimited, scalable, flexible, and low cost data storage. However, security of data stored at the cloud is the main concern that hinders the adoption of cloud storage model. In the literature, there are many proposed mechanisms to improve the security of cloud storage. These proposed mechanisms differ in many aspects and provide different levels of security. In this paper, we evaluate five different mechanisms for supporting the security of the cloud storage. We begin with a brief description of these mechanisms. Then we evaluate these mechanisms based on the following criteria: security, support of writing serializability and reading freshness, workload distribution between the client and cloud, performance, financial cost, support of accountability between the client and cloud, support of file sharing between users, and ease of deployment. The evaluation section of this paper forms a guide for individuals and organizations to select or design an appropriate mechanism that satisfies their requirements for securing cloud storage.
Keywords: cloud computing; security of data; storage management; file sharing; cloud computing; cloud security; cloud storage model; reading freshness; security supporting mechanism; workload distribution; Availability; Cloud computing; Encryption; Secure storage; Writing; Cloud Computing; Cloud Security; Cloud Storage (ID#:14-2671)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822212&isnumber=6822158
- Varadharajan, V.; Tupakula, U., "Security as a Service Model for Cloud Environment," Network and Service Management, IEEE Transactions on, vol. 11, no.1, pp.60, 75, March 2014. doi: 10.1109/TNSM.2014.041614.120394 Cloud computing is becoming increasingly important for provision of services and storage of data in the Internet. However there are several significant challenges in securing cloud infrastructures from different types of attacks. The focus of this paper is on the security services that a cloud provider can offer as part of its infrastructure to its customers (tenants) to counteract these attacks. Our main contribution is a security architecture that provides a flexible security as a service model that a cloud provider can offer to its tenants and customers of its tenants. Our security as a service model while offering a baseline security to the provider to protect its own cloud infrastructure also provides flexibility to tenants to have additional security functionalities that suit their security requirements. The paper describes the design of the security architecture and discusses how different types of attacks are counteracted by the proposed architecture. We have implemented the security architecture and the paper discusses analysis and performance evaluation results.
Keywords: cloud computing; security of data; Internet; baseline security; cloud computing; cloud environment; cloud infrastructures; cloud provider; data storage; security architecture; security functionalities; security requirements; security-as-a-service model; service provisioning; Cloud computing; Computer architecture; Operating systems; Privacy; Security; Software as a service; Virtual machining; Cloud security; security and privacy; security architecture (ID#:14-2672)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805344&isnumber=6804401
- Whaiduzzaman, M.; Gani, A, "Measuring Security For Cloud Service Provider: A Third Party Approach," Electrical Information and Communication Technology (EICT), 2013 International Conference on, pp.1,6, 13-15 Feb. 2014. doi: 10.1109/EICT.2014.6777855 Cloud Computing (CC) is a new paradigm of utility computing and an enormously growing phenomenon in the present IT industry. CC offers low-cost investment opportunities for new business entrepreneurs as well as business avenues for cloud service providers. As the number of new Cloud Service Customers (CSC) increases, users require a secure, reliable and trustworthy Cloud Service Provider (CSP) from the market to store confidential data. However, shortcomings in reliably monitoring and identifying security risks and threats are an immense concern in choosing a highly secure CSP for the wider cloud community. Building a ranking system for secure CSPs that gauges trust, privacy and security is currently a challenging task. In this paper, a Trusted Third Party (TTP), akin to a credit rating agency, is introduced for security ranking by identifying currently assessable security risks. We propose an automated software scripting model in which the TTP runs penetration tests on the CSP side to identify vulnerabilities and check the security strength and fault tolerance capacity of the CSP. Using the results, several non-measurable metrics are added to produce a ranking of secure, trustworthy CSPs. Moreover, we propose a conceptual model, called the federated third party approach, for monitoring and maintaining such TTP cloud ranking providers worldwide. This federated model of third party cloud ranking and monitoring assures and boosts confidence in a feasible, secure and trustworthy market of CSPs.
Keywords: cloud computing; program testing; trusted computing; CC; CSC; CSP fault tolerance capacity; CSP ranking system; CSP security strength; IT industry; TTP; automated software scripting model; business avenues; business entrepreneur; cloud computing; cloud service customer; cloud service provider; confidential data storage; credit rating agency; federated third party approach; information technology; penetration testing; security measurement; security risks identification; security risks monitoring; trusted third party; utility computing; Business; Cloud computing; Measurement; Mobile communication; Monitoring; Security; Cloud computing; cloud security ranking; cloud service provider; trusted third party (ID#:14-2673)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777855&isnumber=6777807
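As a rough illustration of the ranking idea in the entry above, the sketch below aggregates third-party assessment scores into a single CSP ranking. The metric names, weights, and scores are invented for illustration and are not taken from the paper.

    # Hypothetical sketch: combining third-party assessment results into a
    # CSP security ranking. Metric names and weights are illustrative only.
    WEIGHTS = {
        "vulnerability_scan":  0.35,  # fraction of probes the CSP resisted
        "fault_tolerance":     0.25,  # recovery success under induced failures
        "encryption_strength": 0.25,  # audit of at-rest/in-transit settings
        "incident_history":    0.15,  # normalized score from public records
    }

    def security_score(metrics: dict) -> float:
        """Weighted aggregate in [0, 1]; higher means more trustworthy."""
        return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

    csps = {
        "provider_a": {"vulnerability_scan": 0.9, "fault_tolerance": 0.8,
                       "encryption_strength": 0.95, "incident_history": 0.7},
        "provider_b": {"vulnerability_scan": 0.7, "fault_tolerance": 0.9,
                       "encryption_strength": 0.6, "incident_history": 0.9},
    }

    for csp in sorted(csps, key=lambda c: security_score(csps[c]), reverse=True):
        print(f"{csp}: {security_score(csps[csp]):.3f}")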
- Djenna, A; Batouche, M., "Security Problems In Cloud Infrastructure," Networks, Computers and Communications, The 2014 International Symposium on, pp.1,6, 17-19 June 2014. doi: 10.1109/SNCC.2014.6866505 Cloud computing is the logical continuation of computing history, following in the footsteps of mainframes, PCs, servers, the Internet and data centers, all of which have radically changed everyday life for those who adopt the technology. Cloud computing is able to provide its customers numerous services through the Internet. Virtualization is the key to establishing a cloud infrastructure, and like any technology it presents both benefits and challenges. In this context, the cloud infrastructure can be used as a springboard for the generation of new types of attacks. Therefore, security is one of the major concerns for the evolution and migration to the cloud. In this paper, an overview of security issues related to cloud infrastructure is presented, followed by a critical analysis of the various issues that arise in IaaS clouds and the current attempts to improve security in the cloud environment.
Keywords: cloud computing; telecommunication security; virtualisation; IaaS Cloud; Internet; cloud computing; cloud infrastructure; computing history; security problems; virtualization; Cloud computing; Computer architecture; Security; Servers; Virtual machine monitors; Virtual machining; Virtualization; Cloud Computing; Cloud Security; Virtualization (ID#:14-2674)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866505&isnumber=6866503
- Chalse, R.R.; Katara, A; Selokar, A; Talmale, R., "Inter-cloud Data Transfer Security," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp.654,657, 7-9 April 2014. doi: 10.1109/CSNT.2014.137 The use of cloud computing has increased rapidly in many organizations. Cloud computing provides many benefits in terms of low cost and accessibility of data. Cloud computing has generated a lot of interest and competition in the industry, and it is recognized as one of the top 10 technologies of 2010. It is an Internet-based service delivery model which provides services, computing and storage for users in all markets, including financial, health care and government. In this paper we provide inter-cloud data transfer security. Cloud security is becoming a key differentiator and competitive edge between cloud providers. This paper discusses the security issues arising in different types of clouds. This work aims to promote the use of multi-clouds due to their ability to reduce the security risks that affect cloud computing users.
Keywords: cloud computing; security of data; Internet based service delivery model; cloud computing user; cloud providers; competitive edge; differentiator; government; health care; intercloud data transfer security; multiclouds; security risks; Cloud computing; Computer crime; Data transfer; Fingerprint recognition; Servers; Software as a service; Cloud; Cloud computing; DMFT; Security; Security challenges; data security (ID#:14-2675)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821479&isnumber=6821334
- Mapp, G.; Aiash, M.; Ondiege, B.; Clarke, M., "Exploring a New Security Framework for Cloud Storage Using Capabilities," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.484, 489, 7-11 April 2014. doi: 10.1109/SOSE.2014.69 We are seeing the deployment of new types of networks such as sensor networks for environmental and infrastructural monitoring, social networks such as facebook, and e-Health networks for patient monitoring. These networks are producing large amounts of data that need to be stored, processed and analysed. Cloud technology is being used to meet these challenges. However, a key issue is how to provide security for data stored in the Cloud. This paper addresses this issue in two ways. It first proposes a new security framework for Cloud security which deals with all the major system entities. Secondly, it introduces a Capability ID system based on modified IPv6 addressing which can be used to implement a security framework for Cloud storage. The paper then shows how these techniques are being used to build an e-Health system for patient monitoring.
Keywords: cloud computing; electronic health records; patient monitoring; social networking (online); storage management; IPv6 addressing; capability ID system; cloud security; cloud storage; cloud technology; e-Health system; e-health networks; environmental monitoring; facebook; infrastructural monitoring; patient monitoring; security for data; security framework; sensor networks; social networks; system entity; Cloud computing; Companies; Monitoring; Protocols; Security; Servers; Virtual machine monitors; Capability Systems; Cloud Storage; Security Framework; e-Health Monitoring (ID#:14-2676)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830953&isnumber=6825948
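The paper above derives capabilities from modified IPv6 addressing. One plausible reading, sketched below under stated assumptions (the field layout, the HMAC construction, and the master-key handling are our own illustrative choices, not the authors' actual scheme), is to pack a 64-bit capability token into the interface-identifier half of an address:

    # Sketch: embedding a capability token in the low 64 bits of an IPv6
    # address. Layout and key handling are illustrative assumptions.
    import hmac, hashlib, ipaddress

    SECRET = b"storage-service-master-key"   # hypothetical service-side secret

    def capability_address(prefix: str, client_id: str, rights: str) -> ipaddress.IPv6Address:
        """Pack a 64-bit MAC over (client, rights) into the interface identifier."""
        tag = hmac.new(SECRET, f"{client_id}|{rights}".encode(), hashlib.sha256).digest()
        iid = int.from_bytes(tag[:8], "big")     # low 64 bits carry the capability
        net = ipaddress.IPv6Network(prefix)
        return ipaddress.IPv6Address(int(net.network_address) | iid)

    print(capability_address("2001:db8::/64", "client-42", "read-write"))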
- Himmel, M.A; Grossman, F., "Security on distributed systems: Cloud security versus traditional IT," IBM Journal of Research and Development, vol.58, no.1, pp.3:1, 3:13, Jan.-Feb. 2014. doi: 10.1147/JRD.2013.2287591 Cloud computing is a popular subject across the IT (information technology) industry, but many risks associated with this relatively new delivery model are not yet fully understood. In this paper, we use a qualitative approach to gain insight into the vectors that contribute to cloud computing risks in the areas of security, business, and compliance. The focus is on the identification of risk vectors affecting cloud computing services and the creation of a framework that can help IT managers in their cloud adoption process and risk mitigation strategy. Economic pressures on businesses are creating a demand for an alternative delivery model that can provide flexible payments, dramatic cuts in capital investment, and reductions in operational cost. Cloud computing is positioned to take advantage of these economic pressures with low-cost IT services and a flexible payment model, but with certain security and privacy risks. The frameworks offered by this paper may assist IT professionals in obtaining a clearer understanding of the risk tradeoffs associated with cloud computing environments.
Keywords: Automation; Cloud computing; Computer security; Information technology; Risk management; Virtual machine monitors (ID#:14-2677)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6717051&isnumber=6717043
- Sah, S.K.; Shakya, S.; Dhungana, H., "A Security Management For Cloud Based Applications And Services with Diameter-AAA," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp.6,11, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781243 Cloud computing offers various services and web-based applications over the Internet. With the tremendous growth in the development of cloud based services, security is the main challenge and today's concern for cloud service providers. This paper describes the management of security issues based on Diameter mechanisms for authentication, authorization and accounting (AAA) demanded by cloud service providers, and focuses on the integration of Diameter AAA into cloud system architecture.
Keywords: authorisation; cloud computing; Internet; Web based applications; authentication, authorization and accounting; cloud based applications; cloud based services; cloud computing; cloud service providers; cloud system architecture; diameter AAA mechanisms; security management; Authentication; Availability; Browsers; Computational modeling; Protocols; Servers; Cloud Computing; Cloud Security; Diameter-AAA (ID#:14-2678)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781243&isnumber=6781240
- Boopathy, D.; Sundaresan, M., "Data Encryption Framework Model With Watermark Security For Data Storage In Public Cloud Model," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp.903,907, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828094 Cloud computing technology is a new concept for providing dramatically scalable and virtualized resources. It implies a Service Oriented Architecture (SOA) type, reduced information technology overhead for the end user, a more flexible model, reduced total cost of ownership and an on-demand service-providing structure. From the user's point of view, one of the main concerns is cloud security against unknown threats. The lack of physical access to servers constitutes a completely new and disruptive challenge for investigators. Clients can store, transfer or exchange their data using the public cloud model. This paper presents an encryption method for the public cloud, along with a framework for the cloud service provider's verification mechanism using third party auditors. Cloud data storage is one of the essential services being adopted in today's rapidly developing business world.
Keywords: cloud computing; cryptography; service-oriented architecture; storage management; watermarking; SOA; cloud computing technology; cloud data storage; cloud security; cloud service provider verification mechanism; data encryption framework model; end level user; information technology overhead; mandatory services; on-demand service; physical access; public cloud model; scalable resources; service oriented architecture; third party auditors; total cost of ownership reduction; unknown threats; virtualized resources; watermark security; Cloud computing; Computational modeling; Data models; Encryption; Servers; Watermarking; Cloud Data Storage; Cloud Encryption; Data Confidency; Data Privacy; Encryption Model; Watermark Security (ID#:14-2679)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828094&isnumber=6827395
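The framework above pairs encryption with a watermark that third-party auditors can verify. A minimal stdlib sketch of that verification step, assuming (our assumption, not the paper's specification) the watermark is an HMAC tag under a key shared with the auditor:

    # Minimal sketch: "encrypt, then attach a verifiable watermark". Here the
    # watermark is an HMAC under an auditor-held key; the paper's actual
    # watermarking scheme is not specified this way.
    import hmac, hashlib, os

    AUDIT_KEY = os.urandom(32)        # hypothetical key held by the auditor

    def watermark(ciphertext: bytes, owner_id: str) -> bytes:
        """Bind a ciphertext to its owner so the auditor can check provenance."""
        return hmac.new(AUDIT_KEY, owner_id.encode() + ciphertext, hashlib.sha256).digest()

    def auditor_verify(ciphertext: bytes, owner_id: str, tag: bytes) -> bool:
        return hmac.compare_digest(watermark(ciphertext, owner_id), tag)

    ct = os.urandom(64)               # stands in for encrypted client data
    tag = watermark(ct, "client-7")
    print(auditor_verify(ct, "client-7", tag))   # True
    print(auditor_verify(ct, "intruder", tag))   # False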
- Poornima, B.; Rajendran, T., "Improving Cloud Security by Enhanced HASBE Using Hybrid Encryption Scheme," Computing and Communication Technologies (WCCCT), 2014 World Congress on, pp.312,314, Feb. 27 2014-March 1 2014. doi: 10.1109/WCCCT.2014.88 Cloud computing has appeared as one of the most influential paradigms in the IT industry in recent years. Because this technology requires users to entrust their valuable data to cloud providers, there have been growing security and privacy concerns about outsourced data. Several schemes employing attribute-based encryption (ABE) have been suggested for access control of outsourced data in cloud computing; however, most of them suffer from inflexibility in implementing complex access control policies. In order to realize scalable, flexible, and fine-grained access control of outsourced data in cloud computing, this paper suggests hierarchical attribute-set-based encryption (HASBE), which extends ciphertext-policy attribute-set-based encryption (ASBE) with a hierarchical structure of users. The suggested design not only achieves scalability due to its hierarchical structure, but also inherits the flexibility and fine-grained access control of ASBE in supporting compound attributes. In addition, HASBE uses multiple value assignments for access expiration time to deal with user revocation more effectively than existing schemes. We implement our scheme and show through comprehensive experiments that it is both effective and flexible in handling access control for outsourced data in cloud computing.
Keywords: cloud computing; cryptography; IT commerce; attribute based encryption; attribute set based encryption; cloud computing; cloud providers; enhanced HASBE; hierarchical attribute-set-based encryption; hybrid encryption scheme; improving cloud security; Cloud computing; Computational modeling; Educational institutions; Encryption; Privacy; Scalability (ID#:14-2680)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755167&isnumber=6755083
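HASBE enforces its policies cryptographically inside ciphertexts; as a toy illustration of the policy semantics only (the attribute names and policy shape below are invented), an attribute-set check with an expiration time looks like this:

    # Toy illustration of attribute-set access policy with an expiration
    # attribute. Real HASBE enforces this in the ciphertext, not in code.
    import time

    def satisfies(policy: dict, user_attrs: set, expiry: float) -> bool:
        """policy = {"all_of": set, "any_of": set}; expiry is a Unix time."""
        if time.time() > expiry:                  # access expiration time
            return False
        return policy["all_of"] <= user_attrs and bool(policy["any_of"] & user_attrs)

    policy = {"all_of": {"employee", "finance"}, "any_of": {"manager", "auditor"}}
    alice = {"employee", "finance", "auditor"}
    bob = {"employee", "engineering", "manager"}

    one_hour = time.time() + 3600
    print(satisfies(policy, alice, one_hour))     # True
    print(satisfies(policy, bob, one_hour))       # False: missing 'finance'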
- Goel, R.; Garuba, M.; Goel, R., "Cloud Computing Vulnerability: DDoS as Its Main Security Threat, and Analysis of IDS as a Solution Model," Information Technology: New Generations (ITNG), 2014 11th International Conference on, vol., no., pp.307, 312, 7-9 April 2014. doi: 10.1109/ITNG.2014.77 Cloud computing has emerged as an increasingly popular means of delivering IT-enabled business services and a potential technology resource choice for many private and government organizations in today's rapidly changing computing environment. Consequently, as cloud computing technology, functionality and usability expand, unique security vulnerabilities and threats requiring timely attention arise continuously, the primary challenge being the provision of continuous service availability. This paper will address cloud security vulnerability issues, the threats propagated by a distributed denial of service (DDoS) attack on cloud computing infrastructure, and also discuss the means and techniques that could detect and prevent the attacks.
Keywords: business data processing; cloud computing; computer network security; DDoS; IDS; IT-enabled business services; cloud computing infrastructure; cloud computing vulnerability; cloud security vulnerability issues; distributed denial of service; security threat; Availability; Cloud computing; Computational modeling; Computer crime; Organizations; Servers; Cloud; DDoS; IDS; Security; Vulnerability (ID#:14-2681)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6822215&isnumber=6822158
- Datta, E.; Goyal, N., "Security Attack Mitigation Framework For The Cloud," Reliability and Maintainability Symposium (RAMS), 2014 Annual, pp.1,6, 27-30 Jan. 2014. doi: 10.1109/RAMS.2014.6798457 Cloud computing brings in a lot of advantages for enterprise IT infrastructure; virtualization technology, which is the backbone of the cloud, provides easy consolidation of resources, reduction of cost, space and management efforts. However, security of critical and private data is a major concern which still keeps back a lot of customers from switching over from their traditional in-house IT infrastructure to a cloud service. Existence of techniques to physically locate a virtual machine in the cloud, proliferation of software vulnerability exploits and cross-channel attacks in-between virtual machines all together increase the risk of business data leaks and privacy losses. This work proposes a framework to mitigate such risks and engineer customer trust towards enterprise cloud computing. Every day, new vulnerabilities are being discovered even in well-engineered software products and the hacking techniques are getting sophisticated over time. In this scenario, an absolute guarantee of security in an enterprise wide information processing system seems a remote possibility; software systems in the cloud are vulnerable to security attacks. A practical solution for the security problems lies in a well-engineered attack mitigation plan. On the positive side, cloud computing has a collective infrastructure which can be effectively used to mitigate the attacks if an appropriate defense framework is in place. We propose such an attack mitigation framework for the cloud. Software vulnerabilities in the cloud have different severities and different impacts on the security parameters (confidentiality, integrity, and availability). By using a Markov model, we continuously monitor and quantify the risk of compromise in different security parameters (e.g., change in the potential to compromise data confidentiality). Whenever there is a significant change in risk, our framework would facilitate the tenants to calculate the Mean Time to Security Failure (MTTSF) of the cloud and allow them to adopt a dynamic mitigation plan. This framework is an add-on security layer in the cloud resource manager and it could improve customer trust in enterprise cloud solutions.
Keywords: Markov processes; cloud computing; security of data; virtualisation; MTTSF cloud; Markov model; attack mitigation plan; availability parameter; business data leaks; cloud resource manager; cloud service; confidentiality parameter; cross-channel attacks; customer trust; enterprise IT infrastructure; enterprise cloud computing; enterprise cloud solutions; enterprise wide information processing system; hacking techniques; information technology; integrity parameter; mean time to security failure; privacy losses; private data security; resource consolidation; security attack mitigation framework; security guarantee; software products; software vulnerabilities; software vulnerability exploits; virtual machine; virtualization technology; Cloud computing; Companies; Security; Silicon; virtual machining; Attack Graphs; Cloud computing; Markov Chain; Security; Security Administration (ID#:14-2682)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798457&isnumber=6798433
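The MTTSF computation the paper alludes to is the standard mean time to absorption of an absorbing Markov chain. A small sketch, with invented transient states and transition probabilities (the paper's actual state model and rates are not reproduced here):

    # Mean time to absorption via the fundamental matrix N = (I - Q)^-1.
    # States and probabilities below are invented for illustration.
    import numpy as np

    # Transient states: 0 = healthy, 1 = vulnerability exposed,
    # 2 = partially compromised; absorbing state = security failure.
    Q = np.array([
        [0.90, 0.08, 0.02],   # healthy -> {healthy, exposed, partial}
        [0.30, 0.50, 0.15],   # exposed may be patched back to healthy
        [0.05, 0.10, 0.60],   # remaining mass goes to failure
    ])

    N = np.linalg.inv(np.eye(3) - Q)      # fundamental matrix
    mttsf = N @ np.ones(3)                # expected steps to absorption
    print(f"expected steps to failure from 'healthy': {mttsf[0]:.1f}")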
- Dinadayalan, P.; Jegadeeswari, S.; Gnanambigai, D., "Data Security Issues in Cloud Environment and Solutions," Computing and Communication Technologies (WCCCT), 2014 World Congress on, pp.88, 91, Feb. 27 2014-March 1 2014. doi: 10.1109/WCCCT.2014.63 Cloud computing is an internet based model that enables convenient, on demand and pay per use access to a pool of shared resources. It is a new technology that satisfies a user's requirement for computing resources like networks, storage, servers, services and applications. Data security is one of the leading concerns and primary challenges for cloud computing. This issue is getting more serious with the development of cloud computing. From the consumers' perspective, cloud computing security concerns, especially data security and privacy protection issues, remain the primary inhibitor for adoption of cloud computing services. This paper analyses the basic problem of cloud computing and describes the data security and privacy protection issues in the cloud.
Keywords: cloud computing; data privacy; security of data; Internet based model; cloud computing security concerns; cloud computing services; cloud environment; computing resources; data security issues; networks; pay per use access; privacy protection issues; servers; shared resources; Cloud computing; Computers; Data privacy; Data security; Organizations; Privacy; Cloud Computing; Cloud Computing Security; Data Security; Privacy protection (ID#:14-2683)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755112&isnumber=6755083
- Pawar, Y.; Rewagad, P.; Lodha, N., "Comparative Analysis of PAVD Security System with Security Mechanism of Different Cloud Storage Services," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp.611,614, 7-9 April 2014. doi: 10.1109/CSNT.2014.128 Cloud computing, being in its infancy as a field of research, has attracted many research communities in the last few years. Significant investment has been made in cloud-based research by multinational companies like Amazon and IBM and by different R&D organizations. In spite of this, the number of stakeholders actually using cloud services is limited. The main hindrances to the wide adoption of cloud technology are the feeling of insecurity regarding storage of data in the cloud and the absence of reliable and comprehensive access control mechanisms. To overcome this, cloud service providers have employed different security mechanisms to protect the confidentiality and integrity of data in the cloud. We have used the PAVD security system to protect the confidentiality and integrity of data stored in the cloud. PAVD is an acronym for Privacy, Authentication and Verification of Data. We have statistically analyzed the performance of the PAVD security system over different sizes of data files. This paper compares the performance of different cloud storage services, like Dropbox and SkyDrive, with our PAVD security system with respect to uploading and downloading time.
Keywords: authorisation; cloud computing; data privacy; storage management; Drop Box; MNC; PAVD security system; Sky Drive; access control mechanism; cloud based research; cloud computing; cloud storage services; cloud technology; data storage; multinational companies; privacy authentication and verification of data; security mechanism; statistical analysis; Cloud computing; Digital signatures; Encryption; Servers; Cloud Computing; Data confidentiality; Performance Analysis; Security Issues; Stakeholders (ID#:14-2684)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821470&isnumber=6821334
- Honggang Wang; Shaoen Wu; Min Chen; Wei Wang, "Security Protection Between Users And The Mobile Media Cloud," Communications Magazine, IEEE, vol.52, no.3, pp.73, 79, March 2014. doi: 10.1109/MCOM.2014.6766088 Mobile devices such as smartphones are widely deployed in the world, and many people use them to download/upload media such as video and pictures to remote servers. On the other hand, a mobile device has limited resources, and some media processing tasks must be migrated to the media cloud for further processing. However, a significant question is, can mobile users trust the media services provided by the media cloud service providers? Many traditional security approaches are proposed to secure the data exchange between mobile users and the media cloud. However, first, because multimedia such as video is large-sized data, and mobile devices have limited capability to process media data, it is important to design a lightweight security method; second, uploading and downloading multi-resolution images/videos make it difficult for the traditional security methods to ensure security for users of the media cloud. Third, the error-prone wireless environment can cause failure of security protection such as authentication. To address the above challenges, in this article, we propose to use both secure sharing and watermarking schemes to protect user's data in the media cloud. The secure sharing scheme allows users to upload multiple data pieces to different clouds, making it impossible to derive the whole information from any one cloud. In addition, the proposed scalable watermarking algorithm can be used for authentications between personal mobile users and the media cloud. Furthermore, we introduce a new solution to resist multimedia transmission errors through a joint design of watermarking and Reed-Solomon codes. Our studies show that the proposed approach not only achieves good security performance, but also can enhance media quality and reduce transmission overhead.
Keywords: Reed-Solomon codes; cloud computing; security of data; smart phones; video watermarking; Reed-Solomon codes; authentication; data exchange; error-prone wireless environment; large-sized data; lightweight security; media cloud service providers; media processing tasks; mobile devices; mobile media cloud; multimedia transmission errors; multiple data pieces; multiresolution images-videos; personal mobile users; secure sharing; security protection; smartphones; transmission overhead; watermarking; Cloud computing; Cryptography; Handheld devices; Media; Mobile communication; Multimedia communication; Network security; Watermarking (ID#:14-2685)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6766088&isnumber=6766068
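The core property of the secure sharing scheme above, that no single cloud can reconstruct the data, can be conveyed with plain XOR-based n-of-n splitting. This is only a sketch of the idea; the paper's sharing and watermarking constructions are more elaborate.

    # XOR n-of-n secret splitting: each cloud stores one share, and any
    # single share alone is uniformly random noise.
    import secrets

    def split(data: bytes, n: int) -> list[bytes]:
        """Produce n shares; all n are required to reconstruct."""
        shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
        final = bytearray(data)
        for s in shares:
            for i, byte in enumerate(s):
                final[i] ^= byte
        return shares + [bytes(final)]

    def combine(shares: list[bytes]) -> bytes:
        out = bytearray(len(shares[0]))
        for s in shares:
            for i, byte in enumerate(s):
                out[i] ^= byte
        return bytes(out)

    media = b"frame-0001 of a user video"
    clouds = split(media, 3)          # upload each share to a different cloud
    assert combine(clouds) == media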
- Sugumaran, M.; Murugan, B.B.; Kamalraj, D., "An Architecture for Data Security in Cloud Computing," Computing and Communication Technologies (WCCCT), 2014 World Congress on, pp.252,255, Feb. 27 2014-March 1 2014. doi: 10.1109/WCCCT.2014.53 Cloud computing is a more flexible, cost effective and proven delivery platform for providing business or consumer services over the Internet. Cloud computing supports distributed service oriented architecture and multi-user, multi-domain administrative infrastructure. It is therefore more prone to security threats and vulnerabilities. At present, a major concern in cloud adoption is its security and privacy. Security and privacy issues are of great concern to cloud service providers who are actually hosting the services. In most cases, the provider must guarantee that their infrastructure is secure and clients' data and applications are safe by implementing security policies and mechanisms. The security issues are organized into several general categories: trust, identity management, software isolation, data protection, availability, reliability, ownership, data backup, data portability and conversion, multi-platform support and intellectual property. In this paper, we discuss some of the techniques that have been implemented to protect data and propose an architecture for protecting data in the cloud. This architecture stores data in the cloud in encrypted format using a cryptographic technique based on a block cipher.
Keywords: cloud computing; cryptography; data protection; electronic data interchange; industrial property; safety-critical software; service-oriented architecture; block cipher; business services; client data; cloud computing; cloud service providers; consumer services; cryptography technique; data availability; data backup; data conversion; data ownership; data portability; data privacy; data protection; data reliability; data security architecture; data storage; distributed service-oriented architecture; encrypted data format; identity management; intellectual property; multiuser multidomain administrative infrastructure; software isolation; trust factor; Ciphers; Cloud computing; Computer architecture; Encryption; Cloud computing; data privacy; data security; symmetric cryptography; virtualization (ID#:14-2686)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6755152&isnumber=6755083
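For the encrypt-before-upload step that such an architecture rests on, a minimal example with an authenticated block cipher (AES in GCM mode, via the third-party Python 'cryptography' package) is given below; this is a generic illustration, not the paper's specific design.

    # Encrypting a record with AES-GCM before cloud upload; the key stays
    # with the data owner, never with the cloud provider.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aesgcm = AESGCM(key)

    record = b"customer ledger, Q1"
    nonce = os.urandom(12)                       # unique per encryption
    blob = nonce + aesgcm.encrypt(nonce, record, b"bucket=finance")

    # Later, after downloading 'blob' from the cloud:
    assert aesgcm.decrypt(blob[:12], blob[12:], b"bucket=finance") == record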
- De, S.J.; Pal, AK., "A Policy-Based Security Framework for Storage and Computation on Enterprise Data in the Cloud," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp.4986,4997, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.613 A whole range of security concerns that can act as barriers to the adoption of cloud computing have been identified by researchers over the last few years. While outsourcing its business-critical data and computations to the cloud, an enterprise loses control over them. How should the organization decide what security measures to apply to protect its data and computations that have different security requirements from a Cloud Service Provider (CSP) with an unknown level of corruption? The answer to this question relies on the organization's perception about the CSP's trustworthiness and the security requirements of its data. This paper proposes a decentralized, dynamic and evolving policy-based security framework that helps an organization to derive such perceptions from knowledgeable and trusted employee roles and based on that, choose the most relevant security policy specifying the security measures necessary for outsourcing data and computations to the cloud. The organizational perception is built through direct user participation and is allowed to evolve over time.
Keywords: business data processing; cloud computing; data protection; outsourcing; security of data; trusted computing; CSPs trustworthiness; cloud computing; cloud service provider; data outsourcing; data protection; data security requirements; decentralized security framework; dynamic security framework; enterprise data computation; enterprise data storage; policy-based security framework; Cloud computing; Computational modeling; Data security; Organizations; Outsourcing; Secure storage; Cloud Computing; Data and Computation Outsourcing; Security (ID#:14-2687)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759216&isnumber=6758592
- Hassan, S.; Abbas Kamboh, A; Azam, F., "Analysis of Cloud Computing Performance, Scalability, Availability, & Security," Information Science and Applications (ICISA), 2014 International Conference on, pp.1, 5, 6-9 May 2014. doi: 10.1109/ICISA.2014.6847363 Cloud computing refers to connecting many computers through a communication channel like the Internet, through which we send, receive and store data. Cloud computing provides an opportunity for parallel computing by using a large number of virtual machines. Nowadays, performance, scalability, availability and security may represent the big risks in cloud computing. In this paper we highlight the issues of security, availability and scalability, and we identify how to make cloud computing based infrastructure more secure and more available. We also highlight the elastic behavior of cloud computing, and some of the characteristics involved in attaining high performance of cloud computing are also discussed.
Keywords: cloud computing; parallel processing; security of data; virtual machines; Internet; cloud computing; parallel computing; scalability; security; virtual machine; Availability; Cloud computing; Computer hacking; Scalability (ID#:14-2688)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847363&isnumber=6847317
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Coding Theory
Coding theory is one of the essential pieces of information theory. More importantly, coding theory is a core element in cryptography. The research work cited here looks at signal processing, crowdsourcing, matroid theory, WOM codes, and the NP-hard problem. These works were presented or published between January and August of 2014.
- Vempaty, A; Han, Y.S.; Varshney, L.R.; Varshney, P.K., "Coding Theory For Reliable Signal Processing," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp.200,205, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785331 With increased dependence on technology in daily life, there is a need to ensure its reliable performance. There are many applications where we carry out inference tasks assisted by signal processing systems. A typical system performing an inference task can fail due to multiple reasons: presence of a component with permanent failure, a malicious component providing corrupt information, or there might simply be an unreliable component which randomly provides faulty data. Therefore, it is important to design systems which perform reliably even in the presence of such unreliable components. Coding theory based techniques provide a possible solution to this problem. In this position paper, we survey some of our recent work on the use of coding theory based techniques for the design of some signal processing applications. As examples, we consider distributed classification and target localization in wireless sensor networks. We also consider the more recent paradigm of crowdsourcing and discuss how coding based techniques can be used to mitigate the effect of unreliable crowd workers in the system.
Keywords: error correction codes; signal processing; telecommunication network reliability; wireless sensor networks; coding theory; crowdsourcing; distributed inference; malicious component; permanent failure; reliable signal processing; wireless sensor network; Encoding; Maximum likelihood estimation; Reliability theory; Sensors; Wireless sensor networks; Coding theory; Crowdsourcing; Distributed Inference; Reliability; Wireless Sensor Networks (ID#:14-2689)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785331&isnumber=6785290
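The flavor of the coding-based design surveyed above can be seen in a toy distributed-classification example: each sensor reports one bit determined by its column of a code matrix, and the fusion center decodes by minimum Hamming distance, which tolerates a few faulty reports. The 4-class, 7-sensor code matrix below is invented for illustration, not taken from the paper.

    # Fusion by minimum Hamming distance over an (invented) code matrix.
    CODE = {                      # class -> 7-bit codeword (one bit per sensor)
        0: [0, 0, 0, 0, 0, 0, 0],
        1: [1, 1, 1, 0, 0, 0, 1],
        2: [0, 0, 1, 1, 1, 0, 1],
        3: [1, 1, 0, 1, 1, 1, 0],
    }

    def fuse(received_bits):
        """Fusion center: pick the class whose codeword is nearest."""
        def dist(cw):
            return sum(a != b for a, b in zip(cw, received_bits))
        return min(CODE, key=lambda c: dist(CODE[c]))

    bits = CODE[2][:]
    bits[4] ^= 1                  # one faulty/Byzantine sensor flips its bit
    print(fuse(bits))             # still decodes class 2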
- Vempaty, A; Varshney, L.R.; Varshney, P.K., "Reliable Crowdsourcing for Multi-Class Labeling Using Coding Theory," Selected Topics in Signal Processing, IEEE Journal of, vol.8, no.4, pp.667,679, Aug. 2014. doi: 10.1109/JSTSP.2014.2316116 Crowdsourcing systems often have crowd workers that perform unreliable work on the task they are assigned. In this paper, we propose the use of error-control codes and decoding algorithms to design crowdsourcing systems for reliable classification despite unreliable crowd workers. Coding theory based techniques also allow us to pose easy-to-answer binary questions to the crowd workers. We consider three different crowdsourcing models: systems with independent crowd workers, systems with peer-dependent reward schemes, and systems where workers have common sources of information. For each of these models, we analyze classification performance with the proposed coding-based scheme. We develop an ordering principle for the quality of crowds and describe how system performance changes with the quality of the crowd. We also show that pairing among workers and diversification of the questions help in improving system performance. We demonstrate the effectiveness of the proposed coding-based scheme using both simulated data and real datasets from Amazon Mechanical Turk, a crowdsourcing microtask platform. Results suggest that use of good codes may improve the performance of the crowdsourcing task over typical majority-voting approaches.
Keywords: decoding; error correction codes; pattern classification; Amazon Mechanical Turk; classification performance analysis; coding theory based techniques; crowdsourcing microtask platform; decoding algorithms; error-control codes; independent crowd workers; majority-voting approaches; multiclass labeling; peer-dependent reward schemes; reliable crowdsourcing system; Algorithm design and analysis; Decoding; Hamming distance; Noise; Reliability; Sensors; Vectors; Crowdsourcing; error-control codes; multi-class labeling; quality assurance (ID#:14-2690)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6784318&isnumber=6856242
- Guangfu Wu; Lin Wang; Trieu-Kien Truong, "Use of Matroid Theory To Construct A Class Of Good Binary Linear Codes," Communications, IET, vol.8, no.6, pp.893, 898, April 17 2014. doi: 10.1049/iet-com.2013.0671 It is still an open challenge in coding theory how to design a systematic linear (n, k)-code C over GF(2) with maximal minimum distance d. In this study, based on matroid theory (MT), a limited class of good systematic binary linear codes (n, k, d) is constructed, where n = 2^(k-1) + ... + 2^(k-d) and d = 2^(k-2) + ... + 2^(k-d-1) for k ≥ 4, 1 ≤ d < k. These codes are well known as special cases of codes constructed by Solomon and Stiffler (SS) back in the 1960s. Furthermore, a new shortening method is presented. By shortening the optimal codes, we can design new kinds of good systematic binary linear codes with parameters n = 2^(k-1) + ... + 2^(k-d) - 3u and d = 2^(k-2) + ... + 2^(k-d-1) - 2u for 2 ≤ u ≤ 4, 2 ≤ d < k. The advantage of MT over the original SS construction is that it yields a generator matrix in systematic form. In addition, the dual code C with relatively high rate and optimal minimum distance can be obtained easily in this study.
Keywords: binary codes; combinatorial mathematics; linear codes; matrix algebra; SS construction; Solomon-Stiffler code construction; coding theory; dual code; generator matrix; matroid theory; maximal minimum distance; optimal codes; shortening method; systematic binary linear codes (ID#:14-2691)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798003&isnumber=6797989
- Xunrui Yin; Zongpeng Li; Xin Wang, "A Matroid Theory Approach To Multicast Network Coding," INFOCOM, 2014 Proceedings IEEE, pp.646,654, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6847990 Network coding encourages the mixing of information flows at intermediate nodes of a network for enhanced network capacity, especially for one-to-many multicast applications. A fundamental problem in multicast network coding is to construct a feasible solution such that encoding and decoding are performed over a finite field of size as small as possible. Coding operations over very small finite fields (e.g., F2) enable low computational complexity in theory and ease of implementation in practice. In this work, we propose a new approach based on matroid theory to study multicast network coding and its minimum field size requirements. Applying this new approach that translates multicast networks into matroids, we derive the first upper-bounds on the field size requirement based on the number of relay nodes in the network, and make new progresses along the direction of proving that coding over very small fields (F2 and F3) suffices for multicast network coding in planar networks.
Keywords: combinatorial mathematics; matrix algebra; multicast communication; network coding; coding operations; decoding; encoding; enhanced network capacity; information flows; intermediate nodes; matroid theory; minimum field size requirements; multicast network coding; multicast networks; one-to-many multicast applications; planar networks; relay nodes; Encoding; Multicast communication; Network coding; Receivers; Relays; Throughput; Vectors (ID#:14-2692)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6847990&isnumber=6847911
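The classic butterfly network shows why coding over the smallest field F2, i.e., plain XOR, can already be enough in the textbook example: the bottleneck node forwards the XOR of the two flows and both receivers recover both source bits. A tiny demonstration:

    # Butterfly network multicast with XOR coding at the bottleneck.
    a, b = 1, 0                  # two source bits multicast to two receivers

    coded = a ^ b                # bottleneck edge carries a XOR b

    recv1 = (a, a ^ coded)       # receiver 1 hears a directly, plus the XOR
    recv2 = (coded ^ b, b)       # receiver 2 hears b directly, plus the XOR
    assert recv1 == (a, b) and recv2 == (a, b)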
- Shanmugam, K.; Dimakis, AG.; Langberg, M., "Graph Theory Versus Minimum Rank For Index Coding," Information Theory (ISIT), 2014 IEEE International Symposium on, pp.291,295, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6874841 We obtain novel index coding schemes and show that they provably outperform all previously known graph theoretic bounds proposed so far. Further, we establish a rather strong negative result: all known graph theoretic bounds are within a logarithmic factor of the chromatic number. This is in striking contrast to minrank, since prior work has shown that it can outperform the chromatic number by a polynomial factor in some cases. The conclusion is that all known graph theoretic bounds are not much stronger than the chromatic number.
Keywords: graph colouring; linear codes; chromatic number; graph theoretic bounds; index coding scheme; logarithmic factor; minimum rank; minrank; polynomial factor; Channel coding; Indexes; Interference; Unicast; Upper bound (ID#:14-2693)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874841&isnumber=6874773
- Chen, H.C.H.; Lee, P.P.C., "Enabling Data Integrity Protection in Regenerating-Coding-Based Cloud Storage: Theory and Implementation," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.2, pp.407,416, Feb. 2014. doi: 10.1109/TPDS.2013.164 To protect outsourced data in cloud storage against corruptions, adding fault tolerance to cloud storage, along with efficient data integrity checking and recovery procedures, becomes critical. Regenerating codes provide fault tolerance by striping data across multiple servers, while using less repair traffic than traditional erasure codes during failure recovery. Therefore, we study the problem of remotely checking the integrity of regenerating-coded data against corruptions under a real-life cloud storage setting. We design and implement a practical data integrity protection (DIP) scheme for a specific regenerating code, while preserving its intrinsic properties of fault tolerance and repair-traffic saving. Our DIP scheme is designed under a mobile Byzantine adversarial model, and enables a client to feasibly verify the integrity of random subsets of outsourced data against general or malicious corruptions. It works under the simple assumption of thin-cloud storage and allows different parameters to be fine-tuned for a performance-security trade-off. We implement and evaluate the overhead of our DIP scheme in a real cloud storage testbed under different parameter choices. We further analyze the security strengths of our DIP scheme via mathematical models. We demonstrate that remote integrity checking can be feasibly integrated into regenerating codes in practical deployment.
Keywords: cloud computing; data integrity; data protection; DIP scheme; data integrity protection; fault tolerance; mobile Byzantine adversarial model; performance-security trade-off; regenerating-coded data integrity checking; regenerating-coding-based cloud storage; remote integrity checking; repair-traffic saving; thin-cloud storage; experimentation; implementation; remote data checking; secure and trusted storage systems (ID#:14-2694)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6547608&isnumber=6689796
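The sampling argument behind remote integrity spot-checking can be sketched with nothing but hashes: the client retains digests of a few random blocks and later challenges the cloud on a random subset. The real DIP scheme uses keyed checks over regenerating-coded data under an adversarial model; this stdlib version only conveys the probabilistic-detection idea.

    # Spot-checking outsourced blocks against client-held digests.
    import hashlib, random

    blocks = [f"block-{i}".encode() for i in range(1000)]       # outsourced data
    kept = {i: hashlib.sha256(blocks[i]).hexdigest()            # client state
            for i in random.sample(range(1000), 50)}

    def challenge(cloud_blocks, indices):
        """True if every challenged block matches its stored digest."""
        return all(hashlib.sha256(cloud_blocks[i]).hexdigest() == kept[i]
                   for i in indices)

    cloud = list(blocks)
    cloud[123] = b"silently corrupted"
    sample = random.sample(list(kept), 10)
    # Detection is probabilistic: the audit fails only if a corrupted,
    # digest-covered block happens to be challenged.
    print("audit passed:", challenge(cloud, sample))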
- Gomez, Arley; Mejia, Carolina; Montoya, J.Andres, "Linear Network Coding And The Model Theory Of Linear Rank Inequalities," Network Coding (NetCod), 2014 International Symposium on, pp.1,5, 27-28 June 2014. doi: 10.1109/NETCOD.2014.6892128 Let n ≥ 4. Can the entropic region of order n be defined by a finite list of polynomial inequalities? This question was first asked by Chan and Grant. We showed, in a companion paper, that if it were the case one could solve many algorithmic problems coming from network coding, index coding and secret sharing. Unfortunately, it seems that the entropic regions are not semialgebraic. Are the Ingleton regions semialgebraic sets? We provide some evidence showing that the Ingleton regions are semialgebraic. Furthermore, we show that if the Ingleton regions are semialgebraic, then one can solve many algorithmic problems coming from Linear Network Coding.
Keywords: Electronic mail; Encoding; Indexes; Network coding; Polynomials; Random variables; Vectors (ID#:14-2695)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6892128&isnumber=6892118
- Xenoulis, K., "List Permutation Invariant Linear Codes: Theory and Applications," Information Theory, IEEE Transactions on, vol.60, no.9, pp.5263, 5282, Sept. 2014. doi: 10.1109/TIT.2014.2333000 The class of q-ary list permutation invariant linear codes is introduced in this paper along with probabilistic arguments that validate their existence when certain conditions are met. The specific class of codes is characterized by an upper bound that is tighter than the generalized Shulman-Feder bound and relies on the distance of the codes' weight distribution to the binomial (multinomial, respectively) one. The bound applies to cases where a code from the proposed class is transmitted over a q-ary output symmetric discrete memoryless channel and list decoding with fixed list size is performed at the output. In the binary case, the new upper bounding technique allows the discovery of list permutation invariant codes whose upper bound coincides with sphere-packing exponent. Furthermore, the proposed technique motivates the introduction of a new class of upper bounds for general q-ary linear codes whose members are at least as tight as the DS2 bound as well as all its variations for the discrete channels treated in this paper.
Keywords: channel coding; decoding; linear codes; memoryless systems; generalized Shulman-Feder bound; list decoding; list permutation invariant linear codes; symmetric discrete memoryless channel; Hamming distance; Hamming weight; Linear codes; Maximum likelihood decoding; Vectors; Discrete symmetric channels; double exponential function; list decoding; permutation invariance; reliability function (ID#:14-2696)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843999&isnumber=6878505
- Micciancio, D., "Locally Dense Codes," Computational Complexity (CCC), 2014 IEEE 29th Conference on, vol., no., pp.90,97, 11-13 June 2014. doi: 10.1109/CCC.2014.17 The Minimum Distance Problem (MDP), i.e., the computational task of evaluating (exactly or approximately) the minimum distance of a linear code, is a well known NP-hard problem in coding theory. A key element in essentially all known proofs that MDP is NP-hard is the construction of a combinatorial object that we may call a locally dense code. This is a linear code with large minimum distance d that admits a ball of smaller radius r < d containing an exponential number of codewords, together with some auxiliary information used to map these codewords. In this paper we provide a generic method to explicitly construct locally dense binary codes, starting from an arbitrary linear code with sufficiently large minimum distance. Instantiating our construction with well known linear codes (e.g., Reed-Solomon codes concatenated with Hadamard codes) yields a simple proof that MDP is NP-hard to approximate within any constant factor under deterministic polynomial time reductions, simplifying and explaining recent results of Cheng and Wan (STOC 2009 / IEEE Trans. Inf. Theory, 2012) and Austrin and Khot (ICALP 2011). Our work is motivated by the construction of analogous combinatorial objects over integer lattices, which are used in NP-hardness proofs for the Shortest Vector Problem (SVP). We show that for the max norm, locally dense lattices can also be easily constructed. However, all currently known constructions of locally dense lattices in the standard Euclidean norm are probabilistic. Finding a deterministic construction of locally dense Euclidean lattices, analogous to the results presented in this paper, would prove the NP-hardness of approximating SVP under deterministic polynomial time reductions, a long standing open problem in the computational complexity of integer lattices.
Keywords: binary codes; combinatorial mathematics; computational complexity; linear codes; MDP; NP-hard problem; SVP; arbitrary linear code; codewords; coding theory; combinatorial object construction; computational complexity; deterministic polynomial time reductions; integer lattices; locally dense Euclidean lattices; locally dense binary codes; locally dense lattices; max norm; minimum distance problem; shortest vector problem; standard Euclidean norm; Binary codes; Lattices; Linear codes; Polynomials; Reed-Solomon codes; Symmetric matrices; Vectors; NP-hardness; coding theory; derandomization; lattices; minimum distance problem; shortest vector problem (ID#:14-2697)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6875478&isnumber=6875460
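The computational task in question, evaluating minimum distance, is easy to state and expensive to carry out: the naive method enumerates all 2^k nonzero codewords. The sketch below does exactly that for the [7,4] Hamming code (minimum distance 3), chosen as a standard small example.

    # Brute-force minimum distance of a binary linear code from its
    # generator matrix G (here: the [7,4] Hamming code, d = 3).
    from itertools import product

    G = [
        [1, 0, 0, 0, 0, 1, 1],
        [0, 1, 0, 0, 1, 0, 1],
        [0, 0, 1, 0, 1, 1, 0],
        [0, 0, 0, 1, 1, 1, 1],
    ]

    def min_distance(G):
        k = len(G)
        best = len(G[0])
        for msg in product([0, 1], repeat=k):     # 2^k messages
            if not any(msg):
                continue
            cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
            best = min(best, sum(cw))
        return best

    print(min_distance(G))   # 3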
- Paajanen, P., "Finite p-Groups, Entropy Vectors, and the Ingleton Inequality for Nilpotent Groups," Information Theory, IEEE Transactions on, vol.60, no.7, pp.3821, 3824, July 2014. doi: 10.1109/TIT.2014.2321561 In this paper, we study the capacity/entropy region of finite, directed, acyclic, multiple-sources, and multiple-sinks network by means of group theory and entropy vectors coming from groups. There is a one-to-one correspondence between the entropy vector of a collection of n random variables and a certain group-characterizable vector obtained from a finite group and n of its subgroups. We are looking at nilpotent group characterizable entropy vectors and show that they are all also abelian group characterizable, and hence they satisfy the Ingleton inequality. It is known that not all entropic vectors can be obtained from abelian groups, so our result implies that to get more exotic entropic vectors, one has to go at least to soluble groups or larger nilpotency classes. The result also implies that Ingleton inequality is satisfied by nilpotent groups of bounded class, depending on the order of the group.
Keywords: entropy; group theory; network coding; Ingleton inequality; abelian group; capacity-entropy region; finite p-groups; group theory; group-characterizable vector; multiple-sinks network; network coding theory; nilpotent group characterizable entropy vectors; Channel coding; Entropy; Indexes; Lattices; Random variables; Structural rings; Vectors; Non-Shannon type inequalities; entropy regions; network coding theory; nilpotent groups; p-groups (ID#:14-2698)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6809978&isnumber=6832684
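Since the abstract above invokes the Ingleton inequality by name without stating it, its standard entropic form for four jointly distributed random variables is recorded here for reference:

    % Entropic Ingleton inequality (satisfied by all abelian-group-
    % characterizable entropy vectors, per the entry above)
    \[
      I(X_1;X_2) \le I(X_1;X_2 \mid X_3) + I(X_1;X_2 \mid X_4) + I(X_3;X_4)
    \]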
- Guruswami, V.; Narayanan, S., "Combinatorial Limitations of Average-Radius List-Decoding," Information Theory, IEEE Transactions on, vol.60, no.10, pp. 5827, 5842, Oct. 2014. doi: 10.1109/TIT.2014.2343224 We study certain combinatorial aspects of list-decoding, motivated by the exponential gap between the known upper bound of O(1/γ) and lower bound of Ω_p(log(1/γ)) for the list size needed to list decode up to error fraction p with rate γ away from capacity, i.e., rate 1 - h(p) - γ [here p ∈ (0, 1/2) and γ > 0]. Our main result is that in any binary code C ⊆ {0,1}^n of rate 1 - h(p) - γ, there must exist a set L ⊂ C of Ω_p(1/√γ) codewords such that the average distance of the points in L from their centroid is at most pn. In other words, there must exist Ω_p(1/√γ) codewords with low average radius. The standard notion of list decoding corresponds to working with the maximum distance of a collection of codewords from a center instead of average distance. The average radius form is in itself quite natural; for instance, the classical Johnson bound in fact implies average-radius list-decodability. The remaining results concern the standard notion of list-decoding, and help clarify the current state of affairs regarding combinatorial bounds for list-decoding as follows. First, we give a short simple proof, over all fixed alphabets, of the above-mentioned Ω_p(log(1/γ)) lower bound. Earlier, this bound followed from a complicated, more general result of Blinovsky. Second, we show that one cannot improve the Ω_p(log(1/γ)) lower bound via techniques based on identifying the zero-rate regime for list-decoding of constant-weight codes [this is a typical approach for negative results in coding theory, including the Ω_p(log(1/γ)) list-size lower bound]. On a positive note, our Ω_p(1/√γ) lower bound for average-radius list-decoding circumvents this barrier. Third, we exhibit a reverse connection between the existence of constant-weight and general codes for list-decoding, showing that the best possible list-size, as a function of the gap γ of the rate to the capacity limit, is the same up to constant factors for both constant-weight codes (with weight bounded away from p) and general codes. Fourth, we give simple second moment-based proofs that w.h.p. a list-size of Ω_p(1/γ) is needed for list-decoding random codes from errors as well as erasures.
Keywords: Binary codes; Decoding; Entropy; Hamming distance; Standards; Upper bound; Combinatorial coding theory; linear codes; list error-correction; probabilistic method; random coding (ID#:14-2699)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866234&isnumber=6895347
- Shpilka, A, "Capacity-Achieving Multiwrite WOM Codes," Information Theory, IEEE Transactions on, vol.60, no.3, pp.1481,1487, March 2014. doi: 10.1109/TIT.2013.2294464 In this paper, we give an explicit construction of a family of capacity-achieving binary t-write WOM codes for any number of writes t, which have polynomial time encoding and decoding algorithms. The block length of our construction is N = (t/ε)^(O(t/(δε))) when ε is the gap to capacity, and encoding and decoding run in time N^(1+δ). This is the first deterministic construction achieving these parameters. Our techniques also apply to larger alphabets.
Keywords: codes; decoding; alphabets; capacity-achieving binary t-write WOM codes; capacity-achieving multiwrite WOM codes; decoding algorithms; polynomial time encoding; Decoding; Encoding; Force; Indexes; Polynomials; Vectors; Writing; Coding theory; WOM-codes; flash memories; hash-functions; write-once memories (ID#:14-2700)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6680743&isnumber=6739111
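The simplest concrete WOM code, the classic Rivest-Shamir construction, stores 2 bits twice in 3 write-once cells and makes the trade-off tangible; the capacity-achieving constructions in the entry above generalize far beyond this toy. A sketch:

    # Rivest-Shamir <2,2> WOM code: 2 bits, written twice, in 3 cells
    # (rate 4/3, since 4 data bits pass through 3 write-once cells).
    FIRST  = {(0,0): (0,0,0), (0,1): (1,0,0), (1,0): (0,1,0), (1,1): (0,0,1)}
    SECOND = {(0,0): (1,1,1), (0,1): (0,1,1), (1,0): (1,0,1), (1,1): (1,1,0)}

    def write(cells, data):
        """Cells may only be raised; pick the generation that fits."""
        for table in (FIRST, SECOND):
            target = table[data]
            if all(c <= t for c, t in zip(cells, target)):
                return target
        raise ValueError("no third write possible")

    def read(cells):
        table = FIRST if sum(cells) <= 1 else SECOND
        return next(d for d, cw in table.items() if cw == cells)

    state = write((0, 0, 0), (1, 0))    # first write stores 10
    state = write(state, (0, 1))        # second write stores 01, no erasure
    print(read(state))                  # (0, 1)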
- Bitouze, N.; Amat, AG.I; Rosnes, E., "Using Short Synchronous WOM Codes to Make WOM Codes Decodable," Communications, IEEE Transactions on, vol.62, no.7, pp.2156, 2169, July 2014. doi: 10.1109/TCOMM.2014.2323308 In the framework of write-once memory (WOM) codes, it is important to distinguish between codes that can be decoded directly and those that require the decoder to know the current generation so as to successfully decode the state of the memory. A widely used approach to constructing WOM codes is to design first nondecodable codes that approach the boundaries of the capacity region and then make them decodable by appending additional cells that store the current generation, at an expense of rate loss. In this paper, we propose an alternative method to making nondecodable WOM codes decodable by appending cells that also store some additional data. The key idea is to append to the original (nondecodable) code a short synchronous WOM code and write generations of the original code and the synchronous code simultaneously. We consider both the binary and the nonbinary case. Furthermore, we propose a construction of synchronous WOM codes, which are then used to make nondecodable codes decodable. For short-to-moderate block lengths, the proposed method significantly reduces the rate loss as compared to the standard method.
Keywords: decoding; WOM codes decodable; capacity region; current generation; nondecodable codes; short synchronous WOM codes; short-to-moderate block lengths; standard method; write generations; write once memory; Binary codes; Decoding; Encoding; Solids; Standards; Synchronization; Vectors; Coding theory; Flash memories; decodable codes; synchronous write-once memory (WOM) codes (ID#:14-2701)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815644&isnumber=6860331
- Papailiopoulos, D.S.; Dimakis, AG., "Locally Repairable Codes," Information Theory, IEEE Transactions on, vol.60, no.10, pp.5843,5855, Oct. 2014. doi: 10.1109/TIT.2014.2325570 Distributed storage systems for large-scale applications typically use replication for reliability. Recently, erasure codes were used to reduce the large storage overhead, while increasing data reliability. A main limitation of off-the-shelf erasure codes is their high-repair cost during single node failure events. A major open problem in this area has been the design of codes that: 1) are repair efficient and 2) achieve arbitrarily high data rates. In this paper, we explore the repair metric of locality, which corresponds to the number of disk accesses required during a single node repair. Under this metric, we characterize an information theoretic tradeoff that binds together the locality, code distance, and storage capacity of each node. We show the existence of optimal locally repairable codes (LRCs) that achieve this tradeoff. The achievability proof uses a locality aware flow-graph gadget, which leads to a randomized code construction. Finally, we present an optimal and explicit LRC that achieves arbitrarily high data rates. Our locality optimal construction is based on simple combinations of Reed-Solomon blocks.
Keywords: Encoding; Entropy; Joints; Maintenance engineering; Measurement; Peer-to-peer computing; Vectors; Information theory; coding theory; distributed storage (ID#:14-2702)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6818438&isnumber=6895347
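The locality metric is easy to demonstrate with a toy layout. The following sketch is not the paper's flow-graph construction; it is a minimal two-group XOR code with locality 2, showing why a single-node repair touches only the failed node's local group rather than all data blocks.

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy layout with locality 2: four data blocks d0..d3 in two local groups,
# each protected by a local XOR parity. Repairing any single lost block
# reads only the 2 other blocks in its group, not all 4 data blocks.
GROUPS = [("d0", "d1", "p01"), ("d2", "d3", "p23")]

def encode(d0, d1, d2, d3):
    return {"d0": d0, "d1": d1, "p01": xor(d0, d1),
            "d2": d2, "d3": d3, "p23": xor(d2, d3)}

def repair(storage, lost):
    """Rebuild `lost` from its local group only (2 disk reads, not 4)."""
    group = next(g for g in GROUPS if lost in g)
    survivors = [storage[name] for name in group if name != lost]
    return xor(survivors[0], survivors[1])

blocks = encode(*(secrets.token_bytes(16) for _ in range(4)))
assert repair(blocks, "d2") == blocks["d2"]
assert repair(blocks, "p01") == blocks["p01"]
```

The paper's tradeoff quantifies what this toy code hides: lowering locality this way costs code distance or storage capacity, and the proposed LRCs meet that bound optimally.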
- Yaakobi, E.; Mahdavifar, H.; Siegel, P.H.; Vardy, A; Wolf, J.K., "Rewriting Codes for Flash Memories," Information Theory, IEEE Transactions on, vol.60, no.2, pp.964,975, Feb. 2014. doi: 10.1109/TIT.2013.2290715 Flash memory is a nonvolatile computer memory comprising blocks of cells, wherein each cell can take on q different values or levels. While increasing the cell level is easy, reducing the level of a cell can be accomplished only by erasing an entire block. Since block erasures are highly undesirable, coding schemes-known as floating codes (or flash codes) and buffer codes-have been designed in order to maximize the number of times that information stored in a flash memory can be written (and rewritten) prior to incurring a block erasure. An (n,k,t)q flash code C is a coding scheme for storing k information bits in n cells in such a way that any sequence of up to t writes can be accommodated without a block erasure. The total number of available level transitions in n cells is n(q-1), and the write deficiency of C, defined as d(C)=n(q-1)-t, is a measure of how close the code comes to perfectly utilizing all these transitions. In this paper, we show a construction of flash codes with write deficiency O(qk log k) if q ≥ log2 k, and at most O(k log^2 k) otherwise. An (n,r,l,t)q buffer code is a coding scheme for storing a buffer of r l-ary symbols such that for any sequence of t symbols, it is possible to successfully decode the last r symbols that were written. We improve upon a previous upper bound on the maximum number of writes t in the case where there is a single cell to store the buffer. Then, we show how to improve a construction by Jiang that uses multiple cells, where n ≥ 2r.
Keywords: block codes; flash memories ;random-access storage; block erasures; buffer codes; coding schemes; flash codes; flash memories; floating codes; l-ary symbols; multiple cells; nonvolatile computer memory; rewriting codes; Ash; Buffer storage; Decoding; Encoding; Indexes; Upper bound; Vectors; Buffer codes; coding theory; flash codes; flash memories (ID#:14-2703)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6662417&isnumber=6714461
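The write-deficiency bookkeeping becomes concrete in the degenerate case n = 1, k = 1: store the bit as the parity of the cell level, so each change of value costs one level increment, t = q - 1 writes are guaranteed, and d = n(q-1) - t = 0. This toy code is standard background rather than the paper's construction, which trades deficiency against rate for k > 1.

```python
class ParityFlashCell:
    """Toy (n=1, k=1, t=q-1)_q flash code: the stored bit is the parity of
    the cell level. Writing a new value costs at most one level increment,
    so q - 1 writes always fit before a block erase is required."""

    def __init__(self, q: int):
        self.q = q
        self.level = 0          # a flash cell level can only increase

    def read(self) -> int:
        return self.level % 2

    def write(self, bit: int) -> None:
        if self.read() == bit:
            return              # value unchanged: costs nothing
        if self.level == self.q - 1:
            raise RuntimeError("cell exhausted: block erase required")
        self.level += 1         # flip the parity with one level transition

cell = ParityFlashCell(q=8)
for b in [1, 0, 0, 1, 1, 0]:
    cell.write(b)
    assert cell.read() == b
```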
- Vempaty, A; Han, Y.S.; Varshney, P.K., "Target Localization in Wireless Sensor Networks Using Error Correcting Codes," Information Theory, IEEE Transactions on, vol.60, no.1, pp.697, 712, Jan. 2014. doi: 10.1109/TIT.2013.2289859 In this paper, we consider the task of target localization using quantized data in wireless sensor networks. We propose a computationally efficient localization scheme by modeling it as an iterative classification problem. We design coding theory based iterative approaches for target localization where at every iteration, the fusion center (FC) solves an M-ary hypothesis testing problem and decides the region of interest for the next iteration. The coding theory based iterative approach works well even in the presence of Byzantine (malicious) sensors in the network. We further consider the effect of non-ideal channels. We suggest the use of soft-decision decoding to compensate for the loss due to the presence of fading channels between the local sensors and FC. We evaluate the performance of the proposed schemes in terms of the Byzantine fault tolerance capability and probability of detection of the target region. We also present performance bounds, which help us in designing the system. We provide asymptotic analysis of the proposed schemes and show that the schemes achieve perfect region detection irrespective of the noise variance when the number of sensors tends to infinity. Our numerical results show that the proposed schemes provide a similar performance in terms of mean square error as compared with the traditional maximum likelihood estimation but are computationally much more efficient and are resilient to errors due to Byzantines and non-ideal channels.
Keywords: decoding; error correction codes; fading channels; iterative methods; probability; wireless sensor networks; Byzantine fault tolerance capability; M-ary hypothesis testing problem; asymptotic analysis; coding theory based iterative approaches; efficient localization scheme; error correcting codes; fading channels; fusion center; iterative classification problem; maximum likelihood estimation; mean square error; perfect region detection; probability; soft-decision decoding; target localization; wireless sensor networks; Decoding; Encoding; Fading; Hamming distance; Sensor fusion; Wireless sensor networks; Byzantines; Target localization; error correcting codes; wireless sensor networks (ID#:14-2704)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6657772&isnumber=6690264
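The decoding step at the fusion center reduces to minimum-Hamming-distance decoding of the sensors' binary reports against a region code matrix. The sketch below uses a small hypothetical code matrix rather than the authors' iterative design; with a minimum pairwise row distance of 4, any single Byzantine flip is corrected.

```python
import numpy as np

# Hypothetical code matrix: rows = M candidate regions, cols = N sensors.
# A[j, i] is the decision sensor i should report when the target lies in
# region j. Any matrix with large pairwise Hamming distance between rows
# works the same way; this one has minimum distance 4.
A = np.array([[0, 0, 0, 0, 1, 1, 1, 1],
              [0, 0, 1, 1, 0, 0, 1, 1],
              [0, 1, 0, 1, 0, 1, 0, 1],
              [1, 1, 1, 1, 0, 0, 0, 0]])

def fuse(reports: np.ndarray) -> int:
    """Minimum-Hamming-distance decoding of the binary sensor reports."""
    distances = (A != reports).sum(axis=1)
    return int(distances.argmin())

true_region = 2
reports = A[true_region].copy()
reports[5] ^= 1           # one Byzantine (or faulty) sensor flips its bit
assert fuse(reports) == true_region
```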
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Cognitive Radio Security
If volume is any indication, cognitive radio (CR) is the "hot topic" for research and conferences in 2014. The works cited here come from a global range of conference sources and cover issues including spectrum competition between CR and radar, cooperative jamming, authentication, and trust manipulation, among others. These works were published or presented between January and October, 2014.
- Chauhan, K.K.; Sanger, AK.S., "Survey of Security Threats And Attacks In Cognitive Radio Networks," Electronics and Communication Systems (ICECS), 2014 International Conference on, vol., no., pp.1,5, 13-14 Feb. 2014. doi: 10.1109/ECS.2014.6892537 A number of technologies have been developed in wireless communication, and security is a perennial issue in this field because of its open communication medium. Spectrum allocation is also becoming a major problem in wireless communication due to the paucity of available spectrum. Cognitive radio is one of the most rapidly advancing technologies in wireless communication. Cognitive radio promises to mitigate the spectrum shortage problem by allowing unlicensed users to co-exist with licensed users in a spectrum band and use it for communication while causing no interference to licensed users. Cognitive radio technology intelligently detects vacant channels and allows unlicensed users to use them while avoiding occupied channels, optimizing the use of available spectrum. Initial research in cognitive radios focused on resource allocation, spectrum sensing and management. In parallel, another important issue that has garnered the attention of researchers from academia and industry is security. Security considerations show that the unique characteristics of cognitive radio, such as spectrum sensing and sharing, make it vulnerable to a new class of security threats and attacks. These security threats are a challenge to the deployment of CRNs and to meeting Quality of Service (QoS) requirements. This is a survey paper in which we identify and discuss some of the security threats and attacks in spectrum sensing and cognitive radio networks. Together with discussing security attacks, we also propose some techniques to mitigate their effectiveness.
Keywords: cognitive radio; quality of service; radio networks; radio spectrum management; signal detection; telecommunication security; QoS; cognitive radio networks; quality of service; resource allocation; security attacks; security threats; spectrum allocation; spectrum sensing; spectrum sharing; unlicensed users; vacant channel detection; wireless communication; Artificial intelligence; Authentication; Computers; FCC; Jamming; Radio networks; Cognitive Radio; Cognitive Radio Networks; Dynamic Spectrum Access; Mitigation; Security Threats/Attacks (ID#:14-2883)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6892537&isnumber=6892507
- Khasawneh, M.; Agarwal, A, "A Survey On Security In Cognitive Radio Networks," Computer Science and Information Technology (CSIT), 2014 6th International Conference on, pp.64, 70, 26-27 March 2014. doi: 10.1109/CSIT.2014.6805980 Cognitive radio (CR) has been introduced to accommodate the steady increase in spectrum demand. In CR networks, unlicensed users, which are referred to as secondary users (SUs), are allowed to dynamically access the frequency bands when licensed users, which are referred to as primary users (PUs), are inactive. One important technical area that has received little attention to date in the cognitive radio system is wireless security. New classes of security threats and challenges have been introduced in cognitive radio systems, and providing strong security may prove to be the most difficult aspect of making cognitive radio a long-term commercially-viable concept. This paper addresses the main challenges, security attacks and their mitigation techniques in cognitive radio networks. The attacks discussed are organized by the protocol layer at which each attack operates.
Keywords: cognitive radio; protocols; radio networks; telecommunication security; cognitive radio networks; long-term commercially-viable concept; mitigation techniques; protocol layer; security attacks; spectrum demand; wireless security; Authentication; Cognitive radio ;Linear programming; Physical layer; Protocols; Sensors; Attack; Cognitive radio; Primary User (PU);Secondary User (SU); Security (ID#:14-2884)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805980&isnumber=6805962
- Akin, S., "Security in Cognitive Radio Networks," Information Sciences and Systems (CISS), 2014 48th Annual Conference on, pp.1,6, 19-21 March 2014. doi: 10.1109/CISS.2014.6814188 In this paper, we investigate information-theoretic security by modeling a cognitive radio wiretap channel under quality-of-service (QoS) constraints and interference power limitations inflicted on primary users (PUs). We initially define four different transmission scenarios regarding channel sensing results and their correctness. We provide effective secure transmission rates at which a secondary eavesdropper is prevented from listening to a secondary transmitter (ST). Then, we construct a channel state transition diagram that characterizes this channel model. We obtain the effective secure capacity which describes the maximum constant buffer arrival rate under given QoS constraints. We determine the optimal transmission power policies that maximize the effective secure capacity, and then propose an algorithm that, in general, converges quickly to these optimal policy values. Finally, we show the performance levels and gains obtained under different channel conditions and scenarios. In particular, we emphasize the significant effect of the hidden-terminal problem on information-theoretic security in cognitive radios.
Keywords: cognitive radio; information theory; quality of service; radio transmitters; radiofrequency interference; telecommunication security; QoS; channel sensing; channel state transition; cognitive radio networks; constant buffer arrival rate; hidden-terminal problem; information-theoretic security; interference power limitations; optimal policy; optimal transmission power; primary users; quality of service; secondary eavesdropper; secondary transmitter; transmission rates; wiretap channel; Cognitive radio; Fading; Interference; Quality of service; Security; Sensors; Signal to noise ratio; Cognitive radio; effective capacity; information-theoretic security; quality of service (QoS) constraints (ID#:14-2885)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814188&isnumber=6814063
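For Gaussian channels, the secure rate underlying this line of work has a one-line form: the secrecy capacity is the legitimate link's rate minus the eavesdropper's, floored at zero. A minimal sketch of that baseline quantity follows; the paper's effective secure capacity additionally layers QoS queueing constraints and channel-sensing states on top of it.

```python
import math

def gaussian_secrecy_capacity(snr_main: float, snr_eve: float) -> float:
    """Secrecy capacity (bits/s/Hz) of a Gaussian wiretap channel:
    C_s = [log2(1 + SNR_main) - log2(1 + SNR_eve)]^+ ."""
    return max(0.0, math.log2(1.0 + snr_main) - math.log2(1.0 + snr_eve))

# Positive secrecy rate only while the legitimate link is the stronger one.
print(gaussian_secrecy_capacity(snr_main=10.0, snr_eve=1.0))   # ~2.46
print(gaussian_secrecy_capacity(snr_main=1.0, snr_eve=10.0))   # 0.0
```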
- Liu, W.; Sarkar, M.Z.I; Ratnarajah, T., "On the Security Of Cognitive Radio Networks: Cooperative Jamming With Relay Selection," Networks and Communications (EuCNC), 2014 European Conference on, pp.1,5, 23-26 June 2014. doi: 10.1109/EuCNC.2014.6882674 We consider the problem of secret communication through a relay assisted downlink cognitive interference channel in which the secondary base station (SBS) is allowed to transmit simultaneously with the primary base station (PBS) over the same channel, instead of waiting for an idle channel as is traditional for a cognitive radio. We propose a cooperative jamming (CJ) scheme to improve the secrecy rate, in which multiple relays transmit weighted jamming signals to create additional interference in the direction of the eavesdropper with the purpose of confusing it. The proposed CJ scheme is designed to cancel out interference at the secondary receiver (SR) while keeping interference at the primary receiver (PR) under a certain threshold. Moreover, we develop an algorithm to select the effective relays which meet the target secrecy rate. Our results show that, with the help of the developed algorithm, a suitable CJ scheme can be designed to improve the secrecy rate at the SR to meet the target secrecy rate.
Keywords: cognitive radio; jamming; relay networks (telecommunication) ;telecommunication security; cognitive radio networks security; cooperative jamming; primary base station; primary receiver; relay assisted downlink cognitive interference channel; relay selection; secondary base station; secondary receiver; Cognitive radio; Interference; Jamming; Physical layer; Relays; Scattering; Security; Cooperative jamming; cognitive radio network; secrecy rate (ID#:14-2886)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6882674&isnumber=6882614
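The central trick in schemes of this kind, placing jamming energy where the eavesdropper hears it but the intended receiver does not, can be sketched as a null-space projection. The channel vectors below are randomly generated placeholders, and this is the generic zero-forcing artificial-noise idea under a global-CSI assumption, not the authors' relay-selection algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
n_relays = 4
# Illustrative flat-fading channels: relays -> secondary receiver / eavesdropper.
h_sr = (rng.normal(size=n_relays) + 1j * rng.normal(size=n_relays)) / np.sqrt(2)
h_ev = (rng.normal(size=n_relays) + 1j * rng.normal(size=n_relays)) / np.sqrt(2)

# Confine the jamming weights to the null space of the legitimate channel:
# the intended receiver sees (almost) no interference, the eavesdropper does.
_, _, vh = np.linalg.svd(h_sr[np.newaxis, :])
null_basis = vh[1:].conj().T              # n_relays x (n_relays - 1)
w = null_basis @ (rng.normal(size=n_relays - 1)
                  + 1j * rng.normal(size=n_relays - 1))
w *= np.sqrt(1.0 / np.vdot(w, w).real)    # unit total jamming power

print(abs(h_sr @ w))   # ~0: jamming cancelled at the secondary receiver
print(abs(h_ev @ w))   # O(1): interference remains at the eavesdropper
```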
- Elkashlan, M.; Wang, L.; Duong, T.Q.; Karagiannidis, G.K.; Nallanathan, A, "On the Security of Cognitive Radio Networks," Vehicular Technology, IEEE Transactions on, vol. PP, no.99, pp.1, 1, September 2014. doi: 10.1109/TVT.2014.2358624 Cognitive radio has emerged as an essential recipe for future high-capacity high-coverage multi-tier hierarchical networks. Securing data transmission in these networks is of utmost importance. In this paper, we consider the cognitive wiretap channel and propose multiple antennas to secure the transmission at the physical layer, where the eavesdropper overhears the transmission from the secondary transmitter to the secondary receiver. The secondary receiver and the eavesdropper are equipped with multiple antennas, and passive eavesdropping is considered where the channel state information of the eavesdropper's channel is not available at the secondary transmitter. We present new closed-form expressions for the exact and asymptotic secrecy outage probability. Our results reveal the impact of the primary network on the secondary network in the presence of a multi-antenna wiretap channel.
Keywords: Antennas; Cognitive radio; Interference; Radio transmitters; Receivers; Security; Signal to noise ratio (ID#:14-2887)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901288&isnumber=4356907
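Secrecy outage probability has a compact operational definition that a Monte Carlo check makes concrete: the probability that the instantaneous secrecy capacity falls below a target secrecy rate. The sketch below assumes single-antenna Rayleigh fading for brevity, rather than the multi-antenna setting of the paper's closed-form expressions.

```python
import numpy as np

def secrecy_outage_prob(snr_m, snr_e, rate_s, trials=200_000, seed=7):
    """Estimate P(C_s < rate_s) for Rayleigh-faded main/eavesdropper links,
    where C_s = [log2(1 + snr_m*g_m) - log2(1 + snr_e*g_e)]^+, g ~ Exp(1)."""
    rng = np.random.default_rng(seed)
    g_m = rng.exponential(size=trials)
    g_e = rng.exponential(size=trials)
    c_s = np.maximum(0.0, np.log2(1 + snr_m * g_m) - np.log2(1 + snr_e * g_e))
    return float(np.mean(c_s < rate_s))

# Outage falls as the main link's average SNR pulls away from the eavesdropper's.
for snr_m_db in (5, 10, 15, 20):
    p = secrecy_outage_prob(10 ** (snr_m_db / 10), snr_e=1.0, rate_s=1.0)
    print(f"avg main SNR {snr_m_db:2d} dB -> secrecy outage {p:.3f}")
```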
- Safdar, G.A; Albermany, S.; Aslam, N.; Mansour, A; Epiphaniou, G., "Prevention Against Threats To Self Co-Existence - A Novel Authentication Protocol For Cognitive Radio Networks," Wireless and Mobile Networking Conference (WMNC), 2014 7th IFIP, pp.1, 6, 20-22 May 2014. doi: 10.1109/WMNC.2014.6878857 Cognitive radio networks are intelligent networks that can sense the environment and adapt the communication parameters accordingly. These networks find their applications in the co-existence of different wireless networks, interference mitigation, and dynamic spectrum access. Unlike traditional wireless networks, cognitive radio networks additionally have their own set of unique security threats and challenges, such as selfish misbehaviours, self-coexistence, licensed user emulation and attacks on spectrum managers; accordingly, the security protocols developed for these networks must have the ability to counter these attacks. This paper presents a novel cognitive authentication protocol, called CoG-Auth, aimed at providing security in cognitive radio networks against threats to self co-existence. CoG-Auth does not require the presence of any resource-enriched base stations or centralised certification authorities, enabling it to be applicable to both infrastructure and ad hoc cognitive radio networks. The CoG-Auth design employs a key hierarchy of temporary keys, partial keys and session keys to fulfil the fundamental requirements of security. CoG-Auth is compared with the IEEE 802.16e standard PKMv2 for performance analysis; it is shown that CoG-Auth is secure, more efficient, less computationally intensive, and performs better in terms of authentication time, successful authentication and transmission rate.
Keywords: cognitive radio; cryptographic protocols; CoG-Auth design; base stations; centralised certification authorities; cognitive radio networks; dynamic spectrum access; intelligent networks; interference mitigation; novel cognitive authentication protocol; security protocols; self co-existence; wireless networks; Authentication; Cognitive radio; Encryption; Protocols; Standards; Authentication; Cognitive Radio; Cryptography; Protocol; Security (ID#:14-2888)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6878857&isnumber=6878843
- Savas, O.; Ahn, G.S.; Deng, J., "Securing Cognitive Radio Networks Against Belief Manipulation Attacks Via Trust Management," Collaboration Technologies and Systems (CTS), 2014 International Conference on, vol., no., pp.158,165, 19-23 May 2014. doi: 10.1109/CTS.2014.6867559 Cognitive Radio (CR) provides cognitive, self-organizing, and reconfiguration features. When forming a network, namely Cognitive Radio Networks (CRNs), these features can further provide network agility and spectrum sharing. On the other hand, they also make the network much more vulnerable than other traditional wireless networks, e.g., ad hoc wireless or sensor networks. In particular, malicious nodes may exploit the cognitive engine of CRs and conduct belief manipulation attacks to degrade the network performance. Traditional security methods using cryptography or authentication cannot adequately address these attacks. In this paper, we propose to use trust management for a more robust CRN operation against belief manipulation attacks. Specifically, we first study the effects of malicious behaviors on network performance, define trust evaluation metrics to capture malicious behaviors, and illustrate how a trust management strategy can help to enhance the robustness of network operations in various network configurations.
Keywords: ad hoc networks; authorisation; cognitive radio; cryptography; radio spectrum management; CRN; ad hoc wireless; authentication; belief manipulation attacks; cognitive radio networks; cryptography; malicious nodes; network agility; security methods; sensor networks ;spectrum sharing; trust management; Ad hoc networks; Authentication; Cognitive radio; Routing; Throughput; Uncertainty; Cognitive radio networks; belief manipulation attack; cross-layer networking; trust initialization; trust management (ID#:14-2889)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6867559&isnumber=6867522
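One standard way to turn observed behavior into a trust metric, consistent in spirit with the evaluation metrics discussed here though not necessarily the paper's exact formulation, is a beta-reputation score: count good and bad interactions per neighbor and use the Beta posterior mean as the trust value.

```python
from dataclasses import dataclass

@dataclass
class BetaTrust:
    """Beta-reputation trust score: with `good` positive and `bad` negative
    observations, the trust estimate is the Beta posterior mean
    (good + 1) / (good + bad + 2), starting at 0.5 for an unknown node."""
    good: int = 0
    bad: int = 0

    def update(self, behaved_well: bool) -> None:
        if behaved_well:
            self.good += 1
        else:
            self.bad += 1

    @property
    def score(self) -> float:
        return (self.good + 1) / (self.good + self.bad + 2)

honest, liar = BetaTrust(), BetaTrust()
for _ in range(20):
    honest.update(True)
    liar.update(False)
liar.update(True)                    # one good report barely moves the score
print(f"honest {honest.score:.2f}, liar {liar.score:.2f}")  # ~0.95 vs ~0.09
```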
- Li Hongning; Pei Qingqi; Ma Lichuan, "Channel Selection Information Hiding Scheme For Tracking User Attack In Cognitive Radio Networks," Communications, China, vol.11, no.3, pp.125,136, March 2014. doi: 10.1109/CC.2014.6825265 Because primary users occupy the spectrum discontinuously in cognitive radio networks (CRN), the time-varying nature of spectrum holes has become increasingly prominent. In this dynamic environment, cognitive users can access channels that are not occupied by primary users, but they have to hand off to other spectrum holes to continue communication when primary users come back, which brings new security problems. Tracking user attack (TUA) is a typical attack during spectrum handoff, which can invalidate handoff by preventing user access and break down the whole network. In this paper, we propose a Channel Selection Information Hiding scheme (CSIH) to defend against TUA. With the proposed scheme, we can destroy the routes to the root node of the attack tree by hiding channel selection information and thus enhance the security of cognitive radio networks.
Keywords: cognitive radio; mobility management (mobile radio); radio spectrum management; tracking; CRN; CSIH; TUA; access channels; channel selection information hiding scheme; cognitive radio networks; root node; spectrum handoff; spectrum holes; tracking user attack; Channel estimation; Cognitive radio; Communication system security; Security; Tracking; Wireless sensor networks; attack tree; handoff; tracking user attack (ID#:14-2890)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825265&isnumber=6825249
- Jung-Min Park; Reed, J.H.; Beex, AA; Clancy, T.C.; Kumar, V.; Bahrak, B., "Security and Enforcement in Spectrum Sharing," Proceedings of the IEEE, vol.102, no.3, pp.270,281, March 2014. doi: 10.1109/JPROC.2014.2301972 When different stakeholders share a common resource, such as the case in spectrum sharing, security and enforcement become critical considerations that affect the welfare of all stakeholders. Recent advances in radio spectrum access technologies, such as cognitive radios, have made spectrum sharing a viable option for significantly improving spectrum utilization efficiency. However, those technologies have also contributed to exacerbating the difficult problems of security and enforcement. In this paper, we review some of the critical security and privacy threats that impact spectrum sharing. We propose a taxonomy for classifying the various threats, and describe representative examples for each threat category. We also discuss threat countermeasures and enforcement techniques, which are discussed in the context of two different approaches: ex ante (preventive) and ex post (punitive) enforcement.
Keywords: cognitive radio; radio spectrum management; telecommunication security; cognitive radios; enforcement techniques; ex ante enforcement; ex post enforcement; preventive enforcement; privacy threats; punitive enforcement; radio spectrum access technologies; security; spectrum sharing ;spectrum utilization; stakeholders; taxonomy; threat category; Data privacy; Interference; Network security; Privacy; Radio spectrum management; Sensors; Cognitive radio; dynamic spectrum access ;enforcement; security; spectrum sharing (ID#:14-2891)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6732887&isnumber=6740864
- Sheng Zhong; Haifan Yao, "Towards Cheat-Proof Cooperative Relay for Cognitive Radio Networks," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.9, pp.2442, 2451, Sept. 2014. doi: 10.1109/TPDS.2013.151 In cognitive radio networks, cooperative relay is a new technology that can significantly improve spectrum efficiency. While the existing protocols for cooperative relay are very interesting and useful, there is a crucial problem that has not been investigated: Selfish users may cheat in cooperative relay, in order to benefit themselves. Here by cheating we mean the behavior of reporting misleading channel and payment information to the primary user and other secondary users. Such cheating behavior may harm other users and thus lead to poor system throughput. Given the threat of selfish users' cheating, our objective in this paper is to suppress the cheating behavior of selfish users in cooperative relay. Hence, we design the first cheat-proof scheme for cooperative relay in cognitive radio networks, and rigorously prove that under our scheme, selfish users have no incentive to cheat. Our design and analysis start in the model of strategic game for interactions among secondary users; then they are extended to the entire cooperative relay process, which is modeled as an extensive game. To make our schemes more practical, we also consider two aspects: fairness and system security. Results of extensive simulations demonstrate that our scheme suppresses cheating behavior and thus improves the system throughput in the face of selfish users.
Keywords: cognitive radio; cooperative communication; relay networks (telecommunication) ;cheat-proof cooperative relay; cognitive radio networks; fairness aspect; misleading channel; payment information; secondary users; strategic game; system security; Cognitive radio networks; cheat-proof; cooperative relay; fairness (ID#:14-2892)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6520841&isnumber=6873370
- Rocca, P.; Quanjiang Zhu; Bekele, E.T.; Shiwen Yang; Massa, A, "4-D Arrays as Enabling Technology for Cognitive Radio Systems," Antennas and Propagation, IEEE Transactions on, vol.62, no.3, pp.1102, 1116, March 2014. doi: 10.1109/TAP.2013.2288109 Time-modulation (TM) in four-dimensional (4-D) arrays is implemented by using a set of radio-frequency switches in the beam forming network to modulate, by means of periodic pulse sequences, the static excitations and thus control the antenna radiation features. The on-off reconfiguration of the switches, that can be easily implemented via software, unavoidably generates harmonic radiations that can be suitably exploited for multiple channel communication purposes. As a matter of fact, harmonic beams can be synthesized having different spatial distribution and shapes in order to receive signals arriving on the antenna from different directions. Similarly, the capability to generate a field having different frequency and spatial distribution implies that the signal transmitted by time-modulated 4-D arrays is direction-dependent. Accordingly, such a feature is also exploited to implement a secure communication scheme directly at the physical layer. Thanks to the easy software-based reconfigurability, the multiple harmonic beamforming, and the security capability, 4-D arrays can be considered as an enabling technology for future cognitive radio systems. In this paper, these potentialities of time-modulated 4-D arrays are presented and their effectiveness is supported by a set of representative numerical simulation results.
Keywords: MIMO communication; antenna arrays; antenna radiation patterns; array signal processing; cognitive radio; modulation; telecommunication security; time-domain analysis; antenna radiation features; beam forming network; four-dimensional arrays; future cognitive radio systems; harmonic beams; harmonic radiations; multiple channel communication purposes; multiple harmonic beamforming; on-off reconfiguration; periodic pulse sequences; physical layer; radiofrequency switches; secure communication scheme; security capability; software-based reconfigurability; spatial distribution; static excitations; time-modulated 4D arrays; time-modulation; Antenna arrays; Directive antennas; Harmonic analysis; Optimization; Radio frequency; Receiving antennas; 4-D arrays; cognitive radio; harmonic beamforming; multiple-input multiple-output (MIMO); reconfigurability; secure communications; time-modulated arrays (ID#:14-2893)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6651739&isnumber=6750022
- Heuel, S.; Roessler, A, "Coexistence of S-Band radar and 4G Mobile Networks," Radar Symposium (IRS), 2014 15th International, pp.1, 4, 16-18 June 2014. doi: 10.1109/IRS.2014.6869236 Today's wireless network and radar systems are designed to obey a fixed spectrum assignment policy regulated by the Federal Communications Commission (FCC). The assignment served well in the past, but sparse or medium utilization of some frequencies now stands in contrast to heavy usage of others. This ineffective spectrum allocation conflicts with the dramatically increasing need for bandwidth in security systems like radar and in mobile networks, and has driven the evolution of intelligent radios applying dynamic spectrum access, i.e., cognitive radio. To underline the demand for dynamic spectrum allocation, this paper addresses coexistence between S-Band Air Traffic Control (ATC) radar systems and LTE mobiles operating in E-UTRA Band 7. Technical requirements for radar and mobile devices operating close to each other are addressed, and coexistence is validated by test and measurement performed at a major German airport. It is shown that throughput reduction, increased Block Error Rate (BLER) for these mobile radios, and reduction of the probability of detection Pd of security-relevant S-Band radar occur in the presence of the other service.
Keywords: Long Term Evolution; air traffic control; military radar; radio spectrum management; radiofrequency interference; 4G mobile networks; E-UTRA band 7; Federal Communications Commission; LTE mobile radio; S-band air traffic control radar systems; S-band radar; block error rate; cognitive radio; dynamic spectrum access; dynamic spectrum allocation; fixed spectrum assignment policy; ineffective spectrum allocation; intelligent radios; radar-mobile radio coexistence; security system bandwidth; wireless network; Interference; Meteorological radar; Mobile communication; Radar antennas; Radar measurements;Throughput;4G Networks; ATC Radar; ATS Radar; Coexistence; LTE; S-Band; WiMAX (ID#:14-2894)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6869236&isnumber=6869176
- Dabcevic, K.; Betancourt, A; Marcenaro, L.; Regazzoni, C.S., "A Fictitious Play-Based Game-Theoretical Approach To Alleviating Jamming Attacks For Cognitive Radios," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp.8158,8162, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6855191 On-the-fly reconfigurability and learning capabilities of Cognitive Radios inherently bring a set of new security issues. One of them is intelligent radio frequency jamming, where an adversary is able to deploy advanced jamming strategies to degrade the performance of the communication system. In this paper, we observe the jamming/anti-jamming problem from a game-theoretical perspective. A game with incomplete information on the opponent's payoff and strategy is modelled as a Markov Decision Process (MDP). A variant of the fictitious play learning algorithm is deployed to find optimal strategies in terms of a combination of channel hopping and power alteration anti-jamming schemes.
Keywords: Markov processes; cognitive radio; game theory; jamming; MDP; Markov decision process; channel hopping; cognitive radios; fictitious play-based game-theoretical approach; intelligent radio frequency jamming attack; power alteration anti-jamming scheme; Cognitive radio; Games; Interference; Jamming; Radio transmitters; Stochastic processes; Markov models; anti-jamming; channel surfing; cognitive radio; fictitious play; game theory; jamming; power alteration (ID#:14-2895)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855191&isnumber=6853544
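Fictitious play itself is only a few lines: each player best-responds to the empirical mixed strategy formed by its opponent's past actions. The sketch below runs it on a hypothetical two-channel zero-sum jamming game (the transmitter scores when it evades the jammer), not the paper's MDP with combined channel-hopping and power-alteration actions.

```python
import numpy as np

# Toy zero-sum jamming game on 2 channels: the transmitter earns 1 if it
# picks a channel the jammer did not, 0 otherwise; the jammer's payoff is
# the negative. Rows: transmitter channel, columns: jammer channel.
PAYOFF = np.array([[0.0, 1.0],
                   [1.0, 0.0]])

tx_counts = np.ones(2)            # smoothed empirical action counts
jam_counts = np.ones(2)

for _ in range(5000):
    jam_mix = jam_counts / jam_counts.sum()
    tx_action = int(np.argmax(PAYOFF @ jam_mix))    # tx best response
    tx_mix = tx_counts / tx_counts.sum()
    jam_action = int(np.argmin(tx_mix @ PAYOFF))    # jammer best response
    tx_counts[tx_action] += 1
    jam_counts[jam_action] += 1

# Empirical frequencies converge to the mixed equilibrium (1/2, 1/2):
print(tx_counts / tx_counts.sum(), jam_counts / jam_counts.sum())
```

In this matching-pennies-like game both empirical frequencies converge to the (1/2, 1/2) mixed equilibrium, the hallmark behavior of fictitious play in zero-sum games.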
- YuanYuan He; Evans, J.; Dey, S., "Secrecy Rate Maximization For Cooperative Overlay Cognitive Radio Networks With Artificial Noise," Communications (ICC), 2014 IEEE International Conference on, pp.1663,1668, 10-14 June 2014. doi: 10.1109/ICC.2014.6883561 We consider physical-layer security in a novel MISO cooperative overlay cognitive radio network (CRN) with a single eavesdropper. We aim to design an artificial noise (AN) aided secondary transmit strategy to maximize the joint achievable secrecy rate of both primary and secondary links, subject to a global secondary transmit power constraint and the guarantee that any secondary transmission at least does not degrade the receive quality of the primary network, under the assumption that global CSI is available. The resulting optimization problem is challenging to solve due to its non-convexity in general. A computationally efficient approximation methodology is proposed based on the semidefinite relaxation (SDR) technique, followed by a two-step alternating optimization algorithm for obtaining a local optimum for the corresponding SDR problem. This optimization algorithm consists of a one-dimensional line search and a non-convex optimization problem, which, however, through a novel reformulation, can be approximated as a convex semidefinite program (SDP). Analysis of the extension to the multiple-eavesdropper scenario is also provided. Simulation results show that the proposed AN-aided joint secrecy rate maximization design (JSRMD) can significantly boost the secrecy performance over JSRMD without AN.
Keywords: cognitive radio; concave programming; convex programming; cooperative communication; overlay networks; radio links; radio networks; telecommunication security; AN aided secondary power transmission strategy; AN-aided joint secrecy rate maximization design; JSRMD; MISO cooperative overlay CRN;SDR technique; artificial noise; computationally efficient approximation methodology; convex semidefinite relaxation program; cooperative overlay cognitive radio networks; global CSI; nonconvex optimization problem; physical layer security; primary links; secondary links; single eavesdropper; two step alternating optimization algorithm; Approximation algorithms; Approximation methods; Cognitive radio; Interference; Jamming; Optimization; Vectors; Overlay Cognitive Radio; amplify-and-forward relaying; artificial interference; physical-layer security; semidefinite relaxation (ID#:14-2896)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883561&isnumber=6883277
- Kumar, V.; Jung-Min Park; Clancy, T.C.; Kaigui Bian, "PHY-Layer Authentication Using Hierarchical Modulation And Duobinary Signaling," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp.782,786, 3-6 Feb. 2014. doi: 10.1109/ICCNC.2014.6785436 In a cognitive radio network, the non-conforming behavior of rogue transmitters is a major threat to opportunistic spectrum access. One approach for facilitating spectrum enforcement and security is to require every transmitter to embed a uniquely-identifiable authentication signal in its waveform at the PHY-layer. In existing PHY-layer authentication schemes, known as blind signal superposition, the authentication/identification signal is added to the message signal as noise, which leads to a tradeoff between the message signal's signal-to-noise ratio (SNR) and the authentication signal's SNR under the assumption of constant average transmitted power. This implies that one cannot improve the former without sacrificing the latter, and vice versa. In this paper, we propose a novel PHY-layer authentication scheme called hierarchically modulated duobinary signaling for authentication (HM-DSA). HM-DSA introduces some controlled amount of inter-symbol interference (ISI) into the message signal. The redundancy induced by the addition of the controlled ISI is utilized to embed the authentication signal. Our scheme, HM-DSA, relaxes the constraint on the aforementioned tradeoff and improves the error performance of the message signal as compared to the prior art.
Keywords: message authentication; radio spectrum management; telecommunication signalling; ISI;PHY-layer authentication; SNR; authentication-identification signal; blind signal superposition; cognitive radio network; duobinary signaling; hierarchical modulation; intersymbol interference; message signal; opportunistic spectrum access ;rogue transmitters; signal-to-noise; spectrum enforcement; spectrum security; uniquely-identifiable authentication signal; Authentication; Constellation diagram; Euclidean distance; Radio transmitters; Receivers; Signal to noise ratio (ID#:14-2897)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785436&isnumber=6785290
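Duobinary signaling, the carrier of HM-DSA's authentication bits, is simple to demonstrate end to end. The sketch shows standard precoded duobinary only, i.e., the controlled-ISI baseline; the hierarchical embedding of an authentication signal on top of it is the paper's contribution and is omitted here.

```python
def duobinary_tx(bits):
    """Precode (c_k = d_k XOR c_{k-1}), map to +/-1 levels, then apply the
    controlled-ISI duobinary filter y_k = a_k + a_{k-1} in {-2, 0, +2}."""
    c_prev, levels = 0, []
    for d in bits:
        c_prev ^= d
        levels.append(2 * c_prev - 1)
    return [a + a_prev for a_prev, a in zip([-1] + levels[:-1], levels)]

def duobinary_rx(samples):
    """Thanks to precoding, symbol-by-symbol detection works: y == 0 <=> 1."""
    return [1 if y == 0 else 0 for y in samples]

data = [1, 0, 1, 1, 0, 0, 1]
assert duobinary_rx(duobinary_tx(data)) == data
```

The precoder is what makes memoryless detection possible despite the deliberate ISI; HM-DSA exploits the redundancy in the three-level alphabet to hide authentication symbols.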
- ChunSheng Xin; Song, M., "Detection of PUE Attacks in Cognitive Radio Networks Based on Signal Activity Pattern," Mobile Computing, IEEE Transactions on, vol.13, no.5, pp.1022, 1034, May 2014. doi: 10.1109/TMC.2013.121 Promising to significantly improve spectrum utilization, cognitive radio networks (CRNs) have attracted great attention in the literature. Nevertheless, a new security threat known as the primary user emulation (PUE) attack raises a great challenge to CRNs. The PUE attack is unique to CRNs and can cause severe denial of service (DoS) to CRNs. In this paper, we propose a novel PUE detection system, termed the Signal activity Pattern Acquisition and Reconstruction System. Unlike current PUE detection solutions, the proposed system does not need any a priori knowledge of primary users (PUs), and places no limitation on the types of PUs to which it is applicable. It acquires the activity pattern of a signal through spectrum sensing, such as the ON and OFF periods of the signal. Then it reconstructs the observed signal activity pattern through a reconstruction model. By examining the reconstruction error, the proposed system can smartly distinguish a signal activity pattern of a PU from a signal activity pattern of an attacker. Numerical results show that the proposed system has excellent performance in detecting PUE attacks.
Keywords: cognitive radio; computer network security; radio spectrum management; CRN; DoS; PUE attacks; PUE detection system; cognitive radio networks; denial of service; primary user emulation; signal activity pattern acquisition and reconstruction system; spectrum sensing; spectrum utilization; Data models; Probability distribution; Radio transmitters; Sensors; Training; Training data; Cognitive radio network; primary user emulation attack; primary user emulation detection (ID#:14-2898)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6819890&isnumber=6819877
- Songjun Ma; Yunfeng Peng; Tao Wang; Xiaoying Gan; Feng Yang; Xinbing Wang; Guizani, M., "Detecting the Greedy Spectrum Occupancy Threat In Cognitive Radio Networks," Communications (ICC), 2014 IEEE International Conference on, pp.4939,4944, 10-14 June 2014. doi: 10.1109/ICC.2014.6884103 Recently, the security of cognitive radio (CR) has become a serious issue. One kind of threat, which we call the greedy spectrum occupancy threat (GSOT) in this paper, has long been ignored in previous work. In GSOT, a secondary user may selfishly occupy the spectrum for a long time, which makes other users suffer additional waiting time in queue to access the spectrum and leads to congestion or breakdown. In this paper, a queueing model is established to describe the system with a greedy secondary user (GSU). Based on this model, the impacts of the GSU on the system are evaluated. Numerical results indicate that the steady-state performance of the system is influenced not only by the average occupancy time, but also by the number of users as well as the number of channels. Since a sudden change in the average occupancy time of the GSU would produce dramatic performance degradation, the greedy secondary user prefers to increase its occupancy time gradually so that it is not detected easily. Once it reaches its targeted occupancy time, the system will be in steady state, and the performance will be degraded. In order to detect such cunning behavior as quickly as possible, we propose a wavelet-based detection approach. Simulation results are presented to demonstrate the effectiveness and speed of the proposed approach.
Keywords: cognitive radio; greedy algorithms; telecommunication security; wavelet transforms; CR security; GSOT; GSU; cognitive radio networks; greedy secondary user; greedy spectrum occupancy threat detection; occupancy time; steady-state performance; wavelet based detection approach; Cognitive radio; Educational institutions; Queueing analysis; Security; Steady-state; Transforms (ID#:14-2899)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884103&isnumber=6883277
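Wavelet detail coefficients localize a shift in the mean of a time series, which is the intuition behind a wavelet-based occupancy detector. Below is a minimal numpy sketch with the Haar transform written out by hand and synthetic occupancy times; the paper's detector and its thresholds are more elaborate and target gradual rather than abrupt changes.

```python
import numpy as np

def haar_detail(x, level):
    """Haar detail coefficients at `level`: scaled differences between the
    means of adjacent sample blocks of length 2**(level-1)."""
    x = np.asarray(x, dtype=float)
    half = 2 ** (level - 1)
    n = len(x) // (2 * half) * (2 * half)
    blocks = x[:n].reshape(-1, half).mean(axis=1)
    return (blocks[0::2] - blocks[1::2]) * np.sqrt(half / 2.0)

rng = np.random.default_rng(0)
occupancy = np.concatenate([rng.normal(5.0, 0.5, 72),    # honest behaviour
                            rng.normal(7.0, 0.5, 56)])   # greedy: longer holds
d = haar_detail(occupancy, level=4)
print(np.round(d, 1))                 # one coefficient spikes at the change
print(int(np.argmax(np.abs(d))))      # -> 4: change inside samples [64, 80)
```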
- Kabir, I; Astaneh, S.; Gazor, S., "Forensic Outlier Detection for Cognitive Radio Networks," Communications (QBSC), 2014 27th Biennial Symposium on, vol., no., pp.52, 56, 1-4 June 2014. doi: 10.1109/QBSC.2014.6841183 We consider forensic outlier detection instead of traditional outlier detection to enforce spectrum security in a Cognitive Radio Network (CRN). We investigate a CRN where a group of sensors report their local binary decisions to a Fusion Center (FC), which makes a global decision on the availability of the spectrum. To ensure the truthfulness of the sensors, we examine the reported decisions in order to determine whether a specific sensor is an outlier. We propose several optimal detectors (for known parameters) and suboptimal detectors (for the practical cases where the parameters are unknown) to detect three types of outlier sensors: 1) selfish sensor, which reports the spectrum to be occupied when it locally detects its vacancy, 2) malicious sensor, which reports the spectrum to be vacant when it locally detects its occupancy, 3) malfunctioning sensor, whose reports are not accurate enough (i.e., its performance is close to random guessing). We evaluate the proposed detectors by simulations. Our simulation results reveal that the proposed detectors significantly outperform Grubbs' test. Since the unknown or untrustworthy parameters are accurately estimated by the FC, the proposed suboptimal detectors do not require knowledge of the spectrum statistics and are insensitive to the parameters reported by the suspected user. These detectors can be used by government agencies for forensic testing in policy control and abuser identification in CRNs.
Keywords: cognitive radio; decision theory; radio networks; sensor fusion; signal detection; telecommunication security; CRN; FC; Grubbs test; abuser identification; cognitive radio networks; forensic outlier detection; forensic testing; fusion center; local binary decisions; malfunctioning sensor; malicious sensor; optimal detectors; outlier sensors; policy control; selfish sensor; spectrum security; spectrum statistics; suboptimal detectors; Availability; Cognitive radio; Detectors; Forensics; Maximum likelihood estimation; Simulation; Cognitive radio; forensic cognitive detection; outlier detection; policy enforcement; spectrum security (ID#:14-2900)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841183&isnumber=6841165
- Alvarado, A; Scutari, G.; Jong-Shi Pang, "A New Decomposition Method for Multiuser DC-Programming and Its Applications," Signal Processing, IEEE Transactions on, vol.62, no.11, pp.2984, 2998, June 1, 2014. doi: 10.1109/TSP.2014.2315167 We propose a novel decomposition framework for the distributed optimization of Difference Convex (DC)-type nonseparable sum-utility functions subject to coupling convex constraints. A major contribution of the paper is to develop for the first time a class of (inexact) best-response-like algorithms with provable convergence, where a suitably convexified version of the original DC program is iteratively solved. The main feature of the proposed successive convex approximation method is its decomposability structure across the users, which leads naturally to distributed algorithms in the primal and/or dual domain. The proposed framework is applicable to a variety of multiuser DC problems in different areas, ranging from signal processing to communications and networking. As a case study, in the second part of the paper we focus on two examples, namely: i) a novel resource allocation problem in the emerging area of cooperative physical layer security and ii) the renowned sum-rate maximization of MIMO Cognitive Radio networks. Our contribution in this context is to devise a class of easy-to-implement distributed algorithms with provable convergence to stationary solutions of such problems. Numerical results show that the proposed distributed schemes reach performance close to (and sometimes better than) that of centralized methods.
Keywords: MIMO communication; approximation theory; cognitive radio; convex programming; cooperative communication; distributed algorithms; iterative methods; multiuser detection; resource allocation; telecommunication security; MIMO cognitive radio networks; best-response-like algorithms; convex constraint coupling; cooperative physical layer security; decomposability structure; decomposition method; difference convex-type nonseparable sum-utility functions; distributed algorithms; distributed optimization; inexact algorithms; multiuser DC-programming; novel resource allocation problem; renowned sum-rate maximization; signal processing; successive convex approximation method; Approximation methods; Convergence; Couplings; Jamming; Linear programming; Optimization; Signal processing algorithms; Cooperative physical layer security; cognitive radio; difference convex program; distributed algorithms; successive convex approximation (ID#:14-2901)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781556&isnumber=6809867
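The backbone of the framework, iteratively replacing a difference-convex objective with a convex surrogate by linearizing the concave part, fits in a few lines in one dimension. Below is a hypothetical scalar example with f(x) = x^4 - 3x^2, i.e., g(x) = x^4 and h(x) = 3x^2 both convex; it is an illustration of successive convex approximation in general, not the paper's multiuser decomposition.

```python
import numpy as np

# DC objective f(x) = g(x) - h(x) with g(x) = x**4 and h(x) = 3*x**2.
# Each iteration minimizes the convex surrogate obtained by linearizing h
# at the current point x_k:
#   x_{k+1} = argmin_x  x**4 - h'(x_k) * x   =>   4 x**3 = 6 x_k.
def sca_step(x):
    return np.cbrt(1.5 * x)

x = 0.2                       # any nonzero starting point
for _ in range(30):
    x = sca_step(x)
print(x, np.sqrt(1.5))        # converges to a stationary point +/- sqrt(1.5)
```

Each surrogate minimization here has a closed form, and the iterates converge to the stationary points x = ±sqrt(1.5) of f, illustrating the provable convergence to stationary solutions claimed for the general algorithm.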
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Compiler Security
Much of software security focuses on applications, but compiler security should also be an area of concern. Compilers can "correct" secure coding in the name of efficient processing. The works cited here look at various approaches and issues in compiler security. These articles appeared in the first half of 2014.
- Bayrak, A; Regazzoni, F.; Novo Bruna, D.; Brisk, P.; Standaert, F.; Ienne, P., "Automatic Application of Power Analysis Countermeasures," Computers, IEEE Transactions on, vol. PP, no. 99, pp.1,1, Jan 2014. doi: 10.1109/TC.2013.219 We introduce a compiler that automatically inserts software countermeasures to protect cryptographic algorithms against power-based side-channel attacks. The compiler first estimates which instruction instances leak the most information through side-channels. This information is obtained either by dynamic analysis, evaluating an information theoretic metric over the power traces acquired during the execution of the input program, or by static analysis. As information leakage implies a loss of security, the compiler then identifies (groups of) instruction instances to protect with a software countermeasure such as random precharging or Boolean masking. As software protection incurs significant overhead in terms of cryptosystem runtime and memory usage, the compiler protects the minimum number of instruction instances to achieve a desired level of security. The compiler is evaluated on two block ciphers, AES and Clefia; our experiments demonstrate that the compiler can automatically identify and protect the most important instruction instances. To date, these software countermeasures have been inserted manually by security experts, who are not necessarily the main cryptosystem developers. Our compiler offers significant productivity gains for cryptosystem developers who wish to protect their implementations from side-channel attacks.
Keywords: Assembly; Computers; Cryptography; Sensitivity; Software; Automatic Programming; Physical security (ID#:14-2705)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6671593&isnumber=4358213
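First-order Boolean masking, one of the countermeasures the compiler inserts, can be illustrated in a few lines: the secret only ever exists as two random shares whose XOR equals it, so no single intermediate value (and hence no single power sample) depends on the unmasked secret. The sketch covers only the easy linear case (XOR with a key); masking nonlinear operations such as S-box lookups requires table recomputation and is where most of the runtime and memory overhead arises.

```python
import secrets

def mask(value: int) -> tuple[int, int]:
    """Split an 8-bit secret into two random shares: value == s0 ^ s1."""
    m = secrets.randbelow(256)
    return value ^ m, m

def masked_xor(shares: tuple[int, int], constant: int) -> tuple[int, int]:
    """XOR a public constant into the masked value: only one share is
    touched, so no intermediate ever equals the unmasked secret."""
    s0, s1 = shares
    return s0 ^ constant, s1

def unmask(shares: tuple[int, int]) -> int:
    return shares[0] ^ shares[1]

key, data = 0x2B, 0xA5
shares = mask(data)
shares = masked_xor(shares, key)
assert unmask(shares) == data ^ key
```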
- Yier Jin, "EDA Tools Trust Evaluation Through Security Property Proofs," Design, Automation and Test in Europe Conference and Exhibition (DATE), 2014, vol., no., pp.1,4, 24-28 March 2014. doi: 10.7873/DATE.2014.260 The security concerns of EDA tools have long been ignored because IC designers and integrators only focus on their functionality and performance. This lack of trusted EDA tools hampers hardware security researchers' efforts to design trusted integrated circuits. To address this concern, a novel EDA tools trust evaluation framework has been proposed to ensure the trustworthiness of EDA tools through their functional operation, rather than scrutinizing the software code. As a result, the newly proposed framework lowers the evaluation cost and is a better fit for hardware security researchers. To support the EDA tools evaluation framework, a new gate-level information assurance scheme is developed for security property checking on any gate-level netlist. Helped by the gate-level scheme, we expand the territory of proof-carrying based IP protection from RT-level designs to gate-level netlists, so that most commercially traded third-party IP cores are under the protection of proof-carrying based security properties. Using a sample AES encryption core, we successfully prove the trustworthiness of Synopsys Design Compiler in generating a synthesized netlist.
Keywords: cryptography; electronic design automation; integrated circuit design; AES encryption core; EDA tools trust evaluation; Synopsys design compiler; functional operation; gate-level information assurance scheme; gate-level netlist; hardware security researchers; proof-carrying based IP protection; security property proofs; software code; third-party IP cores; trusted integrated circuits ;Hardware; IP networks; Integrated circuits; Logic gates; Sensitivity; Trojan horses (ID#:14-2706)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6800461&isnumber=6800201
- Woodruff, J.; Watson, R.N.M.; Chisnall, D.; Moore, S.W.; Anderson, J.; Davis, B.; Laurie, B.; Neumann, P.G.; Norton, R.; Roe, M., "The CHERI Capability Model: Revisiting RISC In An Age Of Risk," Computer Architecture (ISCA), 2014 ACM/IEEE 41st International Symposium on, vol., no., pp.457,468, 14-18 June 2014. doi: 10.1109/ISCA.2014.6853201 Motivated by contemporary security challenges, we reevaluate and refine capability-based addressing for the RISC era. We present CHERI, a hybrid capability model that extends the 64-bit MIPS ISA with byte-granularity memory protection. We demonstrate that CHERI enables language memory model enforcement and fault isolation in hardware rather than software, and that the CHERI mechanisms are easily adopted by existing programs for efficient in-program memory safety. In contrast to past capability models, CHERI complements, rather than replaces, the ubiquitous page-based protection mechanism, providing a migration path towards deconflating data-structure protection and OS memory management. Furthermore, CHERI adheres to a strict RISC philosophy: it maintains a load-store architecture and requires only single-cycle instructions, and supplies protection primitives to the compiler, language runtime, and operating system. We demonstrate a mature FPGA implementation that runs the FreeBSD operating system with a full range of software and an open-source application suite compiled with an extended LLVM to use CHERI memory protection. A limit study compares published memory safety mechanisms in terms of instruction count and memory overheads. The study illustrates that CHERI is performance-competitive even while providing assurance and greater flexibility with simpler hardware.
Keywords: field programmable gate arrays; operating systems (computers); reduced instruction set computing; security of data; CHERI hybrid capability model; CHERI memory protection; FPGA implementation; FreeBSD operating system; MIPS ISA; OS memory management; RISC era; byte-granularity memory protection; capability hardware enhanced RISC instruction; compiler; data-structure protection; fault isolation; field programmable gate array; in-program memory safety; instruction count; instruction set architecture; language memory model enforcement; language runtime; load-store architecture; memory overhead; open-source application suite; reduced instruction set computing; single-cycle instructions; ubiquitous page-based protection mechanism; Abstracts; Coprocessors; Ground penetrating radar; Registers; Safety (ID#:14-2707)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853201&isnumber=6853187
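The protection model can be caricatured in software. Below is a hypothetical, greatly simplified model of a CHERI-style capability: a fat pointer carrying base, length, and permissions that is checked on every access. In real CHERI these checks are single-cycle hardware operations and capabilities are unforgeable tagged values; a Python object enforces neither property.

```python
class Capability:
    """Toy fat pointer: base, length, and permissions checked on every
    access. Illustrates only the bounds/permission semantics, not the
    hardware enforcement or unforgeability of real CHERI capabilities."""

    def __init__(self, memory: bytearray, base: int, length: int, perms: str):
        self.memory, self.base = memory, base
        self.length, self.perms = length, perms

    def _check(self, offset: int, perm: str) -> int:
        if perm not in self.perms:
            raise PermissionError(f"capability lacks '{perm}' permission")
        if not 0 <= offset < self.length:
            raise IndexError("capability bounds violation")
        return self.base + offset

    def load(self, offset: int) -> int:
        return self.memory[self._check(offset, "r")]

    def store(self, offset: int, value: int) -> None:
        self.memory[self._check(offset, "w")] = value

mem = bytearray(256)
buf = Capability(mem, base=64, length=16, perms="rw")
buf.store(0, 0x41)
assert buf.load(0) == 0x41
try:
    buf.load(16)               # one past the end: trap, not silent overflow
except IndexError as e:
    print("trapped:", e)
```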
- Barbosa, C.E.; Trindade, G.; Epelbaum, V.J.; Gomes Chang, J.; Oliveira, J.; Rodrigues Neto, J.A; Moreira de Souza, J., "Challenges on Designing A Distributed Collaborative UML Editor," Computer Supported Cooperative Work in Design (CSCWD), Proceedings of the 2014 IEEE 18th International Conference on, pp.59,64, 21-23 May 2014. doi: 10.1109/CSCWD.2014.6846817 Software development projects with geographically dispersed teams, especially those using UML models for code generation, may gain performance by using tools with collaborative capabilities. This study reviews the distributed collaborative UML editors available in the literature. The UML editors were compared using a Workstyle Model. Then, we discuss the fundamental challenges these kinds of UML editors face in assisting distributed developers and stakeholders across dispersed locations.
Keywords: Unified Modeling Language; groupware; program compilers; project management; software development management; UML models; Workstyle model; code generation; collaborative capabilities; distributed collaborative UML editors; geographically disperse teams; software development projects; Collaboration; Real-time systems; Security; Software; Synchronization; Syntactics; Unified modeling language; UML ;challenges; comparation; editor; review (ID#:14-2708)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846817&isnumber=6846800
- Larsen, P.; Brunthaler, S.; Franz, M., "Security through Diversity: Are We There Yet?," Security & Privacy, IEEE, vol.12, no.2, pp.28,35, Mar.-Apr. 2014. doi: 10.1109/MSP.2013.129 Because most software attacks rely on predictable behavior on the target platform, mass distribution of identical software facilitates mass exploitation. Countermeasures include moving-target defenses in general and biologically inspired artificial software diversity in particular. Although the concept of software diversity has interested researchers for more than 20 years, technical obstacles prevented its widespread adoption until now. Massive-scale software diversity has become practical due to the Internet (enabling distribution of individualized software) and cloud computing (enabling the computational power to perform diversification). In this article, the authors take stock of the current state of software diversity research. The potential showstopper issues are mostly solved; the authors describe the remaining issues and point to a realistic adoption path.
Keywords: cloud computing; security of data; software engineering; Internet; biologically inspired artificial software diversity; cloud computing; mass exploitation; mass identical software distribution; massive-scale software diversity; moving-target defenses; predictable behavior; security; software attacks; target platform; Computer crime; Computer security; Internet; Memory management; Prediction methods; Program processors; Runtime environment; Software architecture; compilers; error handling and recovery; programming languages; software engineering; system issues; testing and debugging (ID#:14-2709)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6617633&isnumber=6798534
- Agosta, G.; Barenghi, A; Pelosi, G.; Scandale, M., "A Multiple Equivalent Execution Trace Approach To Secure Cryptographic Embedded Software," Design Automation Conference (DAC), 2014 51st ACM/EDAC/IEEE, pp.1,6, 1-5 June 2014. doi: 10.1109/DAC.2014.6881537 We propose an efficient and effective method to secure software implementations of cryptographic primitives on low-end embedded systems, against passive side-channel attacks relying on the observation of power consumption or electro-magnetic emissions. The proposed approach exploits a modified LLVM compiler toolchain to automatically generate a secure binary characterized by a randomized execution flow. Also, we provide a new method to refresh the random values employed in the share splitting approaches to lookup table protection, addressing a currently open issue. We improve the current state-of-the-art in dynamic executable code countermeasures removing the requirement of a writeable code segment, and reducing the countermeasure overhead.
Keywords: cryptography; embedded systems; program compilers; table lookup; LLVM compiler toolchain; countermeasure overhead reduction; cryptographic embedded software security; cryptographic primitives; dynamic executable code countermeasures; electromagnetic emissions; lookup table protection; low-end embedded systems; multiple equivalent execution trace approach; passive side-channel attacks; power consumption observation; random values; randomized execution flow; share splitting approach; writeable code segment; Ciphers; Optimization; Power demand; Registers; Software; Power Analysis Attacks; Software Countermeasures; Static Analysis (ID#:14-2710)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881537&isnumber=6881325
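The share-refresh idea mentioned in this abstract can be illustrated with first-order Boolean masking. The following is a minimal Python sketch, not the paper's LLVM-based implementation; the function names and the 8-bit secret are hypothetical. The point is that refreshing re-randomizes both shares with a fresh mask while the masked value never appears in the clear.

    import secrets

    def split(secret):
        # First-order Boolean masking: secret = s0 XOR s1.
        s0 = secrets.randbits(8)
        return s0, s0 ^ secret

    def refresh(s0, s1):
        # Re-randomize both shares with a fresh mask r; the XOR of
        # the shares is unchanged, but each share takes a new value.
        r = secrets.randbits(8)
        return s0 ^ r, s1 ^ r

    s0, s1 = split(0x3A)
    s0, s1 = refresh(s0, s1)
    assert s0 ^ s1 == 0x3A  # the masked secret is preserved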
- Calvagna, A; Fornaia, A; Tramontana, E., "Combinatorial Interaction Testing of a Java Card Static Verifier," Software Testing, Verification and Validation Workshops (ICSTW), 2014 IEEE Seventh International Conference on, pp.84,87, March 31 2014-April 4 2014. doi: 10.1109/ICSTW.2014.10 We present a combinatorial interaction testing approach to perform validation testing of a fundamental component for the security of Java Cards: the byte code verifier. Combinatorial testing of all states of the Java Card virtual machine has been adopted as the coverage criterion. We developed a formal model of the Java Card byte code syntax to enable the combinatorial enumeration of well-formed states, and a formal model of the byte code semantic rules to be able to distinguish between well-typed and ill-typed states, and to derive actual test programs from them. A complete framework has been implemented, enabling fully automated application and evaluation of the conformance tests to any verifier implementation.
Keywords: Java; combinatorial mathematics; formal verification; operating systems (computers); program compilers; program testing; virtual machines; Java card byte code syntax; Java card static verifier; Java card virtual machine; byte code semantic rules; byte code verifier; combinatorial enumeration; combinatorial interaction testing; formal model; test programs; validation testing; Java; Law; Load modeling; Semantics; Testing; Virtual machining; Java virtual machine; combinatorial interaction testing; software engineering (ID#:14-2711)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825642&isnumber=6825623
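Combinatorial enumeration of verifier states of the kind described above can be prototyped with a Cartesian product over a small abstract state model. The Python sketch below is illustrative only; the state fields and their value sets are hypothetical and far coarser than the paper's formal model of the Java Card byte code syntax.

    from itertools import product

    # Hypothetical abstract model of a verifier state: each field
    # ranges over a small set of abstract type values.
    stack_top = ["int", "short", "ref", "empty"]
    local_0 = ["int", "ref", "uninitialized"]
    next_op = ["sadd", "aload_0", "sstore_1"]

    # Exhaustive combinatorial enumeration of candidate states; a
    # semantic rule would then label each state well- or ill-typed.
    states = list(product(stack_top, local_0, next_op))
    print(len(states))  # 4 * 3 * 3 = 36 candidate states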
- Hu Ge; Li Ting; Dong Hang; Yu Hewei; Zhang Miao, "Malicious Code Detection for Android Using Instruction Signatures," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, vol., no., pp.332,337, 7-11 April 2014. doi: 10.1109/SOSE.2014.48 This paper provides an overview of current static analysis technology for Android malicious code, together with a detailed analysis of the APK package format and the Android platform executable file format (dex). From the perspective of the binary sequence, the Dalvik VM file is segmented by method, and test samples are analyzed by automated DEX file parsing tools and the Levenshtein distance algorithm, which can effectively detect malicious Android applications that contain the same signatures. As demonstrated on a large number of samples, this signature-sequence-based static detection system not only detects malicious code quickly, but also has very low rates of false positives and false negatives.
Keywords: Android (operating system); digital signatures; program compilers; program diagnostics; APK format; Android malicious code detection; Android platform executable file; Dalvik VM file; Levenshtein distance algorithm; automated DEX file parsing tools; binary sequence; instruction signatures; malicious Android applications detection; signature sequences; static analysis technology; static detection system; Libraries; Malware; Mobile communication; Smart phones; Software; Testing; Android; DEX; Static Analysis; malicious code (ID#:14-2712)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830926&isnumber=6825948
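The Levenshtein distance at the core of this detection scheme is straightforward to reproduce. Below is a minimal Python sketch operating on Dalvik opcode mnemonics; the two example sequences are hypothetical, and the thresholding logic of the actual system is omitted.

    def levenshtein(a, b):
        # Dynamic-programming edit distance between two sequences.
        prev = list(range(len(b) + 1))
        for i, x in enumerate(a, 1):
            curr = [i]
            for j, y in enumerate(b, 1):
                curr.append(min(prev[j] + 1,              # deletion
                                curr[j - 1] + 1,          # insertion
                                prev[j - 1] + (x != y)))  # substitution
            prev = curr
        return prev[-1]

    # Hypothetical opcode sequences from two dex methods: a small
    # distance suggests the candidate matches the known signature.
    signature = ["const/4", "invoke-virtual", "move-result", "return"]
    candidate = ["const/4", "invoke-virtual", "return"]
    print(levenshtein(signature, candidate))  # 1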
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Compressive Sampling
Compressive sampling (or compressive sensing) is an important theory in signal processing. It allows efficient acquisition and reconstruction of a signal and may also be the basis for user identification. The works cited here were published or presented between January and August of 2014.
- Wei Wang; Xiao-Yi Pan; Yong-Cai Liu; De-Jun Feng; Qi-Xiang Fu, "Sub-Nyquist Sampling Jamming Against ISAR With Compressive Sensing," Sensors Journal, IEEE, vol.14, no.9, pp.3131,3136, Sept. 2014. doi: 10.1109/JSEN.2014.2323978 The Shannon-Nyquist theorem indicates that under-sampling at low rates will lead to aliasing in the frequency domain of a signal and can be utilized in electronic warfare. However, the question is whether it still works when the compressive sensing (CS) algorithm is applied to reconstruction of the target. This paper concerns sub-Nyquist sampled jamming signals and their corresponding influence on inverse synthetic aperture radar (ISAR) imaging via CS. Results show that multiple deceptive false-target images with finer resolution will be induced after the sub-Nyquist sampled jamming signals are processed with the CS-based reconstruction algorithm; hence, sub-Nyquist sampling can be adopted in the generation of decoys against ISAR with CS. Experimental results on a scattering model of the Yak-42 plane and real data are used to verify the correctness of the analyses.
Keywords: compressed sensing; image reconstruction; image resolution; image sampling; jamming; radar imaging; synthetic aperture radar; CS-based reconstruction algorithm; ISAR imaging; Shannon-Nyquist theorem; Yak-42 plane; compressive sensing algorithm; decoy generation; electronic warfare; frequency domain analysis; inverse synthetic aperture radar imaging; multiple deceptive false-target image resolution; scattering model; sub-Nyquist sampled jamming signal; Compressed sensing; Image resolution; Imaging; Jamming; Radar imaging; Scattering; Sub-Nyquist sampling; compressive sensing (CS); deception jamming; inverse synthetic aperture radar (ISAR) (ID#:14-2713)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815640&isnumber=6862121
- Lagunas, E.; Najar, M., "Robust Primary User Identification Using Compressive Sampling For Cognitive Radios," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, pp.2347,2351, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6854019 In cognitive radio (CR), the problem of limited spectral resources is solved by enabling unlicensed systems to opportunistically utilize the unused licensed bands. Compressive Sensing (CS) has been successfully applied to alleviate the sampling bottleneck in wideband spectrum sensing, leveraging the sparseness of the signal spectrum in open-access networks. This has inspired the design of a number of techniques that identify spectrum holes from sub-Nyquist samples. However, the existence of interference emanating from low-regulated transmissions, which cannot be taken into account in the CS model because of their non-regulated nature, greatly degrades the identification of licensed activity. Capitalizing on the sparsity described by licensed users, this paper introduces a feature-based technique for primary users' spectrum identification with interference immunity which works with a reduced amount of data. The proposed method detects which channels are occupied by primary users and also identifies the primary users' transmission powers without ever reconstructing the signals involved. Simulation results show the effectiveness of the proposed technique for interference suppression and primary user detection.
Keywords: cognitive radio; compressed sensing; interference suppression; radio spectrum management; cognitive radio; compressive sensing; feature-based technique; interference immunity; interference suppression; licensed users; limited spectral resources; low-regulated transmissions; open-access networks; primary user detection; sampling bottleneck; signal spectrum; spectrum holes; spectrum identification; sub-Nyquist samples; unlicensed systems; unused licensed bands; wideband spectrum sensing; Correlation; Feature extraction; Interference; Noise; Sensors; Spectral shape; Vectors (ID#:14-2714)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6854019&isnumber=6853544
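The sub-Nyquist measurement model underlying this line of work is compact enough to sketch. The following Python fragment is a toy example, not the authors' feature-based detector: it forms random projections of a spectrum that is sparse in occupied channels and flags the strongest correlations without full reconstruction. All dimensions are arbitrary assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 256, 64, 3            # channels, measurements, occupied

    # Sparse spectrum: k occupied channels out of n.
    x = np.zeros(n)
    x[rng.choice(n, k, replace=False)] = rng.uniform(1, 2, k)

    # Random sub-Nyquist projections (m << n measurements).
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x

    # Crude occupancy detection by correlation, with no reconstruction.
    scores = np.abs(A.T @ y)
    print(np.sort(np.argsort(scores)[-k:]))  # likely occupied channels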
- Yuxin Chen; Goldsmith, AJ.; Eldar, Y.C., "Channel Capacity Under Sub-Nyquist Nonuniform Sampling," Information Theory, IEEE Transactions on, vol.60, no.8, pp.4739,4756, Aug. 2014. doi: 10.1109/TIT.2014.2323406 This paper investigates the effect of sub-Nyquist sampling upon the capacity of an analog channel. The channel is assumed to be a linear time-invariant Gaussian channel, where perfect channel knowledge is available at both the transmitter and the receiver. We consider a general class of right-invertible time-preserving sampling methods which includes irregular nonuniform sampling, and characterize in closed form the channel capacity achievable by this class of sampling methods, under a sampling rate and power constraint. Our results indicate that the optimal sampling structures extract the set of frequencies that exhibits the highest signal-to-noise ratio among all spectral sets of measure equal to the sampling rate. This can be attained through filterbank sampling with a uniform sampling grid employed at each branch with possibly different rates, or through a single branch of modulation and filtering followed by uniform sampling. These results reveal that for a large class of channels, employing irregular nonuniform sampling sets, which are typically complicated to realize in practice, does not provide capacity gain over uniform sampling sets with appropriate preprocessing. Our findings demonstrate that aliasing or scrambling of spectral components does not provide capacity gain in this scenario, which is in contrast to the benefits obtained from random mixing in spectrum-blind compressive sampling schemes.
Keywords: Gaussian channels; channel bank filters; channel capacity; sampling methods; transceivers; analog channel; capacity gain; channel capacity; filterbank sampling; filtering single branch; frequency set; irregular nonuniform sampling; irregular nonuniform sampling sets; linear time-invariant Gaussian channel; modulation single branch; optimal sampling structures; power constraint; random mixing; receiver; right-invertible time-preserving sampling methods; sampling rate; signal-to-noise ratio; spectral components aliasing; spectral components scrambling; spectral sets; spectrum-blind compressive sampling schemes; sub-Nyquist nonuniform sampling effect; transmitter; uniform sampling grid; Channel capacity; Data preprocessing; Measurement; Modulation; Nonuniform sampling; Upper bound; Beurling density; Nonuniform sampling; channel capacity; irregular sampling; sampled analog channels; sub-Nyquist sampling; time-preserving sampling systems (ID#:14-2715)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814945&isnumber=6851961
- Feng Xi; Shengyao Chen; Zhong Liu, "Quadrature Compressive Sampling for Radar Signals," Signal Processing, IEEE Transactions on, vol.62, no.11, pp.2787,2802, June 1, 2014. doi: 10.1109/TSP.2014.2315168 Quadrature sampling has been widely applied in coherent radar systems to extract in-phase and quadrature (I and Q) components in the received radar signal. However, the sampling is inefficient because the received signal contains only a small number of significant target signals. This paper incorporates the compressive sampling (CS) theory into the design of the quadrature sampling system, and develops a quadrature compressive sampling (QuadCS) system to acquire the I and Q components with low sampling rate. The QuadCS system first randomly projects the received signal into a compressive bandpass signal and then utilizes the quadrature sampling to output compressive I and Q components. The compressive outputs are used to reconstruct the I and Q components. To understand the system performance, we establish the frequency domain representation of the QuadCS system. With the waveform-matched dictionary, we prove that the QuadCS system satisfies the restricted isometry property with overwhelming probability. For K target signals in the observation interval T, simulations show that the QuadCS requires just O(K log(BT/K)) samples to stably reconstruct the signal, where B is the signal bandwidth. The reconstructed signal-to-noise ratio decreases by 3 dB for every octave increase in the target number K and increases by 3 dB for every octave increase in the compressive bandwidth. Theoretical analyses and simulations verify that the proposed QuadCS is a valid system to acquire the I and Q components in the received radar signals.
Keywords: compressed sensing; frequency-domain analysis; probability; radar receivers; radar signal processing; signal reconstruction; signal sampling; QuadCS system; compressive bandpass signal; frequency domain representation; noise figure 3 dB; probability; quadrature compressive sampling theory; received radar signal sampling; signal reconstruction; signal-to-noise ratio; waveform matching; Bandwidth; Baseband; Demodulation; Dictionaries; Frequency-domain analysis; Radar; Vectors; Analog-to-digital conversion; compressive sampling; quadrature sampling; restricted isometry property; sparse signal reconstruction (ID#:14-2716)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781614&isnumber=6809867
- Xianbiao Shu; Jianchao Yang; Ahuja, N., "Non-local Compressive Sampling Recovery," Computational Photography (ICCP), 2014 IEEE International Conference on, pp.1,8, 2-4 May 2014. doi: 10.1109/ICCPHOT.2014.6831806 Compressive sampling (CS) aims at acquiring a signal at a sampling rate below the Nyquist rate by exploiting prior knowledge that a signal is sparse or correlated in some domain. Despite the remarkable progress in the theory of CS, the sampling rate on a single image required by CS is still very high in practice. In this paper, a non-local compressive sampling (NLCS) recovery method is proposed to further reduce the sampling rate by exploiting non-local patch correlation and local piecewise smoothness present in natural images. Two non-local sparsity measures, i.e., non-local wavelet sparsity and non-local joint sparsity, are proposed to exploit the patch correlation in NLCS. An efficient iterative algorithm is developed to solve the NLCS recovery problem, which is shown to have stable convergence behavior in experiments. The experimental results show that our NLCS significantly improves the state-of-the-art of image compressive sampling.
Keywords: compressed sensing; correlation theory; image sampling; iterative methods; natural scenes; wavelet transforms; NLCS recovery method; Nyquist rate; image compressive sampling; iterative algorithm; local piecewise smoothness; natural images; nonlocal compressive sampling recovery; nonlocal joint sparsity; nonlocal patch correlation; nonlocal sparsity measure; nonlocal wavelet sparsity; sampling rate reduction; signal acquisition; sparse signal; Correlation; Image coding; Imaging; Joints; Three-dimensional displays; Videos; Wavelet transforms (ID#:14-2717)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6831806&isnumber=6831796
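For readers wanting a concrete baseline for the kind of CS recovery these papers build on, iterative hard thresholding (IHT) is among the simplest iterative algorithms. The sketch below is a generic IHT under a random Gaussian measurement model, not the NLCS method of the paper; the problem sizes and step-size rule are assumptions.

    import numpy as np

    def iht(A, y, k, iters=300):
        # Iterative hard thresholding: gradient step on ||y - Ax||^2,
        # then keep only the k largest-magnitude entries.
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative step size
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = x + step * (A.T @ (y - A @ x))
            x[np.argsort(np.abs(x))[:-k]] = 0.0  # hard threshold
        return x

    rng = np.random.default_rng(1)
    n, m, k = 128, 48, 4
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = 1.0
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    x_hat = iht(A, A @ x_true, k)
    print(np.linalg.norm(x_hat - x_true))  # recovery error, typically small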
- Banitalebi-Dehkordi, M.; Abouei, J.; Plataniotis, K.N., "Compressive-Sampling-Based Positioning in Wireless Body Area Networks," Biomedical and Health Informatics, IEEE Journal of, vol.18, no.1, pp.335,344, Jan. 2014. doi: 10.1109/JBHI.2013.2261997 Recent achievements in wireless technologies have opened up enormous opportunities for the implementation of ubiquitous health care systems in providing rich contextual information and warning mechanisms against abnormal conditions. This helps with the automatic and remote monitoring/tracking of patients in hospitals and facilitates the supervision of fragile, elderly people in their own domestic environment through automatic systems that handle remote drug delivery. This paper presents a new modeling and analysis framework for multipatient positioning in a wireless body area network (WBAN) which exploits the spatial sparsity of patients and a sparse fast Fourier transform (FFT)-based feature extraction mechanism for monitoring of patients and for reporting the movement tracking to a central database server containing patient vital information. The main goal of this paper is to achieve a high degree of accuracy and resolution in the patient localization with less computational complexity in the implementation using the compressive sensing theory. We represent the patients' positions as a sparse vector obtained by the discrete segmentation of the patient movement space in a circular grid. To estimate this vector, a compressive-sampling-based two-level FFT (CS-2FFT) feature vector is synthesized for each received signal from the biosensors embedded on the patient's body at each grid point. This feature extraction process benefits from the combination of both short-time and long-time properties of the received signals. The robustness of the proposed CS-2FFT-based algorithm in terms of the average positioning error is numerically evaluated using the realistic parameters in the IEEE 802.15.6-WBAN standard in the presence of additive white Gaussian noise. Due to the circular grid pattern and the CS-2FFT feature extraction method, the proposed scheme represents a significant reduction in the computational complexity, while improving the level of the resolution and the localization accuracy when compared to some classical CS-based positioning algorithms.
Keywords: AWGN; body sensor networks; compressed sensing; drug delivery systems; fast Fourier transforms; feature extraction; geriatrics; health care; hospitals; medical signal processing; patient monitoring; personal area networks; telemedicine; tracking; ubiquitous computing; CS-2FFT feature extraction method; CS-2FFT feature vector synthesis; CS-2FFT-based algorithm robustness; FFT-based feature extraction mechanism; IEEE 802.15.6-WBAN standard; abnormal condition contextual information; abnormal condition warning mechanism; additive white Gaussian noise; automatic drug delivery system; automatic patient monitoring; automatic patient tracking; average positioning error; biosensor signal; central database server; circular grid pattern; classical CS-based positioning algorithm; compressive sensing theory; compressive-sampling-based positioning; compressive-sampling-based two-level FFT feature vector; computational complexity reduction; feature extraction process; fragile elderly people supervision; hospital; movement tracking reporting; multipatient positioning analysis; multipatient positioning modeling; numerical evaluation; patient localization accuracy; patient localization resolution; patient movement space discrete segmentation; patient spatial sparsity; patient vital information; remote drug delivery; remote patient monitoring; remote patient tracking; signal long-time properties; signal short-time properties; sparse fast Fourier transform;sparse vector estimation; ubiquitous health care system; wireless body area network; wireless technology; Compressive sampling (CS);patient localization; spatial sparsity; wireless body area networks (WBANs) (ID#:14-2718)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6514596&isnumber=6701130
- Gishkori, S.; Lottici, V.; Leus, G., "Compressive Sampling-Based Multiple Symbol Differential Detection for UWB Communications," Wireless Communications, IEEE Transactions on, vol.13, no.7, pp.3778,3 790, July 2014. doi: 10.1109/TWC.2014.2317175 Compressive sampling (CS) based multiple symbol differential detectors are proposed for impulse-radio ultra-wideband signaling, using the principles of generalized likelihood ratio tests. The CS based detectors correspond to two communication scenarios. One, where the signaling is fully synchronized at the receiver and the other, where there exists a symbol level synchronization only. With the help of CS, the sampling rates are reduced much below the Nyquist rate to save on the high power consumed by the analog-to-digital converters. In stark contrast to the usual compressive sampling practices, the proposed detectors work on the compressed samples directly, thereby avoiding a complicated reconstruction step and resulting in a reduction of the implementation complexity. To resolve the detection of multiple symbols, compressed sphere decoders are proposed as well, for both communication scenarios, which can further help to reduce the system complexity. Differential detection directly on the compressed symbols is generally marred by the requirement of an identical measurement process for every received symbol. Our proposed detectors are valid for scenarios where the measurement process is the same as well as where it is different for each received symbol.
Keywords: compressed sensing; signal detection; signal reconstruction; signal sampling; statistical testing; synchronisation; ultra wideband communication; CS based detectors; Nyquist rate; UWB communications; analog-to-digital converters; complicated reconstruction step; compressed sphere decoders; compressive sampling-based multiple symbol differential detection; generalized likelihood ratio tests; identical measurement process; impulse-radio ultra-wideband signaling; symbol level synchronization; system complexity reduction; Complexity theory; Detectors; Joints; Receivers; Synchronization; Vectors; Compressive sampling (CS); multiple symbol differential detection (MSDD); sphere decoding (SD); ultra-wideband impulse radio (UWB-IR) (ID#:14-2719)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6797969&isnumber=6850111
- Shuyuan Yang; HongHong Jin; Min Wang; Yu Ren; Licheng Jiao, "Data-Driven Compressive Sampling and Learning Sparse Coding for Hyperspectral Image Classification," Geoscience and Remote Sensing Letters, IEEE, vol.11, no.2, pp.479,483, Feb. 2014. doi: 10.1109/LGRS.2013.2268847 Exploring the sparsity in classifying hyperspectral vectors proves to lead to state-of-the-art performance. To learn a compact and discriminative dictionary for accurate and fast classification of hyperspectral images, a data-driven compressive sampling (CS) scheme and a learning sparse coding scheme are used to reduce the dimensionality and size of the dictionary, respectively. First, a sparse radial basis function (RBF) kernel learning network (S-RBFKLN) is constructed to learn a compact dictionary for sparsely representing hyperspectral vectors. Then a data-driven compressive sampling scheme is designed to reduce the dimensionality of the dictionary, and labels of new samples are derived from coding coefficients. Experiments are conducted on NASA EO-1 Hyperion data and AVIRIS Indian Pines data to investigate the performance of the proposed method, and the results show its superiority to its counterparts.
Keywords: geophysical image processing; hyperspectral imaging; image classification; AVIRIS Indian Pines data; NASA EO-1 Hyperion data; coding coefficients; data-driven compressive sampling; hyperspectral image classification; hyperspectral vectors; learning sparse coding scheme; Dictionaries; Hyperspectral imaging; Image coding; Kernel; Training; Vectors; Compressive sampling (CS); data-driven; hyperspectral image classification; sparse radial basis function kernel learning network (S-RBFKLN) (ID#:14-2720)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6578556&isnumber=6675034
- Yan Jing; Naizhang Feng; Yi Shen, "Bearing Estimation Of Coherent Signals Using Compressive Sampling Array," Instrumentation and Measurement Technology Conference (I2MTC) Proceedings, 2014 IEEE International, pp.1221,1225, 12-15 May 2014. doi: 10.1109/I2MTC.2014.6860938 Compressive sampling (CS) is an attractive theory which can achieve sparse signal acquisition and compression simultaneously. Exploiting the sparse property in the spatial domain, the direction of arrival (DOA) of narrowband signals is studied by using compressive sampling measurements in the form of random projections of sensor arrays. The proposed approach, CS array DOA estimation based on eigen space (CSA-ES-DOA), uses a very small number of measurements to resolve the DOA estimation of coherent signals and two closely adjacent signals. Theoretical analysis and simulation results demonstrate that the proposed approach can maintain high angular resolution, low hardware complexity, and low computational cost.
Keywords: compressed sensing; direction-of-arrival estimation; eigenvalues and eigenfunctions; signal detection; signal sampling; DOA estimation; bearing estimation; coherent signals; compressive sampling array; direction of arrival; narrowband signals; sparse signals acquisition; Arrays; Compressed sensing; Direction-of-arrival estimation; Estimation; Multiple signal classification; Signal resolution; Vectors; coherent signals; compressive sampling array; direction of arrival; eigen space (ID#:14-2721)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6860938&isnumber=6860504
- Angrisani, L.; Bonavolonta, F.; Lo Moriello, R.S.; Andreone, A; Casini, R.; Papari, G.; Accardo, D., "First Steps Towards An Innovative Compressive Sampling Based-Thz Imaging System For Early Crack Detection On Aereospace Plates," Metrology for Aerospace (MetroAeroSpace), 2014 IEEE, pp.488,493, 29-30 May 2014. doi: 10.1109/MetroAeroSpace.2014.6865974 The paper deals with the problem of early detection of cracks in composite materials for avionic applications. In particular, the authors present a THz imaging system that exploits compressive sampling (CS) to detect submillimeter cracks with a reduced measurement burden. Traditional methods for THz imaging usually involve a raster scan of the item of interest by means of highly collimated radiation, and the corresponding image is achieved by measuring the received THz power in different positions (pixels) of the desired image. As can be expected, the higher the required resolution, the longer the measurement time. In contrast, two different approaches for THz imaging (namely, continuous wave and time domain spectroscopy) combined with a proper CS solution are used to assure results as good as those granted by a traditional raster scan; a proper set of masks (each of which is characterized by a specific random pattern) is defined for the purpose. A number of tests conducted on simulated data highlighted the promising performance of the proposed method, thus suggesting its implementation in an actual measurement setup.
Keywords: aerospace materials; avionics; composite materials; compressed sensing; condition monitoring; crack detection; plates (structures); terahertz wave imaging; CS; THz imaging system; aerospace plates; avionic applications; composite materials; compressive sampling; continuous wave; early crack detection; submillimeter cracks; time domain spectroscopy; Detectors; Image reconstruction; Image resolution; Imaging; Laser excitation; Quantum cascade lasers; Skin; compressive sampling THz imaging; crack detection; nondestructive evaluation (ID#:14-2722)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6865974&isnumber=6865882
- Das, S.; Singh Sidhu, T., "Application of Compressive Sampling in Synchrophasor Data Communication in WAMS," Industrial Informatics, IEEE Transactions on, vol.10, no.1, pp.450, 460, Feb. 2014. doi: 10.1109/TII.2013.2272088 In this paper, areas of power system synchrophasor data communication which can be improved by compressive sampling (CS) theory are identified. CS reduces the network bandwidth requirements of Wide Area Measurement Systems (WAMS). It is shown that CS can reconstruct synchrophasors at higher rates while satisfying the accuracy requirements of IEEE standard C37.118.1-2011. Different steady state and dynamic power system scenarios are considered here using mathematical models of C37.118.1-2011. Synchrophasors of lower reporting rates are exempted from satisfying the accuracy requirements of C37.118.1-2011 during system dynamics. In this work, synchrophasors are accurately reconstructed from above and below Nyquist rates. Missing data often pose challenges to the WAMS applications. It is shown that missing and bad data can be reconstructed satisfactorily using CS. Performance of CS is found to be better than the existing interpolation techniques for WAMS communication.
Keywords: IEEE standards; compressed sensing; interpolation;phasor measurement; CS theory; IEEE standard C37.118.1-2011; Nyquist rates; WAMS communication; compressive sampling; dynamic power system scenario; interpolation technique; mathematical model; network bandwidth requirements; power system synchrophasor data communication; steady state power system scenario; wide area measurement systems; Compressive sampling; phasor measurement unit; smart grid; synchrophasor; wide area measurement system (WAMS) (ID#:14-2723)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6553079&isnumber=6683081
- Xi, Feng; Chen, Shengyao; Liu, Zhong, "Quadrature Compressive Sampling For Radar Signals: Output Noise And Robust Reconstruction," Signal and Information Processing (ChinaSIP), 2014 IEEE China Summit & International Conference on, pp.790,794, 9-13 July 2014. doi: 10.1109/ChinaSIP.2014.6889353 The quadrature compressive sampling (QuadCS) system is a recently developed low-rate sampling system for acquiring inphase and quadrature (I and Q) components of radar signals. This paper investigates the output noise and robust reconstruction of the QuadCS system with the practical non-ideal bandpass filter. For independently and identically distributed Gaussian input noise, we find that the output noise is a correlated Gaussian one in the non-ideal case. Then we exploit the correlation property and develop a robust reconstruction formulation. Simulations show that the reconstructed signal-to-noise ratio is enhanced 3-4dB with the robust formulation.
Keywords: Compressive sampling; Gaussian noise; quadrature demodulation; radar signals (ID#:14-2724)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889353&isnumber=6889177
- Budillon, A; Ferraioli, G.; Schirinzi, G., "Localization Performance of Multiple Scatterers in Compressive Sampling SAR Tomography: Results on COSMO-SkyMed Data," Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of, vol.7, no.7, pp.2902,2910, July 2014. doi: 10.1109/JSTARS.2014.2344916 The 3-D SAR tomographic technique based on compressive sampling (CS) has proven very effective at recovering the 3-D reflectivity function and hence at estimating multiple scatterers lying in the same range-azimuth resolution cell, but at different elevations. In this paper, a detection method for multiple scatterers, assuming the number of scatterers to be known or preliminarily estimated, has been investigated. The performance of CS processing for identifying and locating multiple scatterers has been analyzed for different numbers of measurements and different reciprocal distances between the scatterers, in the presence of the off-grid effect, and in the case of super-resolution imaging. The proposed method has been tested on simulated and real COSMO-SkyMed data.
Keywords: Detectors ;Image resolution; Signal resolution; Signal to noise ratio; Synthetic aperture radar; Tomography; Compressive sampling (CS); detection; synthetic aperture radar (SAR); tomography (ID#:14-2725)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881815&isnumber=6881766
- Ningfei Dong; Jianxin Wang, "Channel Gain Mismatch And Time Delay Calibration For Modulated Wideband Converter-Based Compressive Sampling," Signal Processing, IET, vol.8, no.2, pp.211, 219, April 2014. doi: 10.1049/iet-spr.2013.0137 The modulated wideband converter (MWC) is a recently proposed compressive sampling system for acquiring sparse multiband signals. For the MWC with digital sub-channel separation block, channel gain mismatch and time delay will lead to a potential performance loss in reconstruction. These gains and delays are represented as an unknown multiplicative diagonal matrix here. The authors formulate the estimation problem as a convex optimisation problem, which can be efficiently solved by utilising least squares estimation. Then the calibrated system model is obtained and the estimates of the gains and time delays of physical channels from the estimate of this matrix are calculated. Numerical simulations verify the effectiveness of the proposed approach.
Keywords: channel estimation; compressed sensing; delay estimation; least mean squares methods; matrix multiplication; modulation; signal detection; signal reconstruction; MWC; channel gain mismatch; compressive sampling system; convex optimisation problem; digital subchannel separation block; gain estimation; least square estimation; modulated wideband converter; numerical simulation; potential performance loss; signal reconstruction; sparse multiband signal acquisition; time delay calibration; time delay estimation; unknown multiplicative diagonal matrix (ID#:14-2726)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786869&isnumber=6786851
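The least-squares estimation of an unknown diagonal gain matrix, as used here for calibration, reduces to an independent scalar fit per channel. The following noise-free Python sketch uses assumed dimensions; the complex gain stands in for the combined magnitude mismatch and delay-induced phase, and is not the authors' full MWC model.

    import numpy as np

    rng = np.random.default_rng(2)
    channels, samples = 4, 200

    # Known calibration input per channel; unknown complex gain per
    # channel (magnitude mismatch plus a delay-induced phase term).
    X = rng.standard_normal((channels, samples)) \
        + 1j * rng.standard_normal((channels, samples))
    d_true = rng.uniform(0.8, 1.2, channels) \
        * np.exp(1j * rng.uniform(-0.3, 0.3, channels))
    Y = d_true[:, None] * X  # observed sub-channel outputs

    # Per-channel least-squares estimate of the diagonal entries.
    d_hat = np.sum(np.conj(X) * Y, axis=1) / np.sum(np.abs(X) ** 2, axis=1)
    print(np.max(np.abs(d_hat - d_true)))  # ~0 in the noise-free case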
- Sejdic, E.; Rothfuss, M.A; Gimbel, M.L.; Mickle, M.H., "Comparative Analysis Of Compressive Sensing Approaches For Recovery Of Missing Samples In Implantable Wireless Doppler Device," Signal Processing, IET, vol.8, no.3, pp.230,238, May 2014. doi: 10.1049/iet-spr.2013.0402 An implantable wireless Doppler device used in microsurgical free flap surgeries can suffer from lost data points. To recover the lost samples, the authors considered approaches based on recently proposed compressive sensing. In this paper, they performed a comparative analysis of several different approaches using synthetic and real signals obtained during blood flow monitoring in four pigs. They considered three different basis functions: Fourier bases, discrete prolate spheroidal sequences, and modulated discrete prolate spheroidal sequences. To avoid the computational burden, they considered approaches based on l1 minimisation for all three bases. To understand the trade-off between computational complexity and accuracy, they also used a recovery process based on matching pursuit with modulated discrete prolate spheroidal sequences bases. For both the synthetic and the real signals, the matching pursuit approach with modulated discrete prolate spheroidal sequences provided the most accurate results. Future studies should focus on the optimisation of the modulated discrete prolate spheroidal sequences in order to further decrease the computational complexity and increase the accuracy.
Keywords: blood flow measurement; compressed sensing; computational complexity; medical signal processing; minimisation; prosthetics; signal sampling; blood flow monitoring; compressive sensing; implantable wireless Doppler device; matching pursuit; microsurgical free flap surgery; missing sample recovery; modulated discrete prolate spheroidal sequences base; recovery process (ID#:14-2727)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6817399&isnumber=6816971
- Mihajlovic, Radomir; Scekic, Marijana; Draganic, Andjela; Stankovic, Srdjan, "An Analysis Of CS Algorithms Efficiency For Sparse Communication Signals Reconstruction," Embedded Computing (MECO), 2014 3rd Mediterranean Conference on, pp.221,224, 15-19 June 2014. doi: 10.1109/MECO.2014.6862700 As the need for increasing the speed and accuracy of real applications constantly grows, new algorithms and methods for signal processing are being intensively developed. The traditional sampling approach based on the Sampling theorem is, in many applications, inefficient because it produces a large number of signal samples. Generally, only a small amount of significant information is present within a signal compared to its length. Therefore, the Compressive Sensing method was developed as an alternative sampling strategy. This method provides efficient signal processing and reconstruction, without the need to collect all of the signal samples. The signal is sampled in a random way, with the number of acquired samples significantly smaller than the signal length. In this paper, a comparison of several algorithms for Compressive Sensing reconstruction is presented. One-dimensional band-limited signals that appear in wireless communications are observed, and the performance of the algorithms in non-noisy and noisy environments is tested. Reconstruction errors and execution times are compared between the different algorithms as well.
Keywords: Compressed sensing; Image reconstruction; Matching pursuit algorithms; Optimization; Reconstruction algorithms; Signal processing; Signal processing algorithms; Compressive Sensing; basis pursuit; iterative hard thresholding; orthogonal matching pursuit; wireless signals (ID#:14-2728)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6862700&isnumber=6862649
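Of the reconstruction algorithms compared in this paper, orthogonal matching pursuit is the easiest to state compactly. The sketch below is a textbook OMP in Python with arbitrary dimensions and a known sparsity level; it illustrates the greedy strategy and is not a reproduction of the paper's benchmark.

    import numpy as np

    def omp(A, y, k):
        # Orthogonal matching pursuit: greedily add the column most
        # correlated with the residual, then re-fit by least squares.
        residual, support = y.copy(), []
        for _ in range(k):
            support.append(int(np.argmax(np.abs(A.T @ residual))))
            coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
            residual = y - A[:, support] @ coef
        x = np.zeros(A.shape[1])
        x[support] = coef
        return x

    rng = np.random.default_rng(3)
    n, m, k = 128, 40, 3
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)
    print(np.linalg.norm(omp(A, A @ x_true, k) - x_true))  # typically ~0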
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Computational Intelligence
- Lavania, S.; Darbari, M.; Ahuja, N.J.; Siddqui, IA, "Application of computational intelligence in measuring the elasticity between software complexity and deliverability," Advance Computing Conference (IACC), 2014 IEEE International, vol., no., pp.1415,1418, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779533 The paper highlights various issues of complexity and deliverability and their impact on software popularity. The use of an expert intelligence system helps in identifying the dominant and non-dominant impediments of software. An FRBS is being developed to quantify the trade-off between complexity and deliverability issues of a software system.
Keywords: computational complexity; expert systems; software quality; FRBS; computational intelligence; dominant impediments; elasticity measurement; expert intelligence system; nondominant impediments; software complexity; software deliverability; software popularity; Conferences; Decision support systems; Handheld computers; Complexity; Deliverability; Expert System (ID#:14-2762)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779533&isnumber=6779283
- Yannakakis, G.N.; Togelius, J., "A Panorama of Artificial and Computational Intelligence in Games," Computational Intelligence and AI in Games, IEEE Transactions on, vol.PP, no.99, pp.1,1. doi: 10.1109/TCIAIG.2014.2339221 This paper attempts to give a high-level overview of the field of artificial and computational intelligence (AI/CI) in games, with particular reference to how the different core research areas within this field inform and interact with each other, both actually and potentially. We identify ten main research areas within this field: NPC behavior learning, search and planning, player modeling, games as AI benchmarks, procedural content generation, computational narrative, believable agents, AI-assisted game design, general game artificial intelligence and AI in commercial games. We view and analyze the areas from three key perspectives: (1) the dominant AI method(s) used under each area; (2) the relation of each area with respect to the end (human) user; and (3) the placement of each area within a human-computer (player-game) interaction perspective. In addition, for each of these areas we consider how it could inform or interact with each of the other areas; in those cases where we find that meaningful interaction either exists or is possible, we describe the character of that interaction and provide references to published studies, if any. We believe that this paper improves understanding of the current nature of the game AI/CI research field and the interdependences between its core areas by providing a unifying overview. We also believe that the discussion of potential interactions between research areas provides a pointer to many interesting future research projects and unexplored subfields.
Keywords: Artificial intelligence; Computational modeling; Evolutionary computation; Games; Planning; Seminars (ID#:14-2763)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855367&isnumber=4804729
- Myers, AJ.; Megherbi, D.B., "An efficient computational intelligence technique for affine-transformation-invariant image face detection, tracking, and recognition in a video stream," Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA), 2014 IEEE International Conference on, vol., no., pp.88,93, 5-7 May 2014. doi: 10.1109/CIVEMSA.2014.6841444 While there are many current approaches to detecting, tracking, and recognizing a given face in a video sequence, the difficulties arising from differences in pose, facial expression, orientation, lighting, scaling, and location remain an open research problem. In this paper we present and analyze a computationally efficient approach to each of the three processes, namely detection, tracking, and recognition of a given template face. The proposed algorithms are faster relative to other existing iterative methods. In particular, we show that unlike such iterative methods, the proposed method does not estimate a given face rotation angle or scaling factor by looking into all possible face rotations or scaling factors. Instead, the proposed method segments and aligns the line between the two eyes' pupils in a given face image with the image x-axis. Reference face images in a given database are normalized with respect to translation, rotation, and scaling. We show how the proposed method of estimating a given face image template's rotation and scaling factor leads to real-time template image rotation and scaling corrections. This allows the recognition algorithm to be less computationally complex than iterative methods.
Keywords: face recognition; image sequences; iterative methods; video signal processing; affine-transformation-invariant image; computational intelligence technique; face detection; face image template; face recognition; face tracking; iterative methods; reference face images; video sequence; video stream; Databases; Face; Face recognition; Histograms; Lighting; Nose; Streaming media; computational intelligence; detection; facial; machine learning; real-time; recognition; tracking; video (ID#:14-2764)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841444&isnumber=6841424
- Antoniades, A; Took, C.C., "A Google approach for computational intelligence in big data," Neural Networks (IJCNN), 2014 International Joint Conference on, vol., no., pp.1050,1054, 6-11 July 2014. doi: 10.1109/IJCNN.2014.6889469 With the advent of the emerging field of big data, it is becoming increasingly important to equip machine learning algorithms to cope with the volume, variety, and velocity of data. In this work, we employ the MapReduce paradigm to address these issues as an enabling technology for the well-known support vector machine to perform distributed classification of skin segmentation. An open source implementation of MapReduce called Hadoop offers a streaming facility, which allows us to focus on the computational intelligence problem at hand, instead of focusing on the implementation of the learning algorithm. This is the first time that the support vector machine has been proposed to operate in a distributed fashion as it is, circumventing the need for long and tedious mathematical derivations. This highlights the main advantages of MapReduce - its generality and distributed computation for machine learning with minimum effort. Simulation results demonstrate the efficacy of MapReduce when distributed classification is performed even when only two machines are involved, and we highlight some of the intricacies of MapReduce in the context of big data.
Keywords: Big Data; distributed processing; learning (artificial intelligence); pattern classification; public domain software; support vector machines; Google approach; MapReduce; big data; computational intelligence; distributed classification; machine learning algorithms; open source Hadoop; skin segmentation; streaming facility; support vector machine; Big data; Context; Machine learning algorithms; Skin; Support vector machines; Testing; Training (ID#:14-2765)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6889469&isnumber=6889358
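The distributed-classification idea can be mimicked on a single machine: train one SVM per data partition (the map phase) and combine predictions by majority vote (a simple reduce). The Python sketch below uses scikit-learn and synthetic data; it is an analogy for illustration only and does not involve Hadoop streaming or the skin-segmentation dataset used in the paper.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=2000, n_features=4, random_state=0)

    # "Map": train an independent SVM on each of four data partitions.
    parts = np.array_split(np.arange(len(y)), 4)
    models = [SVC().fit(X[idx], y[idx]) for idx in parts]

    # "Reduce": combine per-partition predictions by majority vote
    # (ties between the four voters fall to class 0 here).
    def predict(models, X_new):
        votes = np.stack([m.predict(X_new) for m in models])
        return (votes.mean(axis=0) > 0.5).astype(int)

    print(predict(models, X[:5]), y[:5])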
- Sharif, N.; Zafar, K.; Zyad, W., "Optimization of requirement prioritization using Computational Intelligence technique," Robotics and Emerging Allied Technologies in Engineering (iCREATE), 2014 International Conference on, pp.228,234, 22-24 April 2014. doi: 10.1109/iCREATE.2014.6828370 Requirement Engineering (RE) is considered an important part of the Software Development Life Cycle. It is a traditional Software Engineering (SE) process. The goal of RE is to identify, analyze, document, and validate requirements. Requirement prioritization is a crucial step towards making good decisions about the product plan, but it is often neglected. In many cases a product without proper prioritization is considered a failure because it fails to meet its core objectives. When a project has a tight schedule, restricted resources, and high customer expectations, it is necessary to deploy the most critical and important features as early as possible. For this purpose requirements are prioritized. Several requirement prioritization techniques have been presented by various researchers over the past years in the domain of SE as well as Computational Intelligence. A new technique is presented in this paper which is a hybrid of both domains, named FuzzyHCV. FuzzyHCV is a hybrid of Hierarchical Cumulative Voting (HCV) and a Fuzzy Expert System. Comparative analysis is performed between the new technique and an existing HCV technique. Results show that the proposed technique is more reliable and accurate.
Keywords: expert systems; fuzzy set theory; software engineering; statistical analysis; FuzzyHCV technique; RE; SE process; computational intelligence technique; fuzzy expert system; hierarchical cumulative voting; requirement engineering; requirement prioritization techniques; software development life cycle; software engineering process; Computers; Documentation; Expert systems; Fuzzy systems; Software; Software engineering; Fuzzy HCV; Fuzzy systems; HCV; Requirement prioritization (ID#:14-2766)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828370&isnumber=6828323
- Alvares, Marcos; Marwala, Tshilidzi; de Lima Neto, Fernando Buarque, "Application of Computational Intelligence For Source Code Classification," Evolutionary Computation (CEC), 2014 IEEE Congress on, vol., no., pp.895,902, 6-11 July 2014. doi: 10.1109/CEC.2014.6900300 Multi-language Source Code Management systems have been widely used to collaboratively manage software development projects. These systems represent a fundamental step towards fully exploiting communication enhancements, producing concrete value in the way people collaborate to build more reliable computational systems. These systems evaluate the results of analyses in order to organise and optimise source code. Such analyses are strongly dependent on technologies (i.e., frameworks, programming languages, libraries), each with its own characteristics and syntactic structure. To overcome this limitation, source code classification is an essential preprocessing step to identify which analyses should be evaluated. This paper introduces a new approach for generating content-based classifiers by using Evolutionary Algorithms. Experiments were performed on real-world source code collected from more than 200 different open source projects. Results show that this approach can be successfully used to create more accurate source code classifiers. The resulting classifier is also expansible and flexible to new classification scenarios (opening perspectives for new technologies).
Keywords: Algorithm design and analysis; Computer languages; Databases; Genetic algorithms; Libraries; Sociology; Statistics (ID#:14-2767)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900300&isnumber=6900223
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Confinement
In photonics, confinement is important to loss avoidance. In quantum theory, it relates to energy levels. The articles cited here cover both concepts and were presented or published in the first half of 2014.
- Hasan, D.; Alam, M.S., "Ultra-Broadband Confinement in Deep Sub-Wavelength Air Hole of a Suspended Core Fiber," Lightwave Technology, Journal of, vol.32, no.8, pp.1434,1441, April 15, 2014. doi: 10.1109/JLT.2014.2306292 We demonstrate low loss (0.4043 dB/km at 1.55 μm) deep sub-wavelength broadband evanescent field confinement in low index material from near-IR to mid-IR wavelengths with the aid of a specialty optical fiber, while achieving at least 1.5 dB improvement in figure of merit over the previous design. Plane strain analysis has been conducted to foresee fiber-material-dependent fabrication challenges associated with such nanoscale features due to thermal stress. Size dependence of the air hole is explained rigorously by modifying the existing slot waveguide model. We report significant improvement of field intensity, interaction length, bandwidth, and surface sensitivity over the conventional free-standing nanowire structure. The effect of metal layer thickness on surface plasmon resonance sensitivity is explored as well. A method to obtain a strong evanescent field in such structures for medical sensing is also demonstrated. The proposed technique to enhance sub-wavelength confinement is expected to be of potential engineering merit for optical nanosensors, atomic-scale waveguides for single molecule inspection, and ultra-low mode volume cavities.
Keywords: fibre optic sensors; nanomedicine; nanophotonics; nanosensors; nanowires; optical fibre fabrication; optical fibre losses; optical materials; surface plasmon resonance; thermal stresses; atomic scale waveguide; bandwidth; conventional free standing nanowire structure; deep subwavelength air hole; fiber material dependent fabrication; field intensity; figure of merit; gain 1.5 dB; interaction length; low index material; low loss deep subwavelength broadband evanescent field confinement; medical sensing; metal layer thickness; mid IR wavelengths; nanoscale feature; optical nanosensors; plane strain analysis; single molecule inspection; size dependence; slot waveguide model; specialty optical fiber; strong evanescent field; subwavelength confinement; surface plasmon resonance sensitivity; surface sensitivity; suspended core fiber; thermal stress; ultrabroadband confinement; ultralow mode volume cavity; wavelength 1.55 μm; Indexes; Materials; Optical fiber devices; Optical fiber dispersion; Optical fibers; Optical surface waves; Characteristic decay length; evanescent sensing; field intensity; slot waveguide; sub-wavelength confinement; suspended core fiber (ID#:14-2729)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6742596&isnumber=6759765
- Paul, U.; Hasan, M.; Rahman, M.T.; Bhuiyan, AG., "Effect of QD Size And Band-Offsets On Confinement Energy In InN QD Heterostructure," Electrical Information and Communication Technology (EICT), 2013 International Conference on, pp.1,4, 13-15 Feb. 2014. doi: 10.1109/EICT.2014.6777897 A detailed theoretical analysis of how QD size variation and band-offset affect the confinement energy of an InN QD is presented. Low dimensional structures show a strong quantum confinement effect, which results in shifting the ground state away from the band edge and in discrete eigenstates. By graphically solving the 1D Schrodinger equation, the ground quantized energy levels of electrons were computed, and using the Luttinger-Kohn 4x4 Hamiltonian matrix, the ground quantized energy levels of holes were determined. Our results allow us to tune dot size and band-offset to obtain the required bandgap for InN-based low dimensional device design.
Keywords: III-V semiconductors; Schrodinger equation; energy gap; ground states; indium compounds; semiconductor heterojunctions; semiconductor quantum dots; InN; Luttinger-Kohn 4x4 Hamiltonian matrix; ground quantized energy level; band edge; band gap; band-offset effects; confinement energy; discrete eigenstates; graphically solving 1D Schrodinger ground quantized energy levels; ground state; low dimensional device design; quantum confinement effect; quantum dot heterostructure; quantum dot size effect; quantum dot size variation; theoretical analysis; Charge carrier processes; Energy states; Equations; Materials; Mathematical model; Optoelectronic devices; Quantum dots; Confinement energy; Indium Nitride; Quantum dots (QD) (ID#:14-2730)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777897&isnumber=6777807
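For orientation, the electron levels described above follow from the textbook one-dimensional finite square well; the hole levels require the Luttinger-Kohn treatment, which is beyond this note. For a well of width L and depth V0 set by the band offset, with effective mass m*, the even-parity bound states satisfy the transcendental relation below. This is a standard textbook relation, not an equation taken from the paper.

    k \tan\!\left(\frac{kL}{2}\right) = \kappa, \qquad
    k = \frac{\sqrt{2 m^{*} E}}{\hbar}, \qquad
    \kappa = \frac{\sqrt{2 m^{*} (V_{0} - E)}}{\hbar}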
- Tripathi, Neeti; Yamashita, Masaru; Uchida, Takeyuki; Akai, Tomoko, "Observations on Size Confinement Effect In B-C-N Nanoparticles Embedded In Mesoporous Silica Channels," Applied Physics Letters, vol.105, no.1, pp.014106,014106-4, Jul 2014. doi: 10.1063/1.4890000 Fluorescent B-C-N/silica nanoparticles were synthesized by a solution impregnation method. The effect of B-C-N particle size on the optical properties was investigated by varying the silica pore sizes. Formation of B-C-N nanoparticles within the mesoporous matrix is confirmed by x-ray diffraction, transmission electron microscopy, and Fourier transform infrared spectroscopy. Furthermore, a remarkable blue-shift in emission peak centres is observed with decreasing pore size, in conjunction with band gap modification, ascribed to the size confinement effect. A detailed analysis of the experimental results using theoretically defined confinement models demonstrates that B-C-N nanoparticles in the size range of 3-13 nm fall within the confinement regime. This work provides experimental evidence of the size confinement effect in smaller B-C-N nanoparticles.
Keywords: (not provided) (ID#:14-2731)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853278&isnumber=6849859
- Lingyun Wang; Youmin Wang; Xiaojing Zhang, "Integrated Grating-Nanoslot Probe Tip for Near-Field Subwavelength Light Confinement and Fluorescent Sensing," Selected Topics in Quantum Electronics, IEEE Journal of, vol.20, no.3, pp.184,194, May-June 2014. doi: 10.1109/JSTQE.2014.2301232 We demonstrate a near-field sub-wavelength light confinement probe tip comprised of a compact embedded metallic focus grating (CEMFG) coupler and a photonic crystal (PhC) based λ/4 nano-slot tip, in terms of its far-field radiation directivity and near-field sub-wavelength light enhancement. The embedded metallic grating coupler increases the free space coupling at a tilted coupling angle of 25° with over 280 times light intensity enhancement for a 10 μm coupler size. Further, a 20 nm air slot embedded in a single line defect PhC waveguide is designed, using the impedance matching concept of the λ/4 "air rod", to form the TE mode light wave resonance right at the probe tip aperture opening. This leads to light beam spot size reduction down to λ/20. The near-field center peak intensity is enhanced by 4.2 times over that of the rectangular waveguide input, with a total enhancement factor of 1185 from the free space laser source intensity. Near-field fluorescence excitation and detection also demonstrate its single-molecule enhanced fluorescence measurement capability.
Keywords: diffraction gratings; fluorescence; integrated optics; nanophotonics; nanosensors; optical couplers; optical sensors; optical waveguides; photonic crystals; rectangular waveguides; TE mode light wave resonance; air rod; air slot; compact embedded metallic focus grating coupler; coupler size; far-field radiation directivity; fluorescent sensing; free space coupling; free space laser source intensity; impedance matching; integrated grating-nanoslot probe tip; light beam spot size reduction; light intensity enhancement; near-field center peak intensity; near-field fluorescence excitation; near-field sub-wavelength light confinement probe tip; near-field sub-wavelength light enhancement; near-field subwavelength light confinement; photonic crystal based λ/4 nanoslot tip; probe tip aperture opening; rectangular waveguide input; single line defect PhC waveguide; single molecular enhanced fluorescence measurement capability; size 10 μm; tilted coupling angle; Couplers; Couplings; Etching; Gratings; Metals; Optical waveguides; Probes; λ/4 nano-slot; Metallic grating; light confinement; near-field; photonic crystal; single molecule fluorescence detection (ID#:14-2732)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6716001&isnumber=6603368
- Ali, M.S.; Islam, A; Ahmad, R.; Siddique, AH.; Nasim, K.M.; Khan, M.AG.; Habib, M.S., "Design Of Hybrid Photonic Crystal Fibers For Tailoring Dispersion And Confinement Loss," Electrical Information and Communication Technology (EICT), 2013 International Conference on, pp.1,4, 13-15 Feb. 2014. doi: 10.1109/EICT.2014.6777861 This paper proposes a hybrid cladding photonic crystal fiber offering flat dispersion and low confinement loss in the telecom bands. Simulation results reveal that a near-zero ultra-flattened dispersion of 0 ± 1.20 ps/(nm·km) is obtained over a 1.25 to 1.70 μm wavelength range, i.e., a 450 nm flat band, along with confinement losses below 10^-2 dB/km at the operating wavelength of 1.55 μm. Moreover, the sensitivity of the fiber dispersion properties to a ±1% to ±5% variation in the optimum parameters is studied for practical conditions.
Keywords: holey fibres; optical fibre dispersion; optical fibre losses; photonic crystals; Telecom bands; design; fiber dispersion properties; flat dispersion; hybrid cladding photonic crystal fiber; low confinement losses; wavelength 1.25 μm to 1.70 μm; Chromatic dispersion; Optical fiber communication; Optical fiber dispersion; Optical fibers; Photonic crystal fibers; Refractive index; chromatic dispersion; confinement loss; effective area; photonic crystal fiber (ID#:14-2733)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777861&isnumber=6777807
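A reader reproducing such results would compute both quoted figures of merit from the wavelength-dependent effective index n_eff of the fundamental mode. The following definitions are standard background for photonic crystal fibers, not formulas taken from this paper:

    D(\lambda) = -\frac{\lambda}{c}\,\frac{d^2\,\mathrm{Re}[n_{\mathrm{eff}}]}{d\lambda^2}, \qquad L_c = \frac{20}{\ln 10}\,\frac{2\pi}{\lambda}\,\mathrm{Im}[n_{\mathrm{eff}}] \;\; \text{[dB/m]}

"Ultra-flattened dispersion" thus means keeping the curvature of Re[n_eff](λ) near zero across the 450 nm band, while confinement loss tracks the imaginary part of the mode index.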
- Ghasemi, M.; Choudhury, P.K., "Effect Due To Down-Tapering On The Hybrid Mode Power Confinement In Liquid Crystal Optical Fiber," Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), 2014 11th International Conference on, pp.1,4, 14-17 May 2014. doi: 10.1109/ECTICon.2014.6839753 The paper presents an analysis of wave propagation through a down-tapered three-layer liquid crystal optical fiber with respect to the power confinement of the hybrid modes supported by the guide. The inner two regions are homogeneous and isotropic dielectrics, whereas the outermost layer is composed of radially anisotropic liquid crystal material. It is found that the guide supports a relatively high amount of power in the liquid crystal region, which indicates the possible use of such microstructures in a variety of optical applications. The effects on confinement due to positive and negative values of the taper slope (illustrating the taper type) are reported.
Keywords: dielectric materials; liquid crystals; micro-optics; optical fibres; down-tapering; homogeneous dielectrics; hybrid mode power confinement; hybrid modes; isotropic dielectrics; liquid crystal optical fiber; power confinement; radially anisotropic liquid crystal material; taper slopes; wave propagation; Dielectrics; Equations ;Liquid crystals; Optical fiber dispersion; Optical fibers; Liquid crystal fibers; complex mediums; electromagnetic wave propagation (ID#:14-2734)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6839753&isnumber=6839704
- Janjua, B.; Ng, T.K.; Alyamani, AY.; El-Desouki, M.M.; Ooi, B.S., "Enhancement of Hole Confinement by Monolayer Insertion in Asymmetric Quantum-Barrier UVB Light Emitting Diodes," Photonics Journal, IEEE, vol.6, no.2, pp.1,9, April 2014. doi: 10.1109/JPHOT.2014.2310199 We study the enhanced hole confinement obtained by inserting a large bandgap AlGaN monolayer insertion (MLI) between the quantum well (QW) and the quantum barrier (QB). The numerical analysis examines the energy band alignment diagrams, using a self-consistent 6 × 6 k·p method, and considers the carrier distribution and recombination rates (Shockley-Read-Hall, Auger, and radiative), under equilibrium and forward bias conditions. The active region is based on AlaGa1-aN (barrier)/AlbGa1-bN (MLI)/AlcGa1-cN (well)/AldGa1-dN (barrier), where the Al mole fraction b of the MLI is the largest and that of the well, c, is the smallest. The large bandgap AlbGa1-bN monolayer inserted between the QW and QB was found to be effective in providing stronger hole confinement. With the proposed band engineering scheme, an increase of more than 30% in the spatial overlap of the carrier wavefunctions was obtained, with a considerable increase in carrier density and direct radiative recombination rates. The single-QW-based UV-LED was designed to emit at 280 nm, an effective wavelength for water disinfection.
Keywords: Auger effect; III-V semiconductors; aluminium compounds; electron-hole recombination; gallium compounds; k.p calculations; light emitting diodes; monolayers; semiconductor quantum wells; wave functions; wide band gap semiconductors; AlGaN; Auger recombination rates; MLI; Shockley-Read-Hall recombination rates; asymmetric quantum-barrier UVB light emitting diodes; carrier density; carrier distribution; carrier wavefunction; direct radiative recombination rates; energy band alignment diagrams; enhanced hole confinement; hole confinement; monolayer insertion; numerical analysis; radiative recombination rates; recombination rates; self-consistent 6 × 6 k·p method; water disinfection; Aluminum gallium nitride; Charge carrier density; Charge carrier processes; III-V semiconductor materials; Light emitting diodes; Radiative recombination; Light emitting diodes (LEDs); energy barrier; semiconductor quantum well; thin insertion layer; ultraviolet; water disinfection; wavefunction overlap (ID#:14-2735)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758387&isnumber=6750774
- Qijing Lu; Fang-Jie Shu; Chang-Ling Zou, "Extremely Local Electric Field Enhancement and Light Confinement in Dielectric Waveguide," Photonics Technology Letters, IEEE, vol.26, no.14, pp.1426,1429, July 15, 2014. doi: 10.1109/LPT.2014.2322595 Extremely local electric field enhancement and light confinement are demonstrated in dielectric waveguides with corner and gap geometry. Classical electromagnetic theory predicts that the field enhancement and confinement abilities are inversely proportional to the radius of the rounded corner (r) and of the gap (g), and show a singularity for infinitesimal r and g. For practical parameters with r = g = 10 nm, the mode area of opposing apex-to-apex fan-shaped waveguides can be as small as 4 × 10^-3 A0 (A0 = λ²/4), far beyond the diffraction limit. The lossless dielectric corner and gap structures offer an alternative method to enhance light-matter interactions without the use of metal nanostructures, and can find applications in quantum electrodynamics, sensors, and nanoparticle trapping.
Keywords: light diffraction; optical waveguide theory; apex-to-apex fan-shaped waveguides; classical electromagnetic theory; corner geometry; dielectric waveguide; diffraction limit; extremely local electric field enhancement; gap geometry; gap radius; gap structures; light confinement; light-matter interactions; lossless dielectric corner; nanoparticle trapping; quantum electrodynamics; rounded corner radius; sensors; Antennas; Dielectrics; Electric fields; Optical waveguides; Plasmons; Waveguide discontinuities; Dielectric waveguides; nanophotonics; optical waveguides (ID#:14-2736)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6815656&isnumber=6840377
- Kai-Jun Che, "Waveguide modulated photonic molecules with metallic confinement," Transparent Optical Networks (ICTON), 2014 16th International Conference on, pp.1,3, 6-10 July 2014. doi: 10.1109/ICTON.2014.6876649 Photonic molecules based on the evanescent wave have displayed unique physical characteristics, such as quality factor enhancement of the optical system and mode transitions between modes of different order. The waveguide, as a basic photonic element, is introduced for indirect optical interaction and guided emission of photonic molecules. Because metal can effectively confine photons in a fixed space and, acting as an optical insulator, facilitates high-density device packaging, the optical characteristics of photonic molecules with metallic confinement, including mode and emission characteristics, are investigated by electromagnetic analysis combined with finite difference time domain simulations. The results show that the metal dissipation of the odd and even states splits, since they have different morphologies at the coupling area, and that the guided emission is strongly determined by the metal-dielectric confined waveguide. Moreover, non-local optical interaction between two whispering gallery circular resonators through a waveguide coupled in the radial direction is proposed to overcome the small penetration depth of the evanescent wave. Strong optical interaction is found for the even state, and the interaction intensity depends on the features of the waveguide.
Keywords: Q-factor; finite difference time-domain analysis; optical resonators; optical waveguides; coupling area; electromagnetic analysis; evanescent permeation; evanescent wave; even state; finite difference time domain simulations; guided emission; high density device package; interaction intensity; metal dissipation; metal-dielectric confined waveguide; metallic confinement; mode transition; nonlocal optical interaction; odd state; optical insulator; photonic molecules; quality factor enhancement; waveguide modulated photonic molecules; whispering gallery circular resonators; Integrated optics; Optical coupling; Optical resonators; Optical surface waves; Optical waveguides; Photonics; Stimulated emission; guided emission; metallic confinement; non-local optical interaction; photonic molecules (ID#:14-2737)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876649&isnumber=6876260
- Hegde, Ganesh; Povolotskyi, Michael; Kubis, Tillmann; Charles, James; Klimeck, Gerhard, "An Environment-Dependent Semi-Empirical Tight Binding Model Suitable For Electron Transport In Bulk Metals, Metal Alloys, Metallic Interfaces, And Metallic Nanostructures. II. Application--Effect Of Quantum Confinement And Homogeneous Strain On Cu Conductance," Journal of Applied Physics, vol.115, no.12, pp.123704, 123704-5, Mar 2014. doi: 10.1063/1.4868979 The semi-empirical tight binding model developed in Part I [Hegde et al., J. Appl. Phys. 115, 123703 (2014)] is applied in Part II to metal transport problems of current relevance. A systematic study of the effect of quantum confinement, transport orientation, and homogeneous strain on the electronic transport properties of Cu is carried out. It is found that quantum confinement from bulk to nanowire boundary conditions leads to significant anisotropy in the conductance of Cu along different transport orientations. Compressive homogeneous strain is found to reduce resistivity by increasing the density of conducting modes in Cu. The [110] transport orientation in Cu nanowires is found to be the most favorable for mitigating conductivity degradation, since it shows the least reduction in conductance with confinement and responds most favorably to compressive strain.
Keywords: (not provided) (ID#:14-2738)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778709&isnumber=6777935
- Padilla, J.L.; Alper, C.; Gamiz, F.; Ionescu, AM., "Assessment of field-induced quantum confinement in heterogate germanium electron-hole bilayer tunnel field-effect transistor," Applied Physics Letters, vol.105, no.8, pp.082108, 082108-4, Aug 2014. doi: 10.1063/1.4894088 Quantum mechanical confinement in recent germanium electron-hole bilayer tunnel field-effect transistors has been shown to substantially affect the band-to-band tunneling (BTBT) mechanism between the electron and hole inversion layers that constitutes the operating principle of these devices. The vertical electric field that appears across the intrinsic semiconductor to give rise to the bilayer configuration makes the formerly continuous conduction and valence bands become a discrete set of energy subbands, thereby increasing the effective bandgap close to the gates and reducing the BTBT probabilities. In this letter, we present a simulation approach that shows how the inclusion of quantum confinement, and the subsequent modification of the band profile, results in the appearance of lateral tunneling to the underlap regions that greatly degrades the subthreshold swing of these devices. To overcome this drawback imposed by confinement, we propose a heterogate configuration that suppresses this parasitic tunneling and enhances device performance.
Keywords: (not provided) (ID#:14-2739)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6887270&isnumber=6884699
- Puthen Veettil, B.; Konig, D.; Patterson, R.; Smyth, S.; Conibeer, G., "Electronic Confinement In Modulation Doped Quantum Dots," Applied Physics Letters, vol.104, no. 15, pp. 153102, 153102-3, Apr 2014. doi: 10.1063/1.4871576 Modulation doping, an effective way to dope quantum dots (QDs), modifies the confinement energy levels in the QDs. We present a self-consistent full multi-grid solver to analyze the effect of modulation doping on the confinement energy levels in large-area structures containing Si QDs in SiO2 and Si3N4 dielectrics. The confinement energy was found to be significantly lower when QDs were in close proximity to dopant ions in the dielectric. This effect was found to be smaller in Si3N4, while smaller QDs in SiO2 were highly susceptible to energy reduction. The energy reduction was found to follow a power law relationship with the QD size.
Keywords: (not provided) (ID#:14-2740)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798595&isnumber=6798591
- Li Wei; Aldawsari, S.; Wing-Ki Liu; West, B.R., "Theoretical Analysis of Plasmonic Modes in a Symmetric Conductor-Gap-Dielectric Structure for Nanoscale Confinement," Photonics Journal, IEEE, vol.6, no.3, pp.1,10, June 2014. doi: 10.1109/JPHOT.2014.2326677 A hybrid plasmonic waveguide is considered one of the most promising architectures for long-range subwavelength guiding. The objective of this paper is to present a theoretical analysis of plasmonic guided modes in a symmetric conductor-gap-dielectric (SCGD) system, which consists of a thin metal conductor symmetrically sandwiched between two-layer dielectrics with low-index nanoscale gaps inside. The SCGD waveguide can support an ultra-long-range surface plasmon-polariton mode when the thickness of a low-index gap is smaller than a cutoff gap thickness. For relatively high index contrast ratios of the cladding to gap layers, the cutoff gap thickness is only a few nanometers, within which the electric field of the guided SCGD mode is tightly confined. The dispersion equations and approximate analytical expressions for the cutoff gap thickness are derived in order to characterize the properties of the guided mode. Our simulation results show that the cutoff gap thickness can be tailored by the metal film thickness and the indices of the cladding and gap materials. A geometrical scheme for lateral confinement is also presented. Such a structure, with its unique features of low loss and strong confinement, has applications in the fabrication of active and passive plasmonic devices.
Keywords: metallic thin films; nanophotonics; optical waveguides; plasmonics; polaritons; surface plasmons; cutoff gap thickness; dispersion equations; electric field; hybrid plasmonic waveguide; low-index nanoscale gaps; metal film thickness; nanoscale confinement; plasmonic guided modes; surface plasmon-polariton mode; symmetric conductor-gap-dielectric structure; theoretical analysis; thin metal conductor; two-layer dielectrics; Equations; Films; Indexes; Metals; Optical waveguides; Plasmons; Propagation losses; Surface plasmons; guided wave; integrated optics; waveguides (ID#:14-2741)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823089&isnumber=6809260
- Barbagiovanni, E.G.; Lockwood, D.J.; Rowell, N.L.; Costa Filho, R.N.; Berbezier, I; Amiard, G.; Favre, L.; Ronda, A; Faustini, M.; Grosso, D., "Role of Quantum Confinement In Luminescence Efficiency Of Group IV Nanostructures," Journal of Applied Physics, vol.115, no.4, pp.044311, 044311-4, Jan 2014. doi: 10.1063/1.4863397 Experimental results obtained previously for the photoluminescence efficiency (PLeff) of Ge quantum dots (QDs) are theoretically studied. A log-log plot of PLeff versus QD diameter (D) resulted in an identical slope for each Ge QD sample only when E_G ∝ (D^-2 + D^-1). We identified that above D ≈ 6.2 nm, E_G ∝ D^-1 due to a changing effective mass (EM), while below D ≈ 4.6 nm, E_G ∝ D^-2 due to electron/hole confinement. We propose that as the QD size is initially reduced, the EM is reduced, which increases the Bohr radius and interface scattering, until eventually pure quantum confinement effects dominate at small D.
Keywords: (not provided) (ID#:14-2742)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6728975&isnumber=6720061
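The slope analysis described above amounts to fitting power laws E_G ∝ D^m on a log-log scale and reading off m in each size regime. A minimal Python sketch of that fit follows; the (D, E_G) pairs are invented stand-ins for measured data:

    import numpy as np

    # Hypothetical (diameter nm, gap eV) pairs -- illustrative values only.
    D = np.array([3.0, 4.0, 4.5, 7.0, 9.0, 13.0])
    Eg = np.array([1.90, 1.55, 1.45, 1.05, 0.95, 0.85])

    # Slope of log(Eg) vs log(D): m near -2 indicates the confinement
    # regime, m near -1 the effective-mass-dominated regime.
    m_small, _ = np.polyfit(np.log(D[D < 4.6]), np.log(Eg[D < 4.6]), 1)
    m_large, _ = np.polyfit(np.log(D[D > 6.2]), np.log(Eg[D > 6.2]), 1)
    print(f"slope below 4.6 nm: {m_small:.2f}; above 6.2 nm: {m_large:.2f}")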
- Ishizaka, Yuhei; Nagai, Masaru; Saitoh, Kunimasa, "Strong Light Confinement In A Metal-Assisted Silicon Slot Waveguide," OptoElectronics and Communication Conference and Australian Conference on Optical Fibre Technology, 2014, pp.103,105, 6-10 July 2014. A metal-assisted silicon slot waveguide is presented. Numerical results show that the proposed structure achieves strong light confinement in a low-index region, which improves the sensitivity of refractive index sensors.
Keywords: Metals; Optical waveguides; Optimized production technology; Refractive index; Sensitivity; Sensors; Silicon (ID#:14-2743)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6888012&isnumber=6887957
- Park, Y.; Hirose, Y.; Nakao, S.; Fukumura, T.; Xu, J.; Hasegawa, T., "Quantum Confinement Effect In Bi Anti-Dot Thin Films With Tailored Pore Wall Widths And Thicknesses," Applied Physics Letters, vol. 104, no. 2, pp.023106,023106-4, Jan 2014. doi: 10.1063/1.4861775 We investigated quantum confinement effects in Bi anti-dot thin films grown on anodized aluminium oxide templates. The pore wall widths (wBi) and thickness (t) of the films were tailored to values longer or shorter than the Fermi wavelength of Bi (λF = 40 nm). Magnetoresistance measurements revealed a well-defined weak antilocalization effect below 10 K. Coherence lengths (Lph) as functions of temperature were derived from the magnetoresistance versus field curves by assuming the Hikami-Larkin-Nagaoka model. The anti-dot thin film with wBi and t smaller than λF showed low-dimensional electronic behavior at low temperatures where Lph(T) exceeds wBi or t.
Keywords: (not provided) (ID#:14-2744)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6712880&isnumber=6712870
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Control Theory
According to Wikipedia, "Control theory is an interdisciplinary branch of engineering and mathematics that deals with the behavior of dynamical systems with inputs, and how their behavior is modified by feedback." In cyber security, control theory offers methods and approaches to potentially solve hard problems. The articles cited here look at both theory and applications and were presented in the first half of 2014.
- Spyridopoulos, Theodoros; Maraslis, Konstantinos; Tryfonas, Theo; Oikonomou, George; Li, Shancang, "Managing Cyber Security Risks In Industrial Control Systems With Game Theory And Viable System Modelling," System of Systems Engineering (SOSE), 2014 9th International Conference on, pp.266,271, 9-13 June 2014. doi: 10.1109/SYSOSE.2014.6892499 Cyber security risk management in industrial control systems has been a challenging problem for both practitioners and the research community. The proprietary nature of these systems, along with their complexity, renders traditional approaches insufficient and creates the need for a holistic point of view. This paper draws upon the principles of the Viable System Model and game theory to present a novel systemic approach to cyber security management in this field, taking into account the complex inter-dependencies and providing cost-efficient defence solutions.
Keywords: Airports; Computer security; Game theory; Games; Industrial control; Risk management; asset evaluation; game theory; industrial control systems; risk management; viable system model (ID#:14-2745)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6892499&isnumber=6892448
- Kumar, P.; Singh, AK.; Kummari, N.K., "P-Q Theory Based Modified Control Algorithm For Load Compensating Using DSTATCOM," Harmonics and Quality of Power (ICHQP), 2014 IEEE 16th International Conference on, pp.591,595, 25-28 May 2014. doi: 10.1109/ICHQP.2014.6842810 This paper proposes a control algorithm for a DSTATCOM (Distributed STATic COMpensator) to compensate source current harmonics in a non-sinusoidal voltage source environment. A 3-leg VSC (voltage source converter) based DSTATCOM is used for load compensation, on a system with balanced 5th harmonic PCC voltages in a 3-phase, 4-wire distribution system. Simulations are performed in the MATLAB® environment for two load conditions: (i) a 3-phase non-linear load (NLL), and (ii) an NLL with a reactive load. The results show that the proposed modification to the p-q theory control algorithm achieves successful harmonic compensation at the load side.
Keywords: compensation; power convertors; static VAr compensators; 3-leg VSC; 3-phase 4-wire distribution system; 3-phase nonlinear load; Matlab environment; NLL; balanced 5th harmonic PCC voltages; control algorithm; distributed static compensator; load compensation; nonsinusoidal voltage source environment ;p-q theory based modified control algorithm; point of common coupling; reactive load; source current harmonic compensation; voltage source converter based DSTATCOM; Harmonic analysis; Power harmonic filters; Reactive power; Rectifiers; Vectors; Voltage control; DSTATCOM; Harmonics; p-q theory (ID#:14-2746)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842810&isnumber=6842734
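For readers unfamiliar with the instantaneous p-q theory this entry modifies, a minimal Python sketch of the unmodified computation follows. The Clarke-transform normalization and the sign of q vary across the literature, so treat this as one common formulation rather than the paper's exact algorithm:

    import numpy as np

    def instantaneous_pq(va, vb, vc, ia, ib, ic):
        """Instantaneous real (p) and imaginary (q) power, power-invariant form."""
        T = np.sqrt(2.0 / 3.0) * np.array([[1.0, -0.5, -0.5],
                                           [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])
        v = T @ np.array([va, vb, vc])   # [v_alpha, v_beta]
        i = T @ np.array([ia, ib, ic])   # [i_alpha, i_beta]
        p = v[0] * i[0] + v[1] * i[1]
        q = v[1] * i[0] - v[0] * i[1]
        return p, q

The compensator's reference currents are then built from the oscillating parts of p and q; the paper's contribution is adapting that step to the balanced 5th-harmonic PCC voltages it considers.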
- Veremey, Evgeny I, "Computer Technologies Based On Optimization Approach In Control Theory," Computer Technologies in Physical and Engineering Applications (ICCTPEA), 2014 International Conference on, pp.200,201, June 30 2014-July 4 2014. doi: 10.1109/ICCTPEA.2014.6893359 This report is devoted to the basic concepts of applying computer technologies and systems to the broad area of control system and process investigation and design. Special attention is focused on the ideology of the optimization approach in connection with problems of control system modeling, analysis, and synthesis. Some questions of the real-time implementation of digital control laws are discussed. Computational algorithms are proposed for optimization problems with non-formalized performance indices. The main points are illustrated by corresponding numerical examples.
Keywords: (not provided) (ID#:14-2747)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6893359&isnumber=6893238
- Fatemi, M.; Haykin, S., "Cognitive Control: Theory and Application," Access, IEEE, vol.2, pp.698, 710, 2014. doi: 10.1109/ACCESS.2014.2332333 From an engineering point of view, cognitive control is inspired by the prefrontal cortex of the human brain; cognitive control may therefore be viewed as the overarching function of a cognitive dynamic system. In this paper, we describe a new way of thinking about cognitive control that embodies two basic components: learning and planning, both of which are based on two notions: 1) a two-state model of the environment and the perceptor and 2) the perception-action cycle, which is a distinctive characteristic of the cognitive dynamic system. Most importantly, it is shown that the cognitive control learning algorithm is a special form of Bellman's dynamic programming. Distinctive properties of the new algorithm include the following: 1) optimality of performance; 2) algorithmic convergence to the optimal policy; and 3) a linear law of complexity measured in terms of the number of actions taken by the cognitive controller on the environment. To validate these intrinsic properties of the algorithm, a computational experiment is presented, which involves a cognitive tracking radar that is known to closely mimic the visual brain. The experiment illustrates two different scenarios: 1) the impact of planning on the learning curves of the new cognitive controller and 2) comparison of the learning curves of three different controllers, based on dynamic optimization, traditional Q-learning, and the new algorithm. The latter two algorithms are based on the two-state model, and they both involve the use of planning.
Keywords: cognition; computational complexity; dynamic programming; Bellman dynamic programming; Q-learning; algorithmic convergence; cognitive control learning algorithm; cognitive dynamic system; cognitive tracking radar; dynamic optimization; human brain; learning curves; linear complexity law; perception-action cycle; performance optimality; prefrontal cortex; two-state model; visual brain; Brain modeling; Cognition; Complexity theory; Control systems; Dynamic programming; Heuristic algorithms; Perception; Radar tracking; Bayesian filtering; Cognitive dynamic systems; Shannon's entropy; cognitive control; dynamic programming; entropic state; explore/exploit tradeoff ;learning; planning; two-state model (ID#:14-2748)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843352&isnumber=6705689
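The abstract's claim that cognitive-control learning is "a special form of Bellman's dynamic programming" can be made concrete with a generic value-iteration sketch. The two-state toy model below is invented for illustration and is not the paper's radar environment or its exact algorithm:

    import numpy as np

    def value_iteration(P, R, gamma=0.9, tol=1e-8):
        """Iterate V(s) = max_a [R(s,a) + gamma * sum_s' P[a,s,s'] * V(s')]."""
        V = np.zeros(R.shape[0])
        while True:
            Q = R + gamma * np.einsum("asn,n->sa", P, V)
            V_new = Q.max(axis=1)
            if np.max(np.abs(V_new - V)) < tol:
                return V_new, Q.argmax(axis=1)   # values and greedy policy
            V = V_new

    P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
                  [[0.5, 0.5], [0.6, 0.4]]])  # transitions under action 1
    R = np.array([[1.0, 0.0], [0.0, 2.0]])    # reward for (state, action)
    print(value_iteration(P, R))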
- Yin-Lam Chow; Pavone, M., "A Framework For Time-Consistent, Risk-Averse Model Predictive Control: Theory And Algorithms," American Control Conference (ACC), 2014, pp.4204,4211, 4-6 June 2014. doi: 10.1109/ACC.2014.6859437 In this paper we present a framework for risk-averse model predictive control (MPC) of linear systems affected by multiplicative uncertainty. Our key innovation is to consider time-consistent, dynamic risk metrics as objective functions to be minimized. This framework is axiomatically justified in terms of time-consistency of risk preferences, is amenable to dynamic optimization, and is unifying in the sense that it captures a full range of risk assessments from risk-neutral to worst case. Within this framework, we propose and analyze an online risk-averse MPC algorithm that is provably stabilizing. Furthermore, by exploiting the dual representation of time-consistent, dynamic risk metrics, we cast the computation of the MPC control law as a convex optimization problem amenable to implementation on embedded systems. Simulation results are presented and discussed.
Keywords: convex programming; linear systems; predictive control; risk analysis; stability; uncertain systems; MPC control law; convex optimization problem; dynamic optimization; dynamic risk metrics; linear systems; multiplicative uncertainty; risk preference; risk-averse model predictive control; stability; time-consistent model predictive control; Equations; Markov processes; Mathematical model; Measurement; Predictive control; Random variables; Stability analysis; LMIs; Predictive control for linear systems; Stochastic systems (ID#:14-2749)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6859437&isnumber=6858556
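A concrete member of the time-consistent dynamic risk-metric family the paper builds on is conditional value-at-risk (CVaR), which interpolates between the risk-neutral mean and the worst case. The sample-based sketch below is generic background, not the paper's convex-optimization implementation:

    import numpy as np

    def cvar(costs, alpha=0.95):
        """Empirical CVaR: average of the worst (1 - alpha) fraction of costs."""
        costs = np.sort(np.asarray(costs, dtype=float))
        k = int(np.ceil(alpha * len(costs)))
        return costs[k:].mean() if k < len(costs) else costs[-1]

    rng = np.random.default_rng(0)
    stage_costs = rng.normal(loc=1.0, scale=0.5, size=10_000)
    print(cvar(stage_costs, 0.95))   # alpha=0 gives the mean; alpha -> 1 the worst case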
- Ueyama, Y., "Feedback Gain Indicates The Preferred Direction In Optimal Feedback Control Theory," Advanced Motion Control (AMC), 2014 IEEE 13th International Workshop on, pp.651,656, 14-16 March 2014. doi: 10.1109/AMC.2014.6823358 We investigated the role of feedback gain in optimal feedback control (OFC) theory using a neuromotor system. Neural studies have shown that directional tuning, known as the "preferred direction" (PD), is a basic functional property of cell activity in the primary motor cortex (M1). However, it is not clear which directions the M1 codes for, because neural activities can correlate with several directional parameters, such as joint torque and end-point motion. Thus, to examine the computational mechanism in the M1, we modeled the isometric motor task of a musculoskeletal system required to generate the desired joint torque. Then, we computed the optimal feedback gain according to OFC. The feedback gain indicated directional tunings of the joint torque and end-point motion in Cartesian space that were similar to the M1 neuron PDs observed in previous studies. Thus, we suggest that the M1 acts as a feedback gain in OFC.
Keywords: biocontrol; feedback; neurophysiology; optimal control; biological motor system; central nervous system; directional tuning; end-point motion; isometric motor task; joint torque; musculoskeletal system; neuromotor system; optimal feedback control theory; optimal feedback gain; preferred direction; primary motor cortex; Elbow; Force; Joints; Kalman filters; Muscles; Shoulder; Torque; isometric task; motor control; motor cortex; musculoskeletal systems; population coding (ID#:14-2750)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823358&isnumber=6823244
- Xiaoliang Zhang; Junqiang Bai, "Aerodynamic Optimization Utilizing Control Theory," Control and Decision Conference (2014 CCDC), The 26th Chinese, pp.1293, 1298, May 31 2014-June 2 2014. doi: 10.1109/CCDC.2014.6852366 This paper presents a method of aerodynamic optimization utilizing control theory, also called the adjoint method. The discrete adjoint equations are obtained from an unstructured cell-vortex finite-volume Navier-Stokes solver. The developed adjoint equations solver is verified by comparison of objective sensitivities with finite differences. An aerodynamic optimization system is developed combining the flow solver, adjoint solver, mesh deformation, and a gradient-based optimizer. The surface geometry is parameterized using the Free Form Deformation (FFD) method, and a linear elasticity method is employed for the volume mesh deformation during the optimization process. This optimization system is successfully applied to a design case for the ONERA M6 transonic wing.
Keywords: Navier-Stokes equations; aerodynamics; aerospace components; computational fluid dynamics; design engineering; elasticity; finite difference methods; finite volume methods; gradient methods; mechanical engineering computing; mesh generation; optimisation; transonic flow; vortices; FFD method; ONERA M6 transonic wing design; adjoint equations solver; adjoint method; aerodynamic optimization; computational fluid dynamics; control theory; discrete adjoint equations; finite difference; flow solver; free form deformation method; gradient-based optimizer; linear elasticity method; surface geometry; unstructured cell-vortex finite-volume Navier-Stokes solver; volume mesh deformation; Aerodynamics; Equations; Geometry; Mathematical model; Optimization; Sensitivity; Vectors; Aerodynamic and Adjoint method; Control theory; Optimization (ID#:14-2751)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852366&isnumber=6852105
- Khanum, S.; Islam, M.M., "An Enhanced Model Of Vertical Handoff Decision Based On Fuzzy Control Theory & User Preference," Electrical Information and Communication Technology (EICT), 2013 International Conference on, pp.1,6, 13-15 Feb. 2014. doi: 10.1109/EICT.2014.6777873 With the development of wireless communication technology, various wireless networks with different features coexist in the same premises. Heterogeneous networks will be dominant in the next generation of wireless networks, and choosing the most suitable network for a mobile user is one of the key issues in such networks. Vertical handoff decision making is one of the most important topics in wireless heterogeneous network architecture. The proposed method considers the most significant parameters: received signal strength (RSS), monetary cost (C), bandwidth (BW), battery consumption (BC), security (S), and reliability (R). Handoff decision making is divided into two parts. The first part calculates a system obtained value (SOV) from RSS, C, BW, and BC using fuzzy logic theory. Today's mobile users are discerning in choosing their desired types of service; the user-preferred network, chosen from the user's priority list, gives a user obtained value (UOV). Handoff decisions are then made based on the SOV and UOV to select the most appropriate network for the mobile nodes (MNs). Simulation results show that the fuzzy control theory and user preference based vertical handoff decision algorithm (VHDA) is able to make accurate handoff decisions, reduce unnecessary handoffs, decrease handoff calculation time, and decrease the probability of call blocking and dropping.
Keywords: decision making; fuzzy control; fuzzy set theory; mobile computing; mobility management (mobile radio);probability; telecommunication network reliability; telecommunication security; MC; RSS; SOV; VHDA; bandwidth; battery consumption; decrease call blocking probability; decrease call dropping probability; decrease handoff calculation time; fuzzy control theory; fuzzy logic theory; mobile nodes; monetary cost; next generation wireless networks; received signal strength; reliability; security; system obtained value calculation; unnecessary handoff reduction; user obtained value; user preference; user priority list; vertical handoff decision enhancement model; vertical handoff decision making; wireless communication technology; wireless heterogeneous networks architecture; Bandwidth; Batteries; Communication system security Mobile communication; Vectors; Wireless networks; Bandwidth; Cost; Fuzzy control theory; Heterogeneous networks; Received signal strength; Security and user preference; Vertical handoff (ID#:14-2752)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777873&isnumber=6777807
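The SOV/UOV structure above can be pictured with a crude weighted score. The weights, normalization, and combination rule below are invented stand-ins for the paper's fuzzy membership functions and rule base, shown only to make the decision structure concrete:

    def sov(rss, cost, bw, bc, w=(0.4, 0.2, 0.25, 0.15)):
        """Crisp stand-in for the fuzzy SOV; all inputs normalized to [0, 1].
        Higher RSS/bandwidth raise the score; higher cost/battery drain lower it."""
        return w[0] * rss + w[2] * bw + w[1] * (1 - cost) + w[3] * (1 - bc)

    networks = {"WLAN": (0.8, 0.1, 0.9, 0.3), "LTE": (0.6, 0.6, 0.7, 0.5)}
    priority = ["LTE", "WLAN"]   # user's ranked preference (the UOV side)
    uov = {n: 1.0 - r / len(priority) for r, n in enumerate(priority)}
    scores = {n: 0.7 * sov(*v) + 0.3 * uov[n] for n, v in networks.items()}
    print(max(scores, key=scores.get))   # handoff target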
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Covert Channels
A covert channel is a simple, effective mechanism for sending and receiving data between machines without alerting any firewalls or intrusion detectors on the network. In cybersecurity science, covert channels have value both for defense and for attack. The work cited here, presented or published between January and October of 2014, looks at covert channels in radar and other signal processors, timing channels, IPv6, DNS, and attacks within the cloud.
- Shi, H.; Tennant, A, "Covert Communication Using A Directly Modulated Array Transmitter," Antennas and Propagation (EuCAP), 2014 8th European Conference on, pp.352,354, 6-11 April 2014. doi: 10.1109/EuCAP.2014.6901764 A Direct Antenna Modulation (DAM) scheme is configured on a 2-element array with 2-bit phase control. Such a transmitter is shown to generate constellations with two different orders simultaneously towards different transmitting angles. A possible covert communication scenario is presented in which a constellation with 16 desired signals is generated at the intended direction, while at a second direction a constellation with a reduced number of distinct signal points is purposely generated to prevent accurate demodulation by an eavesdropper. In addition, the system can be configured to actively steer the low-level constellation towards up to two independent pre-known eavesdropping angles.
Keywords: Antenna arrays; Arrays; Constellation diagram; Transmitting antennas; Direct Antenna Modulation (DAM); constellation; phased array(ID#:14-2768)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6901764&isnumber=6901672
- Shrestha, P.L.; Hempel, M.; Sharif, H., "Towards a Unified Model For The Analysis Of Timing-Based Covert Channels," Communications (ICC), 2014 IEEE International Conference on, pp.816,820, 10-14 June 2014. doi: 10.1109/ICC.2014.6883420 Covert channels are a network security risk growing both in sophistication and utilization, and thus posing an increasing threat. They leverage benign and overt network activities, such as the modulation of packet inter-arrival time, to covertly transmit information without detection by current network security approaches such as firewalls. This makes them a grave security concern. Thus, researching methods for detecting and disrupting such covert communication is of utmost importance. Understanding and developing analytical models is an essential requirement of covert channel analysis. Unfortunately, due to the enormous range of covert channel algorithms available it becomes very inefficient to analyze them on a case-by-case basis. Hence, a unified model that can represent a wide variety of covert channels is required, but is not yet available. In other publications, individual models to analyze the capacity of interrupt-related covert channels have been discussed. In our work, we present a unique model to unify these approaches. This model has been analyzed and we have presented the results and verification of our approach using MATLAB simulations.
Keywords: firewalls; telecommunication channels; Matlab simulations; analytical models; covert communication; firewalls; interrupt-related covert channels; network security risk; packet inter-arrival time modulation; timing-based covert channel analysis; Analytical models; Delays; Jitter; Mathematical model; Receivers; Security; Capacity; Covert Communication; Intemipt-Related Covert Channel; Mathematical Modeling; Model Analysis; Network Security; Packet Rate Timing Channels (ID#:14-2769)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883420&isnumber=6883277
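The core mechanism these models capture, modulating packet inter-arrival times, fits in a few lines. The encoder/decoder below is an illustrative sketch with invented delay values; real channels must contend with the jitter and loss the unified model parameterizes:

    import random

    def encode(bits, t0=0.05, t1=0.15):
        """Map each covert bit to an inter-packet gap in seconds."""
        return [t1 if b else t0 for b in bits]

    def decode(gaps, threshold=0.10):
        """Recover bits from observed gaps; jitter that pushes a gap across
        the threshold causes exactly the bit errors such models analyze."""
        return [1 if g > threshold else 0 for g in gaps]

    bits = [1, 0, 1, 1, 0]
    noisy = [g + random.gauss(0, 0.01) for g in encode(bits)]  # simulated jitter
    print(decode(noisy) == bits)   # True unless jitter flips a gap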
- Rezaei, F.; Hempel, M.; Shrestha, P.L.; Sharif, H., "Achieving Robustness And Capacity Gains In Covert Timing Channels," Communications (ICC), 2014 IEEE International Conference on, pp.969,974, 10-14 June 2014. doi: 10.1109/ICC.2014.6883445 In this paper, we introduce a covert timing channel (CTC) algorithm and compare it to one of the most prevailing CTC algorithms, originally proposed by Cabuk et al. CTC is a form of covert channels - methods that exploit network activities to transmit secret data over packet-based networks - by modifying packet timing. This algorithm is a seminal work, one of the most widely cited CTCs, and the foundation for many CTC research activities. In order to overcome some of the disadvantages of this algorithm we introduce a covert timing channel technique that leverages timeout thresholds. The proposed algorithm is compared to the original algorithm in terms of channel capacity, impact on overt traffic, bit error rates, and latency. Based on our simulation results the proposed algorithm outperforms the work from Cabuk et al., especially in terms of its higher covert data transmission rate with lower latency and fewer bit errors. In our work we also address the desynchronization problem found in Cabuk et al.'s algorithm in our simulation results and show that even in the case of the synchronization-corrected Cabuk et al. algorithm our proposed method provides better results in terms of capacity and latency.
Keywords: channel capacity; wireless channels; CTC algorithms; bit error rates; capacity gains; channel capacity; covert timing channel algorithm; desynchronization problem; overt traffic; packet timing; packet-based networks; secret data ;timeout thresholds; Algorithm design and analysis; Bit error rate; Channel capacity; Delays; Jitter; Receivers; Capacity; Covert Communication; Covert Timing Channel; Hidden Information; Latency; Network Security (ID#:14-2770)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883445&isnumber=6883277
- Mavani, M.; Ragha, L., "Covert Channel In Ipv6 Destination Option Extension Header," Circuits, Systems, Communication and Information Technology Applications (CSCITA), 2014 International Conference on, pp.219,224, 4-5 April 2014. doi: 10.1109/CSCITA.2014.6839262 IPv6 is the next-generation Internet protocol, whose adoption will grow as IPv4 addresses are exhausted and more mobile devices attach to the Internet. Operational experience with IPv6 is limited because its deployment has been slow, so many unknown threats are possible in IPv6 networks. One such threat, addressed in this paper, is covert communication in the network. A covert channel is a way of communicating classified information; in a network, this is done through a protocol's control fields. The IPv6 Destination Options extension header is used here to pass secret information, which is shown experimentally in a real test network setup. Attack packets are created with the Scapy Python-based API. Covert channels based on an unknown option and on nonzero padding in the PadN option are demonstrated. Their detection is also proposed, with detector logic implemented in shell scripting and C.
Keywords: IP networks; application program interfaces; computer network security; protocols; C programming; IPv4 addresses ;IPv6 destination option extension header; IPv6 networks; PadN option; Scapy-Python based API attack packets; covert channel; covert communication; detector logic; extension header; mobile devices; network protocol control fields; next generation Internet protocol; nonzero padding; shell scripting; test network set up; Detectors; IP networks; Information technology; Internet; Operating systems; Protocols; Security; Extension Header; IPv6; Scapy; covert channel (ID#:14-2771)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6839262&isnumber=6839219
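The PadN channel described above exploits the fact that PadN option bytes are required to be zero, so receivers ignore them and naive inspection rarely checks them. A minimal Scapy sketch of such a packet follows; the address is a placeholder, and this mirrors the general technique rather than the paper's exact code:

    from scapy.all import IPv6, IPv6ExtHdrDestOpt, PadN, ICMPv6EchoRequest, send

    covert = b"secret"   # nonzero padding carries the hidden payload
    pkt = (IPv6(dst="2001:db8::1")                          # placeholder address
           / IPv6ExtHdrDestOpt(options=[PadN(optdata=covert)])
           / ICMPv6EchoRequest())
    pkt.show()    # inspect the crafted Destination Options header
    # send(pkt)   # uncomment to transmit on an isolated test network

A detector in the spirit of the paper's shell/C logic simply flags any PadN option whose data bytes are not all zero.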
- Binsalleeh, H.; Kara, AM.; Youssef, A; Debbabi, M., "Characterization of Covert Channels in DNS," New Technologies, Mobility and Security (NTMS), 2014 6th International Conference on, pp.1,5, March 30 2014-April 2 2014. doi: 10.1109/NTMS.2014.6814008 Malware families utilize different protocols to establish their covert communication networks. It is also the case that sometimes they utilize protocols which are least expected to be used for transferring data, e.g., Domain Name System (DNS). Even though the DNS protocol is designed to be a translation service between domain names and IP addresses, it leaves some open doors to establish covert channels in DNS, which is widely known as DNS tunneling. In this paper, we characterize the malicious payload distribution channels in DNS. Our proposed solution characterizes these channels based on the DNS query and response messages patterns. We performed an extensive analysis of malware datasets for one year. Our experiments indicate that our system can successfully determine different patterns of the DNS traffic of malware families.
Keywords: cryptographic protocols; invasive software; DNS protocol; DNS traffic; DNS tunneling; IP addresses; communication networks; covert channel characterization; domain name system; malicious payload distribution channels; malware datasets; malware families; message patterns; translation service; Command and control systems; Malware; Payloads; Protocols; Servers ;Tunneling (ID#:14-2772)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6814008&isnumber=6813963
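One query-pattern feature such characterizations commonly rely on is the character entropy of the queried name, since tunneled payloads look random while ordinary hostnames do not. The sketch below computes that single indicator; it is a simplification of, not a substitute for, the paper's query/response pattern analysis:

    import math
    from collections import Counter

    def first_label_entropy(qname):
        """Shannon entropy (bits/char) of the first label of a DNS name."""
        label = qname.split(".")[0]
        n = len(label)
        return -sum(c / n * math.log2(c / n) for c in Counter(label).values())

    print(first_label_entropy("www.example.com"))                      # low
    print(first_label_entropy("aGVsbG8gd29ybGQxMjM0.t.example.com"))   # high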
- Shrestha, P.L.; Hempel, M.; Sharif, H.; Chen, H.-H., "An Event-Based Unified System Model to Characterize and Evaluate Timing Covert Channels," Systems Journal, IEEE, vol. PP, no.99, pp. 1, 10, July 2014. doi: 10.1109/JSYST.2014.2328665 Covert channels are communication channels that transmit information using existing system resources without being detected by network security elements, such as firewalls. Thus, they can be utilized to leak confidential governmental, military, and corporate information. Malicious users, like terrorists, can use covert channels to exchange information without being detected by cyber-intelligence services. Therefore, covert channels can be a grave security concern, and it is important to detect, eliminate, and disrupt covert communications. Active network wardens can attempt to eliminate such channels by traffic modification, but such an implementation will also hamper innocuous traffic, which is not always acceptable. Owing to the large number of covert channel algorithms, it is not possible to deal with them on a case-by-case basis, which necessitates a unified system model that can represent them. In this paper, we present an event-based model to represent timing covert channels. Based on our model, we calculate the capacity of various covert channels and evaluate their essential features, such as the impact of network jitter noise and packet losses. We also used simulations to obtain these parameters and verify the model's accuracy and applicability.
Keywords: Capacity; covert channel; delay jitter; interrupt-related channel; packet loss; security; timing channel (ID#:14-2773)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6851146&isnumber=4357939
- Wu, Z.; Xu, Z.; Wang, H., "Whispers in the Hyper-Space: High-Bandwidth and Reliable Covert Channel Attacks Inside the Cloud," Networking, IEEE/ACM Transactions on, vol. PP, no.99, pp.1, 1, February 2014. doi: 10.1109/TNET.2014.2304439 Privacy and information security in general are major concerns that impede enterprise adaptation of shared or public cloud computing. Specifically, the concern of virtual machine (VM) physical co-residency stems from the threat that hostile tenants can leverage various forms of side channels (such as cache covert channels) to exfiltrate sensitive information of victims on the same physical system. However, on virtualized x86 systems, covert channel attacks have not yet proven to be practical, and thus the threat is widely considered a "potential risk." In this paper, we present a novel covert channel attack that is capable of high-bandwidth and reliable data transmission in the cloud. We first study the application of existing cache channel techniques in a virtualized environment and uncover their major insufficiency and difficulties. We then overcome these obstacles by: 1) redesigning a pure timing-based data transmission scheme, and 2) exploiting the memory bus as a high-bandwidth covert channel medium. We further design and implement a robust communication protocol and demonstrate realistic covert channel attacks on various virtualized x86 systems. Our experimental results show that covert channels do pose serious threats to information security in the cloud. Finally, we discuss our insights on covert channel mitigation in virtualized environments.
Keywords: Cloud; covert channel; network security (ID#:14-2774)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6744676&isnumber=4359146
- Kadhe, S.; Jaggi, S.; Bakshi, M.; Sprintson, A, "Reliable, Deniable, And Hidable Communication Over Multipath Networks," Information Theory (ISIT), 2014 IEEE International Symposium on , vol., no., pp.611,615, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6874905 We consider the scenario wherein a transmitter Alice wants to (potentially) communicate to the intended receiver Bob over a multipath network, i.e., a network consisting of multiple parallel links, in the presence of a passive eavesdropper Willie, who observes an unknown subset of links. A primary goal of our communication protocol is to make the communication "deniable", i.e., Willie should not be able to reliably estimate whether or not Alice is transmitting any covert information to Bob. Moreover, if Alice is indeed actively communicating, her covert messages should be information-theoretically "hidable" in the sense that Willie's observations should not leak any information about Alice's (potential) message to Bob - our notion of hidability is slightly stronger than the notion of information-theoretic strong secrecy well-studied in the literature. We demonstrate that deniability does not imply either hidability or (weak or strong) information-theoretic secrecy; nor does information-theoretic secrecy imply deniability. We present matching inner and outer bounds on the capacity for deniable and hidable communication over multipath networks.
Keywords: encoding; protocols; radio receivers; radio transmitters; telecommunication links telecommunication network reliability; telecommunication security; communication hidability; communication protocol; information theoretic secrecy; multipath networks; multiple parallel links; passive eavesdropper; telecommunication network reliability; Artificial neural networks; Cryptography; Encoding; Reliability theory (ID#:14-2775)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874905&isnumber=6874773
- Hong Zhao, "Covert channels in 802.11e wireless networks," Wireless Telecommunications Symposium (WTS), 2014, pp.1,5, 9-11 April 2014. doi: 10.1109/WTS.2014.6834991 WLANs (Wireless Local Area Networks) are widely used in business, school, and public areas. The newly deployed 802.11e protocol provides QoS in WLANs, but it has some vulnerabilities. This paper analyzes the 802.11e protocol for QoS support in WLANs and proposes two new covert channels. The proposed covert channels provide a signalling method for reliable communication. They have no impact on normal traffic patterns and thus cannot be detected by traffic-pattern monitoring.
Keywords: protocols; quality of service; wireless LAN;802.11e wireless networks; QoS support; WLAN; Wireless Local Area Networks; covert channels; signalling method; traffic pattern monitoring; Communication system security; IEEE 802.11e Standard; Protocols; Quality of service; Wireless LAN; Wireless communication;802.11e WLAN; Network Steganography; information hiding (ID#:14-2776)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6834991&isnumber=6834983
- Bash, B.A; Goeckel, D.; Towsley, D., "LPD Communication When The Warden Does Not Know When," Information Theory (ISIT), 2014 IEEE International Symposium on, pp.606,610, June 29 2014-July 4 2014. doi: 10.1109/ISIT.2014.6874904 Unlike standard security methods (e.g. encryption), low probability of detection (LPD) communication does not merely protect the information contained in a transmission from unauthorized access, but prevents the detection of a transmission in the first place. In this work we study the impact of secretly pre-arranging the time of communication. We prove that if Alice has AWGN channels to Bob and the warden, and if she and Bob can choose a single n-symbol period slot out of T(n) such slots, keeping the selection secret from the warden (and, thus, forcing him to monitor all T(n) slots), then Alice can reliably transmit O(min{√(n log T(n)), n}) bits to Bob while keeping the warden's detector ineffective. The result indicates that only an additional log T(n) secret bits need to be exchanged between Alice and Bob prior to communication to produce a multiplicative gain of √(log T(n)) in the amount of transmitted covert information.
Keywords: AWGN channels; computational complexity; probability; telecommunication network reliability; telecommunication security; AWGN channels; LPD communication; low probability-of-detection; symbol period slot; transmission detection protection; unauthorized access; AWGN channels; Detectors; Random variables; Reliability; Vectors; Yttrium (ID#:14-2777)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6874904&isnumber=6874773
- Naseer, N.; Keum-Shik Hong; Bhutta, M.R.; Khan, M.J., "Improving Classification Accuracy Of Covert Yes/No Response Decoding Using Support Vector Machines: An fNIRS Study," Robotics and Emerging Allied Technologies in Engineering (iCREATE), 2014 International Conference on, pp.6,9, 22-24 April 2014. doi: 10.1109/iCREATE.2014.6828329 One of the aims of brain-computer interfaces (BCI) is to restore the means of communication for people suffering from severe motor impairment, anarthria, or a persistent vegetative state. Yes/no decoding with the help of an imaging technology such as functional near-infrared spectroscopy (fNIRS) can make this goal a reality. fNIRS is a relatively new non-invasive optical imaging modality offering the advantages of low cost, safety, portability, and ease of use. Recently, an fNIRS-based online covert yes/no decision decoding framework was presented [Naseer and Hong (2013), online binary decision decoding using functional near-infrared spectroscopy for development of a brain-computer interface]. Herein we propose a method to improve support vector machine classification accuracies for decoding covert yes/no responses by using signal slope values of oxygenated and deoxygenated hemoglobin as features, calculated for a confined temporal window within the total task period.
Keywords: brain-computer interfaces; infrared spectra; medical signal processing; signal classification; support vector machines; BCI; brain-computer interface; classification accuracy; covert yes-no response decoding framework; deoxygenated hemoglobin; fNIRS; functional near-infrared spectroscopy; noninvasive optical imaging modality; oxygenated hemoglobin; signal slope values; support vector machines; temporal window; Accuracy; Brain-computer interfaces; Decoding; Detectors; Optical imaging; Spectroscopy; Support vector machines; Binary decision decoding; Brain-computer interface; Functional near-infrared spectroscopy; Support vector machines; Yes/no decoding (ID#:14-2778)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828329&isnumber=6828323
- Beato, F.; De Cristofaro, E.; Rasmussen, K.B., "Undetectable Communication: The Online Social Networks Case," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference, pp.19,26, 23-24 July 2014. doi: 10.1109/PST.2014.6890919 Online Social Networks (OSNs) provide users with an easy way to share content, communicate, and update others about their activities. They also play an increasingly fundamental role in coordinating and amplifying grassroots movements, as demonstrated by recent uprisings in, e.g., Egypt, Tunisia, and Turkey. At the same time, OSNs have become primary targets of tracking, profiling, as well as censorship and surveillance. In this paper, we explore the notion of undetectable communication in OSNs and introduce formal definitions, alongside system and adversarial models that complement better understood notions of anonymity and confidentiality. We present a novel scheme for secure covert information sharing that, to the best of our knowledge, is the first to achieve undetectable communication in OSNs. We demonstrate, via an open-source prototype, that additional costs are tolerably low.
Keywords: data privacy; security of data; social networking (online);OSNs; anonymity notion; confidentiality notion; covert information sharing security; online social networks; open-source prototype; undetectable communication; Entropy; Facebook; Indexes; Internet; Security; Servers (ID#:14-2779)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890919&isnumber=6890911
- Lakhani, H.; Zaffar, F., "Covert Channels in Online Rogue-Like Games," Communications (ICC), 2014 IEEE International Conference on, pp.761,767, 10-14 June 2014. doi: 10.1109/ICC.2014.6883411 Covert channels allow two parties to exchange secret data in the presence of adversaries without disclosing the fact that there is any secret data in their communications. We propose and implement EEDGE, an improved method for steganography in mazes that builds upon the work done by Lee et al. and has a significantly higher embedding capacity. We apply EEDGE to the setting of online rogue-like games, which have randomly generated mazes as the levels for players, and show that this can be used to create an efficient, error-free, high bit-rate covert channel.
Keywords: computer games; electronic data interchange; steganography; EEDGE; covert channels; error free channel; high bit rate covert channel; online rogue like games; secret data exchange; steganography; Bit rate; Games; Image edge detection ;Information systems; Lattices; Receivers; Security (ID#:14-2780)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883411&isnumber=6883277
- Dainotti, A; King, A; Claffy, K.; Papale, F.; Pescape, A, "Analysis of a "/0" Stealth Scan from a Botnet," Networking, IEEE/ACM Transactions on, vol. PP, no. 99, pp.1, 1, Jan 2014. doi: 10.1109/TNET.2013.2297678 Botnets are the most common vehicle of cyber-criminal activity. They are used for spamming, phishing, denial-of-service attacks, brute-force cracking, stealing private information, and cyber warfare. Botnets carry out network scans for several reasons, including searching for vulnerable machines to infect and recruit into the botnet, probing networks for enumeration or penetration, etc. We present the measurement and analysis of a horizontal scan of the entire IPv4 address space conducted by the Sality botnet in February 2011. This 12-day scan originated from approximately 3 million distinct IP addresses and used a heavily coordinated and unusually covert scanning strategy to try to discover and compromise VoIP-related (SIP server) infrastructure. We observed this event through the UCSD Network Telescope, a /8 darknet continuously receiving large amounts of unsolicited traffic, and we correlate this traffic data with other public sources of data to validate our inferences. Sality is one of the largest botnets ever identified by researchers. Its behavior represents ominous advances in the evolution of modern malware: the use of more sophisticated stealth scanning strategies by millions of coordinated bots, targeting critical voice communications infrastructure. This paper offers a detailed dissection of the botnet's scanning behavior, including general methods to correlate, visualize, and extrapolate botnet behavior across the global Internet.
Keywords: Animation; Geology; IP networks; Internet; Ports (Computers); Servers; Telescopes; Botnet; Internet background radiation; Internet telephony; Network Telescope; VoIP; communication system security; darknet; network probing; scanning (ID#:14-2781)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6717049&isnumber=4359146
- Suzhi Bi; Ying Jun Zhang, "Using Covert Topological Information for Defense Against Malicious Attacks on DC State Estimation," Selected Areas in Communications, IEEE Journal on, vol.32, no.7, pp.1471, 1485, July 2014. doi: 10.1109/JSAC.2014.2332051 Accurate state estimation is of paramount importance to maintain the power system operating in a secure and efficient state. The recently identified coordinated data injection attacks to meter measurements can bypass the current security system and introduce errors to the state estimates. The conventional wisdom to mitigate such attacks is by securing meter measurements to evade malicious injections. In this paper, we provide a novel alternative to defend against false data injection attacks using covert power network topological information. By keeping the exact reactance of a set of transmission lines from attackers, no false data injection attack can be launched to compromise any set of state variables. We first investigate from the attackers' perspective the necessary condition to perform an injection attack. Based on the arguments, we characterize the optimal protection problem, which protects the state variables with minimum cost, as a well-studied Steiner tree problem in a graph. In addition, we also propose a mixed defending strategy that jointly considers the use of covert topological information and secure meter measurements when either method alone is costly or unable to achieve the protection objective. A mixed-integer linear programming formulation is introduced to obtain the optimal mixed defending strategy. To tackle the NP-hardness of the problem, a tree-pruning-based heuristic is further presented to produce an approximate solution in polynomial time. The advantageous performance of the proposed defending mechanisms is verified in IEEE standard power system test cases.
Keywords: integer programming; linear programming; power system security; power system state estimation; power transmission faults; power transmission lines; power transmission protection; smart meters; smart power grids; trees (mathematics); DC state estimation; NP-hardness problem; Steiner tree problem; coordinated data injection attacks identification; covert power network topological information; current security system; false data injection attack; graph theory; malicious attacks; mixed-integer linear programming; necessary condition; optimal mixed defending strategy; optimal protection problem; polynomial time; power system state estimation; secure meter measurements; state variables; transmission lines; tree-pruning-based heuristic; Phase measurement; Power measurement; Power transmission lines; State estimation; Transmission line measurements; Voltage measurement; False-data injection attack; graph algorithms; power system state estimation; smart grid security (ID#:14-2782)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6840294&isnumber=6879523
- Pak Hou Che; Bakshi, M.; Chung Chan; Jaggi, S., "Reliable, Deniable And Hidable Communication," Information Theory and Applications Workshop (ITA), 2014, pp.1, 10, 9-14 Feb. 2014. doi: 10.1109/ITA.2014.6804271 Alice wishes to potentially communicate covertly with Bob over a Binary Symmetric Channel while Willie the wiretapper listens in over a channel that is noisier than Bob's. We show that Alice can send her messages reliably to Bob while ensuring that even whether or not she is actively communicating is (a) deniable to Willie, and (b) optionally, her message is also hidable from Willie. We consider two different variants of the problem depending on Alice's "default" behavior, i.e., her transmission statistics when she has no covert message to send: 1) When Alice has no covert message, she stays "silent", i.e., her transmission is 0; 2) When she has no covert message, she transmits "innocently", i.e., her transmission is drawn uniformly from an innocent random codebook. We prove that the best rate at which Alice can communicate both deniably and hidably in model 1 is O(1/n). On the other hand, in model 2, Alice can communicate at a constant rate.
Keywords: binary codes; channel coding; random codes; reliability; Alice default behavior; binary symmetric channel; random codebook; transmission statistics; wiretapper; Decoding; Encoding; Error probability; Measurement; Noise; Reliability; Throughput (ID#:14-2783)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6804271&isnumber=6804199
Cryptanalysis
Cryptanalysis is a core function for cybersecurity research. 2014 has been a very productive year so far for research in this area. The work cited below looks at AES, biclique attacks, lightweight Welch-Gong stream ciphers, a number of smart card issues, and power analysis and fault injection, among other things. These works appeared between January and October of 2014.
- Heys, H., "Integral Cryptanalysis Of The BSPN Block Cipher," Communications (QBSC), 2014 27th Biennial Symposium on, pp.153, 158, 1-4 June 2014. doi: 10.1109/QBSC.2014.6841204 In this paper, we investigate the application of integral cryptanalysis to the Byte-oriented Substitution Permutation Network (BSPN) block cipher. The BSPN block cipher has been shown to be an efficient block cipher structure, particularly for environments using 8-bit microcontrollers. In our analysis, we are able to show that integral cryptanalysis has limited success when applied to BSPN. A first order attack, based on a deterministic integral, is only applicable to structures with 3 or fewer rounds, while higher order attacks and attacks using a probabilistic integral were found to be only applicable to structures with 4 or fewer rounds. Since a typical BSPN block cipher is recommended to have 8 or more rounds, it is expected that the BSPN structure is resistant to integral cryptanalysis.
Keywords: cryptography; integral equations; microcontrollers; probability; BSPN block cipher; block cipher structure; byte-oriented substitution permutation network; deterministic integral; first order attack; higher order attacks; integral cryptanalysis; microcontrollers; probabilistic integral; word length 8 bit; Ciphers; Encryption; Microcontrollers; Probabilistic logic; Probability; Resistance; block ciphers; cryptanalysis; cryptography (ID#:14-2784)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841204&isnumber=6841165
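For readers unfamiliar with the balanced ("All") property that integral attacks exploit, here is a minimal Python sketch. The 4-bit S-box, the key and the single-round structure are illustrative stand-ins of my own, not BSPN's actual components: if one nibble ranges over all 16 values while everything else stays constant, key addition followed by a bijective S-box keeps the set balanced, so the XOR-sum of the outputs is zero. Integral attacks test how many rounds of a real cipher preserve properties like this.

```python
# Demonstrating the balanced ("All") set property behind integral cryptanalysis.
# The S-box is an arbitrary 4-bit bijection; the round key is hypothetical.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def one_round(nibble, key):
    return SBOX[nibble ^ key]        # key addition, then substitution

key = 0xA                            # unknown to the attacker
outputs = [one_round(p, key) for p in range(16)]   # the "All" input set

xor_sum = 0
for c in outputs:
    xor_sum ^= c
assert sorted(outputs) == list(range(16))  # still a permutation of all values
assert xor_sum == 0                        # the balanced property the attack tests
```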
- Dadhich, A; Gupta, A; Yadav, S., "Swarm Intelligence Based Linear Cryptanalysis Of Four-Round Data Encryption Standard Algorithm," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp.378,383, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781312 The proliferation of computers, internet and wireless communication capabilities into the physical world has led to ubiquitous availability of computing infrastructure. With the expanding number and type of internet capable devices and the enlarged physical space of distributed and cloud computing, computer systems are evolving into complex and pervasive networks. Amidst this rapid growth in technology, secure transmission of data is equally important. The amount of sensitive information deposited and transmitted over the internet is absolutely critical and needs principles that enforce legal and restricted use and interpretation of data. The data needs to be protected from eavesdroppers and potential attackers who undermine the security processes and perform actions in excess of their permissions. Cryptography algorithms form a central component of the security mechanisms used to safeguard network transmissions and data storage. As encrypted data security largely depends on the techniques applied to create, manage and distribute the keys, a cryptographic algorithm might be rendered useless due to poor management of the keys. This paper presents a novel computational intelligence based approach for known ciphertext-only cryptanalysis of the four-round Data Encryption Standard algorithm. In a ciphertext-only attack, the encryption algorithm used and the ciphertext to be decoded are known to the cryptanalyst; it is termed the most difficult attack encountered in cryptanalysis. The proposed approach uses Swarm Intelligence to deduce optimum keys according to their fitness values and identifies the best keys through a statistical probability based fitness function. The results suggest that the proposed approach is intelligent in finding missing key bits of the Data Encryption Standard algorithm.
Keywords: cloud computing; cryptography; probability; statistical analysis; swarm intelligence; Internet; ciphertext-only attack; ciphertext-only cryptanalysis; cloud computing; computational intelligence based approach; cryptography algorithms; data storage; distributed computing; four-round data encryption standard algorithm; network transmissions; secure data transmission; statistical probability based fitness function; swarm intelligence based linear cryptanalysis; Cryptography; MATLAB; NIST; Ciphertext; Cryptanalysis; Cryptography; Information Security; Language model; Particle Swarm Optimization; Plaintext; Swarm Intelligence (ID#:14-2785)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781312&isnumber=6781240
- Alghazzawi, D.M.; Hasan, S.H.; Trigui, M.S., "Advanced Encryption Standard - Cryptanalysis Research," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp.660,667, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828045 The Advanced Encryption Standard (AES) has been the focus of cryptanalysis since it was released in November 2001. The research gained more importance when AES was declared the Type-1 Suite-B Encryption Algorithm by the NSA in 2003 (CNSSP-15), which deemed it suitable for encrypting both classified and unclassified security documents and systems. The paper discusses the cryptanalysis research being carried out on AES and the different techniques being used to establish the advantages of the algorithm for use in security systems. It concludes by trying to assess the duration for which AES can be effectively used in national security applications.
Keywords: algebraic codes; cryptography; standards; AES; Advanced Encryption Standard; NSA encryption algorithm; algebraic attack; cryptanalysis research; national security applications; security systems; Ciphers; Classification algorithms; Encryption; Equations; Timing; Cryptanalysis; Encryption; Network Security (ID#:14-2786)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828045&isnumber=6827395
- Kumar, R.; Jovanovic, P.; Polian, I, "Precise Fault-Injections Using Voltage And Temperature Manipulation For Differential Cryptanalysis," On-Line Testing Symposium (IOLTS), 2014 IEEE 20th International, pp.43, 48, 7-9 July 2014. doi: 10.1109/IOLTS.2014.6873670 State-of-the-art fault-based cryptanalysis methods are capable of breaking most recent ciphers after only a few fault injections. However, they require temporal and spatial accuracies of fault injection that were believed to rule out low-cost injection techniques such as voltage, frequency or temperature manipulation. We investigate selection of supply-voltage and temperature values that are suitable for high-precision fault injection even up to a single bit. The object of our studies is an ASIC implementation of the recently presented block cipher PRINCE, for which a two-stage fault attack scheme has been suggested lately. This attack requires, on average, about four to five fault injections in well-defined locations. We show by electrical simulations that voltage-temperature points exist for which faults show up at locations required for a successful attack with a likelihood of around 0.1%. This implies that the complete attack can be mounted by approximately 4,000 to 5,000 fault injection attempts, which is clearly feasible.
Keywords: application specific integrated circuits; cryptography; fault diagnosis; integrated circuit design; block cipher PRINCE; differential cryptanalysis; electrical simulations; fault-based cryptanalysis methods; high-precision fault injection; low-cost injection techniques; supply-voltage selection; temperature manipulation; temperature values; two-stage fault attack scheme; voltage manipulation; voltage-temperature points; Ciphers; Circuit faults; Clocks; Logic gates; Mathematical model; Temperature distribution (ID#:14-2787)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6873670&isnumber=6873658
- Bhateja, A; Kumar, S., "Genetic Algorithm With Elitism For Cryptanalysis Of Vigenere Cipher," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp.373,377, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781311 In today's world, with increasing usage of computer networks and the internet, the importance of network, computer and information security is obvious. One of the widely used approaches for information security is cryptography. Cryptanalysis is a way to break the cipher text without having the encryption key. This paper describes a method of deciphering encrypted messages of Vigenere cipher cryptosystems by a Genetic Algorithm using elitism with a novel fitness function. The roulette wheel method, two point crossover and cross mutation are used for selection and for the generation of the new population. We conclude that the proposed algorithm can reduce the time complexity and gives better results for such optimization problems.
Keywords: cryptography; genetic algorithms; Internet; Vigenere cipher; computer networks; computer security; cross mutation; cryptanalysis; cryptography; elitism; encryption key; fitness function; genetic algorithm; information security; network security; roulette wheel method; two point crossover; Ciphers; Genetic algorithms; Genetics; Lead; Size measurement; Vigenere cipher; chromosomes; cryptanalysis; elitism; fitness function; genes; genetic algorithm (ID#:14-2788)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781311&isnumber=6781240
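The abstract names the ingredients (roulette-wheel selection, two-point crossover, mutation, elitism, a novel fitness function) without code. The sketch below is a generic reconstruction from those ingredients, not the authors' exact algorithm: the population size, rates and the chi-squared fitness against English letter frequencies are my own illustrative choices, and the ciphertext is assumed to contain only uppercase A-Z.

```python
# A toy GA with elitism for recovering a Vigenere key of known length >= 2.
import random
import string

ENG_FREQ = {'E':12.7,'T':9.1,'A':8.2,'O':7.5,'I':7.0,'N':6.7,'S':6.3,
            'H':6.1,'R':6.0,'D':4.3,'L':4.0,'C':2.8,'U':2.8,'M':2.4,
            'W':2.4,'F':2.2,'G':2.0,'Y':2.0,'P':1.9,'B':1.5,'V':1.0,
            'K':0.8,'J':0.15,'X':0.15,'Q':0.1,'Z':0.07}

def decrypt(ct, key):
    return ''.join(chr((ord(c) - ord(key[i % len(key)])) % 26 + ord('A'))
                   for i, c in enumerate(ct))

def fitness(ct, key):
    pt = decrypt(ct, key)
    score = 0.0
    for ch, expected in ENG_FREQ.items():
        observed = 100.0 * pt.count(ch) / len(pt)
        score += (observed - expected) ** 2 / expected   # chi-squared term
    return 1.0 / (1.0 + score)       # higher fitness = closer to English

def roulette(pop, fits):             # roulette wheel selection
    pick = random.uniform(0, sum(fits))
    acc = 0.0
    for ind, f in zip(pop, fits):
        acc += f
        if acc >= pick:
            return ind
    return pop[-1]

def crossover(a, b):                 # two point crossover
    i, j = sorted(random.sample(range(len(a)), 2))
    return a[:i] + b[i:j] + a[j:]

def mutate(key, rate=0.1):
    return ''.join(random.choice(string.ascii_uppercase)
                   if random.random() < rate else c for c in key)

def ga_attack(ct, key_len, pop_size=50, gens=200, elite=2):
    pop = [''.join(random.choices(string.ascii_uppercase, k=key_len))
           for _ in range(pop_size)]
    for _ in range(gens):
        fits = [fitness(ct, k) for k in pop]
        ranked = [k for _, k in sorted(zip(fits, pop), reverse=True)]
        nxt = ranked[:elite]         # elitism: the best keys survive unchanged
        while len(nxt) < pop_size:
            child = crossover(roulette(pop, fits), roulette(pop, fits))
            nxt.append(mutate(child))
        pop = nxt
    return max(pop, key=lambda k: fitness(ct, k))
```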
- Lin Ding; Chenhui Jin; Jie Guan; Qiuyan Wang, "Cryptanalysis of Lightweight WG-8 Stream Cipher," Information Forensics and Security, IEEE Transactions on, vol.9, no.4, pp.645,652, April 2014. doi: 10.1109/TIFS.2014.2307202 WG-8 is a new lightweight variant of the well-known Welch-Gong (WG) stream cipher family, and takes an 80-bit secret key and an 80-bit initial vector (IV) as inputs. So far no attack on the WG-8 stream cipher has been published except the attacks by the designers. This paper shows that there exist Key-IV pairs for WG-8 that can generate keystreams, which are exact shifts of each other throughout the keystream generation. By exploiting this slide property, an effective key recovery attack on WG-8 in the related key setting is proposed, which has a time complexity of 2^53.32 and requires 2^52 chosen IVs. The attack is minimal in the sense that it only requires one related key. Furthermore, we present an efficient key recovery attack on WG-8 in the multiple related key setting. As confirmed by the experimental results, our attack recovers all 80 bits of WG-8 on a PC with a 2.5-GHz Intel Pentium 4 processor. This is the first time that a weakness is presented for WG-8, assuming that the attacker can obtain only a few dozen consecutive keystream bits for each IV. Finally, we give a new Key/IV loading proposal for WG-8, which takes an 80-bit secret key and a 64-bit IV as inputs. The new proposal keeps the basic structure of WG-8 and provides enough resistance against our related key attacks.
Keywords: computational complexity; cryptography; microprocessor chips; 80-bit initial vector; 80-bit secret key; Intel Pentium 4 processor; Welch-Gong stream cipher; frequency 2.5 GHz; key recovery attack; keystream generation; lightweight WG-8 stream cipher cryptanalysis; related key attack; slide property; time complexity; Ciphers; Clocks; Equations; Proposals; Time complexity; Cryptanalysis; WG-8; lightweight stream cipher; related key attack (ID#:14-2789)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6746224&isnumber=6755552
- Madhusudhan, R.; Kumar, S.R., "Cryptanalysis of a Remote User Authentication Protocol Using Smart Cards," Service Oriented System Engineering (SOSE), 2014 IEEE 8th International Symposium on, pp.474,477, 7-11 April 2014. doi: 10.1109/SOSE.2014.84 Remote user authentication using smart cards is a method of verifying the legitimacy of remote users accessing the server through an insecure channel, using smart cards to increase the efficiency of the system. During the last couple of years many protocols to authenticate remote users using smart cards have been proposed, but unfortunately most of them have been proved insecure against various attacks. Recently, Yung-Cheng Lee improved Shin et al.'s protocol and claimed that the improved protocol is more secure. In this article, we show that Yung-Cheng-Lee's protocol too has defects. It does not provide user anonymity, and it is vulnerable to denial-of-service attack, session key reveal, user impersonation attack, server impersonation attack and insider attacks. Further, it is not efficient in the password change phase, since it requires communication with the server and uses a verification table.
Keywords: computer network security; cryptographic protocols; message authentication; smart cards; Yung-Cheng-Lee's protocol; cryptanalysis; denial-of-service attack; insecure channel; insider attacks; legitimacy verification; password change phase; remote user authentication protocol; server impersonation attack; session key; smart cards; user impersonation attack; verification table; Authentication; Bismuth; Cryptography; Protocols; Servers; Smart cards; authentication; smart card; cryptanalysis; dynamic id (ID#:14-2790)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830951&isnumber=6825948
- Phuong Ha Nguyen; Sahoo, D.P.; Mukhopadhyay, D.; Chakraborty, R.S., "Cryptanalysis of Composite PUFs (Extended abstract-invited talk)," VLSI Design and Test, 18th International Symposium on, pp.1,2, 16-18 July 2014. doi: 10.1109/ISVDAT.2014.6881035 In recent years, Physically Unclonable Functions (PUFs) have become an important cryptographic primitive and are used in secure systems to resist physical attacks. Since PUFs have many useful properties such as memory-leakage resilience, unclonability and tamper resistance, they have drawn great interest in academia as well as industry. As extremely useful hardware security primitives, PUFs are used in various proposed applications such as device authentication and identification, random number generation, and intellectual property protection. One important requirement of PUFs is small hardware overhead, so that they can be utilized in lightweight applications such as RFID. To achieve this goal, Composite PUFs, built from many small PUF primitives, were developed and introduced at RECONFIG2013 and HOST2014. In this talk, we show that the Composite PUFs introduced at RECONFIG2013 are not secure by presenting their cryptanalysis.
Keywords: cryptography; data protection; message authentication; random number generation; composite PUFs cryptanalysis; cryptographic primitive; device authentication; intellectual property protection; physically unclonable functions; random number generation; Authentication; Computational modeling; Hardware; Industries; Random number generation; PUF; Physically unclonable function; cryptanalysis (ID#:14-2791)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881035&isnumber=6881034
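The talk's cryptanalysis of the specific RECONFIG2013 constructions is not detailed in the abstract, so the sketch below illustrates the general flavour of PUF modelling attacks using the standard linear delay model of a single arbiter PUF, not the authors' composite targets: responses follow sign(w . phi(challenge)), so observed challenge-response pairs let a simple perceptron learn an accurate software clone. The sizes and training parameters are illustrative.

```python
# Toy arbiter-PUF modelling attack under the linear delay-difference model.
import numpy as np

rng = np.random.default_rng(0)
n = 32                                   # challenge bits
w_true = rng.normal(size=n + 1)          # hidden delay parameters of the PUF

def phi(ch):
    """Parity feature vector of an arbiter PUF challenge."""
    prods = np.cumprod((1 - 2 * ch)[::-1])[::-1]   # prod of (1-2c_j) for j >= i
    return np.append(prods, 1.0)

def respond(ch):
    return 1 if phi(ch) @ w_true > 0 else 0

# Collect challenge-response pairs and train a perceptron clone.
X = rng.integers(0, 2, size=(3000, n))
y = np.array([respond(c) for c in X])
w = np.zeros(n + 1)
for _ in range(50):
    for c, r in zip(X, y):
        pred = 1 if phi(c) @ w > 0 else 0
        w += (r - pred) * phi(c)          # perceptron update

test = rng.integers(0, 2, size=(500, n))
acc = np.mean([(1 if phi(c) @ w > 0 else 0) == respond(c) for c in test])
print(f"model accuracy on fresh challenges: {acc:.2%}")   # typically very high
```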
- Huixian Li; Liaojun Pang, "Cryptanalysis of Wang et al.'s Improved Anonymous Multi-Receiver Identity-Based Encryption Scheme," Information Security, IET , vol.8, no.1, pp.8,11, Jan. 2014. doi: 10.1049/iet-ifs.2012.0354 Fan et al. proposed an anonymous multi-receiver identity-based encryption scheme in 2010, and showed that the identity of any legal receiver can be kept anonymous to anyone else. In 2012, Wang et al. pointed out that Fan et al.'s scheme cannot achieve the anonymity and that every legal receiver can determine whether the other is one of the legal receivers. At the same time, they proposed an improved scheme based on Fan et al.'s scheme to solve this anonymity problem. Unfortunately, the authors find that Wang et al.'s improved scheme still suffers from the same anonymity problem. Any legal receiver of Wang et al.'s improved scheme can judge whether anyone else is a legal receiver or not. In this study, the authors shall give the detailed anonymity analysis of Wang et al.'s improved scheme.
Keywords: broadcasting; cryptography; receivers; telecommunication security; Wang et al. improved scheme; anonymity problem; anonymous multireceiver identity-based encryption scheme; cryptanalysis; legal receiver (ID#:14-2792)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6687152&isnumber=6687150
- Sarvabhatla, Mrudula; Giri, M.; Vorugunti, Chandra Sekhar, "Cryptanalysis of "a Biometric-Based User Authentication Scheme For Heterogeneous Wireless Sensor Networks"," Contemporary Computing (IC3), 2014 Seventh International Conference on, pp.312,317, 7-9 Aug. 2014. doi: 10.1109/IC3.2014.6897192 The advancement of Internet of Things (IoT) technology and the rapid growth of WSN applications provide an opportunity to connect WSNs to the IoT, with the result that secure sensor data can be accessed over the insecure Internet. The integration of WSNs and the IoT introduces many security challenges and requires a strict user authentication mechanism. Quite a few isolated user verification or authentication schemes using passwords, biometrics and smart cards have been proposed in the literature. In 2013, A. K. Das et al. designed a biometric-based remote user verification scheme using smart cards for heterogeneous wireless sensor networks, and insisted that their scheme is secure against several known cryptographic attacks. Unfortunately, in this manuscript we show that their scheme fails to resist replay attack and user impersonation attack, fails to accomplish mutual authentication and fails to provide data privacy.
Keywords: Authentication; Biometrics (access control); Elliptic curve cryptography; Smart cards; Wireless sensor networks; Biometric; Cryptanalysis; Smart Card; User Authentication; Wireless Sensor Networks (ID#:14-2793)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6897192&isnumber=6897132
- Aboud, S.J.; Al-fayoumi, M., "Cryptanalysis Of Password Authentication System," Computer Science and Information Technology (CSIT), 2014 6th International Conference on, pp.14,17, 26-27 March 2014. doi: 10.1109/CSIT.2014.6805972 Password authentication systems have been increasing in recent years, and authors have therefore concentrated on introducing more of them. In 2011, Lee et al. presented an enhanced system to resolve the vulnerabilities of a selected system. But we notice that Lee et al.'s system is still weak against server attack and stolen smart card attack. Also, the password change protocol of the system is neither suitable for users nor efficient. No handy data can be gained from the values kept in smart cards, so a stolen smart card attack can be blocked. To prevent server attack, we suggest transferring the user authentication operation from servers to a registration centre, which can guarantee that every server has a distinct private key.
Keywords: cryptography; message authentication; smart cards; cryptanalysis; password authentication system; password change protocol; private key; registration centre; server attack; stolen smart card attack; user authentication operation; Authentication; Computer hacking; Cryptography; Protocols; Servers; Smart cards (ID#:14-2794)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6805972&isnumber=6805962
- Ahmadi, S.; Ahmadian, Z.; Mohajeri, J.; Aref, M.R., "Low-Data Complexity Biclique Cryptanalysis of Block Ciphers With Application to Piccolo and HIGHT," Information Forensics and Security, IEEE Transactions on, vol.9, no.10, pp.1641,1652, Oct. 2014. doi: 10.1109/TIFS.2014.2344445 In this paper, we present a framework for biclique cryptanalysis of block ciphers which requires an extremely low amount of data. To that end, we employ a new representation of the biclique attack based on a new concept of cutset that describes our attack more clearly. An algorithm for choosing two differential characteristics is then presented to simultaneously minimize the data complexity and control the computational complexity. We then characterize those block ciphers that are vulnerable to this technique and, among them, apply this attack on the lightweight block ciphers Piccolo-80, Piccolo-128, and HIGHT. The data complexity of these attacks is only 16 plaintext-ciphertext pairs, which is considerably less than the existing cryptanalytic results. In all the attacks, the computational complexity remains the same as in previous work or is even slightly improved.
Keywords: Ciphers; Computational complexity; Encryption; Optimization; Schedules; Biclique cryptanalysis; attack complexity; lightweight block ciphers (ID#:14-2795)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868260&isnumber=6891522
- Mala, H., "Biclique-based Cryptanalysis Of The Block Cipher SQUARE," Information Security, IET, vol.8, no.3, pp.207, 212, May 2014. doi: 10.1049/iet-ifs.2011.0332 SQUARE, an eight-round substitution-permutation block cipher, is considered a predecessor of the advanced encryption standard (AES). Recently, the concept of biclique-based key recovery of block ciphers was introduced and applied to full-round versions of three variants of AES. In this paper, this technique is applied to analyse the block cipher SQUARE. First, a biclique for three rounds of SQUARE using independent related-key differentials has been found. Then, an attack on this cipher is presented, with a data complexity of about 2^48 chosen plaintexts and a time complexity of about 2^125.7 encryptions. The attack is the first successful attack on full-round SQUARE in the single-key scenario.
Keywords: computational complexity; cryptography; AES; advanced encryption standard; biclique-based cryptanalysis; biclique-based key recovery; block cipher SQUARE; block ciphers; data complexity; eight-round substitution-permutation block cipher; independent related-key differentials; time complexity (ID#:14-2796)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786901&isnumber=6786849
- Kramer, J.; Kasper, M.; Seifert, J.-P., "The Role Of Photons In Cryptanalysis," Design Automation Conference (ASP-DAC), 2014 19th Asia and South Pacific, pp.780, 787, 20-23 Jan. 2014. doi: 10.1109/ASPDAC.2014.6742985 Photons can be exploited to reveal secrets of security ICs like smartcards, secure microcontrollers, and cryptographic coprocessors. One such secret is the secret key of cryptographic algorithms. This work gives an overview about current research on revealing these secret keys by exploiting the photonic side channel. Different analysis methods are presented. It is shown that the analysis of photonic emissions also helps to gain knowledge about the attacked device and thus poses a threat to modern security ICs. The presented results illustrate the differences between the photonic and other side channels, which do not provide fine-grained spatial information. It is shown that the photonic side channel has to be addressed by software engineers and during chip design.
Keywords: photons; private key cryptography; cryptanalysis; integrated circuit; photonic emissions; photonic side channel; photons; secret keys; security IC; Algorithm design and analysis; Cryptography; Detectors; Integrated circuits; Photonics; Random access memory (ID#:14-2797)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6742985&isnumber=6742831
- Xu, J.; Hu, L.; Sun, S., "Cryptanalysis of Two Cryptosystems Based On Multiple Intractability Assumptions," Communications, IET, vol.8, no.14, pp.2433,2437, Sept. 25 2014. doi: 10.1049/iet-com.2013.1101 Two public key cryptosystems based on the two intractable number-theoretic problems, integer factorisation and simultaneous Diophantine approximation, were proposed in 2005 and 2009, respectively. In this study, the authors break these two cryptosystems for the recommended minimum parameters by solving the corresponding modular linear equations with small unknowns. For the first scheme, the public modulus is factorised and the secret key is recovered with the Gauss algorithm. By using the LLL basis reduction algorithm for a seven-dimensional lattice, the public modulus in the second scheme is also factorised and the plaintext is recovered from a ciphertext. The authors' attacks are efficient and verified by experiments which were done within 5 s.
Keywords: (not provided) (ID#:14-2798)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900024&isnumber=6900021
- Kuo, Po-Chun; Cheng, Chen-Mou, "Lattice-based Cryptanalysis -- How To Estimate The Security Parameter Of Lattice-Based Cryptosystem," Consumer Electronics - Taiwan (ICCE-TW), 2014 IEEE International Conference on, pp.53,54, 26-28 May 2014. doi: 10.1109/ICCE-TW.2014.6904097 The usual cryptosystem behind debit cards is the RSA cryptosystem, which would be broken immediately by a quantum computer. Thus, post-quantum cryptography has risen, aiming to develop cryptosystems which resist quantum attacks. Lattice-based cryptography is one branch of post-quantum cryptography and is used to construct various cryptosystems. The central problem behind lattice-based cryptosystems is the Shortest Vector Problem (SVP): finding the shortest vector in the given lattice. Based on previous results, we re-design the implementation method to improve the performance on GPU. Moreover, we implement and compare the enumeration and sieve algorithms to solve SVP on GPU. Thus, we can estimate the security parameter of lattice-based cryptosystems in a reasonable way.
Keywords: Algorithm design and analysis; Approximation algorithms; Cryptography; Graphics processing units; Lattices; Vectors (ID#:14-2799)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6904097&isnumber=6903991
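As a concrete taste of the Shortest Vector Problem the paper targets, the following sketch solves SVP exactly in dimension 2 with Lagrange-Gauss reduction, the two-dimensional ancestor of the LLL, enumeration and sieving methods mentioned above; the input basis is an arbitrary example of my own. Real security estimates work in hundreds of dimensions, where only enumeration and sieving of the kind the authors benchmark remain practical.

```python
# Lagrange-Gauss reduction: exact SVP in dimension 2 (illustrative basis).
def gauss_reduce(u, v):
    """Return a reduced basis; the first vector is a shortest lattice vector."""
    def norm2(x):
        return x[0] * x[0] + x[1] * x[1]
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))  # nearest-integer mu
        v = (v[0] - m * u[0], v[1] - m * u[1])             # size-reduce v by u
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u                                        # swap and continue

print(gauss_reduce((5, 8), (6, 10)))   # ((1, 0), (0, 2)): (1, 0) is shortest
```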
- Jun Xu; Lei Hu; Siwei Sun; Yonghong Xie, "Cryptanalysis of Countermeasures Against Multiple Transmission Attacks on NTRU," Communications, IET, vol.8, no.12, pp.2142, 2146, August 14 2014. doi: 10.1049/iet-com.2013.1092 The original Number Theory Research Unit (NTRU) public key cryptosystem is vulnerable to multiple transmission attacks, and the designers of NTRU presented two countermeasures to prevent such attacks. In this study, the authors show that the first countermeasure is still not secure, the plaintext can be revealed by a linearisation attack technique. Moreover, they demonstrate that the first countermeasure is even not secure for broadcast attacks, a class of more general attacks than multiple transmission attacks. For the second countermeasure, they show that one special case of its padding function for the plaintext is also insecure and the original plaintext can be obtained by lattice methods.
Keywords: public key cryptography; broadcast attacks; lattice methods; linearisation attack technique; multiple transmission attacks; original NTRU public key cryptosystem (ID#:14-2800)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6871476&isnumber=6871466
- Li Wei; Tao Zhi; Gu Dawu; Sun Li; Qu Bo; Liu Zhiqiang; Liu Ya, "An Effective Differential Fault Analysis On The Serpent Cryptosystem In The Internet of Things," Communications, China, vol.11, no.6, pp.129,139, June 2014. doi: 10.1109/CC.2014.6879011 Due to its strong attacking ability, fast speed, simple implementation and other characteristics, differential fault analysis has become an important method to evaluate the security of cryptosystems in the Internet of Things. As one of the AES finalists, Serpent is a 128-bit Substitution-Permutation Network (SPN) cryptosystem. It has 32 rounds with a variable key length between 0 and 256 bits, which is flexible enough to provide security in the Internet of Things. On the basis of the byte-oriented model and differential analysis, we propose an effective differential fault attack on the Serpent cryptosystem. Mathematical analysis and simulated experiment show that the attack can recover the secret key by introducing 48 faulty ciphertexts. The results of this study describe in detail how Serpent is vulnerable to differential fault analysis, which will be beneficial to the analysis of other iterated cryptosystems of the same type.
Keywords: Internet of Things; computer network security; mathematical analysis; private key cryptography; Internet of Things; SPN cryptosystem; Serpent cryptosystem; byte-oriented model; cryptosystem security; differential fault analysis; differential fault attack; faulty ciphertexts; mathematical analysis; secret key recovery; substitution-permutation network cryptosystem; word length 0 bit to 256 bit; Educational institutions; Encryption; Internet of Things; Schedules; cryptanalysis; differential fault analysis; internet of things; serpent (ID#:14-2801)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6879011&isnumber=6878993
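To make the mechanics of differential fault analysis concrete, here is a miniature Python sketch of a toy last round c = S(x) XOR k, not Serpent itself: the S-box shown is PRESENT's, and the key, state and single-bit fault model are illustrative. The attacker compares correct and faulty ciphertexts and discards every key guess inconsistent with a one-bit difference at the S-box input, so a handful of faults isolates the round key.

```python
import random

# Toy DFA on a last round c = SBOX[x] ^ KEY (4-bit example, not Serpent).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
INV = [SBOX.index(i) for i in range(16)]
KEY = 0x9                                  # the secret the attacker recovers

def last_round(x, fault=0):
    return SBOX[x ^ fault] ^ KEY           # fault flips bits of the round input

random.seed(1)
candidates = set(range(16))
for _ in range(6):                         # a few fault-injection experiments
    x = random.randrange(16)               # unknown internal state
    c_ok = last_round(x)
    c_faulty = last_round(x, fault=1 << random.randrange(4))
    keep = set()
    for k in candidates:                   # which keys explain a 1-bit fault?
        diff = INV[c_ok ^ k] ^ INV[c_faulty ^ k]
        if bin(diff).count('1') == 1:
            keep.add(k)
    candidates &= keep
print(candidates)                          # shrinks to a tiny set containing 0x9
```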
- Tauleigne, R.; Datcu, O.; Stanciu, M., "Thwarting Cryptanalytic Attacks Based On The Correlation Function," Communications (COMM), 2014 10th International Conference on, pp.1, 4, 29-31 May 2014. doi: 10.1109/ICComm.2014.6866745 Many studies analyze encrypted transmission using the synchronization of chaotic signals. This requires the exchange of an analog synchronization signal, which almost always is a state of the chaotic generator. However, still very few different chaotic structures are used for this purpose. The uniqueness of their dynamics allows the identification of these structures by simple autocorrelation. In order to thwart cryptanalytic attacks based on the identification of this dynamics, we propose a memoryless numerical method to reversibly destroy the shape of the transmitted signal. After analog-to-digital conversion of the synchronization signal, we apply permutations of the weights of its bits to each binary word. These permutations significantly change the shape of the transmitted signal, increasing its versatility and spreading its spectrum. If the message is simply added to the synchronization signal, being the easiest to decrypt, it undergoes the same transformation. It is therefore extremely difficult to detect the message in the transmitted signal by using a temporal analysis, as well as a frequency one. The present work illustrates the proposed method for the chaotic Colpitts oscillator. Nevertheless, the algorithm does not depend on the chosen chaotic generator. Finally, by only increasing the size of the permutation matrix, the complexity of the change in the waveform increases factorially.
Keywords: analogue-digital conversion; chaos generators; correlation methods; cryptography; oscillators; signal detection; synchronisation; analog synchronization signal; analog-to-digital conversion; autocorrelation function; chaotic Colpitts oscillator; chaotic generator; chaotic structure identification; encrypted signal transmission; frequency analysis; message detection; temporal analysis; thwarting cryptanalytic attacks; weight permutation matrix; Chaotic communication; Computer hacking; Receivers; Shape; Synchronization; Transmitters; chaotic system; correlation; cryptanalysis; encryption; synchronization (ID#:14-2802)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6866745&isnumber=6866648
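The core trick, permuting the weights of the bits of each digitized sample, is simple to illustrate. In the sketch below the 8-bit permutation is an arbitrary example of my own; any bijection on bit positions works, scrambles the waveform (and hence its autocorrelation), and is undone exactly by the inverse permutation at the receiver.

```python
import math

# Reversibly scramble a digitized waveform by permuting bit weights per sample.
PERM = [3, 7, 0, 5, 1, 6, 2, 4]                 # bit i moves to position PERM[i]
INV_PERM = [PERM.index(i) for i in range(8)]

def permute_bits(sample, perm):
    out = 0
    for i in range(8):
        if sample >> i & 1:
            out |= 1 << perm[i]
    return out

signal = [int(127 + 100 * math.sin(t / 5)) for t in range(50)]  # stand-in ADC output
scrambled = [permute_bits(s, PERM) for s in signal]             # transmitted shape
restored = [permute_bits(s, INV_PERM) for s in scrambled]       # receiver undoes it
assert restored == signal                                       # fully reversible
```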
- Ali, A, "Some Words On Linearisation Attacks On FCSR-Based Stream Ciphers," Applied Sciences and Technology (IBCAST), 2014 11th International Bhurban Conference on, pp.195, 202, 14-18 Jan. 2014. doi: 10.1109/IBCAST.2014.6778145 Linearisation attacks are effective against those stream ciphers whose analysis theory depends on the properties of 2-adic numbers. This paper discusses these attacks in the context of Feedback with Carry Shift Register (FCSR) based stream ciphers. In this context, linearisation attacks build upon the theory of linearisation intervals of the FCSR state update function. The paper presents detailed theoretical results on FCSRs, which describe various operational aspects of the FCSR state update function in relation to the linearisation intervals. Linearisation attacks combine these theoretical results on FCSRs with the concepts of well-known techniques of cryptanalysis, which depend upon the structures of the specific ciphers to be analysed, such as linear cryptanalysis, correlation attacks, guess-and-determine attacks, and algebraic attacks. In the context of FCSR-based stream ciphers, the paper describes three variants of linearisation attacks, named "Conventional Linearisation Attacks", "Fast Linearisation Attacks" and "Improved Linearisation Attacks". These variants provide trade-offs between data, time and memory complexities with respect to each other. Moreover, this paper also presents a detailed comparison of linearisation attacks with other well-known techniques of cryptanalysis.
Keywords: algebra; cryptography; shift registers; FCSR state update function; FCSR-based stream ciphers; Feedback with Carry Shift Register; algebraic attacks; conventional linearisation attacks; correlation attacks; fast linearisation attacks; guess-and-determine attacks; improved linearisation attacks; linear cryptanalysis; linearisation interval theory; trade-offs; Adders; Ciphers; Equations; Hamming weight; Mathematical model; Registers; CLAs; FLAs; ILAs; New results; linearisation attacks; tradeoffs (ID#:14-2803)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778145&isnumber=6778084
- Khan, AK.; Mahanta, H.J., "Side Channel Attacks And Their Mitigation Techniques," Automation, Control, Energy and Systems (ACES), 2014 First International Conference on, pp.1,4, 1-2 Feb. 2014. doi: 10.1109/ACES.2014.6807983 Side channel cryptanalysis is one of the most active fields of research in security. It has proved that cryptanalysis is no longer confined to plaintext or ciphertext alone. Indeed, a side channel attack uses the physical characteristics of the cryptographic device to find the cryptographic algorithm used and also the secret key. It is one of the most efficient techniques and has successfully broken almost all the cryptographic algorithms in use today. In this paper we present a review of the various side channel attacks possible, along with the techniques proposed to mitigate such attacks.
Keywords: cryptography; cryptographic device; volatile field; mitigation technique; security prospect; side channel attack; side channel cryptanalysis; Ciphers; Elliptic curve cryptography; Encryption; Hardware; Timing; AES; DES; DPA; Power Analysis; SPA; cryptographic device (ID#:14-2804)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6807983&isnumber=6807973
- Rudra, M.R.; Daniel, N.A; Nagoorkar, V.; Hoe, D.H.K., "Designing Stealthy Trojans With Sequential Logic: A Stream Cipher Case Study," Design Automation Conference (DAC), 2014 51st ACM/EDAC/IEEE, pp.1,4, 1-5 June 2014. doi: 10.1145/2593069.2596677 This paper describes how a stealthy Trojan circuit can be inserted into a stream cipher module. The stream cipher utilizes several shift register-like structures to implement the keystream generator and to process the encrypted text. We demonstrate how an effective trigger can be built with the addition of just a few logic gates inserted between the shift registers and one additional flip-flop. By distributing the inserted Trojan logic both temporally and over the logic design space, the malicious circuit is hard to detect by both conventional and more recent static analysis methods. The payload is designed to weaken the cipher strength, making it more susceptible to cryptanalysis by an adversary.
Keywords: cryptography; flip-flops; invasive software; logic design; sequential circuits; shift registers; cipher strength; cryptanalysis; encrypted text; flip-flop; keystream generator; logic design space; logic gates; malicious circuit; sequential logic; shift register-like structures; static analysis methods; stealthy trojan circuit; stream cipher module; trojan logic; Ciphers; Encryption; Hardware; Logic gates; Shift registers; Trojan horses; hardware trojan; sequential-based Trojan; stream cipher (ID#:14-2805)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881499&isnumber=6881325
- Chouhan, D.S.; Mahajan, R.P., "An Architectural Framework For Encryption & Generation Of Digital Signature Using DNA Cryptography," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp.743,748, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828061 As most modern encryption algorithms are broken fully or partially, the world of information security looks in new directions to protect the data it transmits. The concept of using DNA computing in the field of cryptography has been identified as a possible technology that may bring forward a new hope for hybrid and unbreakable algorithms. Currently, several DNA computing algorithms have been proposed for cryptography, cryptanalysis and steganography problems, and they have proven to be very powerful in these areas. This paper gives an architectural framework for encryption and generation of digital signatures using DNA cryptography. To analyze performance, the original plaintext size and the key size, together with the encryption and decryption times, are examined; experiments on plaintexts with different contents are also performed to test the robustness of the program.
Keywords: biocomputing; digital signatures; DNA computing; DNA cryptography; architectural framework; cryptanalysis; decryption time; digital signature encryption; digital signature generation; encryption algorithms; encryption time; information security; key size; plaintext size; steganography; Ciphers; DNA; DNA computing; Digital signatures; Encoding; Encryption; DNA; DNA computing; DNA cryptography; DNA digital coding (ID#:14-2806)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828061&isnumber=6827395
- Te-Yu Chen; Chung-Huei Ling; Min-Shiang Hwang, "Weaknesses of the Yoon-Kim-Yoo Remote User Authentication Scheme Using Smart Cards," Electronics, Computer and Applications, 2014 IEEE Workshop on, pp.771,774, 8-9 May 2014. doi: 10.1109/IWECA.2014.6845736 A user authentication scheme is a mechanism employed by a server to authenticate the legality of a user before he/she is allowed to access the resource or service provided by the server. Due to the Internet's openness and lack of security concern, the user authentication scheme is one of the most important security primitives in Internet activities. Many researchers have been devoted to the study of this issue, and many authentication schemes have been proposed up to now; however, most of these schemes have both advantages and disadvantages. Recently, Yoon, Kim and Yoo proposed a remote user authentication scheme which is an improvement of Liaw et al.'s scheme. Unfortunately, we find their scheme is not secure enough. In this paper, we present some flaws in Yoon-Kim-Yoo's scheme. The proposed cryptanalysis contributes important heuristics on security concerns for researchers designing remote user authentication schemes.
Keywords: Internet; cryptography; message authentication; smart cards; Internet activities; Yoon-Kim-Yoo remote user authentication scheme weakness; cryptanalysis; security primitives; smart cards; Cryptography; Entropy; Ice; Smart card; cryptography; guessing attack; user authentication (ID#:14-2807)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845736&isnumber=6845536
- Ximeng Liu; Jianfeng Ma; Jinbo Xiong; Qi Li; Tao Zhang; Hui Zhu, "Threshold Attribute-Based Encryption With Attribute Hierarchy For Lattices In The Standard Model," Information Security, IET, vol.8, no.4, pp.217,223, July 2014. doi: 10.1049/iet-ifs.2013.0111 Attribute-based encryption (ABE) has been considered a promising cryptographic primitive for realising information security and flexible access control. However, attributes are treated as being at the same level in most proposed schemes. Lattice-based cryptography has attracted much attention because it can resist quantum cryptanalysis. In this study, a lattice-based threshold hierarchical ABE (lattice-based t-HABE) scheme without random oracles is constructed and proved to be secure against selective attribute set and chosen plaintext attacks under the standard hardness assumption of the learning with errors problem. The notion of the HABE scheme can be considered a generalisation of the traditional ABE scheme where all attributes have the same level.
Keywords: authorisation; cryptography; attribute characteristics; attribute hierarchy; cryptographic primitive; flexible access control; information security; lattice-based cryptography; lattice-based t-HABE scheme; lattice-based threshold hierarchical ABE scheme; plaintext attacks; quantum cryptanalysis; random oracles; selective attribute set; standard model; threshold attribute-based encryption (ID#:14-2808)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6842406&isnumber=6842405
- Shao-zhen Chen; Tian-min Xu, "Biclique Key Recovery for ARIA-256," Information Security, IET, vol.8, no.5, pp.259,264, Sept. 2014. doi: 10.1049/iet-ifs.2012.0353 In this study, combining the biclique cryptanalysis with the meet-in-the-middle (MITM) attack, the authors present the first key recovery method for the full ARIA-256 faster than brute-force. The attack requires 2^80 chosen plaintexts, and the time complexity is about 2^255.2 full-round ARIA encryptions.
Keywords: cryptography; MITM attack; biclique cryptanalysis; biclique key recovery; first key recovery method; full-round ARIA encryptions; meet-in-the-middle attack; time complexity (ID#:14-2809)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6881822&isnumber=6881821
- Zadeh, AA; Heys, H.M., "Simple Power Analysis Applied To Nonlinear Feedback Shift Registers," Information Security, IET, vol.8, no.3, pp.188, 198, May 2014. doi: 10.1049/iet-ifs.2012.0186 Linear feedback shift registers (LFSRs) and nonlinear feedback shift registers (NLFSRs) are major components of stream ciphers. It has been shown that, under certain idealised assumptions, LFSRs and LFSR-based stream ciphers are susceptible to cryptanalysis using simple power analysis (SPA). In this study, the authors show that SPA can be practically applied to a CMOS digital hardware circuit to determine the bit values of an NLFSR, and SPA therefore has applicability to NLFSR-based stream ciphers. A new approach is used with the cryptanalyst collecting power consumption information from the system on both edges (triggering and non-triggering) of the clock in the digital hardware circuit. The method is applied using simulated power measurements from an 80-bit NLFSR targeted to a 180 nm CMOS implementation. To overcome inaccuracies associated with mapping power measurements to the cipher data, the authors offer novel analytical techniques which help the analysis to find the bit values of the NLFSR. Using the obtained results, the authors analyse the complexity of the analysis on the NLFSR and show that SPA is able to successfully determine the NLFSR bits with modest computational complexity and a small number of power measurement samples.
Keywords: CMOS logic circuits; computational complexity; cryptography; power aware computing; shift registers; CMOS digital hardware circuit; LFSR; LFSR-based stream ciphers; NLFSR-based stream ciphers; SPA; bit value determination; cipher data; clock edges; computational complexity; cryptanalysis; digital hardware circuit; linear feedback shift registers; NLFSR; nonlinear feedback shift registers; power consumption information; simple power analysis; simulated power measurements; size 180 nm; stream ciphers; word length 80 bit (ID#:14-2810)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6786955&isnumber=6786849
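The following sketch illustrates the style of leakage the paper exploits, under an idealized noise-free power model of my own: each clock of a small NLFSR leaks the number of toggling flip-flops (a Hamming-distance sample), and exhaustively matching predicted traces against the observed one pins down the internal state. The register size and feedback function are illustrative, and real attacks must cope with measurement noise, which is what the paper's analytical techniques address.

```python
# Idealized SPA on a toy NLFSR: the "power trace" is one Hamming-distance
# sample per clock. Register length and feedback function are hypothetical.
N = 12                                   # register length (small for the demo)

def step(state):
    # illustrative nonlinear feedback: x0 XOR x3 XOR (x5 AND x7)
    fb = (state & 1) ^ ((state >> 3) & 1) ^ (((state >> 5) & 1) & ((state >> 7) & 1))
    return (state >> 1) | (fb << (N - 1))

def trace(state, clocks=40):
    out = []
    for _ in range(clocks):
        nxt = step(state)
        out.append(bin(state ^ nxt).count('1'))   # flip-flop toggles per clock
        state = nxt
    return out

secret = 0b101101110010
observed = trace(secret)                 # what the attacker measures
survivors = [s for s in range(1 << N) if trace(s) == observed]
print(survivors, secret in survivors)    # typically a tiny set containing secret
```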
- Harish, P.D.; Roy, S., "Energy Oriented Vulnerability Analysis on Authentication Protocols for CPS," Distributed Computing in Sensor Systems (DCOSS), 2014 IEEE International Conference on, pp.367,371, 26-28 May 2014. doi: 10.1109/DCOSS.2014.52 In this work we compute the energy generated by modular exponentiation, a widely used powerful tool in password authentication protocols for cyber physical systems. We observe modular exponentiation to be an expensive operation in terms of energy consumption, in addition to being known to be computationally intensive. We then analyze the security and energy consumption of an advanced smart card based password authentication protocol for cyber physical systems that uses modular exponentiation. We devise a generic cryptanalysis method on the protocol, in which the attacker exploits the energy and computationally intensive nature of modular exponentiation to perform a denial of service (DoS) attack. We also show other similar protocols to be vulnerable to this attack. We then suggest methods to prevent this attack.
Keywords: authorisation; energy conservation; CPS; DoS attack; cyber physical systems; denial-of-service attack; energy consumption; energy oriented vulnerability analysis; modular exponentiation; smart card based password authentication protocol; Authentication; Energy consumption; Energy measurement; Protocols; Servers; Smart cards (ID#:14-2811)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846192&isnumber=6846129
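The energy argument can be made concrete by counting multiplications in square-and-multiply modular exponentiation; the count is a rough proxy of my own (not the paper's measurement methodology) for the energy a server burns per authentication request, and hence for what an attacker can exhaust with bogus requests.

```python
# Count modular multiplications in left-to-right square-and-multiply.
def modexp_cost(base, exp, mod):
    result, muls = 1, 0
    for bit in bin(exp)[2:]:
        result = (result * result) % mod          # square for every bit
        muls += 1
        if bit == '1':
            result = (result * base) % mod        # multiply on set bits
            muls += 1
    return result, muls

n = (1 << 1024) + 1                               # stand-in 1024-bit modulus
e = (1 << 1024) - 1                               # worst case: all exponent bits set
r, cost = modexp_cost(7, e, n)
assert r == pow(7, e, n)                          # sanity check against built-in
print(cost)                                       # 2048 big-number multiplications
```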
Data at Rest - Data in Motion
Data protection has distinguished between data in motion and data at rest for more than a decade. Research into these areas continues with the proliferation of cloud and mobile technologies. The articles cited here, separated by motion and rest, were offered in the first half of 2014.
Data in Motion:
- Ediger, D.; McColl, R.; Poovey, J.; Campbell, D., "Scalable Infrastructures for Data in Motion," Cluster, Cloud and Grid Computing (CCGrid), 2014 14th IEEE/ACM International Symposium on, vol., no., pp.875,882, 26-29 May 2014. doi: 10.1109/CCGrid.2014.91 Analytics applications for reporting and human interaction with big data rely upon scalable frameworks for data ingest, storage, and computation. Batch processing of analytic workloads increases latency of results and can perform redundant computation. In real-world applications, new data points are continuously arriving and a suite of algorithms must be updated to reflect the changes. Reducing the latency of re-computation by keeping algorithms online and up-to-date enables fast query, experimentation, and drill-down. In this paper, we share our experiences designing and implementing scalable infrastructure around No SQL databases for social media analytics applications. We propose a new heterogeneous architecture and execution model for streaming data applications that focuses on throughput and modularity.
Keywords: Big Data; SQL; data analysis; social networking (online); NoSQL databases; analytic workloads; batch processing; big data; data in motion; data ingest; data storage; execution model; heterogeneous architecture; recomputation latency reduction; redundant computation; scalable infrastructures; social media analytics applications; streaming data applications; Algorithm design and analysis; Clustering algorithms; Computational modeling; Data structures; Databases; Media; Servers (ID#:14-2753)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6846541&isnumber=6846423
- Veiga Neves, M.; De Rose, C.AF.; Katrinis, K.; Franke, H., "Pythia: Faster Big Data in Motion through Predictive Software-Defined Network Optimization at Runtime," Parallel and Distributed Processing Symposium, 2014 IEEE 28th International, pp.82,90, 19-23 May 2014. doi: 10.1109/IPDPS.2014.20 The rise of Internet of Things sensors, social networking and mobile devices has led to an explosion of available data. Gaining insights into this data has led to the area of Big Data analytics. The MapReduce framework, as implemented in Hadoop, is one of the most popular frameworks for Big Data analysis. To handle the ever-increasing data size, Hadoop is a scalable framework that allows dedicated, seemingly unbound numbers of servers to participate in the analytics process. Response time of an analytics request is an important factor for time to value/insights. While the compute and disk I/O requirements can be scaled with the number of servers, scaling the system leads to increased network traffic. Arguably, the communication-heavy phase of MapReduce contributes significantly to the overall response time, the problem is further aggravated, if communication patterns are heavily skewed, as is not uncommon in many MapReduce workloads. In this paper we present a system that reduces the skew impact by transparently predicting data communication volume at runtime and mapping the many end-to-end flows among the various processes to the underlying network, using emerging software-defined networking technologies to avoid hotspots in the network. Dependent on the network oversubscription ratio, we demonstrate reduction in job completion time between 3% and 46% for popular MapReduce benchmarks like Sort and Nutch.
Keywords: Big Data; computer networks; parallel programming; telecommunication traffic; Big Data analytics; Hadoop; MapReduce workloads; Nutch MapReduce benchmark; Pythia; Sort MapReduce benchmark; communication patterns; communication-heavy phase; compute requirements; data communication volume prediction; data size; disk I/O requirements; end-to-end flow mapping; job completion time reduction; network oversubscription ratio; network traffic; predictive software-defined network optimization; response time; runtime analysis; scalable framework; system scaling; unbound server numbers; Big data; Instruments; Job shop scheduling; Resource management; Routing; Runtime; Servers; Data communication; Data processing; Distributed computing (ID#:14-2754)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6877244&isnumber=6877223
- Hou, Junhui; Bian, Zhen-Peng; Chau, Lap-Pui; Magnenat-Thalmann, Nadia; He, Ying, "Restoring Corrupted Motion Capture Data Via Jointly Low-Rank Matrix Completion," Multimedia and Expo (ICME), 2014 IEEE International Conference on , vol., no., pp.1,6, 14-18 July 2014. doi: 10.1109/ICME.2014.6890222 Motion capture (mocap) technology is widely used in various applications. The acquired mocap data usually has missing data due to occlusions or ambiguities. Therefore, restoring the missing entries of the mocap data is a fundamental issue in mocap data analysis. Based on jointly low-rank matrix completion, this paper presents a practical and highly efficient algorithm for restoring the missing mocap data. Taking advantage of the unique properties of mocap data (i.e, strong correlation among the data), we represent the corrupted data as two types of matrices, where both the local and global characteristics are taken into consideration. Then we formulate the problem as a convex optimization problem, where the missing data is recovered by solving the two matrices using the alternating direction method of multipliers algorithm. Experimental results demonstrate that the proposed scheme significantly outperforms the state-of-the-art algorithms in terms of both the quality and computational cost.
Keywords: Accuracy; Computational efficiency; Computers; Convex functions; Image restoration; Optimization; Trajectory; Motion capture; convex optimization; low-rank; matrix completion (ID#:14-2755)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890222&isnumber=6890121
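For readers unfamiliar with low-rank matrix completion, the sketch below shows the general technique in its simplest form: iterative singular value thresholding on a single matrix with a fixed shrinkage parameter. The paper's method is more elaborate (two joint matrix representations solved with ADMM), so treat this only as an illustration of the underlying principle.

    import numpy as np

    def svt_complete(M, observed, tau=5.0, n_iter=200):
        """Fill missing entries of M (low-rank assumption) by iterative
        singular value thresholding. `observed` is a boolean mask."""
        X = np.where(observed, M, 0.0)
        for _ in range(n_iter):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s = np.maximum(s - tau, 0.0)          # shrink singular values
            X = (U * s) @ Vt                      # low-rank estimate
            X[observed] = M[observed]             # keep known entries exact
        return X

    rng = np.random.default_rng(0)
    true = rng.standard_normal((60, 4)) @ rng.standard_normal((4, 30))  # rank 4
    mask = rng.random(true.shape) > 0.3           # ~30% of entries missing
    rec = svt_complete(true, mask)
    print("relative error:", np.linalg.norm(rec - true) / np.linalg.norm(true))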
- Tennekoon, R.; Wijekoon, J.; Harahap, E.; Nishi, H.; Saito, E.; Katsura, S., "Per Hop Data Encryption Protocol For Transmission Of Motion Control Data Over Public Networks," Advanced Motion Control (AMC), 2014 IEEE 13th International Workshop on, pp.128,133, 14-16 March 2014. doi: 10.1109/AMC.2014.6823269 Bilateral controllers are a widely used and vital technology for performing remote operations and telesurgeries. The nature of the bilateral controller enables control of objects that are geographically far from the operation location, so the control data has to travel through public networks. As a result, to maintain the effectiveness and consistency of applications such as teleoperations and telesurgeries, fast data delivery and data integrity are essential. The Service-oriented Router (SoR) was introduced to maintain rich information on the Internet and to achieve maximum benefit from networks. In particular, the security, privacy and integrity of bilateral communication have not been addressed, despite their significance given the skill information or personal vital information this communication carries. An SoR can analyze all packet or network stream transactions on its interfaces and store them in high-throughput databases. In this paper, we introduce a hop-by-hop routing protocol which provides hop-by-hop data encryption using functions of the SoR. This infrastructure can provide security, privacy and integrity by using these functions. Furthermore, we present an implementation of the proposed system in the ns-3 simulator; the test results show that, in a given scenario, the protocol incurs a processing delay of only 46.32 ms per packet for the encryption and decryption processes.
Keywords: Internet; computer network security; control engineering computing; cryptographic protocols; data communication; data integrity; data privacy; force control; medical robotics; motion control; position control; routing protocols; surgery; telecontrol; telemedicine; telerobotics; Internet; SoR; bilateral communication; bilateral controller; control objects; data delivery; data integrity; decryption process; hop-by-hop data encryption; hop-by-hop routing protocol; motion control data transmission; network stream transaction analysis; ns-3 simulator; operation location; packet analysis; per hop data encryption protocol; personal vital information; privacy; processing delay; public network; remote operation; security; service-oriented router; skill information; teleoperation; telesurgery; throughput database; Delays; Encryption; Haptic interfaces; Routing protocols; Surgery; Bilateral Controllers; Service-oriented Router; hop-by-hop routing; motion control over networks; ns-3 (ID#:14-2756)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6823269&isnumber=6823244
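The per-hop idea can be illustrated in a few lines of Python. In the sketch below, each link between neighboring hops has its own symmetric key, and every hop decrypts and re-encrypts the payload in transit; the hop names and the use of Fernet (from the `cryptography` package) are illustrative choices, not the paper's protocol.

    # Hop-by-hop re-encryption sketch: each SoR-like hop shares a symmetric
    # key with its neighbor, decrypting and re-encrypting the payload.
    # Keys and hop names are invented; the paper's protocol details differ.
    from cryptography.fernet import Fernet

    hops = ["operator", "router-a", "router-b", "robot"]
    # One shared key per link (hop i <-> hop i+1).
    link_keys = {(hops[i], hops[i + 1]): Fernet(Fernet.generate_key())
                 for i in range(len(hops) - 1)}

    packet = b"position=0.42;force=1.7"       # motion-control sample
    for src, dst in zip(hops, hops[1:]):
        f = link_keys[(src, dst)]
        ciphertext = f.encrypt(packet)        # encrypted on this link only
        packet = f.decrypt(ciphertext)        # next hop recovers plaintext
        print(f"{src} -> {dst}: {len(ciphertext)} ciphertext bytes")
    print(packet)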
Data at Rest
- Ferretti, L.; Colajanni, M.; Marchetti, M., "Distributed, Concurrent, and Independent Access to Encrypted Cloud Databases," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.2, pp.437,446, Feb. 2014. doi: 10.1109/TPDS.2013.154 Placing critical data in the hands of a cloud provider should come with the guarantee of security and availability for data at rest, in motion, and in use. Several alternatives exist for storage services, while data confidentiality solutions for the database as a service paradigm are still immature. We propose a novel architecture that integrates cloud database services with data confidentiality and the possibility of executing concurrent operations on encrypted data. This is the first solution supporting geographically distributed clients to connect directly to an encrypted cloud database, and to execute concurrent and independent operations including those modifying the database structure. The proposed architecture has the further advantage of eliminating intermediate proxies that limit the elasticity, availability, and scalability properties that are intrinsic in cloud-based solutions. The efficacy of the proposed architecture is evaluated through theoretical analyses and extensive experimental results based on a prototype implementation subject to the TPC-C standard benchmark for different numbers of clients and network latencies.
Keywords: cloud computing; cryptography; database management systems; TPC-C standard benchmark; availability property; cloud database services; concurrent access; data confidentiality; database structure modification; distributed access; elasticity property; encrypted cloud database; encrypted data concurrent operation execution; geographically distributed clients; independent access; intermediate proxies elimination; network latencies; scalability property; Cloud; SecureDBaaS; confidentiality; database; security (ID#:14-2757)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6522403&isnumber=6689796
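The basic trust model, where the provider stores only ciphertext while keys stay with the clients, can be sketched as follows. This is a minimal illustration using SQLite as a stand-in for the cloud database and Fernet for encryption; SecureDBaaS's encrypted metadata, concurrency control and support for structure-modifying operations are well beyond this sketch.

    # Client-side encryption sketch: the (simulated) cloud database stores
    # only ciphertext; keys never leave the client.
    import sqlite3
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # held by clients, never by the provider
    f = Fernet(key)

    db = sqlite3.connect(":memory:")        # stand-in for the cloud database
    db.execute("CREATE TABLE patients (id INTEGER, record BLOB)")
    db.execute("INSERT INTO patients VALUES (?, ?)",
               (1, f.encrypt(b"diagnosis: hypertension")))

    ciphertext = db.execute(
        "SELECT record FROM patients WHERE id = 1").fetchone()[0]
    print(f.decrypt(ciphertext))            # only the client can read this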
- Woods, Jacqueline; Iyengar, Sridhar; Sinha, Amit; Mitra, Subhasish; Cannady, Stacy, "A New Era Of Computing: Are You "Ready Now" To Build A Smarter And Secured Enterprise?," Quality Electronic Design (ISQED), 2014 15th International Symposium on, pp.1,7, 3-5 March 2014. doi: 10.1109/ISQED.2014.6783293 We are experiencing fundamental changes in how we interact, live, work and succeed in business. To support the new paradigm, computing must be simpler, more responsive and more adaptive, with the ability to seamlessly move from monolithic applications to dynamic services, from structured data at rest to unstructured data in motion, from supporting standard device interfaces to supporting a myriad of new and different devices every day. IBM understands this need to integrate social, mobile, cloud and big data to deliver value for your enterprise, so join this discussion, and learn how IBM helps customers leverage these technologies for superior customer value.
Keywords: (not provided) (ID#:14-2758)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6783293&isnumber=6783285
- Rodriguez Garcia, Ricardo; Thorpe, Julie; Vargas Martin, Miguel, "Crypto-assistant: Towards Facilitating Developer's Encryption Of Sensitive Data," Privacy, Security and Trust (PST), 2014 Twelfth Annual International Conference on, pp.342,346, 23-24 July 2014. doi: 10.1109/PST.2014.6890958 The lack of encryption of data at rest or in motion is one of the top 10 database vulnerabilities [1]. We suggest that this vulnerability could be prevented by encouraging developers to perform encryption-related tasks by enhancing their integrated development environment (IDE). To this end, we created the Crypto-Assistant: a modified version of the Hibernate Tools plug-in for the popular Eclipse IDE. The purpose of the Crypto-Assistant is to mitigate the impact of developers' lack of security knowledge related to encryption by facilitating the use of encryption directives via a graphical user interface that seamlessly integrates with Hibernate Tools. Two preliminary tests helped us to identify items for improvement which have been implemented in Crypto-Assistant. We discuss Crypto-Assistant's architecture, interface, changes in the developers' workflow, and design considerations.
Keywords: Databases; Encryption; Java; Prototypes; Software (ID#:14-2759)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890958&isnumber=6890911
- Hankins, R.Q.; Jigang Liu, "A Novel Approach To Evaluating Similarity In Computer Forensic Investigations," Electro/Information Technology (EIT), 2014 IEEE International Conference on, pp.567,572, 5-7 June 2014. doi: 10.1109/EIT.2014.6871826 Abstraction-based approaches to data analysis in computer forensics require substantial human effort to determine what data is useful. Automated or semi-automated, similarity-based approaches allow rapid computer forensics analysis of large data sets with less focus on untangling many layers of abstraction. Rapid and automated ranking of data by its value to a computer forensics investigation eliminates much of the human effort required in the computer forensics process, leaving investigators to judge and specify what data is interesting while the rest of the analysis is automated. In this paper, we develop two algorithms that find portions of a string relevant to an investigation and then refine those portions using a combination of human and computer analysis, rapidly and effectively extracting the most useful data from the string while speeding up, automatically documenting, and partially automating the analysis.
Keywords: data analysis; digital forensics; abstraction-based approach; computer analysis; computer forensic investigations; data analysis; data ranking; human analysis; similarity evaluation; similarity-based approach; Algorithm design and analysis; Computational complexity; Computers; Digital forensics; Measurement (ID#:14-2760)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6871826&isnumber=6871745
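As a rough illustration of similarity-based ranking (not the two algorithms developed in the paper), artifacts can be ordered by character n-gram Jaccard similarity to an exemplar the investigator has marked as interesting; all filenames below are invented.

    # Rank candidate strings by character n-gram Jaccard similarity to an
    # exemplar the investigator marked as "interesting". This is a generic
    # similarity ranking, not the paper's two algorithms.
    def ngrams(s, n=3):
        return {s[i:i + n] for i in range(len(s) - n + 1)}

    def jaccard(a, b, n=3):
        ga, gb = ngrams(a, n), ngrams(b, n)
        return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

    exemplar = "invoice_2014_final.xls"
    candidates = ["invoice_2014_draft.xls", "holiday_photo.jpg",
                  "invoices_2013.xls", "system32.dll"]
    for c in sorted(candidates, key=lambda c: -jaccard(exemplar, c)):
        print(f"{jaccard(exemplar, c):.2f}  {c}")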
- D'Orazio, C.; Ariffin, A.; Choo, K.-K.R., "iOS Anti-forensics: How Can We Securely Conceal, Delete and Insert Data?," System Sciences (HICSS), 2014 47th Hawaii International Conference on, pp.4838,4847, 6-9 Jan. 2014. doi: 10.1109/HICSS.2014.594 With the increasing popularity of smart mobile devices such as iOS devices, security and privacy concerns have emerged as a salient area of inquiry. A relatively under-studied area is anti-mobile forensics to prevent or inhibit forensic investigations. In this paper, we propose a "Concealment" technique to enhance the security of non-protected (Class D) data that is at rest on iOS devices, as well as a "Deletion" technique to reinforce data deletion from iOS devices. We also demonstrate how our "Insertion" technique can be used to surreptitiously insert data into iOS devices in a way that would be hard to detect in a forensic investigation.
Keywords: data privacy; digital forensics; iOS (operating system); mobile computing; mobile handsets; antimobile forensics; concealment technique; data deletion; deletion technique; forensic investigations; iOS antiforensics; iOS devices; insertion technique; nonprotected data security; privacy concerns; security concerns; smart mobile devices; Cryptography; File systems; Forensics; Mobile handsets; Random access memory; Videos; iOS anti-forensics; iOS forensics; mobile anti-forensics; mobile forensics (ID#:14-2761)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6759196&isnumber=6758592
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Edge Detection
Edge detection is an important issue in image and signal processing. The work cited here includes an overview of the topic, several approaches, and applications for radar and sonar. These works were presented or published between January and August of 2014.
- Waghule, D.R.; Ochawar, R.S., "Overview on Edge Detection Methods," Electronic Systems, Signal Processing and Computing Technologies (ICESC), 2014 International Conference on, pp.151,155, 9-11 Jan. 2014. doi: 10.1109/ICESC.2014.31 An edge in an image is a contour across which the brightness of the image changes abruptly. Edge detection plays a vital role in image processing: it is a process that detects the presence and location of edges constituted by sharp changes in the intensity of the image. An important property of an edge detection method is its ability to extract an accurate edge line with good orientation. Different edge detectors work better under different conditions, and comparative evaluation of different edge detection methods makes it easier to decide which method is appropriate for image segmentation. This paper presents an overview of the published work on edge detection.
Keywords: edge detection; image segmentation; edge detection methods; image intensity; image processing; image segmentation; sharp changes; Algorithm design and analysis; Detectors; Field programmable gate arrays; Image edge detection; Morphology; Wavelet transforms; Edge Detection; Edge Detectors; FPGA; Wavelets (ID#:14-2812)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6745363&isnumber=6745317
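As a point of reference for the surveyed methods, the simplest gradient-based detector looks like the sketch below: Sobel derivatives, gradient magnitude, and a global threshold. The threshold factor and toy image are arbitrary illustrative choices.

    # Baseline gradient edge detector of the kind most surveyed methods
    # refine: Sobel derivatives, gradient magnitude, global threshold.
    import numpy as np
    from scipy import ndimage

    img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0   # toy image: a square
    gx = ndimage.sobel(img, axis=1)                      # horizontal derivative
    gy = ndimage.sobel(img, axis=0)                      # vertical derivative
    magnitude = np.hypot(gx, gy)
    edges = magnitude > 0.5 * magnitude.max()            # simple global threshold
    print("edge pixels:", int(edges.sum()))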
- Isik, S.; Ozkan, K., "A Novel Multi-Scale And Multi-Expert Edge Detection Method Based On Common Vector Approach," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp.1630,1633, 23-25 April 2014. doi: 10.1109/SIU.2014.6830558 Edge detection is one of the most popular problems in image analysis. An edge detection method should be computationally efficient, minimally sensitive to noise, and able to extract meaningful edges from the image; many edge detection algorithms have emerged in pursuit of these goals. Different derivative operators, and possibly different scales, are needed to properly determine all meaningful edges in a processed image. In this work, we combine the edge information obtained from each operator at different scales using the concept of the common vector approach, and obtain edge segments that are connected, thin and robust to noise.
Keywords: edge detection; expert systems; common vector approach; crowded edge detection algorithms; edge information; image analysis; multiexpert edge detection method; multiscale edge detection method; Conferences; Image edge detection; Noise; Pattern recognition; Speech; Vectors; common vector approach; edge detection; multi-expert; multi-scale (ID#:14-2813)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830558&isnumber=6830164
- Wenlong Fu; Johnston, M.; Mengjie Zhang, "Low-Level Feature Extraction for Edge Detection Using Genetic Programming," Cybernetics, IEEE Transactions on, vol.44, no.8, pp.1459,1472, Aug. 2014. doi: 10.1109/TCYB.2013.2286611 Edge detection is a subjective task. Traditionally, a moving window approach is used, but the window size in edge detection is a tradeoff between localization accuracy and noise rejection. An automatic technique for searching a discriminated pixel's neighbors to construct new edge detectors is appealing for satisfying different tasks. In this paper, we propose a genetic programming (GP) system to automatically search pixels (a discriminated pixel and its neighbors) to construct new low-level subjective edge detectors for detecting edges in natural images, and we analyze the pixels selected by the GP edge detectors. Automatically searching pixels avoids the problems of blurred edges from a large window and noise influence from a small window. Linear and second-order filters are constructed from the pixels with high occurrences in these GP edge detectors. The experimental results show that the proposed GP system has good performance. A comparison between filters built from the pixels selected by GP and from all pixels in a fixed window indicates that the set of pixels selected by GP is compact but sufficiently rich to construct good edge detectors.
Keywords: edge detection; feature extraction; filtering theory; genetic algorithms; image denoising; GP system; edge detection; genetic programming; linear filters; localization accuracy; low-level feature extraction; natural images; noise rejection; second-order filters; Accuracy; Detectors; Educational institutions; Feature extraction; Image edge detection; Noise; Training; Edge detection; feature extraction; genetic programming (ID#:14-2814)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6649981&isnumber=6856256
- Naumenko, A.V.; Lukin, V.V.; Vozel, B.; Chehdi, K.; Egiazarian, K., "Neural Network Based Edge Detection In Two-Look And Dual-Polarization Radar Images," Radar Symposium (IRS), 2014 15th International, pp.1,4, 16-18 June 2014. doi: 10.1109/IRS.2014.6869302 Edge detection is a standard operation in image processing. It becomes problematic if the noise is not additive, not Gaussian and not i.i.d., as happens in images acquired by synthetic aperture radar (SAR). To perform edge detection better, it has recently been proposed to apply a trained neural network (NN) and SAR image pre-filtering in single-look mode. In this paper, we demonstrate that the proposed detector is, after certain modifications, applicable to edge detection in two-look and dual-polarization SAR images with and without pre-filtering. Moreover, we show that the recently introduced AUC (Area Under the Curve) parameter can be helpful in optimizing the parameters of the elementary edge detectors used as inputs to the NN edge detector. Quantitative analysis results confirming the efficiency of the proposed detector are presented. Its performance is also studied on real-life TerraSAR-X data.
Keywords: edge detection; neural nets; radar computing; radar imaging; radar polarimetry; synthetic aperture radar; NN edge detector; SAR image pre-filtering; area under the curve; dual-polarization radar images; image processing; neural network based edge detection; parameter optimization; real-life TerraSAR-X data; single-look mode; synthetic aperture radar; two-look radar images; Artificial neural networks; Detectors; Image edge detection; Noise; Speckle; Synthetic aperture radar; Training; Synthetic aperture radar; edge detection; neural network; polarimetric; speckle; two-look images (ID#:14-2815)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6869302&isnumber=6869176
- Tong Chunya; Teng Linlin; Zhou Jiaming; He Kejia; Zhong Qiubo, "A Novel Method Of Edge Detection With Gabor Wavelet Based On FFTW," Electronics, Computer and Applications, 2014 IEEE Workshop on, pp.625,628, 8-9 May 2014. doi: 10.1109/IWECA.2014.6845697 Because remote sensing images contain substantial data and complex landmarks, they place high demands on the edge detection operator. Using the Gabor wavelet as the edge detection operator overcomes the limitations of gradient operators and the Canny operator in edge detection. However, methods based on the 2-D Gabor wavelet take more time. In response to this shortcoming of the Gabor wavelet, this paper presents an edge detection method based on parallel processing with FFTW and the Gabor wavelet; experimental analysis shows that this method can greatly improve the processing speed of the algorithm.
Keywords: Gabor filters; discrete Fourier transforms; edge detection; geophysical image processing; remote sensing; wavelet transforms; 2D Gabor wavelet; Canny operator; FFTW; complex landmark feature; discrete Fourier transformation; edge detection method; grads operator; parallel processing; remote sensing images; substantial data feature; Image edge detection; Image resolution; Wavelet transforms; FFTW; Gabor wavelet; edge detection; parallel processing; remote sensing images (ID#:14-2816)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845697&isnumber=6845536
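The frequency-domain trick that makes this fast is that spatial convolution with a Gabor kernel becomes a pointwise product of Fourier transforms. The sketch below shows one orientation and scale using NumPy's FFT in place of FFTW; the kernel parameters are illustrative, and a full detector would combine several orientations and scales.

    # One orientation/scale of a 2-D Gabor filter applied in the frequency
    # domain (circular convolution). NumPy's FFT stands in for FFTW here.
    import numpy as np

    def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return (np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
                * np.cos(2 * np.pi * xr / wavelength))

    img = np.zeros((128, 128)); img[:, 64:] = 1.0        # vertical step edge
    kern = gabor_kernel(theta=0.0)
    # Pointwise product of FFTs == circular convolution in the image domain.
    response = np.real(np.fft.ifft2(np.fft.fft2(img) *
                                    np.fft.fft2(kern, s=img.shape)))
    print("max response:", float(np.abs(response).max()))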
- Nai-Quei Chen; Jheng-Jyun Wang; Li-An Yu; Chung-Yen Su, "Sub-pixel Edge Detection of LED Probes Based on Canny Edge Detection and Iterative Curve Fitting," Computer, Consumer and Control (IS3C), 2014 International Symposium on, pp.131,134, 10-12 June 2014. doi: 10.1109/IS3C.2014.45 In recent years, demand for LEDs has been increasing. Testing the quality of LEDs requires LED probes, so their accuracy and manufacturing methods have attracted growing attention from companies. To date, LED probes have been ground by hand. During processing, both the angle and the radius of a probe must be considered (the radius is between 0.015 mm and 0.03 mm), so it is hard to balance precision and quality. In this study, we propose an effective method to measure the angle and radius of a probe, based on Canny edge detection and iterative curve fitting. Experimental results show the effectiveness of the proposed method.
Keywords: curve fitting; edge detection; iterative methods; light emitting diodes; Canny edge detection; LED probes; LED quality test; iterative curve fitting; probe angle; probe radius; subpixel edge detection; Computational modeling; Curve fitting; Equations; Image edge detection; Light emitting diodes; Mathematical model; Probes; Edge detection; LED; Probe; Sub-pixel edge detection (ID#:14-2817)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845477&isnumber=6845429
- Nascimento, A.D.C.; Horta, M.M.; Frery, A.C.; Cintra, R.J., "Comparing Edge Detection Methods Based on Stochastic Entropies and Distances for PolSAR Imagery," Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of, vol.7, no.2, pp.648,663, Feb. 2014. doi: 10.1109/JSTARS.2013.2266319 Polarimetric synthetic aperture radar (PolSAR) has achieved a prominent position as a remote imaging method. However, PolSAR images are contaminated by speckle noise due to the coherent illumination employed during the data acquisition. This noise gives a granular aspect to the image, making its processing and analysis (such as edge detection) hard tasks. This paper discusses seven methods for edge detection in multilook PolSAR images. In all methods, the basic idea consists in detecting transition points in the finest possible strip of data which spans two regions. The edge is contoured using the transition points and a B-spline curve. Four stochastic distances, two differences of entropies, and the maximum likelihood criterion were used under the scaled complex Wishart distribution; the first six stem from the (h,φ) class of measures. The performance of the discussed detection methods was quantified and analyzed by the computational time and probability of correct edge detection, with respect to the number of looks, the backscatter matrix as a whole, the SPAN, the covariance and the spatial resolution. The detection procedures were applied to three real PolSAR images. Results provide evidence that the methods based on the Bhattacharyya distance and the difference of Shannon entropies outperform the other techniques.
Keywords: data acquisition; edge detection; entropy; geophysical techniques; image resolution; maximum likelihood estimation; radar imaging; radar polarimetry; remote sensing by radar; speckle; splines (mathematics); statistical distributions; stochastic processes; synthetic aperture radar; B-spline curve; Bhattacharyya distance; SPAN; Shannon entropies; backscatter matrix; coherent illumination; computational time; data acquisition; detection methods; detection procedures; edge detection methods; image analysis; image processing; look number; maximum likelihood criterion; multilook PolSAR images; polarimetric synthetic aperture radar; probability; real PolSAR images; remote imaging method; scaled complex Wishart distribution; spatial resolution; speckle noise; stochastic distances; stochastic entropies; transition points; Edge detection; image analysis; information theory; polarimetric SAR (ID#:14-2818)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6550901&isnumber=6730960
- Weibin Rong; Zhanjing Li; Wei Zhang; Lining Sun, "An Improved Canny Edge Detection Algorithm," Mechatronics and Automation (ICMA), 2014 IEEE International Conference on, pp.577,582, 3-6 Aug. 2014. doi: 10.1109/ICMA.2014.6885761 The traditional Canny edge detection algorithm is sensitive to noise; it easily loses weak edge information when filtering out the noise, and its fixed parameters show poor adaptability. In response to these problems, this paper proposes an improved algorithm based on the Canny algorithm. The algorithm introduces the concept of gravitational field intensity to replace the image gradient, yielding a gravitational field intensity operator. Two adaptive threshold selection methods, based on the mean and standard deviation of the image gradient magnitude, are put forward for two kinds of typical images (one with little edge information, the other with rich edge information). The improved Canny algorithm is simple and easy to realize. Experimental results show that the algorithm preserves more useful edge information and is more robust to noise.
Keywords: edge detection; adaptive threshold selection methods; gravitational field intensity operator; image gradient magnitude; improved Canny edge detection algorithm; standard deviation; Algorithm design and analysis; Histograms; Image edge detection; Noise; Robustness; Standards; Tires; Adaptive threshold; Canny algorithm; Edge detection; Gravitational field intensity operator (ID#:14-2819)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6885761&isnumber=6885661
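The adaptive-threshold idea can be approximated with standard tools: derive Canny's two thresholds from the mean and standard deviation of the gradient magnitude. The sketch below uses OpenCV's ordinary gradient rather than the paper's gravitational field intensity operator, and the weights 0.5 and 1.0 (and the file name) are assumptions.

    # Adaptive Canny thresholds from gradient statistics, in the spirit of
    # the paper's mean/standard-deviation rule. The gravitational field
    # intensity operator itself is not reproduced; k1=0.5 and k2=1.0 are
    # illustrative guesses, and "input.png" is a placeholder path.
    import cv2
    import numpy as np

    img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
    gx = cv2.Sobel(img, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(img, cv2.CV_64F, 0, 1)
    mag = np.hypot(gx, gy)

    mean, std = mag.mean(), mag.std()
    low = max(0.0, mean - 0.5 * std)     # k1 = 0.5, assumed
    high = mean + 1.0 * std              # k2 = 1.0, assumed
    edges = cv2.Canny(img, low, high)
    cv2.imwrite("edges.png", edges)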
- Catak, M.; Duran, N., "2-Dimensional Auto-Regressive Process Applied To Edge Detection," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp.1442,1445, 23-25 April 2014. doi: 10.1109/SIU.2014.6830511 Edge detection has important applications in the image processing area. In addition to well-known deterministic approaches, stochastic models have been developed and validated for edge detection. In this study, a stochastic auto-regressive process method is presented and applied to gray-scale and color images. Results are compared to other well-recognized edge detectors, and the applicability of the developed method is pointed out.
Keywords: edge detection; image colour analysis; stochastic processes; autoregressive process; color scale images; edge detection; edge detectors; gray scale images; image processing; stochastic autoregressive process method; stochastic models; Art; Conferences; Feature extraction; Image edge detection; MATLAB; Signal processing; Stochastic processes; auto-regressive process; color image processing; edge detection (ID#:14-2820)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830511&isnumber=6830164
- Wang, Xingmei; Liu, Guangyu; Li, Lin; Liu, Zhipeng, "A Novel Quantum-Inspired Algorithm For Edge Detection Of Sonar Image," Control Conference (CCC), 2014 33rd Chinese, pp.4836,4841, 28-30 July 2014. doi: 10.1109/ChiCC.2014.6895759 In order to accurately extract the underwater object contours in sonar images, a novel quantum-inspired edge detection algorithm is proposed. The algorithm uses the parameters of an anisotropic second-order MRF (Markov Random Field) model to describe the texture features of the original sonar image and to smooth noise. On this basis, the sonar image is represented by quantum bits according to quantum theory, and an edge detection operator for the sonar image is constructed by establishing a quantum superposition relationship between pixels. The quantum-inspired edge detection results are evaluated by PSNR (Peak Signal to Noise Ratio). Comparative experiments demonstrate that the proposed algorithm smooths the original sonar image well and extracts underwater object contours accurately, with better adaptability.
Keywords: Histograms; Image edge detection; PSNR; Quantum mechanics; Sonar detection; Edge Detection; Peak Signal to Noise Ratio; Quantum-inspired; Sonar image (ID#:14-2821)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6895759&isnumber=6895198
- Baselice, F.; Ferraioli, G.; Reale, D., "Edge Detection Using Real and Imaginary Decomposition of SAR Data," Geoscience and Remote Sensing, IEEE Transactions on, vol.52, no.7, pp.3833,3842, July 2014. doi: 10.1109/TGRS.2013.2276917 The objective of synthetic aperture radar (SAR) edge detection is the identification of contours across the investigated scene, exploiting SAR complex data. Edge detectors available in the literature separately exploit amplitude and interferometric phase information, looking for reflectivity or height differences between neighboring pixels, respectively. Recently, better-performing detectors based on the joint processing of amplitude and interferometric phase data have been presented. In this paper, we propose a novel approach based on the exploitation of the real and imaginary parts of single-look complex acquired data. The technique is developed in the framework of stochastic estimation theory, exploiting Markov random fields. Compared to available edge detectors, the proposed technique shows useful advantages in terms of model complexity, phase artifact robustness, and scenario applicability. Experimental results on both simulated and real TerraSAR-X and COSMO-SkyMed data show the interesting performance and overall effectiveness of the proposed method.
Keywords: edge detection; geophysical image processing; remote sensing by radar; synthetic aperture radar; COSMO-SkyMed data; Markov random fields; SAR complex data; SAR data imaginary decomposition; SAR data real decomposition; TerraSAR-X data; amplitude phase information; contour identification; edge detection; interferometric phase data; interferometric phase information; single-look complex acquired data; stochastic estimation theory; synthetic aperture radar; Buildings; Detectors; Estimation; Image edge detection; Joints; Shape; Synthetic aperture radar; Edge detection; Markov random fields (MRFs); synthetic aperture radar (SAR) (ID#:14-2822)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6595051&isnumber=6750067
- Byungjin Chung; Joohyeok Kim; Changhoon Yim, "Fast Rough Mode Decision Method Based On Edge Detection For Intra Coding in HEVC," Consumer Electronics (ISCE 2014), The 18th IEEE International Symposium on, pp.1,2, 22-25 June 2014. doi: 10.1109/ISCE.2014.6884419 In this paper, we propose a fast rough mode decision method based on edge detection for intra coding in HEVC. It performs edge detection using the Sobel operator and estimates the angular direction using gradient values. Histogram mapping is used to reduce the number of prediction modes for full rate-distortion optimization (RDO). The proposed method achieves a processing speed improvement through reduced RDO computation. Simulation results show that encoding time is reduced significantly compared to HM-13.0, with acceptable BD-PSNR and BD-rate.
Keywords: edge detection; video coding; BD-rate; HEVC; HM-13.0; RDO computation reduction; Sobel operator; acceptable BD-PSNR; angular direction estimation; edge detection; encoding time; fast rough mode decision method; full RDO; full rate-distortion optimization; gradient values; histogram mapping; intracoding; prediction mode number reduction; processing speed improvement; Encoding; Histograms; Image edge detection; Rate-distortion; Simulation; Standards; Video coding; HEVC; edge detection; intra prediction; prediction mode (ID#:14-2823)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6884419&isnumber=6884278
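The gist of the rough mode decision step, histogramming a block's gradient directions and sending only the dominant angular modes to full RDO, can be sketched as follows. The bin-to-mode mapping here is a simplification for illustration and does not reproduce the HM reference encoder.

    # Prune HEVC intra angular candidates from a block's gradient directions:
    # histogram the edge orientations, keep the dominant bins, and only
    # those modes would go to full rate-distortion optimization.
    import numpy as np

    def candidate_modes(block, keep=3):
        gy, gx = np.gradient(block.astype(float))
        angles = (np.degrees(np.arctan2(gy, gx)) + 180.0) % 180.0
        weights = np.hypot(gx, gy)
        # 33 bins standing in for HEVC's 33 angular prediction modes (2..34).
        hist, _ = np.histogram(angles, bins=33, range=(0, 180),
                               weights=weights)
        top_bins = np.argsort(hist)[-keep:]
        return sorted(int(b) + 2 for b in top_bins)   # map bin -> mode index

    block = np.tile(np.arange(8), (8, 1)) * 32         # horizontal ramp
    print("candidate angular modes:", candidate_modes(block))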
- Muhammad, A.; Bala, I.; Salman, M.S.; Eleyan, A., "DWT Subbands Fusion Using Ant Colony Optimization For Edge Detection," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp.1351,1354, 23-25 April 2014. doi: 10.1109/SIU.2014.6830488 In this paper, a new approach for image edge detection using wavelet-based ant colony optimization (ACO) is proposed. The proposed approach applies the discrete wavelet transform (DWT) to the image. ACO is then applied separately to the four generated subbands (approximation, horizontal, vertical, and diagonal) for edge detection. After obtaining edges from the four subbands, the inverse DWT is applied to fuse the results into one image of the same size as the original. The proposed approach outperforms the conventional ACO approach.
Keywords: ant colony optimisation; discrete wavelet transforms; edge detection; image fusion; ACO; DWT subbands fusion; ant colony optimization; discrete wavelet transform; image edge detection; inverse DWT; Ant colony optimization; Conferences; Discrete wavelet transforms; Image edge detection; Image reconstruction; Signal processing algorithms; ant colony optimization; discrete wavelet transform; edge detection (ID#:14-2824)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830488&isnumber=6830164
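The pipeline's skeleton might look like the following sketch using PyWavelets, with a plain magnitude threshold standing in for the ant colony search applied to each subband; the wavelet choice, threshold, and toy image are illustrative.

    # Skeleton of the DWT-subband fusion pipeline using PyWavelets. A plain
    # magnitude threshold stands in for the per-subband ACO edge search.
    import numpy as np
    import pywt

    img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0   # toy image
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")            # 4 subbands

    def crude_edges(band):
        # Placeholder for the per-subband ACO edge detector.
        return np.where(np.abs(band) > 0.25 * np.abs(band).max(), band, 0.0)

    fused = pywt.idwt2((crude_edges(cA),
                        tuple(map(crude_edges, (cH, cV, cD)))),
                       "haar")                           # fuse back to image size
    print(fused.shape)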
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Expert Systems
Expert systems based on fuzzy logic hold promise for solving many problems. The research presented here addresses black hole attacks in wireless sensor networks, a fuzzy tool for conducting information security risk assessments, an expert code generator, and other topics. These works were presented between January and August of 2014.
- Taylor, V.F.; Fokum, D.T., "Mitigating Black Hole Attacks In Wireless Sensor Networks Using Node-Resident Expert Systems," Wireless Telecommunications Symposium (WTS), 2014, pp.1, 7, 9-11 April 2014. doi: 10.1109/WTS.2014.6835013 Wireless sensor networks consist of autonomous, self-organizing, low-power nodes which collaboratively measure data in an environment and cooperate to route this data to its intended destination. Black hole attacks are potentially devastating attacks on wireless sensor networks in which a malicious node uses spurious route updates to attract network traffic that it then drops. We propose a robust and flexible attack detection scheme that uses a watchdog mechanism and lightweight expert system on each node to detect anomalies in the behaviour of neighbouring nodes. Using this scheme, even if malicious nodes are inserted into the network, good nodes will be able to identify them based on their behaviour as inferred from their network traffic. We examine the resource-preserving mechanisms of our system using simulations and demonstrate that we can allow groups of nodes to collectively evaluate network traffic and identify attacks while respecting the limited hardware resources (processing, memory and storage) that are typically available on wireless sensor network nodes.
Keywords: expert systems; telecommunication computing; telecommunication network routing; telecommunication security; telecommunication traffic; wireless sensor networks; autonomous self-organizing low-power nodes; black hole attacks; flexible attack detection scheme; lightweight expert system; malicious node; network traffic; node-resident expert systems; resource-preserving mechanisms; spurious route updates; watchdog mechanism; wireless sensor networks; Cryptography; Expert systems; Intrusion detection; Monitoring; Routing; Routing protocols; Wireless sensor networks (ID#:14-2825)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6835013&isnumber=6834983
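The flavor of a node-resident detection rule can be conveyed in a few lines: the watchdog counts how many packets each neighbour was handed versus how many it actually relayed, and a rule flags neighbours whose forwarding ratio is implausibly low. The thresholds and counts below are invented for illustration, not taken from the paper.

    # Flavor of a node-resident watchdog rule: compare how many packets each
    # neighbour was asked to forward with how many it was overheard relaying.
    from dataclasses import dataclass

    @dataclass
    class NeighbourStats:
        handed_over: int      # packets sent to this neighbour for relay
        forwarded: int        # packets the watchdog overheard it relay

    MIN_SAMPLE = 20           # don't judge on too little evidence
    MIN_FORWARD_RATIO = 0.5   # below this, behaviour looks like a black hole

    def suspicious(stats: NeighbourStats) -> bool:
        if stats.handed_over < MIN_SAMPLE:
            return False
        return stats.forwarded / stats.handed_over < MIN_FORWARD_RATIO

    neighbours = {"n7": NeighbourStats(120, 113),
                  "n9": NeighbourStats(95, 4)}   # n9 drops nearly everything
    for node, stats in neighbours.items():
        print(node, "SUSPECT" if suspicious(stats) else "ok")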
- Bartos, J.; Walek, B.; Klimes, C.; Farana, R., "Fuzzy Tool For Conducting Information Security Risk Analysis," Control Conference (ICCC), 2014 15th International Carpathian, pp.28,33, 28-30 May 2014. doi: 10.1109/CarpathianCC.2014.6843564 The following article proposes a fuzzy tool for conducting risk analysis in the area of information security. The paper reviews today's approaches (qualitative and quantitative methodologies) and, together with already published results, proposes a fuzzy tool to support our novel approach. The fuzzy tool itself is proposed and each of its main parts is described. The proposed fuzzy tool is connected with an expert system and a methodology that form part of a more complex approach to the decision-making process. The knowledge base of the expert system is created from user input values and knowledge of the problem domain. The proposed fuzzy tool is demonstrated on examples and problems from the area of information security.
Keywords: expert systems; fuzzy set theory; risk analysis; security of data; decision making process; expert system; fuzzy tool; information security risk analysis; qualitative methodologies; quantitative methodologies; Expert systems; Information security; Organizations; Risk management; expert system; fuzzy; fuzzy tool; information security; risk analysis; uncertainty (ID#:14-2826)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6843564&isnumber=6843557
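A Mamdani-style inference of the kind such a tool builds on can be written out by hand in a few lines. The sketch below estimates a risk score from crisp likelihood and impact inputs using triangular membership functions, min/max for rule activation, and centroid defuzzification; the membership shapes and the two rules are illustrative assumptions, not the tool's knowledge base.

    # Minimal Mamdani-style fuzzy risk estimate: triangular memberships,
    # min for AND, max for OR and aggregation, centroid defuzzification.
    import numpy as np

    def tri(x, a, b, c):
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    likelihood, impact = 0.7, 0.8                  # crisp inputs in [0, 1]
    risk_axis = np.linspace(0, 1, 201)

    # Rule 1: IF likelihood high AND impact high THEN risk high
    fire_high = min(tri(likelihood, 0.5, 1.0, 1.5), tri(impact, 0.5, 1.0, 1.5))
    # Rule 2: IF likelihood low OR impact low THEN risk low
    fire_low = max(tri(likelihood, -0.5, 0.0, 0.5), tri(impact, -0.5, 0.0, 0.5))

    agg = np.maximum(np.minimum(tri(risk_axis, 0.5, 1.0, 1.5), fire_high),
                     np.minimum(tri(risk_axis, -0.5, 0.0, 0.5), fire_low))
    risk = float((risk_axis * agg).sum() / agg.sum())   # centroid
    print(f"estimated risk: {risk:.2f}")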
- Imam, A.T.; Rousan, T.; Aljawarneh, S., "An Expert Code Generator Using Rule-Based And Frames Knowledge Representation Techniques," Information and Communication Systems (ICICS), 2014 5th International Conference on, pp.1,6, 1-3 April 2014. doi: 10.1109/IACS.2014.6841951 This paper demonstrates the development of an expert code generator using rule-based and frames knowledge representation techniques (ECG-RF). The ECG-RF system presented in this paper is a passive code generator that carries out the task of automatic code generation in fixed-structure software. To develop the ECG-RF system, the artificial intelligence (AI) techniques of rule-based systems and frames knowledge representation were applied to a code generation task. ECG-RF fills a predefined frame of a certain fixed-structure program with code chunks retrieved from ECG-RF's knowledge base. The filling operation is performed by ECG-RF's inference engine and is guided by information collected from the user via a graphical user interface (GUI). In this paper, an ECG-RF system for generating a device driver program is presented and implemented in VBasic. The results show that the ECG-RF design concept is reasonably reliable.
Keywords: graphical user interfaces; inference mechanisms; knowledge based systems; program compilers; ECG-RF design concept; ECG-RF inference engine; ECG-RF knowledge base; ECG-RF system; GUI; VBasic software; artificial intelligence; automatic code generation; code chunks; code generation task; device driver program; expert code generator; fixed-structure program; fixed-structure software; frames knowledge representation techniques; graphic user interface; passive code generator; rule-based system; Engines; Generators; Graphical user interfaces; Knowledge representation; Programming; Software; Software engineering; Automatic Code Generation; Expert System; Frames Knowledge Representation Techniques; Software Development (ID#:14-2827)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841951&isnumber=6841931
- Mavropoulos, C.; Ping-Tsai Chung, "A Rule-based Expert System: Speakeasy - Smart Drink Dispenser," Systems, Applications and Technology Conference (LISAT), 2014 IEEE Long Island, pp.1,6, 2-2 May 2014. doi: 10.1109/LISAT.2014.6845224 In this paper, we develop a knowledge-based expert system case study called the Speakeasy Expert System (S.E.S.) as an exercise in rule-based expert system programming in both CLIPS and VisiRule. CLIPS stands for "C Language Integrated Production System"; it is an expert system tool created to facilitate the development of software that models human knowledge or expertise. VisiRule is a tool that allows experts to build decision models using a graphical paradigm, one that can be annotated using code and/or Boolean logic and then executed and exported to other programs and processes. Nowadays, billions of computing devices are interconnected in computing and communications, ranging from desktop personal computers, laptops, servers and embedded computers to small devices such as mobile phones. This growth shows no signs of slowing down and has given rise to a new technology in computing and communications: the Internet of Things (IoT). In this study, we propose extending the S.E.S. into a Smart Drink Dispenser using IoT technology. We present a data flow diagram of the S.E.S. in an IoT environment and its IoT architecture, and propose the usage and implementation of the S.E.S.
Keywords: Boolean functions; C language; decision making; expert systems; Boolean logic; C language integrated production system; CLIPS; SES; VisiRule; decision models; graphical paradigm; human knowledge; knowledge-based expert system; rule-based expert system programming; smart drink dispenser; speakeasy expert system; Alcoholic beverages; Business; Decision trees; Expert systems; Internet of Things; Artificial Intelligence (AI); CLIPS; Decision Making Information System; Internet of Things (IOT); Knowledge-based Expert Systems; Radio-frequency Identification (RFID); VisiRule (ID#:14-2828)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6845224&isnumber=6845183
- Yuzuguzel, H.; Cemgil, A.T.; Anarim, E., "Query Ranking Strategies In Probabilistic Expert Systems," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp.1199,1202, 23-25 April 2014. doi: 10.1109/SIU.2014.6830450 The number of features is quite high in many fields. For instance, the number of symptoms is around a thousand in probabilistic medical expert systems. Since it is not practical to query all the symptoms to reach a diagnosis, query choice becomes important. In this work, three query ranking strategies for probabilistic expert systems are proposed and their performance on synthetic data is evaluated.
Keywords: medical diagnostic computing; medical expert systems; probability; query processing; medical diagnosis; probabilistic expert systems; probabilistic medical expert systems; query ranking strategies; Conferences; Entropy; Expert systems; Inference algorithms; Probabilistic logic; Sequential diagnosis; Signal processing; medical diagnosis; relative-entropy; sequential diagnosis (ID#:14-2829)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830450&isnumber=6830164
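One classical ranking strategy (possibly differing from the three the authors propose) is to ask next the symptom whose answer is expected to reduce the diagnosis entropy the most, i.e., to maximize information gain. The toy probability model below is invented for illustration.

    # Query-ranking sketch: choose the next symptom query by expected
    # reduction in diagnosis entropy (information gain).
    import math

    def entropy(dist):
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)

    # P(disease), and P(symptom present | disease) for two candidate queries.
    p_d = {"flu": 0.6, "cold": 0.4}
    p_s_given_d = {"fever":  {"flu": 0.9, "cold": 0.2},
                   "sneeze": {"flu": 0.5, "cold": 0.6}}

    def info_gain(symptom):
        gain = entropy(p_d)
        for present in (True, False):
            joint = {d: p_d[d] * (p_s_given_d[symptom][d] if present
                                  else 1 - p_s_given_d[symptom][d])
                     for d in p_d}
            p_evidence = sum(joint.values())
            posterior = {d: j / p_evidence for d, j in joint.items()}
            gain -= p_evidence * entropy(posterior)
        return gain

    for s in p_s_given_d:
        print(f"{s}: expected information gain {info_gain(s):.3f}")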
- GaneshKumar, P.; Rani, C.; Devaraj, D.; Victoire, T.A.A., "Hybrid Ant Bee Algorithm for Fuzzy Expert System Based Sample Classification," Computational Biology and Bioinformatics, IEEE/ACM Transactions on, vol.11, no.2, pp.347,360, March-April 2014. doi: 10.1109/TCBB.2014.2307325 Accuracy maximization and complexity minimization are the two main goals of fuzzy expert system based microarray data classification. Our previous Genetic Swarm Algorithm (GSA) approach improved the classification accuracy of the fuzzy expert system at the cost of interpretability: the if-then rules produced by the GSA are lengthy and complex, which is difficult for a physician to understand. To address this interpretability-accuracy tradeoff, the rule set is represented using integer numbers and the task of rule generation is treated as a combinatorial optimization task. Ant colony optimization (ACO) with local and global pheromone updating is applied to find the fuzzy partition based on the gene expression values for generating a simpler rule set. To handle the formless and continuous expression values of a gene, this paper employs the artificial bee colony (ABC) algorithm to evolve the points of the membership function. Mutual information is used for identification of informative genes. The performance of the proposed hybrid Ant Bee Algorithm (ABA) is evaluated using six gene expression data sets. The simulation study shows that the proposed approach generates an accurate fuzzy system with highly interpretable and compact rules for all the data sets when compared with other approaches.
Keywords: ant colony optimisation; classification; fuzzy systems; genetic algorithms; genetics; genomics; medical expert systems; ABA; ACO; GSA; Genetic Swarm Algorithm approach; accuracy maximization; ant colony optimization; artificial bee colony algorithm; classification accuracy; combinatorial optimization task; complexity minimization; continuous expression values; formless expression values; fuzzy expert system based microarray data classification; fuzzy partition; gene expression data sets; gene expression values; global pheromone updation; hybrid ant bee algorithm; if-then rules; informative gene identification; integer numbers; interpretability-accuracy tradeoff; local pheromone updation; membership function; mutual information; rule generation; rule set; sample classification; simulation study; Accuracy; Computational biology; Data models; Expert systems; Fuzzy systems; Gene expression; IEEE transactions; Microarray data; ant colony optimization; artificial bee colony; fuzzy expert system; mutual information (ID#:14-2830)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6746045&isnumber=6819503
- Carreto, C.; Baltazar, M., "An Expert System for Mobile Devices Based On Cloud Computing," Information Systems and Technologies (CISTI), 2014 9th Iberian Conference on, pp.1,6, 18-21 June 2014. doi: 10.1109/CISTI.2014.6876953 This paper describes the implementation of an Expert System for Android mobile devices, directed at the common user, with the ability to use different knowledge bases selectable by the user. The system uses a cloud computing-based architecture to facilitate the creation and distribution of different knowledge bases.
Keywords: cloud computing; expert systems; mobile computing; smart phones; Android mobile devices; cloud computing-based architecture; expert system; knowledge base; mobile devices; Androids; Engines; Expert systems; Google; Humanoid robots; Mobile communication; Android; Cloud computing; Expert System (ID#:14-2831)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876953&isnumber=6876860
- Pokhrel, J.; Lalanne, F.; Cavalli, A.; Mallouli, W., "QoE Estimation for Web Service Selection Using a Fuzzy-Rough Hybrid Expert System," Advanced Information Networking and Applications (AINA), 2014 IEEE 28th International Conference on, pp.629,634, 13-16 May 2014. doi: 10.1109/AINA.2014.77 With the proliferation of web services on the Internet, it has become important for service providers to select the best services for their clients in accordance with their functional and non-functional requirements. Generally, QoS parameters are used to select the best-performing web services; however, these parameters do not necessarily reflect the user's satisfaction. Therefore, it is necessary to estimate the quality of web services on the basis of user satisfaction, i.e., Quality of Experience (QoE). In this paper, we propose a novel method based on a fuzzy-rough hybrid expert system for estimating the QoE of web services for web service selection. It also presents how different QoS parameters impact the QoE of web services. For this, we conducted subjective tests in a controlled environment with real users to correlate QoS parameters with subjective QoE. Based on these subjective tests, we derive membership functions and inference rules for the fuzzy system. Membership functions are derived using a probabilistic approach, and inference rules are generated using Rough Set Theory (RST). We evaluated our system in a simulated environment in MATLAB. The simulation results show that the estimated web quality from our system has a high correlation with the subjective QoE obtained from the participants in the controlled tests.
Keywords: Web services; expert systems; fuzzy set theory; probability; quality of experience; rough set theory; Internet; MATLAB; QoE estimation; QoS parameters; RST; fuzzy system; fuzzy-rough hybrid expert system; inference rules; membership functions; probabilistic approach; quality of experience; rough set theory; user satisfaction; web service selection; web services proliferation; Availability; Estimation; Expert systems; Quality of service; Set theory; Web services; QoE; QoS; Web Services; intelligent systems (ID#:14-2832)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838723&isnumber=6838626
- Kaur, B.; Madan, S., "A Fuzzy Expert System To Evaluate Customer's Trust In B2C E-Commerce Websites," Computing for Sustainable Global Development (INDIACom), 2014 International Conference on, pp.394,399, 5-7 March 2014. doi: 10.1109/IndiaCom.2014.6828166 With profound Internet penetration being the most significant technological advancement of the last few years, the platform for e-Commerce growth is set, and the e-Commerce industry has experienced astounding growth in recent years. For the successful implementation of a B2C e-business, it is necessary to understand the trust issues associated with the online environment that hold customers back from shopping online. This paper proposes a model to discern the impact of trust factors prevailing in the Indian e-Commerce marketplace on customers' intention to purchase from an e-store. The model is based on the Mamdani Fuzzy Inference System, which is used to compute the trust index of an e-store in order to assess the confidence level of customers in the online store. The study first identifies the trust factors and then surveys experts in order to examine the significance of these factors. Thereafter, customers' responses regarding B2C e-Commerce websites with respect to the trust parameters are studied, leading to the development of the fuzzy system. A questionnaire survey was used to gather primary data, which was later used to formulate rules for the fuzzy inference system.
Keywords: Web sites; consumer behaviour; electronic commerce; expert systems; fuzzy reasoning; purchasing; retail data processing; trusted computing; B2C e-business; B2C e-commerce Websites; Indian e-commerce marketplace; Internet penetration; Mamdani fuzzy inference system; customer confidence level; customer intention; customer trust; e-commerce growth; e-commerce industry; e-store; fuzzy expert system; fuzzy system development; online environment; online shopping; online store; purchasing; trust factors; trust index; trust issues; trust parameters; Business; Computational modeling; Expert systems; Fuzzy logic; Fuzzy systems; Indexes; Internet; Customer's Trust; E-Commerce Trust; Fuzzy System; Online Trust; Trust; Trust Factors; Trust Index (ID#:14-2833)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6828166&isnumber=6827395
- Wen-xue Geng; Fan'e Kong; Dong-qian Ma, "Study on Tactical Decision Of UAV Medium-Range Air Combat," Control and Decision Conference (2014 CCDC), The 26th Chinese, pp.135,139, May 31 2014-June 2 2014. doi: 10.1109/CCDC.2014.6852132 To handle the uncertainty of the decision-making environment and meet real-time requirements in the tactical decisions of UAV medium-range air combat, a hybrid tactical decision-making method based on rule sets and a Fuzzy Bayesian Network (FBN) is proposed. By studying the process of UAV air combat, the main factors that affect the tactical decision are analyzed, and a corresponding FBN and expert system are built. The hybrid system retains the advantages of the expert system by calling it first, while the FBN handles the uncertainty of the decision-making environment. Finally, air combat simulation verified the correctness, real-time performance and effectiveness of the hybrid tactical decision-making method in an uncertain environment.
Keywords: aerospace computing; autonomous aerial vehicles; belief networks; control engineering computing; decision making; expert systems; fuzzy control; fuzzy neural nets; military aircraft; military computing; neurocontrollers; FBN; UAV air combat; UAV medium-range air combat; air combat simulation; decision-making environment; expert system; fuzzy Bayesian network; hybrid system; hybrid tactical decision-making method; rule sets; uncertain environment; Atmospheric modeling; Bayes methods; Decision making; Expert systems; Missiles; Uncertainty; Fuzzy Bayesian network; UAV; expert system; medium-range air combat (ID#:14-2834)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6852132&isnumber=6852105
- Pozna, Claudiu; Foldesi, Peter; Precup, Radu-Emil; Koczy, Laszlo T., "On the Development Of Signatures For Artificial Intelligence Applications," Fuzzy Systems (FUZZ-IEEE), 2014 IEEE International Conference on, pp.1304,1310, 6-11 July 2014. doi: 10.1109/FUZZ-IEEE.2014.6891636 This paper describes developments of signatures for Artificial Intelligence (AI) applications. Since signatures are data structures that have given efficient results in modeling fuzzy inference systems and uncertain expert systems, the paper starts with an analysis of the data structures used in AI applications from the knowledge representation and manipulation point of view. An overview of signatures, operators on signatures, and classes of signatures is then given. Using the proto-fuzzy inference system, these operators are applied in a new application of a fuzzy inference system modeled by means of signatures and classes of signatures.
Keywords: Adaptation models; Artificial intelligence; Data structures; Educational institutions; Fuzzy logic; Fuzzy sets; Unified modeling language; Artificial Intelligence; expert systems; knowledge representation; proto fuzzy inference systems; signatures (ID#:14-2835)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6891636&isnumber=6891523
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Facial Recognition
Facial recognition tools have long been the stuff of action-adventure films. In the real world, they present opportunities and complex problems being examined by researchers. The works cited here, presented or published in the first three quarters of 2014, address various techniques and issues such as the use of TDM, PCA and Markov models, application of keystroke dynamics to facial thermography, multiresolution alignment, and sparse representation.
- Henderson, G.; Ellefsen, I., "Applying Keystroke Dynamics Techniques to Facial Thermography for Verification," IST-Africa Conference Proceedings, 2014, pp.1,10, 7-9 May 2014. doi: 10.1109/ISTAFRICA.2014.6880626 The problem of verifying that the person accessing a system is the same person that was authorized to do so has existed for many years. Some of the solutions that have been developed to address this problem include continuous Facial Recognition and Keystroke Dynamics, each of which has its own inherent flaws. We propose an approach that makes use of Facial Recognition and Keystroke Dynamics techniques and applies them to Facial Thermography. The mechanisms required to implement this new technique are discussed, as well as the trade-offs between the proposed approach and the existing techniques. This is followed by a discussion of some of the strengths and weaknesses of the proposed approach that need to be considered before the system is adopted by an organization.
Keywords: authorisation; face recognition; infrared imaging; continuous facial recognition; facial thermography; keystroke dynamic techniques; person authorization; person verification; Accuracy; Cameras; Face; Face recognition; Fingerprint recognition; Security; Standards; Facial Recognition; Facial Thermography; Keystroke Dynamics; Temperature Digraphs; Verification (ID#:14-2872)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6880626&isnumber=6880588
- Meher, S.S.; Maben, P., "Face Recognition And Facial Expression Identification Using PCA," Advance Computing Conference (IACC), 2014 IEEE International, pp.1093,1098, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779478 The face, being the primary focus of attention in social interaction, plays a major role in conveying identity and emotion. A facial recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame from a video source. The main aim of this paper is to analyse the method of Principal Component Analysis (PCA) and its performance when applied to face recognition. This algorithm creates a subspace (face space) where the faces in a database are represented using a reduced number of features called feature vectors. The PCA technique has also been used to identify various facial expressions such as happy, sad, neutral, anger, disgust, fear etc. Experimental results show that PCA based methods provide better face recognition with reasonably low error rates. We conclude that PCA is a good technique for face recognition, as it is able to identify faces fairly well under varying illuminations, facial expressions etc.
Keywords: emotion recognition; face recognition; principal component analysis; vectors; video signal processing; PCA; database; digital image; error rates; face recognition; facial expression identification; facial recognition system; feature vectors; person identification; person verification; principal component analysis; social interaction; video frame; Conferences; Erbium; Eigen faces; Face recognition; Principal Component Analysis (PCA) (ID#:14-2873)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779478&isnumber=6779283
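The eigenfaces computation at the heart of the PCA method fits in a short sketch: mean-center the vectorized training images, take the leading right singular vectors as the face space, and match a probe by nearest projection. Random arrays stand in for a real face database here, and the number of retained components is an arbitrary choice.

    # Core of the eigenfaces method: mean-center vectorized training images,
    # take top principal components via SVD, project and match a probe.
    import numpy as np

    rng = np.random.default_rng(1)
    train = rng.random((40, 32 * 32))           # 40 "face" images, 32x32 each
    mean_face = train.mean(axis=0)
    centered = train - mean_face

    # Rows of Vt are the eigenfaces (principal directions of the face space).
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = Vt[:10]                        # keep 10 feature vectors

    def project(img):
        return eigenfaces @ (img - mean_face)

    probe = train[3] + 0.05 * rng.random(32 * 32)   # noisy copy of face 3
    train_coords = (eigenfaces @ centered.T).T       # coords of all faces
    dists = np.linalg.norm(project(probe) - train_coords, axis=1)
    print("best match: training image", int(np.argmin(dists)))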
- Vijayalakshmi, M.; Senthil, T., "Automatic Human Facial Expression Recognition Using Hidden Markov Model," Electronics and Communication Systems (ICECS), 2014 International Conference on, pp.1,5, 13-14 Feb. 2014. doi: 10.1109/ECS.2014.6892800 Facial recognition is a type of biometric software application that can identify a specific individual in a digital image by analyzing and comparing patterns. These systems are commonly used for security purposes but are increasingly being used in a variety of other applications such as residential security, voter verification, and ATM banking. Changes in facial expression make recognizing faces a difficult task. In this paper, continuous naturalistic affective expressions are recognized using a Hidden Markov Model (HMM) framework. Active Appearance Model (AAM) landmarks are considered for each frame of the videos. The AAMs are used to track the face and extract its visual features. Six different facial expressions are considered here: happiness, sadness, anger, fear, surprise, and disgust. The expression recognition problem is solved through a multistage automatic pattern recognition system where the temporal relationships are modeled through the HMM framework. Dimension levels (i.e., labels) can be defined as the hidden state sequences in the HMM framework. Then the probabilities of these hidden states and their state transitions can be accurately computed from the labels of the training set. Through a three-stage classification approach, the output of a first-stage classification is used as observation sequences for a second-stage classification, modeled as an HMM-based framework. k-NN is used for the first-stage classification. A third classification stage, a decision fusion tool, is then used to boost overall performance. Keywords: biometrics (access control); face recognition; hidden Markov models; AAM landmarks; ATM; HMM framework; Hidden Markov Model; active appearance model; automatic human facial expression recognition; banking; biometric software application; digital image; hidden states; residential security; state transitions; voter verification; Active appearance model; Computational modeling; Face recognition; Hidden Markov models; Speech; Speech recognition; Support vector machine classification; Active Appearance Model (AAM); Dimension levels; Hidden Markov model (HMM); K Nearest Neighbor (k-NN) (ID#:14-2874) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6892800&isnumber=6892507
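The HMM machinery this entry builds on can be illustrated with a toy forward-algorithm computation. All probabilities below are invented for illustration, and the paper's full three-stage pipeline (k-NN, HMM, decision fusion) is not reproduced:

```python
# Minimal HMM forward algorithm over a toy 2-state expression-intensity model.
import numpy as np

pi = np.array([0.6, 0.4])              # initial state probabilities
A = np.array([[0.7, 0.3],              # state-transition matrix
              [0.2, 0.8]])
B = np.array([[0.9, 0.1],              # P(observation | state)
              [0.3, 0.7]])
obs = [0, 0, 1, 1, 1]                  # e.g., first-stage classifier outputs

alpha = pi * B[:, obs[0]]              # initialize forward variables
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]      # forward recursion
print("sequence likelihood:", alpha.sum())
```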
- Chehata, Ramy C.G.; Mikhael, Wasfy B.; Atia, George, "A Transform Domain Modular Approach For Facial Recognition Using Different Representations And Windowing Techniques," Circuits and Systems (MWSCAS), 2014 IEEE 57th International Midwest Symposium on, pp.817,820, 3-6 Aug. 2014. doi: 10.1109/MWSCAS.2014.6908540 A face recognition algorithm based on a newly developed Transform Domain Modular (TDM) approach is proposed. In this approach, the spatial faces are divided into smaller sub-images, which are processed using non-overlapping and overlapping windows. Each image is subsequently transformed using a compressing transform such as the two-dimensional discrete cosine transform. This produces the TDM-2D and the TDM-Dia, based on two-dimensional and diagonal representations of the data, respectively. The performance of this approach for facial image recognition is compared with successful state-of-the-art techniques. The test results, for noise-free and noisy images, yield higher than 97.5% recognition accuracy. The improved recognition accuracy is achieved while retaining comparable or better computational complexity and storage savings. Keywords: Face; Face recognition; Principal component analysis; Testing; Time division multiplexing; Training; Transforms (ID#:14-2875) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6908540&isnumber=6908326
- Aldhahab, Ahmed; Atia, George; Mikhael, Wasfy B., "Supervised Facial Recognition Based On Multi-Resolution Analysis And Feature Alignment," Circuits and Systems (MWSCAS), 2014 IEEE 57th International Midwest Symposium on, pp.137,140, 3-6 Aug. 2014. doi: 10.1109/MWSCAS.2014.6908371 A new supervised algorithm for face recognition based on the integration of Two-Dimensional Discrete Multiwavelet Transform (2-D DMWT), 2-D Radon Transform, and 2-D Discrete Wavelet Transform (2-D DWT) is proposed. In the feature extraction step, Multiwavelet filter banks are used to extract useful information from the face images. The extracted information is then aligned using the Radon Transform, and localized into a single band using 2-D DWT for efficient sparse data representation. This information is fed into a Neural Network based classifier for training and testing. The proposed method is tested on three different databases, namely, ORL, YALE and subset fc of FERET, which comprise different poses and lighting conditions. It is shown that this approach can significantly improve the classification performance and the storage requirements of the overall recognition system. Keywords: Classification algorithms; Databases; Discrete wavelet transforms; Feature extraction; Multiresolution analysis; Training (ID#:14-2876) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6908371&isnumber=6908326
- Zhen Gao; Shangfei Wang; Chongliang Wu; Jun Wang; Qiang Ji, "Facial Action Unit Recognition By Relation Modeling From Both Qualitative Knowledge And Quantitative Data," Multimedia and Expo Workshops (ICMEW), 2014 IEEE International Conference on, pp.1,6, 14-18 July 2014. doi: 10.1109/ICMEW.2014.6890672 In this paper, we propose to capture Action Unit (AU) relations existing in both qualitative knowledge and quantitative data through Credal Networks (CN). Each node of the CN represents an AU label, and the links and probability intervals capture the probabilistic dependencies among multiple AUs. The structure of the CN is designed based on prior knowledge. The parameters of the CN are learned from both knowledge and ground-truth AU labels. The AU preliminary estimations are obtained by an existing image-driven recognition method. With the learned credal network, we infer the true AU labels by combining the relationships among labels with the previously obtained estimations. Experimental results on the CK+ database and MMI database demonstrate that with complete AU labels, our CN model is slightly better than the Bayesian Network (BN) model, demonstrating that credal sets learned from data can capture uncertainty more reliably; with incomplete and error-prone AU annotations, our CN model outperforms the BN model, indicating that credal sets can successfully capture qualitative knowledge. Keywords: face recognition; image sequences; probability; uncertainty handling; visual databases; AU label; AU preliminary estimation; BN model; CK+ database; MMI database; credal network; error prone AU annotation; facial action unit recognition; image driven recognition method; incomplete AU annotation; probabilistic dependency; probability interval; relation modeling; uncertainty handling; Data models; Databases; Gold; Hidden Markov models; Image recognition; Mathematical model; Support vector machines; AU recognition; credal network; prior knowledge (ID#:14-2877) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6890672&isnumber=6890528
- Leventic, H.; Livada, C.; Gaclic, I, "Towards Fixed Facial Features Face Recognition," Systems, Signals and Image Processing (IWSSIP), 2014 International Conference on, pp.267, 270, 12-15 May 2014. In this paper we propose a framework for recognition of faces in controlled conditions. The framework consists of two parts: face detection and face recognition. For face detection we use the Viola-Jones face detector. The proposed face recognition part is based on the calculation of certain ratios on the face, where the features on the face are located using the Hough transform for circles. Experiments show that this framework presents a possible solution to the problem of face recognition. Keywords: Hough transforms; face recognition; Hough transform; Viola-Jones face detector; face detection; face recognition; fixed facial feature; Equations; Face; Face recognition; Nose; Transforms; Hough transform; Viola-Jones; face detection; face recognition (ID#:14-2878) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6837682&isnumber=6837609
- Wilber, M.J.; Rudd, E.; Heflin, B.; Yui-Man Lui; Boult, T.E., "Exemplar Codes For Facial Attributes And Tattoo Recognition," Applications of Computer Vision (WACV), 2014 IEEE Winter Conference on, pp.205,212, 24-26 March 2014. doi: 10.1109/WACV.2014.6836099 When implementing real-world computer vision systems, researchers can use mid-level representations as a tool to adjust the trade-off between accuracy and efficiency. Unfortunately, existing mid-level representations that improve accuracy tend to decrease efficiency, or are specifically tailored to work well within one pipeline or vision problem at the exclusion of others. We introduce a novel, efficient mid-level representation that improves classification efficiency without sacrificing accuracy. Our Exemplar Codes are based on linear classifiers and probability normalization from extreme value theory. We apply Exemplar Codes to two problems: facial attribute extraction and tattoo classification. In these settings, our Exemplar Codes are competitive with the state of the art and offer efficiency benefits, making it possible to achieve high accuracy even on commodity hardware with a low computational budget. Keywords: computer vision; face recognition; feature extraction; image classification; image representation; probability; classification efficiency; exemplar codes; extreme value theory; facial attribute extraction; linear classifiers; mid-level representations; probability normalization; real-world computer vision systems; tattoo classification; tattoo recognition; Accuracy; Face; Feature extraction; Libraries; Pipelines; Support vector machines; Training (ID#:14-2879) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6836099&isnumber=6835728
- Hehua Chi; Yu Hen Hu, "Facial Image De-Identification Using Identity Subspace Decomposition," Acoustics, Speech and Signal Processing (ICASSP), 2014 IEEE International Conference on, vol., no., pp.524,528, 4-9 May 2014. doi: 10.1109/ICASSP.2014.6853651 How to conceal the identity of a human face without covering the facial image? This is the question investigated in this work. Leveraging the high dimensional feature representation of a human face in an Active Appearance Model (AAM), a novel method called the identity subspace decomposition (ISD) method is proposed. Using ISD, the AAM feature space is decomposed into an identity sensitive subspace and an identity insensitive subspace. By replacing the feature values in the identity sensitive subspace with the averaged values of k individuals, one may realize a k-anonymity de-identification process on facial images. We developed a heuristic approach to empirically select the AAM features corresponding to the identity sensitive subspace. We showed that after applying k-anonymity de-identification to AAM features in the identity sensitive subspace, the resulting facial images can no longer be distinguished by either human eyes or facial recognition algorithms. Keywords: face recognition; AAM feature space; ISD; active appearance model; facial image de-identification; facial recognition algorithms; high dimensional feature representation; human eye recognition algorithms; identity subspace decomposition method; k-anonymity de-identification process; sensitive subspace; Active appearance model; Databases; Face; Face recognition; Facial features; Privacy; Vectors; active appearance model; data privacy; face recognition; identification of persons (ID#:14-2880) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6853651&isnumber=6853544
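The core of the k-anonymity step described above, replacing identity-sensitive feature values with the average over k individuals, can be sketched in a few lines. The feature vectors and the sensitive index set below are synthetic stand-ins, not actual AAM features:

```python
# Sketch of k-anonymity de-identification on AAM-like feature vectors.
import numpy as np

rng = np.random.default_rng(1)
feats = rng.normal(size=(5, 12))       # feature vectors of k=5 individuals
sensitive = np.array([0, 1, 4, 7])     # indices assumed identity-sensitive

deidentified = feats.copy()
deidentified[:, sensitive] = feats[:, sensitive].mean(axis=0)
# Every individual now shares identical values on the sensitive subspace,
# so a matcher restricted to those features cannot tell the k apart.
print(deidentified[:, sensitive])
```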
- Ptucha, R.; Savakis, AE., "LGE-KSVD: Robust Sparse Representation Classification," Image Processing, IEEE Transactions on, vol.23, no.4, pp.1737, 1750, April 2014. doi: 10.1109/TIP.2014.2303648 The parsimonious nature of sparse representations has been successfully exploited for the development of highly accurate classifiers for various scientific applications. Despite the successes of Sparse Representation techniques, a large number of dictionary atoms as well as the high dimensionality of the data can make these classifiers computationally demanding. Furthermore, sparse classifiers are subject to the adverse effects of a phenomenon known as coefficient contamination, where, for example, variations in pose may affect identity and expression recognition. We analyze the interaction between dimensionality reduction and sparse representations, and propose a technique, called Linear extension of Graph Embedding K-means-based Singular Value Decomposition (LGE-KSVD) to address both issues of computational intensity and coefficient contamination. In particular, the LGE-KSVD utilizes variants of the LGE to optimize the K-SVD, an iterative technique for small yet overcomplete dictionary learning. The dimensionality reduction matrix, sparse representation dictionary, sparse coefficients, and sparsity-based classifier are jointly learned through the LGE-KSVD. The atom optimization process is redefined to allow variable support using graph embedding techniques and produce a more flexible and elegant dictionary learning algorithm. Results are presented on a wide variety of facial and activity recognition problems that demonstrate the robustness of the proposed method. Keywords: dictionaries; image representation; iterative methods; optimisation; singular value decomposition; LGE-KSVD; activity recognition problems; atom optimization process; coefficient contamination; computational intensity; dictionary learning algorithm; dimensionality reduction matrix; expression recognition; facial recognition problems; graph embedding techniques; iterative technique; linear extension of graph embedding k-means-based singular value decomposition; robust sparse representation classification; sparse coefficients; sparse representation dictionary; sparsity-based classifier; Contamination; Dictionaries; Image reconstruction; Manifolds; Principal component analysis; Sparse matrices; Training; Dimensionality reduction; activity recognition; facial analysis; manifold learning; sparse representation (ID#:14-2881) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6728639&isnumber=6742656
- Bong-Nam Kang; Jongmin Yoon; Hyunsung Park; Daijin Kim, "Face Recognition Using Affine Dense SURF-Like Descriptors," Consumer Electronics (ICCE), 2014 IEEE International Conference on, pp.129,130, 10-13 Jan. 2014. doi: 10.1109/ICCE.2014.6775938 In this paper, we propose a method for pose- and facial-expression-invariant face recognition using affine dense SURF-like descriptors. The proposed method consists of four steps: 1) we normalize the face image using the face and eye detector; 2) we apply affine simulation to synthesize various pose face images; 3) we build a descriptor on the overlapping block-based grid keypoints; 4) a probe image is compared with the referenced images by performing nearest neighbor matching. To improve the recognition rate, we use the keypoint distance ratio and the false matched keypoint ratio. The proposed method showed better performance than conventional methods in terms of recognition rate. Keywords: face recognition; probes; affine dense SURF-like descriptors; eye detector; face detector; facial expression invariant face recognition; false matched keypoint ratio; keypoint distance ratio; nearest neighbor matching; overlapping block-based grid keypoints; pose face images; probe image; recognition rate; recognition rates; Computer vision; Conferences; Educational institutions; Face; Face recognition; Probes; Vectors (ID#:14-2882) URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6775938&isnumber=6775879
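The nearest-neighbor matching with a keypoint distance ratio mentioned in the entry above follows the familiar ratio-test pattern. The sketch below uses random descriptors rather than real SURF-like features:

```python
# Nearest-neighbor descriptor matching with a distance-ratio test, commonly
# used to reject ambiguous matches; descriptors here are random stand-ins.
import numpy as np

rng = np.random.default_rng(2)
probe = rng.random((50, 64))           # descriptors from the probe image
ref = rng.random((200, 64))            # descriptors from a reference image

RATIO = 0.8                            # keep a match only if clearly unambiguous
matches = []
for i, d in enumerate(probe):
    dist = np.linalg.norm(ref - d, axis=1)
    nn1, nn2 = np.partition(dist, 1)[:2]   # two smallest distances
    if nn1 / nn2 < RATIO:
        matches.append((i, int(np.argmin(dist))))
print(f"{len(matches)} unambiguous matches")
```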
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Forward Error Correction
Controlling errors in data transmission over noisy or lossy channels is a problem often solved by channel coding or forward error correction. The articles cited here look at bit error rates, energy efficiency, hybrid networks, and transportation systems. This research was presented in the first three quarters of 2014.
- Hai Dao Thanh; Morvan, M.; Gravey, P.; Cugini, F.; Cerutti, I, "On the Spectrum-Efficiency Of Transparent Optical Transport Network Design With Variable-Rate Forward Error Correction Codes," Advanced Communication Technology (ICACT), 2014 16th International Conference on, pp.1173, 1177, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6779143 We discuss flexible-rate optical transmission enabled by forward error correction (FEC) code adjustment. The adaptation of FEC codes to a given transmission condition gives rise to a trade-off between transmission rate and optical reach. In this paper, that compromise is addressed from a network planning standpoint. A static transparent network planning problem taking that rate-reach trade-off into account is formulated. A case study is solved on a realistic NSF network with a comparison between mixed line rate (MLR) (10/40/100 Gbps) and flexible rate (FlexRate) by FEC variation (10-100 Gbps in steps of 10 Gbps). The results show that the maximum link load can be reduced by up to ~60% with FlexRate compared with MLR, and the reduction becomes evident at high traffic load. Moreover, thanks to finer rate adaptation, FlexRate can support around three times more traffic than MLR.
Keywords: forward error correction; light transmission; optical fibre networks; telecommunication network planning; telecommunication traffic; variable rate codes; flexible rate optical transmission; mixed line rate; network planning standpoint; static transparent network planning; traffic load; transparent optical transport network design; variable rate forward error correction codes; Adaptive optics; Integrated optics; Optical fiber networks; Optical fibers; Planning; Transponders; Elastic Transponder; Fiber-Optic Communication; Flexible Optical Network; Forward Error Correction; Network Optimization (ID#:14-3083)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779143&isnumber=6778899
- Ahmed, Q.Z.; Ki-Hong Park; Alouini, M.-S.; Aissa, S., "Linear Transceiver Design for Nonorthogonal Amplify-and-Forward Protocol Using a Bit Error Rate Criterion," Wireless Communications, IEEE Transactions on, vol.13, no.4, pp.1844, 1853, April 2014. doi: 10.1109/TWC.2014.022114.130369 The ever-growing demand for higher data rates can now be addressed by exploiting cooperative diversity. This form of diversity has become a fundamental technique for achieving spatial diversity by exploiting the presence of idle users in the network. This has led to new challenges in terms of designing new protocols and detectors for cooperative communications. Among various amplify-and-forward (AF) protocols, the half-duplex non-orthogonal amplify-and-forward (NAF) protocol is superior to other AF schemes in terms of error performance and capacity. However, this superiority is achieved at the cost of higher receiver complexity. Furthermore, in order to exploit the full diversity of the system an optimal precoder is required. In this paper, an optimal joint linear transceiver is proposed for the NAF protocol. This transceiver operates on the principle of minimum bit error rate (BER), and is referred to as the joint bit error rate (JBER) detector. The BER performance of the JBER detector is superior to that of linear detectors such as channel inversion, maximal ratio combining, the biased maximum likelihood detector, and the minimum mean square error detector. The proposed transceiver also outperforms previous precoders designed for the NAF protocol.
Keywords: amplify and forward communication; cooperative communication; detector circuits; diversity reception; error statistics; least mean squares methods; maximum likelihood detection; optimisation; precoding; protocols; radio transceivers; JBER detector; NAF protocols; biased maximum likelihood detectors; bit error rate criterion ;channel inversion; cooperative communications; cooperative diversity; duplex nonorthogonal amplify-and-forward protocol; error performance; idle users; joint bit error rate; linear detectors; linear transceiver design; maximal ratio combining; minimum mean square error; optimal joint linear transceiver; optimal precoder; receiver complexity; spatial diversity; Bit error rate; Complexity theory; Detectors; Diversity reception; Modulation; Protocols; Vectors; Cooperative diversity; bit error rate (BER);minimum mean square error (MMSE); nonorthogonal amplify-and-forward protocol}, (ID#:14-3084)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6754118&isnumber=6803026
- Fareed, M.M.; Uysal, M.; Tsiftsis, T.A, "Error-Rate Performance Analysis of Cooperative OFDMA System With Decode-and-Forward Relaying," Vehicular Technology, IEEE Transactions on, vol.63, no.5, pp.2216,2223, Jun 2014. doi: 10.1109/TVT.2013.2290780 In this paper, we investigate the performance of a cooperative orthogonal frequency-division multiple-access (OFDMA) system with decode-and-forward (DaF) relaying. Specifically, we derive a closed-form approximate symbol-error-rate expression and analyze the achievable diversity orders. Depending on the relay location, a diversity order up to $(L_{S_kD}+1)+\sum_{m=1}^{M}\min\left(L_{S_kR_m}+1,\, L_{R_mD}+1\right)$ is available, where $M$ is the number of relays, and $L_{S_kD}+1$, $L_{S_kR_m}+1$, and $L_{R_mD}+1$ are the lengths of the channel impulse responses of the source-to-destination, source-to-$m$th-relay, and $m$th-relay-to-destination links, respectively. Monte Carlo simulation results are also presented to confirm the analytical findings.
Keywords: Monte Carlo methods; OFDM modulation; cooperative communication; decode and forward communication; diversity reception; frequency division multiple access; telecommunication channels; transient response; DaF relaying; Monte Carlo simulation; channel impulse responses; closed-form approximate symbol-error-rate expression; cooperative OFDMA system; decode-and-forward relaying; diversity orders; error-rate performance analysis; orthogonal frequency-division multiple-access system; relay location; relay-to-destination links; source-to-destination; source-to-mth relay; Approximation methods; Error analysis; Maximum likelihood decoding; OFDM; Relays; Resource management; Upper bound; Error rate; Orthogonal frequency division multiple access; error rate; orthogonal frequency-division multiple access (OFDMA); power allocation; relay channels (ID#:14-3085)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6663693&isnumber=6832681
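As a quick check on the diversity-order expression above, the following snippet evaluates it for hypothetical channel-impulse-response lengths (values chosen purely for illustration):

```python
# Worked example of the diversity-order bound quoted in the abstract above.
L_SD = 2                               # source-to-destination CIR length minus 1
relays = [(3, 1), (2, 2)]              # (L_SkRm, L_RmD) for M = 2 relays

diversity = (L_SD + 1) + sum(min(l_sr + 1, l_rd + 1) for l_sr, l_rd in relays)
print(diversity)                       # (2+1) + min(4,2) + min(3,3) = 8
```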
- Kaddoum, G.; Gagnon, F., "Lower Bound On The Bit Error Rate Of A Decode-And-Forward Relay Network Under Chaos Shift Keying Communication System," Communications, IET, vol.8, no.2, pp.227,232, January 23 2014. doi: 10.1049/iet-com.2013.0421 This study carries out the first investigation of a cooperative decode-and-forward (DF) relay network with chaos shift keying (CSK) modulation. The performance analysis of DF-CSK in this study takes into account the dynamical nature of the chaotic signal, unlike conventional binary modulation performance computation methodologies. The expression of a lower-bound bit error rate (BER) is derived in order to investigate the performance of the cooperative system under independently and identically distributed Gaussian fading wireless environments. The effect of the non-periodic nature of the chaotic sequence, which leads to a non-constant bit energy in the considered modulation, is also investigated. A computation approach for the BER expression based on the probability density function of the bit energy of the chaotic sequence, the channel distribution and the number of relays is presented. Simulation results prove the accuracy of the authors' BER computation methodology.
Keywords: Gaussian distribution; chaotic communication; cooperative communication; decode and forward communication; error statistics; fading channels; phase shift keying; probability; relay networks (telecommunication);BER;CSK modulation; binary modulation; bit error rate; channel distribution; chaos shift keying communication system; chaotic sequence; chaotic signal; cooperative decode-and-forward relay network; distributed Gaussian fading wireless environments; nonconstant bit energy; nonperiodic nature; probability density function (ID#:14-3086)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6740269&isnumber=6740097
- Al-Kali, M.; Li Yu; Mohammed, AA, "Performance Analysis Of Energy Efficiency And Symbol Error Rate In Amplify-And-Forward Cooperative MIMO Networks," Ubiquitous and Future Networks (ICUFN), 2014 Sixth International Conference on, pp.448, 453, 8-11 July 2014. doi: 10.1109/ICUFN.2014.6876831 In this paper, we analyze the energy efficiency and the symbol error rate (SER) in the cooperative multiple-input multiple-output (MIMO) relay networks. We employ an amplify-and-forward (AF) relay scheme, where a relay access point occupied with Q antennas cooperatively forwards packets to the destination. Under the assumption of Rayleigh fading channels and time division multiplexing (TDM), we derive new exact closed-form expressions for the outage probability, SER and the energy efficiency valid for Q antennas. Further asymptotic analysis is done in the high-SNR regime to characterize the energy efficiency in terms of the diversity order and the array gain. Subsequently, our expressions are quantitatively compared with Monte Carlo simulations. Numerical results are provided to validate the exact and the asymptotic expressions. The results show that the energy efficiency decreases with the number of antennas at the relay according to Q+1. The behavior of the energy efficiency with the relay locations is also discussed in this paper.
Keywords: MIMO communication; Monte Carlo methods; Rayleigh channels; amplify and forward communication; fading channels; probability; relay networks (telecommunication) ;time division multiplexing; AF relay scheme; MIMO relay networks; Monte Carlo simulations; Q antennas; Rayleigh fading channels; SER; TDM; amplify-and-forward cooperative MIMO networks; array gain; asymptotic analysis; energy efficiency; multiple-input multiple-output relay networks; outage probability; performance analysis; relay locations; symbol error rate; time division multiplexing; Antennas; Arrays; Diversity reception; MIMO; Modulation; Relays; Signal to noise ratio; Cooperative MIMO; cooperative diversity; energy efficiency; symbol error rate (ID#:14-3087)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6876831&isnumber=6876727
- Rasmussen, A; Yankov, M.P.; Berger, M.S.; Larsen, K.J.; Ruepp, S., "Improved Energy Efficiency for Optical Transport Networks by Elastic Forward Error Correction," Optical Communications and Networking, IEEE/OSA Journal of, vol. 6, no.4, pp.397, 407, April 2014. doi: 10.1364/JOCN.6.000397 In this paper we propose a scheme for reducing the energy consumption of optical links by means of adaptive forward error correction (FEC). The scheme works by performing on the fly adjustments to the code rate of the FEC, adding extra parity bits to the data stream whenever extra capacity is available. We show that this additional parity information decreases the number of necessary decoding iterations and thus reduces the power consumption in iterative decoders during periods of low load. The code rate adjustments can be done on a frame-by-frame basis and thus make it possible to manipulate the balance between effective data rate and FEC coding gain without any disruption to the live traffic. As a consequence, these automatic adjustments can be performed very often based on the current traffic demand and bit error rate performance of the links through the network. The FEC scheme itself is designed to work as a transparent add-on to transceivers running the optical transport network (OTN) protocol, adding an extra layer of elastic soft-decision FEC to the built-in hard-decision FEC implemented in OTN, while retaining interoperability with existing OTN equipment. In order to facilitate dynamic code rate adaptation, we propose a programmable encoder and decoder design approach, which can implement various codes depending on the desired code rate using the same basic circuitry. This design ensures optimal coding gain performance with a modest overhead for supporting multiple codes with minimal impact on the area and power requirements of the decoder.
Keywords: access protocols; energy conservation; error statistics; forward error correction; iterative decoding; optical fibre networks; optical links; optical transceivers; power consumption ;telecommunication standards; OTN protocol; adaptive FEC; adaptive forward error correction; bit error rate; built-in hard-decision FEC; data stream; decoding iterations; dynamic code rate adaptation; elastic forward error correction; elastic soft-decision FEC; energy consumption; energy efficiency; iterative decoders; optical links; optical transport network protocol; optimal coding gain performance; parity information ;power consumption; programmable encoder; traffic demand; transceivers; Bit error rate; Decoding; Encoding; Forward error correction; Iterative decoding; Optical fiber communication; Elastic optical networks; Optical transport networks; Optically switched networks; Rate adaptive forward error correction (ID#:14-3088)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821329&isnumber=6821321
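The frame-by-frame code-rate adjustment this paper describes can be caricatured as a simple spare-capacity test. The candidate rates and the selection rule below are illustrative assumptions, not the paper's actual OTN scheme:

```python
# Sketch of the elastic-FEC idea: when spare capacity exists, pick a lower
# code rate (more parity) so the iterative decoder needs fewer iterations.
CODE_RATES = [0.80, 0.85, 0.90, 0.95]  # supported rates; lower = more parity

def pick_code_rate(payload_gbps: float, line_rate_gbps: float) -> float:
    """Choose the strongest (lowest) code rate the spare capacity allows."""
    needed = payload_gbps / line_rate_gbps
    usable = [r for r in CODE_RATES if r >= needed]
    return min(usable) if usable else max(CODE_RATES)

print(pick_code_rate(80, 100))   # light load -> 0.80, maximum extra parity
print(pick_code_rate(92, 100))   # heavier load -> 0.95, parity trimmed back
```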
- Ying Zhang; Huapeng Zhao; Chuanyi Pan, "Optimization of an Amplify-and-Forward Relay Network Considering Time Delay and Estimation Error in Channel State Information," Vehicular Technology, IEEE Transactions on, vol.63, no.5, pp. 2483, 2488, Jun 2014. doi: 10.1109/TVT.2013.2292939 This paper presents the optimization of an amplify-and-forward (AF) relay network with time delay and estimation error in channel state information (CSI). The CSI time delay and estimation error are modeled by the channel time variation model and stochastic error model, respectively. The conditional probability density function of the ideal CSI upon the estimated CSI is computed based on these two models, and it is used to derive the conditional expectation of the mean square error (MSE) between estimated and desired signals upon estimated CSI, which is minimized to optimize the beamforming and equalization coefficients. Computer simulations show that the proposed method obtains lower bit error rate (BER) than the conventional minimum MSE and the maxmin SNR strategies when CSI contains time delay and estimation error.
Keywords: amplify and forward communication; delays; least mean squares methods; optimisation;relay networks (telecommunication); stochastic processes; BER; amplify-and-forward relay network; beamforming; bit error rate; channel state information; channel time variation model; conditional probability density function; equalization coefficients; estimation error; minimum mean square error; stochastic error model; time delay; Bit error rate; Channel estimation; Correlation; Delay effects; Estimation error; Relays; Signal to noise ratio; Amplify and forward (AF); Amplify-and-forward; conditional expectation; estimation error; minimum mean square error; minimum mean square error (MMSE);outdated channel state information; outdated channel state information (CSI);relay network (ID#:14-3089)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6675878&isnumber=6832681
- Rafique, D.; Napoli, A; Calabro, S.; Spinnler, B., "Digital Preemphasis in Optical Communication Systems: On the DAC Requirements for Terabit Transmission Applications," Lightwave Technology, Journal of, vol.32, no.19, pp.3247, 3256, Oct. 1, 2014. doi: 10.1109/JLT.2014.2343957 Next-generation coherent optical systems are geared to employ high-speed digital-to-analog converters (DAC), allowing for digital preprocessing of the signal and flexible optical transport networks. However, one of the major obstacles in such architectures is the limited resolution (less than 5.5 effective bits) and -3 dB bandwidth of commercial DACs, typically limited to half of the currently commercial baud rates, and even relatively reduced in case of higher baud rate transponders (400 Gb/s and 1 Tb/s). In this paper, we propose a simple digital preemphasis (DPE) algorithm to compensate for DAC-induced signal distortions, and exhaustively investigate the impact of DAC specifications on system performance, both with and without DPE. As an outcome, performance improvements are established across various DAC hardware requirements (effective number of bits and bandwidth) and channel baud rates, for m-state quadrature amplitude modulation (QAM) formats. In particular, we show that lower order modulation formats are least affected by DAC limitations, however, they benefit the most from DPE in extremely challenging hardware conditions. On the contrary, higher order formats are severely limited by DAC distortions, and moderately benefit from DPE across a wide range of DAC specifications. Moreover, effective number of bit requirements are established for m-state QAM, assuming low and high baud rate transmission regimes. Finally, we discuss the application scenarios for the proposed DPE in next-generation terabit transmission systems, and establish maximum transportable baud rates, which are shown to be used toward increasing channel baud rates to reduce terabit subcarrier count or toward increasing forward error correction (FEC) overheads to reduce the pre-FEC bit error rate threshold. Maximum baud rates after DPE are summarized here for polarization multiplexed BPSK, QPSK, 8QAM, and 16QAM, assuming two DACs: Current commercial DACs (5.5 effective bits, 16 GHz bandwidth): 57, 54, 51, and 48 Gbaud, respectively. Next-generation DACs (7 effective bits, 22 GHz bandwidth): 62, 61, 60, and 58 Gbaud, respectively.
Keywords: Bandwidth; Noise; Q-factor; Quadrature amplitude modulation; Receivers; Transfer functions; Coherent detection; Nyquist; digital signal processing; digital-to-analog converter; pre-emphasis (ID#:14-3090)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6868202&isnumber=6877758
- Qiang Huo; Tianxi Liu; Shaohui Sun; Lingyang Song; Bingli Jiao, "Selective Combining For Hybrid Cooperative Networks," Communications, IET, vol.8, no.4, pp.471,482, March 6 2014. doi: 10.1049/iet-com.2013.0323 In this study, we consider the selective combining in hybrid cooperative networks (SCHCNs scheme) with one source node, one destination node and N relay nodes. In the SCHCN scheme, each relay first adaptively chooses between amplify-and-forward protocol and decode-and-forward protocol on a per frame basis by examining the error-detecting code result, and Nc (1 ≤ Nc ≤ N) relays will be selected to forward their received signals to the destination. We first develop a signal-to-noise ratio (SNR) threshold-based frame error rate (FER) approximation model. Then, the theoretical FER expressions for the SCHCN scheme are derived by utilising the proposed SNR threshold-based FER approximation model. The analytical FER expressions are validated through simulation results.
Keywords: amplify and forward communication; cooperative communication; decode and forward communication; diversity reception; error detection codes; error statistics ;FER approximation model; SCHCN; amplify-and-forward protocol; decode-and-forward protocol; destination node; error detecting code; frame error rate; hybrid cooperative networks; relay nodes; selective combining; signal-to-noise ratio (ID#:14-3091)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6758416&isnumber=6758407
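The SNR threshold-based FER approximation underlying this analysis can be demonstrated with a short Monte Carlo experiment. The threshold and average SNR below are arbitrary illustrations; Rayleigh fading is modeled through its exponentially distributed instantaneous SNR:

```python
# Monte Carlo sketch of an SNR-threshold FER approximation: a frame is
# counted as lost whenever the instantaneous SNR falls below a threshold.
import numpy as np

rng = np.random.default_rng(3)
avg_snr = 10.0                         # mean SNR (linear scale)
snr_threshold = 2.0                    # frames below this count as errors

snr = rng.exponential(avg_snr, size=100_000)  # Rayleigh fading -> exp. SNR
fer_mc = np.mean(snr < snr_threshold)
fer_theory = 1 - np.exp(-snr_threshold / avg_snr)  # closed form to compare
print(f"simulated FER {fer_mc:.4f}  vs  analytical {fer_theory:.4f}")
```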
- Haifeng Zhu; Bajekal, S.; Lakamraju, V.; Murray, B., "A Radio System Design Tool For Forward Error Corrections In Wireless CSMA Networks: Analysis And Economics," Radio and Wireless Symposium (RWS), 2014 IEEE, pp.145,147, 19-23 Jan. 2014. doi: 10.1109/RWS.2014.6830160 As cyber-physical systems become pervasive, their power consumption and system design practices are major concerns. This paper explores problems of deploying Forward Error Correction (FEC) in wireless commercial standards such as IEEE 802.11b and 802.15.4. First, we describe battery life estimation that includes practical factors such as system issues and the negative impact of retransmissions versus the power impact of encoding-scheme overhead. Secondly, we explore the link to design economics and demonstrate a design decision method. Theoretical analyses validated with simulations provide a decision tool for engineers and management during system design. In contrast to previous unfavorable assessments of FEC, we show that for cyber-physical devices FEC should now be strongly considered under the proper circumstances, as it provides the opportunity to save communications-related energy and prolong battery life, which is critical for devices in hard-to-reach locations and on the battlefield.
Keywords: Zigbee; carrier sense multiple access; encoding;forward error correction; power consumption; wireless LAN; wireless channels;FEC; IEEE 802.11b;IEEE 802.15.4;battery life estimation; cyber-physical devices; cyber-physical systems; design decision method; economics; encoding; forward error corrections; power consumption; radio system design tool; wireless CSMA networks; wireless commercial standards; Automatic repeat request; Batteries; Bit error rate; Economics ;Encoding; Forward error correction; Power demand; FEC; Wireless; power consumption; system design (ID#:14-3092)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830160&isnumber=6830066
- Chang, Shih-Ying; Chiao, Hsin-Ta; Hung, Yu-Hsian, "Ideal Forward Error Correction Codes for High-Speed Rail Multimedia Communications," Vehicular Technology, IEEE Transactions on, vol. PP, no.99, pp.1, 1, March 2014. doi: 10.1109/TVT.2014.2310897 In recent years, Application Layer-Forward Error Correction (AL-FEC), especially rateless AL-FEC, has received a lot of attention due to its superior performance in both transmissional and computational efficiency. Rateless AL-FEC (e.g., Raptor code or LT code) can protect a large data block with an overhead somewhat close to ideal codes. In the meantime, its data processing rates of both encoding and decoding are quite efficient even in software implementations. However, we found that conventional rateless AL-FEC schemes may not be the best candidates when considering streaming over WiMAX networks for high-speed rail reception in Taiwan. In this paper, we propose a new ideal AL-FEC scheme based on the Chinese Remainder Theorem (CRT) to facilitate streaming service delivery for high-speed rail reception. The proposed scheme can support the rateless property, but it requires less transmission overhead than conventional rateless codes. Although it requires higher computational cost than conventional rateless codes, the cost is affordable for commodity laptops. Besides measuring the FEC computation, storage, and decoder overhead, we also evaluate its performance in an emulation environment for simulating high-speed rail reception over WiMAX networks. The emulation result shows that the proposed scheme can achieve the same error protection as Raptor codes, but it requires less transmission overhead, suitable for protecting data transmission over bandwidth-limited, high-mobility erasure channels.
Keywords: Decoding; Digital video broadcasting; Encoding; Forward error correction; Maintenance engineering; Systematics; WiMAX (ID#:14-3093)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6763072&isnumber=4356907
- JongJun Park; Jongsoo Jeong; Hoon Jeong; Liang, C.-J.M.; JeongGil Ko, "Improving the Packet Delivery Performance for Concurrent Packet Transmissions in WSNs," Communications Letters, IEEE, vol.18, no.1, pp.58,61, January 2014. doi: 10.1109/LCOMM.2013.112013.131974 In this letter, we investigate the properties of packet collisions in IEEE 802.15.4-based wireless sensor networks when packets with the same content are transmitted concurrently. While the nature of wireless transmissions allows the reception of a packet when the same packet is transmitted at different radios with (near) perfect time synchronization, we find that in practical systems, platform specific characteristics, such as the independence and error of the crystal oscillators, cause packets to collide disruptively when the two signals have similar transmission powers (i.e., differences of <2 dBm). In such scenarios, the packet reception ratio (PRR) of concurrently transmitted packets falls below 10%. Nevertheless, we empirically show that the packet corruption patterns are easily recoverable using forward error correction schemes and validate this using implementations of RS and convolutional codes. Overall, our results show that using such error correction schemes can increase the PRR by more than four-fold.
Keywords: Reed-Solomon codes; Zigbee; convolutional codes; wireless sensor networks; IEEE 802.15.4-based wireless sensor networks; RS codes; WSN; concurrent packet transmissions; convolutional codes; forward error correction schemes; packet delivery performance; Convolutional codes; Crystals; Forward error correction; IEEE 802.15 Standards; Oscillators; Radio transmitters; Wireless sensor networks; Concurrent transmissions and forward error correction; wireless sensor networks (ID#:14-3094)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6679191&isnumber=6716946
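The letter's central observation, that sparse collision-induced bit errors are easily repaired by FEC, can be demonstrated with a toy code. A (3,1) repetition code stands in here for the RS and convolutional codes the authors actually implemented:

```python
# Toy demonstration: FEC repairs sparse bit corruption in a received packet.
import random

random.seed(4)

def encode(bits):                      # repeat each bit three times
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):                     # majority vote per 3-bit group
    return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

packet = [random.randint(0, 1) for _ in range(128)]
coded = encode(packet)
# Collision-style corruption: flip roughly 5% of coded bits at random.
corrupted = [b ^ (random.random() < 0.05) for b in coded]

residual = sum(a != b for a, b in zip(decode(corrupted), packet))
print(f"residual bit errors after FEC: {residual} of {len(packet)}")
```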
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Fuzzy Logic and Security
Fuzzy logic is being used to develop a number of security systems. The articles cited here include research into fuzzy logic-based security for software defined networks, industrial controls, intrusion response and recovery, wireless sensor networks, and more. These works were presented or published in 2014.
- Dotcenko, S.; Vladyko, A; Letenko, I, "A Fuzzy Logic-Based Information Security Management For Software-Defined Networks," Advanced Communication Technology (ICACT), 2014 16th International Conference on, vol., no., pp.167, 171, 16-19 Feb. 2014. doi: 10.1109/ICACT.2014.6778942 In terms of network security, software-defined networks (SDN) offer researchers unprecedented control over network infrastructure and define a single point of control over the data flow routing of the entire network infrastructure. The OpenFlow protocol is an embodiment of the software-defined networking paradigm. OpenFlow network security applications can implement logic more complex than the permission or prohibition of flows. Such applications can implement logic to provide complex quarantine procedures, or redirect malicious network flows for special treatment. Security detection and intrusion prevention algorithms can be implemented as OpenFlow security applications, and their implementation is often more concise and effective. In this paper we consider an algorithm for an information security management system based on soft computing, and implement a prototype of an intrusion detection system (IDS) for software-defined networks, consisting of a statistics collection and processing module and a decision-making module. These modules were implemented as an application for the Beacon controller in Java. Evaluation of the system was carried out on one of the main problems of network security: identification of hosts engaged in malicious network scanning. For evaluation of the modules we used the Mininet environment, which provides rapid prototyping for OpenFlow networks. The proposed algorithm combined with decision making based on fuzzy rules has shown better results than the security algorithms used separately. In addition, the number of code lines decreased by 20-30%, and it became easy to integrate various external modules and libraries, which greatly simplifies the implementation of the algorithms and the decision-making system.
Keywords: decision making; fuzzy logic; protocols; security of data; software radio; telecommunication control; telecommunication network management; telecommunication network routing; telecommunication security; Java; OpenFlow protocol; beacon controller; data flows routing; decision making; decision-making module; fuzzy logic-based information security management; intrusion detection system; intrusion prevention algorithms; logic processing flows; malicious network flows; malicious network scanning; mininet environment; network infrastructure; network security; processing module; security detection; soft computing; software-defined networks; statistic collection; Decision making; Information security; Software algorithms; Switches; Training; Fuzzy Logic; Information security; OpenFlow; Port scan; Software-Defined Networks (ID#:14-2862)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6778942&isnumber=6778899
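A flavor of the fuzzy decision-making module can be given in a few lines. The memberships, thresholds, and the single rule below are invented for illustration and do not correspond to the paper's Beacon/Java implementation:

```python
# Minimal Mamdani-style sketch of fuzzy scan scoring: two crisp inputs are
# fuzzified and one rule fires, with min as the AND operator.
def high(x, lo, hi):
    """Membership in 'high': ramps from 0 at lo to 1 at hi."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def scan_suspicion(ports_per_min: float, fail_ratio: float) -> float:
    # Rule: IF port-probe rate is high AND failed-connection ratio is high
    #       THEN the host is likely scanning.
    return min(high(ports_per_min, 10, 100), high(fail_ratio, 0.2, 0.8))

print(scan_suspicion(120, 0.9))   # aggressive scanner -> 1.0
print(scan_suspicion(15, 0.25))   # mostly benign host -> low score
```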
- Vollmer, T.; Manic, M.; Linda, O., "Autonomic Intelligent Cyber-Sensor to Support Industrial Control Network Awareness," Industrial Informatics, IEEE Transactions on, vol.10, no.2, pp.1647, 1658, May 2014. doi: 10.1109/TII.2013.2270373 The proliferation of digital devices in a networked industrial ecosystem, along with an exponential growth in complexity and scope, has resulted in elevated security concerns and management complexity issues. This paper describes a novel architecture utilizing concepts of autonomic computing and a simple object access protocol (SOAP)-based interface to metadata access points (IF-MAP) external communication layer to create a network security sensor. This approach simplifies integration of legacy software and supports a secure, scalable, and self-managed framework. The contribution of this paper is twofold: 1) A flexible two-level communication layer based on autonomic computing and service oriented architecture is detailed and 2) three complementary modules that dynamically reconfigure in response to a changing environment are presented. One module utilizes clustering and fuzzy logic to monitor traffic for abnormal behavior. Another module passively monitors network traffic and deploys deceptive virtual network hosts. These components of the sensor system were implemented in C++ and PERL and utilize a common internal D-Bus communication mechanism. A proof of concept prototype was deployed on a mixed-use test network showing the possible real-world applicability. In testing, 45 of the 46 network attached devices were recognized and 10 of the 12 emulated devices were created with specific operating system and port configurations. In addition, the anomaly detection algorithm achieved a 99.9% recognition rate. All output from the modules was correctly distributed using the common communication structure.
Keywords: access protocols; computer network security; fault tolerant computing; field buses; fuzzy logic; industrial control; intelligent sensors; meta data; network interfaces; pattern clustering; C++; IF-MAP; PERL; SOAP-based interface; anomaly detection algorithm; autonomic computing; autonomic intelligent cyber-sensor; digital device proliferation; flexible two-level communication layer; fuzzy logic; industrial control network awareness; internal D-Bus communication mechanism; legacy software; metadata access point external communication layer; mixed-use test network; network security sensor; networked industrial ecosystem; proof of concept prototype; self-managed framework; service oriented architecture; simple object access protocol-based interface; traffic monitor; virtual network hosts; Autonomic computing; control systems; industrial ecosystems; network security; service-oriented architecture (ID#:14-2863)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6547755&isnumber=6809862
- Zonouz, S.A; Khurana, H.; Sanders, W.H.; Yardley, T.M., "RRE: A Game-Theoretic Intrusion Response and Recovery Engine," Parallel and Distributed Systems, IEEE Transactions on, vol.25, no.2, pp.395, 406, Feb. 2014. doi: 10.1109/TPDS.2013.211 Preserving the availability and integrity of networked computing systems in the face of fast-spreading intrusions requires advances not only in detection algorithms, but also in automated response techniques. In this paper, we propose a new approach to automated response called the response and recovery engine (RRE). Our engine employs a game-theoretic response strategy against adversaries modeled as opponents in a two-player Stackelberg stochastic game. The RRE applies attack-response trees (ART) to analyze undesired system-level security events within host computers and their countermeasures using Boolean logic to combine lower level attack consequences. In addition, the RRE accounts for uncertainties in intrusion detection alert notifications. The RRE then chooses optimal response actions by solving a partially observable competitive Markov decision process that is automatically derived from attack-response trees. To support network-level multiobjective response selection and consider possibly conflicting network security properties, we employ fuzzy logic theory to calculate the network-level security metric values, i.e., security levels of the system's current and potentially future states in each stage of the game. In particular, inputs to the network-level game-theoretic response selection engine, are first fed into the fuzzy system that is in charge of a nonlinear inference and quantitative ranking of the possible actions using its previously defined fuzzy rule set. Consequently, the optimal network-level response actions are chosen through a game-theoretic optimization process. Experimental results show that the RRE, using Snort's alerts, can protect large networks for which attack-response trees have more than 500 nodes.
Keywords: Boolean functions; Markov processes; computer network security; decision theory; fuzzy set theory; stochastic games; trees (mathematics); ART; Boolean logic; RRE; Snort alerts; attack-response trees; automated response techniques; detection algorithms; fuzzy logic theory; fuzzy rule set; fuzzy system; game-theoretic intrusion response and recovery engine strategy; game-theoretic optimization process; intrusion detection; lower level attack consequences; network level game-theoretic response selection engine; network security property; network-level multiobjective response selection; network-level security metric values; networked computing systems; nonlinear inference; optimal network-level response actions; partially observable competitive Markov decision process; system-level security events; two-player Stackelberg stochastic game; Computers; Engines; Games; Markov processes; Security; Subspace constraints; Uncertainty; Computers; Engines; Games; Intrusion response systems; Markov decision processes; Markov processes; Security; Subspace constraints; Uncertainty; and fuzzy logic and control; network state estimation; stochastic games (ID#:14-2864)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6583161&isnumber=6689796
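The attack-response-tree idea, combining lower-level security events with Boolean logic, can be sketched as follows. The tree shape and event names are hypothetical, not taken from the paper:

```python
# Toy attack-response tree (ART) evaluation: leaf security events are
# combined with Boolean AND/OR gates to decide a higher-level consequence.
def evaluate(node, events):
    if isinstance(node, str):                     # leaf: was the event seen?
        return events.get(node, False)
    op, children = node                           # internal gate node
    results = (evaluate(c, events) for c in children)
    return all(results) if op == "AND" else any(results)

# Root: service compromised if (exploit AND privilege escalation) OR backdoor.
art = ("OR", [("AND", ["exploit_alert", "priv_esc_alert"]), "backdoor_alert"])

print(evaluate(art, {"exploit_alert": True, "priv_esc_alert": True}))  # True
print(evaluate(art, {"exploit_alert": True}))                          # False
```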
- Thorat, S.S.; Markande, S.D., "Reinvented Fuzzy logic Secure Media Access Control Protocol (FSMAC) to improve lifespan of Wireless Sensor Networks," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, pp.344, 349, 7-8 Feb. 2014. doi: 10.1109/ICICICT.2014.6781305 Wireless Sensor Networks (WSN) have grown in size and importance in a very short time. WSNs are very sensitive to various attacks, hence security has become a prominent issue for them. The Denial-of-Service (DOS) attack is one of the main concerns for WSNs. A DOS attack diminishes the resources of sensor nodes, which affects the normal functioning of the node. The Media Access Control (MAC) layer is responsible for communication within multiple access networks and incorporates a shared medium. The Fuzzy logic-optimized Secure Media Access Control (FSMAC) protocol gives a good solution against DOS attacks. It detects all intrusions taking place and also decreases the average energy consumed by the sensor network relative to the attacked scenario. These results lead to an increase in the lifespan of the sensor network. Fuzzy logic deals with uncertainty in human reasoning and decision making. Fuzzy logic theory is applied in an innovative way to the FSMAC protocol to enhance its performance. In this paper, a reinvention of the FSMAC protocol is proposed using new intrusion detector parameters, namely the number of times a node sensed the channel free and the variation in channel sense period. Performance of the new protocol is tested on the basis of the time of first node death and the average energy consumed by a sensor node. These results show that the lifespan of the sensor network increases and the average energy consumed by a sensor node decreases.
Keywords: access protocols; cryptographic protocols; decision making; energy consumption; fuzzy logic; telecommunication security; wireless sensor networks; DOS attack; FSMAC protocol; WSN improvement; decision making; denial of service; energy consumption; fuzzy logic secure media access control protocol; human reasoning intrusion detector parameter; multiple access networks; sensor nodes; uncertainty handling; wireless sensor network; Frequency division multiaccess; Indexes; Protocols; Receivers; Uncertainty; Wireless sensor networks; Denial-Of-Service (DOS) Attack; Fuzzy logic-optimized Secure Media access Control Protocol (FSMAC); Media Access Control (MAC) Protocol; Security Issues; Wireless Sensor Networks (ID#:14-2865)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6781305&isnumber=6781240
- Rambabu, C.; Obulesu, Y.P.; Saibabu, C., "Evolutionary Algorithm-Based Technique For Power System Security Enhancement," Advances in Electrical Engineering (ICAEE), 2014 International Conference on, pp.1, 5, 9-11 Jan. 2014. doi: 10.1109/ICAEE.2014.6838521 Security-constrained optimal power flow is one of the most cost-effective measures to promote both cost minimization and maximum voltage security without jeopardizing system operation. It is developed into a multi-objective problem that involves objectives such as the economical operating condition of the system and the system security margin. This paper explores the application of the Particle Swarm Optimization (PSO) algorithm to solve the security enhancement problem. A novel fuzzy logic composite multi-objective evolutionary algorithm for the security problem is presented. Flexible AC Transmission Systems (FACTS) devices, which are modern compensators of active and reactive power, can be considered viable options for providing security enhancement. The proposed algorithm is tested on the IEEE 30-bus system. The proposed methods have achieved solutions with good accuracy, stable convergence characteristics, simple implementation and satisfactory computation time.
Keywords: flexible AC transmission systems; fuzzy logic; particle swarm optimisation; power system security; FACTS; IEEE 30-bus system; cost minimization; economical operating condition; flexible AC transmission systems; fuzzy logic; maximum voltage security; multiobjective evolutionary algorithm; multiobjective problem; optimal power flow; particle swarm optimization algorithm; power system security enhancement; security enhancement problem; Indexes; Power capacitors; Power system stability; Reactive power; Security; Silicon; Thyristors; Fuzzy Logic; Particle Swarm Optimization; Power System Security; TCSC (ID#:14-2866)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6838521&isnumber=6838422
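As a rough illustration of the optimization engine used above, the sketch below implements a generic particle swarm optimization loop. The security-constrained power flow objective and FACTS device models are far beyond a few lines, so a toy quadratic cost stands in; the swarm size, inertia w, and acceleration coefficients c1 and c2 are conventional but assumed values.

import random

def pso(cost, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    gbest = pbest[min(range(n_particles), key=lambda i: pbest_cost[i])][:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive + social terms
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < cost(gbest):
                    gbest = pos[i][:]
    return gbest

# Toy stand-in for generation cost plus a security-margin penalty.
best = pso(lambda x: (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2, [(-2, 2), (-2, 2)])
print(best)  # converges near [1.0, -0.5]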
- AlOmary, R.Y.; Khan, S.A, "Fuzzy Logic Based Multi-Criteria Decision-Making Using Dubois and Prade's Operator For Distributed Denial Of Service Attacks In Wireless Sensor Networks," Information and Communication Systems (ICICS), 2014 5th International Conference on, pp.1,6, 1-3 April 2014 doi: 10.1109/IACS.2014.6841979 Wireless sensor networks (WSNs) have emerged as an important technology for monitoring critical situations that require real-time sensing and data acquisition for decision-making purposes. Security of wireless sensor networks is a challenging contemporary issue, and a significant number of malicious attacks against WSNs have been identified in recent times. Due to the unreliable and untrusted environments in which WSNs operate, the threat of distributed attacks against sensory resources such as power consumption, communication, and computation capabilities cannot be neglected. In this paper, a fuzzy logic based approach is proposed in the context of distributed denial of service attacks in WSNs. The approach is modelled and formulated as a multi-criteria decision-making problem, considering attack detection rate and energy decay rate as the two decision criteria. Using Dubois and Prade's fuzzy operator, a mechanism is developed to achieve the best trade-off between these two conflicting criteria. Empirical analysis demonstrates the effectiveness of the proposed approach.
Keywords: computer network security; decision making; fuzzy logic; wireless sensor networks; Dubois; Prade; WSN; attack detection rate; data acquisition; distributed denial of service attacks; energy decay rate; fuzzy logic based approach; fuzzy operator; malicious attacks; multicriteria decision-making problem; real-time sensing; sensory resources; wireless sensor networks security; Computer crime; Decision making; Fuzzy logic; Monitoring; Wireless sensor networks (ID#:14-2867)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6841979&isnumber=6841931
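For readers unfamiliar with the aggregation step, the sketch below shows Dubois and Prade's parametric t-norm, T(a, b) = ab / max(a, b, alpha), applied to two normalized criterion scores. The alpha value and the example scores are illustrative assumptions; the paper's fuzzification of detection rate and energy decay rate is not reproduced here.

def dubois_prade(a, b, alpha=0.5):
    """Dubois-Prade t-norm: T(a, b) = a*b / max(a, b, alpha), all in [0, 1]."""
    return (a * b) / max(a, b, alpha)

# Suppose a candidate detector configuration achieves a fuzzified
# attack-detection score of 0.9 but an energy-decay score of only 0.6
# (after normalization, higher is better for both).
detection, energy = 0.9, 0.6
print(dubois_prade(detection, energy))  # combined suitability = 0.6

Scanning alpha trades off how much one strong criterion may compensate for the other; choosing the configuration with the highest aggregate then gives the best trade-off between the two conflicting criteria.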
- Chaudhary, A; Kumar, A; Tiwari, V.N., "A Reliable Solution Against Packet Dropping Attack Due To Malicious Nodes Using Fuzzy Logic in MANETs," Optimization, Reliability, and Information Technology (ICROIT), 2014 International Conference on, pp.178,181, 6-8 Feb. 2014. doi: 10.1109/ICROIT.2014.6798326 The recent growth of mobile ad hoc networks has increased the capability, and the vulnerability, of communication between mobile nodes. Mobile ad hoc networks are completely free of pre-existing infrastructure or authentication points, so mobile nodes that want to communicate immediately form the topology and initiate requests to send or receive data packets. From a security perspective, communication between mobile nodes via wireless links makes these networks more susceptible to internal or external attacks, because anyone can join or leave the network at any time. A packet dropping attack through malicious nodes is one of the possible attacks on a mobile ad hoc network. This paper develops an intrusion detection system using fuzzy logic to detect packet dropping attacks in mobile ad hoc networks and to remove the malicious nodes, in order to save the resources of mobile nodes. For the implementation, the QualNet simulator 6.1 and a Mamdani fuzzy inference system are used to analyze the results. Simulation results show that the system detects dropping attacks with a high detection rate and a low false-positive rate.
Keywords: fuzzy logic; inference mechanisms; mobile ad hoc networks; mobile computing; security of data; MANET; Mamdani fuzzy inference system; Qualnet simulator 6.1; data packets; fuzzy logic; intrusion detection system; malicious nodes; mobile ad hoc network; mobile nodes; packet dropping attack; wireless links; Ad hoc networks; Artificial intelligence; Fuzzy sets; Mobile computing; Reliability engineering; Routing; Fuzzy Logic; Intrusion Detection System (IDS); MANETs Security Issues; Mobile Ad Hoc networks (MANETs); Packet Dropping attack (ID#:14-2868)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6798326&isnumber=6798279
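A minimal sketch of the detection idea, assuming nodes promiscuously count the packets a neighbour receives and forwards. The paper implements a Mamdani fuzzy inference system inside the QualNet 6.1 simulator; the thresholds and membership shapes below are illustrative stand-ins.

def grade_up(x, lo, hi):
    """Membership rising linearly from 0 at lo to 1 at hi."""
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def classify_node(received, forwarded):
    drop_ratio = 1.0 - forwarded / received if received else 0.0
    malicious = grade_up(drop_ratio, 0.2, 0.6)   # high drop ratio -> malicious
    confidence = grade_up(received, 10, 50)      # need enough observed traffic
    score = min(malicious, confidence)           # fuzzy AND of the two
    return drop_ratio, score

ratio, score = classify_node(received=80, forwarded=25)
print(f"drop ratio={ratio:.2f}, maliciousness={score:.2f}")
# A node whose score exceeds a cutoff (say 0.5) would be excluded from
# routing, saving the resources of well-behaved mobile nodes.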
- Khanum, S.; Islam, M.M., "An Enhanced Model Of Vertical Handoff Decision Based On Fuzzy Control Theory & User Preference," Electrical Information and Communication Technology (EICT), 2013 International Conference on, pp.1,6, 13-15 Feb. 2014. doi: 10.1109/EICT.2014.6777873 With the development of wireless communication technology, various wireless networks with different features will coexist in the same premises, and heterogeneous networks will be dominant in the next generation of wireless networks. In such networks, choosing the most suitable network for a mobile user is a key issue, and vertical handoff decision making is one of the most important topics in heterogeneous wireless network architecture. The proposed method considers the most significant parameters in the vertical handoff decision: received signal strength (RSS), monetary cost (C), bandwidth (BW), battery consumption (BC), security (S), and reliability (R). Handoff decision making is divided into two stages. The first stage calculates a system obtained value (SOV) from RSS, C, BW, and BC using fuzzy logic theory. Since today's mobile users are discerning in choosing their desired services, the user-preferred network chosen from the user's priority list gives the user obtained value (UOV). Handoff decisions are then made based on SOV and UOV to select the most appropriate network for the mobile nodes (MNs). Simulation results show that the fuzzy control theory and user preference based vertical handoff decision algorithm (VHDA) is able to make accurate handoff decisions, reduce unnecessary handoffs, decrease handoff calculation time, and decrease the probability of call blocking and dropping.
Keywords: decision making; fuzzy control; fuzzy set theory; mobile computing; mobility management (mobile radio); probability; telecommunication network reliability; telecommunication security; MC; RSS; SOV; VHDA; bandwidth; battery consumption; decrease call blocking probability; decrease call dropping probability; decrease handoff calculation time; fuzzy control theory; fuzzy logic theory; mobile nodes; monetary cost; next generation wireless networks; received signal strength; reliability; security; system obtained value calculation; unnecessary handoff reduction; user obtained value; user preference; user priority list; vertical handoff decision enhancement model; vertical handoff decision making; wireless communication technology; wireless heterogeneous networks architecture; Bandwidth; Batteries; Communication system security; Mobile communication; Vectors; Wireless networks; Bandwidth; Cost; Fuzzy control theory; Heterogeneous networks; Received signal strength; Security and user preference; Vertical handoff (ID#:14-2869)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777873&isnumber=6777807
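The sketch below illustrates the two-stage decision: a system obtained value (SOV) computed from RSS, cost, bandwidth, and battery consumption, combined with a user obtained value (UOV) taken from the user's priority list. The paper derives SOV with fuzzy logic; this sketch substitutes a normalized weighted sum, and the weights, normalization ranges, example networks, and the SOV/UOV mixing ratio are all assumptions.

def norm(x, lo, hi, benefit=True):
    """Clamp x to [lo, hi] and scale to [0, 1]; invert for cost criteria."""
    x = max(lo, min(hi, x))
    s = (x - lo) / (hi - lo)
    return s if benefit else 1.0 - s

def sov(net, w=(0.4, 0.2, 0.3, 0.1)):
    return (w[0] * norm(net["rss_dbm"], -100, -40)            # stronger is better
            + w[1] * norm(net["cost"], 0, 10, benefit=False)  # cheaper is better
            + w[2] * norm(net["bw_mbps"], 0, 100)             # wider is better
            + w[3] * norm(net["batt_mw"], 0, 500, benefit=False))

networks = {
    "WLAN": {"rss_dbm": -55, "cost": 1, "bw_mbps": 54, "batt_mw": 300},
    "LTE":  {"rss_dbm": -75, "cost": 6, "bw_mbps": 40, "batt_mw": 200},
}
uov = {"WLAN": 1.0, "LTE": 0.6}   # user priority list, normalized
best = max(networks, key=lambda n: 0.7 * sov(networks[n]) + 0.3 * uov[n])
print(best)                        # WLAN wins for this made-up scenario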
- Karakis, R.; Guler, I, "An Application Of Fuzzy Logic-Based Image Steganography," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp.156, 159, 23-25 April 2014. doi: 10.1109/SIU.2014.6830189 Today, the security of data in digital environments (such as text, image, and video files) is a growing concern as technology develops. Steganography and cryptology are both important for saving and hiding data: cryptology protects the message contents, while steganography hides the message's presence. In this study, an application of fuzzy logic (FL)-based image steganography was performed. First, the hidden messages were encrypted by an XOR (eXclusive Or) algorithm. Second, an FL algorithm was used to select the least significant bits (LSBs) of the image pixels. Then, the LSBs of the selected image pixels were replaced with the bits of the hidden messages. The FL-based LSB algorithm makes the LSB method more robust and secure against steganalysis.
Keywords: cryptography; fuzzy logic; image coding; steganography; FL-based LSB algorithm; XOR algorithm; cryptology; data security; eXclusive OR algorithm; fuzzy logic; image steganography; least significant bits; Conferences; Cryptography; Fuzzy logic; Internet; PSNR; Signal processing algorithms (ID#:14-2870)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830189&isnumber=6830164
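The two stages of the scheme (XOR encryption, then LSB replacement) are easy to make concrete. In the sketch below the fuzzy-logic pixel selection is replaced by sequential embedding into a flat list of 8-bit pixel values, so only the surrounding machinery is faithful to the paper; the key and message are of course made up.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def embed(pixels, message: bytes):
    bits = [(byte >> (7 - k)) & 1 for byte in message for k in range(8)]
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit      # overwrite the least significant bit
    return out

def extract(pixels, n_bytes: int) -> bytes:
    bits = [p & 1 for p in pixels[: n_bytes * 8]]
    return bytes(sum(bit << (7 - k) for k, bit in enumerate(bits[i:i + 8]))
                 for i in range(0, len(bits), 8))

key, secret = b"k3y", b"hi"
pixels = list(range(64))                  # stand-in for grayscale pixel values
stego = embed(pixels, xor_bytes(secret, key))
print(xor_bytes(extract(stego, len(secret)), key))   # b'hi'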
- Nesteruk, P.; Nesteruk, L.; Kotenko, I, "Creation of a Fuzzy Knowledge Base for Adaptive Security Systems," Parallel, Distributed and Network-Based Processing (PDP), 2014 22nd Euromicro International Conference on, pp.574, 577, 12-14 Feb. 2014. doi: 10.1109/PDP.2014.115 To design next-generation adaptive security systems, powerful intelligent components must be developed. The paper describes a fuzzy knowledge base specifying relationships between threats and protection mechanisms, built with the MathWorks MATLAB Fuzzy Logic Toolbox. The goal is to increase the effectiveness of system reactions by minimizing neural network weights. We demonstrate a technique for creating a fuzzy knowledge base to improve system protection via rule monitoring and correction.
Keywords: adaptive systems; fuzzy set theory; knowledge based systems; security of data; MATLAB; adaptive security systems; fuzzy knowledge; fuzzy logic toolbox; neural network weights; rules monitoring; Adaptation models; Adaptive systems; Biological system modeling; Fuzzy logic; Knowledge based systems; MATLAB; Security; MATLAB Fuzzy Logic Toolbox; adaptive security rules; adaptive security system; fuzzy knowledge base (ID#:14-2871)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6787332&isnumber=6787236
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Visible Light Communication
Visible light communication (VLC) offers an unregulated and free light spectrum and could potentially be a solution for overcoming the overcrowded radio spectrum, especially for wireless communication systems, and for doing so securely. In the articles cited here, security issues are addressed related to secure bar codes for smart phones, reducing the impact of ambient light (optical "noise"), physical layer security for indoor visible light, and using xenon flashlights for mobile payments. Also cited are works covering a broader range of visible light communication topics. These works appeared in the first half of 2014.
- Bingsheng Zhang; Kui Ren; Guoliang Xing; Xinwen Fu; Cong Wang, "SBVLC: Secure Barcode-Based Visible Light Communication For Smartphones," INFOCOM, 2014 Proceedings IEEE, pp.2661,2669, April 27 2014-May 2 2014. doi: 10.1109/INFOCOM.2014.6848214 As an alternative to NFC technology, 2D barcodes have been increasingly used for security-sensitive applications including payments and personal identification. However, the security of barcode-based communication in mobile applications has not been systematically studied. Due to their visual nature, 2D barcodes are subject to eavesdropping when they are displayed on the screen of a smartphone. On the other hand, the fundamental design principles of 2D barcodes make it difficult to add security features. In this paper, we propose SBVLC - a secure system for barcode-based visible light communication (VLC) between smartphones. We formally analyze the security of SBVLC based on geometric models and propose physical security enhancement mechanisms for barcode communication by manipulating screen view angles and leveraging user-induced motions. We then develop two secure data exchange schemes. These schemes are useful in many security-sensitive mobile applications including private information sharing, secure device pairing, and mobile payment. SBVLC is evaluated through extensive experiments on both Android and iOS smartphones.
Keywords: Android (operating system); bar codes; electronic data interchange; mobile commerce; near-field communication; radiofrequency identification; smart phones; telecommunication security; 2D barcodes; Android smartphones; NFC technology; SBVLC; eavesdropping; geometric model; iOS smartphones; mobile payment; payments identification; personal identification; physical security enhancement mechanism; private information sharing; screen view angle manipulation; secure barcode-based visible light communication; secure data exchange scheme; secure device pairing; security sensitive application; security sensitive mobile application; user induced motion; Cameras; Decoding; Receivers; Security; Smart phones; Solid modeling; Three-dimensional displays (ID#:14-2927)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848214&isnumber=6847911
- Verma, S.; Shandilya, A; Singh, A, "A Model For Reducing The Effect Of Ambient Light Source In VLC System," Advance Computing Conference (IACC), 2014 IEEE International, pp.186,188, 21-22 Feb. 2014. doi: 10.1109/IAdCC.2014.6779317 In recent years, Visible Light Communication has generated worldwide interest in the field of wireless communication because of its low cost and secure data exchange. However, VLC suffers from serious drawbacks which degrade communication performance. One of the major problems faced by any VLC system is interference caused by ambient light noise, which deteriorates the performance of the system. In this paper we propose an AVR-based model to mitigate ambient light noise interference and discuss its effectiveness. Further, we discuss other difficulties of VLC systems.
Keywords: electronic data interchange; interference suppression; light interference; optical communication; optical noise; telecommunication security; AVR based model; VLC system; ambient light noise interference mitigation; ambient light source; secure data exchange; visible light communication; wireless communication; Conferences; Decision support systems; Handheld computers; Ambient noise mitigation; LED transmitter; Visible Light Communication (VLC) (ID#:14-2928)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6779317&isnumber=6779283
- Mostafa, A; Lampe, L., "Physical-layer Security For Indoor Visible Light Communications," Communications (ICC), 2014 IEEE International Conference on, pp.3342,3347, 10-14 June 2014. doi: 10.1109/ICC.2014.6883837 This paper considers secure transmission over the visible light communication (VLC) channel by the means of physical-layer security techniques. In particular, we consider achievable secrecy rates of the multiple-input, single-output (MISO) wiretap VLC channel. The VLC channel is modeled as a deterministic and real-valued Gaussian channel subject to amplitude constraints. We utilize null-steering and artificial noise strategies to achieve positive secrecy rates when the eavesdropper's channel state information (CSI) is perfectly known and entirely unknown to the transmitter, respectively. In both scenarios, the legitimate receiver's CSI is available to the transmitter. We numerically evaluate achievable secrecy rates under typical VLC scenarios and show that simple precoding techniques can significantly improve the confidentiality of VLC links.
Keywords: Gaussian channels; indoor communication; optical communication; precoding; radio receivers; radio transmitters; telecommunication security; CSI; MISO channel; achievable secrecy rates; amplitude constraints; artificial noise; channel state information; deterministic Gaussian channel; indoor visible light communications; legitimate receiver; multiple-input single-output channel; null steering; physical layer security; positive secrecy rates; real-valued Gaussian channel; secure transmission; simple precoding; transmitter; wiretap VLC channel; Light emitting diodes; Optical transmitters; Receivers; Security; Signal to noise ratio; Vectors (ID#:14-2929)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883837&isnumber=6883277
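The null-steering strategy for the case of known eavesdropper CSI has a compact form: beamform along the component of the legitimate channel h that is orthogonal to the eavesdropper channel g. The NumPy sketch below shows this projection; the channel vectors are invented, and the amplitude constraints on the LED drive signals that the paper analyzes are ignored here for brevity.

import numpy as np

h = np.array([0.9, 0.5, 0.3, 0.7])   # legitimate receiver's channel gains
g = np.array([0.4, 0.8, 0.2, 0.1])   # eavesdropper's channel gains

w = h - (g @ h) / (g @ g) * g        # project h onto the null space of g
w /= np.linalg.norm(w)               # unit-power beamforming vector

print(f"legitimate gain |h.w| = {abs(h @ w):.3f}")
print(f"eavesdropper gain |g.w| = {abs(g @ w):.3e}")  # ~0: steered into the null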
- Galal, M.M.; El Aziz, AA; Fayed, H.A; Aly, M.H., "Employing Smartphones Xenon Flashlight For Mobile Payment," Multi-Conference on Systems, Signals & Devices (SSD), 2014 11th International, pp.1,5, 11-14 Feb. 2014. doi: 10.1109/SSD.2014.6808780 Due to users' heavy dependence on their smartphones and the huge technological advances in their design, smartphones have replaced many electronic devices nowadays. For that reason, it is of great interest to use such phones to replace magnetic cards. This paper uses the built-in xenon flashlight of today's Android smartphones to experimentally transmit the data stored on a user's magnetic card to a card reader or automatic teller machine (ATM). We experimentally modulate the embedded xenon flashlight of a smartphone with the required information of a traditional magnetic card and transmit the light over a secure high-speed optical link at 15 bps with no additional hardware at the user end. The paper introduces the design of a small, inexpensive supplementary receiver circuit module that is easily attached to a contemporary card reader or ATM machine. Furthermore, the paper tests the system performance under interference from another transmitter, and compares its speed and security to the regular ATM card and to other competing technologies.
Keywords: electronic commerce; optical links; smart phones; ATM; Android smartphones; automatic teller machine; contemporary card reader; magnetic cards; mobile payment; secure high speed optical link; smartphones Xenon flashlight; supplementary receiver circuit module; IEC standards; Photodetectors; Pulse width modulation; Receivers; Smart phones; Transmitters; ATM machines; Visible light communication; Xenon flashlight; smart payments; smartphones (ID#:14-2930)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6808780&isnumber=6808745
- Kizilirmak, R.C.; Uysal, M., "Relay-assisted OFDM Transmission For Indoor Visible Light Communication," Communications and Networking (BlackSeaCom), 2014 IEEE International Black Sea Conference on, pp.11,15, 27-30 May 2014. doi: 10.1109/BlackSeaCom.2014.6848995 In this study, we investigate a relay-assisted visible light communication (VLC) system where an intermediate light source cooperates with the main light source. Specifically, we consider two light sources in an office space; one is the information source employed on the ceiling and the other one is a task light mounted on a desk. Our system builds upon DC biased optical orthogonal frequency division multiplexing (DCO-OFDM). The task light performs amplify-and-forward relaying to assist the communication and operates in half-duplex mode. We investigate the error rate performance of the proposed OFDM-based relay-assisted VLC system. Furthermore, we present joint AC and DC optimal power allocation in order to improve the performance. The DC power allocation is controlled by sharing the number of LED chips between the terminals and the AC power allocation decides the fraction of the information signal energy to be consumed at the terminals. Simulation results reveal that the VLC system performance can be improved via relay-assisted transmission and the performance gain as much as 6 dB can be achieved.
Keywords: OFDM modulation; amplify and forward communication; indoor communication ;light sources; optical communication; optical modulation; relay networks (telecommunication);DC biased optical orthogonal frequency division multiplexing; LED chips; VLC system; amplify-and-forward relaying; error rate performance; half-duplex mode; indoor visible light communication; information signal energy; information source; intermediate light source; joint AC-DC optimal power allocation; office space; relay-assisted OFDM transmission; relay-assisted visible light communication system; Bit error rate; Light sources; Lighting; OFDM; Relays; Resource management; Sea surface; DCO-OFDM; Visible light communication; amplify-and-forward; half-duplex; power allocation (ID#:14-2931)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6848995&isnumber=6848989
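A minimal NumPy sketch of the DCO-OFDM transmitter the system above builds on: Hermitian symmetry across the subcarriers forces a real-valued IFFT output, which is then DC-biased (and clipped at zero) so it can drive an LED. The FFT size, QAM alphabet, and bias level are assumptions.

import numpy as np

N = 16                                            # FFT size
rng = np.random.default_rng(0)
qam = (rng.choice([-1, 1], N // 2 - 1) + 1j * rng.choice([-1, 1], N // 2 - 1))

X = np.zeros(N, dtype=complex)
X[1:N // 2] = qam                                 # data on positive bins
X[N // 2 + 1:] = np.conj(qam[::-1])               # Hermitian-symmetric copy
# X[0] and X[N//2] stay zero: no information on the DC / Nyquist bins.

x = np.fft.ifft(X).real                           # real baseband waveform
bias = 3 * x.std()                                # DC bias (an assumed level)
tx = np.clip(x + bias, 0, None)                   # non-negative LED drive signal

print(np.allclose(np.fft.ifft(X).imag, 0))        # True: the waveform is real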
- Fisne, A; Toker, C., "Investigation of the Channel Structure in Visible Light Communication," Signal Processing and Communications Applications Conference (SIU), 2014 22nd, pp.1646, 1649, 23-25 April 2014. doi: 10.1109/SIU.2014.6830562 Visible Light Communication has recently come forward, particularly in indoor communication, as an important alternative to radio communication systems. In Visible Light Communication, information is transferred by means of the light used for lighting rather than by radio frequencies. In this paper, the structure of the channel used for Visible Light Communication is examined. The effects of the geometry between the receiver and transmitter on communication are analyzed and supported with simulations.
Keywords: geometry; indoor communication; optical receivers; optical transmitters; channel structure; geometry effects; indoor communication; lighting; visible light communication; Conferences; Light emitting diodes; Lighting; Masers; Mathematical model; Signal to noise ratio (ID#:14-2932)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6830562&isnumber=6830164
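The geometry effects examined above are commonly captured by the standard Lambertian line-of-sight channel model, H(0) = (m+1) A cos^m(phi) cos(psi) / (2 pi d^2), where the Lambertian order m is set by the LED's half-power angle. A worked example in Python, with an assumed geometry:

import math

def los_gain(half_power_deg, area_m2, d_m, irradiance_deg, incidence_deg,
             fov_deg=90.0):
    # Lambertian order from the LED's half-power semi-angle.
    m = -math.log(2) / math.log(math.cos(math.radians(half_power_deg)))
    if incidence_deg > fov_deg:
        return 0.0                       # receiver cannot see the source
    return ((m + 1) * area_m2 / (2 * math.pi * d_m ** 2)
            * math.cos(math.radians(irradiance_deg)) ** m
            * math.cos(math.radians(incidence_deg)))

# A ceiling LED with a 60-degree half-power angle (m = 1), a 1 cm^2
# photodiode 2.2 m away, and both angles at 30 degrees:
print(los_gain(60, 1e-4, 2.2, 30, 30))   # DC channel gain H(0)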
- Wang Yuanquan; Chi Nan, "A High-Speed Bi-Directional Visible Light Communication System Based on RGB-LED," Communications, China, vol.11, no.3, pp.40, 44, March 2014. doi: 10.1109/CC.2014.6825257 In this paper, we propose and experimentally demonstrate a bi-directional indoor communication system based on visible light RGB-LEDs. Spectrally efficient modulation formats (QAM-OFDM), advanced digital signal processing, and pre- and post-equalization are adopted to compensate for the severe frequency response of the indoor channel. In this system, we utilize red-green-blue light emitting diodes (LEDs), of which each color can be used to carry different signals. For the downlink, the low frequencies of each color are used, while for the uplink, the high frequencies are used. The overall data rates of the downlink and uplink are 1.15 Gb/s and 300 Mb/s. The bit error ratios (BERs) for all channels after 0.7 m indoor delivery are below the pre-forward-error-correction (pre-FEC) threshold of 3.8x10^-3. To the best of our knowledge, this is the highest data rate reported for a bi-directional visible light communication system.
Keywords: OFDM modulation; error statistics; forward error correction; indoor communication; light emitting diodes; optical communication; quadrature amplitude modulation; telecommunication channels; BER; QAM-OFDM; advanced digital signal processing; bi-directional indoor communication system; bi-directional visible light communication system; bit error ratios; bit rate 1.15 Gbit/s; bit rate 300 Mbit/s; downlink; equalization; error-correction; frequency response; high-speed bi-directional visible light communication system; indoor channel; indoor delivery; modulation formats; preFEC threshold; preforward-error-correction; red-green-blue Light emitting diodes; uplink; visible light RGB-LED; Bidirectional control; Downlink; Image color analysis; Light emitting diodes; Modulation; OFDM; Uplink; bidirectional transmission; light emitting diode; orthogonal frequency division multiplexing; visible light communication (ID#:14-2933)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6825257&isnumber=6825249
- Xu Bao; Xiaorong Zhu; Tiecheng Song; Yanqiu Ou, "Protocol Design and Capacity Analysis in Hybrid Network of Visible Light Communication and OFDMA Systems," Vehicular Technology, IEEE Transactions on, vol.63, no.4, pp.1770, 1778, May 2014. doi: 10.1109/TVT.2013.2286264 Visible light communication (VLC) uses a vast unregulated and free light spectrum and is considered a solution for overcoming the crowded radio spectrum for wireless communication systems. However, duplex communication, user mobility, and handover mechanisms are challenging tasks in a VLC system. This paper proposes a hybrid network model of VLC and orthogonal frequency-division multiple access (OFDMA) in which the VLC channel is only used for downlink transmission, whereas OFDMA channels serve the uplinks in any situation, or the downlinks outside VLC hotspot coverage. A novel protocol is proposed, combined with access, horizontal, and vertical handover mechanisms for the mobile terminal (MT), to resolve user mobility among different hotspots and the OFDMA system. A new VLC network scheme and its frame format are presented to deal with the multiuser access problems in every hotspot. In addition, a new metric r is defined to evaluate the capacity of this hybrid network as the spatial density of the interarrival time of MT requests in s^-1 m^-2, under the assumption of a homogeneous Poisson point process (HPPP) distribution of MTs. Analytical and simulation results show improvements in the capacity performance of the hybrid network compared to the OFDMA system.
Keywords: Poisson distribution; frequency division multiple access; optical communication; protocols; HPPP distribution; OFDMA channels; OFDMA systems; VLC channel; VLC hotspots coverage; capacity analysis; downlink transmission; duplex communication; free light spectrum; handover mechanisms; homogenous Poisson point process; hybrid network model; interarrival time; mobile terminal; mobility mechanisms; multiuser access problems; orthogonal frequency-division multiplexing access; protocol design; radio spectrum; spatial density; visible light communication; wireless communication systems; Downlink; Handover; Protocols; Radio frequency; Servers; Uplink; Capacity analysis; VLC frame format; Visible light Communication (VLC);capacity analysis; horizontal and vertical handover; hybrid VLC-OFDMA network; hybrid visible light communication (VLC)-orthogonal frequency-division multiplexing access (OFDMA) network (ID#:14-2934)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6637084&isnumber=6812142
- Mondal, R.K.; Saha, N.; Yeong Min Jang, "Performance Enhancement Of MIMO Based Visible Light Communication," Electrical Information and Communication Technology (EICT), 2013 International Conference on, pp.1,5, 13-15 Feb. 2014.doi: 10.1109/EICT.2014.6777901 Camera-based visible light communication (VLC) is the merger of VLC with vision technology, deploying VLC features in hand-held devices such as smartphones by using light emitting diode (LED) transmitter-to-camera communication. However, the most advantageous features of VLC technology have not been achieved, due to the low frame-handling rate of camera modules. On the other hand, the spatial light-source separation characteristic of camera modules opens the scope for deploying the multiple-input multiple-output (MIMO) concept to enhance overall system capacity and achieve robust signal reception in camera-based VLC systems. In this paper, the performance of spatial multiplexing in a MIMO-based VLC system is evaluated.
Keywords: MIMO communication; cameras; light emitting diodes; optical communication; LED transmitter; MIMO based visible light communication; Smartphone; VLC; camera based visible light communication; hand held devices; light emitting diode; performance enhancement; robust signal reception; spatial multiplexing; vision technology; Bit error rate; Cameras; MIMO; Multiplexing; Optical transmitters; Receivers; Signal to noise ratio; LED; MIMO; Spatial Multiplexing; Visible light communication; image sensor (ID#:14-2935)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6777901&isnumber=6777807
- Din, I; Hoon Kim, "Energy-Efficient Brightness Control and Data Transmission for Visible Light Communication," Photonics Technology Letters, IEEE, vol. 26, no. 8, pp.781, 784, April 15, 2014. doi: 10.1109/LPT.2014.2306195 This letter considers the efficient utilization of energy in a visible light communication (VLC) system. A joint brightness control and data transmission scheme is presented to reduce the total power consumption while satisfying lighting and communication requirements. An optimization problem is formulated to determine the optimal parameters for the input waveform of light emitting diode (LED) lamps; the problem minimizes the total energy consumption of the LED lamps while ensuring the desired brightness and communication link quality. The simulation results show that the proposed scheme increases the energy efficiency of the VLC system.
Keywords: LED lamps; brightness; data communication; energy consumption; optical communication equipment; optical links; LED lamps; VLC system; communication link quality; data transmission; energy consumption; energy efficiency; energy-efficient brightness control; input waveform; light emitting diode lamps; optimization problem; power consumption; visible light communication system; Brightness; Data communication; LED lamps; Modulation; Optical receivers; Visible light communication; energy efficiency; subcarrier pulse position modulation; wireless communication (ID#:14-2936)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6740016&isnumber=6776431
- Monteiro, E.; Hranilovic, S., "Design and Implementation of Color-Shift Keying for Visible Light Communications," Lightwave Technology, Journal of, vol.32, no.10, pp.2053, 2060, May 15, 2014. doi: 10.1109/JLT.2014.2314358 Color-shift keying (CSK) is a visible light communication intensity modulation scheme, outlined in IEEE 802.15.7, that transmits data imperceptibly through the variation of the color emitted by red, green, and blue light emitting diodes. An advantage of CSK is that the power envelope of the transmitted signal is fixed; therefore, CSK reduces the potential for human health complications related to fluctuations in light intensity. In this work, a rigorous design framework for high-order CSK constellations is presented. A key benefit of the framework is that it optimizes constellations while accounting for crosstalk between the color communication channels. In addition, and unlike previous approaches, the method is capable of optimizing 3-D constellations. Furthermore, a prototype CSK communication system is presented to validate the performance of the optimized constellations, which provide gains of 1-3 dB over standard 802.15.7 constellations.
Keywords: IEEE standards; light emitting diodes; optical communication equipment; optical crosstalk; optical design techniques; optical modulation; optimisation; visible spectra;3D high order CSK constellation optimization; IEEE 802.15.7;blue light emitting diodes; color communication channels; color-shift keying design; color-shift keying implementation; data transmission; gain 1 dB to 3 dB; green light emitting diodes; light intensity fluctuations; optical crosstalk; red light emitting diodes; signal transmission; visible light communication intensity modulation scheme; Color; Image color analysis; Light emitting diodes; Noise; Optical receivers; Optical transmitters; Optimization; Color-shift keying (CSK); intensity modulation; visible light communications (VLC) (ID#:14-2937)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6780585&isnumber=6808425
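The fixed power envelope that motivates CSK is easy to see in code: every symbol maps to red, green, and blue intensities with a constant sum. The 4-CSK points below are illustrative only, not the paper's optimized constellations or the IEEE 802.15.7 ones.

CONSTELLATION = {                      # (red, green, blue) intensities
    0b00: (1.0, 0.0, 0.0),
    0b01: (0.0, 1.0, 0.0),
    0b10: (0.0, 0.0, 1.0),
    0b11: (1 / 3, 1 / 3, 1 / 3),       # centroid of the color triangle
}

def modulate(symbols):
    frames = [CONSTELLATION[s] for s in symbols]
    assert all(abs(sum(f) - 1.0) < 1e-9 for f in frames)  # fixed envelope
    return frames

def demodulate(frame):
    # Nearest-neighbour decision in RGB intensity space.
    return min(CONSTELLATION, key=lambda s: sum(
        (a - b) ** 2 for a, b in zip(CONSTELLATION[s], frame)))

tx = modulate([0b00, 0b11, 0b10])
print([demodulate(f) for f in tx])     # [0, 3, 2]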
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.
Web Caching
Web caches offer a potential for mischief. With the expanded need for caching capability with the cloud and mobile communications, the need for more and better security has also grown. The articles cited here address cache security issues including geo-inference attacks, scriptless timing attacks, and a proposed incognito tab. Other research on caching generally is also cited. These articles appeared in 2014.
- Jia, Y.; Dong, X.; Liang, Z.; Saxena, P., "I Know Where You've Been: Geo-Inference Attacks via the Browser Cache," Internet Computing, IEEE, vol. PP, no.99, pp.1, 1, August 2014. doi: 10.1109/MIC.2014.103 Many websites, including Google and Craigslist, customize their services according to users' geo-locations to provide more relevant content and better responsiveness. Recently, mobile devices have further allowed web applications to directly read users' geo-location information from GPS sensors. However, if geo-oriented websites leave location-sensitive content in the browser cache, other sites can sniff users' geo-locations by utilizing timing side-channels. In this paper, we demonstrate that such geo-location leakage channels are widely open in popular web applications today, including 62 percent of Alexa Top 100 websites. With geo-inference attacks that measure the timing of browser cache queries, we can locate users' countries, cities and neighborhoods in our case studies. We also discuss existing defenses and propose a more balanced solution to defeat such attacks with minor performance overhead.
Keywords: (not provided) (ID#:14-3050)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6879050&isnumber=5226613
- Bin Liang; Wei You; Liangkun Liu; Wenchang Shi; Heiderich, M., "Scriptless Timing Attacks on Web Browser Privacy," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.112,123, 23-26 June 2014. doi: 10.1109/DSN.2014.93 The existing Web timing attack methods are heavily dependent on executing client-side scripts to measure the time. However, many techniques have been proposed to block the executions of suspicious scripts recently. This paper presents a novel timing attack method to sniff users' browsing histories without executing any scripts. Our method is based on the fact that when a resource is loaded from the local cache, its rendering process should begin earlier than when it is loaded from a remote website. We leverage some Cascading Style Sheets (CSS) features to indirectly monitor the rendering of the target resource. Three practical attack vectors are developed for different attack scenarios and applied to six popular desktop and mobile browsers. The evaluation shows that our method can effectively sniff users' browsing histories with very high precision. We believe that modern browsers protected by script-blocking techniques are still likely to suffer serious privacy leakage threats.
Keywords: data privacy; online front-ends; CSS features; Web browser privacy; Web timing attack methods; cascading style sheets; client-side scripts; desktop browser; mobile browser; privacy leakage threats; rendering process; script-blocking techniques; scriptless timing attacks; user browsing history; Animation; Browsers; Cascading style sheets; History; Rendering (computer graphics); Timing; Web privacy; browsing history; scriptless attack; timing attack (ID#:14-3051)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903572&isnumber=6903544
- Qingsong Wei; Cheng Chen; Jun Yang, "CBM: A Cooperative Buffer Management for SSD," Mass Storage Systems and Technologies (MSST), 2014 30th Symposium on, pp.1, 12, 2-6 June 2014. doi: 10.1109/MSST.2014.6855545 Random writes significantly limit the application of Solid State Drives (SSDs) in I/O intensive applications such as scientific computing, web services, and databases. While several buffer management algorithms have been proposed to reduce random writes, their ability to deal with workloads mixing sequential and random accesses is limited. In this paper, we propose a cooperative buffer management scheme referred to as CBM, which coordinates the write buffer and read cache to fully exploit temporal and spatial localities in I/O intensive workloads. To improve both buffer hit rate and destage sequentiality, CBM divides the write buffer space into a Page Region and a Block Region. Randomly written data is put in the Page Region at page granularity, while sequentially written data is stored in the Block Region at block granularity. CBM leverages threshold-based migration to dynamically distinguish random writes from sequential writes. When a block is evicted from the write buffer, CBM merges the dirty pages in the write buffer and the clean pages in the read cache belonging to the evicted block to maximize the possibility of forming full block writes. CBM has been extensively evaluated with simulation and a real implementation on OpenSSD. Our testing results conclusively demonstrate that CBM can achieve up to 84% performance improvement and 85% garbage collection overhead reduction compared to existing buffer management schemes.
Keywords: cache storage ;flash memories; input-output programs; CBM; I/O intensive workload; OpenSSD; block granularity; block region; buffer hit rate; buffer management algorithms; cooperative buffer management; flash memory; garbage collection overhead reduction; page region; performance improvement; random write reduction; solid state drive; write sequentiality; Algorithm design and analysis; Buffer storage; Flash memories; Nonvolatile memory; Power line communications; Radiation detectors; Random access memory; buffer hit ratio; cooperative buffer management; flash memory; write sequentiality (ID#:14-3052)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6855545&isnumber=6855532
- Gomaa, H.; Messier, G.G.; Davies, R., "Hierarchical Cache Performance Analysis Under TTL-Based Consistency," Networking, IEEE/ACM Transactions on, vol. PP, no. 99, pp. 1, 1, May 2014. doi: 10.1109/TNET.2014.2320723 This paper introduces an analytical model for characterizing the instantaneous hit ratio and instantaneous average hit distance of a traditional least recently used (LRU) cache hierarchy. The analysis accounts for the use of two variants of the Time-to-Live (TTL) weak consistency mechanism. The first is the typical TTL scheme (TTL-T) used in the HTTP/1.1 protocol where expired objects are refreshed using conditional GET requests. The second is TTL immediate ejection (TTL-IE) where objects are ejected as soon as they expire. The analysis also accounts for two sharing protocols: Leave Copy Everywhere (LCE) and Promote Cached Objects (PCO). PCO is a new sharing protocol introduced in this paper that decreases the user's perceived latency and is robust under nonstationary access patterns.
Keywords: Analytical models; IEEE transactions; Markov processes; Measurement; Probability; Protocols; Servers; Analysis; Markov chain; Web; cache consistency; content-centric network (CCN);hierarchical cache; least recently used (LRU);time-to-live (TTL) (ID#:14-3053)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6812201&isnumber=4359146
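A minimal sketch contrasting the two consistency variants analyzed above: TTL-T keeps an expired object and revalidates it (the revalidation hook stands in for an HTTP conditional GET), while TTL-IE ejects the object the moment it expires. The TTL value and the revalidation callback are assumptions.

import time

class TTLCache:
    def __init__(self, ttl_seconds, immediate_eject=False):
        self.ttl, self.eject = ttl_seconds, immediate_eject
        self.store = {}                      # key -> (value, fetch_time)

    def get(self, key, revalidate):
        entry = self.store.get(key)
        if entry and time.time() - entry[1] <= self.ttl:
            return entry[0], "hit"
        if entry and not self.eject:         # TTL-T: refresh in place
            value = revalidate(key, entry[0])
            self.store[key] = (value, time.time())
            return value, "revalidated"
        self.store.pop(key, None)            # TTL-IE: expired -> ejected
        value = revalidate(key, None)
        self.store[key] = (value, time.time())
        return value, "miss"

cache = TTLCache(ttl_seconds=0.05)
fetch = lambda key, old: old or f"body-of-{key}"
print(cache.get("/index.html", fetch)[1])    # miss
time.sleep(0.1)
print(cache.get("/index.html", fetch)[1])    # revalidated (TTL-T behaviour)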
- Kumar, K.; Bose, J., "User Data Management By Tabs During A Browsing Session," Digital Information and Communication Technology and it's Applications (DICTAP), 2014 Fourth International Conference on, pp.258,263, 6-8 May 2014.doi: 10.1109/DICTAP.2014.6821692 Nowadays, most browsers are multi-tab: user activity is segregated into parallel sessions, one on each tab. However, the user data accumulated while browsing, including history, cookies, and cache, is not similarly segregated and is only accessible together. This makes it difficult for users to access their data separately by tab. In this paper, we seek to solve the problem by organizing tab-specific browser data in different tabs. We implement the system, present alternate ways to visualize the tab-specific data, and show that it does not lead to appreciable slowdown in browser performance. We also propose a method to convert an incognito tab, where no data is stored while browsing, into a normal tab and vice versa. Such methods of tabbed data management will enable the user to better organize and view tab-specific data.
Keywords: data handling; online front-ends; Web browser; incognito tab; multitab browsing session; parallel sessions; tab specific browser data; user data management; Browsers; Clustering algorithms; Context; Databases; History; Organizing; Switches; android; incognito mode; tabbed browsing; user data; web browser (ID#:14-3054)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821692&isnumber=6821645
- Kazi, A.W.; Badr, H., "Some Observations On The Performance of CCN-Flooding," Computing, Networking and Communications (ICNC), 2014 International Conference on, pp.334, 340, 3-6 Feb. 2014 doi: 10.1109/ICCNC.2014.6785356 We focus on one of the earliest forwarding strategies proposed for Content-Centric Networks (CCN), namely the CCN-Flooding approach to populate the Forwarding Information Bases (FIB) and forward packets. Pure CCN-Flooding in its own right is a potentially viable, though highly deprecated, option to forward packets. But CCN-Flooding is also proposed as an integral component of alternative forwarding strategies. Thus, it cannot entirely be dismissed, and its behavior merits study. We examine the CCN-Flooding approach using a combination of several topologies and workload sets with differing characteristics. In addition to topological effects, we identify various issues that arise, such as: the difficulty of calibrating Pending Interest Table (PIT) timeouts; a PIT-induced isolation effect that negatively impacts bandwidth consumption and system response time; and the effects of adopting or not adopting FIB routes based on volatile in-network cache entries. In conclusion, we briefly compare CCN-Flooding vs. CCN-Publication when the overhead bandwidth costs of pre-populating FIBs in the latter are also taken into account.
Keywords: computer networks; packet radio networks; telecommunication network topology; CCN-flooding; CCN-publication; FIB; PIT timeouts; PIT-induced isolation effect; bandwidth consumption; behavior merits; content-centric networks; forward packets; forwarding information bases; integral component; pending interest table timeouts; several topology; system response time; topological effects; volatile in-network cache entries; workload sets; Bandwidth; Floods; IP networks; Measurement; Network topology; Topology; Web and internet services; CCN performance evaluation; bandwidth consumption; caching; forwarding (ID#:14-3055)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6785356&isnumber=6785290
- Lei Wang; Jianfeng Zhan; Chunjie Luo; Yuqing Zhu; Qiang Yang; Yongqiang He; Wanling Gao; Zhen Jia; Yingjie Shi; Shujie Zhang; Chen Zheng; Gang Lu; Zhan, K.; Xiaona Li; Bizhu Qiu, "BigDataBench: A Big Data Benchmark Suite From Internet Services," High Performance Computer Architecture (HPCA), 2014 IEEE 20th International Symposium on, pp.488,499, 15-19 Feb. 2014. doi: 10.1109/HPCA.2014.6835958 As architecture, systems, and data management communities pay greater attention to innovative big data systems and architecture, the pressure of benchmarking and evaluating these systems rises. However, the complexity, diversity, frequently changed workloads, and rapid evolution of big data systems raise great challenges in big data benchmarking. Considering the broad use of big data systems, for the sake of fairness, big data benchmarks must include diversity of data and workloads, which is the prerequisite for evaluating big data systems and architecture. Most of the state-of-the-art big data benchmarking efforts target evaluating specific types of applications or system software stacks, and hence they are not qualified for serving the purposes mentioned above. This paper presents our joint research efforts on this issue with several industrial partners. Our big data benchmark suite, BigDataBench, not only covers broad application scenarios, but also includes diverse and representative data sets. Currently, we choose 19 big data benchmarks from dimensions of application scenarios, operations/algorithms, data types, data sources, software stacks, and application types, and they are comprehensive for fairly measuring and evaluating big data systems and architecture. BigDataBench is publicly available from the project home page http://prof.ict.ac.cn/BigDataBench. Also, we comprehensively characterize 19 big data workloads included in BigDataBench with varying data inputs. On a typical state-of-practice processor, Intel Xeon E5645, we have the following observations: First, in comparison with the traditional benchmarks, including PARSEC, HPCC, and SPECCPU, big data applications have very low operation intensity, which measures the ratio of the total number of instructions divided by the total byte number of memory accesses; Second, the volume of data input has non-negligible impact on micro-architecture characteristics, which may impose challenges for simulation-based big data architecture research; Last but not least, corroborating the observations in CloudSuite and DCBench (which use smaller data inputs), we find that the numbers of L1 instruction cache (L1I) misses per 1000 instructions (in short, MPKI) of the big data applications are higher than in the traditional benchmarks; also, we find that L3 caches are effective for the big data applications, corroborating the observation in DCBench.
Keywords: Big Data; Web services; cache storage; memory architecture; Big Data benchmark suite; Big Data systems; BigDataBench; CloudSuite; DCBench; HPCC; Intel Xeon E5645;Internet services;L1 instruction cache misses; MPKI; PARSEC; SPECCPU; big data benchmark suite; big data benchmarking; data management community; data sources; data types; memory access; micro-architecture characteristics; simulation-based big data architecture research; software stacks; system software stack; Benchmark testing; Computer architecture; Search engines; Social network services; System software (ID#:14-3056)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6835958&isnumber=6835920
- Imtiaz, Al; Hossain, Md.Jayed, "Distributed Cache Management Architecture: To Reduce The Internet Traffic By Integrating Browser And Proxy Caches," Electrical Engineering and Information & Communication Technology (ICEEICT), 2014 International Conference on, pp.1,4, 10-12 April 2014. doi: 10.1109/ICEEICT.2014.6919088 The World Wide Web is one of the most popular Internet applications, and its traffic volume is increasing and evolving due to the popularity of social networking, file hosting, and video streaming sites. A wide range of research has been done in this field, and a number of architectures exist for caching web content, each with its own advantages and limitations. Browser caches serve a single user by storing web content on the user's computer, whereas proxy caches can serve thousands of users by handling, providing, and optimizing that content. But the World Wide Web (WWW) suffers from scaling and reliability problems due to overloaded and congested proxy servers. Distributed and hierarchical architectures could be integrated as a hybrid architecture for better performance and efficiency. Based on secondary information from a literature review, this paper proposes a few feasible strategies to improve the cache management architecture by integrating the browser with the proxy cache server, where the browser cache acts as a proxy cache server by sharing its content through the hybrid architecture. The paper also examines the present architecture and the challenges of the current system that need to be resolved.
Keywords: Browsers; Computer architecture; Computers; Internet; Protocols; Servers; Web pages; Browser cache; Cache management; Distributed cache; Web Traffic; Web cache (ID#:14-3057)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6919088&isnumber=6919024
- Einziger, G.; Friedman, R., "TinyLFU: A Highly Efficient Cache Admission Policy," Parallel, Distributed and Network-Based Processing (PDP), 2014 22nd Euromicro International Conference on, pp.146, 153, 12-14 Feb. 2014. doi: 10.1109/PDP.2014.34 This paper proposes to use a frequency based cache admission policy in order to boost the effectiveness of caches subject to skewed access distributions. Rather than deciding on which object to evict, TinyLFU decides, based on the recent access history, whether it is worth admitting an accessed object into the cache at the expense of the eviction candidate. Realizing this concept is enabled through a novel approximate LFU structure called TinyLFU, which maintains an approximate representation of the access frequency of recently accessed objects. TinyLFU is extremely compact and lightweight as it builds upon Bloom filter theory. The paper shows an analysis of the properties of TinyLFU including simulations of both synthetic workloads as well as YouTube and Wikipedia traces.
Keywords: cache storage; data structures; Bloom filter theory; TinyLFU; Wikipedia; YouTube; access frequency; frequency based cache admission policy; novel approximate LFU structure; Approximation methods; Finite wordlength effects; Histograms; History; Memory management; Optimization; Radiation detectors; Cache; LFU; TinyLFU; approximate count; bloom filter; cloud cache; data cache; sketch; sliding window; web cache; zipf (ID#:14-3058)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6787265&isnumber=6787236
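The admission idea is simple enough to sketch: estimate recent access frequencies approximately, and admit a newly seen object only if it appears more frequent than the cache's eviction victim. TinyLFU builds on Bloom filter theory and includes an aging mechanism; the sketch below substitutes a small count-min sketch and omits aging, so it is a simplification of the paper's structure rather than a faithful implementation.

import hashlib

class CountMinSketch:
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.rows = [[0] * width for _ in range(depth)]

    def _indexes(self, key):
        for d in range(self.depth):
            h = hashlib.sha256(f"{d}:{key}".encode()).digest()
            yield d, int.from_bytes(h[:4], "big") % self.width

    def add(self, key):
        for d, i in self._indexes(key):
            self.rows[d][i] += 1

    def estimate(self, key):
        # Minimum over rows bounds the overcount from hash collisions.
        return min(self.rows[d][i] for d, i in self._indexes(key))

def admit(sketch, candidate, victim):
    """True if the candidate should replace the cache's eviction victim."""
    return sketch.estimate(candidate) > sketch.estimate(victim)

sketch = CountMinSketch()
for _ in range(5):
    sketch.add("popular.html")
sketch.add("one-hit-wonder.html")
print(admit(sketch, "popular.html", "one-hit-wonder.html"))   # True
print(admit(sketch, "one-hit-wonder.html", "popular.html"))   # False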
- Pal, M.B.; Jain, D.C., "Web Service Enhancement Using Web Pre-fetching by Applying Markov Model," Communication Systems and Network Technologies (CSNT), 2014 Fourth International Conference on, pp.393,397, 7-9 April 2014. doi: 10.1109/CSNT.2014.84 The rapid growth of web applications, which are accessed via a web browser over a network and used for communication and data transfer, has increased researchers' interest in this area. Web caching is a well-known strategy for improving the performance of web-based systems by keeping web objects that are likely to be used in the near future in locations closer to the user. Web caching mechanisms are implemented at three levels: client level, proxy level, and original server level. Significantly, proxy servers play a key role between users and web sites in lessening the response time of user requests and saving network bandwidth. Therefore, to achieve a better response time, an efficient caching approach should be built into a proxy server. This paper uses FP-growth, the weighted rule mining concept, and a Markov model for fast and frequent web pre-fetching, in order to improve user response times for web pages and expedite users' visiting speed.
Keywords: Markov processes; Web services; Web sites; cache storage; data mining; file servers; Markov model; Web application; Web based system performance; Web browser; Web caching mechanism; Web objects; Web page; Web prefetching; Web service enhancement; Web sites; client level caching; communication; computer network; data transfer; network bandwidth saving; proxy level caching; proxy server; server level caching; user request; user response; weighted rule mining concept; Cleaning; Markov processes; Servers; Web mining; Web pages; Log file; Web Services; data cleaning; log preprocessing (ID#:14-3059)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6821425&isnumber=6821334
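The Markov step of such a pre-fetching pipeline fits in a few lines: build first-order transition counts from per-session page sequences mined from server logs, then prefetch the most probable successor of the page just served. The FP-growth and weighted rule mining layers are omitted, and the sessions below are made up.

from collections import Counter, defaultdict

transitions = defaultdict(Counter)

def train(sessions):
    for pages in sessions:
        for cur, nxt in zip(pages, pages[1:]):
            transitions[cur][nxt] += 1       # first-order transition counts

def predict_next(page):
    if page not in transitions:
        return None
    (best, count), = transitions[page].most_common(1)
    total = sum(transitions[page].values())
    return best, count / total               # candidate and its probability

train([
    ["/", "/catalog", "/item?id=7", "/cart"],
    ["/", "/catalog", "/item?id=3"],
    ["/", "/about"],
])
print(predict_next("/"))                     # ('/catalog', 0.666...)
# A proxy would prefetch '/catalog' into its cache whenever '/' is served.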
- Johnson, T.; Seeling, P., "Desktop and Mobile Web Page Comparison: Characteristics, Trends, And Implications," Communications Magazine, IEEE, vol.52, no.9, pp.144, 151, September 2014. doi: 10.1109/MCOM.2014.6894465 The broad proliferation of mobile devices in recent years has drastically changed the means of accessing the World Wide Web. Marking a shift away from the desktop computer era of content consumption, predictions indicate that the main access to web-based content will come from mobile devices. Concurrently, the manner of content presentation has changed as well; web artifacts allow for richer media and higher levels of user interaction, enabled by the increasing speeds of access networks. This article provides an overview of more than two years of high-level web page characteristics by comparing the desktop and mobile client versions. Our study is the first long-term evaluation of the differences seen by desktop and mobile web browser clients. We showcase the main differentiating factors with respect to the number of web page object requests, their sizes, relationships, and web page object caching. We find that over time, initial page view sizes and numbers of objects increase faster for desktop versions. However, web page objects have similar sizes in both versions, though they exhibit a different composition by type of object when examined in greater detail.
Keywords: Web sites; microcomputers; mobile computing; online front-ends; subscriber loops; World Wide Web; access networks; broad proliferation; content consumption; desktop client versions; desktop computer; desktop web page; high level web page characteristics; mobile client versions; mobile devices; mobile web browser clients; mobile web page; web artifacts; web page object caching; web-based content; Cascading style sheets; Internet; Market research; Mobile communication; Mobile handsets; Web pages (ID#:14-3060)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6894465&isnumber=6894440
- Pourmir, A.; Ramanathan, P., "Distributed caching and coding in VoD," Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, pp.233, 238, April 27 2014-May 2 2014. doi: 10.1109/INFCOMW.2014.6849237 Caching decreases content access time by keeping contents closer to the clients. In this paper we show that network coding chunks of different contents and storing them in cache can be beneficial. Recent research considers caching network-coded chunks of the same content, but not of different contents. This paper proposes three different methods, IP, layered-IP and a greedy algorithm, with different performance and complexity. Simulation results show that caching encoded chunks of different contents can significantly reduce the average data access time. Although we evaluate our ideas using a Video on Demand (VoD) application on cable networks, they can be extended to broader contexts including content distribution in peer-to-peer networks and proxy web caches.
Keywords: IP networks; Internet; cache storage; client-server systems; computational complexity; greedy algorithms; integer programming; network coding; peer-to-peer computing; video on demand; VoD application; average data access time; binary integer program; cable networks; caching network coded chunks; content distribution; greedy algorithm; layered-IP methods; peer-to-peer networks; proxy Web cache; video on demand application; Arrays; Conferences; Encoding; IP networks; Mathematical model; Probability; Servers; Binary Integer Program; caching; network coding (ID#:14-3061)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6849237&isnumber=6849127
- Fankhauser, T.; Qi Wang; Gerlicher, A.; Grecos, C.; Xinheng Wang, "Web Scaling Frameworks: A Novel Class Of Frameworks For Scalable Web Services In Cloud Environments," Communications (ICC), 2014 IEEE International Conference on, pp.1760, 1766, 10-14 June 2014. doi: 10.1109/ICC.2014.6883577 The social web and the huge growth of mobile smart devices dramatically increase the performance requirements for web services. State-of-the-art Web Application Frameworks (WAFs) do not offer complete scaling concepts with automatic resource-provisioning, elastic caching or guaranteed maximum response times. These functionalities, however, are supported by cloud computing and needed to scale an application to its demands. Components like proxies, load-balancers, distributed caches, and queuing and messaging systems have been around for a long time, and in each field relevant research exists. Nevertheless, to create a scalable web service it is seldom enough to deploy only one component. In this work we propose to combine those complementary components into a predictable, composed system. The proposed solution introduces a novel class of web frameworks called Web Scaling Frameworks (WSFs) that take over the scaling. The proposed mathematical model allows a universally applicable prediction of performance in both the single-machine and multi-machine scope. A prototypical implementation is created to empirically validate the mathematical model and demonstrates both the feasibility and the performance gains of a WSF. The results show that the application of a WSF can triple the request-handling capability of a single machine and additionally reduce the number of total machines by 44%.
Keywords: Web services; cache storage; cloud computing; WSFs; Web application frameworks; Web scaling frameworks; automatic resource-provisioning; cloud computing; cloud environments; distributed caches; elastic caching; guaranteed maximum response times; load-balancers; mathematical model; messaging systems; mobile smart devices; proxies; queuing; scalable Web services; social Web; Concurrent computing; Delays; Mathematical model; Multimedia communication; Radio frequency; Web services (ID#:14-3062)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6883577&isnumber=6883277
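The paper's mathematical model is not reproduced in the abstract; the following is only a generic back-of-the-envelope capacity model in the same spirit, where a scaling layer (cache plus load balancer) absorbs a fraction of requests cheaply. All parameters (service times, hit ratio, target load) are assumptions for illustration.

```python
# Toy single-machine / multi-machine capacity model. NOT the paper's
# model; every parameter below is an assumed, illustrative value.
import math

def per_machine_rate(hit_ratio: float,
                     t_fast: float = 0.001,    # s per cached response (assumed)
                     t_app: float = 0.020) -> float:  # s per app-rendered response
    """Sustainable requests/s when a fraction hit_ratio bypasses the app."""
    mean_service = hit_ratio * t_fast + (1.0 - hit_ratio) * t_app
    return 1.0 / mean_service

def machines_needed(target_rps: float, hit_ratio: float) -> int:
    return math.ceil(target_rps / per_machine_rate(hit_ratio))

baseline = per_machine_rate(hit_ratio=0.0)   # plain WAF, no scaling layer
with_wsf = per_machine_rate(hit_ratio=0.7)   # assumed 70% offloadable traffic
print(f"baseline: {baseline:.0f} req/s; with scaling layer: "
      f"{with_wsf:.0f} req/s ({with_wsf / baseline:.1f}x)")
print("machines for 10k req/s:",
      machines_needed(10_000, 0.0), "->", machines_needed(10_000, 0.7))
```

Under these assumed numbers the per-machine rate roughly triples, which is in the same ballpark as the paper's headline result, but the agreement is coincidental to the chosen parameters.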
- Guedes, Erico A.C.; Silva, Luis E.T.; Maciel, Paulo R.M., "Performability Analysis of I/O Bound Application on Container-Based Server Virtualization Cluster," Computers and Communication (ISCC), 2014 IEEE Symposium on, pp.1, 7, 23-26 June 2014. doi: 10.1109/ISCC.2014.6912556 Using server virtualization to provide applications introduces overheads that degrade the performance of the provided systems; container-based virtualization narrows this overhead. In this work, we go a step further and demonstrate how broadly tuning a combination of performance factors, concerning the web cache server (the I/O-bound application analysed), the file system, and the operating system, led to higher performance of the proposed cluster when executed in a container-based operating-system virtualization environment. The availability and performance similarity of the web cache service under non-virtualized and virtualized systems were evaluated under the proposed web workload. Results reveal that the web cache service provided in the virtual environment, without unresponsiveness failures due to overload, i.e., with high availability, presents a 6% higher hit ratio and a 21.4% lower response time than those observed in non-virtualized environments.
Keywords: Availability; Operating systems; Protocols; Servers; Throughput; Time factors; Virtualization; Container-based Operating Systems; Performability; Server Virtualization (ID#:14-3063)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6912556&isnumber=6912451
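A minimal sketch of the two service metrics the paper compares, hit ratio and mean response time of a web cache, computed from request logs. The log records and numbers below are fabricated for illustration and do not reflect the paper's measurements.

```python
# Compute hit ratio and mean response time from web-cache request logs.
# All log data here is made up; only the metric definitions matter.

def hit_ratio(log) -> float:
    """Fraction of requests answered from the cache."""
    return sum(1 for r in log if r["hit"]) / len(log)

def mean_response_time_ms(log) -> float:
    return sum(r["rt_ms"] for r in log) / len(log)

native = [{"hit": h, "rt_ms": rt}
          for h, rt in [(True, 4), (False, 42), (True, 5), (False, 40)]]
container = [{"hit": h, "rt_ms": rt}
             for h, rt in [(True, 4), (True, 5), (False, 38), (True, 4)]]

for name, log in [("non-virtualized", native), ("container-based", container)]:
    print(f"{name}: hit ratio {hit_ratio(log):.0%}, "
          f"mean RT {mean_response_time_ms(log):.1f} ms")
```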
- Akherfi, K.; Harroud, H.; Gerndt, M., "A Mobile Cloud Middleware to Support Mobility and Cloud Interoperability," Multimedia Computing and Systems (ICMCS), 2014 International Conference on, pp.1189, 1194, 14-16 April 2014. doi: 10.1109/ICMCS.2014.6911331 With the recent advances in cloud computing and the improvement in the capabilities of mobile devices in terms of speed, storage, and computing power, Mobile Cloud Computing (MCC) is emerging as one of the important branches of cloud computing. MCC is an extension of cloud computing with the support of mobility. In this paper, we first present the specific concerns and key challenges in mobile cloud computing, then discuss the approaches proposed so far to tackle the main issues in MCC, and finally focus on describing the proposed overall architecture of a middleware that will contribute to providing mobile users with data storage and processing services based on their mobile devices' capabilities, availability, and usage. A prototype of the middleware is developed, and three scenarios are described to demonstrate how the middleware performs in adapting the provision of cloud web services by transforming SOAP messages to REST and XML to JSON, in optimizing results by extracting relevant information, and in improving availability by caching. Initial analysis shows that the mobile cloud middleware improves the quality of service for mobiles and provides lightweight responses for mobile cloud services.
Keywords: cloud computing; middleware; mobile computing; object-oriented methods; JSON format; MCC;REST format; SOAP messages; XML format; cloud interoperability; data processing service; data storage service; mobile cloud computing; mobile devices; mobility support; Cloud computing; Mobile communication; Mobile computing; Mobile handsets; Simple object access protocol; XML (ID#:14-3064)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6911331&isnumber=6911126
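One of the middleware functions the abstract describes, transforming XML payloads into lightweight JSON for mobile clients and caching the result, can be sketched as follows. This is an illustrative simplification, not the authors' implementation; the element names, URL, and fetch callback are invented for the example.

```python
# Minimal sketch: flatten an XML (SOAP-style) response body into JSON
# and cache it per request URL. Illustrative only; all names invented.
import json
import xml.etree.ElementTree as ET

_cache: dict[str, str] = {}   # naive response cache keyed by request URL

def xml_to_json(xml_text: str) -> str:
    """Flatten a simple one-level XML document into a JSON object."""
    root = ET.fromstring(xml_text)
    return json.dumps({child.tag: child.text for child in root})

def handle(url: str, fetch_soap) -> str:
    """Serve from cache if possible; otherwise fetch, transform, cache."""
    if url not in _cache:
        _cache[url] = xml_to_json(fetch_soap(url))
    return _cache[url]

soap_body = "<result><city>Ifrane</city><temp>21</temp></result>"
print(handle("http://example.org/weather", lambda _: soap_body))
# -> {"city": "Ifrane", "temp": "21"}
```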
- Yizheng Chen; Antonakakis, M.; Perdisci, R.; Nadji, Y.; Dagon, D.; Wenke Lee, "DNS Noise: Measuring the Pervasiveness of Disposable Domains in Modern DNS Traffic," Dependable Systems and Networks (DSN), 2014 44th Annual IEEE/IFIP International Conference on, pp.598,609, 23-26 June 2014. doi: 10.1109/DSN.2014.61 In this paper, we present an analysis of a new class of domain names: disposable domains. We observe that popular web applications, along with other Internet services, systematically use this new class of domain names. Disposable domains are likely generated automatically, characterized by a "one-time use" pattern, and appear to be used as a way of "signaling" via DNS queries. To shed light on the pervasiveness of disposable domains, we study 24 days of live DNS traffic spanning a year observed at a large Internet Service Provider. We find that disposable domains increased from 23.1% to 27.6% of all queried domains, and from 27.6% to 37.2% of all resolved domains observed daily. While this creative use of DNS may enable new applications, it may also have unanticipated negative consequences on the DNS caching infrastructure, DNSSEC validating resolvers, and passive DNS data collection systems.
Keywords: Internet; query processing; telecommunication traffic; ubiquitous computing; DNS caching infrastructure; DNS noise; DNS queries; DNS traffic; DNSSEC; Internet service provider; Internet services; Web applications; disposable domain pervasiveness measurement; live DNS traffic spanning; one-time use pattern; passive DNS data collection systems; signaling; Data collection; Educational institutions; Google; Internet; Monitoring; Servers; Web and internet services; Disposable Domain Name; Internet Measurement (ID#:14-3065)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6903614&isnumber=6903544
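The "one-time use" signal at the heart of the study can be sketched with a toy log analysis: count the distinct domains that are queried exactly once in an observation window. The paper's actual classification uses richer features and far larger traces; the log below is fabricated for illustration.

```python
# Toy version of the one-time-use signal: domains queried exactly once.
# Fabricated query log; the paper's detection is considerably richer.
from collections import Counter

queries = [
    "0.up.example-cdn.com", "www.google.com", "1.up.example-cdn.com",
    "www.google.com", "2.up.example-cdn.com", "mail.example.org",
]

counts = Counter(queries)
disposable = [d for d, n in counts.items() if n == 1]
share = len(disposable) / len(counts)
print(f"{len(disposable)} of {len(counts)} distinct domains "
      f"({share:.0%}) were queried exactly once")
```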
- Rottenstreich, O.; Keslassy, I., "The Bloom Paradox: When Not to Use a Bloom Filter," Networking, IEEE/ACM Transactions on, vol. PP, no.99, pp.1, 1, Feb 2014. doi: 10.1109/TNET.2014.2306060 In this paper, we uncover the Bloom paradox in Bloom Filters: Sometimes, the Bloom Filter is harmful and should not be queried. We first analyze conditions under which the Bloom paradox occurs in a Bloom Filter and demonstrate that it depends on the a priori probability that a given element belongs to the represented set. We show that the Bloom paradox also applies to Counting Bloom Filters (CBFs) and depends on the product of the hashed counters of each element. In addition, we further suggest improved architectures that deal with the Bloom paradox in Bloom Filters, CBFs, and their variants. We further present an application of the presented theory in cache sharing among Web proxies. Lastly, using simulations, we verify our theoretical results and show that our improved schemes can lead to a large improvement in the performance of Bloom Filters and CBFs.
Keywords: A priori membership probability; Bloom Filter; Counting Bloom Filter; the Bloom Filter paradox (ID#:14-3066)
URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6748924&isnumber=4359146
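The dependence on the a priori membership probability can be made concrete with a short worked example under an assumed cost model (the paper's own cost formulation is not reproduced here). By Bayes' rule, a positive filter answer yields posterior membership probability p / (p + (1-p)f), where p is the prior and f the false-positive rate; when p is tiny, even a positive answer is probably wrong, and consulting the filter before an expensive remote lookup can cost more than skipping it. The costs c_near and c_far below are illustrative assumptions.

```python
# Worked example of the Bloom paradox under an assumed cost model:
# c_near = cheap nearby lookup (e.g., local cache), c_far = expensive
# remote lookup. All probabilities and costs are illustrative.

def expected_cost_with_filter(p, f, c_near=1.0, c_far=5.0):
    """Expected lookup cost when the Bloom filter is consulted first."""
    pos = p + (1 - p) * f                      # P(filter answers "present")
    posterior = p / pos                        # P(truly present | "present")
    on_pos = posterior * c_near + (1 - posterior) * (c_near + c_far)
    return (1 - pos) * c_far + pos * on_pos

C_FAR = 5.0   # skipping the filter: always go straight to the remote store
for p in (0.001, 0.2):
    c = expected_cost_with_filter(p, f=0.01)
    verdict = "use the filter" if c < C_FAR else "Bloom paradox: do not query"
    print(f"p={p}: {c:.3f} vs {C_FAR:.3f} -> {verdict}")
```

With these numbers, p = 0.001 gives an expected cost slightly above 5.0, so querying the filter is harmful, while p = 0.2 gives about 4.2, so the filter pays off, matching the paper's point that the decision hinges on the prior.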
Note:
Articles listed on these pages have been found on publicly available internet pages and are cited with links to those pages. Some of the information included herein has been reprinted with permission from the authors or data repositories. Direct any requests via Email to SoS.Project (at) SecureDataBank.net for removal of the links or modifications to specific citations. Please include the ID# of the specific citation in your correspondence.